
Evading Get-InjectedThread using API hooking (2021/01/27)

Get-InjectedThread is a PowerShell utility that lets the user look through running processes and find threads which appear to be the spawn of code that was injected into memory one way or another. It accomplishes this by checking each running thread's start address and flagging threads whose start address does not land on a page marked as MEM_IMAGE. The querying is done with the VirtualQuery function in kernel32.dll, which itself is a small wrapper around the NtQueryVirtualMemory system call.
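
In rough C terms, the check boils down to something like this (a sketch of the idea, not the tool's actual PowerShell; the helper name is an assumption, and the start address itself would come from NtQueryInformationThread with the ThreadQuerySetWin32StartAddress class):

#include <windows.h>

//Flag a thread as suspect when the memory backing its start
//address is not image backed (MEM_IMAGE)
BOOL
LooksInjected(HANDLE hProcess, LPVOID startAddr)
{
	MEMORY_BASIC_INFORMATION mbi;

	if(VirtualQueryEx(hProcess, startAddr, &mbi, sizeof(mbi)) != sizeof(mbi))
		return FALSE;
	return mbi.Type != MEM_IMAGE;
}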

To evade this, we simply need to ensure that the start address passed to CreateThread points to a valid MEM_IMAGE mapped area of virtual memory. This is easy to do for smaller programs, where you can hand write a shim and use it in place of CreateThread, but it gets a bit harder when the goal is a more general purpose way of side loading.

However, there is a fairly simple way around this problem using API hooking and direct system calls. As long as the injector and the injected code reside within the same process, the injector can hook API calls within that process and patch CreateThread() so that calls made by the injected code are routed to a shim function within the injector's virtual memory. This gives the injected code free rein to call CreateThread while still skirting detection by Get-InjectedThread. The following example code illustrates one way to achieve this:


#include <windows.h>
#include <winternl.h>
#include <stdio.h>

#ifndef STATUS_SUCCESS
#define STATUS_SUCCESS ((NTSTATUS)0x00000000)
#endif

//Direct system call shim, implemented elsewhere; the signature here is
//assumed to mirror the (undocumented) NtCreateThreadEx
NTSTATUS NtCreateThreadEx10(PHANDLE, ACCESS_MASK, LPVOID, HANDLE,
	LPTHREAD_START_ROUTINE, LPVOID, BOOL, DWORD, SIZE_T, SIZE_T, LPVOID);

//Last set of arguments given to the hook, guarded by mutex
struct {
	HANDLE mutex;
	LPTHREAD_START_ROUTINE lpStartAddress;
	LPVOID lpParameter;
	BOOL launched;
} BouncerInfo;

void
SetupPivot(void)
{
	BouncerInfo.mutex = CreateMutexA(NULL, FALSE, NULL);
	BouncerInfo.launched = TRUE;
}


//Thread entry point matching LPTHREAD_START_ROUTINE; fetches the real
//start address and argument from BouncerInfo and jumps to them
DWORD __stdcall
ThreadPivot(LPVOID param)
{
	(void)param;	//the real argument is fetched from BouncerInfo
	WaitForSingleObject(BouncerInfo.mutex, INFINITE);
	LPTHREAD_START_ROUTINE f = BouncerInfo.lpStartAddress;
	LPVOID p = BouncerInfo.lpParameter;
	BouncerInfo.launched = TRUE;
	ReleaseMutex(BouncerInfo.mutex);
	printf("[*] Pivoting to passed function\n");
	return f(p);
}

//Function that can be injected over CreateThread from kernel32.dll
HANDLE __stdcall
HookedCreateThread(LPSECURITY_ATTRIBUTES lpThreadAttributes, SIZE_T dwStackSize, LPTHREAD_START_ROUTINE lpStartAddress, LPVOID lpParameter, DWORD dwCreationFlags, LPDWORD lpThreadId)
{
	HANDLE ThreadHandle = NULL;
	NTSTATUS res;
	WaitForSingleObject(BouncerInfo.mutex, INFINITE);
	//It's possible that we get two CreateThread calls before a pivot
	//occurs, this is a bad way of dealing with it but it shouldn't happen often
	while(BouncerInfo.launched == FALSE){
		printf("[!] Double entry detected, spin locking...\n");
		ReleaseMutex(BouncerInfo.mutex);
		WaitForSingleObject(BouncerInfo.mutex, INFINITE);
	}
	BouncerInfo.lpStartAddress = lpStartAddress;
	BouncerInfo.lpParameter = lpParameter;
	BouncerInfo.launched = FALSE;
	//Direct system call shim function
	res = NtCreateThreadEx10(&ThreadHandle, GENERIC_ALL, NULL, GetCurrentProcess(), ThreadPivot, lpParameter, FALSE, 0, 0, 0, NULL);
	ReleaseMutex(BouncerInfo.mutex);
	if(res != STATUS_SUCCESS){
		printf("[!] HookedCreateThread error: %lx\n", res);
		return NULL;
	}
	return ThreadHandle;
}

The code is a bit complex due to the asynchronous nature of CreateThread. We first set up a global struct that stores the last set of arguments given to our hook, along with a mutex to guard those variables between threads. We also keep a boolean flag in the struct to make sure the pivot for a given start address happens before another call to CreateThread can overwrite it. This is needed because control can return to the caller of CreateThread before the new thread has had a chance to run, so multiple calls can happen before a pivot is made on the original address.

Doing this will cause new threads to have their start address set to ThreadPivot, which is located in the MEM_IMAGE flagged area of memory belonging to our injector.

For more information on hooking and direct system calls, I have code snippets for each available.
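
As a rough illustration of the hooking half, an IAT patch might look something like this (a sketch, not the exact snippet referenced above; HookedCreateThread is the shim from earlier):

#include <windows.h>

//Walk the module's import table and swap the resolved CreateThread
//pointer for our shim
BOOL
PatchCreateThread(HMODULE mod)
{
	IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER*)mod;
	IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS*)((BYTE*)mod + dos->e_lfanew);
	IMAGE_DATA_DIRECTORY dir = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
	IMAGE_IMPORT_DESCRIPTOR *imp = (IMAGE_IMPORT_DESCRIPTOR*)((BYTE*)mod + dir.VirtualAddress);
	FARPROC real = GetProcAddress(GetModuleHandleA("kernel32.dll"), "CreateThread");
	DWORD old;

	for(; imp->Name != 0; imp++){
		IMAGE_THUNK_DATA *thunk = (IMAGE_THUNK_DATA*)((BYTE*)mod + imp->FirstThunk);
		for(; thunk->u1.Function != 0; thunk++){
			if(thunk->u1.Function != (ULONG_PTR)real)
				continue;
			//import entries are mapped read-only by default
			VirtualProtect(&thunk->u1.Function, sizeof(ULONG_PTR), PAGE_READWRITE, &old);
			thunk->u1.Function = (ULONG_PTR)HookedCreateThread;
			VirtualProtect(&thunk->u1.Function, sizeof(ULONG_PTR), old, &old);
			return TRUE;
		}
	}
	return FALSE;
}

For the simple case, calling this on whichever module makes the CreateThread calls, e.g. PatchCreateThread(GetModuleHandleA(NULL)) for the main executable, is enough.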

9chroot (2020/06/16)

I've recently set up and automated a process for building nightly ISO files for 9front, mostly for installing virtual machines that I would like to be up to date but don't expect to live long. One thing I wanted for this machine was a dedicated disk for building, keeping a clean system to build the ISO files from. However, it would also be nice to have the build machine use my existing file server so that it could do internal builds as well. To solve this I decided to see if it was possible to have specific programs on a cpu server run under a different root filesystem.

First up was getting the machine alive and on 9front. Putting a disk in the machine and running through the typical install procedure left me with a terminal using a local hjfs disk as its root. To add it into my existing network I configured plan9.ini appropriately and initialized nvram to make this new machine a standard cpu node for my network.
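
For reference, that amounts to something like the following (a sketch; the exact values depend on your network setup):

# append to plan9.ini so the machine comes up as a cpu server
service=cpu

# then write the hostowner and auth information to nvram
auth/wrkey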

However using this for building the ISO files left me with three problems:

To solve this I figured I could make use of the existing clean hjfs filesystem on the disk for building. The first step of this is starting hjfs on system boot.

#start hjfs on boot and post to /srv/hjfs
echo 'hjfs -f /dev/sdE2/fs -n hjfs -m 2011' > /cfg/$sysname/cpustart

#bind it in by default when I rcpu in
echo 'bind -c #s/hjfs /n/hjfs' > /cfg/$sysname/namespace

Next we construct a namespace file for the build script to use.

# Replace the use of '#s/boot' to instead use the hjfs instance
sed 's/boot/hjfs/' /lib/namespace > /lib/namespace.build

# Test that everything works by changing into the new namespace
auth/newns -n /lib/namespace.build

This leaves us with what looks like a typical 'chroot' environment that can be invoked for specific programs. With this I can set something up in the cron file on my auth server like this:

40 5 * * * una auth/newns -n /lib/namespace.build /usr/glenda/bin/rc/nightly > /sys/log/build

This will run the nightly script every morning under the new namespace while saving the output to my normal cwfs filesystem. It's worth noting that if you plan to have this namespace be usable by the none user, then /srv/hjfs must be read-writable by none; adding a chmod o+rw /srv/hjfs after hjfs is started in /cfg/$sysname/cpustart fixes this, as shown below.
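
With that change the cpustart file ends up looking like this:

# /cfg/$sysname/cpustart
hjfs -f /dev/sdE2/fs -n hjfs -m 2011
chmod o+rw /srv/hjfs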

9front Bare Bones Kernel (2020/05/05)

I have recently been interested in reading about and understanding the process of kernel development. To that end I have been spending some time reading the OSDev wiki as well as this fantastic set of blogs on writing a kernel in Rust. However I quickly ran into two problems:

As such I thought it might be worthwhile to take a peek at getting a barebones kernel set up using the common tools available on the OS I do most of my development in, 9front. I set out to first learn how 9front manages its kernel, and then see if I could strip out just the minimum to get myself a little "hello world".

Knowing where to look

Let's start by looking at how 9front organizes its kernel code. All of the kernels are located in /sys/src/9/$objtype/, with port holding the code portable between them. For our purposes we're only going to look at the amd64 kernel. There are three files that are good to look at first:

Also worth noting are a couple of additional directories:

Start putting stuff together

Copying over the l.s file we see tons of stuff that we won't need, so let's trim it down a bit. Reading it quickly we find that we call our main function in _start64v, so let's delete everything after that. We can also see that l.s requires a mem.h, so let's grab that as well. Then let's write our own very tiny kern.c with a void main(void) entry point. For now, simply entering an infinite loop will suffice.

#include <u.h>

u32int MemMin; //Filled by l.s, thus the symbol must be somewhere

void main(void) { for(;;); }

Now let's get each of these compiled/assembled.

; 6c kern.c && 6a l.s

Now if we check the 9pc64 mkfile for how to link them, we see something a bit out of the norm: a KTZERO variable is declared and then passed to the linker through the -T flag.

Looking at the man page for the linker, we see that the -T flag tells the linker where to start placing the .TEXT section of the binary. To understand why this is needed, let's remind ourselves of what goes on in the average boot (in relation to our kernel).

When we first get to our kernel we have not yet set up virtual memory, so our first sets of jumps and addressing must use physical addresses. Looking at mem.h we can see that KZERO (kernel zero) is set to 0xffffffff80000000, so this must be where 9boot puts the start of our kernel binary. However, the start of the binary is not the start of the executable code; that would be the .TEXT section. So we need a common definition of where our executable code starts, shared between the linker and our l.s code. This allows l.s to tell 9boot where exactly in physical memory to jump to.
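
The relevant definitions in mem.h come down to something like this (paraphrased; the arithmetic is what matters):

#define KZERO	0xffffffff80000000ull	/* base of the kernel's address space */
#define KTZERO	(KZERO+0x110000)	/* 0xffffffff80110000: where kernel text begins */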

To accomplish this we pick a common starting point, define it in our source code, and make sure to pass it to the linker so things line up. Now that we know what is going on, let's link our kernel:

6l -o kern -T0xffffffff80110000 -l l.6 kern.6

It's worth noting that l.6 must come first, or else our dance to get the .TEXT section aligned will be pointless, as kern.6 would be placed first in the section.

Now let's verify that we indeed set things up right by using file(1). The output should look like:

kern: amd64 plan 9 boot image

Booting our new kernel

We have one more step before we can actually get our fresh kernel booted in something like QEMU. We need to create a cdrom iso image that contains both our kernel as well as 9boot. For this we will take a look at the existing script for iso generation on 9front: /sys/lib/dist/mkfile.

Let's start by creating a new plan9.ini for 9boot to point to our new kernel:

echo 'bootfile=/amd64/9pc64' > plan9.ini

We'll also want a local copy of /sys/lib/sysconfig/proto/9bootproto so that we can add our kernel path to it.
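
Something like this should do (a sketch; a proto file simply lists the tree of files to include, see proto(2) for the format):

; cp /sys/lib/sysconfig/proto/9bootproto .
; cat >> 9bootproto <<.
amd64
	9pc64
.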

Now that we have those, let's create our iso using disk/mk9660 like so:

; @{rfork n
# Setup our root
bind /root /n/src9
bind plan9.ini /n/src9/cfg/plan9.ini
bind kern /n/src9/amd64/9pc64
disk/mk9660 -c9j -B 386/9bootiso \
-p 9bootproto \
-s /n/src9 -v 'Plan 9 BareBones' kern.iso
}

With that, you should have a bootable iso image fit for use in something like QEMU.
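
For example, with a typical QEMU install on some host:

qemu-system-x86_64 -cdrom kern.iso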

Source

The source code is available on my github. It adds a small print message in kern.c as well as a mkfile on top of what is shown here.

Using fossil and venti as auxiliary storage on plan9 (2019/12/30)

For those not familiar, venti and fossil are disk file systems for the plan9 operating system. Venti acts as append-only storage of data blocks, and fossil is a local copy of these blocks represented as a traditional file system. The most notable advantages of these systems are that venti has aggressive dedup and is append only. Data is addressed by its sha1 hash, and can never be deleted once stored. Fossil can sync data back to venti, generating a 'vac'; this hash is a key to fossil's 'window' into the venti data, allowing it to construct local filesystem trees out of it. Essentially, venti archives all of the actual storage of the files, and vacs act as keys into snapshots of that data that fossil can work with. Perhaps the most interesting part about fossil is that it is 'lazy loaded', meaning that data is pulled from venti only when absolutely necessary, which makes it possible to create very small fossils for viewing previous vacs.

Recently, I thought this system would make a great auxiliary addition to my current plan9 setup. My idea was to store source code of mine and others (really just my $home/src) in it, both to test it and to see if the benefits were worth it. This post will walk through my process of creating the configuration. This was done on 9ants5, but could be done on a normal 9front release if desired.

First though, some history and thoughts on fossil and venti themselves. When used for a rootfs, they allow for multiple fossil installs based around one central venti. Since the information is dedup'd, one fossil and 50 fossils use the same space on the venti server. This works well with the traditional idea of 9 being a 'grid' system, making the addition of new machines practically free. The other great part is the snapshot capabilities: the snapshots themselves are ephemeral, and you can quite easily create a new fossil pointing to an existing snapshot. This would allow you, for example, to spin up a new machine/VM with a snapshot of last month's code to test for comparisons. Because the whole system is snapshotted, you can test with the context of the entire system. This is in contrast to the idea of using a modern VCS to manage individual projects, where it can be hard to truly rewind infrastructure to a specific point in time. Fossil even allows for trees: if you want to test a slight deviation you can take a base vac and create separate branches of modifications, each getting its own new vac. With some simple tooling, this could be used to build some very efficient version control systems (admittedly, this currently does not exist to my knowledge).

As for history, fossil was the original plan9 file system when plan9 was first open sourced by Bell Labs. However, a nasty bug plagued it until 2012, earning it a bad reputation and leading to an eventual migration away from it. Since then a lot of experimentation and support has been done by the creator of 9ants, mycroftiv. One notable project is his 'spawngrid' system, which leverages ramfossils for on-demand containers.

To get fossil+venti up and running, pop open your disk (or file) in disk/edisk and create one large "plan9" partition. Then we need to divvy this partition up into our two venti partitions; disk/prep -a arenas -a isect /dev/sdE0/plan9 should do just fine. Next we need to write a config file for venti. The following should do:

; cat >/tmp/venti.conf <<.
index main
arenas /dev/sdE0/arenas
isect /dev/sdE0/isect
mem 32m
bcmem 48m
icmem 64m
httpaddr tcp!*!8000
addr tcp!*!17034
.

This points venti to our partitions and tells it to start listening and accept connections from any origin. Now to actually format the partitions and write our config to venti.

; venti/fmtarenas arenas /dev/sdE0/arenas
; venti/fmtisect isect /dev/sdE0/isect
; venti/fmtindex /tmp/venti.conf
; venti/conf -w /dev/sdE0/arenas /tmp/venti.conf

We should be all ready to go; let's start it up:

; venti/venti -c /tmp/venti.conf

You can verify by looking at http://$host:8000/storage, which should give you a summary of the available storage.

Now on to the more fun part, let's make a fossil for this venti. Since we don't really plan to have much data on this, we can use a file through ramfs to get started.


; ramfs
; dd -if /dev/zero -of /tmp/fossil -count 200000 #create a ~100M file (200000 512-byte blocks)
; cat >/tmp/initfossil.conf <<.
fsys main config /tmp/fossil
fsys main open -AWPV
fsys main
create /active/adm adm sys d775
create /active/adm/users adm sys 664
users -w
uname upas :upas
uname adm +glenda
uname upas +glenda
srv -p fscons.newfs
srv -A fossil.newfs
.
; venti=$ventihost
; fossil/flfmt /tmp/fossil
; fossil/conf -w /tmp/fossil /tmp/initfossil.conf
; fossil/fossil -f /tmp/fossil

Now that it is up and running, let's connect to the fs console to do some additional configuring and take a snapshot.

; con /srv/fscons.newfs #ctl-\ + q quits
fsys main
uname $user :$user #add our user to fossil
create /active/src $user $user d775 #add a directory for our user to write to
sync # sync all of our changes
snap # write to venti, when it is done you should get a vac
# quit out of con
# clean up
; kill fossil | rc
; unmount /tmp

Now you should be able to take advantage of the 9ants ramfossil script to access the stored data; just keep track of the vac, and remember that if you want changes to persist, you need to snap the filesystem and note the new vac. If you don't want to keep setting the $venti environment variable, add it as a value for your system under /lib/ndb/local just like $auth. You could add a script to your $home/lib/profile to test for /srv/ramfossil, start ramfossil if needed, and then bind it over $home/src (what I did; see the sketch below). You may also want to change the size of the fossil's ramdisk by editing the ramfossil script itself; I found 1G to be fairly reasonable. If the fossil ever fills up, simply snap it to venti and start a new one from the newly generated vac. The lazy loading should keep you covered unless you actually write 1G in a single session.
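
That profile addition might look something like this (a rough sketch; the exact ramfossil invocation and attach name follow the 9ants scripts, and $lastvac is a hypothetical stand-in for wherever you record your most recent vac):

# start ramfossil if it isn't already running, then put it over $home/src
if(! test -e /srv/ramfossil)
	ramfossil $lastvac
mount -c /srv/ramfossil /n/ramfossil main/active
bind -c /n/ramfossil $home/src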

For the sake of completeness, I will also walk through the process of creating a persistent fossil filesystem. Start by partitioning with disk/edisk like before, then:


; ramfs
; cat >/tmp/fossil.conf <<.
fsys main config /dev/sdG0/fossil
fsys main open -c 3000
fsys main snaptime -a 0500
srv -p fscons
srv -A fossil
.
; disk/prep -a fossil /dev/sdG0/plan9
; fossil/flfmt /dev/sdG0/fossil
; fossil/conf -w /dev/sdG0/fossil /tmp/fossil.conf
; venti=$myventi fossil/fossil -f /dev/sdG0/fossil
; unmount /tmp

This will set the fossil to snapshot to the venti at 5am; 9ants' fshalt scripts should handle syncing it when the system goes down (note that this syncs but does not snapshot). Huge thanks to mycroftiv and his 9ants scripts for helping me find my way through the install and setup process, and for keeping the fossil and venti system alive and usable for a standard user.