Monday, July 23, 2012

Domain-Specific Sign-In with BrowserID

BrowserID (Persona) is Mozilla's login authentication system that treats email addresses as identities and usernames. By default, BrowserID simply verifies that a user actually owns the email address they are logging in with; no additional checks are made before the user is enrolled as a "user" on the site. This functionality is great for websites that want to simplify logins and allow anyone to sign up. But what if your website needs to limit signups to valid members of your organization (e.g. everyone with a yourcompany.com email)?

Recently, while working on a project with Mozilla, I came across the need to restrict signups for a site I was building. Although there have been some attempts to do this in the past (some Mozilla projects use BrowserID and still require additional verification), I could not find much documentation on restricting signups at the moment of login based on email address. So I wrote my own, and here it is!

Prerequisites

To start, this guide is written for Django projects, specifically those using Mozilla's Playdoh framework. If you aren't using Playdoh, I suggest trying it out - it really simplifies Django development and helps get projects started in seconds. Playdoh also comes with BrowserID already set up. If you decide not to use Playdoh, you can still follow this tutorial; you'll just need to set up BrowserID on your own first. There are a number of guides for doing that (such as this one: http://django-browserid.readthedocs.org/en/latest/).

Step 1 - Modify Project Settings

There are two settings files you need to edit (assuming Playdoh is being used; if not, look for settings.py in your project): settings/base.py and settings/local.py.

In settings/base.py:

Add the following lines in the "BrowserID" section (or at the bottom of the page):

BROWSERID_CREATE_USER = 'project.app.util.create_user'
# Left empty here; the real list lives in settings/local.py.
ACCEPTED_USER_DOMAINS = [
]

Replace "project" with the name of your project and "app" with the name of your app.

Save the file.

In settings/local.py:

Add the following line:

ACCEPTED_USER_DOMAINS = [
    # 'example.com',
]

Replace the commented line with a comma-separated list of quoted domains from which you would like to allow users. For example, the project I'm working on has the following setup:

ACCEPTED_USER_DOMAINS = [
    'mozilla.com',
    'mozilla.org',
]

Save the file.

Step 2 - Create a util File

In your application's home directory (not the project directory), create a file called "util.py." Add these lines to that file:

from django.contrib.auth.models import User
from django.conf import settings


def create_user(email):
    # Accept the signup only if the email's domain is on the accepted list.
    domain = email.rsplit('@', 1)[1]
    if domain in settings.ACCEPTED_USER_DOMAINS:
        return User.objects.create_user(email, email)
    # Falling through returns None: no account is created.

Replace "project" and "app" with your project's and app's names.

Finish

Now, when your users click the "Sign In with BrowserID" button, their account will only be created if their email address belongs to an accepted domain. If it does not, they will be redirected to the homepage without being logged in.


Friday, July 13, 2012

Defeating X-Frame-Options with Scraping

Introduction

Iframes are an element of web design that is both loved and hated. Web developers (used to) love them because they made it easy to load resources from various sites on demand within a webpage. Security professionals hate them because they allow content from one site (such as a login page) to be loaded within another site that may not be trusted. This opens the door to a security concern known as clickjacking, in which a malicious site overlays invisible elements on top of what the user believes is a safe login form.

The Solution

Since these concerns arose, the X-Frame-Options header was developed to prevent one site from being loaded within an iframe on another. This header is supported by all major browsers and offers two options:
  • SAMEORIGIN - the site can only be loaded within pages of the same domain
  • DENY - the page cannot be loaded in a frame at all
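
Either value is sent as an ordinary HTTP response header. As a minimal sketch, here is a hypothetical Python WSGI app that sets it on every response; in Django and similar frameworks the same header is typically added by middleware:

# Minimal WSGI app that sends X-Frame-Options on every response.
def app(environ, start_response):
    headers = [
        ('Content-Type', 'text/html; charset=utf-8'),
        # DENY: refuse to render this page inside any frame.
        ('X-Frame-Options', 'DENY'),
    ]
    start_response('200 OK', headers)
    return [b'<h1>Hello</h1>']

if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    make_server('', 8000, app).serve_forever()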

Page Scraping

The goal of X-Frame-Options, as described above, is to prevent one site from being loaded within another, potentially malicious site. However, there are multiple ways a site's contents can be displayed, and an iframe is only one of them. Page scraping can be done with a server-side script written in PHP, Python, or another language. The code below is an example of how a page's code can be loaded using PHP:

<?php

// Spoof a crawler user agent; some sites serve different content otherwise.
$userAgent = 'Googlebot/2.1 (http://www.googlebot.com/bot.html)';
$url = "https://www.yahoo.com/";

$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_FAILONERROR, true);     // fail on HTTP errors
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // follow redirects
curl_setopt($ch, CURLOPT_AUTOREFERER, true);     // set Referer when redirected
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body as a string
curl_setopt($ch, CURLOPT_TIMEOUT, 10);           // give up after 10 seconds
$html = curl_exec($ch);
curl_close($ch);

echo $html;

?>

This small bit of code, when loaded on any website, at any URL, will cause the contents of Yahoo's home page to be displayed - X-Frame-Options never comes into play, because nothing is ever framed. A malicious user could overlay hidden elements over the echoed $html and execute the same attack X-Frame-Options prevents.
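
Since Python was mentioned above as an alternative, here is a rough equivalent of the PHP scraper, sketched in modern Python using only the standard library:

# A rough Python equivalent of the PHP scraper above.
from urllib.request import Request, urlopen

url = 'https://www.yahoo.com/'
req = Request(url, headers={
    # Spoof the same crawler user agent as the PHP example.
    'User-Agent': 'Googlebot/2.1 (http://www.googlebot.com/bot.html)',
})

# urlopen follows redirects by default; give up after 10 seconds.
html = urlopen(req, timeout=10).read().decode('utf-8', 'replace')
print(html)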

Protections

Luckily, some large websites like Google and Facebook are aware of issues like these and use a complex combination of user-agent and IP address checks to prevent server-based scripts from loading their content. Replace yahoo.com with facebook.com/your_username in the code above and you'll notice that nothing loads.

You Don't Even Need a Server!

This technique typically cannot be replicated on a simple HTML page without server-side code, because the browser's same-origin policy prevents JavaScript from retrieving content from a different domain (in other words, you can't scrape the HTML using just JavaScript). However, through some clever trickery, http://call.jsonlib.com/ has enabled JavaScript scraping. What actually happens is that the JavaScript makes a call to a server hosted on the public Internet, which serves as a middle-man. The same process is happening (server-side scraping); it is just being triggered locally from JavaScript.

For this to work, save the file http://call.jsonlib.com/jsonlib.js locally. Then create a simple HTML page with the following code:

<script type="text/javascript" src="jsonlib.js"></script>
<script type="text/javascript">
    // Ask jsonlib's middle-man server to fetch the page, then inject
    // the returned HTML into the placeholder div below.
    function fetchPage(url) {
        jsonlib.fetch(url, function(m) { document.getElementById('test').innerHTML = m.content; });
    }
</script>

<body onload="fetchPage('https://donate.mozilla.org/page/contribute/join-mozilla?source=join_link')">
<div id="test"></div>
</body>

Most likely, many large websites block services that enable this middle-man functionality. However, it is easy for anyone with access to a server to recreate the process. X-Frame-Options is very useful, but an attacker no longer needs an iframe when copying the entire site's code works just as well. To protect yourself against these kinds of attacks, always make sure that you do not enter sensitive data on domains you do not trust. Always check the URL bar before typing!
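
To show how little such a middle-man takes, here is a toy sketch of one in Python, using only the standard library. It is hypothetical and stripped of the validation and abuse protection a real service would need; a page's JavaScript could then fetch /?url=... from it and read the HTML cross-origin:

# Toy middle-man scraper: fetches any URL on behalf of a browser page.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        target = query.get('url', [None])[0]
        if target is None:
            self.send_error(400, 'missing url parameter')
            return
        # Server-side fetch: the target's X-Frame-Options is never consulted.
        body = urlopen(target, timeout=10).read()
        self.send_response(200)
        # Allow any page's JavaScript to read the response.
        self.send_header('Access-Control-Allow-Origin', '*')
        self.send_header('Content-Type', 'text/html; charset=utf-8')
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('', 8000), ProxyHandler).serve_forever()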