Wednesday, January 28, 2015
NYC Blizzard 2015
I am going to break with my traditional technology-based posts to share some images I took during the "blizzard" this past Monday. NYC only wound up getting about eight inches of snow - a lot less than expected.
Thursday, January 15, 2015
Using IAM Roles and S3 to Securely Load Application Credentials
Many applications require certain information to be provided to them at run-time in order to utilize additional services. For example, applications that connect to a database require the database connection URL, a port, a username, and a password. Developers frequently utilize environment variables, which are set on the machine on which the application is running, to provide these credentials to the underlying code. Some developers, against all recommendations, will hard-code the credentials into an application, which then gets checked into git, distributed, etc.
Ideally, the credentials required by an application should not be hard-coded at all, or even accessible to processes outside of the one running the application itself. To achieve this, the application must determine what additional credentials it needs and load them prior to starting its main command.
Many applications that run on Amazon's Web Services platform have the added advantage of being hosted on EC2 instances that can assume specific IAM roles. An IAM role is essentially a definition of access rights that are provided to a particular AWS resource (an EC2 instance in this case). AWS takes care of generating temporary credentials for that instance, rotating them, and ensuring they are provided only to the assigned instance. Additionally, the AWS command line tools and various AWS language-specific SDKs will detect that they are being run on an instance using an IAM role and automatically load the necessary credentials.
As developers, we can take advantage of IAM roles to provide access to credentials that are stored in a private S3 bucket. When the application loads, it will use its IAM role to download the credentials and load them into the environment variables of the process. Then, wherever they are needed, they can simply be called by accessing the environment variable that has been defined.
As an example of this setup, here are the steps I would take to run a Node.js web server that requires some database credentials:
1. Create an S3 bucket called "organization-unique-name-credentials".
2. If you plan to have multiple applications, create a new folder for each within the bucket: "organization-unique-name-credentials/web-app", "organization-unique-name-credentials/app-two", etc. Ensure your existing AWS users have the proper access rights to each folder.
3. Enable encryption on the bucket (you can use either an AWS-managed key or your own).
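Note that S3 will only reject unencrypted uploads if the bucket policy says so. A minimal sketch of such a policy for SSE-S3 (AES256) might look like the following (the statement ID is my own naming; adjust the condition if you use your own KMS key):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::organization-unique-name-credentials/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        }
    ]
}
```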
4. Create a file called credentials.json that looks like this:
{
    "DB_URL" : "some-database-connection-string.com",
    "DB_PORT" : 3306,
    "DB_USER" : "app_user",
    "DB_PASS" : "securepass"
}
5. Upload the file to the right S3 bucket and folder (if you required encryption on the bucket, be sure to enable it for the upload, or the upload will fail).
6. Create an IAM role for your instance. In the IAM console, click "Roles," then create a new role, enter a name, select "EC2 Service Role," and give it the following policy (add any other rights the app may need if it accesses other AWS resources):
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::organization-unique-name-credentials/web-app/credentials.json"
            ]
        }
    ]
}
7. Launch your EC2 instance, selecting the role you just created.
8. In your code, do the following (node pseudo-code):
// Load credentials from S3 using the instance's IAM role, then start the app
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';

var s3 = new AWS.S3();
var params = {
    Bucket: 'organization-unique-name-credentials',
    Key: 'web-app/credentials.json'
};

s3.getObject(params, function(err, data) {
    if (err) {
        console.log(err);
    } else {
        // Parse the JSON file and copy each key into an environment variable
        var credentials = JSON.parse(data.Body.toString());
        for (var key in credentials) {
            console.log('Setting environment variable: ' + key);
            process.env[key] = credentials[key];
        }
        // Now connect to the database, e.g.:
        // db.conn({user: process.env['DB_USER'], password: process.env['DB_PASS']});
    }
});
9. Run the app, and you will notice that the environment variables are downloaded from S3 and are set before the database connection is attempted.
If you're using Node.js, I made a module that does exactly this: https://www.npmjs.com/package/secure-credentials
If you're not using Node.js, this technique can be applied to any language that AWS has an SDK for. It isn't 100% hacker-proof (if someone managed to log into your instance as root, he or she could still modify the source code to display the credentials), but combined with other AWS security mechanisms such as security groups and VPCs with proper network ACLs, it can certainly help. Additionally, it keeps credentials out of the source code.
One final note: if you're running this app locally to test, your machine will obviously not have an EC2 IAM role. When testing locally, it's okay to use AWS keys and secrets, but be sure to keep them in a separate file that is excluded with .gitignore.
Tuesday, January 13, 2015
AWS Cross-Account IAM Roles in CloudFormation
The AWS documentation is relatively sparse when it comes to creating specific IAM role types using CloudFormation. It describes the process of setting up standard roles, attaching roles to instances, etc., but doesn't mention that all of the other role types can also be created using CloudFormation.
For example, when you log into the AWS console and click on "IAM," you see a number of different roles you can create:
AWS Service Roles
Role for Cross-Account Access
Role for Identity Provider Access
However, these role types are merely different adaptations of the same concept. In the following steps, I'll show how to create a Cross-Account Role using CloudFormation.
1. Add the following to the "Resources" section of your CloudFormation template:
"CrossAccountRole" : {
"Type" : "AWS::IAM::Role",
"Properties" : {
"AssumeRolePolicyDocument" : {
"Statement" : [
{
"Effect" : "Allow",
"Principal" : {
"AWS": "arn:aws:iam::ACCOUNT_NUMBER_HERE:root"
},
"Action" : [
"sts:AssumeRole"
]
}
]
}
}
},
2. Add another resource for the policy:
"CrossAccountPolicy" : {
"Type" : "AWS::IAM::Policy",
"Properties" : {
"PolicyName" : "IAMInstancePolicy",
"PolicyDocument" : {
"Statement" : [
{
"Effect" : "Allow",
"Action" : [
"*"
],
"Resource" : [
"*"
]
}
]
},
"Roles" : [
{ "Ref" : "CrossAccountRole" }
]
}
},
3. Adjust the account number and resources as needed:
This policy gives admin access to any account you specify. To restrict permissions, change the statement section of the policy document as desired.
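One addition worth considering (not part of the template above): the trusted account needs the role's ARN to call sts:AssumeRole, so exporting it from the stack saves a trip to the console. A minimal "Outputs" section would look like this (the output name is my own):

```json
"Outputs" : {
    "CrossAccountRoleArn" : {
        "Description" : "ARN the other account uses with sts:AssumeRole",
        "Value" : { "Fn::GetAtt" : [ "CrossAccountRole", "Arn" ] }
    }
}
```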
Monday, January 5, 2015
Quickly Find What is Using Disk Space on Linux
Here's a quick command to find out which folder is consuming the most space on Linux. It also sorts the results so the most space-consuming folders appear at the bottom:
du -sh * | sort -h
Run this in any directory to see which of its subdirectories consume the most space.
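For example (directory and file names here are hypothetical), creating two folders of different sizes shows the sorting in action, with the largest folder listed last:

```shell
# Create two sample folders of different sizes
mkdir -p demo/big demo/small
head -c 1048576 /dev/zero > demo/big/file    # ~1 MB
head -c 1024 /dev/zero > demo/small/file     # ~1 KB

# The largest folder sorts to the bottom
du -sh demo/* | sort -h
```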