Cloud infrastructure services have allowed our field to gradually abstract computation tasks from long-standing physical constraints. As cloud infrastructure adoption increased, we realized the power and efficiency of quick deployments and elastic scaling, giving birth to the DevOps movement. We’ve been steadily directing more of our attention and resources to what matters most: the applications that differentiate our organizations and create value. We can do this because we spend fewer scarce resources managing and maintaining bare-metal infrastructure.
This, however, is a gradual transition, not an overnight change. It takes time to recognize what changes are needed (or even possible) in this new paradigm. Consequently, it’s natural that we continue to employ both acknowledged and unacknowledged anachronisms from the pre-cloud era. We started out in the only way we could, creating a complete virtual emulation of a physical computing platform, including just about every part but the cooling fan, which ensured that existing software, particularly operating systems, would run on it. In doing so, we carried along a lot of assumptions and components that made sense for long-lived servers, including those that caused us many problems over the years.
As we’ve moved toward a “paper-cup,” ephemeral computing model, these “virtual” components are becoming skeuomorphs; that is, they are features of computing instances that resemble facets of the physical computers they’ve replaced, but are not essential to the new model. Because we still use and value these components, we continue to suffer many of the complexities, inefficiencies, and insecurities that have plagued physical servers for years.
Startups don’t care about security.
We hear this a lot. It may be a descendant of “developers don’t care about security… that’s InfoSec’s concern,” a situation where at least someone in the organization was paying attention to security. In the developer-dominated world of tech startups, such a statement would be nonsensical. If a startup has dedicated InfoSec staff, they’re probably not a startup anymore.
To be fair, early-stage startups have a lot on their plate: fundraising, product development, acquiring customers. Speed is of the essence for startups and they need to avoid distractions that can slow them down. Worrying about security too early can feel a lot like building at scale when you only have five customers. In most cases, a focus on security doesn’t contribute to the bottom line and can even appear to work against it. It’s natural to feel like “we’re too small to be a target… it won’t happen to us.”
Software agents are everywhere in the cloud. These little programs often perform complex or repetitive functions on our behalf so we don’t have to. Some agents help us keep our systems updated and avoid configuration drift. Others roam our compute infrastructure in an attempt to keep everything safe from threats. Software agents are designed to make our job easier.
However, in cases of large and complex systems where the true value of the agent should be realized, the opposite can occur. Getting approval to install agent software on machines can involve a lot of red tape. Deploying and managing hundreds of agents on multiple hosts can be a real hassle. They can sap compute resources and impede performance. And while agents help us monitor our systems, who’s monitoring the agents?
Some organizations, typically government agencies and critical infrastructure industries, have policies that strictly forbid the use of agents in their systems, and for one good reason: security. Since agents reach out and write to your runtime environment, they represent a serious security risk. Agents require open ports on servers to do their jobs (although not necessarily with administrative access), and they can make changes to code on servers. Agents are therefore an attack vector, and one that usually has privileges that allow serious harm to be done.
In short, if a black hat compromises an agent, they may very well have the keys to the castle.
The greatest opportunity for Amazon Web Services to grow in the short term lies in convincing large enterprises to move their computing into the cloud. Given the sheer volume of enterprise on-premises installations, AWS is counting on the incremental, and in many cases wholesale, migration of legacy operations to the cloud to fuel the next phase of its explosive growth. But despite this being billed as the year of the enterprise at this month’s AWS re:Invent conference, the drumbeat of AWS as a fertile ground for startups remained loud and steady.
Everybody understands the rationale for cloud computing: no capital investment, near infinite scaling, and a constellation of ancillary services to address security, analytics, storage, and other requirements. It is now possible to build a substantial business with minimal technical infrastructure and staff. As such, the promise of cloud computing for startups is very enticing, and AWS spared no opportunity to drive this point home. In panel after panel with VCs and angel networks, the message was repeated. If you are a technology startup expecting to get funding, you had better have a pretty good reason for not basing your operations in the cloud.
The Luminal team headed to Las Vegas last week to attend the second Amazon Web Services (AWS) re:Invent conference. As we’re actively developing on AWS, we were eager to learn about new AWS services offerings, explore the AWS ecosystem of developers, Independent Software Vendors (ISVs) and Systems Integrators (SIs), and connect with AWS staff to learn more about how we can build smarter and faster.
The level of energy and excitement at re:Invent was something none of us had experienced at a software conference before. There are likely many reasons for this, but we primarily attribute it to the fact that AWS has established itself as the undisputed frontrunner in cloud computing, with significant momentum among developers and ISVs. Attempts by competitors to change the subject with bus advertisements and scantily clad cowgirls backfired spectacularly.
The entire Luminal team will be attending AWS re:Invent 2013 this week. Since we are still in stealth mode we won’t have a booth, but we will have a product preview and a detailed white paper on our first product, Fugue.
If you’re attending re:Invent and would like to talk to us, drop me an email and I’ll reach out to you. We are actively recruiting alpha customers for Q1 2014 who want declarative control, native security and simplified operations & maintenance on AWS.
Hope to see you in Vegas!
In the last two posts in this series, I illustrated how an unconsidered VPC architecture can lead to inefficiency and poor resiliency. In this post, I’ll show how to get to an efficient, secure, and highly resilient VPC design. Keep in mind that there are many successful patterns for building a VPC and this is only one of them, but in most cases it is the most logical starting design.
To achieve high fidelity, resiliency, and efficiency, you’ll want to keep things simple, design for multiple Availability Zones (multi-AZ), and use Security Groups…
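As a sketch of those principles, here is a minimal, hypothetical CloudFormation fragment: two subnets in different Availability Zones, and a database Security Group that admits traffic only from the web tier’s group rather than from IP ranges. All names, CIDR blocks, and zones are illustrative, not a prescription:

```json
{
  "Resources": {
    "Vpc": {
      "Type": "AWS::EC2::VPC",
      "Properties": { "CidrBlock": "10.0.0.0/16" }
    },
    "SubnetA": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": { "Ref": "Vpc" },
        "CidrBlock": "10.0.0.0/24",
        "AvailabilityZone": "us-east-1a"
      }
    },
    "SubnetB": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": { "Ref": "Vpc" },
        "CidrBlock": "10.0.1.0/24",
        "AvailabilityZone": "us-east-1b"
      }
    },
    "WebSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "VpcId": { "Ref": "Vpc" },
        "GroupDescription": "Web tier: open to the world on HTTP",
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80",
            "CidrIp": "0.0.0.0/0" }
        ]
      }
    },
    "DbSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "VpcId": { "Ref": "Vpc" },
        "GroupDescription": "DB tier: reachable only from the web tier",
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": "3306", "ToPort": "3306",
            "SourceSecurityGroupId": { "Ref": "WebSecurityGroup" } }
        ]
      }
    }
  }
}
```

The key design choice is referencing the web tier’s Security Group by ID instead of hard-coding CIDR blocks: membership in the rule then follows your instances automatically as they come and go across zones.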
This Friday marks the 49th birthday of the ideas behind one of the most powerful characters in the command shell (both *nix and Windows): the pipe. For those who don’t know, the pipe is this character: |. It’s Shift-Backslash on a US keyboard, and it is used to send the output of one process to the input of another.
Doug McIlroy with former colleague Dennis Ritchie at the Japan Prize Foundation ceremony in May 2011.
The pipe, and its counterparts stdin and stdout, were essentially described in a memorandum written by Doug McIlroy on October 11, 1964. At the time, McIlroy was just beginning to lead the Computing Techniques Research Department at Bell Labs, where UNIX was born. The key sentence we’re celebrating is this one:
We should have some ways of coupling programs like garden hose–screw in another segment when it becomes necessary to massage data in another way.
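That garden-hose coupling is exactly what the pipe gives us today. Here’s a small, hypothetical word-frequency pipeline in that spirit (the input text is purely illustrative):

```shell
# Each program reads stdin and writes stdout; the pipe couples them
# like segments of garden hose. tr puts one word per line, sort
# groups identical words, uniq -c counts each run, and sort -rn
# puts the most frequent words first.
printf 'the cat and the dog and the bird\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
# the most frequent word, "the", ends up on the first line of output
```

Each stage knows nothing about its neighbors; screwing in another segment (say, a filter for stop words) requires no changes to the others.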
McIlroy’s memo would be of great importance even if it could only be said to be the origin of pipes. However, given that the memo was written at such a nascent time in computing, it seems fair to say that what McIlroy described was much broader: a standard software interface, as well as the accompanying modularity.
These concepts are now so ingrained into our thinking as programmers that it might feel as if they were always there. Thankfully, in UNIX they always were. But we’re taking time this week to be grateful for the spark of genius that conceived them. No doubt these ideas seem inevitable; but then again, that is a mark of great ideas: they always seem obvious in retrospect.
So this week, raise a (pipe-like) mug of beer, maybe with a slice of (pipe-like) pumpkin roll, and celebrate the genius of Doug McIlroy and the CTRD team at Bell Labs. October 11: It’s | Day!
Dennis, an engineer at Complicado Corporation, has decided to try porting his company’s web application to AWS. Dennis does a little reading and realizes that he should use VPC so his database server is in a private subnet and hits the AWS web console. He fires up the Start VPC Wizard. Scanning the options, Dennis sees “VPC With Public and Private Subnets”. Cool – Dennis’ work is done!
He leaves the defaults alone and ends up with a network that looks like this:
Dennis starts creating EC2 instances and notices that they are instantiated into a particular subnet, so he drops his web server into the public subnet and his database server into the private one. Dennis slaps an Elastic IP onto his web server, creates some DNS entries in Route 53, and is off to the AWS races.
Most of the features of Amazon Web Services (AWS) are low risk in terms of changing your mind later. Don’t like an EC2 instance type? Just stop the instance and start it with a new type. Want a larger EBS volume? Simply snapshot the current one and create a larger volume from it. This flexibility and low cost of errors are some of the great features of the AWS platform.
However, one place on the AWS platform where you really need to get things right from the start is in your Virtual Private Cloud (VPC) design. Unfortunately, there isn’t a lot of wisdom imparted through the defaults or the documentation provided. The purpose of this post is to lay out some best practices so you won’t find yourself up a creek later. If you’ve already gone partway up a creek, you’ll be fine; AWS is a pretty agile canoe and there is no shortage of paddles.