Cloud infrastructure services have allowed our field to gradually abstract computation away from long-standing physical constraints. As cloud adoption increased, we discovered the power and efficiency of quick deployments and elastic scaling, giving birth to the DevOps movement. We’ve been steadily directing more of our attention and resources to what matters most: the applications that differentiate our organizations and create value. We can do this because we spend fewer scarce resources managing and maintaining bare-metal infrastructure.
This, however, is a gradual transition, not an overnight change. It takes time to recognize what changes are needed (or even possible) in this new paradigm. Consequently, it’s natural that we continue to employ both acknowledged and unacknowledged anachronisms from the pre-cloud era. We started out in the only way we could: by creating a complete virtual emulation of a physical computing platform, including just about every part but the cooling fan. This ensured that existing software, particularly operating systems, would run unmodified. In doing so, we carried along many assumptions and components that made sense for long-lived servers, including those that have caused us problems over the years.
As we’ve moved toward a “paper-cup,” ephemeral computing model, these “virtual” components are becoming skeuomorphs; that is, they are features of computing instances that resemble facets of the physical computers they’ve replaced, but are not essential to the new model. Because we still use and value these components, we continue to suffer many of the complexities, inefficiencies, and insecurities that have plagued physical servers for years.
Here at Luminal, work on a major component of Fugue began in Python 2.7. For this component, we had some early deadlines and a lot of architecture to figure out and prove, so for implementation, we went with what was familiar. We think this was the right decision.
However, after we met our deadlines, we took some time to reconsider our platform decision before committing to the existing code base. We knew Python was the right choice, but we had lingering doubts about our decision to continue avoiding Python 3. Upon a closer look, we found that things had changed drastically since the last time we’d seriously considered the question, and the scales were no longer decidedly tipped against Python 3. We ultimately decided to port. In this post, we’ll go over some of the key factors driving our decision.
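The excerpt above doesn’t enumerate the changes involved, so as a rough illustration (our own sketch, not code from Fugue), here are a few of the language-level differences that typically dominate a 2.7-to-3 port:

```python
# Illustrative Python 3 behaviors that commonly drive porting work.

# 1. True division: / returns a float; use // for floor division.
assert 7 / 2 == 3.5
assert 7 // 2 == 3

# 2. Text and bytes are distinct types; encode/decode at I/O boundaries.
payload = "fugue".encode("utf-8")   # bytes, e.g. for the wire
assert isinstance(payload, bytes)
assert payload.decode("utf-8") == "fugue"

# 3. print is a function, so it composes like any other callable.
print("ported", "to", "Python 3", sep=" ")
```

Each of these is a small change in isolation, but the text/bytes split in particular tends to ripple through any code that touches files, sockets, or serialization.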
March 8 was International Women’s Day. Some celebrated. Some scoffed. Some lives are so tough that calendars mean little. In the U.S., a Presidential Proclamation highlights the entire month of March; it’s an eloquent document with compelling reminders of sacrifices made, achievements earned, brutalities endured, present and past, by women. The genderless, luminous being attached to my beautifully gendered identity and sexed body laments the necessity of these kinds of declarations.
But, I would use any tool, including “March,” to spell out history and reality in the public forum, in a persistent attempt to stop vicious patterns from repeating themselves. I remain certain that an honorable alien trying to understand humanity, Googling rape statistics alone (much less employment disparity), would marvel in disgust at such mass carnage against body and soul and agree with Thomas Hobbes: What nasty, brutish, and short lives those humans lead! Zap them and put them out of their misery!
Hang on, alien (and anthropologically-minded friends), hang on.
Startups don’t care about security.
We hear this a lot. It may be a descendant of “developers don’t care about security… that’s InfoSec’s concern,” a situation where at least someone in the organization was paying attention to security. In the developer-dominated world of tech startups, such a statement would be nonsensical. If a startup has dedicated InfoSec staff, they’re probably not a startup anymore.
To be fair, early-stage startups have a lot on their plates: fundraising, product development, acquiring customers. Speed is of the essence, and startups need to avoid distractions that can slow them down. Worrying about security too early can feel a lot like building for scale when you only have five customers. In most cases, a focus on security doesn’t contribute to the bottom line and can appear to work against it. It’s natural to feel that “we’re too small to be a target… it won’t happen to us.”
Software agents are everywhere in the cloud. These little programs perform often complex or repetitive functions on our behalf so we don’t have to. Some agents help us keep our systems updated and avoid configuration drift. Others roam our compute infrastructure in an attempt to keep everything safe from threats. Software agents are designed to make our job easier.
However, in the large and complex systems where an agent’s value should be greatest, the opposite can occur. Getting approval to install agent software on machines can involve a lot of red tape. Deploying and managing hundreds of agents across multiple hosts can be a real hassle. They can sap compute resources and impede performance. And while agents help us monitor our systems, who’s monitoring the agents?
The greatest short-term growth opportunity for Amazon Web Services lies in convincing large enterprises to move their computing into the cloud. Given the sheer volume of enterprise on-premises installations, AWS is counting on the incremental, and in many cases wholesale, migration of legacy operations to the cloud to fuel the next phase of its explosive growth. But despite this being the year of the enterprise at this month’s AWS re:Invent conference, the drumbeat of AWS as fertile ground for startups remained loud and steady.
Everybody understands the rationale for cloud computing: no capital investment, near infinite scaling, and a constellation of ancillary services to address security, analytics, storage, and other requirements. It is now possible to build a substantial business with minimal technical infrastructure and staff. As such, the promise of cloud computing for startups is very enticing, and AWS spared no opportunity to drive this point home. In panel after panel with VCs and angel networks, the message was repeated. If you are a technology startup expecting to get funding, you had better have a pretty good reason for not basing your operations in the cloud.
The Luminal team headed to Las Vegas last week to attend the second Amazon Web Services (AWS) re:Invent conference. As we’re actively developing on AWS, we were eager to learn about new AWS service offerings, explore the AWS ecosystem of developers, Independent Software Vendors (ISVs), and Systems Integrators (SIs), and connect with AWS staff to learn more about how we can build smarter and faster.
The level of energy and excitement at re:Invent was unlike anything we had experienced at a software conference before. There are likely many reasons for this, but we primarily attribute it to the fact that AWS has established itself as the undisputed frontrunner in the cloud, with significant momentum among developers and ISVs. Attempts by competitors to change the subject with bus advertisements and scantily clad cowgirls backfired spectacularly.
The entire Luminal team will be attending AWS re:Invent 2013 this week. Since we are still in stealth mode we won’t have a booth, but we will have a product preview and a detailed white paper on our first product, Fugue.
If you’re attending re:Invent and would like to talk to us, drop me an email and I’ll reach out to you. We are actively recruiting alpha customers for Q1 2014 who want declarative control, native security and simplified operations & maintenance on AWS.
Hope to see you in Vegas!
In the last two posts in this series, I illustrated how an unconsidered VPC architecture can lead to inefficiency and poor resiliency. In this post, I’ll show how to get to an efficient, secure, and highly resilient VPC design. Keep in mind that there are many successful patterns for building a VPC and this is only one of them, but in most cases it is the most logical starting design.
To achieve high fidelity, resiliency, and efficiency, you’ll want to keep things simple, design for multiple Availability Zones (AZs), and use Security Groups…
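The excerpt ends before the design itself, but the shape it describes (one simple VPC, subnets spread across AZs, Security Groups as the access-control layer) can be sketched in a minimal CloudFormation-style template. The CIDR blocks, AZ names, and resource names below are our own illustrative assumptions, not details from the post:

```yaml
# Illustrative sketch only: two subnets in different AZs for resiliency,
# and a Security Group that admits nothing but inbound HTTPS.
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  SubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.0.0/24
      AvailabilityZone: us-east-1a
  SubnetB:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: us-east-1b
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow inbound HTTPS only
      VpcId: !Ref AppVpc
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
```

The key property is that losing one AZ leaves a working subnet in the other, while the Security Group keeps the exposed surface to a single port regardless of how many instances are launched.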
This Friday marks the 49th birthday of the ideas behind one of the most powerful characters in the command shell (both *nix and Windows): the pipe. For those who don’t know, the pipe is this character: |. It’s Shift-Backslash on a US keyboard, and it sends the output of one process to the input of another.
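To make that coupling concrete, here is a small sketch of our own (standard library only) that wires the stdout of one process into the stdin of another, the programmatic equivalent of "producer | consumer" in a shell:

```python
import subprocess
import sys

# Producer: prints the numbers 1..5, one per line.
producer = subprocess.Popen(
    [sys.executable, "-c", "for i in range(1, 6): print(i)"],
    stdout=subprocess.PIPE,
)

# Consumer: sums whatever integers arrive on its stdin.
# Its stdin is the producer's stdout: that connection is the pipe.
consumer = subprocess.Popen(
    [sys.executable, "-c", "import sys; print(sum(int(l) for l in sys.stdin))"],
    stdin=producer.stdout,
    stdout=subprocess.PIPE,
)

# Close our copy of the write end so the consumer sees EOF
# when the producer exits.
producer.stdout.close()

out = consumer.communicate()[0].decode().strip()
print(out)  # 15
```

Neither program knows anything about the other; each just reads stdin and writes stdout. That ignorance is exactly the "garden hose" modularity McIlroy was after.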
Doug McIlroy with former colleague Dennis Ritchie at the Japan Prize Foundation ceremony in May 2011.
The pipe, and its counterparts stdin and stdout, were essentially described in a memorandum written by Doug McIlroy on October 11, 1964. At the time, McIlroy was just beginning to lead the Computing Techniques Research Department at Bell Labs, where UNIX was born. The key sentence we’re celebrating is this one:
We should have some ways of coupling programs like garden hose–screw in another segment when it becomes necessary to massage data in another way.
McIlroy’s memo would be of great importance even if it could only be said to be the origin of pipes. However, given that the memo was written at such a nascent time in computing, it seems fair to say that what McIlroy described was much broader: a standard software interface, as well as the accompanying modularity.
These concepts are now so ingrained in our thinking as programmers that it might feel as if they were always there. Thankfully, in UNIX they always were. But we’re taking time this week to be grateful for the spark of genius that conceived them. These ideas may seem inevitable in hindsight, but that is a mark of great ideas: they always seem obvious in retrospect.
So this week, raise a (pipe-like) mug of beer, maybe with a slice of (pipe-like) pumpkin roll, and celebrate the genius of Doug McIlroy and the CTRD team at Bell Labs. October 11: It’s | Day!