Containers Don’t Really Boot

Docker has been a great advancement for mass consumption of Linux-based containers.  The maturation of the virtual machine boom that has been building since the early 2000s led to mass acceptance and deployment in public and private clouds.  To be sure, asking for bare metal today can be seen as a faux pas without a well-defined use case (like super high IO).  So, now that folks have accepted that slices of CPU, memory, and disk served up by well-known hypervisors (KVM, ESXi, Xen) are good enough for most workloads, taking the next step to containers is not that big a leap.  Except that now it is more common to run containers on VMs than on bare metal.  So now we get a slice of a slice of a slice of resources!
Virtual machines are just what their name implies: full machines that are virtualized.  They have virtual hardware that virtually boots an OS kernel and mounts a filesystem.  The OS doesn’t really know that the hardware it is running on is not real.  Containers, on the other hand, are not virtual machines.  Containers are fully sandboxed processes using the host machine’s OS kernel.  So, when running on a VM, they are slices of VM vCPU, memory, and disk allotted to fully sandboxed processes.  This is the part that had me perplexed for a while, until I ventured to understand exactly what happens when an LXC container starts versus when a virtual machine boots.

Boot or Start?

Let’s compare how CentOS Linux boots on a virtual machine versus how it starts in a container:
Virtual Machine/Bare Metal:
  • Power on: the system BIOS loads and looks for sector 0, cylinder 0 of the boot drive (typically /dev/hda or /dev/sda)
  • The boot drive contains the MBR, which hands off to a boot loader such as GRUB (typically in /boot); GRUB locates the kernel and loads it (based on the GRUB config)
  • The kernel (vmlinuz) uncompresses itself into memory
  • The kernel mounts the temporary read-only root filesystem via the initramfs (configured in GRUB)
  • The kernel locates and launches the /init program from within the initramfs, which hands off to /sbin/init
  • Init determines the runlevel via /etc/inittab and executes the startup scripts
  • Per its fstab entry, the root filesystem completes an integrity check and is then re-mounted read-write
  • You get a shell via /bin/sh
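A quick way to see the tail end of this chain on a running system: the kernel records the arguments the boot loader handed it, so you can inspect them after boot. A minimal sketch (the version string shown is just an example):

    # show the boot arguments GRUB passed to the kernel
    cat /proc/cmdline
    # e.g. BOOT_IMAGE=/vmlinuz-3.10.0-123.el7.x86_64 root=/dev/mapper/centos-root ro rhgb quiet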
Container:
  • Docker tells LXC (now libcontainer) to start a new container using your image and run config
    sudo docker run -i -t centos7 /bin/bash 
    • Under the hood this runs lxc-create or the libcontainer equivalent with params (example)
      lxc-create -t mytemplate -f lxc.conf -n mycontainer
  • Docker on RHEL/CentOS/Fedora systems uses device mapper, which uses a sparse file to hold container images here:
    /var/lib/docker/devicemapper/devicemapper/data
  • Docker mounts the base image (a directory structure) as read-only and creates a new read-write (CoW) layer on top of it, using the configured union filesystem (AUFS, devicemapper, overlayfs) as the mounted root filesystem
  • Docker gives you a shell via /bin/sh (if you asked for one in your docker run command or Dockerfile config)
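You can see just how little “booting” happens by checking the process table inside a fresh container. A minimal sketch (assuming a local centos base image with ps available): your shell is PID 1, and there is no init, no getty, no daemons:

    # start a throwaway container and look at what is actually running
    sudo docker run -i -t centos /bin/bash
    ps -ef
    # output will look roughly like:
    # UID    PID  PPID  CMD
    # root     1     0  /bin/bash
    # root    12     1  ps -ef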

“It is perhaps more precise to say that a linux environment is started rather than booted with containers.”

The entire Linux “boot” process that a normal virtual machine goes through is essentially skipped; only the last steps, where the root filesystem is mounted and a shell is launched, happen.  It is perhaps more precise to say that a Linux environment is “started rather than booted”.  I was also further confused by the Docker terminology, which uses the word “image” to describe something different from cloud images.  When I hear “image” I think of AMI-style full virtual machine images as used in clouds.  Docker uses the term “image” to describe what is really a minimal layered root filesystem (union mounted).  This is all a bit confusing at first until you remember that everything in Linux is a file.  If you dig around and take a look at some of the utilities used to create these “base images”, such as febootstrap/supermin or debootstrap, you will see that they are creating clean, consistent directories and files for the Linux distro, output in various formats such as .img or .tar.  So, Docker “images” are really nothing more than directories and files pre-populated with the minimum viable set of Linux components and utilities you need for a functional Linux system.
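If you want to convince yourself that an image really is just directories and files, export one and list the result. A minimal sketch (the container name “probe” is just a hypothetical label):

    # create a throwaway container from the base image, then stream its filesystem as a tar
    sudo docker run --name probe centos true
    sudo docker export probe | tar -tf - | head
    # bin/  dev/  etc/  home/  lib/ ... plain old distro directories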

“This is all a bit confusing at first until you remember that everything in Linux is a file.”

When Docker LXC/libcontainer-based containers “boot”, they are really just starting a kind of super “chroot” of processes with totally sandboxed storage and networking.  They don’t need to do a full boot since they are just leveraging the OS kernel of the host system.  All they seem to need is the minimum viable Linux directory structure, tools, and utilities.  Additionally, because Docker caches content, things run even faster since there is less to download.  These are the reasons why containers “boot”, or more precisely “start”, incredibly fast.  Because you don’t have to go through a fully virtualized system boot process like a virtual machine or bare metal machine, you get productive “process-wise” rapidly in your own super-sandboxed Linux environment.
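The shared-kernel point is easy to demonstrate. A minimal sketch (assuming a local centos base image): the kernel version reported inside the container is exactly the host’s, because there is no guest kernel at all:

    uname -r                               # host kernel, e.g. 3.10.0-123.el7.x86_64
    sudo docker run --rm centos uname -r   # prints the same string: the host kernel is shared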

Union File Systems and the Neo Image Zeitgeist

One cool thing Docker introduced is the use of union mount layered file systems to control the craziness of working with images.  When I say image “craziness”, I might need to clarify with a refresher for those youngsters who didn’t go through the whole “anti-image” zeitgeist of the past five years.  Let’s rewind to the early 2000s, when people discovered the ability to create sector-by-sector disk copies and saved all the time of installing apps over and over again (Ghost, anyone?).  Everyone was in love with images then, and naturally started using them in VMs when VMware was new and hot.  It was only after years of dealing with homegrown image versions and patching problems that folks started becoming more and more anti-image.  To be sure, many people made many horrible images (especially for Windows machines) that didn’t properly get sanitized (with sysprep or similar tools) before use as a VM template, which only served to exacerbate things.

Fast forward to 2005-ish, when CM tools like Puppet (and later Chef, in 2008) were formed in the “anti-image” rebellion.  What people wanted in these modern CM tools was the ability to repeatedly build pristine machines literally from bootstrap.  What this meant to many was no images ever: PXE the world and then Chef it.  As the public cloud took off, so did people’s need to build servers at an increasingly rapid pace.  PXE bootstrapping everything was just too slow and often not possible in multi-tenant cloud environments like AWS.  The compromise answer was to create super minimal “base images” (also called JEOS or Stem Cell images) which were super pristine and small.  These base images for full virtual machines booted much faster, and the fact that they had very little on them didn’t matter anymore since we could reliably and cleanly install everything declaratively in code using Puppet or Chef.

Fast forward to today, and folks often find that full VMs booted and installed with cookbooks are again not fast enough for them.  Also, the complexity of some CM tools meant it was a fair amount of work to stand up your Puppet or Chef environments before they paid off in speed and as a single source of truth.  Enter containers.  However, just getting a container only gives you a pristine base image if you start out with one.  Beyond any pristine base container image, any customizations you might need (like installing your app software components) would take you back to the days of image sprawl, unless you used modern CM like Puppet or Chef to provision on top of your container base images.  Docker decided to fix this old problem with a new twist by using a union mount for layered, copy-on-write filesystems.  What they did was take the concept of the pristine base image (which, we’ve learned, is nothing more than minimum viable Linux distro directories and files with components and utilities) and allow you to leave it in pristine shape.  They then allow you to layer the components you need on top of each other, leaving each layer as a read-only thin delta of changes.  The automation (infra as code) offered by Docker is via the Dockerfile, where the machine DNA is located.  What is still yet to be determined is whether the Dockerfile is enough to get your container into the end state you desire.  For example, will layered “super thin delta images” be a replacement for modern CM tools?  Or, more likely, will CM tools like Chef be used to build each thin layer delta image (TLDI)?
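As a sketch of what those thin layer deltas look like in practice, here is a hypothetical Dockerfile (the httpd package and index.html file are just illustrative); each instruction produces its own read-only layer stacked on the pristine base image:

    # hypothetical Dockerfile: every instruction becomes a thin, read-only delta layer
    FROM centos:centos7
    RUN yum install -y httpd               # layer: the httpd package and its dependencies
    ADD index.html /var/www/html/          # layer: just this one file
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]

After building, running docker history on the resulting image lists each of these layers and its size, which makes the “thin delta” idea concrete.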


What the Heck is an Architect?

I was never one of those little kids who told mommy and daddy what they wanted to be when they grew up.  I didn’t even start officially working in the “IT” industry until 1996.  I just kind of fell into the career because I enjoyed fixing other people’s computers all the time.  The closest I ever came to wanting a particular job as a teenager was being an architect.  You know, the traditional kind, like the dad on “The Brady Bunch”?  In my mind an architect was always an engineer with a big desk, rulers, calculators, and knowledge of complex math, physics, and materials science.  They were also visionary creative types who could come up with amazing designs like the Chrysler Building, the Transamerica Tower, or Fallingwater.  In my mind these architects could do designs in an office, but also go out in the field and do some of the engineering work themselves.

The Architect - Mr. Brady

Fallingwater - Frank Lloyd Wright

Fast forward to today, when the job title of architect can take on a variety of forms: software, infrastructure, political, health care, network, security, traffic, etc.  The title has morphed into any type of senior design position in some fields.  I myself am part of this trend, with the current job title of “Infrastructure Architect”.  However, due to my admiration for the “traditional” architect job, I feel an obligation to at least try to map the skills of the traditional architect onto the modern IT infrastructure engineering space.  This has been the cause of some stress for me over the past three years.  I will freely admit that today I worry about what it takes to be proficient in this modern “infrastructure as code” world.  After having a few beers, I sat on my porch and tried to jot down some notes on what troubled me personally and what the causes might be.  I think I have narrowed them down: personal expectations versus market expectations, time management, and ambition.

Personal Expectations vs Market Expectations

I would be lying to you if I did not admit that I struggle every day to meet my own expectations for an architect in the infrastructure space.  Infrastructure has several “silos/towers” that represent traditional capability areas: servers and virtualization, storage, networking, security, databases, service buses, application hosting, as well as monitoring and performance metrics.  Deep knowledge in more than one of these towers is what I would say an architect should have.  Given the amount of knowledge needed to cover multiple silos, this can take years of work.  In my case the journey has spanned over 17 years, since 1997.  There is no formal curriculum, so to speak, for “architect”, so it kind of materializes via the technical jobs you take and a little serendipity.
My architect journey kind of materialized like so…

  • Hardware phase (servers/PCs/peripherals)
  • DOS phase (batch programming, emm386 fun days)
  • AS400 and VAX/VMS phase
  • Token ring and ethernet (10base2-10000) phase (jiggle it just a little bit)
  • Novell phase (3 to NDS)
  • Windows phase (over a decade of windows troubleshooting madness)
  • The whole certification industry treadmill phase (mcse/cne/ccnp/blah thank goodness this is dead)
  • The DBA phase (SQL server and Oracle)
  • Network engineering phase (ethernet to satellites to sonet rings to mpls and vpn)
  • Unix/linux phase (Solaris, Slackware, mandrake and RHEL)
  • The Perl/CPAN/Sourceforge phase
  • The php/html/css everything phase
  • The slashdot addiction phase
  • The AWS and Rackspace IaaS phase (ongoing)
  • The OSS and Github revolution phase (ongoing)
  • The ruby and cm transformation (ongoing)
  • The Hacker News addiction phase (ongoing)

The presumed belief is that the more stuff you learn, the more valuable you become through a holistic experience lens.  This is likely true, in that you learn over time how multi-level dependencies and other design issues can affect resiliency, performance, and scale.  Eventually the need for this kind of holistic knowledge was enough that the modern job titles of “architect” in the IT space arose.  In some ways, the architect position is seen as a more senior role (even though many organizations don’t have an HR job code with architect in it; to be sure, I’m ambivalent on whether the job title even needs to be official).  The flip side is that going deep in n+3 or more capability silos becomes increasingly difficult to do well given the other responsibilities that often fall on the shoulders of an “architect”.  The funny thing is that these “other responsibilities” often begin to take upwards of 70% or more of an architect type person’s time.  There is a constant balancing act going on, where one foot is on the hands-on (development/testing/research) side of the fence, and the other is on what I call the CCE (collaboration, communication, and evangelism) or leadership side of the fence.  The latter half inevitably results in meeting hell.


DO > TALK

Why do these other things increasingly take over the time of an architect type person?  I have thought a lot about this, and I believe it likely boils down partly to the holistic experience thing (others may have only a single-silo view of the world and hence need help understanding why something could be better another way) and partly to the fact that the architect (even though the role may not be official on paper) is saddled with the overall responsibility that things work cohesively within a given capability area.  When someone has a job responsibility to “make things work” together, it implies that they ideally understand and have proficiency in the technologies involved.  This gets increasingly difficult the more technology areas you are expected to know.  I will be the first to admit that this knowledge responsibility only works when a federated model of architectural ownership is in place.  By this I mean that having multiple architect roles within infrastructure is important, and many more within an enterprise is an absolute necessity.  The old mindset of someone having the job title of “Enterprise Architect” is a farce.  It is a feat of impossibility if you expect to have actual architectural efficacy in an IT organization.  The archaic “ivory tower” approach of managing architecture in a top-down model is just not feasible.  To be sure, I don’t think it ever was actually effective; and so in my mind the “Enterprise Architect” is dead, and has been for a while.  The point I want to make here is that there is just too much to know to be truly effective outside the limited scope of a major capability area.  The ability to be hands on and also do the requisite CCE is, I feel, best within a more manageably sized capability area.

“The ‘Enterprise Architect’ is Dead”

Pretty much since 2007, the infrastructure as a service revolution inspired by AWS has been a massive snowball of change.  This change has been, and continues to be, awesome.  I have wholeheartedly embraced it and have been taking part in the transition for years now.  The changes have in part re-invigorated the popularity of the open source movement, and with it has come unparalleled opportunity for startups.  It has inspired the creation and success of what has become a tidal wave of open-source projects on GitHub.  All one has to do is look at Hacker News on any given day and see that some new project was announced that “fixes this” or “closes this gap in functionality”.  One could argue that a business could start and run today without much, if any, proprietary commercial software (short of the client devices).  It is an amazing time we are living in, one that maybe 30 years from now will be mentioned in history books as the tipping point for mainstream open-source adoption and innovation.  All of this is super cool, especially if you are a small to medium sized startup and can get everything you need infrastructure-wise from something like AWS or Rackspace or DigitalOcean using an API.  Even better, modern configuration management and containerization sit on top of these IaaS platforms and allow you to install packages and manage build-outs rapidly using “infrastructure code” as well.  If you are a software developer and need to be productive in the infrastructure space (even if you don’t know it well), you can do so effectively with just the knowledge of a given API.

Marc Andreessen’s famous callout in 2011 that “software is eating the world” has never been more true than in the IT infrastructure space.  I have felt it every day for the past five years.  The challenge, specifically for me, is staying productive, efficient, and helpful in a space that is increasingly more software.  Software development, to be precise.  Let’s be honest: even with the cloud IaaS revolution, most enterprises still run the bulk of their infrastructure in house.  Part of this is dealing with legacy apps and the baggage they bring, part of it is a pure cost play where the bandwidth and storage used by an enterprise can still cost less in house than on AWS or Rackspace, and part of it is security and performance concerns.  So with many enterprises trying to take part in this “infrastructure as code” revolution while still maintaining significant internal hardware, the need for someone who knows both traditional infrastructure management and the new code-based approaches is critical.  To be sure, a software-based management abstraction layer (API) increasingly means that the work an infrastructure architect does is more dev and less ops.  Since I want to be an architect who does both hands-on coding and traditional architecture work (to remain relevant), I need time to focus on development.  Getting the time to do the work and focus is a real challenge.  But dude, the thing is, I have to do this to enable infrastructure to provide any value for the app dev teams in the enterprise.  If I can’t help make internal infrastructure as easy to consume and as programmable as Amazon AWS, then the devs increasingly see me (vis-à-vis infrastructure) as a blocker, and I don’t blame them.  At least for now, because of the need to support legacy apps and all the painfully inflexible COTS packages (hello, Oracle), infrastructure also ends up having to play a kind of luxury one-on-one concierge service for app teams.  I don’t know how long this “concierge” style need will last, but I’m gonna be pushing as much self-service infrastructure via API as fast as I can make it happen.  Things like OpenStack and SDN are making this possible for me, but they don’t address everything needed.

Pressure on Myself in the Valley

If you buy into the “software will eat the world” argument (and I feel like I do), then the question becomes: how can I be the best at my job as an infrastructure architect?
More precisely:

  • What is valuable work?
  • What are valuable skills?

If the answers are only “writing code”, then there are lots of questions that keep me stressed.  First off, is this thing endemic to Silicon Valley only, or is it more widespread?  I feel like the mass transition is already well underway.  Right now the change is already prevalent in Silicon Valley.  There may be a three to five year delay before it hits middle America, but it’s probably coming.  But how will these changes (for all their merits) scale?  Are all the old sysadmins who can’t code pretty much SOL?  I think perhaps so.  I said the traditional sysadmin was dead in a paper published last year, but I wonder if middle America can keep up.  I mean, this stuff is not exactly easy.  Is this an example of the robotization of everything?  If everyone is writing code, then who is spending time communicating and collaborating (remember, this takes a buttload of time), and who maintains the code down the road?  What about all the small IT consulting companies that handle outsourced IT for the space between mom-and-pop businesses and the enterprise?  Is the “Geek Squad” of the near future going to need staff who are polyglots, understanding CI pipelines and Git?  Writing code takes a singular level of focus and time that is already a challenge for even seasoned developers.  The need for people (often architects) who sit at the event horizon of software development and collaborative design and communications won’t simply go away.  The difference is that now, I think, they need to be doing both.  Architects, as I have mentioned, often lack time due to the immense gravitational pull of the CCE (collaboration, communication, and evangelism) that needs to be done with all the stakeholders/customers of infrastructure.  If I were to surmise that my ideal job time split between “hands on keyboard” coding time and CCE were 50/50, then I’m gonna have to change the way I manage my time.  The balancing act of hands-on work and meetings is just way fracking hard.  I may have to start being more selfish with my calendar and time.  I’m just not sure how best to solve this yet.

“The balancing act of hands on work and meetings is just way fracking hard.”  


Another issue I struggle with is the perception that the ability to do both hands-on development and CCE or leadership well is commonplace in the valley and elsewhere.  This is buttressed by examples of exceptional people like Elon Musk, Sergey Brin, Larry Page, Mark Zuckerberg, and Max Levchin held up as bellwethers for how to work.  I feel like these guys, and others like them, are truly exceptional people.  They garner media attention because they are success stories, but the translation in the valley is perhaps that their skills are as attainable as riding a bike.  I feel like this is just not the case.  I don’t necessarily want to lend too much credence to the Malcolm Gladwell “10k hours” argument, but this shit is just not easy.  The competition here is high, especially if you truly want to make a difference and want to think of yourself as good at your job.  Perhaps I am just too hard on myself, but lately I have gotten to where my blood pressure rises, I get hot, and I feel overwhelmed when I read Hacker News.  Sometimes I feel like all the article titles are flying by my eyes like data in the Matrix, making me feel more inadequate by the minute (the Oracle said I was not “the one”).  Is this the bane of type A personalities who want to be the best and are competitive?  I rarely think of myself as an “expert” at anything, even if I am kicking ass at something.  Sometimes this personality trait of mine helps me and drives me to be better, while other times it exhausts me and gets me down.  Am I a little out of balance with incessantly wanting to be a better programmer, a great speaker and leader, a great skateboarder, up to date on all the latest geopolitics, and able to write eloquent comments on HN?  Maybe.  I might need to wean myself off HN for a little bit for my own mental health.  I think when you start to get jealous of the gas station cashier for having a simple and less stressful job, it might be time to level set.  Then again, I just want to do a good job in this rapidly changing infrastructure as code landscape.  So what is a modern infrastructure architect?  Is it a software developer, a shoe salesman, or a crisis negotiator?

Wait, sorry I have another meeting to attend.  Let me know if you approve and merge what I perceive to be a market pull request.


Skateboarding and Open Source Communities

I have been skateboarding since 1986.  I love it.  I cannot get it out of my blood.  I still skate regularly today, even after years of injuries: broken toe, broken thumb, broken clavicle, multiple ripped ligaments…the list goes on.  This is more than dedication; skating is a life-morphing adrenaline rush that can forever change you.  I also love computers and software development, and have been doing that since 1990.  Rodney Mullen is one of the most influential skateboarders of all time.  You may not have heard of him, but this guy is amazing.  He won literally every freestyle skateboarding competition he ever entered.  He basically invented the entire category of freestyle skateboarding.  He defined a singular work ethic that many in skateboarding couldn’t grok at the time, skating upwards of 9 hours a day.  But he didn’t stop there.  He started a company, made it hugely successful, and then sold it.

Sound familiar?  This is what software developers and folks like us do in Silicon Valley all the time.  Imagine my joy when one of my idols was able to articulate the similarities between the skateboarding community and the open source tech community.  Recently, Rodney Mullen gave a TED talk at the USC campus in LA.  In the talk, Mullen pointed out the importance of creativity and innovation in skateboarding.  He says the greater the contribution to the community, the greater the response and extension by the community.  Man, does this ever hit home.  He gives the example of the ollie, and how that single trick spawned an entire new set of skateboarding tricks that have come to define the sport.

The open source community within the tech world has the same multiplier effect as skateboarding.  Look at the amazing decisions to make software like Linux, Apache, Hadoop, and OpenStack available to the community.  The community responds in kind with continuous advancements and ongoing maintenance.  We all benefit.  So while you may want to do a startup to be successful (and to be sure, skateboarders invent tricks to be successful too), the joy and benefit your creativity brings to the open source community is also a worthy reward.  I say this also to draw your attention to the incredible work ethic of many skateboarders and the innovative community they foster.  Our two communities have many similarities.

See the TED talk here.