Containers Don’t Really Boot

Docker has been a great advancement for the mass consumption of Linux-based containers. The maturation of the virtual machine boom that has been happening since the early 2000s led to mass acceptance and deployment in public and private clouds. To be sure, asking for bare metal today can be seen as a faux pas without some well-defined use case (like super high IO). So, now that folks have accepted that slices of CPU, memory, and disk delivered through well-known hypervisors (KVM, ESXi, Xen) are good enough for most workloads, taking the next step to containers is not that big of a leap. Except that now it is more common to run containers on VMs than on bare metal. So now we get a slice of a slice of a slice of resources!


Virtual machines are just what their name implies: full machines that are virtualized. This means they have virtual hardware that virtually boots an OS kernel and mounts a filesystem. The OS doesn't really know that the hardware it is running on is not real. Containers, on the other hand, are not virtual machines. Containers are fully sandboxed processes using the host machine's OS kernel. So, when running on a VM, they are slices of VM vCPU, memory, and disk for fully sandboxed processes. This is the part that had me perplexed for a while, until I ventured to understand exactly what happens when an LXC container starts versus when a virtual machine boots.

Boot or Start?

Let’s compare boots of CentOS Linux on virtual machines versus containers:
Virtual Machine/Bare Metal:
  • Power on: the system BIOS loads and looks for sector 0, cylinder 0 of the boot drive (typically /dev/hda or /dev/sda)
  • The boot drive contains the MBR, which hands off to a boot loader such as GRUB (typically in /boot) that locates the kernel and loads it (based on the GRUB config)
  • The kernel (vmlinuz) then uncompresses itself into memory
  • The kernel loads the temporary read-only (RO) root filesystem via initramfs (configured in GRUB)
  • The kernel locates and launches the /init program from within the initramfs, which hands off to /sbin/init
  • Init determines the run level via /etc/inittab and executes the startup scripts
  • Per the fstab entry, the root filesystem completes an integrity check and is then re-mounted read-write (RW)
  • You get a shell via /bin/sh
Container:
  • Docker tells LXC to start a new container using the config in your Dockerfile
    sudo docker run -i -t centos7 /bin/bash 
    • Runs lxc-create with params (example)
      lxc-create -t mytemplate -f lxc.conf -n mycontainer
  • Docker on RHEL/CentOS/Fedora systems uses device mapper, which uses a sparse file for holding container images here:
    /var/lib/docker/devicemapper/devicemapper/data
  • Docker starts the base image (directory structure) as read only, and creates the new RW (CoW) layer on top of it.
  • The Docker-configured union filesystem (AUFS, devicemapper, overlayfs) is used for the mounted root filesystem
  • Docker gives you a shell via /bin/sh (if you asked for it in the docker run command or a Dockerfile config)
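A quick way to convince yourself that nothing actually booted is to hop into a container and look around. This is a minimal sketch, assuming the same centos7 image used in the example above:

sudo docker run -i -t centos7 /bin/bash   # run on the host to get a shell in a container

# inside the container: no init chain to be found, just your shell and ps itself
ps aux

# the kernel version reported is the host's kernel, because the host kernel is shared
uname -r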

“It is perhaps more precise to say that a linux environment is started rather than booted with containers.”

The entire Linux "boot" process that a normal virtual machine goes through is essentially skipped, and only the last steps where the root filesystem is loaded and a shell is launched happen. It is perhaps more precise to say that a Linux environment is "started rather than booted". I was also further confused by the Docker terminology, which uses the word "image" to describe something different from cloud images. When I hear "image" I think of AMI-style full virtual machine images as used in clouds. These images are different from the container images used by Docker. Docker uses the term "image" to describe what is really a minimal layered root filesystem (union mounted). This is all a bit confusing at first until you remember that everything in Linux is a file. If you dig around and take a look at some of the utilities used to create these "base images", such as febootstrap/supermin or debootstrap, you will see that they are creating clean, consistent directories and files for the Linux distro, output in various formats such as .img or .tar. So, the Docker "images" are really nothing more than directories and files pre-populated with the minimum viable set of Linux components and utilities you need for a functional Linux system.
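You can see this for yourself. Here is a hedged sketch (again assuming the centos7 image): exporting a container just hands you a tarball of an ordinary Linux directory tree.

# create a stopped container from the image
sudo docker run --name peek centos7 /bin/true

# export its filesystem and list the contents: just plain directories and files
sudo docker export peek | tar tvf - | head -20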

“This is all a bit confusing at first until you remember that everything in Linux is a file.”

When Docker LXC-based containers boot, they are really just starting a kind of super "chroot" of processes with totally sandboxed storage and networking. They don't need to do a full boot since they are just leveraging the OS kernel of the host system. All they seem to need is the minimum viable Linux system: the directory structure, tools, and utilities. Additionally, because Docker caches content, things run even faster since there is less to download. These are the reasons why containers "boot", or more precisely "start", incredibly fast. Because you don't have to go through a fully virtualized system boot process like a virtual machine or bare metal machine, you rapidly get productive, "process-wise", in your own super sandboxed Linux environment.
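One way to convince yourself of the "sandboxed process" model is to look for a container's process from the host. A small sketch, again assuming the centos7 image:

# start a long-running process in a container
sudo docker run -d --name sleeper centos7 /bin/sleep 1000

# on the host: the container's process shows up as an ordinary (namespaced) host process
ps aux | grep 'sleep 1000'

# clean up
sudo docker rm -f sleeper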

Union File Systems and the Neo Image Zeitgeist

One cool thing Docker introduced is the use of union-mount layered filesystems to control the craziness of working with images. When I say image "craziness" I might need to clarify with a refresher for those youngsters who didn't go through the whole "anti-image" zeitgeist of the past 5 years. Let's rewind to the early 2000s when people discovered the ability to create sector-by-sector disk copies, saving all the time of installing apps over and over again (Ghost, anyone?). Everyone was in love with images then and naturally started using them in VMs when VMware was new and hot. It was only after years of dealing with homegrown image versions and patching problems that folks started becoming more and more anti-image. To be sure, many people made many horrible images (especially for Windows machines) that didn't properly get sanitized (with sysprep or similar tools) before use as a VM template, which only served to exacerbate things.

Fast forward to 2005-ish, when CM tools like Puppet (and later Chef, in 2008) were formed in the "anti-image" rebellion. What people wanted in these modern CM tools was the ability to repeatedly build pristine machines literally from bootstrap. What this meant to many was no images ever: PXE the world and then Chef it. As the public cloud took off, so did people's need to build servers at an increasingly rapid pace. PXE bootstrapping everything was just too slow and often not possible in multi-tenant cloud environments like AWS. The compromise answer was to create super minimal "base images" (also called JEOS or stem cell images) which were super pristine and small. These base images for full virtual machines booted much faster, and the fact that they had very little on them didn't matter anymore since we could reliably and cleanly install everything declaratively in code using Puppet or Chef.

Fast forward to today and folks often find that full VMs booted and installed with cookbooks are again not fast enough for them. Also, the complexity of using some CM tools meant that it was a fair amount of work to stand up your Puppet or Chef environments before they paid off in speed and as a single source of truth. Enter containers. However, just getting a container only gives you a pristine base image if you start out with one. Beyond any pristine base container image, any customizations you might need (like installing your app software components) would send you back to the days of image sprawl unless you used modern CM like Puppet or Chef to provision on top of your container base images. Docker decided to fix this old problem with a new twist by using a union mount for layered or copy-on-write filesystems. What they did was take the concept of the pristine base image (which we've learned is nothing more than minimum viable Linux distro directories and files with components and utilities) and allow you to leave it in a pristine shape. They then allow you to layer the components you need on top of each other, leaving each layer as a read-only thin delta of changes. The automation (infra as code) offered by Docker is via the Dockerfile, where the machine DNA is located. What is still yet to be determined is whether the Dockerfile is enough to get your container into the end state you desire. For example, will layered "super thin delta images" be a replacement for modern CM tools? Or, more likely, will CM tools like Chef be used to build each thin layer delta image (TLDI)?
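To make the layering concrete, here is a hedged sketch of a Dockerfile (the image, package, and tag names below are just illustrative): each instruction produces its own thin, read-only delta layer on top of the pristine base, and docker history shows the resulting stack.

# write a tiny Dockerfile: each instruction becomes a thin, read-only delta layer
cat > Dockerfile <<'EOF'
FROM centos7
RUN yum -y install httpd
RUN echo 'hello from a thin delta layer' > /var/www/html/index.html
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
EOF

sudo docker build -t myapp .
sudo docker history myapp    # each line is a read-only delta layer stacked on the base image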


Thoughts on Losing Weight

Rather than buy a bunch of books and other BS, let me drop some science on you, straight from Silicon Valley, on weight loss.

Note:  This is anecdotal science straight from a sample of 1 at its best.

  1. Eat less food
    • (cut overall intake at least 30%)
  2. Eat better food
    • (cut anything with sugar or that metabolizes to sugar: sodas, breads, pasta, fast food)
  3. Exercise at least 5 days a week
    • (so sorry to bust your bubble, but this is key)
  4. Get more sleep
    • (yep, really… like 7+ hours min)
  5. Drink a buttload of water
    • (scientific amount to be sure)
  6. Reduce your stress
    • (I’m so effing serious here. The exercise and sleep does help some here)
  7. If you got this far, then you should drop like 10lbs every 2 months.

Now, that you have the harsh truth, get out there and get it done.

Note to self:  Please follow your own advice :)

Sources:  Dr. Padge


Path it your way

I really like the cool products the team at HashiCorp cooks up.  Unless you have been frozen in carbonite for the past 3+ years, you've likely used or heard about Vagrant: a wonderful tool that lets you rapidly build and test on VMs, and on containers on VMs, using VirtualBox.  As if that wasn't awesome enough, this past year Mitchell and his team have introduced multiple awesome projects including terraform, consul, serf, and packer.  I wanted to get my hands on these tools and use them locally on my Mac.  As with Vagrant, each of these products is packaged into a nice downloadable binary which you can then extract to the directory of your choice for use.  However, after reading the documentation carefully, you'll find that setting your path to these executables is a key part of your environment setup.

Setting path variables is not a big deal at all.  However, I wanted all my HashiCorp product executables to live in a specific directory configuration.

mkdir -p $HOME/hashicorp/{consul,packer,serf,terraform}

$ tree -d $HOME/hashicorp
$HOME/hashicorp
├── consul
├── packer
├── serf
└── terraform

Since there are multiple projects from the same company to be used, I wanted each one in its own subdirectory.  Rather than list each HashiCorp product in my .bash_profile PATH statements, and to prevent other things from getting broken in the process of editing the paths (RVM, etc), I decided to dig around on our good ole friend StackExchange and found some good tips for doing paths for multiple subdirectories.

So, I did the following in .bash_profile to have the shell iterate over each subdirectory and add to the $PATH export.

$ vim .bash_profile

# HASHICORP PATHS
# add each HashiCorp tool subdirectory to the PATH
for p in "$HOME"/hashicorp/*/; do
  PATH="$PATH:${p%/}"
done
export PATH
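A quick sanity check (assuming the binaries have already been dropped into their subdirectories):

# reload the profile and confirm the subdirectories made it onto the PATH
source ~/.bash_profile
echo "$PATH" | tr ':' '\n' | grep hashicorp

# once a binary (e.g. terraform) is unzipped into its subdirectory, it resolves by name
which terraform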

Note: Adding a wildcard of subdirectories to my PATH could be seen as a security anti-pattern, since anything dropped under this root becomes executable by name (including malicious code, if I were ever to get any from HashiCorp). I accept that because I am setting up a specific path root for only the tools from them I intend to drop into this directory structure. Again, this structure is just a personal preference to keep everything in a single root directory with release versions. I could have chosen symlinks or a symlink tool like stow, but I just wanted something quick and functional with the directory structure I desired.

Now, when the folks at HashiCorp release the next awesome product I want to use, I can simply drop it into a subdirectory of my choice (with whatever versioning directory I like), and I'm good to go.
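For example, dropping in a hypothetical new tool (the tool name, version, and zip file below are made up) looks like this:

mkdir -p $HOME/hashicorp/newtool
unzip ~/Downloads/newtool_0.1.0_darwin_amd64.zip -d $HOME/hashicorp/newtool
# the for-loop in .bash_profile globs one level of subdirectories,
# so a binary dropped in $HOME/hashicorp/newtool gets picked up at the next shell login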


Generation wuss article…

This article from Bret Easton Ellis on Millennials is nothing short of brilliant and spot on.  Check it out.

« Generation Wuss » by Bret Easton Ellis

http://www.vanityfair.fr/culture/livre/articles/generation-wuss-by-bret-easton-ellis/15837


For Whom the Batphone Calls

A Runbook for Enterprise Adoption of Open Source Software 


Enterprise leaders wanting to do a "Dev and Ops" pivot need to own up to the fact that you have to earn your keep today when choosing this path.
If shit is broke, YOU should have the power to fix it.  The idea of outsourcing every problem to a paid, closed-source vendor support call can be naive and lazy in many cases.  You should have the power to be your own Batphone.  When you call Batman, you are in essence calling yourself.

Retirement Home Enterprise          

For the longest time, infrastructure and application support teams have operated in a super old, uninspiring workflow.  They got paid well to be, effectively, "glorified maintenance/janitorial staff."  An operations spreadsheet/app has a name on it that says they support X and Y.  When X or Y breaks, they put a ticket in a queue and call people.  These people look at some super basic stuff like 1) Is the server up and pingable?, 2) Is the app process running?, or 3) Is there a vague, nefarious network issue?  If none of these things get hits, they shrug their shoulders, open a ticket with the vendor they pay for support, and wait.  They then report to some person, who reports to some other ops person, that they are "working on the issue" with the vendor.

The vendor then has a support engineer (L1/L2) work the call, ask for sane things like logs, and read the case notes.  The enterprise support person just acts like a drone in this case, typing whatever the vendor support engineer says and accepting whatever the vendor provides as an answer.  If an answer is found, the enterprise support person just accepts the answer/fix and closes the case.  If no fix can be found, it effectively is like reporting a bug (if it is not already well known).  The fix for the bug might come soon or might take indefinitely long.  Unless the enterprise customer is huge (spends loads on support & products), the bug fix doesn't get addressed fast.  It gets addressed on the schedule most convenient to the vendor.  These support contracts are not cheap.  This is such a lazy, lame approach.  I call this the "Batphone Mentality," where the vendor software support superhero is always a support phone call away.  This is especially true for closed source COTS apps, whose heavy-handed approach actually creates a self-reinforcing cycle of enterprise retirement home "Batphone" mentalities.  The problem is that Batman worked for free for Gotham, while enterprises have to pay salaries for multiple external vendor Batmans.

“closed source COTS apps … heavy handed approach actually creates a self-reinforcing cycle of enterprise retirement home “Batphone” mentalities.” 

The Emerging Open-Source Enterprise 

When you decide that you want to control your destiny, get things fixed faster, or add features and functionality to applications and tools you need to run your business, you may be ready to make the open-source switch.  If paying loads of money for support contracts is leaving you beholden to a vendor that is not transparent and open about product bug fixes or feature roadmaps, you are effectively trapped.  Major commercial software vendors with closed source products maintain a kind of hegemony over enterprise customers in the following ways: 

  • Forcing the customer to follow often stringent tech-stack requirements to install and use the product.
  • Forcing the customer to officially forgo support if they deviate from any part of the tech stack.
  • Making money off parts of this “required” tech-stack with onerous licensing.
  • Releasing new versions that require forklift upgrades that are uber costly and are often not backward compatible.
  • Purposely not being open with support documentation and requiring pay/account gateways to get it.
  • Forcing a god forsaken license key or licensing process that makes installs painful and limited.
  • Dropping a product completely leaving the customer with no option. 

“Major commercial software vendors with closed source products maintain a kind of hegemony over enterprise customers.”

For years enterprises just accepted this in large part due to the following reasons:

  • They used closed source OS platforms (Windows/AIX/Mainframes) since viable open-source alternatives were not available.
  • They bought software that was closed source because developing the software was hard and costly.
  • Tools needed to write software for these closed-source platforms were not easily accessible or free ($1000+ for Visual Studio is pathetic).
  • Technology perhaps was not a key part of their business (non-tech companies).
  • Staff that had requisite skills to overcome (1-4) were not widely available. 

Enter the modern tech era, circa 2007–2014, where open-source software is almost an American birthright and as normal and accepted as apple pie.  Open-source operating systems, applications, and software development tools are widely available that can solve many (if not most) of the things an enterprise might need (Linux, OpenStack, Nagios, Hadoop, Tomcat, Apache, Rails, NodeJS, and on and on).  Public code-sharing repositories like Github, Gitlab, Gitorious, Cloudforge, Sourceforge, and GoogleCode have been a revolution in open-source workflow and project accessibility.  Github in particular has been a big part of this inflection point in social coding.  There has likely been more open-source momentum in the past 7 years than in the past 30.  It is truly an amazing time.  Decades from now, folks may look back and ask all us old-timers starry-eyed questions about "what was it like to live during the second open-source revolution?" as if we lived through the equivalent of the roaring twenties or some other historically significant time.

“There has likely been more open-source momentum in the past 7 years than in the past 30.”

While the open-source movement may not have started precisely in the San Francisco bay area, the area has been home to many leaders in the movement, partly perhaps due to the roots (even today) of Silicon Valley as a haven for free thinkers and futurists, and a place where it was ok to be distrustful of governments and "the man."  The bay area hippie culture of the 60s may have also contributed to a tradition, which continues today, of folks wanting things to be free or more accessible.  Today, there is still a vein of this benevolent revolutionary mindset behind every open-source project.  The interesting thing that has happened is that the hippies have been replaced by benevolent libertarian capitalists.  This is very cool.  Many startups in the valley today release their product day one as an open-source project.  They simultaneously offer paid support for customers.  In many cases, their products are based on open-source products but are also sold as SaaS or cloud offerings.  This is an amazing amount of new freedom for the enterprise.

“…the hippies have been replaced by benevolent libertarian capitalists.” 

Be your own Batphone 

If you want to control your destiny, get things fixed faster, or add features and functionality you need on your own schedule, you need to be using open-source software and hiring staff that are not drones or retirees.  Here is a proposal for how the modern enterprise can call its own Batphone.

1) Use open-source software
     – get access to the source code now (there is a boatload out there on github.com)
     – check lists like this (http://www.osalt.com/)
2) Hire good developers for Apps and good developers for Infra
     – Pay them well
     – Empower them
3) Be very strategic about what commercial, closed source software you buy and use
     – It needs to provide a better ROI for functionality, speed, reliability than an OSS alternative.
     – Try to pick closed source software that at least is largely standards based (it can ease a transition off the product later if needed).
     – Put pressure on the vendor to allow flexibility of the tech stack to use open-source projects (tomcat, node, etc).
4) Train your staff and engineers well
5) Accept the fact that you can now truly control your destiny.

On this last point, it is a function of having skilled developers and engineers that use open-source projects and have a specific mentality.  The mentality needed is really an "Anti-Batphone" mentality.  Developers and engineers have to own up to the fact that they are the last person standing.  They are the last soldier in the bunker and have to be MacGyver or Wil Wheaton to get it working.  They have to believe there is no fucking Batphone.  They are the Batphone.

A Sample Enterprise anti-Batphone Support Call Flow

1) Some important app is broke
2) Open-source monitoring software alerts the app owner/developer team 
     – Operations sees that same alert and opens a ticket
3) Internal support engineer starts working on it
     – logs, processes, system level debugging
4) Something is broken with the code (a bug)
     – Look at the damn source code, you have it!
5) Commit code changes with the fix to your CI/CD pipeline.
     – Push to prod for the fix
     – close the internal support case.
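In practice, step 5 might look something like this (a sketch with made-up repo, branch, and ticket names, assuming a git-based CI/CD pipeline):

# you have the source, so fix it yourself and let the pipeline ship it
git clone git@github.com:yourorg/important-app.git
cd important-app
git checkout -b fix/login-null-pointer
# ...edit the offending source file...
git commit -am "Fix null pointer on login (internal ticket 1234)"
git push origin fix/login-null-pointer    # CI/CD runs tests and promotes the fix to prod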

Paying for Support with an Open Source Vendor 

Paying for support with a vendor that has an open-source version of its product is truly a win-win.
You get the luxury of official support.
The open-source vendor gets to make money and you still get the source code.
What if you cannot fix the bug with your internal staff?  What if you need to pay for support?
No problem, do the following: 

1) Locate a vendor that provides the product platform you need as open-source + support
2) Pay these good people for support.
3) Use their product and do the following for support issues
          – Open issues on Github for problems and bugs
          – Fix the bugs yourself and submit pull requests directly to the vendor github
4) Influence product roadmaps by submitting pull requests and collaborating with the vendor:
          – this can be from your own internal staff if they are skilled enough.
          – this can be through skilled contractors that you pay to develop modifications to submit as pull requests to the vendor.
          – this can be through outright paying the vendor for adding the features directly.
5) If for whatever reason the open-source vendor decides to stop working on the project, closes shop, or will not accept your pull requests for product modifications, you still own the source code.
          – you can take over the project
          – you can fork the project and do your own thing

This is an awesome new world with loads of new freedom.  Enterprises need to embrace this and control their destiny by using open-source software and using open-source vendors.  You should have the power to be your own Batphone.  When you call Batman, you are in essence calling yourself.


Being an Apprentice is better than a Scrum Master (Tools do not make thou work with Agility)

Buzzwords. They seem to make their way into the vernacular via the process of regurgitation by folks farther and farther removed from the source. They start out as simple ways for tech folks to quickly communicate a concept to other tech folks and eventually become shorthand for explaining something to the business. The process of using analogy to describe something is not a bad thing, but when a shorthand term itself creates a cottage industry, one might want to dig deeper. In particular, let's talk about the now hackneyed term "Agile." With the growing drumbeat of organizations trying to squeeze more efficiency out of teams, they have turned to the "magical pixie dust" delivered through the now buzzword of Agile. Heck, even people that helped create the Agile Manifesto are now having to respond to it (see Dave Thomas's post here). All these certified scrum-master courses that are popping up like weeds in a bed of mulch after a rain are perhaps the tipping point for the overuse of a shorthand term.
Look, the guys that put together the Agile Manifesto were just trying to cut through all the bullshit that happens in traditional corporate spaces to ship some decent software fast. Taking simple concepts and creating a certification (Scrum Master) for them seems counterintuitive to me since, if it were actually simple, then why would there be a need for a certification rigor?  It makes more sense to me to have the mindset of an apprentice, where you are constantly reminding yourself how to work efficiently by…

  • Valuing Individuals and interactions over processes and tools
  • Preferring Working software over comprehensive documentation
  • Valuing Customer collaboration over contract negotiation
  • Responding to change over following a plan

The reason the guys behind the Agile Manifesto came up with these simple terms was precisely because all the damn project managers at large organizations were so locked into doing massive "waterfall" style project plans and Gantt charts, and dedicating people to technical writing in bloated 6MB Word documents, that software developers just couldn't work quickly and without friction. Dave Thomas made the point that people should stop using the term Agile as a noun, and go back to using it as an action statement more akin to an adjective.
Dave makes the following brilliant statement: "It's easy to tack the word 'agile' onto just about anything. Agility is harder to misappropriate." So for him you aren't an agile programmer – you are a programmer that programs with AGILITY. You don't work on an agile team, you work on a team that exhibits AGILITY. You don't use "agile tools", you use tools that enhance your AGILITY.

Since I myself have been trying to make people on my team act and work with more "agility", I try to keep myself tuned into ideas and people that "get it."  So when someone tries to help convert traditional IT project managers and technical managers into working with more "agility", it is worth taking note. Kamal Manglani's book is trying to do just that. He is trying to get people to work with more agility. Having worked with Kamal in the past, I can tell you that he does get it. Kamal starts shaking his head when standups are taking too long and people turn them into bitchfests or pseudo-spikes. He is known for being a bulldog who will seek people out and sit with them at their desk in person to solve a problem, and won't leave them alone until the problem gets attention or resolution.  He's the guy that frowns on large, bloated Word docs in favor of a wiki or a readme file in Git. His book at least doesn't misuse the term agile; it's called 'The Apprentice and the Project Manager'.  So, while the devs hopefully get it, at least someone is trying to get those pesky project managers thinking and acting with more AGILITY, not being "agile".  If this helps teams ship things faster and gets teams more productive by reducing the number of folks in their path, I'm cool with it.  Since we have to keep reminding ourselves how to do work with agility, perhaps being permanent apprentices is better than being a scrum "master" of anything.  Is anyone ever a master?


Thoughts on having an opinion on tech

regurgitate != knowledge 
explore+test = have_an_opinion 
requires_work = 1 
sed -i 's/no_time/make_time/g' /my/schedule
