Chef Vault with Large Teams

Chef-vault is a tool created by Nordstrom and adopted by Chef as the de facto way to handle secrets management on the Chef platform.  Chef-vault builds on the original Chef encrypted data bags concept¹.  Rather than using a single shared decryption key, chef-vault creates a separate copy of the data bag item for each node that is granted access, encrypted with the existing RSA public/private key pair that node already uses for Chef API authentication.  Noah Kantrowitz sums it up nicely: “This means you no longer have to worry about distributing the decryption keys, but it also moves chef-vault into the gray area between online and offline storage systems.  Once a server is granted access to a secret, it can continue to access it in an online manner. However granting or revoking new servers requires human interaction. This means chef-vault is incompatible with cloud-based auto-scaling or self-healing systems where nodes come/go frequently. It also inherits the same issues with lack of audit logging as all other data-bag driven approaches”.
Summary of Chef-Vault Functionality:
  • Uses encrypted data_bags, but adds a layer of management on top
  • Creates a separate copy of the encrypted data_bag item for each node granted access
    • No single shared decryption key
  • Uses the existing public/private key pair already used for Chef API authentication (the node public key)
  • Handles distribution of keys for you using Chef
  • Granting/revoking server access is a separate human task
    • Cloud-based systems that build and destroy nodes require that nodes be added to or removed from the vault each time node status changes
  • Requires using the chef-vault knife commands locally
Problem:
How to have teams of developers manage secrets in a common manner using chef-vault?
Of course, you could say that each team has designated individual(s) with knife access to the Chef servers, they use knife vault to get the secrets up to Chef, and call it a day.  However, this is not always as cut and dried as one might hope.  For example, in my company we chose years ago to tightly restrict knife write access to Chef servers to a CI/CD server via a pipeline.  In other words, there are SDLC and security process limits in place to reduce the risks of “wild west knife access”.  Another reason is that members of teams in larger orgs come and go, and you often need systems where things are well documented and stored in ways that others can pick up where someone else left off.
Due to the need for a consistent process, as well as limiting knife access to the pipeline, we needed a mechanism to get secrets into the vault without funneling them all through a central team every time a secret is added or removed.  Because Chef Vault uses data_bags, which are predominantly stored in version control, the seductive answer is to just throw them onto your private version control system, such as GitHub Enterprise.  However, the moment you do that, your secrets are no longer secret.  What was needed was a way to encrypt secrets in GitHub in a seamless manner such that a CI/CD pipeline server can ingest them for the express purpose of using knife vault to get them securely to Chef.
The design solution was to use a combination of technologies: git-crypt + GPG + GitHub + Jenkins + Chef Vault.  Let’s dig into the design details.
 
Solution Summary:
  • Create a team-based “GPG WoT” (web of trust)
  • Limit exposure of the public key to the folks who need it (dev team, Chef admins, CI server admin) in a localized web of trust.
  • Use git-crypt + the GPG key to seamlessly encrypt your desired secrets for storage in the data_bags/chef-vault git repo.
  • Allow a private CI/CD server instance (like Jenkins) to hold your team GPG key so it can grab your encrypted data_bags/chef-vault secrets and decrypt them.
  • Have the CI/CD server instance then run a job which takes the decrypted data_bags/chef-vault secrets and uses knife vault to secure them again on the Chef server.

MyOrg Web of Trust (WoT) Using GPG

In order to sanely place secrets like passwords, certificates, and keys in a DVCS like GitHub with only authorized people having access, we use GPG keys.  The design is such that the authorized “web of trust” user accounts, such as an Infrastructure CI/CD server LDAP user, Infrastructure Chef admins, or specific Security Engineering team members, get a copy of a given team’s GPG key.  An authorized user account can then grab the data and decrypt it using the GPG keys.  Depending on your web-of-trust preferences, you may desire to allow access to your GPG public key only to a smaller group.
Essentially, the design is such that each team has a designated email address or name as an identifier which is used with the GPG key.  This public key is then sent ONLY to those in your org web of trust (WoT) who are authorized to get it.  In this case we are not freely giving out team public GPG keys, but selectively choosing who can get them.  I realize that the beauty of GPG in the public internet use case is to make the public key widely available.  Here, we are using it selectively within a given org for purposes of limiting who has access to decrypt.  You are taking some steps to protect your public keys within a team for the purpose of getting shared secrets into a data_bag store (usually Git) securely.  The trick here is to keep the data_bag items encrypted in the version control system, such as a private GitHub repository like GitHub Enterprise, using GPG keys + git-crypt.
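For illustration only, here is a minimal sketch of the team key handling, assuming a hypothetical team identity of “TeamFoo <teamfoo@myorg.example>” (the identity, file names, and distribution path are placeholders; the export step is the same one used in the workflow below):
    gpg --gen-key                                    # interactive; use the shared team identity, e.g. "TeamFoo <teamfoo@myorg.example>"
    gpg --export -a "teamfoo@myorg.example" > teamfoo-public.key
    # hand teamfoo-public.key only to the members of your WoT; on the CI/CD server or an admin workstation:
    gpg --import teamfoo-public.key
    gpg --list-keys "teamfoo@myorg.example"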
The process of getting them into the Chef server as a knife vault encrypted data_bag item can then be handled by a secured job on a CI server, such as a private Jenkins instance.
The CI/CD server job must do two things (a rough sketch follows this list):
  1. Grab the GPG-encrypted data_bag items from version control (it has each team’s GPG key in its keyring).
  2. Decrypt the GPG-encrypted data_bag items and then use knife vault to upload and encrypt them using the normal Chef Vault process.
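As a rough sketch only (not a drop-in job definition), a shell step on the CI/CD server might look something like the following; the repository URL, bag directory, admin list, and role search string are placeholders, and it assumes the server’s knife configuration and GPG keyring are already in place:
# Hypothetical CI/CD job step: pull the repo, unlock with git-crypt, push items into Chef Vault
git clone git@github.example.com:myorg/chefRepo.git
cd chefRepo && git-crypt unlock                 # relies on the team GPG key in this server's keyring
for item in data_bags/chef-vault/myApproleProd/*.key; do
  bag=$(basename "$(dirname "$item")")          # e.g. myApproleProd (must match the role name)
  name=$(basename "$item" .key)                 # e.g. myApp-secrets
  knife vault create "$bag" "$name" -J "$item" \
    -A "adminuser1,adminuser2" -S "role:${bag}" ||
  knife vault update "$bag" "$name" -J "$item" -S "role:${bag}"
done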
This job workflow allows a distributed team to centrally store secrets in a way that a central job can grab them, decrypt them, and then re-encrypt them using native Chef Vault processes.  Of course, when people leave the team, ideally you generate a new team GPG key, re-encrypt your team secrets with it, and make sure the CI/CD server job gets the new key.  This design also assumes you own a private DVCS system like GitHub Enterprise or a private GitHub account.  I don’t recommend implementing this design for any open public internet projects (on a non-private GitHub or Bitbucket, for example), where mistakes in your workflow and WoT could potentially expose your secrets.  This is a weakness of the design in terms of security and management, but a fair trade-off compared with the alternative of non-standard secrets management among large teams before secrets get into Chef Vault.

Pre-requisites:

  • CI Server & Admins:
    • Working Chef 11/12 workstation environment (chef, knife, etc.)
      • Chef DK
      • This includes you being a valid Chef user
      • Knife access is best reserved for very few people.  Let a CI/CD pipeline do knife tasks.
    • Install the chef-vault gem on your workstation (or just use ChefDK)
      • Recommended to use RVM or rbenv to avoid installing gems in your system Ruby, or just use ChefDK
  • Org Team members & CI Server & Admins: see the Workflow and Dev setup below.

Workflow and Dev setup

  • Install git-crypt on your local machine in order to encrypt your data bags.
  • Generate and share your team GPG pub key with the central Chef Admins for purposes of getting it to the private CI/CD pipeline server(s).
    • Export your pub key using:
      gpg --export -a "User Name" > public.key
    • This will create a file called public.key with the ASCII-armored representation of the public key for User Name (using an LDAP userid or email address as the identifier is useful in large orgs)
  • Configure git-crypt for your repo
    • Fork a copy of the chef-repo/myChef repo to your personal GitHub account (this is the repo where you store data_bags).
    • Clone your forked copy of the myChef repo to your local workstation.
    • Navigate to the data_bags directory and then to the chef-vault data_bag directory, then create a directory for your app tied to the role name
cd /chefRepo/data_bags/chef-vault/
mkdir myApproleProd
  • Create a .gitattributes file to tell git-crypt what files to encrypt in a repo directory
cat > .gitattributes << EOL
secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
EOL
  • Initialize the repo directory for git-crypt
    git-crypt init
  • Add your GPG key to git-crypt (each USER_ID you add must have its private key available wherever the repo will be unlocked, e.g. on the CI/CD server); a quick verification sketch follows these steps
    git-crypt add-gpg-user USER_ID
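Once a secret file has been added under a pattern covered by .gitattributes, this minimal check (the path shown is just the example used elsewhere in this post) confirms that git-crypt will actually encrypt it:
    cd /chefRepo/data_bags/chef-vault
    git-crypt status                                            # lists files and whether each will be encrypted
    git check-attr filter -- myApproleProd/fooapp-secrets.key   # should report: filter: git-crypt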
Workflow Summary: Common Chef Repo Data_bags or Chef 12 Organizations
  • After the Chef Admin team adds your team public key, follow the steps below to add a new secret to the chef-vault store within the data_bags root directory in Git.
  1. cd /chefRepo/data_bags/chef-vault
  2. git pull origin master
  3. git-crypt unlock
  4. Add your new data bag item (make sure the file name ends with .key and the item id matches the file name; steps listed below)
    • A vault data bag item needs to be checked in under a specific directory structure standard (see below for details).
  5. git commit -am "adding encrypted chef-vault item"
  6. git push
  7. Submit pull request to accept additions to the repo
  8. Wait for the Chef CI/CD pipeline server job to:
    • Ingest the new items via the CI server job
    • Decrypt using your team GPG key
    • Use knife vault to re-encrypt them with chef-vault on a given Chef server.
Chef-Vault Directory Design Standards
The data_bags directory contains data_bags, which in turn contain data_bag items.  These can live in a sane root directory structure so that we can organize things in a GitHub repo in a way that is visually appealing.  Once the data_bags and their items are ingested by Chef into Solr, the root directory structure from GitHub is no longer present.  Instead there are just data_bags (keys) and data_bag items (values).
See details on how Chef views the data_bag directory structure.  Per the Chef docs, “A data bag is a container of related data bag items, where each individual data bag item is a JSON file.  knife can load a data bag item by specifying the name of the data bag to which the item belongs and then the filename of the data bag item.”
Essentially, the form is:
data_bag directory
     data_bag_item json file
You can choose to name the JSON data_bag_item in Chef Vault with a .key extension, but the content should have an “id” field plus your key/value pairs.
{
 "id": "my_secret_item_name",
 "key": "value"
}
An example directory structure is below:
The bag_name should be updated based on your application role.
myChefrepo
├── data_bags
│   ├── chef-vault
│   │   └── bag_name_matching_role
│   │      └── fooapp-secrets.key
│   │      └── barapp-secrets.key
  • We (the CI/CD job) expect the bag_name to be added under the chef-vault directory
  • We (the CI/CD job) also expect the bag_name to match the role name.
    • The vault data_bag would be created and made available only to nodes with the corresponding Chef role.
      • e.g.: If you expect a vault item to be applied to nodes with a Chef role of myApproleProd
      • The structure would look like this:
├── data_bags
│   ├── chef-vault
│   │   └── myApproleProd
│   │      └── myApp-secrets.key
  • For the item to be encrypted, the data_bag_item file should end with the extension “.key” and the contents should be in JSON format.
    • The .key extension is a design preference, but the JSON format is a Chef data_bag requirement (see here).

Design for sharing encrypted Items in a DVCS for Chef using chef-vault

[Diagram: chef-vault automation with GitHub]


REFERENCE:

Here are some common use cases of knife vault that the CI/CD server job logic can execute.
 
Chef-vault commands used by the CI Server automation (Examples)
Create Vault Item
knife vault create fooapp fooapp-secrets \
  -J data_bags/chef-vault/apps/fooapp/fooapp-secrets.json \
  -A "adminuser1,adminuser2" -S "role:fooapp-server"
Show Vault Item
knife vault show fooapp fooapp-secrets -Fjson 
Delete Vault Item
knife vault delete fooapp fooapp-secrets
Delete old node
knife vault update fooapp fooapp-secrets \
  -S "role:fooapp-server"
Update list of Admins
First see who has access:
knife search fooapp 'id:fooapp-secrets' -a clients
Next change the membership:
knife vault update fooapp fooapp-secrets \
  -J data_bags/chef-vault/apps/fooapp/fooapp-secrets.json \
  -A "adminuser3,adminuser2" -S "role:fooapp-server"

Update/Rotate/Refresh Keys/secrets
Rotate keys for a single vault item:

knife vault rotate fooapp fooapp-secrets

Rotate all keys for all vault items:

knife vault rotate all keys

References:
  1. https://coderanger.net/chef-secrets/
  2. http://engineering.ooyala.com/blog/keeping-secrets-chef
  3. https://github.com/Nordstrom/chef-vault/blob/master/KNIFE_EXAMPLES.md
  4. http://jtimberman.housepub.org/blog/2013/09/10/managing-secrets-with-chef-vault/

Containers Don’t Really Boot

Docker has been a great advancement for the mass consumption of linux-based containers.  The maturation of the virtual machine boom that has been happening since the early 2000s led to mass acceptance and deployment in public and private clouds.  To be sure, asking for bare metal today can be seen as a faux pas without some well-defined use case (like super high IO).  So, now that folks have accepted that slices of CPU, memory, and disk delivered through well-known hypervisors (kvm, esxi, xen) are good enough for most workloads, taking the next step to containers is not that big of a leap.  Except that now it is more common to run containers on VMs than on bare metal.  So now we get a slice of a slice of a slice of resources!
Virtual machines are just what their name implies:  full machines that are virtualized.  This means they have virtual hardware that virtually boots an OS kernel and mounts a filesystem.  The OS doesn’t really know that the hardware it is running on is not real.  Containers on the other hand are not virtual machines.  Containers are fully sandboxed processes using the host machine OS Kernel.  So, when running on a VM, they are slices of VM vCPU, memory, and disk for fully sandboxed processes.  This is the part that had me perplexed for a while until I ventured to understand exactly what happens when an lxc container starts versus a virtual machine.

Boot or Start?

Let’s compare boots of CentOS Linux on virtual machines versus containers:
Virtual Machine/Bare Metal:
  • Power on, the system BIOS loads and looks for sector 0, cylinder 0 of the boot drive (Typically /dev/hda, or /dev/sda)
  • The boot drive contains the MBR which then uses a boot loader such as GRUB (typically in /boot) which locates the kernel and loads it (based on GRUB config)
  • The kernel (vmlinuz) then uncompresses itself into memory
  • The kernel loads the temporary RO root filesystem via initramfs (configured in GRUB)
  • The Kernel locates and launches the /init program from within initramfs (/sbin/init)
  • Init determines run level via /etc/inittab and executes startup scripts
  • Per the fstab entry, the root filesystem completes an integrity check and is then re-mounted RW.
  • You get a shell via /bin/sh
Container:
  • Docker tells LXC (now libcontainer) to start a new container using the config in your Dockerfile
    sudo docker run -i -t centos7 /bin/bash 
    • Runs lxc-create or libcontainer equivalent with params (example)
      lxc-create -t mytemplate -f lxc.conf -n mycontainer
  • Docker on rhel/centos/fedora systems uses device mapper, which uses a sparse file for holding container images here:
    /var/lib/docker/devicemapper/devicemapper/data
  • Docker starts the base image (directory structure) as read only, and creates the new RW (CoW) layer on top of it.
  • Docker gives you a shell via /bin/sh (if you asked for it in the docker run command or a Dockerfile config)
  • The Docker-configured union filesystem (AUFS, devicemapper, overlayfs) is used for the mounted root filesystem

“It is perhaps more precise to say that a linux environment is started rather than booted with containers.”

The entire linux “boot” process that a normal virtual machine goes through is essentially skipped; only the last steps, where the root filesystem is loaded and a shell is launched, happen.  It is perhaps more precise to say that a linux environment is “started rather than booted”.  I was also further confused by the Docker terminology, which uses the word “image” to describe something different from cloud images.  When I hear “image” I think of AMI-style full virtual machine images as used in clouds.  These images are different from the container images used by Docker.  Docker uses the term “image” to describe what is really a minimal layered root filesystem (union mounted).  This is all a bit confusing at first until you remember that everything in Linux is a file.  If you dig around and take a look at some of the utilities used to create these “base images”, such as febootstrap/supermin or debootstrap, you will see that they are creating clean, consistent directories and files for the linux distro, output in various formats such as .img or .tar.  So, the Docker “images” are really nothing more than directories and files pre-populated with the minimum viable set of linux components and utilities you need for a functional linux system.
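You can see this for yourself with a quick sketch (any small base image will do; centos:7 here is just an example): export a container’s filesystem and list it, and it is nothing more than an ordinary directory tree.
docker pull centos:7
cid=$(docker create centos:7 /bin/true)     # create (but don't start) a container from the image
docker export "$cid" | tar -tvf - | head    # the "image" is just /etc, /usr, /var, ... in a tarball
docker rm "$cid"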

“This is all a bit confusing at first until you remember that everything in Linux is a file.”

When Docker LXC/libcontainer-based containers “boot”, they are really just starting a kind of super “chroot” of processes with totally sandboxed storage and networking.  They don’t need to do a full boot since they are just leveraging the OS kernel of the host system.  All they seem to need is the minimum viable linux system directory structure, tools, and utilities.  Additionally, because Docker caches content, things run even faster since there is less to download.  These are the reasons why containers “boot”, or more precisely “start”, incredibly fast.  Because you don’t have to go through a fully virtualized system boot process like a virtual machine or bare metal machine, you get productive “process-wise” rapidly in your own super sandboxed linux environment.
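A simple way to convince yourself of the shared-kernel point (again just a sketch, with centos:7 as an example image): the kernel version reported inside the container is the host’s, and a container “start” completes in a fraction of the time a VM boot takes.
uname -r                                  # kernel version on the host
docker run --rm centos:7 uname -r         # same kernel version, reported from inside the container
time docker run --rm centos:7 /bin/true   # container "boot" + exit, typically well under a second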

Union File Systems and the Neo Image Zeitgeist

One cool thing Docker introduced is the use of union mount layered file systems to control the craziness of working with images.  When I say image “craziness” I might need to clarify with a refresher for those youngsters who didn’t go through the whole “anti-image” zeitgeist of the past 5 years.  Let’s rewind to the early 2000’s when people discovered the ability to create sector by sector disk copies and saved all the time of installing apps over and over again (Ghost anyone?).  Everyone was in love with images then and naturally started using them in VMs when vmware was new and hot.  It was only after years of dealing with homegrown image versions and patching problems that folks started becoming more and more anti-image.  To be sure, many people made many horrible images (especially for Windows machines) that didn’t properly get sanitized (with sysprep or similar tools) before use as a VM template which only served to exacerbate things.

Fast forward to 2005-ish, when CM tools like Puppet, and later Chef in 2008, were formed in the “anti-image” rebellion.  What people wanted in these modern CM tools was the ability to repeatedly build pristine machines literally from bootstrap.  What this meant to many was no images ever: PXE the world and then Chef it.  As the public cloud took off, so did people’s need to build servers at an increasingly rapid pace.  PXE bootstrapping everything was just too slow and often not possible in multi-tenant cloud environments like AWS.  The compromise answer was to create super minimal “base images” (also called JEOS or stem cell images) which were super pristine and small.  These base images for full virtual machines booted much faster, and the fact that they had very little on them didn’t matter anymore since we could reliably and cleanly install everything declaratively in code using Puppet or Chef.

Fast forward to today, and folks often find that full VMs booted and installed with cookbooks are again not fast enough for them.  Also, the complexity of using some CM tools meant that it was a fair amount of work to stand up your Puppet or Chef environments before they paid off in speed and as a single source of truth.  Enter containers.  However, just getting a container only gives you a pristine base image if you start out with one.  Beyond any pristine base container image, any customizations you might need (like installing your app software components) would take you back to the days of image sprawl unless you used modern CM like Puppet or Chef to provision on top of your container base images.  Docker decided to fix this old problem with a new twist by using a union mount for layered, or copy-on-write, filesystems.  What they did was take the concept of the pristine base image (which we’ve learned is nothing more than minimum viable linux distro directories and files with components and utilities) and allow you to leave it in a pristine shape.  They then allow you to layer the components you need on top of each other, leaving each layer as a read-only thin delta of changes.  The automation (infra as code) offered by Docker is via the Dockerfile, where the machine DNA is located.  What is still yet to be determined is whether the Dockerfile is enough to get your container into the end state you desire.  For example, will layered “super thin delta images” be a replacement for modern CM tools?  Or, more likely, will CM tools like Chef be used to build each thin layer delta image (TLDI)?
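To make the layering concrete, here is a small sketch (image and file names are arbitrary): docker history shows the read-only layers behind an image, and docker diff shows the thin copy-on-write delta a container adds on top.
docker history centos:7                     # the stack of read-only layers that make up the image
cid=$(docker run -d centos:7 /bin/sh -c 'touch /root/hello && sleep 300')
docker diff "$cid"                          # only the delta is shown, e.g. "A /root/hello"
docker rm -f "$cid"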


Thoughts on having an opinion on tech

regurgitate != knowledge 
explore+test = have_an_opinion 
requires_work = 1 
sed -i 's/no_time/make_time/g' /my/schedule

What the Heck is an Architect?

I was never one of those little kids who told mommy and daddy what they wanted to be when they grew up.  I didn’t even start officially working in the “IT” industry until 1996.  I just kind of fell into the career because I enjoyed fixing other people’s computers all the time.  The closest I ever came to wanting a particular job as a teenager was being an architect.  You know, the traditional kind, like the dad in “The Brady Bunch”?  In my mind an architect was always an engineer with a big desk, rulers, calculators, and knowledge of complex math, physics, and materials science.  They were also visionary creative types who could come up with amazing designs like the Chrysler Building, the Transamerica Pyramid, or Fallingwater.  In my mind these architects could do designs in an office, but also go out in the field and do some of the engineering work themselves.

The Architect: Mr. Brady

Fallingwater, by Frank Lloyd Wright

Fast forward to today when the job title of architect can take on a variety of constructs:  software, infrastructure, political, health care, network, security, traffic, etc.  The job title has morphed into any type of senior design position in some fields.  I myself am part of this trend with the job title “Infrastructure Architect” currently.  However, due to my admiration for the “traditional” architect job, I feel an obligation to at least try and transfer my ideas of what skills translate from the traditional architect into the modern IT infrastructure engineering space.  This has been the cause of some stress for me over the past three years.  I will freely admit that today I worry about what it takes to be proficient in this modern “infrastructure as code” world.  After having a few beers, I sat on my porch and tried to jot down some notes on what troubled me personally and what the causes might be.  I think I have narrowed them down:  personal expectations versus market expectations, time management, and ambition.

Personal Expectations vs Market Expectations

I would be lying to you if I did not admit that I struggle every day personally to try and meet what my own expectations are for an architect in the infrastructure space.  Infrastructure has several “silos/towers” that represent some traditional capability areas.  These are things like servers and virtualization, storage, networking, security, databases, service buses, application hosting, as well as monitoring and performance metrics.  Having deep knowledge in more than one of these towers is what I would say an architect should have.  Due to the amount of knowledge needed to know multiple silos this can take years of work.  In my case this journey has spanned over 17 years since 1997.  There is no formal curriculum so to speak for “architect” so it kind of materializes via technical jobs you take and a little serendipity.
My architect journey kind of materialized like so…

  • Hardware phase (servers/PCs/peripherals)
  • DOS phase (batch programming, emm386 fun days)
  • AS400 and VAX/VMS phase
  • Token ring and ethernet (10base2-10000) phase (jiggle it just a little bit)
  • Novell phase (3 to NDS)
  • Windows phase (over a decade of windows troubleshooting madness)
  • The whole certification industry treadmill phase (mcse/cne/ccnp/blah thank goodness this is dead)
  • The DBA phase (SQL server and Oracle)
  • Network engineering phase (ethernet to satellites to sonet rings to mpls and vpn)
  • Unix/linux phase (Solaris, Slackware, mandrake and RHEL)
  • The Perl/CPAN/Sourceforge phase
  • The php/html/css everything phase
  • The slashdot addiction phase
  • The AWS and Rackspace IaaS phase (ongoing)
  • The OSS and Github revolution phase (ongoing)
  • The ruby and cm transformation (ongoing)
  • The Hacker News addiction phase (ongoing)

The presumed belief is that the more stuff you learn the more valuable you become through a holistic experience lens.  This is likely true in that you learn over time how multi-level dependencies and other design issues can affect resiliency, performance, and scale.  Eventually the need for this kind of holistic knowledge was enough that the modern job titles of “architect” in the IT space arose.  In some ways, the architect position is seen as a more senior role (even though many organizations don’t have an HR job code with architect in it.  To be sure, I’m ambivalent on whether the job title even needs to be official).  The flip side is that going deep in n+3 or more capability silos becomes increasingly difficult to do well given the other responsibilities that often fall on the shoulders of an “architect”.  The funny thing is that these “other responsibilities” often begin to take upwards of 70% or more of an architect-type person’s time.  There is a constant balancing act going on where one foot is on the hands-on (development/testing/research) side of the fence, and the other foot is on the CCE (collaboration, communication, and evangelism) or leadership side of the fence.  The latter half inevitably results in meeting hell.

JB Padgett’s success formula:

DO > TALK

Why do these other things increasingly take over the time of an architect type person?  I have thought a lot about this, and I believe it likely boils down in part to the holistic experience thing (others may only have a single-silo view of the world and hence need help understanding why something could be better another way) and partly because the architect (even though the role may not even be official on paper) is saddled with the overall responsibility that things work cohesively within a given capability area.  When someone has a job responsibility to “make things work” together, it implies that they ideally understand and have proficiency in the technologies involved.  This gets increasingly difficult the more technology areas you are expected to know.  I will be the first to admit that this knowledge responsibility only works when a federated model of architectural ownership is in place.  By this I mean that having multiple architect roles within infrastructure is important, and many more within an enterprise is absolutely a necessity.  The old mindset of someone having the job title of “Enterprise Architect” is a farce.  It is a feat of impossibility if you expect to have actual architecture efficacy in an IT organization.  The archaic “ivory tower” approach to managing architecture in a top-down model is just not feasible.  To be sure, I don’t think it ever was actually effective; and so in my mind the “Enterprise Architect” is dead and has been for a while.  The point I want to make here is that there is just too much to know to be truly effective outside the limited scope of a major capability area.  The ability to be hands on and also do the requisite CCE is, I feel, best exercised within a more manageably sized capability area.

“The ‘Enterprise Architect’ is Dead”

Pretty much since 2007 the infrastructure as a service revolution inspired by AWS has been a massive snowball of change.  This change has been, and continues to be, awesome.  I have wholeheartedly embraced it and have been taking part in the transition for years now.  The changes have in part re-invigorated the popularity of the open source movement, and with it have come unparalleled opportunities for startups.  It has inspired the creation and success of what has become a literal tidal wave of open-source projects on GitHub.  All one has to do is look at Hacker News on any given day and see that some new project was announced that “fixes this” or “closes this gap in functionality”.  One could argue that a business could start and run without much if any proprietary commercial products today (short of the client devices).  It is an amazing time that we are living in, one that maybe 30 years from now will be mentioned in history books as the tipping point for mainstream open-source adoption and innovation.  All of this is super cool, especially if you are a small to medium sized startup and can get everything you need infrastructure-wise from something like AWS or Rackspace or DigitalOcean using an API.  Even better, modern configuration management and containerization sit on top of these IaaS platforms and allow you to install packages and manage build-outs rapidly using “infrastructure code” as well.  If you are a software developer and need to be productive in the infrastructure space (even if you don’t know it well) you can do so effectively with just the knowledge of a given API.

Marc Andreessen’s famous callout in 2011 that “software is eating the world” has never been more true than in the IT infrastructure space.  I have felt it every day for the past 5 years now.  The challenge, specifically for me, is staying productive, efficient, and helpful in a space that is increasingly more software.  Software development, to be precise.  Let’s be honest, even with the cloud IaaS revolution, most enterprises still run the bulk of their infrastructure in house.  Part of this is dealing with legacy apps and the baggage they bring, part of it is a pure cost play where the bandwidth and storage used by an enterprise can still cost less in house than using AWS or Rackspace, and finally, part of it is security and performance concerns.  So with many enterprises trying to take part in this “infrastructure as code” revolution while still maintaining significant internal hardware, the need for someone who knows both traditional infrastructure management and new code-based approaches is critical.  To be sure, a software-based management abstraction layer (API) increasingly means that the work an infrastructure architect does is more dev and less ops.  Since I want to be an architect that does both hands-on coding and traditional architecture work (to remain relevant), I need time to focus on development.  Getting the time to do the work and focus is a real challenge.  But dude, the thing is I have to do this to enable infrastructure to provide any value for the app dev teams in the enterprise.  If I can’t help make internal infrastructure as easy to consume and as programmable as Amazon AWS, then the devs increasingly see me (vis-à-vis infrastructure) as a blocker, and I don’t blame them.  At least for now, because of the need to support legacy and all the painfully inflexible COTS packages (hello Oracle), Infrastructure ends up also having to play a type of luxury one-on-one concierge service for app teams.  I don’t know how long this “concierge” style need will last, but I’m gonna be pushing as much self-service infrastructure via API as fast as I can make it happen.  Things like OpenStack and SDN are making this possible for me, but don’t address everything needed.

Pressure on Myself  in the Valley

If you buy into the software will eat the world argument (and I feel like I do) then the question becomes:  How can I be the best at my job as an infrastructure architect?
More precisely:

  • What is valuable work?
  • What are valuable skills?

If the answers are only writing code, then there are lots of questions that keep me stressed.  First off, is this thing endemic to Silicon Valley only, or is it more widespread?  I feel like the mass transition is already well underway.  Right now the change is already prevalent in Silicon Valley.  There may be a three to five year delay before it hits middle America, but it’s probably coming.  But how will these changes (for all their merits) scale?  Are all the old sysadmins who can’t code pretty much SOL?  I think perhaps so.  I said the traditional sysadmin was dead in a paper published last year, but I wonder if middle America can keep up.  I mean, this stuff is not exactly easy.  Is this an example of the robotization of everything?  If everyone is writing code, then who is spending time communicating and collaborating (remember, this takes a buttload of time), and who maintains the code down the road?  What about all these small IT consulting companies that handle outsourced IT for the space between mom-and-pop businesses and the enterprise?  Is the “Geek Squad” of the near future going to need staff that are polyglots understanding CI pipelines and Git?  Writing code takes a singular level of focus and time that is already a challenge for even seasoned developers.  The need for the people (often architects) that sit at the event horizon of software development and collaborative design and communications won’t simply go away.  The difference is that now I think they need to be doing both.  Architects, as I have mentioned, often lack time due to the immense gravitational pull of the CCE (collaboration, communications, and evangelism) that needs to be done with all the stakeholders/customers of infrastructure.  If I were to surmise that my ideal job time split between “hands on keyboard” coding time and CCE were 50/50, then I’m gonna have to change the way I manage my time.  The balancing act of hands-on work and meetings is just way fracking hard.  I may have to start being more selfish with my calendar and time.  I’m just not sure how best to solve this yet.

“The balancing act of hands on work and meetings is just way fracking hard.”  


Another issue I struggle with is the perception that the ability to do both hands-on development and CCE or leadership well is commonplace in the valley and elsewhere.  This is buttressed by examples of exceptional people like Elon Musk, Sergey Brin, Larry Page, Mark Zuckerberg, and Max Levchin as bellwethers for how to work.  I feel like these guys and others like them are truly exceptional people.  They garner media attention because they are success stories, but the translation in the valley is perhaps that their skills are as obtainable as riding a bike.  I feel like this is just not the case.  I don’t necessarily want to lend too much credence to the Malcolm Gladwell “10k hours” argument, but this shit is just not easy.  The competition here is high, especially if you truly want to make a difference and want to think of yourself as good at your job.  Perhaps I am just too hard on myself, but here lately I have gotten to where my blood pressure rises, I get hot, and I feel overwhelmed when I read Hacker News.  Sometimes I feel like all the article titles are flying by my eyes like data in the Matrix, making me feel more inadequate by the minute (the Oracle said I was not “the one”).  Is this the bane of type A personalities who want to be the best and are competitive?  I rarely think of myself as an “expert” at anything, even if I am kicking ass at something.  Sometimes this personality trait of mine helps me and drives me to be better, while other times it exhausts me and gets me down.  Am I a little out of balance with incessantly wanting to be a better programmer, a great speaker and leader, a great skateboarder, up to date on all the latest geopolitics, and able to write eloquent comments on HN?  Maybe.  I might need to wean myself off HN for a little bit for my own mental health.  I think when you start to get jealous of the gas station cashier for having a simple and less stressful job, it might be time to level set.  Then again, I just want to do a good job in this rapidly changing infrastructure-as-code landscape.  So what is a modern infrastructure architect?  Is it a software developer, a shoe salesman, or a crisis negotiator?

Wait, sorry I have another meeting to attend.  Let me know if you approve and merge what I perceive to be a market pull request.