Microsoft’s Most Valuable Asset

Windows 8 was just released, and so was the Surface tablet.  While this is a milestone for the folks in Redmond, I couldn’t care less.  In fact, many engineers and programmers I have spoken to feel the same.  In many cases, people I work with are JUST NOW upgrading their virtual machines to Windows 7.  Notice I said virtual machines.  Many people I work with don’t use Windows as their host OS at all; they just run it as a guest.  More and more people have tried Mac or Linux as the host.  But that is a side topic.  Let me get back to the main point.

Microsoft is under assault from the marketplace.  Fewer new Windows PCs have been appearing since the Apple juggernaut started gaining momentum and people started getting work done on operating systems other than Windows.  From a capitalism perspective, the shift in power is probably healthy.  Back in 2001, Microsoft had such a dominant position in the computing sphere that it was under constant anti-trust investigation and derision from the blogosphere.  Today, the landscape is changing dramatically with tablets and cloud computing.  As people were exposed to more apps that did not come from a Windows platform, they became more comfortable with alternative computing tools.  This change in mindset has predominantly happened in the home market, but has gradually trickled into the corporate market as well.

While I could talk about the operating system and tablet market ad nauseam, that was not the topic on top of my mind.  I have already moved beyond operating systems, compute hardware, and apps when analyzing the impact Microsoft makes on my life.  What I want to point out is what I think is perhaps the single most valuable thing Microsoft owns and sells today.  Nope, it is not Exchange Server (which is actually a decent, full-featured mail server platform).  Nope, it is not Office.  The single most valuable product that Microsoft owns is Active Directory.  Yes, I said it: Active Directory.

Why?  Because after the failure of Novell, the only relevant, full-featured alternative for the corporate marketplace is AD.  When Novell was king, NDS was everywhere.  In fact, I spent much of my career connecting devices to an NDS X.500 directory.  But Novell pissed so many people off with that nasty Novell client that they started to lose market share to the then-burgeoning Windows NT.  Eventually, Microsoft had to compete and released Active Directory.  While AD has never been as fully featured as NDS was, it was good enough.  Good enough that most IT departments decided they did not want the pain of managing two directory logins and trees.  So, they ditched NDS and embraced AD.

In fact, one of the main reasons corporate IT today still predominantly runs Windows clients and some installed base of Windows servers is that they like and need Active Directory to maintain thousands of users and groups.  To date, I have not been satisfied that there is a credible alternative to AD for user account management.  I have tried and used Sun LDAP, which is now Oracle LDAP.  I am not really impressed with that product, and Oracle’s stewardship of the great Sun open source initiatives is lackluster at best.  The fact is, Microsoft Active Directory is the glue that holds many IT departments together.

I make this point to hopefully draw Microsoft’s attention to an opportunity to keep hold of this market and improve its standing in the community, or to excite some eager startup.  Microsoft needs to make it super easy to connect Linux and Unix machines to AD with FULL features.  Windows Services for UNIX is not enough.  Further, I think Microsoft should license AD to work on Linux servers with full features, and it should do this with all the gusto of a market it sees as strategic.  Microsoft is wasting incredible amounts of money on IE advertising, Bing, and other online properties, and could have fully funded all the development costs of this work instead.  So, Microsoft: wake up and secure this market before someone builds a better, open-source AD and there is one less reason to keep you in the data center.

The End of Hostname Standards? Hostnames in the Dynamic Infrastructure World

The fastest way to start a fight in IT is to get engineers together with the goal of creating a hostname standard.  If there is one thing that raises blood pressure to new heights, it is the topic of hostname standards.  This was the case at Gap, and boy did it go on for months.  At one point, people who were friends actually stopped talking to each other because the arguments got so heated.  In retrospect, the entire exercise was one in futility.

Individual server hostnames are increasingly becoming irrelevant.  Historically, hostnames gave IT people a sense of control over the chaos of IT operations.  However, this “control” is just one of many legacy thought patterns that equate to an old way of doing business.  In truth, the hostname should be viewed as a disposable variable.  How, you might ask?  Because the servers themselves should be viewed as disposable.  In fact, with modern configuration management software like Opscode Chef or Puppet Labs’ Puppet, and dynamic cloud computing host instantiation, hostnames truly do become irrelevant.  What is still relevant is the virtual IP, or “VIP” in networking parlance.  The VIP is a (usually) friendly DNS name that end users use to locate an application or service behind a load balancer.  The servers themselves are the dynamically named piece of this design discussion.

Think about it: if a server VM can be spawned, destroyed, and rebuilt in minutes, can you really rely on a static hostname?  The answer is probably no.  Increasingly, the server instance name needs to be as dynamic and flexible as the infrastructure.  Finding a server usually happens when something is wrong, and with static hostnames you would have had to either consult a monitoring system that told you the name because something was wrong, have someone come to you and say “I need help with the yoda123 server”, or think for a minute and try to derive it from a hostname standard.  With modern configuration management, if you need to locate any server, or more likely a group of servers, you can do so easily and programmatically.  You don’t really need a hostname standard.  Let’s examine some examples of why.

With OpsCode Chef, you can search your entire infrastructure for any combination of server roles, attributes, or functions.  It is super easy and straightforward.  Here is an example of how to search for all servers you have that are web servers:

knife search node "role:webserver"

You can also do boolean searches for multiple roles.  Here we search for all nodes with apache and memcached roles:

knife search node "role:apache AND role:memcached"

You can also do some fun things like search all your nodes for things like a variation of a hostname!

knife search node "id:yodaweb*"

Finding hosts by role or function like this is as easy as or easier than relying on a static, traditional hostname standard.  It is also dynamic.  When a host is terminated, the CM clients are told to remove it from the node list, so the next time you do a search, that server will no longer appear in the result set.  This model means the server hostname can be something like aabbccddeeffgg1234, and it becomes easier to find machines using intelligent search than by trying to map aabbcc to some kind of master hostname schema, where aabb means it is a web server.  Plus, it is always accurate as of the way the server was built.  Meaning, you don’t have situations where Mr. Joe DevOps built a memcached server but did not name it yodamemd, naming it r2d2mem instead.  In that scenario you would not be able to locate the server by name, but with a modern CM system you could still find it.

Still, after many arguments, folks insist there are use cases for hostname standards.  I think that argument can have some relevance when you break down the type of machine and the specific use case in question.  For example, hypervisor hosts often don’t need an end-user-friendly hostname because end users never access those servers directly.  However, a VIP for a well-known app probably should be friendly and iterative.  Perhaps the case can be made that for servers end users access directly (with no VIP), it is convenient to have “easy” names to remember.  However, this is most likely a small set of use cases.  Anything that needs to scale most likely won’t benefit from a predictable hostname standard.

I did a quick breakdown of how hostnames can be viewed by use case:


Dynamic host names that do not adhere to any hostname standard.

Ex) This GUID-like name can be formed from well-known unique sources such as the host MAC address.  Here is a quick and dirty example, using the MAC address of NIC1 (e.g. 04:0c:ce:d0:7a:b6).

The following steps are performed to arrive at the dynamic hypervisor host name:
1) Query the system for the MAC address of NIC1; this is the RAW name.
2) Remove any dashes (-), underscores (_), colons (:), or dots (.) from the RAW name.
3) Prefix the CLEANED MAC name, in lowercase, with anything you choose, to ensure the name starts with an alphabetical character.
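The steps above can be sketched in a few lines of shell.  The “hv” prefix and the eth0 interface path are arbitrary assumptions for illustration:

```shell
# Step 1: get the RAW MAC of NIC1 (on Linux, e.g.: cat /sys/class/net/eth0/address)
raw="04:0c:ce:d0:7a:b6"

# Step 2: strip colons, dots, underscores, and dashes from the RAW name
clean=$(printf '%s' "$raw" | tr -d ':._-')

# Step 3: lowercase and prefix with a letter string of your choosing ("hv" here)
name="hv$(printf '%s' "$clean" | tr 'A-F' 'a-f')"

echo "$name"   # hv040cced07ab6
```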


See my previous post on HAV versus Cloud for what HAV means.  Persistent hosts that end users may access directly without VIPs can benefit from a sane hostname standard.

Here is an example of a hostname standard that might work for some common infrastructure use cases.

  • Hostnames are accessed using a friendly 13-character CNAME.
  • The full hostname standard allots 15 characters.
    • The 14th and 15th characters are used only to identify mgmt and ethernet interfaces.
    • A CNAME should automatically be created for any of the e0 and e7 interfaces, pointing to the hostname ending at the Server Instance ID.

Env ID (1)      Location ID (3)   App ID (3)   App Function (3)   Instance ID (3)   Mgmt ID (2)

  • The 15 character format is the full name.
  • Most physical hosts will have at least 3 interfaces:  2 ethernet interfaces and 1 mgmt interface.
  • Most virtual guest hosts will have at least 1 or 2 virtual ethernet interfaces.
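As a sketch, assembling a name under this standard is just string concatenation.  All the field values below are hypothetical examples, not part of any real standard:

```shell
# Hypothetical field values following the layout above:
env="p"      # Env ID (1 char), e.g. production
loc="sfo"    # Location ID (3 chars)
app="yod"    # App ID (3 chars)
fn="web"     # App Function (3 chars)
inst="001"   # Server Instance ID (3 chars)
mgmt="m0"    # Mgmt/interface ID (2 chars), only the 14th-15th characters

cname="${env}${loc}${app}${fn}${inst}"   # 13-character friendly CNAME
full="${cname}${mgmt}"                   # full 15-character name

echo "$cname" "$full"   # psfoyodweb001 psfoyodweb001m0
```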

Note:  You can go higher than 15 characters for the name, but Windows machines will not like them and unattended installs will in fact fail.  Yes, this still happens in 2012.  Stupid, but there it is.

While I did this work, I compiled what I think is the relevant standards information to guide hostname decisions.  I have added my notes below for your reading pleasure.

DNS Naming Rules:


  • A name cannot have more than 127 levels of labels.
  • Each label has a maximum of 63 characters from the allowed ASCII subset.
  • The total number of characters in an FQDN cannot exceed 253, including the separating dots.
  • The hierarchy of domains descends from right to left.
  • DNS names may use only the ASCII subset A – Z, 0 – 9, and the hyphen (the “LDH rule”: letters, digits, hyphen).
  • DNS names may NOT start with a hyphen.
  • DNS names are case insensitive.
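A quick way to sanity-check a candidate label against the LDH rule is a regular expression.  This sketch also rejects a trailing hyphen, which common practice disallows as well:

```shell
# Returns success if the label follows the LDH rule, is 1-63 characters,
# and neither starts nor ends with a hyphen.
valid_label() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$'
}

valid_label "yoda-web01" && echo "ok"        # ok
valid_label "-badname"   || echo "rejected"  # rejected
```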


The Single UNIX Specification version 2 (SUSv2) guarantees that “host names are limited to 255 bytes”.  POSIX 1003.1-2001 guarantees that “host names (not including the terminating null) are limited to HOST_NAME_MAX bytes”.  On Linux, this value is set to 64 bytes (glibc defaults to 64).
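You can check the limit on a given system with getconf, and compare your current hostname against it (on a typical Linux/glibc box the limit is 64):

```shell
# Query the POSIX hostname length limit for this system
max=$(getconf HOST_NAME_MAX)

# Sanity check: does the current hostname fit within the limit?
h=$(hostname)
[ "${#h}" -le "$max" ] && echo "hostname fits within $max bytes"
```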


  • Minimum name length: 2 characters; maximum name length: 63 characters.
  • Minimum name length: 2 characters; maximum name length: 24 characters.
  • Minimum name length: 1 character; maximum name length: 15 characters.

The maximum length of a label within a host name or fully qualified domain name (FQDN) is 63 octets, and the maximum length of the FQDN itself is 255 bytes: 254 bytes for the name plus one byte for the ending dot.
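These limits can be checked mechanically too.  Here is a small sketch that validates total FQDN length against the 253-character textual limit from the DNS rules above (the 63-per-label rule would be a separate check):

```shell
# Returns success if the FQDN's printable form fits within 253 characters,
# leaving room for the trailing dot and the wire-format length bytes.
valid_fqdn_len() {
  name="${1%.}"             # drop a single trailing dot, if present
  [ "${#name}" -le 253 ]
}

valid_fqdn_len "yodaweb001.prod.example.com." && echo "ok"   # ok
```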

HAV vs. Cloud

I’m tired of the word cloud.  It is so hackneyed at this point that uttering it feels like being a “poser” skateboarder in the late ’80s, wearing Vision Street Wear and holding a pristine Gator pro model with rails, copers, and a noseguard.  Sigh.  However, for over a year now I have had to ensure that all my peers at Gap, and those upstream of me, understood the differences between “true” cloud and virtualization.  Infrastructure architectural designs and discussions had to be framed in a way that kept people (even smart people) from being sucked into broad misinterpretations of technology implementations.

Back in July of 2011, my buddy Chris Buben and I sat in the Gap cafeteria, frustrated about how to get everyone on our teams aligned on how to properly consume “cloud”.  We wasted so much time in team meetings correcting people’s widespread misuse of the term that we were determined to fix the vernacular.  One of the biggest enablers of successful technology projects is clear communication, specifically the need to align everyone on the terms.  This means you need everyone from technology VP down to analyst saying the same things the same way.  We needed a clearer way to mass-communicate virtualization versus cloud.  So, we came up with a simple way to quickly and easily differentiate a lot of big infrastructure architectural differences between cloud and pure virtualization.  We started describing two different architectural zones: one we called High Availability Virtualization, or HAV, and the other just cloud.

We wanted to draw a distinction between the two to keep people from thinking that we could simply shove every application we have at Gap into what we considered real cloud-architected infrastructure.  The reality is that you cannot.  Not every app has an architecture and code amenable to real cloud computing.  The sad truth is that many commercially sold applications are not optimized for real cloud.  Case in point: many Oracle E-Business apps (in fact, Oracle seems to go out of its way to make applications difficult to run on anything but Oracle platforms and software… but I digress).  So, we needed a way to give ourselves breathing room to host some apps in a virtualized environment and reap some modern architectural benefits without being caught up in the “cloud” term abuse game.  HAV became our mantra for everything non-cloud.  You might ask why we drew the distinction at all.  The answer is that we had our own clear viewpoints about what cloud really was versus mere virtualization.  For purposes of this analysis, we are using OpenStack as our cloud (IaaS) platform.  To explain our position, I put together a breakdown of HAV versus Cloud.  Understand that some cloud features are continuously being developed and improved (especially in the networking area), so this chart is changing as I type.

HAV Versus Cloud

  • Private Cloud = cheap, fault tolerant by design, disposable, big scale
  • HAV = more expensive, fault tolerance through licensed features, less scalability


HAV (High Availability Virtualization) vs. Cloud (Private IaaS)

Hardware:

  • HAV: Blade servers or rack servers; commodity hardware (premium or cheap)
  • Cloud: Rack servers primarily; commodity hardware (cheap)

Storage:

  • HAV: Shared SAN or NAS disk for VMs; T1 SAN or T2 NAS tier on iSCSI-based block; high IOPs performance expectations
  • Cloud: Local disk for VMs; iSCSI-based, cloud-managed block volumes; generally “lower” expected IOPs performance

VM lifecycle:

  • HAV: Persistent VMs (horizontal scale needs are less)
  • Cloud: Disposable/ephemeral VMs that scale out

Hypervisors:

  • HAV: VMware ESXi, RHEV, Citrix XenServer, Hyper-V
  • Cloud: KVM, Xen (other hypervisors such as ESXi may be supported, but we didn’t want to use them)

Patching:

  • HAV: Patching needed due to persistent, long-lived VMs; WSUS for Windows; CM-pushed RPM/YUM updates for Linux
  • Cloud: No “traditional” patching (always dispose and rebuild instantly); high-risk vulnerabilities require new images (AMIs) to be built; CM-pushed RPM/YUM updates for Linux

Availability:

  • HAV: Persistent VMs that are re-deployed on failure; clustering used for HA; live migration; apps behind load balancer VIPs
  • Cloud: On-demand new VM instances deployed as needed; no reliance on live migration; apps behind load balancer VIPs

Networking:

  • HAV (custom networking): Active/passive NICs or 2 active NICs on hypervisor hosts; dot1q VLAN trunks to HAV hypervisor hosts; bridged networking between HAV VMs and the network; subnets controlled by VLANs and network L3 switches; default gateways are external L3 VLAN interfaces
  • Cloud (homogenized networking; note: this is changing): 2 or more active NICs required on hypervisor hosts; dot1q VLAN trunks to cloud hypervisor hosts; NAT between cloud VMs and the network is typical; VLANs/subnets pre-allocated per cloud hypervisor host or by tenant/project; default gateways are virtual gateways on the cloud hypervisor host, or can be external L3 VLAN interfaces

DHCP:

  • HAV: Enterprise DHCP servers
  • Cloud: Local cloud-host DHCP servers, or external enterprise DHCP servers

DNS:

  • HAV: Enterprise DNS direct to VMs
  • Cloud: Cloud-host DNS proxy (dnsmasq) to VMs, or enterprise DNS

Database Tier

  • HAV: Physical and virtual (determined by performance requirements); Linux servers can utilize LXC containers for physical pseudo-virtualization; blade servers or rack servers; local disk, T1 SAN, or T2 NAS, with the database tier on iSCSI-based block
  • Cloud (Private IaaS): Physical and virtual, predominantly virtual (determined by performance requirements); Linux servers can utilize LXC containers for physical pseudo-virtualization; rack servers predominantly; local disk and iSCSI-based, cloud-managed block volumes

So, why go to all this trouble to break these out?  Because, when trying to properly place applications in the most appropriate environment for deployment, you need to take all these factors into consideration.  You may have an application that does not tolerate network address translation, or does not work well in a load balancer  design, or needs ultra-high IOPs.  In such a case, you may be better served deploying such apps in HAV architected infrastructure zones.  However, if you have a modern, truly service oriented architecture (SOA) app that doesn’t require ultra high IOPs, and is designed from ground up to be stateless and has built in failure detection then cloud architected zones may be perfectly appropriate.

The point is:  Don’t just shove every app (commercial or homegrown) into cloud-architected zones without understanding the implications of doing so.  You may end up with undesired performance or reliability headaches.  Our goal is to move as much of our application workload as possible to OpenStack-based cloud infrastructure zones.  However, on the journey to that nirvana, we have a lot of legacy app crud that just isn’t optimized for true cloud infrastructure.  For these apps, we have opted for a one-two hop, using an HAV infrastructure zone on the way to cloud.  This buys us time to either re-architect those apps or simply replace them.  All of this brings up a larger discussion about proper cloud application design.  In the next post, I will cover months of work by our internal architects and app dev teams at Gap on what comprises “cloud architected” application best practices.

“Hello world!”

So after a long mental loop of reminding myself that I need to share my technology discoveries and journeys, I finally did it.  Here it is: Padgeblog.  The blog name is based on a nickname I have had since high school, a shortened version of my last name, Padgett.  Hence the nickname “Padge” and now the blog “padgeblog.”

If you like explanations on why things arrived at their current state and where they are moving, especially on all things infrastructure technology with a little splash of skateboarding, occasional nice pictures, music discoveries, and some rants then you might enjoy this blog.

Let’s see how it goes.