Enterprise Metaphors: Water

When you read about the Devops movement, there is generally an assumption of greenfield. The pieces that do address change are usually describing an organization with fewer than 100 IT workers, or one bringing less than 10 years of history with it. While devops (at least the idea behind it) is more than a fad, I think the conversation focuses too heavily on implementation and charts, and not enough on strategy and purpose. I will try to use some illustrative examples of what I feel are the mechanics behind devops and cloud computing, and apply them to the Enterprise.

How did we get here?

So, distributed computing… Like any cool new technology, businesses flocked to Unix as a cheaper alternative to mainframes. The adoption looked like most technological adoption. It didn’t replace the mainframe for core business functionality (oftentimes, it still hasn’t), mostly due to fear of risk. But projects sprang up. Each project evaluated its needs, hired staffing, bought hardware, developed, and deployed.

The Sprawl

At this point there were dedicated developers, sysadmins, project personnel, hardware, and software, all of which were underutilized and non-portable. And the staff could not easily be reallocated, since each project required learning and working with an entirely different stack: you probably had HP-UX, DB2, and C++ in one place, then Solaris, Oracle, and Java in another. And projects that matched tools didn’t match versions or consumption patterns.

From each individual project’s perspective, it made sense. But as a whole it was a mess.

 

Organic Growth
Organic Growth in Delhi (photo by Rhiannon / CC0)

Standardization!

To address the issue, phase 2 begins. Standardize all the things! Timed with the move to commodity software and hardware, management addresses the resource sprawl through standardization. Instead of teams organized by project or business unit, teams are reorganized by function: the sysadmin team, the hardware team, the DBA team, etc. The choices available for development are reduced to Linux, Java, and a single RDBMS. Process is put in place to ensure all projects follow the rules through oversight.

Chinese Apartment Buildings (photo by Jim Bowen / CC BY 2.0)

Post-Standardization

Now costs have been reduced, but time to market has grown. The business doesn’t understand why it takes 1-2 years to get a new application into production. More resources are dedicated to oversight than to construction. The business is frustrated, development is frustrated, operations is frustrated. Management is itching to buy into something new, and vendors smell blood.

Process as water

In the beginning of the story of distributed computing, a project is like a bucket of water. We just dumped it on the ground and it ran generally downhill to its destination. It took whatever path was easiest. So depending on each project’s perspective and the conditions at the time, each took its own path.

Standardization meant that rather than pouring each bucket out and letting it run, we carried each bucket to its destination. The buckets generally followed the same path (depending on who was carrying them), but it is energy-intensive. The water resents being constrained by the bucket carriers, and the bucket carriers resent the weight. Many times, the next evolution of this is a bucket brigade. The person who cares about infrastructure costs passes the bucket to the person who cares about information security, then on to the person who cares about operations policy, and so on. And sometimes buckets get set down along the way.

Bucket Brigade (photo by Appalachia Rising / CC BY 2.0)

The bucket brigade, while more effective than before, is expensive and ever-expanding. The number of people carrying the bucket always grows, and that always increases the opportunities for projects to be slowed down. Also, oversight always depends on the overseer. Development gets frustrated by seemingly ever-changing requirements, the process managers are frustrated by development always trying to skirt the rules, and no one feels like their concerns are being addressed.

Automation, not standardization

The solution is automation instead of rules and enforcement.  Stop lugging water, and start building aqueducts.  This is what things like Devops, cloud computing, IaaS, and PaaS promote.  You go back to the first state and you ensure that the easiest path is the “right” path.  You ensure that if people conform to the desired parameters they decrease effort and time to market.

Aqueduct (photo by Sloopng / CC0)

Final Thoughts

Let People Succeed

There is an easy litmus test for this: if a job is just following a procedure, it needs to be done by automation. This sounds harsh, but someone who only follows procedure has zero opportunity for success. They only have the ability to fail. People’s energy needs to be spent making decisions where they can provide value.

Stop Documenting

It sounds wrong, but try not to write down how something needs to be done. Make something that just does it. For the things you can’t tie together, have authoritative, enforced data sources. Let the system be the documentation. Then you ensure it is never out of date. And if you thought [FILL IN THE REGULATORY CONCERN] auditors liked good documentation, they love live, queryable systems. Well, except for the whole reducing-billable-hours part.

Nothing alleviates the work.

Adopting any solution to this does not take the work out of it. It just makes the work constructive. Devops-style configuration management is a lot of work, but you only have to do it once. Standing up a PaaS, tying CI/CD into it, extending it to support your business, etc. is a lot of work, but then every project that can fit in the PaaS benefits. No vendor is going to hand you a ready-made solution to your problem. Remember, every time a product has more than one way of doing something, that’s work. And when a vendor says it can do it however you want, that means you’re building it yourself.

Raised beds on the cheap

So you want a raised garden for vegetables, but don’t have a boatload of cash? Use what America is built with: pressure-treated 2x4s.

But but but, chemicals!

When I originally looked into building raised beds, I saw many articles requiring cedar and shying away from pressure-treated pine because of arsenic. This has been factually incorrect for more than 10 years. CCA (Chromated Copper Arsenate) hasn’t been used to treat residential lumber since 2003 (see the EPA). And even then, the big issue isn’t actually leaching, but burning the wood and dealing with the ash. That said, use cedar if it’s reasonable. It smells great and looks awesome too, if you ask me. But at my lumberyard here, cedar 2x4x8s are >$8 apiece. Pressure-treated pine is ~$3 a pop.

What to buy

  • 9 2x4x8 Boards
  • 1 4x4x8 Board
  • Deck screws certified for pressure treated lumber

For the last one, I personally love these guys: FastenMaster FMGD003-75 GuardDog Exterior Wood Screw, Tan, 3-Inch, 75-Pack. They even come with a pozisquare bit in the package, which has all the great features of a Phillips and a Robertson head combined… end result: let your drill bit fly and don’t worry about stripping it.

Directions

Cuts

  1. Cut 3 x 2x4x8 boards in half
  2. Cut 4 x 1.5 foot sections from the 4x4x8

Assembly

  • The 4 x 4x4x1.5 pieces are vertical posts
  • The 6 x 2x4x4 pieces are for the short sides (stacked 3 high)
  • The 6 x 2x4x8 pieces are for the long sides (stacked 3 high)

Placement

Assuming you are placing this somewhere that is currently grass.

  1. Drop your mower as low as it will possibly go and mow the area.
  2. Feel free to do stuff to physically (not chemically!) abuse whatever vegetation is left.
  3. Dig 4 holes at each corner for the posts
  4. Place the frame on the ground with the posts in your new holes
  5. Use a mattock or shovel or your hands if you have to, and break up the soil as much as possible to approximately 12″ below the surface.
  6. Do soily stuff
  7. Plant plants
  8. Mulch!
  9. Dance!

Soily stuff

This is worthy of a section of its own, because your soil is where all of your hard work should go. If you want good plants, focus on good soil; the rest will come.

There are a few ways to go with this.

Buy in bulk

If you have a local supplier who is worthy of your business, you can buy garden soil by the yard and haul it yourself or have it delivered. Just remember that 1 cubic yard covers 27 square feet, 1 foot deep; this 4-by-8 bed filled about a foot deep is 32 cubic feet, so plan on a bit over a cubic yard. Make sure that the soil has been composted or “cooked”: you want the temperature to have been high enough to kill off residual weed seeds and such. You want the soil to have lots of awesome organic components to feed your plants and to provide that nice structural balance that promotes root growth and proper water retention. If they send you something that looks like topsoil, send it back immediately. I have heard horror stories of raised gardens full of clay. Try not to let anyone unload anything other than rich, near-black, spongy awesomeness in your driveway/yard.

Buy by the bag

It’s easy enough to get bags of organic garden soil at your local garden center or big-box retailer. And this is one of those times organic really pays off. Rather than just fertilizer added to filler, your typical organic soil will have those sweet, sweet organic components that will continue to pay dividends, both chemically and in terms of consistency, for years.

Amend your soil

This takes a lot more knowledge and work. You need to know your existing soil, and know how to work it to get the right consistency and the correct chemical balance to make typical vegetables happy. That’s a whole topic in itself.

After that

Start composting for next year

I can’t emphasize this enough. You will want it to amend your soil and for your expansion plans which will naturally result after you get hooked. You can start with just a pile or get a fancy composter. But get to work.

Irrigation

I got into setting up a rain barrel and a drip-irrigation system, and I’ve enjoyed playing with both. The latter especially seemed to help get bountiful vegetables and prevent disease by keeping leaves dry (in my humid climate).

Read science-backed articles

I cannot emphasize this enough. Don’t use that spray you found on Pinterest to kill weeds. Yes, it will kill the weed, but it will also make the soil you hit with it barren. Look at your state university’s agricultural information; it’s often incredibly useful. Look for people citing sources.

When you can’t find science, go back to folk wisdom

It’s closer to science than internet wisdom. Talk to people at locally owned and operated nurseries and garden centers. Ask what works. Judge what they say by what you know. If they make crap up about something you know, then they’re probably not reputable.

Hubot Inspiration

At PuppetConf I had the pleasure of attending Phil Zimmerman’s awesome Killer R10K Workflow session.

While R10K and his voodoo were awesome, the use of Hubot as kind of the hub for the workflow’s communications had me feeling a little inspired.

I am unable, however, to use anything as-is in the workflow, because we can’t use external services (GitHub, HipChat, etc.).

We have set up a workgroup XMPP host (running Openfire). And since we both develop Ops infrastructure and run an R&D lab, we set up a couple of chatrooms for SysAdmin topics and Development topics.

So now, using Hubot and hubot-xmpp, we have our own friendly chatbot, Virgil. It’s a play on Dante’s Divine Comedy, with Virgil as the guide through Hell and Purgatory, plus the historical poet Virgil himself, and the Southern use of the name. I think it captures what I call the “redneck scholar” personality well… the guy who is well read and can operate a tractor. Theory and practice in a single person. In a way, what the whole devops movement strives for.

Currently Virgil is tied into our git repositories with a post-receive hook. He has the typical Hubot functionality, plus random quotes.

In particular, I saw a script I liked that was made to cheer up people who mentioned failure. But it just spat out a single quote, and I wanted a little variety. I would have just used the msg.random piece, but the thing that bugged me was the single string. So I made him pull a random inner array from an array of arrays, so I could store quotes and attribution reasonably and deal with them as distinct but related pieces of data.

Nothing crazy. Just pick a random number between 0 and n-1 (and make sure it’s an int) and grab that element from the outer array. This lets me manage quotes and attribution in a more flexible way and gives Virgil some personality.
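A minimal sketch of the idea (the trigger regex and the quotes here are made up, not the real Virgil script):

    # quotes stored as [text, attribution] pairs
    quotes = [
      ["Fall seven times, stand up eight.", "Japanese proverb"]
      ["There is no failure except in no longer trying.", "Elbert Hubbard"]
    ]

    module.exports = (robot) ->
      robot.hear /fail(ed|ure)?/i, (msg) ->
        # pick a random pair, then send the text and attribution together
        [text, who] = quotes[Math.floor(Math.random() * quotes.length)]
        msg.send "\"#{text}\" -- #{who}"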

I’m now polishing off putting him into the escalation chain on our monitoring system, which uses a convoluted email -> procmail -> Python script -> JSON-via-HTTP -> Hubot path. It sounds more complicated than it is; basically it’s a script feeding the Hubot script. Procmail also buys me the ability to take all those annoying things that have to use email for notification and reduce them to chat notifications, where more of them belong.
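The Python piece is tiny. Roughly something like this sketch (the host, the /monitoring path, and the payload fields are placeholders, not the real script):

    #!/usr/bin/env python3
    import email
    import json
    import sys
    import urllib.request

    # Read the alert mail that procmail pipes in on stdin.
    msg = email.message_from_file(sys.stdin)

    # Boil it down to a small JSON payload (field names are made up here).
    payload = json.dumps({
        "subject": msg.get("Subject", "(no subject)"),
        "from": msg.get("From", "unknown"),
    }).encode("utf-8")

    # POST it to whatever endpoint the Hubot script registers on its
    # built-in HTTP listener (host and path are placeholders).
    req = urllib.request.Request(
        "http://hubot.example.com:8080/monitoring",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)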

The key is that I snarf the data as data, and use the bot script to present it with personality.

Next up will be tying him into our API in front of Foreman and Puppet to magically provision machines for us. I can’t wait to ask him for 3 VMs with Tomcat.

Introducing GardenBuddy

I’ve put together a little project for the Raspberry Pi to monitor environmental conditions in my garden. For now I’m calling it GardenBuddy, and the code is available here: https://github.com/mmessmore/garden_buddy

Using a few sensors (light, soil temperature, moisture) and available weather data from NOAA, I can monitor my garden and watch for trends.

The software is all in Python and includes the little daemon for stuffing the data into RRD files, plus a couple of CGIs (kickin’ it old-skool) for viewing the graphs.

I’ve made it so the sensor and graph configuration is all done in an INI-formatted config file, so you don’t necessarily have to know Python to use it.

It’s like performance monitoring, but for your tomatoes.

I dream of one day intelligently managing a watering system with it, but for now semi-pretty graphs will do.

Some TODO items I have are:

  • interfacing with more sensor types
  • making it prettier
  • Unrolling the requirement for a “real” webserver
  • A rainbarrel/soakerhose/valve management piece

Battery Replacement on the Nexus 4

Just wanted to note that I found this article, which describes the battery replacement process well (the YouTube clip helps immensely). One piece of errata, however: you need a #00 Phillips, not a #0 as described, for removing the battery connection itself. My eBay-bought battery seems to be working great. Hopefully I can get a bit more life out of the thing before I buy my Nexus Eleventy-two.

Birds!

I just have to share this great collection of bird photos that helps me identify birds in the backyard. It really does cover 99% of what I have ever seen in west Tennessee:

Birds of Tennessee by Bruce Cole

papply

I’ve started on cloning ksb’s excellent xapply in Python, for two reasons:

  1. It’s an interesting exercise
  2. There are many times I don’t have msrc, or don’t want to bring msrc with me, for a one-off usage where a Python script would be perfect

Currently it just requires Python 2.7+ (I really love argparse).

I currently support:

  • Parallel jobs!
  • Input from command arguments
  • Input from arbitrarily many files
  • Fancy dicer syntax (e.g. %[2,4])

So far it does most of what I need, but it is nowhere near feature parity yet. I was considering going with different command-line arguments, but I decided to stay as close to the original as I can (although I cannot guarantee argparse will behave exactly the same as ksb’s getopts behavior).
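As a rough sketch of the intended usage, assuming the option spelling and the dicer end up mirroring xapply’s:

    # run four gzips in parallel, substituting each filename for %1
    papply -P 4 'gzip %1' *.log

    # the dicer: %[1,2] splits argument 1 on "," and takes the second field,
    # so this should echo "b"
    papply 'echo %[1,2]' a,b,c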

Feel free to contribute if you’re bored. Feel free to use it if it helps. I’m releasing it under the standard 3-clause BSD License.

retro-cool: tcpmux

TCPMUX is a wonderful (and potentially terrible) protocol for one-off network services. It’s described in RFC1078.

Basically, TCPMUX is a service itself (usually built into or run from inetd) that listens on port 1. To access a particular service it provides, you give it the name of the service plus a CRLF. ‘help’ is a special service that lists all available services.

So, for example, I wanted a way for one host to poll the list of ports installed on another host. I have two lines in my /etc/inetd.conf file:
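They look roughly like this; the exact syntax varies by inetd flavor, and the ‘portlist’ service name and script path are made-up stand-ins:

    # enable the built-in TCPMUX listener on port 1
    # (some inetd implementations want this line; others turn it on implicitly)
    tcpmux           stream  tcp  nowait  root    internal
    # register a TCPMUX service named "portlist" that runs a local script
    tcpmux/portlist  stream  tcp  nowait  nobody  /usr/local/libexec/portlist.sh  portlist.sh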

I have a dumb little script that generates the output:
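If ‘ports’ here means installed FreeBSD ports/packages, it can be as dumb as this stand-in:

    #!/bin/sh
    # dump the installed package list, one per line
    # (pkg info on newer FreeBSD; pkg_info on older systems)
    pkg info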

Then I can quickly get this data from everywhere like so:
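Something like this, where ‘portlist’ and the hostname are the made-up names from above:

    # ask the TCPMUX listener on port 1 for the "portlist" service
    printf 'portlist\r\n' | nc somehost 1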

Now, most implementations are a little forgiving about the newline sequence, but YMMV.

xinetd typically doesn’t provide ‘help’ and has been known to just segfault sometimes, although I think this has been fixed in recent versions.

But this is a great alternative to setting up a user with SSH keys, or doing something more complicated, for passing data that is fine to send across the wire plain-text and unauthenticated. nc + tcpmux is an incredibly handy (and potentially powerful) combination.

Now there are some obvious limitations here:

  • Be very, very careful with user input. Acting on user input in something like a shell script is fraught with danger.
  • Passing tcpmux through a perimeter firewall is probably not the best plan unless you have control of everything. tcpmux can be abused to provide ANY network service.

systemd enters the real world

Despite my frustrations with systemd and the attitudes surrounding it, it has now been accepted by both Debian and Ubuntu in addition to Fedora. And this is a great thing.

Read what is going through the community now. Things like this blog post are floating around. The conversation is happening. The concerns are out there. And now they have to be addressed. Now the concerns being addressed are not the concerns of a small group, but the concerns of a greater community. It’s the old “if you can’t beat them, join them”… and change them.

Debian alone introduces a large stabilizing force. Up until now, systemd has been controlled by a community of like-minded people. That’s normal. But when it becomes Linux infrastructure the variety of minds contributing and consuming increases. And heterogeneity is a wonderful thing. I just may hold off using it myself, until some of this takes shape. :)

FreeBSD’s pkgng keeps being awesome

So there is plenty of work left to do, but I keep finding new ways to love FreeBSD’s pkgng.

Now that the official repo is up and running I use it rather than building everything from ports. I only build two packages now because I use options that aren’t selected by default: php5 (for mod_php) and mutt (mostly for IMAP header caching).

So I build them from ports, and use the ‘pkg lock’ command to keep pkg from updating them incorrectly.

So I couldn’t remember whether I had anything else locked, and looked into the ‘pkg query’ command. It takes an amazing set of format-string options, which let me make a quick one-liner:
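Something along these lines, using %k (the lock flag) as the evaluation condition:

    # list locked packages as name-version
    pkg query -e '%k = 1' '%n-%v'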

And when I want to check for vulnerabilities, I can just run ‘pkg audit’, which is just amazing. Imagine being able to do that across a server farm without needing to buy or build something, or (as I imagine many do) just version-scanning network services.

They really didn’t just reinvent the wheel here. They have put a lot of effort, and a lot of learning from others (like yum and apt), into creating a best-of-breed package management tool that actually integrates with ports very well.