What I’m Digging about Terraform

Scripts vs. CloudFormation vs. Terraform

I had to see why Terraform was gaining market share so rapidly for Infrastructure Automation (IA). After all, CloudFormation seemed to do the job. Sure, it was a pain at times, but generally speaking I could accomplish what I needed to, and honestly, the promise of “cloud vendor agnostic” both feels like a pipe dream AND just isn’t something I worry about generally (I’m not the CTO of a Fortune 500 company). I worry about DR, sure, and multi-region, sure, but not “cloud vendor”. Isn’t that why we pay a premium for AWS? So we’re confident?

I wrote a small application – a CloudWatch-scheduled Lambda function that reads from an API and writes to S3. Big enough to express some complexity but small enough to not feel monstrous for a comparison effort. There are probably a thousand other ways to do this, and of course one would need to get highly modular to support crafting an entire infrastructure, but I like the idea of figuring out ways to keep stuff reasonably self-contained.

Manual Scripts

I’m a traditionalist, and I love that the AWS CLI gives me pretty much all the power I need. For smaller apps it is reasonable to just script up what you need: a bit of Bash, a touch of Python, and poof, reproducible infrastructure. I like it. I get it. I understand it’s not for everybody, and of course, if you are building projects with self-contained IA this is a reasonable approach. But get to over 10 or so individual resources and things quickly become a maintenance nightmare.

  • (WIN!) Simple. Readable. No extra tools beyond the CLI, if you know Bash and ideally a bit of Python or Ruby.
  • (WIN!) Self contained, a solid solution for the small-project use case
  • (LOSE!) Becomes complex quickly
  • (LOSE!) Not so good for that demographic of web developers trying to “build and ship” on their own, who may not have a history of ops skills

All in all, I like this approach when I can keep it small. Trouble is…nothing ever wants to stay small!
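For flavor, here is the shape such a script takes. This is a minimal sketch, not a real deploy script: the bucket, function, runtime, and role ARN are all hypothetical, and the DRY_RUN guard (on by default here) just previews the AWS CLI calls rather than executing them.

```shell
#!/usr/bin/env bash
# Minimal "script it yourself" sketch. Names below are hypothetical.
set -u

BUCKET="my-app-data"
FUNCTION="my-app-fetcher"
DRY_RUN="${DRY_RUN:-1}"

# When DRY_RUN=1, print the command instead of executing it.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run aws s3api create-bucket --bucket "$BUCKET"
run aws lambda create-function --function-name "$FUNCTION" \
    --runtime python3.9 --handler handler.main \
    --zip-file fileb://build/function.zip \
    --role arn:aws:iam::123456789012:role/lambda-exec
```

The DRY_RUN pattern is what keeps these scripts livable: you can always see exactly what the script intends to do before letting it loose on a real account.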


CloudFormation

The AWS gold standard approach to IA works. It’s a bit of a chore with the way functions and references work in the context of JSON, but hey, it’s AWS, they know what they are doing, and you can get the job done. Big CFTs (CloudFormation templates) become a chore quickly, however, and start to hurt my brain.

  • (WIN!) It’s AWS. It stays current with their product line.
  • (LOSE!) Comments. Yes, OK, you CAN comment, but you do it as a “Metadata” node in the JSON. 1. That doesn’t REALLY feel like a comment, and 2. The comment is buried in the definition.
  • (WIN!) In CFT you can simply inline “ManagedPolicyArns” as part of the properties of the group definition. This is more effort and (seemingly) unnecessarily explicit in Terraform.
  • (WIN!) I like the definition of a users groups as part of the user definition.
  • (WIN!/LOSE!) The CloudFormer tool is a bit of a pain and though I’ve used it a couple of times, I’m dubious about its value. The way it presents the resource IDs, most of the time I have to go manually cross-check everything to figure out which IDs I want to pull in. Also – what’s the deal with having to use CloudFormation to launch an instance to run CloudFormer? C’mon guys.


Terraform

HashiCorp’s IA offering, Terraform. At first I disliked the “it’s not JSON” nature of the HashiCorp Configuration Language (HCL). I quickly realized, though, that it was partly what enabled two of my favorite aspects: terser syntax and simpler functions.

  • (WIN!) Comments in the config.tf file.
  • (WIN!) That’s worth repeating – comments in the config.tf file, one-line or block.
  • (WIN!) One more time – comments in the config.tf file //A Comment or /* some comments*/
  • (WIN!) Deltas inferred through diff, rather than “change sets”. Less of an “idempotent infrastructure” approach, but ultimately I think more desirable.
  • (WIN!) Smart variable management. Somehow the approach simply feels more articulate. Maybe it’s because I’m chiefly on the dev side and not the ops side, but ${something} reads better than the JSON equivalent.
  • (WIN!) `terraform plan` – yeah – tell me what you are going to do before you do it. What brilliance! In CFT land I have to just send up the config and then troll for “events” and errors and see what happened.
  • (WIN!) Terser syntax and much more standard, non-JSON-based function application, e.g. ${var.var_name} vs { "Ref" : "targetEnvironment" }, and bucket = "${var.var_name}foobar" vs "BucketName" : { "Fn::Join" : [ "", [ { "Ref" : "targetEnvironment" }, "foobar" ] ] }
  • (WIN!) Code completion in Intellij (WIN WIN WIN)
  • (LOSE!) The explicit sub-definition as-an-object of every managed policy. It seems like you could just have this inline as a list. In CFT you can simply inline those as part of the properties of the group definition.
  • (UNDECIDED) Adding a group to the user is defined at the user level in CFT, and this feels right. In Terraform it is done in neither place: both are defined, and there is then a join target, “aws_iam_group_membership” – which doesn’t feel wrong, but is another block. On the other hand, there is a general pattern of 1. Create A, 2. Create B, 3. Join A to B. This feels natural after a minute, and though it might slightly bloat the config, it is still very easy to “reason about” and well-articulated in the config – and the config is STILL smaller than CFT.
  • (WIN!) Ability to create S3 resources directly. The hokey workaround in CFT land is to create a Lambda that creates the resources, which then gets torn down. Or any of a number of other hokey solutions.

It’s strange – all the same data is there but it just feels cleaner and smaller with Terraform.
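To make that concrete, here is a minimal sketch of the kind of HCL that motivated several of the points above (the variable and resource names are hypothetical):

```hcl
# One-line comments work...
// ...in either style, as do /* block comments */

variable "target_environment" {
  default = "dev"
}

/* Interpolation reads naturally compared to Fn::Join */
resource "aws_s3_bucket" "data" {
  bucket = "${var.target_environment}-foobar"
}
```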

Altogether Now

Docs – all of this is really well documented. I didn’t have any trouble finding the information I needed in any case I tried. The weakest were probably the Boto examples, but overall it was great not to struggle or have to peruse source code to get the job done.

There’s something to like about every approach, and something to dislike, I think. I’m sure I missed somebody’s favorite feature in one of these, and failed to state a flaw somebody thinks is critical. Let it go, internet. My only intent was to represent first experiences and key points that were valuable to ME, hoping this helps YOU.

DISCLAIMER: I do NOT work for HashiCorp – and in all honesty fully expected to hate Terraform before this exercise began, because it seemed like just another me-too solution that will always lag behind. Well…it’s not. It has legitimate differentiators and improvements over the alternatives.

Why I’m Walking and Working

Some fads are fads. Some fads are causes. This one is, I believe, a cause. Eight years ago I built my first walk-n-work desk for health reasons. As a software engineer, I sit for a living…or maybe I sit to my death. In any case I had back problems early on which were identified by doctors, chiropractors, and massage therapists alike as “sitting injuries”.

So – I bought a treadmill and built a desk around it. I walked 3-4 hours a day while working. As a telecommuter I was fortunate to be able to do this. Combined with a reasonable diet and absolutely no other exercise, I lost around 35 pounds. Much less expensive than what I spent on doctors, chiropractors, and massage therapists, and with better results. I thought I might start a business making things like this for companies who cared about ergonomics and health!

Well…the thing broke down, I got lazy. I didn’t start a company. I moved and didn’t bring it with me…etc. Gained a bunch of weight back. Developed worsening back problems, neck problems, shoulder and wrist problems, gout, high blood pressure…

The worst of it was, a friend of mine, responding to my gout, said “ah, rich man’s disease.” I realize it’s an old saying, but I was still offended. By American standards I’m perhaps upper middle class but not rich, and even at upper-middle-class wages I have a very middle-class lifestyle and a reasonable diet. But – he was right. Historically it was a rich man’s disease. High-fat foods, sedentary lifestyle…gout happens. My doctor said the best cure he had for gout was me losing weight.

It isn’t that I’m a big fat gluttonous slug (mostly…I do like some cake now and then); it’s that watching my calories, cutting out soda and most sweets, stopping when I’m full, all of it doesn’t much help. Even eating mostly home cooked, organic, veggie-centric, reasonable meals – sitting for a living leaves me heavy, and getting heavier.

I noticed the stand-up desk explosion happening at several companies. It seemed like the up/down desk and my old friend the treadmill desk were taking off. I figured I really should look into what was out there.

I told a friend I was ordering one and he laughed loudly. Ridiculous! He is also a software developer. I told him there was real danger in sitting for a living.

“Bah. Pseudoscience at best I’m sure!”

So I went a-looking, and I came across the initiative for this “fad”. Apparently some clever scientists actually studied the problem, and, shockingly, learned that sitting is bad for our health. Of course in some way we knew that, but HOW bad, well – here’s the Mayo Clinic on the matter…


On Sitting:

  • 50 to 70 percent of people spend six or more hours a day sitting
  • 20 to 35 percent spend four or more hours a day watching TV

On Living:

If Americans would cut their sitting time in half, their life expectancy would increase by roughly:

  • 2 years (by reducing sitting to less than 3 hours a day)
  • 1.4 years (by reducing TV time to less than 2 hours a day)

And I think software developers sit 8-14 hours a day!

Some typical media-ified content on the matter -


and here is the actual science it references: http://archinte.jamanetwork.com/article.aspx?articleid=1108810

Here’s a punch line – exercise doesn’t really help -

“…demonstrates that inactive participants with high levels of sitting had the highest mortality rate, and the strong relationship of increased sitting time to mortality persisted, even among participants with relatively high levels of physical activity. ”

So – you can’t just do the gym thing an hour here or there. You actually have to GET UP out of the damn chair, and stand or walk. Since sitting burns about 5 cal/hr and standing only about 15 cal/hr, I figure walking is the best option. That gets you more than 100 cal/hr – so in an 8-hour day you can burn 40, 120, or 1000+ calories, your choice. Obviously it isn’t just about calories, but that is where it starts. Get rid of the fat, get the body working naturally again, and the health will follow.
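The back-of-the-envelope arithmetic above, sketched out (these are my rough per-hour figures, not clinical data):

```shell
# calories burned per hour by posture (my rough estimates), over an 8-hour day
HOURS=8
SIT=$((5 * HOURS))      # sitting all day
STAND=$((15 * HOURS))   # standing all day
WALK=$((125 * HOURS))   # walking all day, calling ">100" roughly 125
echo "sit: $SIT cal, stand: $STAND cal, walk: $WALK cal"
# prints: sit: 40 cal, stand: 120 cal, walk: 1000 cal
```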

I find that I can’t REALLY walk 8 hours a day, at least not yet. But I can walk a lot. Anything is better than what I used to do. And the real kicker is this: I have better focus, concentration, and interest in what I am doing. Probably more blood to the brain or something, but, yay walk-n-work.

Some other articles if you are interested:




http://www.lifespanfitness.com/workplacesolutions-treadmill-desk-and-bike-desk-research.html

I’ll report back on this blog if I actually manage to lose weight, and what kind of steps/miles I log.

todomvc.com is pretty dang cool

I recently had the opportunity (read ‘curse’) to build a fancy modern website from the ground up. The client didn’t want the site to DO anything special, per se, but it needed to look slick as shit and be highly maintainable in that enterprise kinda way.

To me this meant a backend that exposed API endpoints, and a completely disconnected front end. I’m a Java dude, so – Spring REST, some controllers, no JSP…etc. Simple, straightforward, testable, easy.

On the front end, I had used the JavaScript module pattern on a few projects. It was straightforward, but not terribly maintainable on an enterprise scale. So I took a deep dive into the state of modern JavaScript frameworks.

Holy crap-in-my-eyes, they are like bunny rabbits. The last time I looked there was Dojo and YUI and this new kid jQuery that was up and coming. Sure, I’ve used jQuery on a bunch of projects lately, but the last time I really did an ANALYSIS, that was the landscape. I’ve used Dojo and YUI, in their old-world forms, and they were OK enough.

So I started digging…and digging…and digging. I don’t think I’ve had this kind of analysis paralysis since trying to decide whether freemarker or velocity was better. My god so many of them are really good, in different ways. So many are pretty lame. Most have a few really bright spots and a lot of brown spots. There’s these new ways of thinking about JS, like the AMD pattern, dependency injection – WHAT?! DI for JS? Crazy.

I studied up on EmberJS, Backbone, Backbone-Marionette, AngularJS, CanJS, Knockout.js. On another front I looked into jQueryUI, Twitter Bootstrap, modern YUI, modern Dojo, myriad other jQuery plugins, bootstrap plugins. Dart. Closure. Clojure. (WTF?!)

On another front I read up on Underscore templates, Handlebars (already used and enjoyed mustache), Soy (Google Closure) templates. And on it went. Something of a disaster really. After a couple weeks of prototyping, testing, configuring, tinkering, debugging, mix n matching, one is left simply drooling and praying for 1999 to come back.

Anyway, the point of this entirely lame rant is that I stumbled across something which, while it doesn’t tackle all of the tools/frameworks/templates above, does a bang-up job of hitting the high points and actually provides a reference implementation of the same app across many frameworks. I can’t take credit for it. I just think it’s cool.



Spring Data MongoDB Convert From Raw Query (DBObject)

I had the use case of needing to query MongoDB through Spring Data directly with a more or less raw JSON query. The first part was easy:

DBObject dbObject = (DBObject) JSON.parse(query);
DBCursor cursor = mongoTemplate.getCollection("foo").find(dbObject);

In spite of the fact that this turned out to be so trivial, I burnt several hours trying to find a solution for mapping the returned objects back to my POJO/model/class. Most solutions I found had me trying to use GSON or Jackson (directly) – basically working AROUND the logic I KNEW must be in there somewhere.

Let Spring do the heavy lifting with the stuff it already has built for this…

I just couldn’t do the hack in good conscience, so I kept digging. The solution turned out to be trivial as well, but it is either not well documented or not easily found via Google.

while (cursor.hasNext()) {
    DBObject obj = cursor.next();
    // Let Spring's registered converter map the raw DBObject back onto the POJO
    Foo foo = mongoTemplate.getConverter().read(Foo.class, obj);
}

So it goes...

Java Version Management on OSX

Been doing Java on OSX for a long time. Either everybody else knows this and nobody told me, or this is a well-kept secret. Turns out there’s a convenience tool on OSX for toggling the active Java version via /usr/libexec/java_home. What a surprise. There’s also a Java console in System Preferences! Who knew!

Probably most of you don’t have to do this often, but I work on projects requiring 1.6 32-bit, 1.6 64-bit, 1.7, and 1.8, so this was a godsend.

Note that I am on OSX 10.8.3

System Prefs

To launch the Java Control Panel on Mac OS X (10.7.3 and above):

  • Click on Apple icon on upper left of screen.
  • Go to System Preferences
  • Click on View
  • Click on Java icon to access the Java Control Panel.

Swapping Versions

What I did, once I learned of this, is create some simple scripts to toggle between the versions I had installed.


export JAVA_HOME=`/usr/libexec/java_home -v 1.6 -a i386`
java -version

CORRECTION: the latest Java update for Mac removed 32-bit mode


export JAVA_HOME=`/usr/libexec/java_home -v 1.6 -a x86_64`
java -version


export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
java -version

Spring WebMvc Unit Test fails. “Caused by: java.lang.IllegalArgumentException: A ServletContext is required to configure default servlet handling”

Sometimes when you are using Spring Java config and trying to run a unit test, you’ll find that you cannot run the tests unless you comment out @EnableWebMvc, which can cost you some time (or at least it did me). The runner complains that “A ServletContext is required to configure default servlet handling” while you think to yourself, “why do I care?”

The solution?  A simple combination of Spring profiles, and a custom test config class.

First, your webapp initializer, which probably sets up your context. Alternatively this may be in your web.xml. In either case, the important thing is that you are setting an active profile on the dispatcher servlet.

Things to note are the injection of the application config into the root context, the web config into the dispatcher context, and the active profile setting on the dispatcher.

public class WebAppInitializer implements WebApplicationInitializer {

    public void onStartup(ServletContext container) {

        //Load Annotation Based Configs
    	AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
        container.addListener(new ContextLoaderListener(rootContext));

        ... root config stuff ...

        // Create the dispatcher servlet's Spring application context
        AnnotationConfigWebApplicationContext dispatcherContext = new AnnotationConfigWebApplicationContext();

        // Register and map the dispatcher servlet
        ServletRegistration.Dynamic dispatcher =
                container.addServlet("dispatcher", new DispatcherServlet(dispatcherContext));
        dispatcher.setInitParameter("spring.profiles.active", "container");
        dispatcher.addMapping("/");
    }
}


Next, your MVCConfiguration.java. Things to note are just the profile to run in, and the fact that this contains your @EnableWebMvc.

@Configuration
@EnableWebMvc
@Profile("container")
public class MVCConfiguration extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {

    ... The rest of your mvc config related stuff...
    }
}

Then, your ApplicationConfiguration.java. Nothing in particular to note here except that it is your primary app config, no web mvc related stuff, and no profile specified.

@Configuration
@ImportResource( { "classpath:/spring/security.xml" } )
@PropertySource(value = { "classpath:some.properties" })
public class ApplicationConfiguration {

    private Environment environment;

    ... Various beans for your application that aren't web specific and should be made available to tests as well...
    public @Bean
    MongoDbFactory mongoDbFactory() throws Exception {
        UserCredentials userCredentials = new UserCredentials(
                environment.getProperty("mongodb.username"),
                environment.getProperty("mongodb.password"));
        return new SimpleMongoDbFactory(mongo().getObject(),
                environment.getProperty("mongodb.database"), userCredentials);
    }
}

Next, your TestConfig.java, which explicitly excludes the class that contains @EnableWebMvc, and runs in its own profile. Things to note are the customization of the @ComponentScan, which has specific exclusions for the MVCConfiguration and the WebAppInitializer, as well as the @Import of the ApplicationConfiguration and the setting of the active profile.

@Configuration
@Profile("test")
@Import(ApplicationConfiguration.class)
@ComponentScan(excludeFilters = {
        @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE,
                              value = { MVCConfiguration.class, WebAppInitializer.class })
})
public class TestConfig {
}


Finally, the unit test, which previously was blowing up because it wanted a servlet context, should now work by simply swapping out the config it uses for bootstrapping the context.

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TestConfig.class)
@ActiveProfiles("test")
public class AnyTestRequireSpring {

    @Test
    public void testSomeBeanBehavior() {
        ... assert against your Spring-managed beans ...
    }
}
Hope this helps….

Spring REST Jackson JSON debug 400

Oh I’m so sick of trying to find this information for myself again when I’ve lost track of it.

Sometimes when you are doing REST calls to Spring MVC and using Jackson, you get this nice 400 error: it’s barfing on the format of the request, but everything looks good. Hmmm…nothing in the log. Why not? Because the log isn’t configured to report it. How do I fix that?
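The answer is to turn up logging for Spring’s web packages so binding failures actually get reported. A log4j.properties sketch (assuming log4j; the logback equivalent is analogous):

```properties
# Surface HttpMessageNotReadableException and friends from Spring MVC
log4j.logger.org.springframework.web=DEBUG
```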



Ahhh…that felt nice. The output, instead of nothing:


org.springframework.http.converter.HttpMessageNotReadableException: Could not read JSON: Can not deserialize instance of java.lang.String out of START_OBJECT token
at [Source: [email protected]; line: 1, column: 2]; nested exception is org.codehaus.jackson.map.JsonMappingException: Can not deserialize instance of java.lang.String out of START_OBJECT token

And this isn’t the best example; usually you get missing properties and such if you are using Jackson for model binding.

MongoDB and Spring Data – simple fixes, no digging

This is here as much as a placeholder for myself as info for anyone else. It is a list of minor and medium issues I’ve had working with MongoDB that were not instantly findable in a Google search.

* “ns doesn’t exist”

In this case I was using MongoDB locally with a Spring Data connector. I suppose the ‘ns’ here probably means “namespace”…not the most well-thought-out error. The error, should you come across it, might look like this:

Caused by: com.mongodb.CommandResult$CommandFailure: command failed [mapreduce]: { "serverUsed" : "localhost/" , "errmsg" : "ns doesn't exist" , "ok" : 0.0}
at com.mongodb.CommandResult.getException(CommandResult.java:88)
at com.mongodb.CommandResult.throwOnError(CommandResult.java:134)
at org.springframework.data.mongodb.core.MongoTemplate.handleCommandError(MongoTemplate.java:1658)
… 35 more

The actual problem is that the specified collection name doesn’t exist in the database you are configured to talk to. Verify the database and collection names to solve the problem.

* Logging from the JS in a map-reduce job

Surprisingly difficult to find the answer to this, or I just suck at searching. Searching the Mongo site itself for ‘mapreduce logging’ or ‘print’ didn’t yield clear results.

It is really, really simple.

…js code…; print("some debug " + debug); …more js code…

The content will show up in the MongoDB log…which hopefully you have access to…

* How do I get my OID in a Spring map-reduce job so I have access to the timestamp?

I couldn’t figure this one out, so the workaround is to add a java.util.Date to your JSON constructs. I think it’s because Spring jacks with that ID.


