What I’m Digging about Terraform

Scripts vs. CloudFormation vs. Terraform

I had to see why Terraform was gaining market share so rapidly for Infrastructure Automation (IA). After all, CloudFormation seemed to do the job. Sure, it was a pain at times, but generally speaking I could accomplish what I needed to, and honestly, the promise of “cloud vendor agnostic” both feels like a pipe dream AND just isn’t something I worry about generally (I’m not the CTO of a Fortune 500 company). I worry about DR, sure, and multi-region, sure, but not “cloud vendor”. Isn’t that why we pay a premium for AWS? So we’re confident?

I wrote a small application – a CloudWatch-scheduled Lambda function that reads from an API and writes to S3. Big enough to express some complexity but small enough not to feel monstrous for a comparison effort. There are probably a thousand other ways to do this, and of course one would need to get highly modular to support crafting an entire infrastructure, but I like the idea of figuring out ways to keep stuff reasonably self-contained.

Manual Scripts

I’m a traditionalist, and I love that the AWS CLI gives me pretty much all the power I need. For smaller apps it is reasonable to just script up what you need, a bit of Bash, a touch of Python, and poof, reproducible infrastructure. I like it. I get it. I understand it’s not for everybody, and of course, if you are building projects with self-contained IA this is a reasonable approach. But get to over 10 or so individual resources and things quickly become a maintenance nightmare.

  • (WIN!) Simple. Readable. No extra tools beyond the CLI, provided you know Bash and ideally a bit of Python or Ruby
  • (WIN!) Self-contained, a solid solution for the small-project use case
  • (LOSE!) Becomes complex quickly
  • (LOSE!) Not so good for that demographic of web developers trying to “build and ship” on their own who possibly don’t have this history of ops skills

All in all, I like this approach when I can keep it small. Trouble is…nothing ever wants to stay small!

CloudFormation

The AWS gold-standard approach to IA works. It’s a bit of a chore with the way functions and references work in the context of JSON, but hey, it’s AWS, they know what they are doing, and you can get the job done. Big CFTs (CloudFormation templates) become chore-some quickly, however, and start to hurt my brain.

  • (WIN!) It’s AWS. It stays current with their product line.
  • (LOSE!) Comments. Yes, OK, you CAN comment, but you do it as a “Metadata” node in the JSON. 1. That doesn’t REALLY feel like a comment, and 2. the comment is buried in the definition.
  • (WIN!) In CFT you can simply inline “ManagedPolicyArns” as part of the properties of the group definition. This is more effort and (seemingly) unnecessarily explicit in Terraform.
  • (WIN!) I like the definition of a user’s groups as part of the user definition.
  • (WIN!/LOSE!) The CloudFormer tool is a bit of a pain, and though I’ve used it a couple of times, I’m dubious about its value. The way it presents the resource IDs, most of the time I have to manually cross-check everything to figure out which IDs I want to pull in. Also – what’s the deal with having to use CloudFormation to launch an instance to run CloudFormer? C’mon, guys.

Terraform

HashiCorp’s IA offering, Terraform. At first I disliked the “it’s not json” nature of the HashiCorp Configuration Language (HCL). I quickly realized though that it was partly what enabled two of my favorite aspects, terser syntax and simpler functions.

  • (WIN!) Comments in the config.tf file.
  • (WIN!) That’s worth repeating – comments in the config.tf file, one-line or block.
  • (WIN!) One more time – comments in the config.tf file: // a comment or /* some comments */
  • (WIN!) Deltas inferred through diff, rather than “change sets”. Less of an “idempotent infrastructure” approach, but ultimately I think more desirable.
  • (WIN!) Smart variable management. Somehow the approach simply feels more articulate. Maybe it’s because I’m chiefly on the dev side and not the ops side, but ${something} reads better than the JSON equivalent.
  • (WIN!) `terraform plan` – yeah – tell me what you are going to do before you do it. What brilliance! In CFT land I have to just send up the config and then troll for “events” and errors and see what happened.
  • (WIN!) More terse syntax and much more standard, non-JSON-based function application, e.g. ${var.var_name} vs { "Ref" : "targetEnvironment" }, and bucket = "${var.var_name}foobar" vs "BucketName": { "Fn::Join" : [ "", [ { "Ref" : "targetEnvironment" }, "foobar" ] ] }
  • (WIN!) Code completion in IntelliJ (WIN WIN WIN)
  • (LOSE!) The explicit sub-definition as-an-object of every managed policy. It seems like you could just have this inline as a list. In CFT you can simply inline those as part of the properties of the group definition.
  • (UNDECIDED) Adding a group to the user is defined at the user level in CFT, and this feels right. In Terraform it is done in neither; both are defined and there is then a join target, “aws_iam_group_membership” – which doesn’t feel wrong, but is another block… On the other hand, there is a general pattern of 1. Create A, 2. Create B, 3. Join A to B. This feels natural after a minute, and though it might slightly bloat the config it is still very easy to “reason about” and well-articulated – and the config is STILL smaller than CFT.
  • (WIN!) Ability to create S3 resources directly. The hokey workaround in CFT land is to create a lambda that creates the resources that then get torn down. Or any of a number of other hokey solutions.
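As a quick illustration of the comment and interpolation points above, here is a minimal, hypothetical config.tf fragment (the variable and resource names are made up for this sketch):

```hcl
# One-line comments work with # too.
variable "target_environment" {
  default = "dev"
}

/* Block comments
   also work. */
resource "aws_s3_bucket" "output" {
  // Interpolation instead of Fn::Join:
  bucket = "${var.target_environment}foobar"
}
```

Run `terraform plan` against this and Terraform reports the delta before touching anything.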

It’s strange – all the same data is there but it just feels cleaner and smaller with Terraform.

Altogether Now

Docs – all of this is really well documented. I didn’t have any trouble finding the information I needed in any case I tried. The weakest was probably the Boto examples, but overall it was great not to struggle or have to peruse source code to get the job done.

There’s something to like about every approach, and something to dislike, I think. I’m sure I missed somebody’s favorite feature in one of these, and failed to state a flaw somebody thinks is critical. Let it go, internet. My only intent was to represent first experiences and key points that were valuable to ME, hoping this helps YOU.

DISCLAIMER: I do NOT work for HashiCorp – and in all honesty fully expected to hate Terraform before this exercise began, because it seemed like just another me-too solution that will always lag behind. Well…it’s not. It has legitimate differentiators and improvements over the alternatives.

Quick Hit: Add R Support to Conda Environment/Jupyter Kernel

Only tested on OSX. Notes to add R Notebook support to your Conda Environment(s).

Conda is an environment/package manager, allowing you to separate your workspaces, e.g. one for python 2.x, one for python 3.x, one for R…etc.

1. If you haven’t, install Miniconda or Anaconda: http://conda.pydata.org/docs/install/quick.html

2. (recommended) Create a new environment for R Kernels

conda create -n jupyter_r -c r r-essentials

3. Switch to this environment

source activate jupyter_r

4. Install the R kernel (in R) (and make it available to jupyter)

R
> install.packages(c('repr', 'IRdisplay', 'evaluate', 'crayon', 'pbdZMQ', 'devtools', 'uuid', 'digest'))
> devtools::install_github('IRkernel/IRkernel')
> IRkernel::installspec(user = FALSE)

5. Fire up a notebook!

jupyter notebook

- select an R kernel (now available under the “new” menu).  Happy coding!

Hibernate Mappings for Performance and Serialization

What

An example of how to do Hibernate Mapping/JPA in a manner most conducive to well-performing database queries and minimally-painful serialization.

Why

Because I seem to have this discussion way too often and this stuff is too abstract to talk about without something concrete.

Because I’ve been down this road too many times and wanted to share those learnings generally.

Because I believe in domain-driven design and development and often people shoot themselves in the foot when mapping things.

Where

https://github.com/revelfire/mappingexample

How

Basic Mapping Goals

  • Minimize the object “graph” to owned-relationships
  • Prevent stack overflow errors on serialization
  • Prevent massive descent into the object graph on serialization
  • Model the data closely to how it gets accessed
  • Keep it simple

Guiding Principles

  • No, or minimal, bidirectional relationships
  • Minimize entity relationships, maximize use of foreign keys
  • Allow services to store relationships rather than automating with instance references
  • Allow HQL/Repository loads and services to load relationships WHEN they matter

So much of this stuff is use-case based and should be considered in the context of access vs. ownership, both in services and over the wire. Please keep this in mind for the example; I will endeavor to explain the modeling choices in this light.
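The principles above can be sketched without any JPA at all. Here is a minimal, plain-Java illustration (class and method names are made up for this sketch) of referencing by foreign key and resolving the relationship in a repository only when it actually matters:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch (no JPA): entities reference each other by id, and a
// repository query resolves the relationship only when a caller asks for it.
class Account {
    final long id;
    final String name;
    Account(long id, String name) { this.id = id; this.name = name; }
}

class User {
    final long id;
    final long accountId; // foreign key, not an Account reference
    final String name;
    User(long id, long accountId, String name) {
        this.id = id;
        this.accountId = accountId;
        this.name = name;
    }
}

class UserRepository {
    private final Map<Long, User> store = new HashMap<>();

    void save(User u) { store.put(u.id, u); }

    // The one-off "access" query: load users for an account on demand,
    // instead of cascading or eagerly loading a users collection on Account.
    List<User> findByAccountId(long accountId) {
        List<User> result = new ArrayList<>();
        for (User u : store.values()) {
            if (u.accountId == accountId) result.add(u);
        }
        return result;
    }
}
```

The JPA version of this same shape follows in the example below; the point here is just that Account never holds a live User collection.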

Ownership

OK, let’s talk about the example. (In case you didn’t read above: https://github.com/revelfire/mappingexample) We have here a very basic Account->User->Address set of domain/service/repository/test classes. One of the most common use cases you’re likely to find.

Notice that the Account object doesn’t actually contain User objects. User does contain an account_id. Why?

It is generally the case that when we are interacting with a User, we don’t care about the Account (e.g. profile view/edit, access rights, password updates).  On the other hand, when we interact with an Account, we sometimes want to know a list of User (admin screens).

@Entity
@Table(name="account")
public class Account extends Identifiable {

    @Column(length = 50, nullable = false)
    private String name;
...
}
@Entity
@Table(name="user")
public class User extends Identifiable {

    @Column(length = 100, nullable = false)
    private String name;
// note that this is not a reference to Account
    @Column(name = "account_id", nullable = false)
    private Long accountId;

    /**
     * This COULD be @OneToOne as an "owned" relationship if we felt strongly
     * about having a separate table.
     *
     * This COULD be a one-many scenario in which case it would not be modeled on this
     * end, rather a repository.loadAddressForUser with address.user_id being the join point
     * via the foreign key reference.
     */
    @Embedded
    private Address address;
...
}

So – Account is an “owner” of users in the sense that without it, the users probably don’t make sense by themselves, and would go away if the account went away. So of course Account should have access to users, BUT it is one-to-many, so I really don’t want to cascade those changes, or even load the list of users in the general case – only when I really mean to (e.g. not when serializing). So I model it this way to create an avenue for access, but limit the ownership (delegating to the service tier).

Oftentimes I will hear “but Chris, we want to be able to easily get the list of USERS from the ACCOUNT by simply typing account.getUsers()!” Yes. Maybe you do. You are trying to be lazy but failing, because you are actually causing more work for yourself down the line. You will now most likely have to set up OpenSessionInViewFilter, and that sucks. Or worse, you make it EAGER, and shoot yourself every freakin’ time you load it. (The ignorant developer’s way of solving LazyInitializationException.)

How about instead you create a nice repository method for that one-off use case?

@Repository
public interface AccountRepository extends CrudRepository<Account, Long> {
    @Query("select u from User u where u.accountId = ?1")
    List<User> getUsersForAccount(long accountId);
}

That wasn’t so bad, was it? Access granted.

Sometimes there is also concern about managing parent/child key relationships. To that, I make this case: you WANT to manage those. You don’t often set them, except on create, and you are already doing that, just as an entity, not as an id explicitly. So use an id. You’ll live longer.

Also on the topic of ownership, note that Address is @Embedded into User. You can read the comments there now, because you skipped them before. Basically, I only ever turn on cascade with @OneToOne, and really the only reason to use that is to make your DBA happy or to prevent severely wide tables (which seems somewhat rare). This is more truly ownership, in the sense that loading one loads the other, always.

What’s Wrong With Bidirectional?

Oftentimes in the documentation you will read things about mapping bidirectional relationships. While possible, I assert that this is the wrong thing to do in 99% of cases. It is very rare that you need to ask about User from Address, or about Account from User, and in the cases you do, go ahead and load that entity by its id. Your code will be cleaner and you will have fewer errors.

Also, entity<->entity mapping and bidirectional mapping will often cause serialization to go bonkers and StackOverflowError. This is bad. Of course, you could @JsonIgnore that relationship or @JsonBackreference or whatever – but now you are most likely hacking in the solution. The (unnecessary) mapping itself is the problem.

What About Many to Many?

In many-to-many relationships there is almost always no ownership – just access, and we have different methods for creating availability.

I don’t have this use case in this example (yet). I may add it. The simple answer is, with many-to-many you are probably better off still letting the JPA implementation manage the relationship. However, you should be careful to disable cascading behaviors,

e.g.

@JsonIgnore
@ManyToMany(fetch = FetchType.LAZY) // note: no cascade
@JoinTable(
    name = "country_states",
    joinColumns = { @JoinColumn(name = "COUNTRY_ID", nullable = false, updatable = false) },
    inverseJoinColumns = { @JoinColumn(name = "STATE_ID", nullable = false, updatable = false) })
public Set<States> getStates() {
    return this.states;
}

I find I don’t run into this use case too often “in the wild” and typically wind up doing things in the services, or using (again) repository queries for these types of many-to-many loads. There is, however, much value in @ManyToMany for managing the join table when the case does arise.

And Finally

If you take these approaches you will find your database performs better (far fewer automated joins, access to data when you need it, not just because Hibernate is a dumb animal), your REST calls are smoother (serialization doesn’t wreak havoc), you don’t need OpenSessionInViewFilter (slowing things down and locking up connections in the pool), and ultimately you start to model your API endpoints a little differently. They really ought to be resource-based, and granular, unless performing some larger non-CRUD unit of work which, let’s face it, is service-backed anyway (not automated via JPA). Right?


Why I’m Walking and Working

Some fads are fads. Some fads are causes. This one is, I believe, a cause. Eight years ago I built my first walk-n-work desk for health reasons. As a software engineer, I sit for a living…or maybe I sit to my death. In any case I had back problems early on which were identified by doctors, chiropractors, and massage therapists alike as “sitting injuries”.

So – I bought a treadmill and built a desk around it. I walked 3-4 hours a day while working. As a telecommuter I was fortunate to be able to do this. Combined with a reasonable diet and absolutely no other exercise, I lost around 35 pounds. Much less expensive than what I spent on doctors, chiropractors, and massage therapists, and with better results. I thought I might start a business making things like this for companies who cared about ergonomics and health!

Well…the thing broke down, I got lazy. I didn’t start a company. I moved and didn’t bring it with me…etc. Gained a bunch of weight back. Developed worsening back problems, neck problems, shoulder and wrist problems, gout, high blood pressure…

The worst of it was, a friend of mine, responding to my gout, said “ah, rich man’s disease.” I realize it’s an old saying, but I was still offended. By American standards I’m perhaps upper middle class but not rich, and even at upper-middle-class wages I have a very middle-class lifestyle and a reasonable diet. But – he was right. Historically it was a rich man’s disease. High-fat foods, sedentary lifestyle… gout happens. My doctor said the best cure he had for gout was me losing weight.

It isn’t that I’m a big fat gluttonous slug (mostly…I do like some cake now and then); it’s that watching my calories, cutting out soda and most sweets, stopping when I’m full, all of it doesn’t much help. Even eating mostly home cooked, organic, veggie-centric, reasonable meals – sitting for a living leaves me heavy, and getting heavier.

I noticed the stand-up desk explosion happening at several companies. It seemed like the up/down desk and my old friend the treadmill desk were taking off. I figured I really should look into what was out there.

I told a friend I was ordering one and he laughed loudly. Ridiculous! He is also a software developer. I told him there was real danger in sitting for a living.

“Bah. Pseudoscience at best I’m sure!”

So I went a-looking, and I came across the initiative for this “fad”. Apparently some clever scientists actually studied the problem, and, shockingly, learned that sitting is bad for our health. Of course in some way we knew that, but HOW bad, well – here’s the Mayo Clinic on the matter…

http://www.mayoclinic.com/health/sitting-disease/MY02177

On Sitting:

  • 50 to 70 percent of people spend six or more hours a day sitting
  • 20 to 35 percent spend four or more hours a day watching TV

On Living:

If Americans would cut their sitting time in half, their life expectancy would increase by roughly:

  • 2 years (by reducing sitting to less than 3 hours a day)
  • 1.4 years (by reducing TV time to less than 2 hours a day)

And I think software developers sit 8-14 hours a day!

Some typical media-ified content on the matter -

http://www.theatlantic.com/health/archive/2012/04/confirmed-he-who-sits-the-most-dies-the-soonest/256101/#

and here is the actual science it references: http://archinte.jamanetwork.com/article.aspx?articleid=1108810

Here’s a punch line – exercise doesn’t really help -

“…demonstrates that inactive participants with high levels of sitting had the highest mortality rate, and the strong relationship of increased sitting time to mortality persisted, even among participants with relatively high levels of physical activity. ”

So – you can’t just do the gym thing an hour here or there. You actually have to GET UP out of the damn chair, and stand or walk. Since sitting burns 5 cal/hr, and standing only burns 15 cal/hr, I figure walking is the best option. That gets you >100 cal/hr – so in an 8-hour day you can burn 40, 120, or 1000+ calories, your choice. Obviously it isn’t just about calories, but that is where it starts. Get rid of the fat, get the body working naturally again, and the health will follow.

I find that I can’t REALLY walk 8 hours a day, at least not yet. But I can walk a lot. Anything is better than what I used to do. And the real kicker is this: I have better focus, concentration, and interest in what I am doing. Probably more blood to the brain or something, but, yay walk-n-work.

Some other articles if you are interested:

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(12)61031-9/abstract

http://www.bbc.co.uk/news/uk-wales-politics-18876880

http://www.americashealthrankings.org/all/sedentary

http://www.lifespanfitness.com/workplacesolutions-treadmill-desk-and-bike-desk-research.html

I’ll report back on this blog if I actually manage to lose weight, and what kind of steps/miles I rack up.

todomvc.com is pretty dang cool

I recently had the opportunity (read ‘curse’) to build a fancy modern website from the ground up. The client didn’t want the site to DO anything special, per se, but it needed to look slick as shit and be highly maintainable in that enterprise kinda way.

To me this meant a backend that exposed API endpoints, and a completely disconnected front end. I’m a java dude, so – spring rest, some controllers, no jsp…etc. Simple, straightforward, testable, easy.

On the front end, I had used the javascript module pattern on a few projects. It was straightforward, but not terribly maintainable on an enterprise scale. So, I took a deep dive into the state of modern javascript frameworks.

Holy crap-in-my-eyes, they are like bunny rabbits. The last time I looked there was Dojo and YUI and this new kid jQuery that was up and coming. Sure, I’ve used jQuery on a bunch of projects lately, but the last time I really did an ANALYSIS, that was the landscape. I’ve used Dojo and YUI, in their old-world forms, and they were OK enough.

So I started digging… and digging… and digging. I don’t think I’ve had this kind of analysis paralysis since trying to decide whether FreeMarker or Velocity was better. My god, so many of them are really good, in different ways. So many are pretty lame. Most have a few really bright spots and a lot of brown spots. There are these new ways of thinking about JS, like the AMD pattern, dependency injection – WHAT?! DI for JS? Crazy.

I studied up on EmberJS, Backbone, Backbone-Marionette, AngularJS, CanJS, Knockout.js. On another front I looked into jQueryUI, Twitter Bootstrap, modern YUI, modern Dojo, myriad other jQuery plugins, bootstrap plugins. Dart. Closure. Clojure. (WTF?!)

On another front I read up on Underscore templates, Handlebars (already used and enjoyed Mustache), and Soy (Google Closure) templates. And on it went. Something of a disaster, really. After a couple weeks of prototyping, testing, configuring, tinkering, debugging, and mixing and matching, one is left simply drooling and praying for 1999 to come back.

Anyway, the point of this entirely lame rant is that I stumbled across something which, while it doesn’t tackle all of the tools/frameworks/templates above, does a bang-up job of hitting the high points and actually provides a reference implementation of the same app across many frameworks. I can’t take credit for it. I just think it’s cool.

http://todomvc.com/


Spring Data MongoDB Convert From Raw Query (DBObject)

I had the use case of needing to query MongoDB through Spring Data directly, with more or less a raw JSON query. The first part was easy:

DBObject dbObject = (DBObject) JSON.parse(query);
DBCursor cursor = mongoTemplate.getCollection("foo").find(dbObject);

In spite of the fact that this turned out to be so trivial, I burnt several hours trying to find a solution for mapping the return objects back to my pojo/model/class. Most solutions I found had me trying to use GSON or Jackson (directly) – basically working AROUND the logic I KNEW must be in there somewhere.

Let Spring do the heavy lifting with the stuff it already has built for this…

I just couldn’t do the hack in good conscience, so I kept digging. The solution turned out to be trivial as well, but is either not well documented or not easily Google searched.

while (cursor.hasNext()) {
    DBObject obj = cursor.next();
    Foo foo = mongoTemplate.getConverter().read(Foo.class, obj); 
    returnList.add(foo);
}

So it goes…

Java Version Management on OSX

Been doing Java on OSX for a long time. Either everybody else knows this and nobody told me, or this is a well-kept secret. Turns out there are some convenience methods on OSX for toggling the active Java version via java_home. What a surprise. There’s also a Java console in System Preferences! Who knew!

Probably most of you don’t have to do this often, but I work on projects requiring 1.6 32 bit, 1.6 64 bit, 1.7, and 1.8, so this was a godsend.

Note that I am on OSX 10.8.3

System Prefs

To launch the Java Control Panel on Mac OS X (10.7.3 and above):

  • Click on Apple icon on upper left of screen.
  • Go to System Preferences
  • Click on View
  • Click on Java icon to access the Java Control Panel.

Swapping Versions

What I did, once I learned of this, is create some simple scripts to toggle between the versions I had installed.

set1.6-32.sh

#!/bin/bash
export JAVA_HOME=`/usr/libexec/java_home -v 1.6 -a i386`
java -version

CORRECTION: The latest java update for mac removed the 32bit mode

set1.6-64.sh

#!/bin/bash
export JAVA_HOME=`/usr/libexec/java_home -v 1.6 -a x86_64`
java -version

set1.7.sh

#!/bin/bash
export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
java -version

Spring WebMvc Unit Test fails. “Caused by: java.lang.IllegalArgumentException: A ServletContext is required to configure default servlet handling”

Sometimes when you are using Spring Java config and trying to run a unit test, you’ll find that you cannot run the tests unless you comment out @EnableWebMVC, which can cost you some time (or at least it did me). The runner complains that “A ServletContext is required to configure default servlet handling” while you think to yourself, “why do I care?”

The solution?  A simple combination of Spring profiles, and a custom test config class.

First, your webapp initializer, which probably sets up your context. Alternatively this may be in your web.xml. In either case the important thing is that you are setting an active profile on the servlet dispatcher.

Things to note are the injection of the application config into the root context, the web config into the dispatcher context, and the active profile setting on the dispatcher.

@Profile("container")
public class WebAppInitializer implements WebApplicationInitializer {

    public void onStartup(ServletContext container) {

        //Load Annotation Based Configs
        AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
        container.addListener(new ContextLoaderListener(rootContext));
        rootContext.register(ApplicationConfiguration.class); 

        ... root config stuff ...

        // Create the dispatcher servlet's Spring application context
        AnnotationConfigWebApplicationContext dispatcherContext = new AnnotationConfigWebApplicationContext();
        dispatcherContext.register(MVCConfiguration.class); 
        dispatcherContext.scan("com.foo");

        // Register and map the dispatcher servlet
        ServletRegistration.Dynamic dispatcher =
                container.addServlet("dispatcher", new DispatcherServlet(dispatcherContext));
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
        dispatcher.setInitParameter("spring.profiles.active", "container"); 

    }
 }

Next, your MVCConfiguration.java. Things to note are just the profile to run in, and the fact that this contains your @EnableWebMvc.

@Configuration
@Profile("container") 
@EnableWebMvc
public class MVCConfiguration extends WebMvcConfigurerAdapter {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/*.html").addResourceLocations("/");
        registry.addResourceHandler("/js/**").addResourceLocations("/js/");
        registry.addResourceHandler("/css/**").addResourceLocations("/css/");
        registry.addResourceHandler("/img/**").addResourceLocations("/img/");
    }

    ... The rest of your mvc config related stuff...
}

Then, your ApplicationConfiguration.java. Nothing in particular to note here except that it is your primary app config, no web mvc related stuff, and no profile specified.

@Configuration
@ImportResource( { "classpath:/spring/security.xml" } )
@PropertySource(value = { "classpath:some.properties"})
public class ApplicationConfiguration  {

    @Autowired
    private Environment environment;

    ... Various beans for your application that aren't web specific and should be made available to tests as well...
    public @Bean
    MongoDbFactory mongoDbFactory() throws Exception {
        UserCredentials userCredentials = new UserCredentials(environment.getProperty("mongodb.username"), environment.getProperty("mongodb.password"));
        return new SimpleMongoDbFactory(mongo().getObject(), environment.getProperty("mongodb.database"), userCredentials);
    }
}

Next, your TestConfig.java, which explicitly excludes the class that contains @EnableWebMvc, and runs in its own profile. Things to note are the customization of the @ComponentScan, which has specific exclusions for the MVCConfiguration and the WebAppInitializer, as well as the @Import of the ApplicationConfiguration and the setting of the active profile.

@Configuration
@ComponentScan(basePackages = "com.foo",
    excludeFilters = {
        @ComponentScan.Filter(
            type = FilterType.ASSIGNABLE_TYPE,
            value = { MVCConfiguration.class, WebAppInitializer.class }
        )
    }
)
@Import(ApplicationConfiguration.class)
@ActiveProfiles("integration-test")
public class TestConfig {

}

Finally, the unit test which previously was blowing up because it wanted a servlet context, should now work by simply swapping out the config it uses for bootstrapping the context.

@ContextConfiguration(classes = TestConfig.class)
@ActiveProfiles("integration-test")
@RunWith(SpringJUnit4ClassRunner.class)
public class AnyTestRequireSpring {

    @Test
    public void testSomeBeanBehavior() {
    }
}

Hope this helps….

Spring REST Jackson JSON debug 400

Oh I’m so sick of trying to find this information for myself again when I’ve lost track of it.

Sometimes when you are doing REST calls to Spring MVC and using Jackson, you get this nice 400 error – it is barfing on the format of the request – but everything looks good. Hmmm… nothing in the log. Why not? Because the log isn’t configured to report it. How can I do such a thing?

log4j.logger.org.springframework.web.servlet.mvc=TRACE

 

ahhh… that felt nice. The output, instead of nothing at all, becomes:

org.springframework.http.converter.HttpMessageNotReadableException: Could not read JSON: Can not deserialize instance of java.lang.String out of START_OBJECT token
at [Source: org.apache.catalina.connector.CoyoteInputStream@79111260; line: 1, column: 2]; nested exception is org.codehaus.jackson.map.JsonMappingException: Can not deserialize instance of java.lang.String out of START_OBJECT token

And this isn’t the best example; usually you get missing properties and such, if you are using Jackson for model binding.
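For context, the one-liner above usually lives in a log4j.properties alongside the rest of your logging config. A minimal, hypothetical fragment (the appender name and pattern here are just illustrative):

```properties
# Minimal log4j.properties sketch.
log4j.rootLogger=INFO, stdout

# The line that surfaces the HttpMessageNotReadableException behind the 400:
log4j.logger.org.springframework.web.servlet.mvc=TRACE

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p %c - %m%n
```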

MongoDB and Spring Data – simple fixes, no digging

This is here as much as a placeholder for myself as info for anyone else. It is a list of minor and medium issues I’ve had working with MongoDB that were not instantly findable in a Google search.

* “ns doesn’t exist”

In this case I was using MongoDB locally with a Spring Data connector. I suppose the ‘ns’ here probably means “namespace”… not the most well-thought-out error. The error, should you come across it, might look like this:

Caused by: com.mongodb.CommandResult$CommandFailure: command failed [mapreduce]: { "serverUsed" : "localhost/127.0.0.1:27017" , "errmsg" : "ns doesn't exist" , "ok" : 0.0}
at com.mongodb.CommandResult.getException(CommandResult.java:88)
at com.mongodb.CommandResult.throwOnError(CommandResult.java:134)
at org.springframework.data.mongodb.core.MongoTemplate.handleCommandError(MongoTemplate.java:1658)
… 35 more

The actual problem is that the specified collection name doesn’t exist in the database you are configured to talk to. Verify the database and collection name to solve the problem.

* Logging with the JS in a map-reduce job

Surprisingly difficult to find the answer to this, or I just suck at searching. Searching the Mongo site itself for ‘mapreduce logging’ or ‘print’ didn’t yield clear results.

It is really, really simple.

…js code…; print("some debug" + debug); …more js code…

The content will flush out to the MongoDB log… which hopefully you have access to…

* How do I get my oid in a Spring map-reduce job so I have access to the timestamp?

I couldn’t figure this one out. Add a util.date to your JSON constructs instead. I think it’s because Spring jacks with that ID.
