Using REST Assured for Testing

I decided to start testing my Dropwizard RESTful interface with REST Assured.  After poking around a little bit to figure out how to test my service as deployed, I settled on it after seeing it referenced in quite a few places.

After I figured out the basics of its “given/when/then” syntax and setting headers, I wondered how to keep from setting the headers, including Content-Type, on every single request.  It turns out the way to do this is with a RequestSpecBuilder.
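Since I haven’t shown the code in this post, here’s a minimal sketch of what I mean.  The base URI, port, and /widgets endpoint are hypothetical placeholders, not from my actual project:

```java
import static io.restassured.RestAssured.given;

import io.restassured.RestAssured;
import io.restassured.builder.RequestSpecBuilder;
import io.restassured.http.ContentType;
import io.restassured.specification.RequestSpecification;

public class WidgetApiTest {
    public static void main(String[] args) {
        // Build the common pieces once instead of repeating them per request.
        RequestSpecification spec = new RequestSpecBuilder()
                .setBaseUri("http://localhost:8080")  // hypothetical Dropwizard port
                .setContentType(ContentType.JSON)
                .setAccept(ContentType.JSON)
                .build();

        // Either pass the spec explicitly on each call...
        given().spec(spec)
            .when().get("/widgets/1")
            .then().statusCode(200);

        // ...or install it as the default for every request in the JVM.
        RestAssured.requestSpecification = spec;
        given().when().get("/widgets/1").then().statusCode(200);
    }
}
```

Either way, the Content-Type and Accept headers no longer need to be repeated in each given/when/then chain.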

I also found out a little bit about JSONPath while working on this aspect of my project.
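For instance, REST Assured bundles a JsonPath class that pulls values out of a response body with a dotted path syntax.  A small sketch, using a made-up JSON payload for illustration:

```java
import io.restassured.path.json.JsonPath;

public class JsonPathDemo {
    public static void main(String[] args) {
        // Hypothetical response body, just for illustration.
        String json = "{\"widget\": {\"name\": \"sprocket\", \"tags\": [\"new\", \"sale\"]}}";

        JsonPath path = new JsonPath(json);
        String name = path.getString("widget.name");       // "sprocket"
        int tagCount = path.getList("widget.tags").size(); // 2

        System.out.println(name + " has " + tagCount + " tags");
    }
}
```

The same path expressions work directly in assertions, e.g. .then().body("widget.name", equalTo("sprocket")) with a Hamcrest matcher.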

Investigating Dropwizard

I was over at /r/java scanning the posts when I ran into a question about Java microservices and what people are using.  I saw two comments mentioning Dropwizard so… thought I’d take a look at it.

It’s similar to Spring Boot, but with a heavier emphasis on simply banging out RESTful web services.  Dropwizard uses Jetty under the covers to handle all of the socket work and whatever other servlet functionality it needs.
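To give a flavor of it: a Dropwizard resource class is plain JAX-RS.  Here’s a minimal sketch, with a hypothetical class and endpoint rather than anything from my actual project:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Dropwizard hands classes like this to Jersey, which runs
// inside the embedded Jetty server.
@Path("/hello")
@Produces(MediaType.APPLICATION_JSON)
public class HelloResource {
    @GET
    public String sayHello() {
        return "{\"message\": \"hello from Dropwizard\"}";
    }
}
```

You then register it in your Application subclass’s run method with environment.jersey().register(new HelloResource());.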

Within the Reddit thread, there’s also quite a bit of talk about Docker and how people are using it.  Dropwizard introduced me to the idea of “fat jars,” which I’d never heard of previously, and which seem in a way to compete with using Docker to package up your application.  Basically, a fat jar is one giant jar that includes your own classes plus the class files from every dependency of your application.  Here’s a web resource on the topic from 2012.
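With Maven, the usual way to produce one (and the approach I’ve seen in the Dropwizard docs) is the maven-shade-plugin.  A minimal sketch; the main class name is a placeholder:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Make the fat jar runnable via "java -jar" by setting Main-Class. -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>com.example.MyApplication</mainClass> <!-- placeholder -->
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

After mvn package, everything ends up in one jar under target/, which is the artifact you’d deploy (or bake into a Docker image).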

Doing Some Coursera Work

I’ve enrolled myself in “Algorithms, Part I” from Princeton University on Coursera.  I’ve put a hold on the textbook through my San Diego Public Library iOS app and expect it to be at my local branch by the weekend.  What a time we live in!

I hope to go all the way through the course and not get distracted by the many other topics which interest me.  Gotta stay on course!

Hadoop Time

It’s finally time for me to look into Hadoop a little more closely.

I cloned the Hadoop 2.7.3 repo and built it without too much difficulty.  I even got the daemon running locally in single-node mode.  After running through a few of the examples, I wondered what Amazon Web Services had to offer as far as running a Hadoop cluster goes.  That’s when I discovered Amazon EMR.  After looking into the documentation – which is as excellent as I expected from the Amazon folks – I decided I would run through their example and incur the $1.05 or so it would cost to bring up a cluster and play with it.  Hopefully another post will be forthcoming on the experience…

Building S2N

s2n is the new TLS/SSL implementation by Amazon.

I retrieved the git repository and tried to build it, only to have it crash while running a unit test.  I put some debugging statements in the test, but they never showed up.  The test was exiting with status 139, and initial searches indicated it might have been a gcc problem.

Turns out, exit status 139 is 128 + 11 – the shell’s way of reporting that the process died from signal 11 (SIGSEGV).  My stack was blowing up during initialization of an array, and I needed to increase my stack size using “ulimit -s 16284”.  An interesting little problem for me, since I haven’t worked in the C world for quite some time.