Right, so yes, five years ago I moved to GitHub Pages, and never bothered to redirect any of these pages there. Now I've moved on from there too, and... finally I am using my real domain, trishagee.com. My blog is now at trishagee.com/blog. See you there!
Spock is awesome! Seriously Simplified Mocking
When developing the new MongoDB Java driver, we're constantly fighting a battle between using tools that do the heavy lifting for us and minimising the dependencies a user has to download in order to use the driver. Ideally, we want the number of dependencies to be zero.
This is not going to be the case when it comes to testing, however. At the very least we're going to use JUnit or TestNG (we used TestNG in the previous version; we've switched to JUnit for 3.0). Until recently we worked hard to eliminate the need for a mocking framework - the driver is not a large application with interacting services, and most things can be tested either as an integration test or with very simple stubs.
Recently I was working on the serialisation layer - we're making quite big changes to the model for encoding and decoding between BSON and Java, which we hope will simplify our lives but also make things a lot easier for the ODMs (Object-Document Mappers) and third-party libraries. At this level it makes a lot of sense to introduce mocks - I want to ensure particular methods are called on the writer, for example; I don't want to check actual byte values, as that's not going to be very helpful as documentation (although there is a level at which that is a sensible thing to do).
We started using JMock to begin with - it's what I've been using for a while, and it gave us what we wanted: a simple mocking framework. (I tried Mockito too, but I'm not so used to its failure messages, so I found it really hard to figure out what was wrong when a test failed.)
I knew from my spies at LMAX that there's some Groovy test framework called Spock that is awesome, apparently, but I immediately discarded it - I feel very strongly that tests are documentation, and since the users of the Java driver are largely Java developers, I felt like introducing tests in a different language was an added complexity we didn't need.
Then I went to GeeCON, and my ex-colleague Israel forced me to go to the talk on Spock. And I realised just how wrong I had been. Far from adding complexity, here was a lovely, descriptive way of writing tests. It's flexible, and yet structured enough to get you thinking in a way that should create good tests.
Since we're already using Gradle, which is also Groovy, we decided it was worth a spike to see whether Spock would give us any benefits.
During the spike I converted a selection of our tests to Spock to see what it would look like on a real codebase. I had some very specific things I wanted to try out:
Mocking
Stubbing
Data-driven testing
In the talk I also saw useful annotations like @Requires, which I'm pretty sure we're going to use, but I don't think that's made it into a build yet.
So, get this, I'm going to write a blog post with Actual Code in. Yeah, I know, you all thought I was just a poncy evangelist these days and didn't do any real coding any more.
First up, Mocking
So, as I said, I have a number of tests checking that encoding of Java objects works the way we expect. The easiest way to test this is to mock our BSONWriter class to ensure that the right interactions are happening against it. This is a nice way to check that when you give an encoder a particular set of data, it gets serialised in the way BSON expects. These tests ended up looking something like this:
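The gist that was embedded here no longer renders, so what follows is a sketch of roughly what that test looked like rather than the original code. BSONWriter and the write* expectations come from the post and the comments below; ListOfStringsCodec, its encode signature, and the org.bson import are stand-ins I've used for illustration.

import org.bson.BSONWriter;  // assuming the writer lives in the org.bson package
import org.jmock.Expectations;
import org.jmock.integration.junit4.JUnitRuleMockery;
import org.jmock.lib.legacy.ClassImposteriser;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;

import java.util.List;

import static java.util.Arrays.asList;

public class ListOfStringsCodecTest {
    // the JMock rule has to be a public field, which is what upsets checkstyle
    @Rule
    public final JUnitRuleMockery context = new JUnitRuleMockery();

    private BSONWriter bsonWriter;
    private ListOfStringsCodec codec;  // stand-in name for the encoder under test

    @Before
    public void setUp() {
        // BSONWriter is a concrete class, so JMock needs the ClassImposteriser to mock it
        context.setImposteriser(ClassImposteriser.INSTANCE);
        bsonWriter = context.mock(BSONWriter.class);
        codec = new ListOfStringsCodec();
    }

    @Test
    public void shouldEncodeListOfStrings() {
        final List<String> stringList = asList("Uno", "Dos", "Tres");

        // expect the encoder to drive the writer in the order BSON requires
        context.checking(new Expectations() {{
            oneOf(bsonWriter).writeStartArray();
            oneOf(bsonWriter).writeString("Uno");
            oneOf(bsonWriter).writeString("Dos");
            oneOf(bsonWriter).writeString("Tres");
            oneOf(bsonWriter).writeEndArray();
        }});

        codec.encode(bsonWriter, stringList);
    }
}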
So that's quite nice: my test checks that, given a List of Strings, they get serialised correctly. What's not great is some of the setup overhead:
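This gist is missing too; roughly speaking, the offending part is the class-level plumbing from the sketch above (again a reconstruction, not the original code):

@Rule
public final JUnitRuleMockery context = new JUnitRuleMockery();

private BSONWriter bsonWriter;
private ListOfStringsCodec codec;

@Before
public void setUp() {
    // can't just call context.mock() on a concrete class without the imposteriser
    context.setImposteriser(ClassImposteriser.INSTANCE);
    bsonWriter = context.mock(BSONWriter.class);
    codec = new ListOfStringsCodec();
}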
Obviously some of the things there are going to be ringing some people's alarm bells, but let's assume for a minute that all decisions were taken carefully and that pros and cons were weighed accordingly.
So:
Mocking concrete classes is not pretty in JMock, just look at that setUp method.
We're using the JUnitRuleMockery, which appears to be Best Practice (and means you're less likely to forget the @RunWith(JMock.class) annotation), but checkstyle hates it - Public Fields Are Bad as we all know.
But it's fine, a small amount of boilerplate for all our tests that involve mocking is an OK price to pay to have some nice tests.
I converted this test to a Spock test. Groovy purists will notice that it's still very Java-y, and that's intentional - I want these tests, at least at this stage while we're getting used to it, to be familiar to Java programmers, our main audience.
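The Spock gist has gone the same way, so here's a sketch of roughly how the converted test looked - complete with the semicolons, 'public' and 'setup:' label that the comments below (rightly) pick on. The codec name, its encode signature and the org.bson import are still stand-ins.

import org.bson.BSONWriter  // again, assuming the org.bson package
import spock.lang.Specification
import spock.lang.Subject

class ListOfStringsCodecSpecification extends Specification {
    private BSONWriter bsonWriter = Mock();

    @Subject
    private ListOfStringsCodec codec = new ListOfStringsCodec();  // stand-in name again

    public void 'should encode list of strings'() {
        setup:
        List<String> stringList = ['Uno', 'Dos', 'Tres'];

        when:
        codec.encode(bsonWriter, stringList);

        then:
        1 * bsonWriter.writeStartArray();
        1 * bsonWriter.writeString('Uno');
        1 * bsonWriter.writeString('Dos');
        1 * bsonWriter.writeString('Tres');
        1 * bsonWriter.writeEndArray();
    }
}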
It's a really simple thing, but I like having the @Subject annotation on the thing you're testing. In theory it should be obvious which of your fields or variables is the subject under test, but in practice that's not always true.
Although it freaks me out as someone who's been doing Java for the last 15 years, I really like using a String for the method name - although in this case it's the same as the JMock/JUnit equivalent, it gives a lot more flexibility for describing the purpose of the test.
Mocking is painless, with a simple call to Mock(), even though we're still mocking concrete classes (this is done simply by adding cglib and Objenesis to the dependencies - there's a sketch of the Gradle additions at the end of this list).
I love that the phases of Spock (setup:, when:, then:) document the different parts of the test while also being the magic keywords that tell Spock how to run it. I know other frameworks provide this, but we've been working with JUnit, where I've been in the habit of commenting my steps with //given //when //then.
Thanks to Groovy, creating the list of strings takes less boilerplate. Not a big deal, but it just makes it easier to read.
I've got very used to the way expectations are set up in JMock, but I have to say that 1 * bsonWriter.blahblahblah() is much more readable.
I love that everything after then: is an assertion, I think it makes it really clear what you expect to happen after you invoke the thing you're testing.
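And since I mentioned the dependencies above, here's roughly what the Gradle additions look like. The coordinates and versions here are from memory, so treat this as a sketch rather than gospel:

dependencies {
    testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
    // these two let Spock mock concrete classes like BSONWriter
    testCompile 'cglib:cglib-nodep:2.2.2'
    testCompile 'org.objenesis:objenesis:1.2'
}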
Another Spock convert! Good stuff!
I'm interested though as to why you're using 'setup:, when:, then:' when you could simply use 'given:, when:, then:'?
I didn't know about the @Subject annotation so thanks for that, pretty useful when you've got more than a few fields (I normally just put the object under test as the bottom field...which is probably not even noticed by my fellow developers, but they'll have to take notice of @Subject)
Er, cos I didn't know about "given"! I was following some tutorial somewhere. I'll convert ours to "given", I prefer that.
@Subject is really nice for documentation. It's not so mandatory for genuine unit tests, because it'll probably be in a SubjectSpecification test, but for integration-style tests it's dead useful.
Cool. Also check out the 'where:' clause which comes after 'then:' - it's similar to JUnit Theories, which can be useful when you want to test a bunch of inputs (and outputs) without repeating the unit test.
Yep, that's coming up in a later instalment....
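(For anyone reading along who hasn't met it yet, a where: block looks something like this - a generic, made-up example rather than anything from the driver:)

import spock.lang.Specification
import spock.lang.Unroll

class MaximumSpecification extends Specification {
    @Unroll
    def 'the maximum of #a and #b should be #expectedMaximum'() {
        when:
        def result = Math.max(a, b)

        then:
        result == expectedMaximum

        where:
        a | b || expectedMaximum
        1 | 3 || 3
        7 | 4 || 7
        0 | 0 || 0
    }
}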
Trish - This is Groovy. Get rid of the semicolons and get rid of 'public'. You don't need it. It's the default.
+ Since I suspect ordering is important in this test I would write something like:
then:
1 * bsonWriter.writeStartArray()
then:
1 * bsonWriter.writeString('Uno')
then:
1 * bsonWriter.writeString('Dos')
then:
1 * bsonWriter.writeString('Tres')
then:
1 * bsonWriter.writeEndArray()
Thanks! Good catch, we've been going Spock-crazy since I finished this spike, and our recent tests just use "def" for methods. I'm not sure about the semi-colons - we still write a LOT of Java so getting out of the habit is a pain when you switch back into the Java.
And yes, order is important. But I think that looks ugly. Still, we should do that actually - our JMock tests didn't care because it made the tests clumsy, but our Spock tests should assert the order.