The EuroSTAR Conference will shortly be under way, running from the 4th to the 7th of November in Gothenburg, Sweden. If you have never attended, it's a great way to meet many of the people who make up the European testing community. It's also one of the best ways to learn about software testing and what it means to go beyond the day-to-day job. The conference will be packed with people passionate about their trade, eager to see what's new in the wild and to get a different perspective on existing practices.
This year the conference takes a new turn by cutting speaking slots down to 30 minutes, allowing for more audience interaction, which can only benefit both sides.

I'll be there presenting on the topic of Questioning Acceptance Tests on Tuesday the 5th of November at 16:00 (UTC+1). The talk covers a mixture of technologies we're using at LMAX Exchange to increase our testing coverage, so you'll be hearing a lot about Spock, QuickCheck and property-based testing. For more details, check out the official page of the talk:
http://www.eurostarconferences.com/conferences/session/428/questioning-acceptance-tests
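If you're wondering what property-based testing looks like in practice, here's a minimal, hand-rolled sketch in Java (the real tools, such as QuickCheck, generate and shrink the inputs for you); the priceToString/parsePrice round trip is purely a hypothetical example, not code from LMAX Exchange:

```java
import java.util.Random;

// A hand-rolled sketch of the property-based testing idea; tools like
// QuickCheck generate (and shrink) the inputs for you. The price round trip
// here is a hypothetical example, not LMAX Exchange code.
public class RoundTripPropertySketch {

    // Hypothetical production code: format a price in ticks as a string.
    static String priceToString(long ticks) {
        return Long.toString(ticks);
    }

    // Hypothetical production code: parse the string back into ticks.
    static long parsePrice(String text) {
        return Long.parseLong(text);
    }

    public static void main(String[] args) {
        Random random = new Random(42);
        // Instead of a handful of hand-picked examples, assert one property
        // over many generated inputs: parsing a formatted price always gives
        // back the original value.
        for (int i = 0; i < 1_000; i++) {
            long ticks = random.nextLong();
            long roundTripped = parsePrice(priceToString(ticks));
            if (roundTripped != ticks) {
                throw new AssertionError("Property failed for input " + ticks);
            }
        }
        System.out.println("Round-trip property held for 1,000 generated inputs");
    }
}
```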

You can also get more details on the conference here: http://www.eurostarconferences.com/conferences/2013

Hope to see you there.

I recently went to hear the LSO perform at their residence in the Barbican Centre. The full complement of 91 musicians performed 3 classical pieces and 3 more "movie soundtrack" kind of pieces, as a friend called them.
Halfway into the first piece I started thinking of an analogy between the musical system that was the orchestra and a run-of-the-mill software system. The first question that came to mind was: how would you go about testing something as complex as this?
Now, the experts out there would probably say in an instant that the musicality matters most, i.e. if it's easy on the ear then the system is probably working well. Since I'm no expert and don't possess such a finely tuned measurement tool, I took a different approach.
My first idea was to grab a pen and write down every idea that came to mind. The programme got quite a beating. The intention was then to bring all those random ideas under the umbrella of a mind map. Enter MindMup, a tool that a colleague suggested some time ago. It's free and, like any other self-respecting project out there, it's online. And to top it all off, it's now open source.
So here's what I gathered while listening to the LSO perform Enescu's Rhapsody No. 1:

Strategy for testing a symphony orchestra (mind map, available on MindMup)

Recently at LMAX Exchange, we've started using more and more of what we call integration tests, which prompted this post on how we use them and how we differentiate them from acceptance tests.

To start off, a few definitions are in order:

  • we define an acceptance test as an external client that encodes enough information about how to drive the system under test (SUT) to bring it to the point where something can be asserted.
  • integration tests, on the other hand, are used for testing the internals of the system and are geared towards validating contracts between modules of code (a minimal sketch contrasting the two follows below).
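To make the distinction a bit more concrete, here's a minimal sketch in Java. The OrderBook module and the tests are purely hypothetical illustrations of the two styles, not LMAX Exchange code:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A sketch only: OrderBook is a made-up, in-file module, not LMAX Exchange code.
public class OrderBookIntegrationTest {

    // Hypothetical module under test: a tiny single-price order book.
    static class OrderBook {
        private long restingSellQuantity;

        void placeSell(long quantity) {
            restingSellQuantity += quantity;
        }

        // Returns the quantity matched against resting sell liquidity.
        long placeBuy(long quantity) {
            long matched = Math.min(quantity, restingSellQuantity);
            restingSellQuantity -= matched;
            return matched;
        }
    }

    // Integration test: exercises the module's internal API directly.
    // Nothing needs to be started up, so feedback is quick and a failure
    // points straight at the module under test.
    @Test
    public void buyIsMatchedAgainstRestingSellLiquidity() {
        OrderBook book = new OrderBook();
        book.placeSell(100);
        assertEquals(100, book.placeBuy(100));
    }

    // An acceptance test for the same requirement would instead drive the
    // whole system through its external client API: log in two users, place
    // the matching orders end to end and assert that both counterparties
    // receive a trade report. Slower, but it proves the full wiring in
    // business terms.
}
```

The table below runs through the differences in more detail.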

| Acceptance Tests | Integration Tests |
| --- | --- |
| cover, at a minimum, all the requirements in a story | focus on the same set of requirements as the acceptance tests |
| used to capture the emergent behaviour that's inherent to a story | the code produces emergent behaviour as well; with the help of integration tests the units that drive that emergent behaviour can be tested |
| use the business domain language | tend to leak implementation and use more of the code base's language, since they target lower-level functional units which don't necessarily translate into business concepts |
| usually built on top of several abstraction layers and can be understood by the business users | we use abstraction layers in a similar way to our acceptance test framework to get the same efficiency in writing integration tests; business users have a limited interest in them |
| test the end-to-end wiring, starting from the internals of the system all the way to its external users | focus more on the internals of the system and tend to be kept simple in terms of how many modules participate in making a test pass |
| feedback is slow but more comprehensive | quick feedback, as the system doesn't need to be brought up |
| will suffer from intermittency due to the many outside factors involved | quick to debug and run as part of every commit; it's hard to introduce any intermittency |
| use third-party tools/dependencies, e.g. WebDriver, to complement testing the system through external clients like a browser | keep clear of external tools; the only dependencies are the ones used by the production code |
| have a well-defined external API (most probably the same API that clients use) for interacting with the SUT | use the internal APIs of the modules under test |
| have bigger costs in terms of authoring and maintenance; there's always a judgement call about encoding a specific behaviour in an acceptance test | with integration tests you can go nuts and test a lot of edge and negative cases |
| although more costly, acceptance tests catch more complex bugs | act as a pinpointing method for isolating bugs; they'll always be easier to debug |
| can successfully be used to prove the functionality to users through showcases; their side effects are observable and familiar to the users | the output from an integration test run will seem cryptic and uninteresting to people outside of the feature teams |

We've started using more and more integration tests to validate requirements. This means they're no longer the lost cousin of unit tests but rather the new kid on the block right up there with the acceptance tests when it comes to testing in an agile environment.

If you're using integration tests, leave a comment about how you're using them and how your peers see them.

As things would have it, I decided to help some fellow testers navigate the waters of software testing. It all started with a bit of a chat about how to approach testing when you're a junior and generally inexperienced: what to start reading, what to try out and all sorts of other "what" type questions.

So, to that end, I gathered 3 guinea pigs and presented them with an offering of 6 potential subjects I could cover. These were:

  • Exploratory testing
  • Automation
  • Trends in the industry
  • Learning material: blogs/books/videos/courses
  • Skills
  • Software development life cycle

We decided, based on each individual's time constraints, to go for a 1-hour format. Because of this, I set up a poll to better understand the priority in which to cover the agenda, and I capped it at only 4 subjects. The outcome of the poll was:

  1. Exploratory testing
  2. Automation
  3. SDLC
  4. Learning material

This whole learning exercise took the shape of a webinar, using Skype as the conferencing tool plus some screen sharing. To prepare, I put together a mind map to guide my thoughts and decided to spend the first 40 minutes on a bird's-eye view of the 4 areas. After that, I envisaged people would have questions that I could answer, or at least that I could point them in the right direction for self-enlightenment.

A bit of context here - out of the 3 participants, one wanted to switch between development and testing, another was only starting to get serious about testing and didn't quite know if it was for him, and the third already had 2 years of experience but wanted to hear fresh ideas.

Right. So 1 hour, 4 subjects. How many did I manage to cover in 2 hours? Exactly 1 - Exploratory testing.

And here are some of the lessons I learned:

  • don't try to cover too much if you're trying to squeeze everything into 1 hour
  • don't choose broad areas
  • the knowledge gap between participants can be a disadvantage: they will expect the subject to be approached at different levels, and the questions coming from some participants will probably bore the others or, even worse, baffle them even more
  • using Skype isn't the best thing in the world when you're trying to convey information, as you can't get a feel for people's shoulder shrugging
  • maybe a webinar wasn't such a great idea; it felt more like a coaching kind of session and would probably have gone a lot better as a 1-to-1 session
  • get them "warmed up" on the subject by sharing some articles/videos they can go over beforehand, so that any questions can be brought up during the online session
  • some of the feedback suggested a more practical session - this came from the most experienced of the 3 and probably falls under managing expectations and the knowledge gap

Well, this was more of an exercise for me to get feedback on what works and what doesn't. I'll definitely try to do more sessions in the future. If you're interested, let me know and we'll see what we can do.