DEWT3 experience report

Last weekend the third Dutch Exploratory Workshop in Testing (DEWT3 for short) took place. The ingredients were: a very nice hotel in the woods, lots of talk about testing, beer, whiskey, a small to moderate amount of sleep, stickies, and a group of fun and interesting people. (You can see them here.)

On Saturday the talks (and thus discussions) were about systems thinking. A few years ago I read Jerry Weinberg's "An Introduction to General Systems Thinking" and although it was a very interesting read, when it came to applying it to testing I never got further than: software is (part of) a system, so you can apply systems thinking to it. Of course, that's very much true, but it's also quite a vague piece of advice.

A primer on systems thinking

Enter James Bach, who kicked off DEWT3 with a primer on systems thinking. Systems thinking is just a way of thinking - just like logical thinking, analogical thinking, creative thinking, etc. - in which we approach a situation as being a system. So what's a system? It's a set of things in a meaningful interaction with each other. This definition raises all sorts of questions relevant in systems thinking:

Read more…

Lessons learned in some test automation

In the past two weeks I built a test tool in VBA (Visual Basic for Applications). I did this because two weeks ago my fellow tester showed me an important test we have to do at least once for each major release. The test consists of having the application generate three reports in Excel format. On two of these reports you apply a set of filters and take the sum of a few columns, rinse and repeat. Then you add several of those sums together, and the results of those calculations should match the numbers in the third report. So basically, the point of the test is to check if the numbers in the three reports add up. And it's a lot of work: about two days.

After being shown how to perform the test, the first thought that popped into my head was: "Boring!" The second thought was: "It's automatable!" And since there was little else to do - delivery of the new test environments was delayed - I changed it from automatable to automated.

So now, after two weeks, I have this tool in VBA. For each of the three reports it contains a sheet in which you define the sets of filters and sums. If you click a big button, the report is opened and the filters and sums are applied. There's also a fourth sheet to do the second set of calculations and to do the check if the numbers match. This last part is not done in VBA; it's all formulas in Excel.
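The core of the check - filter a report's rows, sum a column, add the sums together, and compare the total against the number in the third report - can be sketched in a few lines of Python. This is only an illustration of the logic (the actual tool does this in VBA and Excel formulas); the data, filter names, and column names below are made up.

```python
# Minimal sketch of the cross-report consistency check.
# The rows stand in for lines in the Excel reports; the filters
# and column names are hypothetical, not the real ones.

def filtered_sum(rows, column, **filters):
    """Sum `column` over the rows matching all filter key/value pairs."""
    return sum(row[column] for row in rows
               if all(row.get(key) == value for key, value in filters.items()))

report_a = [{"region": "north", "amount": 10},
            {"region": "south", "amount": 5}]
report_b = [{"region": "north", "amount": 7}]
report_c_total = 17  # the number the third report claims

combined = (filtered_sum(report_a, "amount", region="north")
            + filtered_sum(report_b, "amount", region="north"))
assert combined == report_c_total  # do the reports add up?
```

In the tool itself the filter and sum definitions live in worksheets rather than in code, so a tester can change them without touching the VBA.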

Read more…

DEWT2 - Becoming a context-driven tester

About a month ago (October 5th - 6th) I was in Driebergen to attend DEWT2, a peer workshop with as theme "Implementing Context-Driven Testing". As it turns out, implementing context-driven testing is not easy to do. That should not come as a surprise: it requires people to change, and that is difficult. Luckily, I'm not a manager wanting to implement context-driven testing, so I can dodge most of that problem.

However, I do like 'spreading the word' on context-driven testing, because I would like for there to be more context-driven testers in the Netherlands (and Europe and the world, of course). So to promote context-driven testing I think there are three things I can do:
1) set an example,
2) be available to other people,
3) leave bread crumbs.

Read more…

Measuring review coverage

Recently it occurred to me we have plenty of ways to measure test coverage, but there doesn't seem to be a way to measure review coverage. So today I decided to fix that. The result is the scale below with which you can measure review coverage. And just like all good scales, it goes all the way up to 11.

0) I'm sorry, what document?
1) I think I do remember someone mentioning that document.
2) I'm sure I have it somewhere – well, at least fairly sure...
3) Look, it's on my to-do pile!

Read more…

Let's Test 2012 - time travel advice

If I had a time machine and could travel back in time to give myself some advice before attending Let's Test 2012, it would be this:

- Go!
- Don't worry too much about which session to attend. One of the most valuable sessions you will attend, you'll choose because it has the least appealing abstract.
- Bring your running gear.
- If you ask Rikard Edgren nicely, he will give you a dead-tree version of his 'Little Black Book on Test Design'.
- Enjoy the food. Don't worry, it's not much of a challenge.

Read more…

Some thoughts after attending the 'Getting a Grip on Exploratory Testing' workshop

About two weeks ago I attended James Lyndsay's 'Getting a Grip on Exploratory Testing' workshop in Amsterdam. So it's about time to write something about it…

Now one of the things I dislike about workshop blog posts is that people will say "It was great! And person X is such a good trainer!" without saying much about the content of the workshop. However, I find myself now writing this post and thinking: I shouldn't post a full summary of the workshop. Not that it would spoil too much for any future attendee: most of the workshop consists of exercises and discussion. But posting a summary of a workshop that James put a lot of effort into creating just doesn't feel right. So let me just say this: the workshop was great and James is such a good trainer! :-D

Now that's out of the way, there are a few things from the workshop I'd like to share. Of course, the usual disclaimer applies: these are my thoughts on what was presented during the workshop. Any misrepresentations are my responsibility.

Read more…

The irony of scripted testing

A bit over a week ago, James Bach posted on Twitter:

"This video shows a nice simple contrast between heavy scripted testing and exploratory testing"
- James Bach (@jamesmarcusbach) March 30, 2012

So I watched the video, hoping to see something that would make me go 'Cool!', but instead I went 'Hmmm.'

First let me say that this video does get a few things right:
- Exploratory testing can be structured by using charters.
- Exploratory testing allows you to easily change your test approach based on your test results.
- Exploratory testing is very adaptable when confronted with inaccurate or changing requirements.
Yet notice how the above only talks about exploratory testing, because for everything the video gets right about exploratory testing, it gets something wrong about scripted testing - or rather, about how scripted testing works in practice.

Read more…

The Seven Basic Principles of the Context-Driven School - part not-three

This blog post should have been part three on the basic principles of context-driven testing. As you can see from the title, it is not. :-) The plan was to see if and to what degree the other schools of testing could accommodate the principles of the context-driven school - as a way of highlighting what makes the context-driven school different from the other schools. The problem is I'm not that interested in doing that at the moment, so the chance is small that an interesting post would result from it. Perhaps another time.

Read more…

The Seven Basic Principles of the Context-Driven School - part two

After the introductory post (to be found here) it's time to take a closer look at each of the basic principles. In the past weeks I found out that it's quite possible to take any one of these principles as a starting point for several different trains of thought. More importantly, I discovered a story in the principles: the first five principles are ways in which software testing is intellectually challenging, as stated by principle six. And principle seven then wraps it all up. So below you can find some of the thoughts I had on the principles and the story I discovered.

1. The value of any practice depends on its context.

To get a better understanding of this principle I started thinking: what if the value of a practice did not depend on context? What else could it depend on?

Read more…

The Seven Basic Principles of the Context-Driven School - part one

In the next several posts, I'd like to take a look at the seven basic principles of the Context-Driven School of software testing. The main reason for this is that I like what people belonging to this school say about testing, but I don't 'get' the seven basic principles. I do understand the principles when I read them, but I don't feel like I 'own' them.

So in the first few posts I will explore the principles as such. After that I will take a look at what other people have said about these principles. So please leave a comment if you know some good secondary sources on these principles!

The seven basic principles can be found here. They are also present as an appendix in "Lessons Learned in Software Testing" by Cem Kaner, James Bach, and Bret Pettichord, published in 2002. This means that these principles were written in a context I'm not familiar with. They are at least four years older than my career in software testing, which began in 2006. And they were written by people from the USA, while I live and work in Western Europe.

Read more…