Information debt

Last week the following happened on Twitter:

New word of the day: "information debt". It's like technical debt, but related to information, communication, models, docs, visualisations.
- Joep Schuurkes (@j19sch) July 8, 2014

@j19sch: New word of the day: "information debt" => blogpost coming up ?!?!?
- Simon P. Schrijver (@SimonSaysNoMore) July 8, 2014

@SimonSaysNoMore Wasn't planning to, but now you made me think about it, so yes. Damn you! :-P
- Joep Schuurkes (@j19sch) July 8, 2014

In case you don't know what technical debt is, you might want to read this first. (It's the oldest source I could find of the technical debt-Tetris analogy, by the way. If you know of an older one, please leave a comment.)

Read more…

Why I am context-driven

(This post was first published on the DEWT site as part of a blog post series by the DEWT members.)

Why am I context-driven? Because it's more fun.

That's all there is to it.

Of course I could argue that becoming context-driven has made me a better tester and I do think it has. Yet it's not the reason I became a context-driven tester. Besides, how would I prove it made me a better tester?

So no, I am context-driven because it's more fun. Because it sees testing as an intellectual challenge. Because it allows human uncertainty to be at the core of what it is. Because it tells me that I'm in charge of what I do and how I do it. Because it encourages me to dive in at the deep end of some complex problem, trusting on my skills to get out on top and enjoying every step of the way.

Read more…

Five leftover thoughts on software testing

Defect severity 'Skynet': If this gets to production, we're screwed. Really screwed.

The problem with templates is that the people who need them shouldn't be writing the document in the first place, and that the people who don't need them have trouble shaking off the vague sense of obligation to adhere to them.

Happy path application: application that works perfectly fine as long as you stick to the beaten path with your eyes half-closed.

The contents of a test plan can be divided into the following four categories: good stuff, cover your ass, bullshit, trivialities.

Buddhist software development: there is no progress, only a never-ending cycle of suffering.

Read more…

Why I dislike test management

As I am enjoying these short, not very nuanced, not extremely well thought out blog posts, here’s another one.

Some people seem to think that it makes sense to think of testing as a project within a project, so they apply project management tools and techniques to testing. This simply doesn’t work.

Because what tools and techniques do they use? A plan with milestones no one is ever going to make, as unexpected stuff tends to happen. A budget that is too tight because it's based on that same plan. Entry criteria that are not met, but never mind, we're running out of time so you need to start testing anyhow. And finally exit criteria that we fail to meet as well, but hey, we'll go live anyway, because the software really isn't that bad (or so we hope).

So in the end, a lot of time and effort is spent on producing documents that are of little use in guiding the actual testing effort. The only thing they do is give some people a warm and fuzzy illusion of control.

Read more…

Test cases, can't do 'm no more

Continuing the style of my previous blog post...

Some days ago I found myself no longer able to think in test cases while testing. Of course, it's not as if I was using test design techniques to generate test cases one day and woke up the next day to find myself unable to do it anymore. But still, about a week ago I figured I had explored enough to be able to write down the test cases I wanted to execute and found myself staring at a blank page (well ok, empty Excel sheet) feeling alienated from what I was planning to do.

So what do I mean when saying "thinking in test cases"? Simply put, you take a piece of functionality, let a test design technique loose on it and there you go: a set of test cases to execute. Combine test design techniques over the different pieces of functionality as required and you're all covered test strategy-wise. Or that's the idea.

Read more…

Why your Product Risk Analysis isn't

Ok, going to try to keep this short and ranty (rantish?).

Typical test advice is to do a Product Risk Analysis (PRA, mind the capitalisation!) and based on that you decide what to test and how thoroughly. The most common way to do a PRA is with a workshop. Put some people in a room with a lot of stickies, let them list all the risks they can think of and then have them score them. Et voilà, Product Risk Analysis is done!

But that doesn't really make sense, now does it? If someone were to give you an object and asked you "What could possibly go wrong with this?", what would you do? Gather a bunch of people with some knowledge of the object, yet no actual experience with it and do a workshop imagining things that could go wrong? That's not an Analysis (capital A!), that's a SWAG – a scientific wild ass guess.

Read more…

DEWT3 experience report

Last weekend the third Dutch Exploratory Workshop in Testing (DEWT3 for short) took place. The ingredients were: a very nice hotel in the woods, lots of talk about testing, beer, whiskey, a small to moderate amount of sleep, stickies and a group of fun and interesting people. (You can see them here.)

On Saturday the talks (and thus discussions) were about systems thinking. A few years ago I read Jerry Weinberg's "An Introduction to General Systems Thinking" and although it was a very interesting read, with respect to applying it to testing I never got further than: software is (part of) a system, so you can apply systems thinking to it. Of course, that's very much true, but it's also quite a vague piece of advice.

A primer on systems thinking

Enter James Bach, who kicked off DEWT3 with a primer on systems thinking. Systems thinking is just a way of thinking - just like logical thinking, analogical thinking, creative thinking, etc. - in which we approach a situation as being a system. So what's a system? It's a set of things in a meaningful interaction with each other. This definition raises all sorts of questions relevant in systems thinking:

Read more…

Lessons learned in some test automation

In the past two weeks I built a test tool in VBA (Visual Basic for Applications). I did this because two weeks ago my fellow tester showed me an important test we have to do at least once for each major release. The test consists of having the application generate three reports in Excel format. On two of these reports you apply a set of filters and take the sum of a few columns, rinse and repeat. Then you add several of those sums together and the results of those calculations should match the numbers in the third report. So basically, the point of the test is to check if the numbers in the three reports add up. And it's a lot of work: about two days.

After being shown how to perform the test, the first thought that popped into my head, was: "Boring!" The second thought was: "It's automatable!" And since there was little else to do - delivery of the new test environments was delayed - I changed it from automatable to automated.

So now, after two weeks, I have this tool in VBA. For each of the three reports it contains a sheet in which you define the sets of filters and sums. If you click a big button, the report is opened and the filters and sums are applied. There's also a fourth sheet to do the second set of calculations and to check if the numbers match. This last part is not done in VBA; it's all formulas in Excel.
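To make the idea of the check concrete, here is a minimal sketch in Python (not the original VBA tool, which I don't have access to): filter two source reports, sum a column, and compare the combined totals against a third report. All data, column names and filter keys here are made up for illustration.

```python
def filtered_sum(rows, filters, column):
    """Sum `column` over the rows matching all key/value pairs in `filters`."""
    return sum(row[column] for row in rows
               if all(row[key] == value for key, value in filters.items()))

# Two source reports, as lists of records (stand-ins for the Excel sheets).
report_a = [
    {"region": "north", "amount": 10},
    {"region": "south", "amount": 5},
]
report_b = [
    {"region": "north", "amount": 7},
    {"region": "south", "amount": 3},
]

# The third report claims these totals per region.
report_c = {"north": 17, "south": 8}

# The actual check: for each region, the sums from A and B must add up to C.
for region, expected in report_c.items():
    actual = (filtered_sum(report_a, {"region": region}, "amount")
              + filtered_sum(report_b, {"region": region}, "amount"))
    assert actual == expected, f"{region}: {actual} != {expected}"

print("all totals match")
```

The point is the same as in the tool: the filters and sums are data, not code, so adding a new check means adding a row of configuration rather than writing new logic.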

Read more…

DEWT2 - Becoming a context-driven tester

About a month ago (October 5th - 6th) I was in Driebergen to attend DEWT2, a peer workshop with as theme "Implementing Context-Driven Testing". As it turns out, implementing context-driven testing is not easy to do. That should not come as a surprise: it requires people to change and that is difficult. Luckily, I'm not a manager wanting to implement context-driven testing, so I can dodge most of that problem.

However, I do like 'spreading the word' on context-driven testing, because I would like for there to be more context-driven testers in the Netherlands (and Europe and the world, of course). So to promote context-driven testing I think there are three things I can do:
1) set an example,
2) be available to other people,
3) leave bread crumbs.

Read more…

Measuring review coverage

Recently it occurred to me we have plenty of ways to measure test coverage, but there doesn't seem to be a way to measure review coverage. So today I decided to fix that. The result is the scale below with which you can measure review coverage. And just like all good scales, it goes all the way up to 11.

0) I'm sorry, what document?
1) I think I do remember someone mentioning that document.
2) I'm sure I have it somewhere – well, at least fairly sure...
3) Look, it's on my to do-pile!

Read more…