Tuesday, February 3, 2015

On The Easy Way and the Fallacy of the Simple

I've forgotten which of the several books by Jerry Weinberg I first read this in. The gist of the lesson is that when a proposed solution starts "All you have to do is..." it is a safe bet that this "solution" is not the solution that is needed.

It's funny how that works.

How many times has some expert walked in and told you, your boss, or someone "important" that "The answer to your problem is {blah}. Clearly the best way to handle the situation is to... blah blah blah blah blah..." Followed by something where it looks a bit like they expect you to accept that 1 + 2 = Magenta.

Yeah.

It is kind of like "We need to test this stuff, but it is hard and complex - figuring out the data relationships and how the values interact. So, we'll just pull data from production and that will be good enough."

Now, I've mentioned that very idea in articles from time to time, and given presentations on how that can be done in integration testing and other forms. In a nutshell, there is no "just" about it.

If you are looking for a shortcut to figuring out test data, this is an OK place to start. But it is not the end. Except for some folks it is the end. That is too bad. Frankly, I think it's a huge mistake to treat it as the end, but I could be oversimplifying things.

The issue isn't about the data. The issue is more subtle than that.

Where I see that happen, generally, is where people don't understand the system they are supposed to be experts in.

(OK, Context alert - I understand some folks will say "But, Pete, not everyone gets a chance to learn about the system they are supposed to test." I understand, really, and I rarely see those folks being the ones making the assertions about "Just use production data..." for testing. OK? So, consider this a "your mileage may vary" disclaimer...)


Ummm, yeah. That is kind of brutal. It also tends to be what I see time and again.

Why? It's complicated.

Well, duh - unless you are working on a piece of software with one possible value for its one variable, it gets complicated quickly. Frankly, even then it can get complicated. Software is complicated.

Get over it!

OK. Whew. Sorry. Let me try again.

What can we do to make it a little less complicated?

We can look at the variables in question - like the possible values that will send the software down different paths. If we have specific conditions we want to exercise or recreate, what does it take to make that happen? What combinations of values do we need?
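
To make that concrete, here is a minimal sketch in Python - the variable names and values are invented for illustration - of enumerating the combinations once you have picked out the interesting values for each variable:

```python
from itertools import product

# Invented example: interesting values for three variables that might
# send the software down different paths.
account_types = ["retail", "commercial", "trust"]
balances = [-1, 0, 1, 9_999_999]           # below, at, and above boundaries
statuses = ["active", "frozen", "closed"]

# Every combination. Fine for small sets; bigger sets may call for
# pairwise or risk-based selection instead of brute force.
for combo in product(account_types, balances, statuses):
    print(combo)
```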

Maybe some basic data analysis to start? What are the ranges of values for the variables? What combinations lead to what paths?
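
A rough sketch of what that first pass might look like, assuming a flat extract of production data in a CSV - the file and column names here are invented:

```python
import pandas as pd

# Invented file and column names - adjust to whatever your extract
# actually looks like.
df = pd.read_csv("production_extract.csv")

# Ranges of values for the numeric variables.
print(df.describe())

# Which combinations of path-determining fields actually occur,
# and how often.
print(df.groupby(["account_type", "status"]).size().sort_values(ascending=False))
```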

We can check the business rules, right? Maybe have coffee with the people who actually use the software? Maybe spend some time sitting with them and seeing how they do the things we're trying to figure out how to test, right?

Maybe we can evaluate the logic within the existing code to see what it does - and see what the changes might do, right?
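
As a contrived example (not from any real system), the branches in the code often tell you directly which values your test data needs:

```python
# Contrived function: each branch points at a value we need in our
# test data - an invalid weight, a member, a non-member with a small
# parcel, and a heavy parcel.
def shipping_fee(weight_kg: float, is_member: bool) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")   # invalid-input path
    if is_member:
        return 0.0                                    # member path
    if weight_kg < 5:
        return 4.99                                   # small-parcel path
    return 4.99 + (weight_kg - 5) * 1.20              # heavy-parcel path
```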

Pretty straightforward, isn't it? So why do so many people punt and say "We'll just run some transactions from production through"?

Data and transactions "from production" might be a start. Then look at the transactions that are weird - the ones that cause problems or odd behavior in the wild. Of course, I've found that chatting with the people who use the system on a regular basis can give us insight into what these types of transactions are.
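
One way to go hunting for those, sticking with the invented extract from earlier: count how often each combination occurs and pull out the rare ones (the threshold below is arbitrary).

```python
import pandas as pd

df = pd.read_csv("production_extract.csv")   # invented name, as before

# How common is each transaction's (type, currency) combination?
counts = df.groupby(["txn_type", "currency"])["txn_id"].transform("count")

# The rare combinations are the candidates for "weird" - worth a
# closer look, and worth asking the users about.
weird = df[counts < 10]
print(weird.head())
```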

Doing that might give more information than simply sitting with them and chatting about what they do. I've found it helps to ask "What makes things go pear-shaped?" Ummm - maybe "What kinds of things do you run into once in a while - maybe every six months or once a year - that take intervention by someone?" It could be something odd, or something really, really common but with a twist that makes it uncommon.

Having a coffee with them, or buying them a bagel and a coffee, might get you a little extra help in finding information. It might get the experts to spend a little time with you working through the problem transactions. It might get you some really deep insights into how people actually use the software.

I find that to be valuable on many levels.

So, what about the people who find this too hard? The folks who are always surprised when there are problems, even after "it was tested"?

Maybe because they "are busy"? Maybe because there is a pile of stuff to do, and all those other things take time and get in the way? I'm not sure. Maybe all of these things.

Maybe coffee or a cigarette was more important.

Simply put, figuring things out takes time and right bloody hard work. If it were easy, anyone could do it. If it were easy, they would not need to pay someone to do it.

Of course, I could be wrong. I could be making testing more complicated than it needs to be. I might be being uncharitable to the people who don't go through the effort I suggest.

What I do know is that many of the systems people "test" with the "all you have to do is..." approach tend to have issues that get in the way of people actually using the software. Of course, it may not be their fault. After all, they used data from production to test it. Is it their fault that the data from production they used did not cause the problems other data from production caused?

Sorry about shouting earlier. I'll go have a tea and calm down a bit. I get a little upset when people put out rubbish as if it is a revelation from beyond...
