Sunday, December 3, 2017

End to End Testing and Integration Testing Revisited

After a crazy busy "Black Friday" at work, I was hiding at the small local coffee shop enjoying a really good cup of coffee. It was really very good coffee. Very nice.

I was halfway through my first cup (I limit myself to 2 at this shop), looking out the window, glad I was not part of the mob scene of young people debating going to the local bar, the fancy coffee shop, or replenishing their supplies from the "party" store next door. (There were loads of parties Wednesday night and Friday in the area.)

A guy I sort of knew from the local test meetup pulled up a seat and began talking about how awesome things were at work. He was really, really proud and needed to tell someone about it. Their End-to-End Test Automation had been up and running for 3 releases and it was AWESOME! They made some tweaks and it ran clean and fast and all you had to do was click "Start" and it ran.

OK - Sounds pretty cool so far.

It has 300 steps - well, closer to 400, but, you know... (More coffee for me, this might be a long conversation.) And the first few times it ran it found a bunch of bugs and they got them all fixed and now it is running really, really clean!

Wow - OK, that sounds like something. So I asked the next question that seemed logical to ask...

So, this has been running for 3 releases? You found a bunch of bugs in the test environment early on, right? (Yeah, we did - saved our butts.) Cool. That's good. So, how about bugs in the production environment? Got any cropping up there?

Well, there are always bugs in production, right? Something always goes wrong and there really isn't anything we can do to prevent them in advance. You know how it is. There's one that keeps cropping up. Seems like it reappears every few weeks. We just fix the data and move on.

Hmmm. So, is there a way to recreate the scenarios in the test environment and see if you can head off these issues you had in production? Is there a way to maybe isolate why you need to "fix" data every few weeks?

Well, it's really complicated. We don't control that part of the data. Another team does. They claim their stuff works fine and the problem is with our stuff, so it is our problem.

Ouch. That kind of sucks. (Thinking about breaking my rule and going for a third cup of this really good coffee.) Ummm, what about integration testing? Is that a thing for you? Or not really?

Well, all the teams are testing their stuff in the System Integration Test environment we have. But we never seem to find these kinds of bugs there.

Hmmm. Yeah. That can be frustrating (wishing I were drinking a nice red wine at this point). What would it take to find those kinds of bugs there?

We can't do that. We're Agile. We don't do that kind of thing. But hey - I see my wife outside - Gotta go! Later - See ya next month maybe at the meeting!

I sigh, realize I'm halfway through my third cup of coffee, and hear my friend the Unicorn start laughing. YEAH - the Unicorn! I had not seen him in MONTHS.

He wasn't there when you talked about "Pulp Fiction Integration Testing," was he? Too bad.

(I blogged about it a long time ago here.)

So the Unicorn and I chatted for a while.

The problem we both have with much of the "automated end-to-end testing" stuff is that thinking humans tend to stop thinking and let things run on auto-pilot. If everything is green at the end, there are no problems, and anything that comes up in production gets written off as something that could not have EVER been anticipated.

Except that, often, they CAN be anticipated. Running tests in an "environment" does not make them "integration" tests. It does not mean they are telling you anything of value.

I might suggest this idea: try getting other teams involved, other people who deal with systems or applications that interact with yours. Try comparing notes. Try setting up scenarios emulating what actually happens when the software is released into the wild.

Can you evaluate the touch points? Can you monitor the database or the logs and see if there are any warning signs popping up? What happens if you keep those multiple scenarios running for an extended period of time? What happens when OTHER scenarios get introduced into the middle of this?
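To make the "watch the logs" idea a little more concrete, here is a minimal sketch in Python of the kind of watcher I have in mind. The log path and the warning-sign patterns are assumptions for illustration; your application will have its own locations and its own tell-tale messages. Left running while the long scenarios churn away, something like this turns "everything was green" into "everything was green, and nothing suspicious showed up in the logs either."

```python
# Minimal sketch of a log watcher to run alongside a long-lived scenario.
# LOG_PATH and WARNING_SIGNS are hypothetical; substitute your own.
import re
import time

LOG_PATH = "/var/log/myapp/app.log"  # hypothetical application log location
WARNING_SIGNS = re.compile(
    r"\b(WARN|ERROR|deadlock|timeout|constraint violation)\b",
    re.IGNORECASE,
)

def watch_log(path: str, poll_seconds: int = 5) -> None:
    """Tail the log while the scenario runs and report anything suspicious."""
    with open(path, "r") as log:
        log.seek(0, 2)  # start at the end of the file; only new lines matter
        while True:     # runs until interrupted (Ctrl-C)
            line = log.readline()
            if not line:
                time.sleep(poll_seconds)  # nothing new yet; wait and retry
                continue
            if WARNING_SIGNS.search(line):
                print("Possible warning sign:", line.strip())

if __name__ == "__main__":
    watch_log(LOG_PATH)
```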

What happens when something "impossible to happen" is introduced? Because all those things that "no real user/customer/person" would ever do? Oh yeah. They'll do them. They'll do stuff because it makes their life easier - or they think it will - until they do it and stuff hits the fan and goes EVERYwhere. Then their life will get very unhappy.

Then they'll look at you for why you screwed things up.

Then you'll need Mr Wolfe to help clean up the problem.

Better you see the problem in advance and cut down those calls to Mr Wolfe.

I finished my coffee. The Unicorn finished his and we wished each other a Happy Thanksgiving and went our separate ways.

1 comment:

  1. Yes, I've seen things that were hailed as having the "best testing ever" fall over when they were released into the production environment. In this case, it was the original specification of the project that had not been drawn up with any reference to how the existing system was used in real life by real users. And the consultants who had done the spec were no longer involved with the project, and all the in-house people who were on our side of the project had left before it all went pear-shaped.

    The next project we kicked off was properly specified in advance, with the testers involved in requirements gathering and challenging users' and managers' assumptions. It was great. But it took twice as long as the first project, and eventually the plug got pulled on the entire thing because the company's owners found a cheaper way of doing things (they bought a rival company's product; then they liked it so much, they bought the rival company and sacked all their in-house developers and testers).
