Wednesday, October 30, 2013

LIVE! Agile Testing Days 2013 - Day 2! In Potsdam!

Wednesday dawned bright and early (well, it dawned about the same time it always does) on a group of very tired conference participants.  Last night was the "Most Influential Agile Testing Professional" awards banquet (congratulations to Markus Gaertner, who won!).  This also featured a Halloween theme, complete with costumes and ghoulish decorations.

Loads of fun, but it made getting to Lean Coffee nearly impossible and cost me time getting into the "Early Keynote" by Chaehan So.

So, here we go!

The "Early Keynote" title is "Business First, Then Test" - which pretty well sums up the core ideas presented.  Begins with a fair description of product owner and tester having potential areas of conflict and the problems that result from that.  A simple (well, maybe not simple - common perhaps?) approach to addressing this is to share experiences and discuss the intent in an safe environment.  Chaehan's example was "drink beer" (Pete: yup, I can agree!)

Instead of mapping use cases/user stories to abstract buzz-wordy terms, use the same use case or user story name/identifier the Product Owner is familiar with.  Pretty solid idea - not new (to some of us), but important to state.

References coming from the use cases/user stories, including data relationships, can result in complexities not obvious to the technical staff - often caused by abstraction meant to "simplify" the representation.  However, sometimes the representation itself is the issue.  (I'm not capturing this idea well, but I think this covers the gist of it.)

The idea of relationships and abstraction argues against the common "IT/Geek" approach of simplifying for themselves - DON'T DO THIS.  Keep the reductions at a business intent level.  Chaehan suggests doing THIS by mapping the user story across multiple channels - not redefining the stories to track to the channels themselves.

If you are working on a web-based ordering system, the "story" is replicated in each usage channel.  This makes for a complex (and difficult to execute) test path and representation of needs, process and the presentation of information - even if the implementation itself is complex.

Keep the information as simple as possible!  This is the point of ALL reporting to Management! 

Design to Community - D2C - create a simple design that reflects what needs to be done.  Like many things this allows for multiple levels of abstraction - and avoids the itchy-scratchy feeling that some people have in relation to having tests/progress reported to them in terms they don't use.

Discusses how the cost curve of correcting problems in the application is usually presented in a manner appropriate to "waterfall" and not so much to Agile.  This is an interesting view, particularly if the commonly referenced hockey-stick graph/chart is used (yeah, the same one shot to pieces in "The Leprechauns of Software Engineering").

==

Second Keynote - Christian Hassa on "Live it - or leave it! Returning your investment into Agile"

Describing his presentation with Matt Heusser at the Agile Conference in Nashville, Matt made the observation that "scaling Agile" was interesting - but how does that relate to testing?  (Pete Comment: gulp)

Scaling Agile is often presented as AgileWaterScrumFall - OR Disciplined Agile Delivery (DAD).  He then draws comparisons to the "Underpants Gnomes," who have a business plan something like:
Phase 1 - collect underpants;
Phase 2 - ??;
Phase 3 - profit.

Except the problem is that Phase 2 thing.  Most people mistake "get ready to produce" for Phase 2 - it actually belongs in Phase 1.

Scaled Agile Framework (SAFe) - not so different from the Underpants Gnomes.  There are still gaps in the model - holes that seem to sit in Phase 2.

If we fail to focus on unlocking value, and instead focus on costs, we miss opportunity.

SAP "Business by Design" model is not that far from this either.  The published estimations from 2003 simply failed to materialize.  The problem was related to attemptign to model the product on current clients/users of SAP, not on what the intent was. 

Presents an example of applying (mis-applying?) Scrum to a given list.  As the team worked forward, the backlog of requirements grew.  How?  The team dove in and worked aggressively and diligently on the project - and still the backlog grew.

After a high-level meeting with "what is wrong?" as the theme, it dawned on the Product Owner that the problem with the backlog was attempting to identify all the possible requirements instead of focusing on the core aspects that were needed/wanted so the product could be delivered /finished/ on time.  The additional ideas may be incorporated into future versions/upgrades - but get the stuff out there so the product can be used, then people can figure out what is really needed.

"Your job as developers is not to develop software, your job is to change the world." Jeff Patton

Assertion: "Your job as a tester is NOT to verify software; your job is to verify the world is actually changing (fast enough)."

Yup.  The problem we in Dev (including testing) have is that we're a bit like Pinky & the Brain - we want to change the world/take over the world, but we fail to do so.  We don't look long enough - we focus on the minutiae and not the big picture.  (Pete Comment: OK, I'll reserve judgement, though I like the P&B reference!)

Turns to Scaling TDD for an enterprise.  Cyclical feedback loops (loops within loops) can provide insight within each pass/iteration. (Pete note: ok - seems interesting - consideration needed here on my part.)

Turns to Impact Maps as a tool to facilitate communication / transparency with stakeholders.  Interesting example walk-through (though it sounds a bit hypothetical to me) of applying the core ideas.  Goals/Actors/Impacts/Deliverables - (Pete: OK - I get that.)
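
(Pete: to make that concrete for myself, here is a rough sketch of how an impact map nests - goal, then actors, then impacts, then deliverables.  The goal and names below are mine and entirely hypothetical, not Christian's:)

    # A rough sketch of an impact map as nested data:
    # Goal (why) -> Actors (who) -> Impacts (how behavior changes) -> Deliverables (what).
    # All names below are invented for illustration.
    impact_map = {
        "goal": "Grow online orders 20% this year",
        "actors": {
            "returning customer": {
                "reorders in one click": ["order-history page", "saved payment details"],
            },
            "support agent": {
                "resolves order issues faster": ["order-status lookup tool"],
            },
        },
    }

    # The payoff: every deliverable must trace back to the goal.
    for actor, impacts in impact_map["actors"].items():
        for impact, deliverables in impacts.items():
            for d in deliverables:
                print(f"{d} -> {actor} -> {impact} -> {impact_map['goal']}")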

Pete's question is: does this translate well to people who may not recognize the intent?  I suspect it does - by forcing the consideration of what seems "obvious" (by some measure) to someone (who may or may not matter).

By using impact maps, we can then apply "5 whys" to features - (Pete: that is an interesting idea I had not considered.  I kinda like it.)

Working on scaling /anything/ tends to get bogged down in goals - create a roadmap of goals to define what it is, and where it is, you'd like to go.  Predicting the future is not the point of defining goals - instead, look to see what you'd like to achieve.

Test Goals & impacts are similar in that they can act as guides for Scale, Measure and Range of each goal/activity.  Finally - Deliverables - Smaller slices delivered to production make it actually easier to the  get the product out there and improve the product while still developing it (Pete: fair point.)

Story maps allow us to examine what it is that we are trying to implement, no?  Mapping the story can make clear aspects we have not considered.  Rather than "aligning to business goal" we can align to "actor goal" - this can help us view our model and see flaws, holes or conflict.

By defining a "likely order of events" we can see what the experience of the user will be, without defining what the software does.  It allows us to remain true to the spirit of the purpose through each potential path. 

This, in combination with the other tools described, helps measure progress and check scope creep.  If we can identify this, we can then identify the purpose more clearly and spot potential problems being introduced.

We can also use story maps to control flow, define relationships between components and find potential conflict.  As we get more information we can define the higher/lower priority decisions around the story maps.  The higher the priority, the finer/more detailed the story maps become.  The lower the priority, the chunkier and more nebulous the story maps become.
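
(Pete: a rough sketch of that idea as data - a backbone of activities in "likely order of events," with stories sliced by priority.  The activities and stories are invented, mine not Christian's:)

    # A rough sketch of a story map: backbone activities in likely order of events,
    # each with story slices by priority. All names invented for illustration.
    story_map = [
        ("browse catalog", {"high": ["search by name"], "low": ["filter by rating"]}),
        ("place order",    {"high": ["add to cart", "pay"], "low": ["gift wrapping"]}),
        ("track delivery", {"high": ["show order status"], "low": ["live courier map"]}),
    ]

    # The first release slice walks the whole backbone before any low-priority detail:
    first_slice = [story for _, slices in story_map for story in slices["high"]]
    print(first_slice)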

WOW! a Real example! (as opposed to hypothetical) 



Sprints expanded to 4 weeks in this case.  The first sprint had issues (OK, not uncommon), yet by the end of Sprint 2 the core functions were in place.  By focusing on the MOST IMPORTANT features, the top-priority story/story maps could be implemented cleanly, expanding ideas/needs as the project developed to include the lower-priority needs.

Pete: OK - completely lost the thread of his last points but I got pictures!!



General gist - COMBINE TOOLS and TECHNIQUES to make things work.  A SINGLE tool or technique may have value; by combining them we can balance things a bit better.

Book Recommendations -

How to Measure Anything - Douglas W. Hubbard
Impact Mapping - Gojko Adzic

And BREAK TIME!

==

Track Session - Gitte Ottosen - Making Test-Soup on a Nail - Getting from Nothing to Something

Gitte is a Sogeti consultant speaking on Exploratory Testing.  OK, here we go - with a unicorn!!



Starts with James Bach's (classic) definition of Exploratory Testing.  (Pete: yeah, the one on the Satisfice page)

Describing fairly common challenges in project limitations, liabilities and personality conflicts, and the potential for problems.  The PM does not want "too many hours" used - views testing as overhead.  And the Test Management Org wants "everything documented... in HP QC."

Fairly obvious solution - keep it simple.  When people pretend to be Agile, it is a challenge for everyone involved.  The challenge is to make things work in a balanced way, no?  Gitte was not an "early adopter" of mind maps, and described how she created bullet lists and converted them later - OK, I can appreciate this.  Then there were issues with the documented structure of the app - which was non-existent.  This is something we all get to play with sometimes, no?

So what's available?  Boundary analysis, pair-wise (orthogonal arrays - same thing, different name), classification trees, etc.  (Pete: Yup - all good approaches.)  AND she plugs Hexawise (Pete: yeah, way cool product!)
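
(Pete: to show myself why pair-wise shrinks the work, here is a toy greedy all-pairs picker - emphatically NOT Hexawise's algorithm, and the parameters and values are invented:)

    # A toy greedy all-pairs picker, only to show the size win of pair-wise testing.
    # NOT Hexawise's algorithm; parameters and values are invented for illustration.
    from itertools import combinations, product

    params = {
        "browser": ["Firefox", "Chrome", "IE"],
        "os":      ["Windows", "Linux"],
        "locale":  ["da-DK", "en-US"],
    }
    names = list(params)
    rows = [dict(zip(names, combo)) for combo in product(*params.values())]

    # Every value pair (across two different parameters) that needs covering:
    uncovered = {frozenset([(a, va), (b, vb)])
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}

    def new_pairs(row):
        pairs = {frozenset([(a, row[a]), (b, row[b])]) for a, b in combinations(names, 2)}
        return pairs & uncovered

    tests = []
    while uncovered:  # greedily keep the row that covers the most not-yet-covered pairs
        best = max(rows, key=lambda r: len(new_pairs(r)))
        uncovered -= new_pairs(best)
        tests.append(best)

    print(f"{len(rows)} exhaustive combinations vs {len(tests)} pair-wise tests")

(For this toy 3x2x2 grid, that drops 12 combinations to 6 tests - and the gap widens fast as parameters grow.)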

On examination, it was possible to look at "cycles" and how users/customers are expecting the new app to work.  The "documented requirements" did not exist - and maybe they were never discussed and understood.  So the question becomes: when expectations differ between dev/design folks and customers/product owners, what happens?  "Learning opportunity."

Decision trees and process flows can help with this - examine what the customer/user (or their representatives) expect to happen and compare that with what development builds - as a whole.  Then exercise the software.  See what happens.  THEN exercise the things of interest.

Testers (Gitte included) worked to support the team by "translating" the user stories into English - well, because of the team's distribution, writing them in Danish was kind of a problem, as some folks spoke/wrote Danish (Danish company) but others did not - ewww.

The good news is, by exercising the software rather than documenting what she was going to test, she found problems.  The product owner noted this and thanked her.  By focusing on testing, she found she enjoyed testing again (Pete note - yeah, that helps).

Interesting variation on mind maps - use them to document testing: instead of a step-by-step approach, simply mind map the function points to be tested.  (Pete Note: I do something similar to define sessions and charters for the sessions.)
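
(Pete: roughly how I reduce such a mind map to session charters - the app and branches here are hypothetical:)

    # A rough sketch: a mind map of function points as nested data,
    # one exploratory session/charter per branch. App and names invented.
    mind_map = {
        "Ordering": {
            "Cart":     ["add/remove items", "quantity limits"],
            "Checkout": ["payment methods", "address validation"],
        },
        "Accounts": {
            "Login": ["lockout rules", "password reset"],
        },
    }

    # Each branch becomes a charter instead of a step-by-step script:
    for area, branches in mind_map.items():
        for branch, points in branches.items():
            print(f"CHARTER: explore {area} / {branch} - focus on {', '.join(points)}")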

==

Track Session: Myths of Exploratory Testing - Louis Fraile and Jose Aracil

Starts with a fairly common BANG - "Who is doing Exploratory Testing?"  Loads of hands go up.  (Pete note: ET by what model? Are they doing what I think of as ET?)  (Note - they also did a pitch that they are looking for people to join the company - boys, is that cricket?)

To do ET well, you need to...
"Inspect and Adapt" - change your thinking and observe what is going on around you. 
"Be creative/Take advantage of you're team's  creativity" - let people do their thing
"Additional to other testing" - don't just do ET - do other testing "like automation"
"Quickly finds defects" -  wait - is that a key to success or an attribute of ET?
"Add value to your customer" - hmmmmm what does this mean?
"Test Early! Test Often!" - what?

Myths...

Myth 1 - ET is the same as Ad-hoc Testing
"Good ET Must be planned and documented" -
You must know -
what has been tested;
when it was tested;
what defects were logged.

Some ideas -
Testing Tours - Whittaker
Session Based Testing - Bach/Bolton
Something Else (Huib suggests Mike Kelly's ideas, while trashing Whittaker's tour ideas)

Myth 2 - ET Can't be measured
Multiple measurements available - SBTM, etc.
Pete comment - blah - what?
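
(Pete: OK, to be fair, SBTM session sheets do give you countable things.  A rough sketch, with invented sessions and numbers:)

    # A rough sketch of SBTM-style measurement: session sheets split time into
    # on-charter testing, bug investigation and setup. Sessions/numbers invented.
    sessions = [
        {"charter": "Explore checkout flow", "minutes": 90, "test": 60, "bug": 20, "setup": 10, "bugs_found": 3},
        {"charter": "Explore login lockout", "minutes": 60, "test": 45, "bug": 5,  "setup": 10, "bugs_found": 1},
    ]

    total = sum(s["minutes"] for s in sessions)
    testing = sum(s["test"] for s in sessions)
    print(f"{len(sessions)} sessions, {total} minutes, "
          f"{100 * testing // total}% on-charter testing, "
          f"{sum(s['bugs_found'] for s in sessions)} bugs found")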

Myth 3 - ET is endless
Pete comment - no idea what their point here is.  sorry

Myth 4 - ET Can't reproduce defects
Be an explorer - really?
Be like David Livingstone from the video/computer game -
    Guys, he was a real person ( http://en.wikipedia.org/wiki/David_Livingstone ),
         not just a guy in a video game.
    Record video, use screen capture, analog recording (pen & paper)
Empower developers - adopt one.
   Was that video really needed?

Myth 5 - ET is Only for Agile Teams
Pete comments 
   - what?
   - CMMi works with ET?  REALLY?  By what definition of "CMMi Works?"

Myth 6 - ET is not documented 
Some testers do things by NOT doing the things in "Lonely Planet" -
and then there are the ones who DO the things in "Lonely Planet"

Pete comments - here to end
  - stretching the metaphor from Whittaker's tours just a little?

What?  "They don't do TDD with ET?"
Boys - TDD is a DEVELOPMENT tool, not a TEST TECHNIQUE.
ET is an APPROACH, not a TECHNIQUE.

DIFFERENCES MATTER.  (shouting in my blog - not the room)

===

Keynote - Dan North (@tastapod) - Accelerating Agile Testing - beyond automation

Opening assertion - testing is not a role, it is a capability.

The question is - How do Agile teams do testing and how does testing happen?

Much effort is put into things that may, or may not, move us along.  The idea of backlog grooming is anathema to Dan North.  (Pete - something to that.)  The thing is, in order to improve practices, we need to improve capabilities.  When people are capable of doing something, it is easier for them to actually do that thing.

We can divide effort into smaller pieces; sometimes this makes sense, sometimes there are problems.  Sometimes there is a complete breakdown in the economic balance sheet of the software.  When teams shift to "short waterfalls" you get "rapids."  Rapids are not the same as "rapid development."  Sometimes these things don't make it better.

"User Experience is the Experience a user has." (OK - that was a direct quote.)  Translated - people will have an emotional reaction (experience) when they use the software/app/whatever.  Thus, people line up all night and around the corner to buy the newest Apple device.

"Don't automate things until they are boring."  If you are 6 sprints into something and have not delivered anything to the product owner/customer/etc., you are failing.  They can have developed all the cool interface stuff, test engine, internal structure - but if the product is not being produced - you are failing.

You have to decide the stuff you want to do - and base that on the stuff you choose not to do.

Opportunity cost - all the other things you could be doing if you weren't doing what you are.

The problem, of course, is that we may not actually know what those things are.  The question of what can be tested, and the actual cost of doing that testing, is a problem we may find hard to reproduce, let alone understand.

When there are problems, we need to consider a couple of things: Is it likely to happen again?  What is the chance of that happening again?  How bad will it be if it happens again?  These lead to: "if that happens here, how bad will it be?"

Thus Netflix (more traffic than porn, by the way) does not worry too much if a server is down - they (and Chaos Monkey) may be interested in what portion of their thousands of servers are down right now.  How much of the total is not available?  Since failure of some portion is certain, why do we pretend it must be avoided?
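
(Pete: the idea in miniature - assume some slice is dead and ask what survives.  A toy sketch, NOT Netflix's actual tooling, with invented numbers:)

    # A toy chaos-monkey-style sketch: knock out a random slice of instances and
    # check remaining capacity. Not Netflix's tooling; numbers invented.
    import random

    instances = [f"server-{i}" for i in range(1000)]
    failed = set(random.sample(instances, 50))  # assume ~5% are down right now

    available = [s for s in instances if s not in failed]
    print(f"{len(available) / len(instances):.0%} of capacity still available")
    # The question is not "how do we avoid failure?" but
    # "does the service still work when this fraction is gone?"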

Cites xkcd AND the Leprechauns of Software book - stuff we know is bogus.  Many of the things we believe have little or no evidence supporting them.

Discusses coverage - look at the important bits of the product, then see what makes sense.  The stuff with a high impact of failure and a high likelihood of failure had better get a whole pile more test effort than the stuff that no one will notice or care about if it DOES fail.
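
(Pete: in other words, weight test effort by impact times likelihood.  A toy ranking, with invented features and 1-5 scores:)

    # A toy risk ranking: effort follows (impact of failure) x (likelihood of failure).
    # Features and scores invented for illustration.
    features = [
        ("payment processing", 5, 4),  # (name, impact, likelihood)
        ("order history page", 2, 2),
        ("marketing banner",   1, 3),
    ]

    for name, impact, likelihood in sorted(features, key=lambda f: f[1] * f[2], reverse=True):
        print(f"{name}: risk score {impact * likelihood} -> proportionally more test effort")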



The question around this is CONTEXT - the context drives us; if it doesn't, we are wasting time and money and losing credibility amongst thinking people.  We can get stuff worked out so we get 80% coverage of something in testing, but if the context is irrelevant, it doesn't matter.


Stakeholders, product owners, etc., MUST be part of the team for the project - they are also part of the context.  However, we must know enough about something to know if it is important, OR by what model it is important or not.  Without these things we cannot appreciate the context.

Doing these things increases the chances that we have a clean, solid implementation - which makes ops folks happy.  They should be excited in a good way that we are implementing something - and looking forward to working with us to get it in.  If they are excited in a bad way about our deployments, we are doing it wrong.









TEST DELIBERATELY.

===

After spending time chilling in the hallway with people, conversing on a variety of topics - a needed afternoon off - it is time for Matt Heusser's keynote.  The scheduled talk is "Who Says Agile Can't Be Faster?"

Brief introduction of himself... developer/programmer - tester - agile guy - and ... author and - stuff.

After giving people a choice of topics - he launches into "Cool New Ideas and some old ones too."

And he gives away money... until he smacks the entire audience (except Seb Rose and those of us who heard him choose which game to try at the start).  We become complacent - relaxed - and fall into "automatic" responses.  A cool video on attention awareness (or lack thereof) launches him into his main theme.

Unless we know what we are looking for - and what we are not looking for in particular - we miss things.  Like "and nothing else goes wrong."  Except that takes really hard work.

Presents Taleb's Black Swan work - risk at casinos - protecting against cheating and fraud and... stuff.  Except then the tiger mauls Roy of Siegfried and Roy.  There was insurance on the performer, who recovered - except the point of the show was to bring people into the casino to spend money.  They didn't, so the casino lost a bundle.

Walks through several examples - some more dramatic than others.  A brief survey of problems and examples of types of testing.  (Pete: favorite is "soap opera" testing, where you run through elaborate stories that "no user would ever do" - except this one does... what happens?)

Consider - coverage decays over time, but we're never sure which parts decay at what rate.  We become complacent with automated tests or scripted manual tests (regression or whatever), and the more complacent we become, the greater the odds that something will go horribly wrong.

This is the issue we all face whether we are aware of it or not.

Minefields!  (with a picture of a minefield)  We get complacent and forget about stuff.  It's so easy, because this always works - until something goes boom.

We MUST remember and keep solidly in mind that this is a risk.  Awareness of a problem does not eliminate it, BUT it helps us keep it in the foreground and not slip into "System 1" thinking (autopilot mode).

Presents and discusses a kanban board he used explicitly for test process/planning - the only thing on the board was testing stuff.  Thus anyone can see what is being worked on in testing, AND anyone can ask about it.  When people ask "where are we?" they can look at the board.

OK - Matt has moved on to his Titanic story ... (Pete: I need to talk with him about this... there are some... issues.)  BUT he gets his Boat into the presentation!!

===

Break - and Game night!

Signing off from Potsdam for the day -

PS:  The evening testing/agile games night was loads of fun.  Matt did his Agile Planning session game; I did a collection of games around estimation and pattern recognition, and gave away Scrabble Flash and puzzles made from erasers.  Then more beer and conversation at the conference hotel's bar.

2 comments:

  1. Hi Pete,
    I really admire your live blogs. They allow me to follow the conference from home (sick today :( )

    I was there yesterday... and reading your notes about the second day, I get the feeling that I am missing new ideas in the sessions... except for some small exceptions, naturally... everyone is dealing with known concepts and approaches and presenting them with new covers...

    It's a little bit disappointing, isn't it?

    Note: I sometimes get confused in your blogs, as you sometimes use the "I" form and sometimes talk about "Pete"... :)

    1. Ah - noted - thanks - I'll try and get that cleaned up later today or this week. (Mostly concerned with getting the ideas/impressions down - fix the errors later.)

      Thanks for the comment!
