Genesys Business Driven Testing

Yet another old project which I found when trawling through my archives!

Behaviour-Driven Development (BDD) is an evolution in the thinking behind Test Driven Development and Acceptance Test Driven Planning. The intent of BDD is to focus development on the delivery of prioritised, verifiable business value whilst providing a common vocabulary that spans the divide between Business and Technology.

BDD relies on the use of a very specific (and small) vocabulary to minimise miscommunication and to ensure that everyone – the business, developers, testers and so on – is not only on the same page but using the same words.

Although BDD is not truly applicable to packaged application deployments such as Genesys, Behaviour-Driven Testing (BDT) certainly is. BDT requires a very specific, small and common vocabulary to develop and execute tests. The goal is a business-readable, domain-specific language that allows you to describe a system’s behaviour without detailing how that behaviour is implemented.

BDT means that the tests (plain-text feature descriptions with scenarios) are typically written before anything else and verified by the business, i.e. non-technical stakeholders.

Back in early 2010 I asked myself the question – how could BDT be applied to Genesys projects?

Cucumber

Cucumber is a tool that can execute plain-text functional (feature) specifications as automated tests. The language that Cucumber understands is called Gherkin. It is a business-readable, domain-specific language that lets you describe a system’s behaviour without detailing how that behaviour is implemented.

In the context of a Genesys implementation, here is an example of a feature specification:

[Image: example feature specification]
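
As an illustrative sketch only (the step wording, route point number and skill name below are hypothetical, not taken from the actual library), such a feature might read:

    Feature: Route sales calls to a sales agent
      As a call centre manager
      I want calls for sales routed to an available sales agent
      So that Customers reach someone who can help them

      Scenario: Caller selects the sales option
        Given an inbound call arrives on route point 9000
        When the caller selects the sales option
        Then the call is routed to an agent with the sales skill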

The nice thing about feature specifications is that we can also use Scenario Outlines. Scenario Outlines are the solution to repetitive Given-When-Then scenarios since they allow us to separate the structure of the test, which doesn’t change, from the data, which does. With Scenario Outlines, Cucumber turns each example (each table row) into a concrete scenario before looking for matching step definitions.

Here is another example using scenario outlines:

[Image: feature specification using Scenario Outlines]
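
Again as a purely illustrative sketch (the steps and data values are hypothetical), a Scenario Outline separates the fixed structure from the varying data:

      Scenario Outline: Route calls by menu option
        Given an inbound call arrives on route point <route point>
        When the caller selects option <option>
        Then the call is routed to the <skill> skill group

        Examples:
          | route point | option | skill   |
          | 9000        | 1      | sales   |
          | 9000        | 2      | service |
          | 9000        | 3      | billing |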

From the perspective of a business user or tester, that is all there is to it!

From a developer’s perspective, they implement in ‘code’ the step definitions that ultimately get executed. Each Given/When/Then line in the feature description corresponds to a step definition; when there’s a matching line in a Cucumber test, the step definition gets executed. Effectively the development methodology is outside-in (the outside being the feature, the inside being the low-level code).

Putting it all together, here is the end-to-end process:

  • When the Business decides they want to add a new feature or fix a bug, they (or a tester) start by writing a new feature or scenario that describes how the feature should work. At this point no ‘code’ is written
  • The feature is run in Cucumber, resulting in a display of yellow (pending) steps or red (failing) ones. If all steps are green the feature has already been implemented!
  • At this point a developer implements the feature or, more precisely, implements the ‘code’ behind each step definition
  • The feature is run again in Cucumber and the results should all be green (like a cucumber!)

Cucumber for Genesys

Rather than have developers implement step definitions from scratch, I implemented a common library of step definitions related to Genesys in the context of Business Driven Testing – I call this “Cucumber for Genesys”.

Although Cucumber itself is written in Ruby, we can use Cuke4Nuke to invoke Microsoft C# .NET code which wraps the Genesys Platform SDK. When Cucumber runs the feature specification, Cuke4Nuke looks for methods marked with Given, When and Then attributes whose regular expressions match the steps. Any capture groups in the regular expression are passed to the method as arguments (they don’t have to be strings; you can use other .NET types as well).
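
As a rough sketch of what a step definition class looks like under Cuke4Nuke (the GenesysDriver wrapper class and its methods below are hypothetical stand-ins for code built on the Genesys Platform SDK):

    using System;
    using Cuke4Nuke.Framework; // supplies the Given/When/Then attributes

    public class CallSteps
    {
        // Hypothetical wrapper around the Genesys Platform SDK
        private readonly GenesysDriver _driver = new GenesysDriver();

        [Given(@"^an inbound call arrives on route point (\d+)$")]
        public void InboundCallArrives(int routePoint)
        {
            // The (\d+) capture group is passed in as an int, not a string
            _driver.MakeCallTo(routePoint);
        }

        [Then(@"^the call is routed to the (\w+) skill group$")]
        public void CallRoutedToSkillGroup(string skill)
        {
            // Throwing an exception marks the step (and scenario) as failed
            if (!_driver.WaitForRouteTo(skill))
                throw new Exception("Call was not routed to skill group: " + skill);
        }
    }

Cuke4Nuke runs the .NET side as a small server process that Cucumber talks to over its wire protocol, so the Ruby side needs no knowledge of the .NET internals.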

Here is a subset of my current Genesys feature step implementations:

[Image: subset of Genesys feature step implementations]

Putting all of the above together means I can define feature specifications to test telephony-related functionality, which allows tests to be automated and regression suites to be developed.

A test call:

[Image: test call feature specification]
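
A test call scenario along these lines might read as follows (the step wording and numbers are again illustrative):

      Scenario: A test call is delivered to a ready agent
        Given agent 1001 is logged in and ready
        When a call is made to route point 9000
        Then agent 1001 is offered the call within 5 seconds
        And the call is established when agent 1001 answers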

Testing advisor functionality:

[Image: advisor functionality feature specification]
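
And a sketch of what an advisor-focused scenario might look like (hypothetical wording):

      Scenario: Advisor enters after-call work
        Given agent 1001 is logged in and ready
        When agent 1001 completes a call
        Then agent 1001 is placed into after-call work
        And agent 1001 is reported as not ready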

Cool as a Cucumber!


More ASR Tuning post Release 2 Go-Live

Another busy week listening to ASR utterances and tuning the IVR application as a result. In order to better understand the 10% of people who never say a postcode, I listened to 4526 postcode utterances. I really wish I could share some of these with you!

My findings and recommendations were:

  • Barge-in on the postcode prompt meant Customers were not expecting to have to say a postcode, and were then met with silence. The recommendation was to disable barge-in on the postcode prompt, set the continuous recognition timeout to 7 seconds (the average response time is 3-5 seconds) and set the silence timeout to 4 seconds
  • A few people say “YES”, “NO”, “ADVISOR”, “AGENT” or “NOT KNOWN” to try to opt out
  • Some people say an account number instead of a postcode (because they have barged in and not heard the postcode prompt)
  • A few people did not know where the hash key was on the phone. Recommended changing the initial prompt to “If you haven’t got an account number just press hash on the bottom right of the keypad”

The good news is that incremental changes are now having a positive effect on the Customer Experience and overall Customer identification success rates.


ASR Tuning post Release 2 Go-Live

As I posted last week, we went live on Monday (27/06/2011) with our Release 2 solution and I am pleased to report that everything went very smoothly (for once!). Of course we had a number of minor issues, which the team have worked hard to resolve this week.

The Release 2 solution includes the rollout of Nuance speech recognition (ASR) for existing Customer identification, based on the Customer saying their postcode and then the first line of their address. I have been buried in Nuance ASR logs all week while reviewing the associated recorded utterances at the same time. In fact, I analysed a total of 40000 utterances from Monday and 4 hours of utterance audio from 2 of the 9 Nuance Recognizer ASR servers!

As a result the following tuning recommendations have been made:

  • Increase the confidence level on postcode recognition from 0 to 4. This is because we were getting false positives on postcodes and then asking the Customer to match against a list of addresses which would never match
  • Change the wording of the address prompt to include the house number or name. This is because we observed that Customers were just saying a street name, which would never match against a full address line

We have also identified a problem with invalid grammars when the address line contains a 4-digit house number e.g. 1234 SOME ROAD, when house numbers are prefixed with zero e.g. 01 SOME ROAD, and when the address line also contains contact details such as a telephone number. The result of this is that Customers are transferred directly to an advisor after giving a valid postcode.
