VAS issues and Empirix Performance Testing (again)

We are still trying to get a formal baseline test completed. The latest problems have been with the Genesys Virtual Agent Simulator (VAS). It turns out that VAS must be configured with an “Answer” step, otherwise it crashes each time.


PSN2414 and TSAPI Link Flooding (again)

In a previous post I mentioned TSAPI link flooding problems and that the fix was contained in PSN2414.

Basically, PSN2414 is an upgrade to AES 4.2.4 plus configuration to use reserved TSAPI licenses, as opposed to checking global licenses in and out via Avaya WebLM.

At this client, however, we have an Enterprise Wide licensing model, which means that reserved licenses cannot be used.

Currently there are discussions about this taking place between Genesys and Avaya. I suspect the outcome may be a “commercial agreement”. Other than that, there may be some options if we upgrade to AES 5.2.2, since the release note for this version contains a section “Reserving TSAPI User Licenses”, which implies that with AES 5.2.2 reserved licenses can be implemented with an Enterprise Licensing model.

The release note also contains the comment “For AE Services 5.2, the use of floating licenses is not recommended”.

Will keep you posted!


TSAPI Link Flooding

Another problem this week which caused an aborted Empirix performance testing cycle. When we added in additional extension DNs we started to see problems with DNs not being registered / monitored correctly. The Avaya TSAPI server logs were full of the following:

error requestTimeoutRejection

error outstandingRequestLimitExceeded

Also, in the Avaya AES logs we saw the following:

16:30:35 ERROR:CRITICAL:TSAPI:TSERVER:../ClnMsg.cpp/417 10 CLNTMSG[1]: Message CSTAMonitorCallsViaDevice for client Genesys avayatsapi_server ac, driver AVAYA#SWITCH#CSTA#KFNAY6206P, is being rejected because of driver flow control. The number of messages for this driver exceeds the allowed threshold. Messages Queued to Tserver/Driver: 752 (0x2f0), Priority Messages Queued: 0 (0x0), Messages Allocated: 51 (0x33), Max Flow Allowed: 800 (0x320)

16:30:35 ERROR:FYI:TSAPI:TSERVER:../ClnMsg.cpp/417 10 CLNTMSG[1]: If flow control occurs frequently for driver AVAYA#SWITCH#CSTA#KFNAY6206P, consider distributing traffic for this driver across additional AE Servers. If this problem occurs only intermittently, use CTI OAM Administration (Administration > Security Database > Tlinks) to increase the value of the Max Flow Allowed field.
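For monitoring purposes, the counters in those flow-control messages can be scraped straight out of the AES logs. Here is a minimal Python sketch; the log layout is assumed from the excerpt above and field names may differ in other AES versions:

```python
import re

# Extract the queue counters from an AES flow-control log line, assuming
# the "Name: value (0xhex)" layout shown in the log excerpt above.
FLOW_RE = re.compile(
    r"Messages Queued to Tserver/Driver:\s*(\d+).*?"
    r"Max Flow Allowed:\s*(\d+)"
)

def flow_headroom(log_line):
    """Return (queued, max_allowed, percent_used), or None for other lines."""
    m = FLOW_RE.search(log_line)
    if not m:
        return None
    queued, max_allowed = int(m.group(1)), int(m.group(2))
    return queued, max_allowed, 100.0 * queued / max_allowed

line = ("Messages Queued to Tserver/Driver: 752 (0x2f0), "
        "Priority Messages Queued: 0 (0x0), Messages Allocated: 51 (0x33), "
        "Max Flow Allowed: 800 (0x320)")
print(flow_headroom(line))  # (752, 800, 94.0) - within 6% of the threshold
```

Watching this percentage over time would show whether the link is genuinely flooded or just spiking intermittently, which is exactly the distinction the AES FYI message asks you to make.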

We are using T-Server for Avaya TSAPI connected to Avaya AES 4.2.1.

After a bit of solution searching we upgraded the Avaya TSAPI client from 4.1 to 5.2.4 without any success. After a bit of playing around we found a temporary workaround by setting various T-Server options:

background-processing = false
use-link-bandwidth = 8 ***
use-link-bandwidth-startup = 8 ***
use-link-bandwidth-backup = 8 ***
max-attempts-to-register = 10
register-attempts = 5
register-tout = 2 sec

However, the above settings limit the CTI link bandwidth to 8 messages/second so it takes a long time to restart T-Server!
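To put that into perspective, here is a back-of-the-envelope Python sketch of restart time under the 8 messages/second cap. The DN count below is illustrative, not a figure from this deployment:

```python
# With the CTI link capped at 8 messages/second, registering the monitored
# DNs at roughly one request per DN dominates T-Server restart time.

def estimated_restart_seconds(dn_count, msgs_per_second=8):
    return dn_count / msgs_per_second

# e.g. an illustrative 5,000 monitored DNs at 8 msg/s:
print(estimated_restart_seconds(5000))       # 625.0 seconds
print(estimated_restart_seconds(5000) / 60)  # roughly 10 minutes
```

So even a modest estate takes on the order of ten minutes just to re-register, which is why the workaround is only temporary.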

Further solution searching came up with the following:

“In most cases Avaya PSN2414r2 is required for TSAPI TServer to function correctly. Avaya PSN2414r2 is a restricted patch that allows TSAPI to pull licenses up front instead of individual SSL sessions each time TServer registers DNs, routes calls, or monitors calls.”

We are running version AES 4.2.1, which does not include the PSN 2414 patch. The next step is to upgrade AES to 4.2.4 (version 5.2, although officially supported by Genesys, is a major upgrade and deemed too risky at the moment).


Release 1 Operational Acceptance Testing (OAT)

We are now into week 2 of Release 1 OAT. Some notable fixes this week are:

Configuration Server

After failure of Config Server, it is not possible to log Config Manager or CCPulse in to Config Server Proxy when the primary configuration server is down.

This has been fixed by setting the reconnect timeout on configuration server to 10 (seconds) rather than the default of 0. The fix was made as part of ER# 221781113; however, it does not seem to have made it into version 8 of configuration server!

Stream Manager Resilience

We have 4 Stream Managers in each site. When shutting down 3 out of 4 Stream Managers, all calls to advisors default route. If there are two Stream Managers up, calls also default route. If there are 3 Stream Managers up, calls route correctly to an advisor.

This seems to be fixed in SIP Server 8.0.400.25 as part of ER# 230151967 and ER# 102209228:

“SIP Server now retries treatments only on media servers that are still in service (the out-of-service check shows the Voice over IP Service DN (service-type set to treatment) as available)”

“SIP Server no longer sets a DN to out of service in a scenario where a call is routed to an unresponsive device and a caller abandons the call before the sip-invite-timeout timer expires. If the caller does not abandon the call during the sip-invite-timeout time period, then, when this timeout expires, SIP Server sets the unresponsive device to out of service. Once the recovery-timeout timer configured for this device expires, SIP Server sets it back in service”

There are some possible workarounds, but it looks as though a SIP Server upgrade is on the cards – let’s hope our Avaya SIP interoperability issues from last year do not come back!

For information, the possible workarounds are:

1. Set option “sip-invite-treatment-timeout = 5” on SIP servers


2. Remove OOS configuration on VoIP Service DNs. To do this add a new option:

sip-oos-enabled = false

This should ensure that a treatment is re-applied to a call on the Stream Manager during a failover

3. Change the VoIP Service DN options to:

sip-oos-enabled = true


Empirix Performance Testing

I mentioned in an earlier post that we have been undertaking (or trying to undertake) performance testing of the end-to-end solution at this client prior to rollout of the pilot (which went live in November 2009) to an additional two Contact Centre sites and a couple of thousand advisors.

The Empirix testing infrastructure consists of 6 G5 Load Generators (2 at each of the 3 Contact Centre sites) and 3 Virtual Agent Simulators (VAS), one at each of the Contact Centre sites.

We are injecting calls directly into the centralised Avaya SES server to be as representative of ISDN call ingress as possible. From a call flow perspective this means that calls are injected into SES, which then forwards the SIP INVITE to Avaya Communication Manager (CLAN cards). The call hits a VDN in the same way as normal ISDN calls and is routed to Genesys via SES over SIP signalling links.

We have been working on 2 major issues for the last few weeks:

  • GVP (IPCS) crashes under load at a rate of 50 calls/minute. Although new calls continue OK, we observe “stuck” calls on GVP ports
  • Calls hanging on Avaya stations even though they have been cleared down on the G5 load generators, i.e. a SIP BYE message has been sent. We do not want to clear down from the VAS end as this is not representative of the business process, whereby advisors must wait for the caller to hang up

This week we have finally resolved both issues and have had an informal test run at moderate load. Here is what we found ….

IPCS Crashes

This turned out to be a JavaScript issue affecting IPCS 7.6.410 (MR1) all the way up to IPCS 7.6.470 (MR7), which is the latest release at the time of writing. The root cause is still under investigation by Genesys Engineering, since a GVP Studio application “bug” that causes a JavaScript exception should not be able to crash a whole IPCS.

The error occurred when retrieving configuration data from a custom “config.xml” file in the following JavaScript line:

<assign name="VOXFILEDIR" expr="GetData(VOXFILEDIR, 'VOX_FIlE_PATH')"/>

And was fixed by changing this line to:

<data name="VOXFILEDIR" src="Config.xml"></data>
<assign name="document.VOXFILEDIR" expr="VOXFILEDIR.documentElement"/>
<assign name="VOXFILEDIR" expr="GetData(VOXFILEDIR, 'VOX_FIlE_PATH')"/>

Hung calls on Avaya stations

This turned out to be a SIP interoperability issue (surprise surprise!) and was fixed by a “downgrade” to the Empirix G5 SIP state machine.

We believe that the problem was caused by the SIP routing information (record-route attribute) being updated to include the IP address of the Avaya CLAN card in addition to the Avaya SES server via which the initial INVITE was sent:


Since the Empirix G5 is stateful this updated routing information was being maintained and then re-used on the BYE message at the end of the call:


As a result the BYE was being ignored and the call left hanging (the Empirix G5 thought that the call had been disconnected even though it never received an ACK back). To fix this problem the Empirix G5 SIP state machine was modified to ignore updated routing information (record-route attribute).
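To illustrate the behaviour, here is a simplified Python sketch (not a real SIP stack) of how a stateful endpoint adopts an updated route set from record-route and then targets the BYE at the wrong hop. The hostnames are made up:

```python
# Simplified model of the record-route behaviour described above: a stateful
# UA normally copies Record-Route headers from the response into its route
# set and reuses them on the BYE. The Empirix fix amounts to ignoring the
# mid-dialog update and keeping the route the INVITE originally took.

class StatefulUA:
    def __init__(self, initial_route):
        self.route_set = [initial_route]  # the proxy the INVITE went via (SES)

    def on_response(self, record_route, ignore_updates=False):
        # Avaya appended the CLAN card address to Record-Route; a strictly
        # stateful UA adopts it, which is what caused the BYE to be ignored.
        if not ignore_updates:
            self.route_set = list(record_route)

    def bye_target(self):
        return self.route_set[0]

ua = StatefulUA("sip:ses.example.com")
ua.on_response(["sip:clan.example.com", "sip:ses.example.com"])
print(ua.bye_target())  # sip:clan.example.com -> BYE ignored, call hangs

fixed = StatefulUA("sip:ses.example.com")
fixed.on_response(["sip:clan.example.com", "sip:ses.example.com"],
                  ignore_updates=True)
print(fixed.bye_target())  # sip:ses.example.com -> BYE honoured
```

This is only a caricature of the state machine, but it captures why ignoring the updated record-route information made the hung calls go away.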

I’m not going to argue about who is in the wrong here, although I strongly suspect it is Avaya! The reason for saying this is another SIP interoperability issue that has popped up since. This time it is Avaya SIP interoperability with Kofax, which we are using for Fax channel integration (hopefully!)

Please see:

“The problem we always see is when Cisco sends a BYE to Avaya. Avaya sees the BYE but for whatever reason Avaya will never send an OK back to Call Manager. This results in a hung call leg in the Avaya. The hung call leg stays up in Call Manager until my timers expire and then the call is flushed.”

“What we have found is that Avaya fails to honour any SIP method unless record-route is used. If we run our SIP proxy servers in non-stateful mode (record-route off) Avaya fails to honor any method that didn’t come back from the first proxy that routed the call”

In the case of Kofax, Kofax does not include the record-route attribute in any response methods. This can be seen in the Wireshark trace below:


“The only way the SIP stack on Avaya will function with a stateless proxy and respond to all parties that the proxy may send the call to requires the creation of a “dummy” signaling group on the Avaya PBX. Basically, you have to build your main signaling group with trunking to the proxy and then add a “dummy” signaling group into Avaya for each end point IP address that you may see SIP methods come back from. E.g. Kofax”

We have one main signaling group with trunking to the stateless proxy. Since the proxy is stateless it will only be in the call flow until the proxy sends the final OK response back from Cisco Call Manager.

In addition to the signaling on Avaya to the proxy, we also had to build a signaling group on the Avaya that has no trunking but has the IP address of the far end Call Manager server that would be in the call flow after the proxy (SES) sets the call up. This “dummy” signaling group has no trunking in Avaya – we have only defined the far end IP address in the signaling group page on Avaya.

Therefore, from a Kofax perspective I think what they are saying here is create a “Kofax signaling group” with the IP address of the Kofax server specified as the far end IP address.

Will let you know how we get on with this in a future post.


GVP 7.6 PCI Compliance

Lots more work over the last few weeks on compliance of our GVP 7.6 deployment from a PCI (Payment Card Industry) perspective since in Release 2 we will start to take card payments.

Current focus is on implementing session border controllers (SBC) acting as a B2BUA between the secure Avaya infrastructure (SRTP) and the insecure GVP 7.6 IPCS components (RTP). From a PCI perspective the issue is the detection of out of band DTMF digits in the RTP payload.

Again, I will update this post when I get some more time.


Empirix Performance Testing

Sorry for the lack of posts recently.

At this client, we are currently in the middle of Empirix Performance Testing. Interesting stuff and progress to date!

We have two G5 Hammer Load Generators installed at each of our Contact Centre sites as well as a Genesys VAS (Virtual Agent Simulator) at each site. Calls are being injected directly into the Avaya switch via SES and a SIP trunk.

When I get a bit of free time I will update this post with some of my findings.


Virtual Hold Preview Mode

At this client we have deployed Virtual Hold to provide in-queue callbacks (e.g. queue buster type functionality). Whilst VH is a great standalone product, when integrated with Genesys CIM there are some limitations.

One of the biggest limitations is that VHT is only integrated with Genesys T-Server (via the Queue Manager service) and there is no integration with Stat Servers which means that VHT Queue Manager applies its own logic to derive the Estimated Wait Time (EWT).

In reality this means that VH Queue Manager in predictive mode initiates callbacks (via TMakePredictiveCall) based on Queue Manager maintained statistics rather than Stat Server statistics. As a result, when the callback is made there is no guarantee that a) there are any advisors available and b) the customer will not have to queue again.

As an alternative, VH queues can be configured in preview mode.

In preview mode after a customer requests a callback, Virtual Hold uses a Genesys Virtual Routing Point (VRP) to queue the call virtually. When it is the customer’s turn to speak to an advisor and an advisor becomes available, VH will receive a message from Genesys T-Server that the advisor is available; VH will then send the virtual call and its information to the agent desktop through a user event. The advisor desktop gives the advisor options, which can include: call the customer, reject the call (the call will be given to another advisor), cancel the callback, or reschedule the callback. If the customer is called back, the advisor desktop forces the advisor to select a call result value.

Effectively, the solution is a manual callback which is presented to an available advisor, who is then expected to either accept or reject the callback request. If they accept the callback request they are required to dial the customer’s number manually.

There are a number of limitations with the approach such as:

  • Requires custom functionality on the advisor desktop. Out of the box this functionality is only provided by the Genesys Agent Desktop (GAD)
  • Manual dialing of contact numbers is prone to error

As a proof of concept I have developed a Virtual Hold Preview Dialler which connects to Genesys T-Server and acts as a T-Server client, registering for events on advisor extension (station) DNs and processing callback user events. The Virtual Hold Preview Dialler responds to advisor desktop callback user events as follows:

1. If a user event with key “VCB_USER_EVENT_REQUEST” is received and the value is “RequestCallbackPreview” an outbound call to the callback number (KVP VCB_CONTACT) is initiated from the advisor’s station / extension (TMakeCall).


2. The Virtual Hold Preview Dialler waits for EventEstablished and then sends a user event with key “VCB_USER_EVENT_RESPONSE” and value “RequestCallbackProcessed”. VCB_CALL_RESULT is set to 33 (Answered).

Note that VH only considers the call result value of “Answer” to be a successful callback. Unless the callback is canceled or rescheduled, any other call result value will cause the callback to be marked as “callback failed”, and the callback will be re-initiated according to the VH retry parameters.
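For illustration, the two steps above can be sketched as a pair of Python event handlers. The tserver object and its methods (make_call, send_user_event) are hypothetical stand-ins for a real Genesys T-Server client library, not actual SDK calls:

```python
# Hedged sketch of the Preview Dialler logic in steps 1 and 2 above.

CALL_RESULT_ANSWERED = 33  # VH treats only "Answered" as a successful callback

def on_user_event(tserver, dn, user_data):
    # Step 1: a preview-callback request triggers an outbound call (TMakeCall)
    # from the advisor's extension to the stored contact number.
    if user_data.get("VCB_USER_EVENT_REQUEST") == "RequestCallbackPreview":
        tserver.make_call(from_dn=dn, to_number=user_data["VCB_CONTACT"])

def on_established(tserver, dn):
    # Step 2: once EventEstablished arrives, report the callback as processed
    # and set the call result to Answered so VH marks it successful.
    tserver.send_user_event(dn, {
        "VCB_USER_EVENT_RESPONSE": "RequestCallbackProcessed",
        "VCB_CALL_RESULT": CALL_RESULT_ANSWERED,
    })

# Minimal fake client to exercise the handlers:
class FakeTServer:
    def __init__(self):
        self.calls, self.events = [], []
    def make_call(self, from_dn, to_number):
        self.calls.append((from_dn, to_number))
    def send_user_event(self, dn, data):
        self.events.append((dn, data))

ts = FakeTServer()
on_user_event(ts, "7001", {"VCB_USER_EVENT_REQUEST": "RequestCallbackPreview",
                           "VCB_CONTACT": "+441234567890"})
on_established(ts, "7001")
print(ts.calls)   # [('7001', '+441234567890')]
```

The real implementation obviously has to deal with registration, failed calls and the VH retry parameters, but the happy path is essentially this simple.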


The diagram below shows the overall architecture:


As a result, the Virtual Hold Preview Dialler allows us to run Virtual Hold callbacks in preview mode without any customisation of the advisor desktop application. This means that manual intervention by the advisor is not required, business process is enforced and dialing errors are prevented.




My first iPad application

Apple have had another $99 from me to join the developer program (in addition to the cost of buying a MacBook Pro and iPad!) so I thought that I had better make use of them.

Over the last few weeks I’ve been getting to grips with development on the Apple iPhone OS (iPod Touch / iPhone / iPad). It’s been quite a learning curve with lots of new languages, tools and frameworks to get used to: Objective-C, the Xcode IDE, Interface Builder, the Cocoa Touch framework, UIKit etc. Fortunately there are lots of good tutorials out there and some excellent videos on Apple iTunes U.

At first it all seemed a bit daunting given my Microsoft C# and .NET background (although I also know C on UNIX from many years ago, which makes Objective-C a bit easier). However, the more I get into this the more I realise that a lot of the design patterns, such as MVC, and tools are very similar. It’s just a case of different terminology for the same thing.

Using the tutorials I have managed to get a simple Objective-C based “Hello World” application running the hard way on my iPad. The reason I say the hard way is because I have also discovered MonoTouch:

“MonoTouch allows developers to create C# and .NET based applications that run on Apple’s iPhone and Apple’s iPod Touch devices, while taking advantage of the iPhone APIs and reusing both code and libraries that have been built for .NET, as well as existing skills.”

I have also been able to get a MonoTouch based GPS application running on my iPad as well. Given that my application is pretty simple, it is too early to say whether I am going to splash out another $399 to license MonoTouch and use the associated MonoDevelop IDE as my primary iPhone OS development tool. It also depends on how the iPhone Developer Program License Agreement pans out in OS 4, aka the Apple v Adobe war.

MonoTouch is obviously a lot more familiar and allows me to re-use existing C# code that I have in my toolbox. Also, with MonoTouch I don’t need to worry as much about memory management e.g. retain / release and the autorelease pool.

Regardless of whether I use Objective-C or Mono / C# I still need to learn Interface Builder (IB) so this is where I will focus my learning for now.

Still a long, long way to go towards my first full “proper” application, but at least I have made a start.


Nuance ASR with GVP 7.6 and GVP 8.1

We have had quite a bit of fun and games trying to get Nuance Recognizer 9 working with both GVP 7.6 and GVP 8.1. We finally have the answer!

The following configuration works fine:

  • Nuance RealSpeak 4.5.0 patch 1
  • Nuance Recognizer 9
  • Nuance Speech Server (NSS) 5.0.5

The problem was that versions of NSS greater than 5.0.5 do not work with GVP 7.6 without a hot fix (MR4). Given the problems installing GVP 7.6 in the first place, it seemed easier to downgrade from NSS 5.0.7 to 5.0.5.