Using AWS Transcribe to get IVR prompt verbiage

I’m a stickler for documentation and a bigger stickler for good documentation. Documentation allows the work you’ve produced to live on beyond you and helps others get up to speed quickly. It feels like documentation is one of those things everyone says they do, but few really follow through on. There’s nothing hard about it, but it’s something you need to work on as you’re going through your project. Do not leave documentation to the end; it will show. So DO IT!

I’ve recently started a new project to migrate an IVR over to CVP. To my pleasant surprise, the customer had a call flow, prompts, a web service definition document, and test data. A dream come true! As I started development I noticed that the verbiage in the prompts didn’t match the call flow, and given my “sticklerness” I wanted to update the call flow to ensure it matched the verbiage 100%.

I’m always looking for excuses to play around with Python, so that’s what I used. I hacked together the script below which does the following:

  • Creates an AWS S3 bucket.
  • Uploads prompts from a specific directory to the bucket.
  • Creates a job in AWS Transcribe to transcribe the prompts.
  • Waits for each job to complete.
  • Creates a CSV named prompts.csv.
  • Deletes the transcription jobs.
  • Deletes the bucket.

The only things you will need to change to match your environment are the following:

local_directory = 'Spanish/'
file_extension = '.wav'
media_format = 'wav'
language_code = 'es-US'

The complete code is found below. Be careful with the formatting; it might be best to copy it from this snippet:

from __future__ import print_function
from botocore.exceptions import ClientError

import boto3
import uuid
import logging
import sys
import os
import time
import json
import urllib.request
import pandas

local_directory = 'French/'
file_extension = '.wav'
media_format = 'wav'
language_code = 'fr-CA'

def create_unique_bucket_name(bucket_prefix):
    # The generated bucket name must be between 3 and 63 chars long
    return ''.join([bucket_prefix, str(uuid.uuid4())])

def create_bucket(bucket_prefix, s3_connection):
    session = boto3.session.Session()
    current_region = session.region_name
    bucket_name = create_unique_bucket_name(bucket_prefix)
    if current_region == 'us-east-1':
        # us-east-1 does not accept a LocationConstraint
        bucket_response = s3_connection.create_bucket(
            Bucket=bucket_name,
        )
    else:
        bucket_response = s3_connection.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration={'LocationConstraint': current_region},
        )
    # print(bucket_name, current_region)
    return bucket_name, bucket_response

def delete_all_objects(bucket_name):
    res = []
    bucket = s3Resource.Bucket(bucket_name)
    for obj_version in bucket.object_versions.all():
        res.append({'Key': obj_version.object_key,
                    'VersionId': obj_version.id})
    # print(res)
    bucket.delete_objects(Delete={'Objects': res})

s3Client = boto3.client('s3')
s3Resource = boto3.resource('s3')
transcribe = boto3.client('transcribe')
data_frame = pandas.DataFrame()

# Create bucket
bucket_name, first_response = create_bucket(
    bucket_prefix = 'transcription-',
    s3_connection = s3Client)

print("Bucket created %s" % bucket_name)

print("Checking bucket.")
for bucket in s3Resource.buckets.all():
    if bucket.name == bucket_name:
        print("Bucket ready.")
        good_to_go = True

if not good_to_go:
    print("Error with bucket.")
    quit()

# enumerate local files recursively
for root, dirs, files in os.walk(local_directory):
    for filename in files:
        if filename.endswith(file_extension):
            # construct the full local path
            local_path = os.path.join(root, filename)
            print("Local path: %s" % local_path)
            # get the file name relative to the local directory
            relative_path = os.path.relpath(local_path, local_directory)
            print("File name: %s" % relative_path)
            s3_path = local_path
            print("Searching for %s in bucket %s" % (s3_path, bucket_name))
            try:
                s3Client.head_object(Bucket=bucket_name, Key=s3_path)
                print("Path found on bucket. Skipping %s..." % s3_path)
            except ClientError:
                print("Uploading %s..." % s3_path)
                s3Client.upload_file(local_path, bucket_name, s3_path)
                job_name = relative_path 
                job_uri = "https://%s.s3.amazonaws.com/%s" % (
                    bucket_name, s3_path)
                transcribe.start_transcription_job(
                    TranscriptionJobName=job_name,
                    Media={'MediaFileUri': job_uri},
                    MediaFormat=media_format,
                    LanguageCode=language_code
                )
                while True:
                    status = transcribe.get_transcription_job(TranscriptionJobName=job_name)
                    if status['TranscriptionJob']['TranscriptionJobStatus'] in ['COMPLETED', 'FAILED']:
                        break
                    print('Transcription ' + status['TranscriptionJob']['TranscriptionJobStatus'])
                    time.sleep(25)
                print('Transcription ' + status['TranscriptionJob']['TranscriptionJobStatus'])
                response = urllib.request.urlopen(status['TranscriptionJob']['Transcript']['TranscriptFileUri'])
                data = json.loads(response.read())
                text = data['results']['transcripts'][0]['transcript']
                print("%s, %s "%(job_name, text))
                data_frame = pandas.concat([data_frame, pandas.DataFrame([{"Prompt Name": job_name, "Verbiage": text}])], ignore_index=True)
                print("Deleting transcription job.")
                status = transcribe.delete_transcription_job(TranscriptionJobName=job_name)

#Create csv
print("Writing CSV")
data_frame.to_csv('prompts.csv', index=False)

# Empty bucket
print("Emptying bucket.")
delete_all_objects(bucket_name)

# Delete empty bucket
s3Resource.Bucket(bucket_name).delete()
print("Bucket deleted.")</pre>
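One caveat worth flagging: Transcribe job names only allow letters, digits, periods, underscores, and hyphens, so if your prompt files sit in subfolders or contain spaces, the job name derived from relative_path may be rejected. Here is a minimal sketch of a sanitizer you could call before start_transcription_job; the helper name is mine and not part of the original script:

import re

def safe_job_name(relative_path):
    # Transcribe job names may only contain [0-9a-zA-Z._-] and max out at 200 chars
    return re.sub(r'[^0-9a-zA-Z._-]', '_', relative_path)[:200]

# usage: job_name = safe_job_name(relative_path)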

I hope this helps someone out there create better documentation.

~david


Multiple CVP Studio Versions on the Same Machine

If you’re like me and do a fair bit of CVP work, including development, and you like to run the same version of Studio as the version you’re deploying against, then you’ve struggled with the fact that CVP Studio doesn’t play very nicely with other versions of CVP Studio. Hopefully this post helps you get all versions of CVP Studio running in harmony on your Windows machine. This method has worked well when going from CVP 12.x to 11.x to 10.x, as well as going the other way around.

  • Backup your registry.
  • If you have a 12.x version of CVP Studio installed:
    • DELETE: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{763E1DF9-41BC-4C54-9705-A0C6D1594B26}
  • If you have a 10.x or 11.x version of CVP Studio installed:
    • DELETE: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\{763E1DF9-41BC-4C54-9705-A0C6D1594B26}
  • Install the new version of CVP Studio in a different directory. I like to install mine in c:\cisco\CallStudioxxx, where xxx is the version (e.g. 100, 115, etc).
  • Repoint and create new shortcuts for the various versions.

You will have to repoint your shortcuts, but other than that you should be all good.
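If you’d rather script the registry cleanup than do it by hand, here is a minimal sketch using Python’s winreg module; the paths and GUID come from the list above, but treat it as a starting point, run it from an elevated prompt, and keep your registry backup handy:

import winreg

GUID = '{763E1DF9-41BC-4C54-9705-A0C6D1594B26}'
PATHS = [
    r'SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\%s' % GUID,             # 12.x
    r'SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\%s' % GUID  # 10.x / 11.x
]

for path in PATHS:
    try:
        # DeleteKey only works when the key has no subkeys, which is normally
        # the case for these uninstall entries.
        winreg.DeleteKey(winreg.HKEY_LOCAL_MACHINE, path)
        print('Deleted %s' % path)
    except FileNotFoundError:
        print('Not present: %s' % path)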

CVP Studio Features Dialog

CVP 11 and 12 running side by side.

~david

Revisiting Saving Host Names in Cisco AnyConnect Client

One of my most popular blog posts is this one, where I talk about how to set your AnyConnect VPN client to remember the addresses of the various VPN URLs you use. I figured it would be good to revisit it, since it’s been 5 years since I last talked about it, and make sure things still work the same way they did back in 2014.

First, everything still works the same way on Windows 10. You go to C:\ProgramData\Cisco\Cisco AnyConnect Secure Mobility Client\Profile and create a new XML file. I call mine Profile.xml and use the following format:

<?xml version="1.0" encoding="UTF-8"?>
<AnyConnectProfile xmlns="http://schemas.xmlsoap.org/encoding/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://schemas.xmlsoap.org/encoding/ AnyConnectProfile.xsd">
  <ServerList>
    <HostEntry>
      <HostName>dcloud-rtp-anyconnect.cisco.com</HostName>
      <HostAddress>dcloud-rtp-anyconnect.cisco.com</HostAddress>
    </HostEntry>
    <HostEntry>
      <HostName>2dcloud-rtp-anyconnect.cisco.com</HostName>
      <HostAddress>dcloud-rtp-anyconnect.cisco.com</HostAddress>
    </HostEntry>
    <HostEntry>
      <HostName>3dcloud-rtp-anyconnect.cisco.com</HostName>
      <HostAddress>dcloud-rtp-anyconnect.cisco.com</HostAddress>
    </HostEntry>
  </ServerList>
</AnyConnectProfile>
Save it and restart your client and it will look like this:
VPN Client With Multiple URLs Saved
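If you maintain a long list of head-ends and don’t want to hand-edit that XML, a small script can generate the same ServerList for you. Here is a minimal sketch using Python’s xml.etree; the hosts dictionary is just an example, and this is my own helper rather than anything the AnyConnect client ships with. Drop the resulting Profile.xml in the Profile directory above and restart the client:

import xml.etree.ElementTree as ET

# Display name -> VPN head-end address (examples only)
hosts = {
    'dcloud-rtp-anyconnect.cisco.com': 'dcloud-rtp-anyconnect.cisco.com',
    '2dcloud-rtp-anyconnect.cisco.com': 'dcloud-rtp-anyconnect.cisco.com',
}

profile = ET.Element('AnyConnectProfile', {'xmlns': 'http://schemas.xmlsoap.org/encoding/'})
server_list = ET.SubElement(profile, 'ServerList')
for name, address in hosts.items():
    entry = ET.SubElement(server_list, 'HostEntry')
    ET.SubElement(entry, 'HostName').text = name
    ET.SubElement(entry, 'HostAddress').text = address

ET.ElementTree(profile).write('Profile.xml', encoding='UTF-8', xml_declaration=True)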
There is another method I’ve found, and it involves some additional software. From software.cisco.com, install the Profile Editor (Windows); you only need to enable the Cisco AnyConnect Profile Editor component. Once installed, run the program.
Go to Server List > Add and add your host entries.
Go to File > Save As and save the XML file to the same location, C:\ProgramData\Cisco\Cisco AnyConnect Secure Mobility Client\Profile. Restart the client and you should be all set.
~david

VMware Fusion 11.5 Windows 10 VM Black Screen on Catalina

I wanted to do a quick post on this, as I had to try multiple things to get my Windows 10 VM up and running. There are two posts which helped: TCC DB Updates and the processor change post 10. To summarize both links:

Turn Off the Rootless

  1. Reboot your Mac and hold CMD+R to enter recovery mode.
  2. Open the terminal.
  3. Enter csrutil disable
  4. Restart

From a terminal run:

tccutil reset All com.vmware.fusion

sudo sqlite3 "/Library/Application Support/com.apple.TCC/TCC.db" 'insert into access values ("kTCCServiceScreenCapture", "com.vmware.fusion", 0, 1, 1, "", "", "", "UNUSED", "", 0,1565595574)'
sudo sqlite3 "/Library/Application Support/com.apple.TCC/TCC.db" 'insert into access values ("kTCCServiceListenEvent", "com.vmware.fusion", 0, 1, 1, "", "", "", "UNUSED", "", 0,1565595574)'
sudo sqlite3 "/Library/Application Support/com.apple.TCC/TCC.db" 'insert into access values ("kTCCServicePostEvent", "com.vmware.fusion", 0, 1, 1, "", "", "", "UNUSED", "", 0,1565595574)'

Turn On the Rootless

  1. Reboot your Mac and hold CMD+R to enter recovery mode.
  2. Open the terminal.
  3. Enter csrutil enable
  4. Restart

Launch Fusion and ensure that under your Mac’s Security & Privacy settings:

Accessibility > VMware Fusion is checked
Screen Recording > VMware Fusion is checked

With your VM shut down, go to Settings > Processors & Memory and check these two settings:

VM Processors & Memory settings

Hope this helps another poor soul.

~david

AWS’s AI in the Contact Center Pitch: A Swing and a Miss.

Recently AWS released a “Knowledge Brief” illustrating how Fortune 1000 companies are taking a deeper interest in AI-related products and services for their contact centers. While I think there are plenty of points which could be argued, for the sake of this post I will focus on the intro graph, as it is the springboard for the whole document, which was built on the Aberdeen Group’s research. Let’s start with the graph:

Trend graph from the Aberdeen Knowledge Brief

First, I was surprised by the attribution of the spike in contact center solutions research to the Google Duplex presentation during I/O 2018. Second, the report goes on to state that the red line declining off to the right represents search results for PBX, because “firms are not as active in researching best practices and trends in use of PBX.” These two points struck me as odd, especially if you’re building a whole paper on those two premises, so I took it upon myself to see if I could independently confirm their positions.

Considering the paper states that this is all about research, I decided to go to the world’s research webpage, Google; specifically, Google Trends. Let’s tackle the spike in research due to the announcement of Google Duplex. You will see that Google registered the term “google duplex” spiking in May, which matches their blog post linked above. The report’s graph has this spike happening in July, which is not correct. But let’s give them the benefit of the doubt that the x-axis is mislabeled, since there certainly was a spike in research on these terms.

Google Trends results for “google duplex”

The paper’s second point is around the decline of research around the term PBX. The document states “..it’s reflected through the dark red line that’s particularly trending downwards between July and September 2018.” The main reason this caught my eye is the term PBX itself. As those of you in the contact center business know, the term PBX largely fell out of use in the late 90s and even more so in the 2000s, mainly because with VoIP the PBX term is not used as broadly. Make no mistake, things like Cisco’s Communications Manager and Asterisk are PBXes, but they are so much more, which is why the term has fallen out of favor. Given this, let’s compare how the terms PBX and ACD, a more broadly used term meaning almost the same thing, have trended for the time period this report covers.

Google Trends comparison of PBX and ACD

Neither term has really seen a decline. Heck, you could argue that PBX saw an increase between May and July, while ACD saw an increase after July, ultimately debunking the premise this whole document stands upon.
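If you want to reproduce the comparison yourself rather than take my screenshots at face value, here is a minimal sketch using the pytrends package; the package is my own tooling choice, not something the report or Google officially provides:

from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=360)
# Compare the two terms over the period the report covers
pytrends.build_payload(kw_list=['PBX', 'ACD'], timeframe='2018-01-01 2018-12-31')
interest = pytrends.interest_over_time()
print(interest[['PBX', 'ACD']].to_string())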

AI/ML is the hot new topic, but there’s a time and a place for everything. This paper’s premise for an AI future relies on faulty data, which causes the whole argument to fall apart. This, like many other pieces, is more hype than substance.

~david

My take on easily improving your customer’s experience with not a lot of money and without having to hire me.

Recently I was talking to an acquaintance about our top IVR annoyances. While we debated back and forth on the merits of each annoyance, it got me thinking about the current wave around customer experience and customer journey, and the amount of money and products some companies are throwing at the problem to try to get marginal improvements. While I’ve been working in the contact center for over a decade, I certainly don’t know it all, but I’ve come to realize that before spending a lot of money, businesses should do a few small things. These small things will provide small improvements and will leave you better prepared for bringing in some vendor to help you “revolutionize” your customer experience.

Now, there is no data to back up these thoughts, but I like to think my experience should carry a bit of weight. Here we go:

Your IVR should reflect your personality. Every IVR sounds the same; is your business just like every other business? All businesses stress over print ads, color schemes for the website, logos, and commercials, but their IVR still feels like every other IVR. Why not carry that stress over to something which can be personalized with just a few words and some voice inflection?

Know your callers. We find ourselves in a data-rich and information-poor world. Are your callers millennials? Are they senior citizens? Is there a specific socioeconomic status which gravitates towards your IVR while others go through a different channel? All of this information is critical in figuring out what options you should be offering in your IVR and what your personality should be.

Make it sound fresh. Has your IVR welcomed every caller with “Thank you for calling…” since the dawn of time? Every modern and not so modern IVR in the world can play an array of greetings, use slightly different language depending on the time of year, and create some personalization without much work. No one likes to talk to that one person who always tells the same stories. Your IVR can easily and cheaply break that monotony, sound fresh, and make the wait seem more engaging.

Don’t make me tell you again. One of the most annoying things is when you call a company to fix a problem and think it’s solved, only to find out a few hours or a few days later that it’s not. You know what I do, and what the rest of the world does? Call right back. It’s very easy for modern IVRs to see that a customer called recently, and there’s a very high likelihood they are calling for the same reason again. So why put them through your self-service menu? Get them immediately to an agent; you failed at first call resolution, you know it and they know it. Here’s a second shot at making it better. Extra points if you get them to the same agent, who will have some context from the first call.
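To make the repeat-caller idea concrete, here is a minimal sketch of the lookup an IVR could do before offering self-service; the function names and the 24-hour window are my own illustrative choices, not a reference to any specific IVR product:

from datetime import datetime, timedelta

REPEAT_WINDOW = timedelta(hours=24)

def route_call(ani, call_history):
    """Decide where to send a caller based on their recent call history.

    call_history maps ANI -> list of (timestamp, agent_id) tuples.
    """
    recent = [c for c in call_history.get(ani, [])
              if datetime.now() - c[0] <= REPEAT_WINDOW]
    if recent:
        # Likely calling back about the same issue: skip the menus and
        # prefer the agent who took the last call.
        return ('agent_queue', recent[-1][1])
    return ('self_service', None)

# Example usage
history = {'5551234567': [(datetime.now() - timedelta(hours=2), 'agent42')]}
print(route_call('5551234567', history))   # ('agent_queue', 'agent42')
print(route_call('5559876543', history))   # ('self_service', None)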

Everyone likes surprises. Every once in a while, have calls sent to agents without going through the full gamut of the IVR, especially if you know you have agents available. This will increase your agent utilization, but it only works if your agents are able to handle most types of calls, because if you’re going to have to transfer callers, do not do this!

Don’t pretend to care. Saying that my call is important is such an insult. It’s not; otherwise you would have staffed accordingly and not made me wait. Offer a way to call the customer back instead of holding them hostage to your queue treatment.

Silence is golden. If your call center deals with extremely long hold times, greater than 15 minutes, give the caller the option to not hear any music or announcements at all. An occasional beep, and maybe a short message on how to enable music again, will make the wait much more enjoyable. If I can tell how often your music on hold loops, I will not be very pleasant when the agent takes my call.

~david

Connect Squirrel SQL to UCCX DB

I found a good bit of information on this, but none of it was in a single post, so I figured it might help to see all the steps in one spot. This assumes UCCX 11.x and a Windows machine.

  • Download Squirrel SQL https://sourceforge.net/projects/squirrel-sql/.
  • Open a command prompt with Administrator privileges.
  • Move to where the Squirrel jar file is found.
  • Run "java -jar squirrel-sql.XXX.jar".
  • During installation, besides the standard options, add the Informix drivers.
  • Download the latest Informix JDBC driver from https://mvnrepository.com/artifact/com.ibm.informix/jdbc/
  • Place the JDBC driver in the Squirrel SQL lib folder. You should be able to click on Drivers, scroll down, and see a check mark next to Informix.
  • Go to UCCX Administration > Tools > Password Management and reset the password for uccxhruser. If you do this and you have an HA setup, make sure that you click on "Check Consistency" to validate that both nodes have the latest password. If they don’t, log in to both nodes and repeat the previous step on each.
  • The connection URL format is: jdbc:informix-sqli://<fqdn or ip>:1504/db_cra:INFORMIXSERVER=<hostname>_uccx
  • Connect, and to validate that you get some data, go to the SQL tab and run a query like "select contactType, applicationName from ContactCallDetail where ContactCallDetail.startDateTime >= '2019-07-01 00:00:00'". If you would rather do this from a script, see the sketch after this list.
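For a scripted alternative to the Squirrel SQL GUI, here is a minimal sketch using the jaydebeapi package against the same Informix JDBC jar; the package choice, credentials, and file paths are my own assumptions and not part of the original steps:

import jaydebeapi

# Same URL format as above; substitute your UCCX FQDN and hostname
url = 'jdbc:informix-sqli://uccx.example.com:1504/db_cra:INFORMIXSERVER=uccx01_uccx'

conn = jaydebeapi.connect(
    'com.informix.jdbc.IfxDriver',            # driver class inside the JDBC jar
    url,
    ['uccxhruser', 'your_password'],          # the account reset in Password Management
    'C:/squirrel-sql/lib/informix-jdbc.jar',  # path to the Informix JDBC jar (example)
)

cursor = conn.cursor()
cursor.execute("select contactType, applicationName from ContactCallDetail "
               "where ContactCallDetail.startDateTime >= '2019-07-01 00:00:00'")
for row in cursor.fetchall():
    print(row)

cursor.close()
conn.close()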

~david

Release Nuance ASR License in Cisco Contact Center Enterprise & Virtual Voice Browser (VVB)

This topic seems to come up every time I’m on an ASR project, and I finally got a definitive answer out of Cisco on the “right” way to do this. The most popular approach to releasing the Nuance license is to have a dummy label in your ICM script. You can read all the details in this post. However, a much cleaner way of doing it involves adding a CVP collection/capture element; I like to use Digits. Set the Input Mode to dtmf, set all collection timers to 1, and play a very short silent prompt. Additionally, add the following VXML property: com.cisco.asr-server = Default.

CVP Studio Digits Elements for Nuance Release

How do you confirm this actually works? Start a Wireshark capture from one of your Nuance servers and use the following filter:

(!sip.CSeq.method == "OPTIONS")&&(sip)&&frame.len in {874 504}

The above filter will only show the INVITEs and BYEs to the Nuance service, which will yield the following output:

"No.","Time","Source","Destination","Protocol","Length","Info"
"100","8.769243","10.10.10.16","10.10.10.17","SIP/SDP","874","Request: INVITE sip:asr@nuancesvr:5060;transport=tcp | "
"3699","95.355613","10.10.10.16","10.10.10.17","SIP","504","Request: BYE sip:mrcpserver@nuancesvr:5060;transport=TCP | "

You should confirm that the IPs in the capture are those of your VVB and Nuance box.
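If you would rather script this check than eyeball it in the Wireshark GUI, here is a minimal sketch using the pyshark package to apply the same display filter to a saved capture; pyshark and the capture file name are my own assumptions, not something Cisco or Nuance provides:

import pyshark

# Apply the same display filter used above to a saved capture file
capture = pyshark.FileCapture(
    'nuance_trace.pcapng',   # example file name
    display_filter='(!sip.CSeq.method == "OPTIONS") && (sip) && frame.len in {874 504}'
)

for packet in capture:
    # Each remaining packet should be an INVITE or BYE to/from the Nuance server
    method = packet.sip.get_field('cseq_method')
    print(packet.ip.src, '->', packet.ip.dst, method)

capture.close()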

~david

Cisco Finesse Workflows Troubleshooting

I ran into this problem today, and I had never really thought about how you could troubleshoot issues where a workflow wasn’t working. It had always been: if it worked, it worked. It took a bit of time to figure out how to dig into the right Finesse logs to see why exactly my workflow was not firing. In my particular case I have a screen pop workflow which is supposed to pop if a call variable contains a specific word. We’re going to figure out why the workflow never worked.

First, go to the URL below, enable persistent logging, and sign in to Finesse as your agent. To be safe you might want to clear the local storage, but that’s not really necessary.

https://fqdn/desktop/locallog

Second, send in a new call which is supposed to trigger the workflow.

Third, in a new tab open the locallog URL and let’s walk through what we see.

One of the first things you’ll see is that Finesse pulls all the workflows associated with your team:

2019-02-19T15:46:46.901 -05:00: BF8760: FQDN: Feb 19 2019 12:46:47.214 -0800: Header : [ClientServices] Workflows: requestId='undefined', Making REST request: method=GET, url='https://FQDN:/finesse/api/User/9056/Workflows?nocache=1550609206901'
2019-02-19T15:46:47.227 -05:00: BF8760: FQDN: Feb 19 2019 12:46:47.540 -0800: Header : [ClientServices] Workflows: requestId='undefined', Returned with status=200, content='<Workflows><Workflow><name>PARTICIPANT OVERVIEW WORKFLOW</name><description>PARTICIPANT OVERVIEW</description><uri>/finesse/api/Workflow/1</uri><TriggerSet><name>CALL_ARRIVES</name><type>SYSTEM</type><triggers><Trigger><comparator>IS_EQUAL</comparator><value>Voice</value><Variable><name>mediaType</name><node>//Dialog/mediaType</node><type>CUSTOM</type></Variable></Trigger><Trigger>..

Next, you’ll see that Finesse checks whether there’s a workflow to run when the agent logs in or goes ready, and it evaluates the workflow conditions based on that trigger. This always happens, even if you don’t have a workflow with these trigger conditions.

2019-02-19T15:46:47.237 -05:00: BF8760: FQDN: Feb 19 2019 12:46:47.550 -0800: Header : [WorkflowEngine] Entering 'Busy' state, from: 'loggingIn'. Triggering start of queued event processing.
...
2019-02-19T15:46:47.412 -05:00: BF8760: FQDN: Feb 19 2019 12:46:47.725 -0800: Header : [WorkflowEngine] "" IS_EQUAL "Voice" evaluates to FALSE

So far so good, but we’ve not gotten to our workflow which is supposed to launch on call arrival.

2019-02-19T15:48:14.103 -05:00: BF8760: FQDN: Feb 19 2019 12:48:14.419 -0800: Header : [WorkflowEngine] Entering 'Busy' state, from: 'idle'. Triggering start of queued event processing.
...
2019-02-19T15:48:14.191 -05:00: BF8760: FQDN: Feb 19 2019 12:48:14.507 -0800: Header : [WorkflowEngine] Evaluating conditions for workflow: {"workflowName":"CASE SEARCH","eventType":"Dialog","eventUri":"/finesse/api/Dialog/33558863"}
2019-02-19T15:48:14.191 -05:00: BF8760: FQDN: Feb 19 2019 12:48:14.507 -0800: Header : [WorkflowEngine] "ATTORNEY" CONTAINS ""ATTORNEY"" evaluates to FALSE

As you can see in the last line, our workflow is supposed to launch if ATTORNEY contains ATTORNEY, but the condition value has quotes around the string, which causes it not to match. Going into Finesse admin and changing the workflow condition to drop the quotes fixed the issue right up.
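Scrolling through the raw locallog gets tedious, so here is a small helper I might use to pull out just the WorkflowEngine evaluations from a saved copy of the log; saving the page to a text file first, and the file name, are my own assumptions:

# Save the /desktop/locallog page to a text file, then filter it for the
# WorkflowEngine lines that show how each workflow condition evaluated.
with open('finesse_locallog.txt', encoding='utf-8') as log:
    for line in log:
        if '[WorkflowEngine]' in line and 'evaluates to' in line:
            print(line.strip())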

~david

Deploy to Firebase hosting using Bitbucket Pipeline

On and off I’ve played around with Firebase, and one thing I had never tried was the hosting part of Google’s offering. I spent a bit of time a few nights ago getting familiar with it, and I didn’t like the fact that I had to deploy from the command line, as I generally like to do my deployments via git.

This excellent blog post showed me what I needed to do, and everything seemed easy enough. Here is my original pipeline.yml:

image: node:8.4.0
pipelines:
  branches:
    master:
      - step:
          deployment: production
          caches:
            - node
          script:
            - npm install -g firebase-tools
            - firebase deploy --token=$FIREBASE_TOKEN --project mySite --non-interactive

When my pipeline ran I received the following error:

Error: Authorization failed. This account is missing the following required permissions on project <project>:
firebase.projects.get
firebasehosting.sites.update

The issue was resolved when I looked at the --project parameter. Everything I read said project name, but in reality it needs to be the project ID, which you can get from the Firebase project console. Once this was rectified, the pipeline ran successfully.

~david