VMware Fusion 11.5 Windows 10 VM Black Screen on Catalina

I wanted to do a quick post on this as I had to try multiple things to get my Windows 10 VM up and running. There are two posts that helped: TCC DB Updates and the processor change in post 10. To summarize both links:

Turn Off Rootless (System Integrity Protection)

  1. Reboot your Mac and hold Cmd+R to enter Recovery Mode.
  2. Open the terminal.
  3. Enter csrutil disable
  4. Restart

From a terminal run:

tccutil reset All com.vmware.fusion
sudo sqlite3 "/Library/Application Support/com.apple.TCC/TCC.db" 'insert into access values ("kTCCServiceScreenCapture", "com.vmware.fusion", 0, 1, 1, "", "", "", "UNUSED", "", 0,1565595574)'
sudo sqlite3 "/Library/Application Support/com.apple.TCC/TCC.db" 'insert into access values ("kTCCServiceListenEvent", "com.vmware.fusion", 0, 1, 1, "", "", "", "UNUSED", "", 0,1565595574)'
sudo sqlite3 "/Library/Application Support/com.apple.TCC/TCC.db" 'insert into access values ("kTCCServicePostEvent", "com.vmware.fusion", 0, 1, 1, "", "", "", "UNUSED", "", 0,1565595574)'

Turn Rootless Back On

  1. Reboot your Mac and hold Cmd+R to enter Recovery Mode.
  2. Open the terminal.
  3. Enter csrutil enable
  4. Restart

Launch Fusion and ensure that under your Mac’s Security & Privacy preferences:

Accessibility > VMware Fusion is checked
Screen Recording > VMware Fusion is checked

With your VM shut down, go to Settings > Processors & Memory and check these two settings:


Hope this helps another poor soul.


AWS’s AI in the Contact Center Pitch: A Swing and a Miss.

Recently AWS released a “Knowledge Brief” illustrating how Fortune 1000 companies are taking a deeper interest in AI-related products and services for their contact centers. While there are plenty of points that could be argued, for the sake of this post I will focus on the intro graph, as it is the springboard for the whole document, which is built on the Aberdeen Group’s research. Let’s start with the graph:


First, I was surprised by the attribution of the spike in contact center solutions research to the Google Duplex presentation during I/O 2018. Second, the report goes on to state that the red line declining off to the right represents the search results for PBX because “firms are not as active in researching best practices and trends in use of PBX.” These two points struck me as odd, especially if you’re building a whole paper on those two premises, so I took it upon myself to see if I could independently confirm their positions.

Considering the paper states that this is all about research, I decided to go to the world’s research page: Google; specifically, Google Trends. Let’s tackle the spike in research due to the announcement of Google Duplex. You will see that Google registered the term “google duplex” spiking in May, which matches their blog post linked above. The report’s graph has this spike happening in July, which is not correct. But let’s give them the benefit of the doubt that the x-axis is mislabeled, since there certainly was a spike in research on these terms.


The paper’s second point is around the decline of research around the term PBX. The document states “..it’s reflected through the dark red line that’s particularly trending downwards between July and September 2018.” The main reason this caught my eye is the term PBX itself. As those of you in the contact center business know, the term PBX started falling out of use in the late ’90s and even more so in the 2000s, mainly because with VoIP the PBX term is not used as broadly. Make no mistake, things like Cisco’s Communications Manager and Asterisk are PBXes, but they are so much more, which is why the term has fallen out of favor. Given this information, let’s compare how the terms PBX and ACD, a more broadly used term meaning almost the same thing, have trended for the time period this report covers.
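If you want to reproduce the comparison yourself, here’s a rough sketch using the unofficial pytrends library; the tool and the exact timeframe are my choices, not anything from the report:

# Sketch: pull 2018 Google Trends interest for "PBX" vs. "ACD".
# Uses the unofficial pytrends library (pip install pytrends); the
# timeframe is my approximation of the report's window.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["PBX", "ACD"], timeframe="2018-01-01 2018-12-31")
interest = pytrends.interest_over_time()  # pandas DataFrame, one column per term
print(interest[["PBX", "ACD"]].resample("M").mean())  # monthly averages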


Neither term has really seen a decline. Heck, you could argue that PBX saw an increase between May and July while ACD saw an increase after July, ultimately debunking the premise this whole document stands upon.

AI/ML is the hot new topic, but there’s a time and a place for everything. This paper’s whole premise for an AI future relies on faulty data, which causes the whole article to fall apart. This, like many other pieces, is more hype than substance.


My take on easily improving your customer’s experience with not a lot of money and without having to hire me.

Recently I was talking to an acquaintance about our top IVR annoyances. While we debated back and forth on the merits of each annoyance, it got me thinking about the current wave around customer experience and customer journey, and the amount of money and products some companies are throwing at the problem to get marginal improvements. While I’ve been working in contact centers for over a decade, I certainly don’t know it all, but I’ve come to realize that before spending a lot of money, businesses should do a few small things. These small things will provide small improvements and will set you up to be better prepared for bringing in some vendor to help you “revolutionize” your customer experience.

Now, there is no data to back up these thoughts, but I like to think my experience should carry a bit of weight. Here we go:

Your IVR should reflect your personality. Every IVR sounds the same; is your business just like every other business? All businesses stress over print ads, website color schemes, logos, and commercials, but their IVR still feels like every other IVR. Why not carry that stress over to something which can be personalized with just a few words and some voice inflection?

Know your callers. We find ourselves in a data-rich and information-poor world. Are your callers millennials? Are they senior citizens? Is there a specific socioeconomic status which gravitates towards your IVR while others go through a different channel? All of this information is critical in figuring out what options you should be offering in your IVR and what your personality should be.

Make it sound fresh. Has your IVR welcomed every caller with “Thank you for calling…” since the dawn of time? Every modern and not-so-modern IVR in the world can play an array of greetings, use slightly different language depending on the time of year, and create some personalization without much work. No one likes to talk to that one person who always tells the same stories. Your IVR can easily and cheaply break that monotony, sound fresh, and make the wait seem more engaging.
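As a trivial illustration of how little work this takes, here’s a sketch; every greeting string in it is a made-up example:

import random
from datetime import date

# All greeting strings here are hypothetical examples
GREETINGS = [
    "Thanks for calling!",
    "Hi there, you've reached us.",
    "Good to hear from you today.",
]

def seasonal_tag(today: date) -> str:
    # Vary the language slightly by time of year
    if today.month == 12:
        return " Happy holidays!"
    if 6 <= today.month <= 8:
        return " Enjoy your summer!"
    return ""

def pick_greeting() -> str:
    return random.choice(GREETINGS) + seasonal_tag(date.today())

print(pick_greeting())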

Don’t make me tell you again. One of the most annoying things is when you call a company to fix a problem and think it’s solved, only to find out a few hours or a few days later that it’s not. You know what I do, and the rest of the world does? Call right back. It’s very easy for modern IVRs to see that a customer called recently, and there’s a very high likelihood they are calling for the same reason again. So why put them through your self-service menu? Get them immediately to an agent. You failed at first call resolution; you know it and they know it. Here’s a second shot at making it better. Extra points if you get them to the same agent, who will have some context from the first call.
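Here’s a minimal sketch of that repeat-caller check, assuming you keep some store of recent calls keyed by phone number; the 48-hour window is an arbitrary assumption:

from datetime import datetime, timedelta

# recent_calls stands in for whatever call-log store you already have
recent_calls = {}  # phone number -> timestamp of the last call

def route(ani: str) -> str:
    last = recent_calls.get(ani)
    recent_calls[ani] = datetime.utcnow()
    if last and datetime.utcnow() - last < timedelta(hours=48):
        return "agent"  # likely the same unresolved issue; skip the menu
    return "self_service_menu"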

Everyone likes surprises. Every once in a while, send calls to agents without making callers go through the full gamut of the IVR, especially if you know you have agents available. This will also increase your agent utilization. But it only works if your agents are able to handle most types of calls; if you’re going to have to transfer callers, do not do this!

Don’t pretend to care. Saying that my call is important is such an insult. It’s not; otherwise you would have staffed accordingly and not made me wait. Offer to call the customer back instead of holding them hostage to your queue treatment.

Silence is golden. If your call center deals with extremely long hold times, greater than 15 minutes, give the caller the option to hear no music or announcements at all. An occasional beep, and maybe a short message on how to re-enable music, will make the wait much more enjoyable. If I can detect how often your music on hold loops, I will not be very pleasant when the agent takes my call.




Right way to block ANIs using Amazon Connect

In this post I’ll cover a potential financial issue you might face if you try to block customers by ANI and they are calling you through a SIP trunk.

As I continue my journey of getting familiar with Amazon Connect, I ran into an interesting and somewhat worrisome issue. The use case I was working on was to create a table which blocks or allows specific ANIs to call in. Ultimately, when a blocked caller came in, I wanted to just hang up on the call. My original flow looked like this:


Pretty straightforward: invoke the Lambda, check the returned attribute, and if blocked = true, disconnect the call. When calling from my cellphone this worked great. However, when calling from my home phone (using a Flowroute SIP trunk) I got a nice surprise in the logs:


What you’re seeing is a partial log of my home phone constantly retrying to connect to Amazon Connect and generating a new call each time. Since no prompt was played and no ringback was heard, I assume the network believes there was a connection issue and keeps trying to connect, which means you could easily incur a huge expense both with your phone provider and on your AWS bill.

The way to fix this was to play a one-second silence prompt before disconnecting the call.
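For reference, here’s a rough sketch of the kind of Lambda such a flow invokes. The hardcoded blocklist and the numbers in it are purely illustrative; a real setup would look the ANI up in the table described above (e.g. DynamoDB):

# Sketch of the blocklist Lambda the contact flow invokes
BLOCKED_ANIS = {"+15555550100", "+15555550101"}  # hypothetical numbers

def lambda_handler(event, context):
    # Amazon Connect passes the caller's number here
    ani = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]
    # Connect expects a flat map of string key/value pairs back
    return {"blocked": "true" if ani in BLOCKED_ANIS else "false"}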



Connecting an ESXi host to a QNAP NAS using NFS

Interesting little issue I ran into when trying to create a new datastore on my ESXi server: I had to use NFS v2/v3 even though the ESXi documentation states v4 is supported. Here are my specs:


– VMware ESXi v6.0.0 Build 5050593

To configure the NFS share on the NAS, go to Control Panel > Win/Mac/NFS > NFS Service.

QNAP NFS Service

To ensure your share has NFS permissions, click the link that says “Click here to set the NFS access…”, choose your shared folder, select Edit Shared Folder Permissions, and apply the changes.

QNAP NFS host access screen

In ESXi, click on the host server, then go to Configuration > Storage > Add Storage…

1. Network File System

2. Server IP, Folder, Datastore Name

3. Finish

ESXi Locate Network File System

When done, you should see your NAS as a datastore.

ESXi Datastore Configuration
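If you’d rather script the mount than click through the UI, here’s a rough pyVmomi sketch; the host name, credentials, IP, and paths are all made-up placeholders:

# Sketch: mount the QNAP NFS export as an ESXi datastore via pyVmomi
# (pip install pyvmomi). All connection details below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="esxi.local", user="root", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]  # standalone ESXi: the only host in inventory
    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.1.50",  # QNAP IP
        remotePath="/nfsshare",     # exported folder on the NAS
        localPath="qnap-nfs",       # datastore name as ESXi will show it
        accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
finally:
    Disconnect(si)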


Installing Laravel 5.5 & MySQL in Ubuntu 16.04 with Nginx already installed

Here are my notes on how to install these components.

sudo apt-get install php7.0-mbstring php7.0-xml composer unzip
sudo apt-get install -y php7.0 php7.0-fpm php7.0-mysql php7.0-zip php7.0-gd
sudo apt-get install mcrypt php7.0-mcrypt
sudo apt-get install -y php7.0-mbstring php7.0-xml --force-yes
sudo apt-get install php7.0-curl php7.0-json
sudo vi /etc/php/7.0/fpm/php.ini
sudo service php7.0-fpm restart
sudo mkdir -p /var/www/laravel
sudo vi /etc/nginx/sites-available/default
server {
        listen 80;
        listen [::]:80 ipv6only=on;

        root /var/www/laravel/public;
        index index.php index.html index.htm;

        # Make site accessible from http://localhost/
        server_name <serverName>;

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.php?$query_string;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
        }

        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }
}
sudo service nginx restart
cd ~
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
sudo composer create-project laravel/laravel /var/www/laravel
sudo chown -R :www-data /var/www/laravel
sudo chmod -R 775 /var/www/laravel/storage
sudo chmod -R guo+w /var/www/laravel/storage
sudo apt-get update
sudo apt-get install mysql-server
sudo mysql_secure_installation

Using git to push and pull repositories.

I’ve been trying to get more proficient with git and figured the only way to do that is to get my hands dirty, write some simple app(s), and push them to a production server. These are my notes, mainly to help me remember how this stuff works.

The very first time, you do a git init to initialize your repository.

git init

You do this every time you want to add new changes to your repository.

git add .

To see the status of things which are going to be added or removed:

git status

Once you’re happy with what you want to commit, leave your future self a little love note.

git commit -m "Doing something"

You only do this the first time, to set up your remote repository destination.

git remote add origin git@github.com:dmaciasSS/myrepo.git

Then you do this every time you want to push your commit.

git push -u origin master

Now, let’s say I want to pull my repository to a new host e.g. production. If you’re using Laravel make sure you’re setting up your environment name in your vhost file first.

sudo git clone https://github.com/dmaciasSS/myrepo.git location/in/my/server

Laravel specific commands.

chown -R :www-data app/storage

chmod -R 777 app/storage

composer install

php artisan cache:clear

I’ve found for 5.1 you might have to do this the first time too.

mkdir storage/views

mkdir storage/sessions

chown -R :www-data app/storage

Now, once you’ve cloned your repository, you need to update it every time you want to pull down any new commits.

sudo git pull origin master

composer update

php artisan cache:clear


Uniform Server (WAMP) and Laravel

I’ve been playing around with Laravel a bit just to try it out and figured I would bundle this with a lightweight WAMP solution. Went through the install and everything worked fine except when it came time to migrate my database:

C:\UniServer\laravel1>php artisan migrate
{"error":{"type":"Symfony\\Component\\Debug\\Exception\\FatalErrorException","message":"Call to undefined function Symfony\\Component\\Console\\mb_detect_encoding()","file":"C:\\UniServer\\laravel1\\vendor\\symfony\\console\\Symfony\\Component\\Console\\Application.php","line":721}}

Took some googling around, but the solution was pretty easy. Assuming you’re running a stock and fairly recent version of Uniform, all you have to do is modify your php-cli.ini file and enable the mbstring extension, which provides the missing mb_detect_encoding():
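extension=php_mbstring.dll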


Restart Apache and presto.


New Years Resolutions and Data Visualization

At the end of last year (December 2010) I set out to change a few things, the biggest one being to try to burn 10,000 calories a month through exercise. For the first few months I kept track of my calorie expenditure based on a heart rate monitor; when my heart monitor met an untimely death, I switched to an Android app called CardioTrainer. I highly recommend this app for anyone else who wants to keep track of their workouts.

While I was only able to accomplish this 6 times, I have burned over 100,000 calories so far and lost about 10 pounds. I will continue working towards 10,000 a month in 2012. So, now I have a year of data, and I’ve been looking to play around with Google’s Chart Tools, so I figured this would be the perfect combination. Here’s my first stab at visualizing the data I’ve collected so far.



May 2nd 2010 Pittsburgh Half Marathon

In my never-ending attempt to punish my body and hate my knees, I’ve been training for what I hope will amount to another full marathon this year. Along the way I figured it would be a good idea to give a half marathon another shot. If you’ve never been to Pittsburgh, I recommend you visit; it’s called the city of bridges for a reason, and running through its neighborhoods and streets is a very unique experience.

The morning of the race, we arrived with a little under 30 minutes to spare, which was not good. I really didn’t get a good warm-up run in and only stretched out a bit. It was a cloudy day with a chance of thunderstorms, which was an issue, since the race could get called off if there was lightning… the chances of getting hit by lightning are incredibly small, but I digress.

The race began promptly at 7:30 AM, but since I was so far back in the pack I didn’t cross the starting line until about 10 minutes after the gun went off. The first couple of miles are always torture. It’s really a free-for-all: you never hit your stride, as you’re on the lookout for all the other runners, making sure you don’t step on someone and that no one elbows you as they try to squeeze by. At about mile 5 is when the seas start to part and you really get to focus on what you came here to do. The unfortunate part is that you hit the halfway point before you know it. For me this is the hardest part, as I always feel I’ve spent so much energy making it through the pack that I start to run out of gas around mile 9. Oh well, that’s what training is there for, right? At the end of the day it was a great day for a race, even though the rain was coming down pretty hard. Final time below.



My only complaints about this event: first, the lack of Goo. All they had for the runners was water and Gatorade; glad I packed some beans, but I was banking on some Goo to break up the monotony. Second, I didn’t see any post-race beer. Not saying this is imperative, but I love me a beer or three after pounding out a 13-miler. And finally, I should have used more lube. Yes, my armpits were red and tender, and it had everything to do with the huge amounts of water that came down on me while running.