More Than You Can See: A Story about Machine Learning

“It can’t be done. You can’t see or hear a difference between the calls.” This was the statement from the security analyst when I told him we would be using AI and machine learning to identify fraudulent sales calls happening in his department. He just shook his head. “I have listened to them all,” he asserted. “The calls sound exactly like regular sales calls, and the only way to tell is after the fraud and theft have been committed.”

My data scientist just smiled wryly; she knew that there are some things we humans just can’t see.

AI and machine learning are everywhere these days. From voice assistants like Siri and Alexa to LLMs like ChatGPT, AI seems to be taking over the world! But here’s the kicker: despite all the buzz, we’re barely scratching the surface of what AI can really do, especially when it comes to business performance improvement.

Let’s break it down. 

AI and machine learning have incredible potential to revolutionise how businesses operate. We’re talking about optimising processes, predicting customer behaviour, automating tasks, and so much more. The companies leading the pack in AI adoption are reaping the rewards. Take retail giant Walmart, for example. They use AI-powered algorithms to optimise their supply chain, predict customer demand, and personalise the shopping experience. As a result, they’ve increased efficiency, reduced costs, and boosted customer satisfaction.

But guess what? Most companies are barely tapping into this potential.

So, why the disconnect? 

Well, for starters, implementing AI can be daunting. It requires investment in technology, data infrastructure, and skilled personnel. Plus, there’s often a fear of the unknown. Will AI replace human workers? Can we trust AI algorithms to make the right decisions? These are valid concerns, but they shouldn’t hold us back from embracing the future.

Back to our security analyst… 

Why had he assumed that the algorithm couldn’t spot a fraudulent call? Because he was basing his insights on what his own brain could process. Much of the data he was looking at was viewed in isolation; it was linear, and he observed it over a wide timeframe. Our brains operate at a slower pace than computers and have a limited working memory, meaning we can only hold and process a small amount of information at a time. Additionally, our brains are prone to biases and subjective interpretations, which hinder our ability to analyse large datasets objectively.

Undaunted by his skepticism, my data scientist and I set about collecting data from the calls.

It is incredible how much more you can analyse using correctly captured data: from the voice recordings we could generate call transcriptions that split each call by participant, the agent and the customer, along with the timestamps and cadence of the conversation. The first hypothesis was that specific words or groupings of words would indicate a fraudulent call. First, we broke down a batch of known fraudulent calls and processed all the individual words used. Then we compared that to a second batch of fraudulent calls.
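For the technically curious, here is a minimal sketch of that first attempt. The transcript snippets and the top-20 cutoff are made-up illustrations (our real input was full call transcriptions), but the idea is the same: count the words in known fraudulent calls, then score new calls by how many of those words they contain.

```python
from collections import Counter

def word_frequencies(transcripts):
    """Count how often each word appears across a batch of transcripts."""
    counts = Counter()
    for text in transcripts:
        counts.update(text.lower().split())
    return counts

# Hypothetical snippets standing in for full call transcriptions.
known_fraud = [
    "yes i can confirm the account details right now",
    "please process the refund to this new card today",
]
second_batch = [
    "i would like to upgrade my plan today",
    "can you confirm the delivery address please",
]

# Take the most common words from the known-fraud batch...
fraud_words = {w for w, _ in word_frequencies(known_fraud).most_common(20)}

# ...and score each call in the second batch by how many it contains.
for call in second_batch:
    overlap = sum(1 for w in call.lower().split() if w in fraud_words)
    print(overlap, "|", call)
```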

The result: Failure. The word-matching approach predicted fraudulent calls with no more accuracy than guessing.

Why did this fail? Because we were not thinking big enough. We were using our human brains rather than trusting in the big data brains of ML algorithms. We were making the same mistake as the security analyst.

After a quick huddle and regroup we changed tactics.

We layered on additional call data, adding demographic and geo-spatial data related to the agent and the customer. We broke down the calls and the phrases so we could see how ‘positive’ they were, how many words were spoken, the length of different ‘sectors’ of the call, and much more. We took all this data – imperceptible to the human eye – and built a model.
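To make that concrete, here is a rough sketch of what “building a model” on those features can look like. The dummy data, the specific feature columns, and the random-forest choice are my illustrative assumptions, not our production pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_calls = 1_000

# Dummy stand-ins for the engineered features: word counts, a sentiment
# ('positivity') score, the fraction of the call with no speech, and the
# length of one call 'sector'. Real features would come from the
# transcripts plus the demographic and geo-spatial data.
X = np.column_stack([
    rng.normal(350, 80, n_calls),   # words spoken by the customer
    rng.uniform(-1, 1, n_calls),    # sentiment score
    rng.uniform(0, 0.3, n_calls),   # 'dead time' ratio
    rng.normal(120, 30, n_calls),   # length of a call sector, in seconds
])
y = rng.integers(0, 2, n_calls)     # fraud labels (random here, so ~50% expected)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```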

The result: Success. We could predict with 80% accuracy whether a call was fraudulent. That meant the security team could narrow their focus to a much smaller group of calls and stop fraud before money was lost, saving the company millions in the process. All because fraudulent customers spoke 23% fewer words, were 19% more positive, and had 17% more ‘dead time’ when no words were spoken.
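Narrowing the team’s focus, in practice, just means ranking calls by the model’s fraud score and reviewing the riskiest first. A small illustrative sketch, where the scores, call IDs, and the 0.8 review threshold are all assumptions:

```python
import numpy as np

# Hypothetical fraud probabilities for a day's calls, as a classifier's
# predict_proba output might look.
call_ids = np.array([101, 102, 103, 104, 105, 106])
scores = np.array([0.05, 0.92, 0.40, 0.87, 0.11, 0.63])

order = np.argsort(scores)[::-1]            # riskiest calls first
flagged = call_ids[order][scores[order] > 0.8]
print("flag for review:", flagged)          # -> [102 104]
```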

There are just some things humans cannot see.

So what is the point of this story?

The point that I continue to make with the executive teams we consult for is that problem-solving using AI is not a pipe dream for them. Applications of AI and machine learning exist for them right now. The data sits under their noses, and all they need is a little imagination and a lot of commitment to solve their business problems.

Are you ready to start? Does your team have the capacity or creativity to find business solutions through AI? Let DigitLab Strategy find the right solution for you with a workshop.
