Customer Care - Call Reasons
The Challenge
Support teams had little insight into why customers were calling. Agents were forced to self-report reasons under strict time limits, leading to inconsistent data and making it impossible for product teams to identify patterns.
Context + Goals
After support was outsourced in 2017, agents had less than 30 seconds between calls to categorize issues, so nearly 70% of calls were logged under generic categories.
Goals:
Develop a framework to accurately capture call reasons.
Train Clarabridge AI to automatically categorize calls.
Reduce the number of self-service-eligible calls reaching live agents.
Methods
Phase 1: Call listening
To identify call reasons, the first step was to conduct call listening sessions. For a week, my colleague and I listened to care calls through the system that supervisors use for quality assurance and training. We logged each call in a spreadsheet, documenting basic call information: the date, the product the support inquiry was about, the Salesforce case number created for the call, and the customer’s email. We then documented the issue in the customer’s own words and listed each step the agent took to troubleshoot or resolve it. Finally, we marked the call as “resolved” or “unresolved” based on whether the agent was able to help the customer by the end of the call.
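As a rough sketch of the data we captured per call, the log looked something like the example below. The column names paraphrase the fields described above, and the example row is entirely made up.

```python
import csv

# Illustrative sketch of the per-call log kept during call listening.
# Column names are paraphrased from the fields described above; the
# example row is entirely made up.
FIELDS = [
    "date", "product", "salesforce_case_number", "customer_email",
    "issue_in_customers_words", "troubleshooting_steps", "resolution",
]

with open("call_listening_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({
        "date": "2018-01-15",                  # hypothetical
        "product": "Example Product",          # hypothetical
        "salesforce_case_number": "00012345",  # hypothetical
        "customer_email": "customer@example.com",
        "issue_in_customers_words": "I can't update my payment method.",
        "troubleshooting_steps": "Verified account; walked through the billing page.",
        "resolution": "resolved",              # "resolved" or "unresolved"
    })
```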
The call listening sessions surfaced several insights that helped improve the customer call experience. We noticed that customers were being transferred multiple times between support teams that specialized in different product areas. We also saw that agents struggled to transcribe customer email addresses over the phone, spending over a minute per call on average going back and forth on spelling and phonetics. Finally, we found that customers were calling in to verify account information they could already view online, driving up call volume for non-issues and tying up agent time.
We forwarded these insights to the care operations teams, who implemented changes such as better routing from the support site and a case-creation step on the website that let customers provide their email address before speaking with an agent. The call listening sessions gave me a better understanding of the frustrations that both customers and agents face during calls. They also gave me a rough estimate of how often care agents see each type of issue on a given day, which helped me speak with agents more concretely about the common issues they solve and how frequently they encounter them.
Phase 2: Affinity mapping
To better understand the volume of each call reason, I conducted affinity mapping exercises with the care operations team and care agents. During interviews, I asked participants to list every call reason they worked on and group the reasons using Post-it notes. One way they sorted the notes was by affinity, with care specialists creating and naming their own groups. This became the basis of a two-tier model for streamlining call reason selection.
Another sorting method classified call reasons by issue complexity and by whether they could be solved through self-service or required a care agent. This revealed two opportunities: simple issues that currently required agent involvement could be made self-serviceable, while complex issues could be addressed with better documentation and walk-throughs to reduce the number of support calls.
Phase 3: Training the algorithm
After collecting enough data, I restructured the system for selecting care call topics into a logical two-step process. Agents could first choose a category (such as billing) and then select a specific topic (such as updating payment method).
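Conceptually, the two-tier model is just a mapping from categories to the topics within them. A minimal sketch, with hypothetical names standing in for the actual taxonomy:

```python
# Minimal sketch of the two-level call reason model; category and topic
# names are hypothetical stand-ins for the actual taxonomy.
CALL_REASONS = {
    "Billing": ["Update payment method", "Refund request", "Invoice question"],
    "Account": ["Verify account information", "Password reset", "Change email"],
    "Technical": ["Admin center access", "Installation issue", "Sync error"],
}

def valid_selection(category: str, topic: str) -> bool:
    """Check that an agent's two-step selection exists in the taxonomy."""
    return topic in CALL_REASONS.get(category, [])

# Example: category first, then a specific topic within it.
assert valid_selection("Billing", "Update payment method")
```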
Around this time, the company began a trial of Clarabridge, a customer experience platform that uses AI-powered text and speech analytics to track call topics, volume, and sentiment. With this tool, we could analyze audio from customer support calls and let the AI determine the call topic without relying on the agent's selection.
During the trial, I helped the customer care team set up the classification rules used to determine call topics. This involved entering words, phrases, or combinations of words that, when detected in the call audio, would tag the call with the corresponding topic.
To further improve the call reasons model, I conducted additional call listening sessions and compared my notes to the call topics identified by Clarabridge. This surfaced issues that needed to be addressed: the speech-to-text system often produced phonetic mistranscriptions, rendering "admin center" as "Edmond center" or "I am in center," for example, and it failed to detect certain product and company names. After a few minor adjustments, the system became highly accurate at identifying call reasons.
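In practice, these adjustments amounted to adding the common mistranscriptions as extra trigger phrases for each topic. A minimal sketch of the idea, written in Python rather than Clarabridge's actual rule builder; the topic names and phrases here are illustrative:

```python
# Minimal sketch of phrase-based topic tagging; topic names and trigger
# phrases are illustrative. Known mistranscriptions are simply added as
# extra trigger phrases for the same topic.
TOPIC_RULES = {
    "Admin center access": [
        "admin center",
        "edmond center",      # phonetic variant picked up by speech-to-text
        "i am in center",     # another common mistranscription
    ],
    "Update payment method": ["update my card", "change payment method"],
}

def tag_call(transcript: str) -> list[str]:
    """Return every topic whose trigger phrases appear in the transcript."""
    text = transcript.lower()
    return [
        topic
        for topic, phrases in TOPIC_RULES.items()
        if any(phrase in text for phrase in phrases)
    ]

# A mistranscribed phrase still maps to the intended topic.
print(tag_call("I can't get into the Edmond center to add a user"))
# -> ['Admin center access']
```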
Key Insights
Agents miscategorized calls due to time constraints.
Many issues could already be solved with existing self-service content, but users weren’t finding it.
AI initially struggled with product-specific terms, requiring manual intervention and retraining.
Outcomes
Designed and piloted a two-level categorization model for care calls.
Improved accuracy of Clarabridge AI, enabling scalable call analysis.
Informed product roadmaps with real customer issues, leading to a reduction in redundant support calls. This included restructuring the self-service support site so customers could resolve low-effort, self-serviceable issues on their own.