Geeking Data Tech for Call Center Operations

December 05, 2016

Data Management Meditation

Our development team is on fire building a next-generation platform for customer support and billing management. Until now, our support teams have worked by opening several browser tabs for our cloud applications. This isn't ideal: each application requires a separate workflow, and the logistics of getting at information ultimately detract from addressing customer needs as smoothly as possible.

We are starting with a smartly tuned, self-service lead machine that will supply data on demand for sales and track the data downstream. At Lumikha we've experimented with this in the past, conceiving of a sort of lead-o-matic where we can smartly import and sort new data, dynamically suppress leads by campaign, retrieve dialed leads from our call center ecosystem, and maintain a complete history on the data from the lead source to the last touch. I'm pretty sure this kind of platform can be bought for a huge sum, and it would still require intervention by managers and a DBA (or two).

Many campaign managers use Excel to manipulate data; assume that their DBA has a handle on the import, distribution, and recall of data; or give the centers in their ecosystem free rein to acquire data, either explicitly or by turning a blind eye to agent data mining. Depending on management's stance toward the campaign, they may simply throw unmanaged data at their centers under the assumption that the centers will either sink or swim; after all, there are plenty of BPOs in search of projects. In all cases, managers sacrifice data history for expedience.

The Lead-o-Matic

This first module fulfills the requirements for cyclical data distribution and recall. This is typical for centrally-managed call center campaigns but applies to any project that requires data to leave the system and return enriched with additional information. Email campaigns work in a similar fashion because they rely on specialized services like MailChimp or Campaign Monitor, and the updates returned to the core data repository track opt-ins, opt-outs, click-throughs, conversions, and the like.

In the case of call center work, data is distributed and recalled because too much data will slow a predictive dialer to the point where its predictive algorithms no longer yield speed gains for the center. To prevent overly frequent dialing, leads must be returned and rested. Periodically randomizing the leads has the side benefit of ensuring that all centers work with equivalent data. Finally, the dialing data should be reviewed and analyzed frequently to return insight for campaign optimization. Constant analysis is critical to campaign management, as data strategies must continually evolve for optimal performance.
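That randomize-and-distribute step can be sketched in a few lines. This is a minimal illustration, not our actual implementation; the center names and round-robin dealing are assumptions for the example.

```python
import random

def distribute_leads(leads, centers, seed=None):
    """Shuffle the lead pool, then deal leads round-robin so every
    center receives an equivalent, randomized slice."""
    rng = random.Random(seed)
    shuffled = leads[:]          # leave the master list untouched
    rng.shuffle(shuffled)
    lists = {center: [] for center in centers}
    for i, lead in enumerate(shuffled):
        lists[centers[i % len(centers)]].append(lead)
    return lists
```

Because the deal is round-robin over a shuffled pool, no center's list can be more than one lead larger than another's, regardless of pool size.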

The Prime Directives of Data Management

Unlike Star Trek characters who frequently violate the singular commandment of non-interference, we rigorously observe a few prime directives that we’ve encoded into our data management business rules. These apply to business-to-business data since we are a B2B shop.

1. Never remove a lead unless the business is closed.

2. Comprehensively maintain every lead history.

3. Use leads in serial fashion. No lead can be marketed by multiple campaigns concurrently.
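The serial-use directive lends itself to a simple guard at assignment time. A minimal sketch, assuming a lookup of each lead's current active campaign (the function and field names are illustrative):

```python
def can_assign(lead_id, campaign, active_assignments):
    """Enforce serial use: a lead may belong to at most one active
    campaign at a time. `active_assignments` maps lead_id to the
    name of the campaign currently holding the lead."""
    current = active_assignments.get(lead_id)
    return current is None or current == campaign
```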

Procure Data – First Import

This process is familiar to any list marketer. Someone or something bounces new lead data against their current list to remove duplicates and run an initial suppression. Suppression (also called scrubbing) occurs on first import and when we distribute lists to our centers. Our suppression for fresh data restricts businesses that are unlikely to convert for a variety of reasons and rejects incomplete or improperly formatted data. We also suppress based on TCPA compliance requirements.
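The first-import scrub described above might look like the following sketch. The record fields and the DNC set standing in for the TCPA checks are assumptions for the example:

```python
def scrub(batch, existing_phones, dnc_phones):
    """First-import scrub: drop duplicates against the current list,
    reject incomplete records, and suppress DNC numbers."""
    accepted, rejects = [], []
    seen = set(existing_phones)
    for lead in batch:
        phone = lead.get("phone")
        if not phone or not lead.get("business_name"):
            rejects.append(lead)      # incomplete or malformed record
        elif phone in seen or phone in dnc_phones:
            rejects.append(lead)      # duplicate or suppressed number
        else:
            seen.add(phone)
            accepted.append(lead)
    return accepted, rejects
```

The rejects are kept rather than discarded so they can be returned to the data supplier, as the next step describes.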

Once we remove the rejects and send them back to our data supplier, we conduct a series of passes on the data to append to and update it, making our operations more efficient downstream. Our SIC/NAICS decoder tool is a good example of this process. It allows us to use our own method of business classification by making the SIC and NAICS codes we receive with our data buy more actionable. We align these to Google Business categories for the United States (each country has its own business categories) to accelerate our provisioning process for an online marketing campaign targeting small businesses in the United States.
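At its core, a decoder like this is a lookup from vendor-supplied codes to an internal category. The SIC codes below are real, but the internal category labels and the alignment to Google Business categories are invented for the sketch:

```python
# Illustrative fragment of a SIC-to-internal-category map.
SIC_TO_CATEGORY = {
    "5812": "Restaurant",       # SIC 5812: eating places
    "7011": "Hotel",            # SIC 7011: hotels and motels
    "5411": "Grocery Store",    # SIC 5411: grocery stores
}

def decode_sic(sic_code, default="Uncategorized"):
    """Translate a vendor-supplied SIC code into an actionable
    internal business category."""
    return SIC_TO_CATEGORY.get(sic_code, default)
```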

To keep the semantics clear, we refer to acquired leads as batches and distributed leads as lists. Each has a unique designation. Batches are named by quality, source date, and vendor. Loosely, we have five grades of data based on cost, data points supplied, and the vendor's track record. To be honest, this is pretty subjective, but it's useful for providing a mix or adjusting quality for specific scenarios.
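A batch designation built from those three parts could look like this; the exact naming format shown is an assumption, since the post doesn't specify one:

```python
from datetime import date

def batch_name(grade, source_date, vendor):
    """Build a unique batch designation from quality grade (1-5),
    source date, and vendor name."""
    return f"G{grade}-{source_date:%Y%m%d}-{vendor}"

# e.g. batch_name(2, date(2016, 12, 5), "acme") -> "G2-20161205-acme"
```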

[Diagram: Procure Data]

Distribute Data – The Cycle of List Distribution

Generally we find that sending leads to our ecosystem is a weekly affair: the call centers receive half of the data they require for a two-week cycle, and we send a second list the following week with the balance. Each week the contact centers return the oldest list for import, update, and analysis. We receive daily files of DNC and conversions, and those records are updated accordingly.
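The half-now, balance-next-week split is trivial but worth pinning down, since off-by-one handling matters when a cycle has an odd lead count. A sketch (even split is an assumption; the post only says "half"):

```python
def weekly_lists(cycle_leads):
    """Split one two-week cycle's data into two weekly lists:
    roughly half now, the balance the following week."""
    mid = len(cycle_leads) // 2
    return cycle_leads[:mid], cycle_leads[mid:]
```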

Most of the steps in this process are self-explanatory except “Designate List Parameters.” This step gives us control over the lists we send by filtering or suppressing data based on the campaign requirements as they unfold. Business rules are fluid as the data requirements evolve: if we add centers quickly, we must open up data to accommodate new agents; certain business categories are seasonal and are adjusted based on the calendar; a glut of new data may permit a longer resting period between dialing cycles.
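Those fluid business rules reduce to parameterized filters applied per cycle. A minimal sketch; the field names and parameter keys are illustrative, not the platform's actual schema:

```python
def designate_list(leads, params):
    """Apply campaign-driven list parameters: suppress seasonal
    categories, enforce a minimum resting period, and optionally
    cap the list size."""
    out = []
    for lead in leads:
        if lead["category"] in params.get("suppress_categories", set()):
            continue                      # seasonal/excluded category
        if lead["weeks_rested"] < params.get("min_rest_weeks", 0):
            continue                      # not rested long enough
        out.append(lead)
    limit = params.get("max_list_size")
    return out[:limit] if limit else out
```

Because the rules live in a parameter dictionary rather than in code, a campaign manager can tighten or relax them each cycle without a developer in the loop.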

The data management aspect of the platform, known affectionately as the Data Schef, provides granular data filtering so we can maximize control over the leads on every cycle. This is important because we can use the empirical data collected during the prior cycle to improve performance.

[Diagram: Distribute Data]

Recall Data – The Chickens Come Home

When the lists are collected from our call center partners, we update the records in our database. Our database currently runs on AWS (DynamoDB), but we are moving to Google Cloud for speed and economy. Each record maintains the full history: source, activity, conversion, and post-conversion activity related to provisioning and support.

We typically use two resting periods for active data, based on the lead's last disposition and dial count. For most dispositions, we rest a lead for six weeks. This reduces the likelihood of complaints and negative social media commentary. If, for example, the final disposition reflects “No Answer” after ten dials, we rest the lead for twelve weeks. We also use this longer duration for “Not Interested” dispositions and after a lead has been distributed three times.
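The resting rules above translate directly into a small decision function. The disposition strings and thresholds come straight from the rules described; the function shape itself is illustrative:

```python
def rest_weeks(disposition, dial_count, distribution_count):
    """Return the resting period in weeks for a recalled lead:
    twelve weeks for the hard cases, six weeks otherwise."""
    if disposition == "No Answer" and dial_count >= 10:
        return 12
    if disposition == "Not Interested":
        return 12
    if distribution_count >= 3:
        return 12
    return 6
```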

[Diagram: Recall Data]

Here Come Da Schef

While our platform is hardly unique, we have managed to tune it sufficiently to move data management out of the IT group and into operations, where insights built on our reports give us powerful tools to continually optimize for performance based on demographic data, center and agent effectiveness, dialer parameters, scheduling, and the like.

Contending with data entropy is almost as important: using the Data Schef, we maximize our lead investment and can understand what's happening to our data as it evolves over time.
