Open-source technology makes it commercially viable to harvest impressive customer-data insights, but first leaders must accept the connection between data and customer-centricity.
The rapid emergence of new technology - from cloud computing and mobile devices to open source data processing engines - means that, for a moderate investment, insurers could derive impressive customer insights through data analytics. But first they must open their eyes to the tangible benefits, and the compelling economics, of beginning the data journey.
“In terms of maturity of the industry in using data, we are in the silent movie era, slowly moving into the ‘talkies,’” observes Gary Richardson, Director of Data and Analytics Engineering with KPMG in the UK. “With a few early successes, it won’t be long before we rapidly graduate to high-definition TV.”
But why hasn’t the industry moved faster? “Part of the problem is that companies feel no urgency to pursue data strategies when they are making money with their traditional, albeit brittle, data operating models,” explains Gary. “Thus, they don’t want to invest when business is good, nor do they want to invest when business is down, since cost-cutting trumps innovation.”
“The biggest reason to begin the data journey is that it directly aligns with insurers’ desire to become more customer-focused,” notes Gary. “If you want to build customer loyalty and connect customers emotionally with your brand, you need data – and the insights you can get from it – to develop relevant and personalized products that will resonate with customers.”
For example, an auto insurer that prides itself on proactively protecting its customers could leverage the telematics devices that auto manufacturers are now installing in vehicles. Suggests Gary, “Imagine if an insurer streamed this data and could alert customers when their tires are at risk of deflating or offer a first response package for stranded motorists. This illustrates how data can enhance a product offering as well as enable an ecosystem of partnerships to generate new revenue streams.”
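To make the idea concrete, here is a minimal sketch of the kind of alerting logic such a service might run over streamed telematics readings. The data shape, field names and the 30 PSI threshold are illustrative assumptions, not any insurer's or manufacturer's actual schema.

```python
from dataclasses import dataclass

# Hypothetical low-pressure threshold; a real service would use
# manufacturer-specific values per tire and vehicle.
LOW_PRESSURE_PSI = 30.0

@dataclass
class TireReading:
    """One streamed telematics reading (illustrative shape)."""
    vehicle_id: str
    tire: str          # e.g. "front_left"
    pressure_psi: float

def pressure_alerts(readings):
    """Scan a stream of tire readings and yield an alert for any
    tire at risk of deflating."""
    for r in readings:
        if r.pressure_psi < LOW_PRESSURE_PSI:
            yield (f"Vehicle {r.vehicle_id}: {r.tire} tire at "
                   f"{r.pressure_psi:.1f} PSI - possible slow leak")

stream = [
    TireReading("V123", "front_left", 34.2),
    TireReading("V123", "rear_right", 27.5),
]
for alert in pressure_alerts(stream):
    print(alert)
```

In production the generator would be fed by a streaming pipeline rather than a list, but the customer-facing logic (watch the stream, notify proactively) is the same.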
But why heed the big data rallying cry now? Gary observes that current technology forces make it easier and cheaper to build a data strategy focused on the customer. Among them are cloud computing, which creates the scale, elasticity and economic model to store and manipulate data, and mobile devices that let companies stream data from the source, as with the tire-pressure alerts mentioned above.
In addition, the emergence of big data capabilities, such as the Hadoop open-source data processing ecosystem, makes it economical to build data collection engines that bring together company data previously locked in separate, hard-to-join silos. This lets insurers move from data collectors to data connectors and drive actionable insight.
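The "collector to connector" shift is, at its core, a join across silos on a shared customer key. The sketch below illustrates the idea with in-memory records; the dataset and field names are hypothetical, and at real scale this join would run on an engine such as Hadoop or Spark rather than in plain Python.

```python
# Two previously siloed datasets (illustrative): policy administration
# records and claims records, linked only by a shared customer key.
policies = [
    {"customer_id": "C1", "product": "auto", "premium": 820},
    {"customer_id": "C2", "product": "home", "premium": 640},
]
claims = [
    {"customer_id": "C1", "claim_amount": 1200},
]

def join_silos(policies, claims):
    """Connect the silos: attach each customer's claims to their policy
    record, producing a single customer-centric view."""
    claims_by_customer = {}
    for c in claims:
        claims_by_customer.setdefault(c["customer_id"], []).append(c)
    return [
        {**p, "claims": claims_by_customer.get(p["customer_id"], [])}
        for p in policies
    ]

for row in join_silos(policies, claims):
    print(row["customer_id"], row["product"], len(row["claims"]))
```

Once the silos are connected this way, questions like "which customers claim most relative to premium?" become one-line queries instead of cross-department projects.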
These converging technologies allow a company to construct something called a ‘data lake.’ As the image suggests, the lake is filled by constant ‘streams’ of data, or ‘cloud bursts’ of large data drops. Then, a network of turbines (the data operating system) churns and processes the data to provide information that can transform how a company sees its customers or does business.
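The lake-and-turbine metaphor above maps onto a simple pattern: land raw events as-is in an append-only store, then run separate processing jobs over it to produce customer-level information. The toy sketch below assumes hypothetical event shapes and function names purely for illustration.

```python
from collections import defaultdict

lake = []  # the 'lake': an append-only store of raw events, any shape

def ingest(events):
    """Fill the lake from a continuous stream or a 'cloud burst' batch."""
    lake.extend(events)

def interactions_per_customer():
    """One 'turbine': process the raw events into information,
    here a simple count of interactions per customer."""
    counts = defaultdict(int)
    for event in lake:
        counts[event.get("customer_id", "unknown")] += 1
    return dict(counts)

ingest([{"customer_id": "C1", "type": "quote"},
        {"customer_id": "C1", "type": "claim"},
        {"customer_id": "C2", "type": "quote"}])
print(interactions_per_customer())  # → {'C1': 2, 'C2': 1}
```

The key design point the metaphor captures is that ingestion and processing are decoupled: data lands in its raw form first, and many different "turbines" can be added later without changing how the lake is filled.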
Gary does caution that this increasing ability to process vast volumes of customer-oriented data gives rise to data privacy concerns: “The ability to be a trusted data broker is crucial to compete on analytics, so we will see the need for data science to drive customer intimacy from ambiguity. That is, as the data is increasingly anonymized, algorithms will have to become smarter at detecting and understanding patterns in data while retaining customers’ rights to privacy.”
Although the idea of creating a data lake may sound like a massive, glacial task, Gary points out that a small investment can reap big results: “Thanks to the confluence of technology forces like cheap cloud computing and data streaming, the economics of building a data lake are really compelling and a company can do it with limited capital, as the infrastructure and software can be dialed up and dialed back on a pay-as-you-go model. Suddenly, the biggest cost is the opportunity cost of not doing it.”
For example, one popular online music service that serves millions of users globally operates with just a five-person team to run the infrastructure and do the analysis from their data lake. Similarly, one UK-based bank built its own data lake – prompted by a regulatory imperative – and is now running multiple data projects to improve client relationship management on a low-cost data platform.
“Today you no longer need to make large upfront investments in hardware and software, since open source tools let you build the proof of concept cheaply and quickly,” explains Gary. “After you see some results, you can scale up the program using cloud computing, enabling the investment to be made in growing your in-house talent.”
Where to start? Says Gary, “Once an insurer defines what outcomes it wants to achieve – such as improving customer loyalty or reducing claims leakage – we can identify the data streams that can contribute, and recommend commercially viable tools to connect the dots and store, process and analyze all of that data.”
First, however, insurers must recognize that a data strategy is crucial to achieving their customer goals. Concludes Gary, “When leaders are educated on what can be done, and how they can do more complex processing and analysis at a cheaper price point, they are ready to begin the journey. Raising an organization’s data literacy is the key to changing its culture and putting it on the path to becoming a data-driven organization with the ability to put customers first.”