Procurement data quality: the must-have to optimize spend management
As we saw in the first episode, cleaning the data gives businesses excellent visibility into spend and reduces costs. But how best to obtain such a single view? And, what’s more, a single clean view of acceptably high data quality? Talk to procurement professionals and it is not difficult to discover an extensive – and impressive – list of barriers that must be overcome…
Episode 2: A long way towards data quality
First, it’s often a mistake to imagine that imposing a single product and supplier coding framework at the ERP level is the answer, says Milan Panchmatia, managing partner of procurement consultants 4C Associates. “ERP alone is rarely the solution, especially in larger and diversified businesses which have grown by acquisition,” he explains. “To solve the problem at the ERP level, through supplier and product re-coding, there’s a need for a level of local ‘buy in’ that can be very difficult to achieve. Once you start to impose standardized procurement processes and supplier relationship management processes, there’s a level of resistance that technology alone can’t overcome.”
Similarly, a move to a single consolidated ERP solution is not the answer, points out Richard Gane, a former partner at PwC, and a director of procurement consultants Vendigital. “A single global instance of an ERP system can be superficially attractive, but many organisations think it too great a risk: it places a lot of eggs in one basket and imposing a single way of working on a global workforce can be challenging,” he says. “It’s not so much the destination as the journey – in all sorts of areas, there are challenges, risks and costs.”
In any case, adds Jan Godsell, professor of operations and supply chain strategy at the University of Warwick, a focus on ERP can often serve as a distraction, masking a larger underlying problem of data quality. “Cleanliness of data is often a bigger problem than multiple sources – and that is generally a matter of how people have been using the system and generating the data, rather than how it is distributed,” she says. “Companies that are serious about getting to ‘one version of the truth’ for purposes such as spend analysis usually have to invest in data cleaning and data governance processes to ensure acceptable levels of data quality.”
Stephan Freichel, professor of distribution logistics at Köln University of Applied Sciences, agrees, pointing out that even within a single ERP system, data coding assumptions and errors can lead to flawed conclusions. “The result is that it can be extraordinarily difficult to reach spend analytics conclusions with any certainty,” he says. “Without a full understanding of the nuances, the danger is that apples will be compared to oranges. So companies try to apply those nuances manually, massaging multiple Excel spreadsheets by hand – and all the time getting further away from that vision of one version of the truth.”
The reality, then, is that for many businesses those multiple systems will not vanish any time soon, which leaves an awkward circle to be squared: with dirty data spread across distributed systems, how best to clean and consolidate it?
Towards data harmonization
One obvious answer is to throw resources at the task. Extract the data from the various systems, normalise it into a consistent format, and apply raw brainpower to the task of de-duplicating and recoding it.
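In practice, the normalisation step often starts with something as simple as collapsing near-duplicate supplier names. A minimal sketch in Python, using illustrative data and hypothetical suffix rules (any real project would need locale-specific rules and human review):

```python
import re

def normalize_supplier(name: str) -> str:
    """Normalise a raw supplier name: lower-case it, strip punctuation
    and common legal suffixes, so near-duplicates collapse together."""
    name = name.lower().strip()
    name = re.sub(r"[^\w\s]", "", name)  # drop punctuation such as '.', ','
    name = re.sub(r"\b(ltd|limited|inc|gmbh|sa|plc|co)\b", "", name)
    return re.sub(r"\s+", " ", name).strip()

# Records as they might arrive from three different systems (invented data)
raw = ["ACME Ltd.", "Acme Limited", "acme ltd", "Globex GmbH"]
deduped = sorted({normalize_supplier(n) for n in raw})
# Four raw records collapse to two distinct suppliers: ['acme', 'globex']
```

The "raw brainpower" the article mentions goes into reviewing the cases such rules get wrong, which is why these projects are neither cheap nor quick.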
Such projects are typically neither cheap nor quick, however. Although it is possible to assign some of the work to offshore contractors or outsourced labour in regions such as India and South-east Asia, purchasing specialists from each business unit must still liaise to translate local codings of suppliers and purchased items into a single shared lingua franca or ‘Rosetta Stone’ dictionary. Once built, that dictionary can then be used to translate business units’ spend into data that can be used for spend analysis.
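Conceptually, such a ‘Rosetta Stone’ dictionary is a lookup from each business unit’s local supplier code to a single group-wide code. A minimal sketch, with entirely invented unit names, codes, and amounts:

```python
# Hypothetical 'Rosetta Stone' dictionary: (business unit, local supplier
# code) -> shared group-wide supplier code. All values are illustrative.
rosetta = {
    ("unit_uk", "SUP-001"): "GRP-ACME",
    ("unit_de", "L-4711"): "GRP-ACME",
    ("unit_fr", "F-0099"): "GRP-GLOBEX",
}

def to_group_code(unit: str, local_code: str) -> str:
    """Translate a local supplier code into the shared group code,
    flagging anything the dictionary does not yet cover."""
    return rosetta.get((unit, local_code), "UNMAPPED")

# Spend lines extracted from two units' systems (invented figures)
spend_lines = [("unit_uk", "SUP-001", 1200.0), ("unit_de", "L-4711", 800.0)]
by_group: dict[str, float] = {}
for unit, code, amount in spend_lines:
    group = to_group_code(unit, code)
    by_group[group] = by_group.get(group, 0.0) + amount
# Both lines roll up to the same group supplier: {'GRP-ACME': 2000.0}
```

The ‘UNMAPPED’ bucket is where the ongoing maintenance burden shows up: every new local code a business unit invents lands there until someone translates it.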
But unless the exercise is to be a ‘one-off’ piece of spend analysis, those translated codings have to be harmonised and fed back to the individual business units, prompting their own recoding exercises, so the entire enterprise is using the same product and supplier coding and data entry conventions. The result, inevitably, is that what may have begun as a perfectly sensible attempt at spend analysis quickly unravels into a sprawling data harmonisation project, the costs of which are likely to outstrip the gains from the initial spend analytics project.
Even then, says David Food, principal lecturer at the University of London’s Royal Holloway College, it is likely that data differences will re-emerge. “In the long term, even if carried out alongside a data harmonisation project, manual data cleaning tends not to work: people in the various business units start to generalise again, or revert to the standard codings that they use for ease of supplier onboarding. Rather than throwing the problem offshore, standard rules and codings need to be created, adhered to and monitored for compliance. It’s almost like perpetual cycle counting in a warehouse – that continual process of review, refresh, renew and reprocess.”
Are there magic solutions for cleaning the data and improving visibility into spend? The third and final episode will present the alternatives for reaching this goal. See you next week for the last episode!