
Measuring Marketing: Six Ways To Do It Better


Pressed for measures, it is sales and cost trends, numbers of qualified staff and so on that are usually the first to be wheeled in by the marketing function. But as marketing professionals know to their cost, such numbers used in support of performance can easily be knocked down. Are those glossy sales figures simply the effect of all boats being lifted by a rising tide of demand or prices? Is spending more money a good idea or a bad one? So what if staff are qualified – what do they achieve?

The next stage may be to try something more convincing, starting with figures showing how low the proportion of marketing spend to sales is compared with the industry average. Then there might be a chart showing trends in discounts squeezed out of targeted media. Alas, these too are vulnerable in the budget bear-pit. So what if we spend less than others if they do better? And can the figures be compared anyway – they're much bigger/smaller than us? Can we really show that we got better discounts than the others?

At this stage the temptation may be to give up, on the grounds that it's too difficult, and appeal to the chief executive on first principles: the marketing function creates revenue, the rest of your team don't – trust us. The temptation should be resisted. It's right to be asked to justify the use of scarce resources and to measure the value of investing in a marketing function compared to other uses for the money. So let's see what's possible.

WHAT TO DO?

The first step is to recognise – and let others know you recognise – that not everything is possible.

To start with there are issues about the boundaries of marketing. The positioning of decisions about product and price varies hugely, which makes performance comparisons difficult for the whole of the function. And even when marketing functions are responsible for similar tasks, how far they are integrated into the rest of the business may vary – the more successfully embedded in the line, or in cross-disciplinary teams, the more difficult it is to separate marketing's own contribution.

Then there are the measurement problems. To take two straightforward examples: one is that we're not going to be able to put everything into numbers, so there's a need to take account of the parts which can't easily be counted, such as creativity and staff quality; the other is the inevitability of a year-end effect on the way sales are booked.

Finally, there's the decision on how to handle the difference between the success of the marketing function and the success of marketing. The question here is who decides policy. If all aspects are decided by the function, then they should take the plaudits for good marketing results and the brickbats for disasters. If some or all policy is decided elsewhere – in corporate headquarters, on another continent or even in another part of the business – performance measures should cover only the parts for which a local marketing function is responsible. Do you first hear about your company's product launches from the trade press? Then why is 'per cent of revenue due to launches in the past three years' part of your set of measures? A commentary on performance should make clear where responsibility lies.

Recognising these limitations, here are six ways to improve the performance measurement process.

1 Make Sure Performance is Linked to the Organisation's Objectives

Many in marketing may take it for granted that 'What's good for us is good for the organisation', but keeping the function in the mainstream of the business, as opposed to a marketing-centric world, means clearly connecting to the expectations, objectives and constraints of the organisation as a whole. In practice it means linking more firmly to overall financial objectives, assuming these are driving the business.

One example would be agreeing measures with finance to judge the trade-offs between cash flow targets and credit policy. Another is measuring investment in the long-term cultivation of selected customers in a way that matches profit targets. The impact of the measures needs to be reinforced by setting targets at levels defined by the organisation's objectives, not by marketing.

2 Improve the Sophistication of the Measures and their Use

More sophisticated means better, not more. The inclination may be to run riot with measures, but beware a loss of focus – probably the most common reason why balanced scorecards are dropped. Nor does sophisticated mean pseudo-scientific garbage or the incomprehensibly complicated.

Although there is room for some measures showing marketing activity and cost, such as the use of data envelopment analysis (1), the focus should be on what's achieved, not what's being spent or done, subject to that being the responsibility of marketing.

Price premium or higher volume at parity price are examples, provided they can be tracked to give even a credible approximation. Most cost and activity measures ('Advertising cost per thousand target buyers'; 'Average number of calls per sales person per day') are best kept for internal use, since outside the function they are likely to provoke a 'So what?' reaction.
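To make footnote (1) more concrete, here is a minimal sketch of an input-oriented data envelopment analysis comparing a handful of sales units. The unit data – inputs of sales-force cost and advertising spend, outputs of revenue and new accounts – are invented for illustration, and the scipy library is assumed to be available.

    # Minimal input-oriented CCR data envelopment analysis over hypothetical
    # sales units; all figures are invented for illustration.
    import numpy as np
    from scipy.optimize import linprog

    # Rows = sales units; inputs = [sales-force cost, advertising spend],
    # outputs = [revenue, new accounts].
    inputs = np.array([[120.0, 40.0], [150.0, 55.0], [100.0, 30.0], [130.0, 60.0]])
    outputs = np.array([[900.0, 35.0], [950.0, 30.0], [820.0, 40.0], [700.0, 25.0]])
    n_units = inputs.shape[0]

    def ccr_efficiency(unit):
        """Relative efficiency (<= 1.0) of one unit against all the others."""
        # Decision variables: [theta, lambda_1 ... lambda_n]; minimise theta.
        c = np.r_[1.0, np.zeros(n_units)]
        # Inputs: sum_j lambda_j * x_ij <= theta * x_i,unit
        a_in = np.c_[-inputs[unit], inputs.T]
        b_in = np.zeros(inputs.shape[1])
        # Outputs: sum_j lambda_j * y_rj >= y_r,unit
        a_out = np.c_[np.zeros(outputs.shape[1]), -outputs.T]
        b_out = -outputs[unit]
        res = linprog(c, A_ub=np.vstack([a_in, a_out]), b_ub=np.r_[b_in, b_out],
                      bounds=[(None, None)] + [(0, None)] * n_units)
        return res.fun

    for u in range(n_units):
        print(f"Sales unit {u + 1}: relative efficiency {ccr_efficiency(u):.2f}")

A score of 1.0 puts a unit on the efficient frontier; a lower score shows by how much its inputs could shrink while still matching its current outputs, given what the other units achieve.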

It hardly needs saying that the most straightforward way to improve sophistication is to use existing data to gain fresh insights. Different connections from the database, or peeling away the onion layers to get a better understanding of what's happening on the ground, may be all that's required. Instead of 'Sales staff turnover' as a whole, what's needed is a division of losses between those we didn't want to lose and the people we are pleased to see go. More website clicks? Yes, but what's the evidence of links between clicks and sales?
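As a hedged illustration of re-cutting data that is already to hand, the sketch below splits overall sales-staff turnover into regretted and non-regretted losses and runs a first, crude check on whether website clicks move with sales. The column names and every figure are hypothetical, and the pandas library is assumed to be available.

    # Re-cutting existing data: regretted versus non-regretted staff losses,
    # and a first look at whether clicks bear any relation to sales.
    import pandas as pd

    leavers = pd.DataFrame({
        "name":      ["A", "B", "C", "D", "E"],
        "regretted": [True, False, True, True, False],  # did we want to keep them?
    })
    headcount = 40
    print(f"Total turnover {len(leavers) / headcount:.1%}, "
          f"of which regretted {leavers['regretted'].sum() / headcount:.1%}")

    weekly = pd.DataFrame({
        "clicks": [1200, 1500, 900, 1800, 1100, 1600],
        "sales":  [310, 360, 250, 400, 300, 340],
    })
    # A simple correlation is only a first check, not proof of a causal link.
    print(f"Click/sales correlation: {weekly['clicks'].corr(weekly['sales']):.2f}")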

The next stage is to sharpen the targets and objectives for what is measured. Targets need to be checked against performance in previous years and, if the information is available, against what others can achieve. Vague statements about progress in 'developing the brand' or 'enhancing customer loyalty' are not good enough. Better to go for milestones for projects designed to do either.

Another approach is to make sure that when it's difficult to measure directly, any proxy measures used are credible. Calculating a 'return on marketing investment' will soon be subject to sceptical scrutiny by finance. If the assumptions are then discredited, the credibility of all other figures is affected. Much better to avoid measures where you can't find good ones, and give a cool and realistic analysis of major customer trends and prospects.
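To illustrate why such a figure attracts scrutiny, here is a small sketch of the sensitivity check finance might run on a 'return on marketing investment' calculation. The campaign cost, margin and attribution rates are all hypothetical assumptions; the point is how far the answer swings with the attribution assumption alone.

    # Sensitivity of a 'return on marketing investment' figure to the assumed
    # share of the revenue movement credited to the campaign (figures invented).
    campaign_cost = 200_000.0          # spend on the campaign
    incremental_revenue = 1_500_000.0  # revenue movement over the period
    gross_margin = 0.30                # margin earned on that revenue

    for attribution in (0.2, 0.5, 0.8):
        profit = incremental_revenue * attribution * gross_margin
        romi = (profit - campaign_cost) / campaign_cost
        print(f"Attribution {attribution:.0%}: ROMI {romi:+.0%}")

With the same spend and the same revenue movement, the reported return runs from heavily negative to strongly positive depending solely on how much of that movement is credited to marketing – which is exactly the assumption a sceptical finance director will probe.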

There's plenty around to provoke ideas about greater sophistication. My London Business School colleague Tim Ambler has lots in his book Marketing and the Bottom Line (2) and his suggestions on measuring brand equity are of particular value in this context. An example for those looking for lead indicators for brand/company performance might be to consider the Y & R model.

3 Give Priority to Consistency of Data Over Time

As the tenure of senior executives in all functions gets shorter, many measures get binned before there's time to build up a good run of figures. It's understandable that one of the first actions of a new marketing director is to chuck out the old measures and go for a bright new dawn.

But there are big disadvantages in a high turnover of measures. The data may be flawed, but if they cover a reasonable run of years or quarters they can at least be interpreted, with all their flaws. Short runs of figures, on the other hand, are of limited use, no matter how good the measure. Procter & Gamble's record of keeping data over a long period on such matters as relative price is an object lesson in the value of data consistency. It helps to isolate the effects of marketing efforts and to link suppliers to sales performance.

The most sensible strategy for a new marketing director is to hang on to existing measures for use inside the function for a period until any new measures are established. Old ones can then be dropped if found to be redundant or misleading.

4 Use Comparisons wherever Possible

Of course, you're already looking for best practice, not least because external comparisons will always be more credible than measurement against your own targets or against what happened last year. But differing definitions of what the term 'marketing' covers, and the scarcity of exactly comparable organisations willing to share detailed information, mean that clear comparisons may not be possible for the marketing function as a whole.

So it's probably worth focusing on parts of the function to compare, perhaps through mutual peer review or a benchmarking club (3). Once activities are broken down into smaller elements, these comparisons can be outside your own patch. Best practice in customer support, pricing policy or even discarding pain-in-the-neck unprofitable customers may well be very relevant, even though they're in a different industry or even a different country.

In addition, what about talking to other functions in your own organisation? Research and development and IT may have very similar problems about how to quantify costs and benefits. When was the last time you talked to them about their approach?

5 Improve the Quality of Feedback – Especially Face to Face

You know all about what the customer thinks about the product, but how much do you know about what your colleagues think about marketing? Well, you may argue, they don't really have the experience to make an informed judgement. Yet judgements are precisely what they make in discussions about financial priorities.

The information gap between their understanding and yours is most serious when the need is not only to capture whether marketing is doing a good job, but also to help colleagues understand what is possible. An example here would be agreement about the costs of not undertaking a particular campaign. This kind of discussion is possible only if your other measures are credible. If they are not, careful analysis turns into special pleading.

Face-to-face feedback among a wide range of functions will be the way to get a feel for how marketing is perceived. The discussions can also provide the chance to try out new ideas and give colleagues a sense of ownership about your initiatives. How well did that product launch go? And what are the trade-offs involved in outsourcing?

6 Acknowledge the Limitations

Acknowledging the problems and limitations of measures is a sign of strength, not weakness. It also enhances the credibility of the measurement process. The data on enquiries, lost customers or billing errors may be suspect because the means of picking up the figures is fallible. Much better to acknowledge the issue and deal with it before others find out – in many cases the need is to mitigate the problems, not solve them. If a data problem can't be fixed, drop the measure altogether rather than have misleading results.

Conflicts and ambiguities should be identified so that senior management understand what is possible. Above all, don't quantify everything as a means of justifying it. As with every other function, there will be unquantifiable elements that need to be acknowledged and accommodated and, in any case, a commentary will be needed to explain what's going on. Better by far to have a good description than a precise, but silly, measure.

CONCLUSION

The initial reaction to the call for better measurement of the marketing function is probably impatience at having to bother, worries at what might be shown up or concern about how to do it – perhaps all three. But pressures to measure will only increase, with the arrival of the Operating and Financial Review providing new challenges for forward-looking indicators.

The six suggestions above will help you stay ahead of questions about what the marketing function is adding. As important, they should act as a check on whether current measurement is up to the job. The process of devising, implementing, refining and using the measures will, especially if done in-house, amply repay the time spent, since it will also force thinking about priorities and the role of the function itself.

  1. A statistical technique to compare relative efficiencies, in this case for sales units.
  2. Ambler, T. (2003) Marketing and the Bottom Line, FT/Prentice Hall, 2nd edition.
  3. An agreement to share information with a group of organisations through a third party, so your own figures can be compared to the average of the others but no others are individually disclosed.

This article featured in Market Leader, Autumn 2005.

