The principles behind our scoring

The Warc 100 aims to track and rank the world's smartest campaigns, and the companies behind them.

We tracked 1,773 individual award winners across 75 different effectiveness and strategy awards schemes held around the world. All competitions we track are based on case studies.

The thinking behind the Warc 100 is as follows:

  • Entries to strategy and effectiveness competitions can, broadly, be treated as a global pool of case studies being judged by their peers.
  • Although criteria for these competitions may be similar, not all competitions have equal standing. It is widely held that some competitions are more attractive prospects than others – that may reflect the rigour of the competition, or the prestige of winning (or both). A method is, therefore, required to "weight" competitions.
  • To weight competitions effectively, a methodology is needed that is applicable to all competitions, regardless of market size or judging process, and treats all competitions fairly.
  • The weighting should rely on data, rather than our own subjective opinion.

The Warc 100 methodology was developed in consultation with Douglas West, Professor of Marketing at King's College London.

To avoid prejudicing future entries to competitions, the list of awards schemes we track will remain confidential.


Assigning Award Points

The first step in building the rankings is to assign points to campaigns based on the awards they have won.

Most of the awards schemes under consideration have a single Best in Show (or Grand Prix) winner, as well as a broader group of Gold, Silver and Bronze winners (sometimes with a few standalone or special awards). We assign points on a scale from 10 down to 2 (see the table below).

For competitions that do not use a Gold/Silver/Bronze structure, we have adapted this points scheme to reflect their format.

Award schemes under consideration vary greatly in terms of size. Some awards schemes offer 100 or more individual prizes, others fewer than 10. In order not to over-reward campaigns that have won many awards at a single scheme over those winning awards in multiple schemes, we have capped the number of Award Points a single campaign can win at a single awards scheme at 10. No single campaign, therefore, can gain any more points than that awards scheme's Best in Show winner, regardless of how many individual prizes it won.

Non-case-study awards in these competitions (for example, 'Agency of the Year') are not included.

Step 1: Points are assigned as follows

Grand Prix: 10
Gold: 6
Silver: 4
Bronze: 2
Other 'Special Award': 2
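
As a purely illustrative sketch of Step 1 and the per-scheme cap described above (the Python below is ours, not Warc's internal system; names and data are hypothetical):

# Illustrative sketch of Step 1 (not Warc's internal code): map prize levels
# to Award Points and cap a campaign's total at any one scheme at 10.

AWARD_POINTS = {
    "Grand Prix": 10,
    "Gold": 6,
    "Silver": 4,
    "Bronze": 2,
    "Special Award": 2,
}

def campaign_points_at_scheme(prizes, cap=10):
    """Total Award Points for one campaign at one awards scheme.

    `prizes` is a list of prize levels the campaign won at that scheme,
    e.g. ["Gold", "Silver"]. The total is capped so no campaign can
    out-score the scheme's Best in Show winner.
    """
    total = sum(AWARD_POINTS[p] for p in prizes)
    return min(total, cap)

# A campaign winning two Golds and a Silver (6 + 6 + 4 = 16) is capped at 10.
print(campaign_points_at_scheme(["Gold", "Gold", "Silver"]))  # -> 10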

Assigning the Competition Weighting

The most difficult element of the methodology was developing a means of weighting different competitions that would work consistently across award schemes.

Each competition in the Warc 100 is assigned a score (the Competition Weighting) between 1 and 5 – this is an assessment of how 'hard' the competition is to win, and how prestigious the award is.

Warc has developed a calculation that takes into account a number of factors.

The exact calculation is proprietary to Warc and, to avoid prejudicing entries to future competitions, we cannot reveal the weightings assigned to competitions.

The calculation includes:

Industry perception

It is widely held within the industry that some competitions are harder or more prestigious than others.

To reflect this, Warc has conducted a survey of senior agency planners and strategists. Respondents come from creative, media and digital agencies, and most have either pan-regional or global responsibility. As planners and strategists, they have intimate knowledge of the competitions under consideration for the Warc 100. The panel is regionally balanced across EMEA, the Americas and Asia-Pacific, giving us a truly global view. The results of this survey feed into the Competition Weighting.

The level of 'potential' competition

In theory, competitions that are open to a wider 'pool' of case studies will be harder to win than competitions that limit the size of the pool. So, for example, a global competition will usually be harder to win than a single-market competition. Or a competition that is open to all types of marketing activity will be harder to win than a competition that is only open to, say, digital marketing campaigns.

To reflect this, Warc takes into account how much of the global advertising market each competition represents. It is able to do this using Warc's comprehensive adspend data resources, which include analysis by channel and by geography. Adspend is adjusted for Purchasing Power Parities in order to strip out exchange rate fluctuations, allowing more accurate international comparisons.
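
The exact weighting formula is proprietary, but the 'potential competition' idea can be illustrated in outline: given PPP-adjusted adspend broken down by geography and channel, sum the cells a competition is open to and express that as a share of the global total. The sketch below uses invented figures and is not Warc's calculation.

# Hypothetical illustration only: estimate what share of global (PPP-adjusted)
# adspend a competition's entry criteria cover. All figures are invented.

ADSPEND = {  # (geography, channel) -> PPP-adjusted adspend, $bn (hypothetical)
    ("US", "TV"): 60.0, ("US", "Digital"): 45.0,
    ("UK", "TV"): 5.0,  ("UK", "Digital"): 8.0,
    ("China", "TV"): 20.0, ("China", "Digital"): 30.0,
}

def market_share(open_geographies, open_channels):
    """Share of total adspend represented by the markets and channels a
    competition is open to - a proxy for its 'potential' pool of entrants."""
    total = sum(ADSPEND.values())
    covered = sum(v for (geo, ch), v in ADSPEND.items()
                  if geo in open_geographies and ch in open_channels)
    return covered / total

# A UK digital-only competition covers a far smaller pool than a global,
# all-channel one, so (all else being equal) it would attract a lower weighting.
print(round(market_share({"UK"}, {"Digital"}), 3))
print(round(market_share({"US", "UK", "China"}, {"TV", "Digital"}), 3))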

Verifying our methodology

We have built our ranking system – detailed above – to be objective, based on data rather than personal whim. Our methodology has been developed in consultation with an independent third party: Douglas West, Professor of Marketing and programme director at King's College London.

Step 2: Each competition is assigned a weighting between 1 and 5

Building the scores

For each competition in which a case study wins, its Award Points are multiplied by the Competition Weighting to produce a score.

For example, if a case study wins a Silver in a competition with a weighting of 3, it will score 12 (4 Award Points x 3 weighting).

Many case studies win awards in multiple competitions. So a case study’s final score in the Warc 100 is the sum of all the scores it has achieved in different competitions. Where the same campaign has been awarded at different competitions under different campaign titles, a generic campaign name has been used for all of these entries.

By weighting the competitions between 1 and 5, we have created an awards 'universe' in which the Grand Prix in a competition assigned the lowest possible weighting (10 Award Points multiplied by a Competition Weighting of 1) is equivalent to a Bronze in a competition with the highest possible weighting (2 Award Points multiplied by a Competition Weighting of 5). It also means that the minimum a campaign can win from a single competition is 2 points (2 Award Points multiplied by a Competition Weighting of 1), and the highest it can win is 50 points (10 Award Points multiplied by a Competition Weighting of 5).
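
A minimal sketch of Step 3, using hypothetical competition names and weightings (not real Warc data):

# Illustrative sketch of Step 3: multiply Award Points by the Competition
# Weighting for each competition, then sum the results.

def campaign_total_score(wins):
    """`wins` maps a competition to (award_points, competition_weighting).
    Award Points are already capped at 10 per campaign per scheme."""
    return sum(points * weighting for points, weighting in wins.values())

wins = {
    "Competition A": (4, 3),   # Silver (4 points) at a weighting-3 show = 12
    "Competition B": (10, 5),  # Grand Prix (10 points) at a weighting-5 show = 50
    "Competition C": (2, 1),   # Bronze (2 points) at a weighting-1 show = 2
}
print(campaign_total_score(wins))  # -> 64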

Step 3: Multiply Award Points by Competition Weighting, then add up scores from different competitions

Ranking agencies and brands

Once the scores for campaigns have been calculated, it is possible to assign points to the organisations behind them – both on the client and agency side.

The scores that have been generated for every campaign in the database are assigned to both an agency and a brand. This information is based on publicly released data, such as the winners lists published by awards organisers.

This allows Warc to build rankings of individual agencies, agency networks, agency holding companies, brands and advertisers.

These rankings reflect the points generated from all campaigns in the database, not just the top 100 campaigns in the Warc 100.

Points cap

As with campaign scores (see above), there is a cap of 10 Award Points (equivalent to the Grand Prix) that a brand or agency can win from a single campaign in a single competition.

We have also capped the overall Award Points a single brand or agency can win from a single awards scheme (ie from all its winning entries in one competition) at 20.

We have done this because a small number of competitions in the database award a very large number of prizes, making it possible for agencies or brands eligible for those competitions to pick up a lot of points from a single award scheme. This is unfair on agencies or brands ineligible to enter those competitions (for example, if the competition is in a local market and not open to entries from outside that market).

In reality, it is difficult to reach 20 Award Points from a single show (it is the equivalent of winning a Grand Prix, Gold and Silver for multiple campaigns). As a result, the 20-point cap affects a very small number of organisations in the database.

As with campaign scores, all Award Points are multiplied by the relevant Competition Weighting to produce the scores for agencies and brands.
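
To show how the two caps interact before weighting is applied, here is an illustrative sketch with hypothetical campaigns and a hypothetical weighting (not Warc's code):

# Illustrative sketch of the agency/brand caps: 10 Award Points per campaign
# per competition, and 20 Award Points per competition across all of an
# organisation's winning campaigns, applied before weighting.

def org_score_at_scheme(campaign_points, weighting,
                        campaign_cap=10, scheme_cap=20):
    """`campaign_points` maps each of the organisation's campaigns to its raw
    Award Points at one competition. Returns the weighted score."""
    capped = [min(p, campaign_cap) for p in campaign_points.values()]
    total_points = min(sum(capped), scheme_cap)
    return total_points * weighting

# Three campaigns winning 10, 6 and 4 points (10 + 6 + 4 = 20) hit the
# 20-point scheme cap exactly; at a weighting of 4 that gives a score of 80.
print(org_score_at_scheme({"Campaign X": 10, "Campaign Y": 6, "Campaign Z": 4}, 4))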

Contributing agencies

Agencies listed as ‘contributing agencies’ for a campaign in the database are awarded half the Award Points assigned to ‘primary agencies’ for the same campaign (ie 1 for Bronze, 2 for Silver, 3 for Gold, 5 for Grand Prix, with a points cap of 5 for a single campaign in any competition).

Warc has used the information released by awards schemes to determine which agencies are classed as ‘primary agencies’ and which are classed as ‘contributing agencies’.
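
A minimal sketch of the contributing-agency allocation described above (the half-points values and caps follow the text; function and variable names are ours):

# Illustrative sketch: contributing agencies receive half the Award Points of
# primary agencies, with a 5-point cap per campaign per competition.

PRIMARY_POINTS = {"Grand Prix": 10, "Gold": 6, "Silver": 4, "Bronze": 2}
CONTRIBUTING_POINTS = {"Grand Prix": 5, "Gold": 3, "Silver": 2, "Bronze": 1}

def agency_points(prizes, role, cap_primary=10, cap_contributing=5):
    """Award Points for one agency on one campaign at one competition."""
    table = PRIMARY_POINTS if role == "primary" else CONTRIBUTING_POINTS
    cap = cap_primary if role == "primary" else cap_contributing
    return min(sum(table[p] for p in prizes), cap)

# A contributing agency on a campaign that won Gold and Silver: 3 + 2 = 5,
# which is exactly at the contributing-agency cap.
print(agency_points(["Gold", "Silver"], "contributing"))  # -> 5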

Table builder

We have introduced a Table Builder feature to allow users to create their own rankings from the database. This feature is available only to Warc subscribers.

Users can build rankings of agencies, agency networks, agency holding companies, brands or advertisers, which they can download to an Excel file. They can filter these rankings by product category, geography, or (for agencies, networks or holding companies) agency type.

We have assigned each agency in the database to one of four broad groups: Creative, Media, Digital (including direct agencies), and Other Specialist Agencies – a broad category that includes specialists in PR, brand consultancy and other marketing services.

Only organisations that score 10 points or more across the year are listed in results for these custom rankings.

Because of the effects of the points cap (see above), there may be a very small number of minor discrepancies between custom-built filtered rankings from the Table Builder and the company's overall ranking as published by Warc.

Step 4: Assign points to agencies, networks, holding companies, brands and advertisers

Other issues

Date range

Only awards handed out during calendar year 2013 are included, along with awards handed out in early 2014 that are explicitly labelled as awards for 2013.

Public data

All awards information, including lists of winners and details of judging criteria, is based on data that is in the public domain, whether through public, free-to-access web pages, press releases or other information for the media.

Campaign locations

The location assigned to individual campaigns is based on the national location of the campaign's primary agency. The location of the primary agency is assumed to be the location of the original strategic insight behind the campaign.

Translations

Wherever possible, English versions of each campaign name have been obtained, whether by contacting the original awards scheme directly to obtain a translated version of results, or by using a translation service. Where the same campaign has been awarded at different competitions in different languages, the English version of the campaign title has been used.

Warc competitions

Warc runs a number of case study competitions that meet the criteria of the Warc 100. Where these have been included in the calculations, they have been subject to exactly the same methodology as all other competitions. To ensure third-party scrutiny, the weighting calculations for Warc competitions have been reviewed by Douglas West, Professor of Marketing at King's College London.

Contributing agencies

We have included information on contributing agencies as well as primary agencies, where it has been made available by awards organisers.

Case study authors

Some competitions release the names of the authors of the winning case studies. Where we have been able to source these, we have listed the authors with the entries.

Questions? Feedback? Contact us on warc100@warc.com