If you were really a member of the wine elite, you’d know this already: wine competitions are ruled by a powerful, secretive few, the so-called ‘concours illuminati’. They are a cadre of wine wonks with the sole mission of publishing fake wine awards, a fraternity plotting against consumers on behalf of a united council of wineries from all over the world.
Every so often a similar, albeit less dramatic, conjecture gains traction and attracts a lot of attention from people trying to prove it true, sometimes on the basis of statistics.
There are studies (such as the work done by statistician Robert Hodgson) claiming that the same wine expert is incapable of giving the same wine the same score at different competitions when tasting blind.
I admit that variations in each judge’s scores are indeed likely (on average four points on a 100-point scale). The firm figures don’t lie, but in my opinion they are irrelevant.
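To see why a four-point swing is unremarkable rather than scandalous, consider a minimal sketch. The noise model, the wine’s “true” quality and the medal cut-offs below are all assumptions for illustration, not the bands of any real competition:

```python
import random

random.seed(42)

# Hypothetical wine on a 100-point scale; the true quality is assumed.
TRUE_QUALITY = 91

def blind_score(true_quality, spread=5):
    """One blind tasting: true quality plus random tasting-condition noise."""
    return true_quality + random.randint(-spread, spread)

def medal(score):
    """Simplified medal bands (assumed, not any competition's real cut-offs)."""
    if score >= 95:
        return "Gold"
    if score >= 90:
        return "Silver"
    if score >= 85:
        return "Bronze"
    return "No medal"

# Score the same wine ten times under slightly different conditions.
scores = [blind_score(TRUE_QUALITY) for _ in range(10)]
medals = [medal(s) for s in scores]
avg_swing = sum(abs(s - TRUE_QUALITY) for s in scores) / len(scores)
print(scores)
print(medals)
print("average swing:", round(avg_swing, 1))
```

Even with an average swing of a few points, the scores all cluster around the wine’s quality; what shifts is where they land relative to a medal threshold, which is exactly the kind of variation the figures pick up.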
Judges are not machines. Neither am I and, as a competition judge, I’m also incapable of attributing the exact same score to each wine each time. There are several reasons that come to mind: the length of the flights, judging conditions, quality of wine service, order in which the wines are presented, bottle variation and other distractions.
But it would be wrong to conclude that wine experts fare no better than the toss of a coin. If that were the case, faulty wines would also pick up awards. Personally, I haven’t yet witnessed that at any of the competitions I’ve been part of.
In fact, the chances of an inferior albeit correctly made wine getting lucky are very slim at the wine competitions I deem rigorous and honest. These are the Concorso Enologico Internazionale in Verona and Emozioni dal Mondo: Merlot e Cabernet Insieme in Bergamo, Italy; the International Wine Challenge and Decanter World Wine Awards in London, England; the Chardonnay du Monde in Burgundy; and the Challenge International du Vin in Bourg, France.
On the whole these wine competitions reward good wines, albeit not always the same ones.
This, of course, leads us on to the next argument: that wine competitions just don’t make sense because one and the same wine will get one rating at one competition and an inconsistent score at another.
The rebuttal is simple. It’s a matter of stochastic independence, another way of saying that two or more events – in this case wine competitions – are not related, like the results of rolling a die. In other words, a wine winning a medal in one competition doesn’t affect what it will or won’t win in another. In fact, statistically one would expect just that: a wine is likely to receive different scores because the variables are not the same. It’s what you’d expect from a different competition with different rules and different entries, competing against different wines before different judges.
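The die analogy can be made concrete with a short simulation. The 30% medal chance below is an assumed figure purely for illustration; the point is only that for independent events the probabilities multiply, so matching results across two competitions are naturally rare:

```python
import random

random.seed(0)

# Assumed probability that a given wine medals at any one competition.
P_MEDAL = 0.3
TRIALS = 100_000

def wins_medal():
    """One competition outcome, drawn independently of every other draw."""
    return random.random() < P_MEDAL

# Count how often the same wine medals at two independent competitions.
both = sum(wins_medal() and wins_medal() for _ in range(TRIALS))
observed = both / TRIALS
expected = P_MEDAL * P_MEDAL  # independence: probabilities multiply

print("observed:", round(observed, 3))
print("expected:", expected)
```

The simulated frequency of winning at both events converges on 0.09, not 0.3: even a genuinely good wine should be expected to medal inconsistently, with no conspiracy required.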
In 1999 Decanter magazine arranged for a flight of 10 great Chardonnays, principally from California, to be tasted at the same time by a professional panel in London and another esteemed group of tasters in New York. The results turned out to be different. Opinions of the same wines can vary greatly among different people, especially among tasters from different cultures, countries and geographical areas, which condition preferences and stylistic leanings to a certain extent.
Randomness might rule our lives and thus also wine competitions, at least to a certain degree. But that doesn’t prove that there is deliberate bias or that medals awarded at wine competitions are pointless.
Having taken part in several competitions, I believe that the way the better competitions are structured eliminates most inconsistencies. Judges score the wines based upon their own preferences, of course, but they may at best recommend a medal level. The scores and any medal recommendations are usually collated and interpreted by an objective committee working from the judges’ numbered scoring sheets to reach a final consensus.
As a ‘Maltese wine watcher’, I have seen particular vintages of a handful of wines score consistently well when tasted blind by experienced tasters and panels. Considering that these Maltese wines are no blue-chip wines and are virtually unavailable outside Malta’s shores anyway, there is no reason to suspect false ratings. Of course, I know of one or two dubious wines that have struck lucky in one wine show or another, but it’s a very, very rare wine that can deceive several sets of judges.
Personally, I look out for wines that have been given awards in three or more reputable competitions by judging colleagues that I know fairly well or am acquainted with through the Circle of Wine Writers. The award-winning wines may not all be the best in the world but at least they meet certain professional standards.
Wine competitions provide information not only to me but also to consumers. They can help sift through the huge number of wines out there. At the very least, an award signals that the winning wine was found by the judges present to be of a certain calibre, worthy of praise relative to the entries in its category at a particular event. Moreover, wine contests help to promote the industry and inspire winemakers in Malta and all over the world to do better.
The idea that the ratings and medals on which wines base their reputations are merely a powerful illusion doesn’t fly. Conspiracy theorists who keep insisting that the results of wine competitions can’t be reproduced and are therefore fake or worthless should reconsider. The vinous JFK conspiracy theory of wine contests doesn’t hold water.
Live your Wine!