July 1, 2009

In the wake of JD Power’s Initial Quality Survey, several other lesser-known awards are giving OEMs a whole new reason to cobble together a press release touting their top place, improvement or mere presence in one of these meaningless satisfaction surveys. And why not? It’s summer, and things (sales, specifically) are slow. And the award fandango is win-win. The awards allow OEMs to ridiculously inflate the importance of their results, while publicizing the research firms that created the awards. Case in point: the Dodge Ram.

The Ram got top full-size truck honors in the “Strategic Vision Total Quality Index,” a result that prompted the Chrysler Blog headline “Ram Ranked as Best Truck Ever (No Exaggeration).” Except that the survey (like so many meaningless surveys) only gathers impressions of quality and satisfaction from owners of 2008/2009 models, providing a less-than-complete picture of “total quality.” In other words: yes, exaggeration. But by embracing subjectivity and endless categorization, the awards dance keeps shuffling along.

“We know Total Quality is strengthened by delighting customers and getting them to love you. We stand ready to include love in all the work that we do since measuring love is the next step in discriminating between winning and losing in today’s competitive environment,” explains Strategic Vision’s Darrel Edwards.

But how do you measure such an ineffable emotion with any reliability? As the Bard put it, “love is not love which alters when it alteration [Ed: or awkward panel gap] finds, Or bends with the remover to remove. O, no! It is an ever-fixed mark.” In short, who doesn’t love their new car? Finding out whether a car lives up to its owner’s expectations is more a measure of the owners than the car.

“Vehicles that score highest in the Vehicle Satisfaction Awards hit the mark with their buyers by delivering value and satisfaction across a wide range of attributes,” says George Peterson, of Auto Pacific, and grand pimp of the 2009 Vehicle Satisfaction Awards. “The winners perform well in 48 separate categories that objectively measure the ownership experience.”

Leaving the challenge of “objectively measuring satisfaction” aside for a moment, that’s 48 freaking categories! Which means every OEM is guaranteed to have at least one “class-leading” vehicle to brag about in a press release that backhandedly legitimizes the award. Which is the whole point.

Not that such circle-jerkery is necessarily an inherently bad thing. People often buy cars for irrational reasons, a fact that has gone a long way towards making the auto industry what it is today. If consumers want to factor an aggregation of opinion and after-the-fact purchase justification into their decisions, so be it. But it’s not like either partner in the awards fandango acknowledges that the data in question is scarcely an improvement on a single random opinion of a given car.

“In a year that promises to be the toughest in more than a decade, car buyers are being especially prudent, and the data we’ve analyzed for the Vehicle Satisfaction Award will help this year’s customers make wise purchase decisions,” says Peterson of his award. “We’ve found that more than 25% of respondents are positively influenced by awards like the VSA when deciding on a car and this trend will certainly continue given the economy.”

But wise purchase decisions have nothing to do with it. These awards are little more than marketing information, to be overemphasized by marketing departments. For the consumer, a test drive will tell you more about your likely satisfaction with a given vehicle than any survey can (incidentally, whatever happened to the 24 hour test drive?). Meanwhile, despite slow sales across the industry, every OEM has at least one “winner.” And therein lies the real problem.

The proliferation of meaningless awards contributes to what is already one of the banes of the auto industry: attention span drain. Just as most consumers would be hard pressed to match every automotive brand with its OEM, the public is so inundated with quality survey awards that it’s impossible to expect consumers to separate the wheat from the chaff. And the wild divergence in results only adds to the confusion.

Jaguar/Land Rover and Volkswagen, for example, may rank towards the bottom in more objective long-term quality and reliability testing, but a press release based on the opinions of buyers who have yet to experience engine sludging or electrical issues conveniently allows them to tout their quality and out-publicize their negative results.

Meanwhile, the awards keep on coming. There are infinite paths to an ill-advised vehicle purchase, but awards purporting to measure intangible attributes using questionable methodologies continue to be the best publicized of the bunch. Deluding consumers and OEMs alike may be good for business, but not in any meaningful or sustainable way.

Consumers, in particular, would be well served to ditch the annual awards and focus instead on methodical, long-term reliability studies such as Consumer Reports or True Delta. If emotional reactions to a vehicle are (for some reason) important to your buying decision, even online forums offer a broader range of reactions and dialogue than an awards aggregate. The truth is out there, but only if you look past the press releases touting useless awards.

17 Comments on “Editorial: The Truth About Car Awards...”


  • KixStart

    The survey that’s most important to me is the Mr. and Mrs. KixStart Customer Satisfaction Review.

    After that, CR and TrueDelta.

    I do admire these other organizations, mentioned in the editorial, for their creativity in gathering cash.

  •

    Many people have strongly suggested that TrueDelta have awards. My problem with this is that accentuating the best obscures the fact that many competitors are usually close behind it. So I’ve been resisting.

    With quality and satisfaction, the most useful information would be which models fall below a certain minimum threshold. This is something most awards organizations avoid doing–perhaps because there’s less money in it.

  • theflyersfan

    A lot of these auto “awards” remind me of movie reviews – so many outlets screaming the same thing, yet none of it really means anything.

    I personally found that if you check your Consumer Reports, reviews and comments on this site, and also the reviews and comments on Edmunds and TrueDelta (since they are known by so many people) that you’ll get a good idea of what is award-worthy and what car/truck is blowing smoke.

    …and that kid’s Checker Cab, if it was a full-sized real deal cab, would outlast almost everything out there today! Next time, how about the University of Pennsylvania Ben Franklin statue if you want to start a bench-statue motif?

  • highrpm

    I can’t name specific awards here, but I am positive that vehicles with the following parts have won awards at one point or another (at least one per manufacturer so nobody feels lonely):
    - GM 3.4L engine – any year
    - Chrysler auto transmission in the ’90s
    - Toyota 3.0L sludge engine
    - VW 1.8L turbo sludge engine
    - Ford 4.6L with weak con rods (late ’90s)
    - Honda auto tranny for most V6 applications (late ’90s)
    - BMW cooling systems – any year
    - Benz electrical systems – last 10 years
    - Hyundai rusting subframe mounts, weak trannies.

    These are known defects. How can a vehicle with one of these components be “Best Of” anything when it’s a weak design?

  • John Holt

    @Edward Niedermeyer: I appreciate your continuing insistence on calling these what they are: shams. Extortion in some cases, as not only does the OEM pay sources like JD Power to conduct these drivel surveys, but they also have to pony up cash just to make mere mention of the award they [in some cases paid to] win. These awards are a business, not an achievement.

    CR may be as bland and conservative as flat white paint but at least it’s a credible source.

    Should we even make the point that a study of only 20,000 customers given an annual sales rate of 9 MILLION – what with hundreds of possible vehicle models contending – is statistically dubious at best?

    The icing on the cake is the irrelevance of somebody’s “love” for a car. Relying on some hack survey to tell me what car other people “love” implies I have zero emotional integrity of my own on which to make a purchase decision. Those who lack this basic skill should fall into the default line for keys to a Toyota Corollappliance.

  • reclusive_in_nature

    Being that we live in the (mis)Information Age, it’s no stretch of the imagination to believe a lot of user reviews are just posts from paid trolls. In the end it all comes down to individuals taking the initiative to test drive a spectrum of vehicles and finding out what THEY like. The best “award” a manufacturer can get is number of vehicles sold and percentage of profit from said vehicles. It’s in the numbers, baby!

  • 50merc

    Michael Karesh: “accentuating the best obscures the fact that many competitors are usually close behind it”.

    Quite so. That’s one problem I have with CR’s reliability categories. The difference between above-average and below-average might be only a few percentage points. The other drawback is that problem frequency is not the same as out-of-pocket expense. A Mercedes and a Cobalt might both require a new water pump, but I’m sure the repair bills won’t be the same.

    John Holt: a sample of 20,000, even from a universe of 9 million, would yield extremely reliable statistical conclusions IF the sample is randomly drawn. Often there’s a high risk of self-selection (e.g., from angry customers who want to make their unhappiness known). Even the manufacturers, who have all the data from warranty claims, don’t have the complete story.

  • cjdumm

    “Not that such circle-jerkery is necessarily an inherently bad thing.” -RF

    I respectfully disagree, although I would debate whether to properly call it ‘incest’ or ‘mutual masturbation.’

    For years I wondered if JD Power were simply the advertising arm of Chrysler, the way ‘Fisher Body’ was the in-house coach-builder for GM. For a ‘survey’ to have any merit it has to name the good AND the bad, and it can’t be divided into 7,227 subcategories. (And the nominees for “Best Domestic Sedan Under $14,000 Base MSRP Including Dealer Incentives For First-Time Purchasers With Credit Scores Above 620 That Comes With Remote Keyless Entry And Is Not Available With A Manual Transmission But Comes With Stock Alloy Wheels” are?)

    That’s why I’ve trusted CR, TrueDelta, and Edmunds. And TTAC’s own B&B.

  • psarhjinian

    Worth noting about CR is that they do not allow their rankings or findings to be used in product advertising. A dealer can keep a copy around the showroom, but that’s about it.

    This is why you never, ever see something like “Volkswagen Passat: Consumer Reports Top-Rated Midsize Sedan Four Years in a Row”.

    I feel that gives CR a lot of credibility.

  • redrum

    When awards are based on well thought out, fair judgment criteria they can be quite useful. But the temptation to unduly influence existing awards, or create sham awards for marketing purposes, is very strong.

    The Simpsons did a hilarious episode where Homer “wins” an award for “Outstanding Achievement In The Field Of Excellence” (which in reality is a sham award created by Mr. Burns to get Homer to sign a legal waiver).

    So many of these out-of-nowhere awards seem to be cut from the same cloth. I have no doubt that, with enough time and money, anyone can make an award for anything and eventually people will begin referring to it as if it really means something, even when in reality they have no idea how the result was arrived at.

  • Runfromcheney

    theflyersfan:

    That is why I just read Ebert’s review and be done with it.

    ANYWAYS, these awards annoy me so much just because I see the adverts touting them, talking about initial quality, trying to imply that the car’s reliability is determined by how well the dash is screwed together. The opposite can be true. My 1994 Chevrolet Cavalier has an interior that could probably be taken apart with a butterknife, but it has 225,000 miles and still runs and drives like brand new.

  •

    As 50merc points out, a sample size of 20,000 can be sufficient. The size of the population is irrelevant when determining the necessary sample size. Instead, it’s a matter of what you’re trying to measure, how much it tends to vary, and how precisely you need to measure it.
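
    [Ed: the sample-size point made in this thread can be made concrete with the standard margin-of-error formula for a proportion. This is a generic statistics sketch, not anything from the surveys themselves; the 20,000 and 200-nameplate figures are the commenters’ own guesses above.]

    ```python
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a proportion estimated from a simple
        random sample of size n. Note it depends on n, not on the size
        of the overall population -- exactly 50merc's point."""
        return z * math.sqrt(p * (1 - p) / n)

    # A random sample of 20,000 pins a proportion down to under 1 point:
    print(round(margin_of_error(20_000) * 100, 2))  # -> 0.69 (percent)

    # But sliced into ~200 nameplates of ~100 respondents each,
    # each per-model estimate is far fuzzier:
    print(round(margin_of_error(100) * 100, 1))     # -> 9.8 (percent)
    ```

    The caveat both commenters raise still applies: the formula assumes a truly random sample, so self-selection bias can swamp either number.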

  • wsn

    I remembered the Chrysler ad where the bald German guy (Dr. something) suddenly found a JD Power something in a Caravan backseat…

    I mean, is that all the IQ Daimler has got?

  •

    I had a 24 hour test drive of the G8 GT a week after my quick test drive. In fact, that’s what sold my wife and me on the car. I’ve known other people that have gotten that, so it’s still around.

  •

    Years ago, British auto magazines would list very detailed breakdowns of each new car’s maintenance schedule (including the labor hours in each) and the cost of common repairs and commonly replaced parts (ranging from a new clutch to brake pads to new headlamps). They eventually also included projected depreciation.

    This was far less catchy than J.D. Power, but a lot more useful.

  • John Holt

    I should clarify my “statistically dubious” remark. I understand 20K should be a significant sample size, but if you assume 200 (guessing) or more nameplates available to consumers, it averages to 100 respondents per nameplate, give or take a significant chunk based on the vast differences in sales volume. Further add to it the complexity of judging such soft figures as “customer satisfaction” based not on objective data but on feel-good fuzzy feelings, combined with regional differences in vehicle preference (a Floridian might chastise a Subaru for poor fuel economy because of its AWD, where a Snow-belter would swear by the car), and you have a mess of incongruence with the data.

    Sites like TrueDelta at least allow the consumer to pick through and analyze the hard data in front of them to make informed decisions; even if the data may not (in some cases) be statistically significant, at least it is factual and open.

