Every time an automotive research firm releases the results of a reliability survey, the focus is the same: who “won.” Firms like J.D. Power only publicly release model-level results for the top performers. Even where these firms release scores for all contenders at the make level, journalists focus on the winners. After all, John Q. wants someone to tell him which car to buy in as few words as possible. In the process, any car buyer truly interested in identifying the best car for their needs and wants gets left in the dark.
Yet few people realize this. It seems so natural to focus on the winners, whether the topic is an election, the NBA playoffs, or vehicle reliability. But let’s play “which one of these is not like the others” for a second. If someone wins an election by one vote, they get the office. If a team wins a deciding playoff game by one point, they get to move on to the next round. And if you buy the car that won an award by one point, you get…what?
In most cases, you do not get the car that best suits your needs. When evaluating a car, most people care about more than whether it’s the least likely to break. They’re also likely to be concerned with how it looks, how it drives, how well the seat fits their rear, and more. Often the award winner isn’t nearly as attractive, as fun to drive, or as comfortable as a competitor. Tradeoffs are a necessary part of the purchasing process.
Most reliability surveys provide dot ratings, with unspecified ranges of reliability, for the non-winners. But without the actual, precise scores for all the contenders, trading off quality against other factors is impossible. Buy the award winner and you’ll have a pretty good idea of how much style, performance, or comfort you’re giving up. But you won’t know how much “quality” you’re gaining in return. Say it’s one dot’s worth. Well, how much is that? Unfortunately, to make a wise choice, you need to know.
In a basketball game, a basket at the final buzzer can and should be the deciding factor. The closer the game, the more exciting it is to watch. But vehicle reliability isn’t about entertainment (though the news stories that cover the awards may be). The closer scores are, the less they matter. And the scores are often quite close.
In J.D. Power’s 2006 Initial Quality Study (IQS), 30 of 37 makes fell within two-tenths of a manufacturing defect per car of the average. The difference between number three (Toyota) and number 32 (Hummer) was 0.27 problems per car. In J.D.’s most recent Vehicle Dependability Study (VDS), 23 of 37 makes fell within half a problem per car of the 2.27 average. Only four makes, three of them domestic, bettered the average by more than half a problem per car.
Real-world problems occur in wholes. A car cannot have 1.79 problems. So Toyota’s VDS score of 179, reported as problems per 100 vehicles, implies that the typical Toyota has two problems in its third year. And Ford’s score of 224 implies… much the same thing. Buy a Toyota over an alleged “Fix or Repair Daily” car, and you gain no guarantees, just a less-than-even chance of avoiding a single additional problem in the third year.
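For the skeptics, the back-of-envelope arithmetic above is easy to check. Here is a quick Python sketch using the two VDS scores quoted in the text (reported as problems per 100 vehicles); the rounding step is my own illustration of the “problems occur in wholes” point:

```python
# VDS scores from the text, reported as problems per 100 vehicles
toyota, ford = 179, 224

# Convert to expected problems per car
toyota_per_car = toyota / 100   # 1.79
ford_per_car = ford / 100       # 2.24

# Problems occur in wholes, so the "typical" car of either make
# rounds to the same number of third-year problems
print(round(toyota_per_car), round(ford_per_car))   # 2 2

# The entire gap between the two makes, per car
print((ford - toyota) / 100)                        # 0.45
```

In other words, the headline-grabbing 45-point spread amounts to less than half a problem per car over a year.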
No one gets all hung up on vehicle reliability to avoid a single additional problem. Most consumers simply want to avoid buying a lemon that’s in the shop “all the time.” Well, reliability scores based on averages don’t help. Say we’ve got two basketball teams. On one, the average height is 6'5". On the other, the average height is 6'7". Which team has the most players over seven feet? Using averages alone, it’s impossible to say.
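To see why an average can’t answer that kind of question, here’s a toy illustration in Python. The numbers are made up for the example, not drawn from any survey: two fleets with the exact same average problem count, but wildly different shares of lemons:

```python
# Hypothetical fleets (made-up numbers): problems per car in year three
fleet_a = [2] * 100              # every car has exactly 2 problems
fleet_b = [0] * 80 + [10] * 20   # 80% trouble-free, 20% are lemons


def average(fleet):
    """Mean problems per car -- the only number most surveys report."""
    return sum(fleet) / len(fleet)


def lemon_share(fleet, cutoff=6):
    """Fraction of cars with 'in the shop all the time' problem counts."""
    return sum(1 for p in fleet if p >= cutoff) / len(fleet)


print(average(fleet_a), lemon_share(fleet_a))   # 2.0 0.0
print(average(fleet_b), lemon_share(fleet_b))   # 2.0 0.2
```

Both fleets score an identical 2.0 problems per car, yet one has zero lemons and the other is 20 percent lemons. An average alone simply cannot tell you which fleet you’re buying into.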
Time to buy a car. You know the award winners, and not much else. Play it safe and you’re likely to sacrifice style, performance, or comfort to maximize your odds of having one or two fewer mechanical problems. Ignore the surveys and there’s no telling how much car trouble you’ll have. Maybe none at all. Maybe a lot. The research firms know. But they’re not going to tell you. They only provide potentially helpful comprehensive data to corporate clients willing to pay the big bucks.
Hang in there. My website, TrueDelta, is committed to clarifying how cars differ in reliability, from the “best” right down to the “worst.” We’re working hard to collect unbiased real-world data on your behalf. And TTAC can always be counted on to go beyond the superficial story. (How many critiques of J.D. Power’s methodology have you seen in the mainstream automotive press?) Someday soon, you’ll be able to identify the car that best suits your needs and wants, without playing a guessing game that’s stacked against you.