February 27, 2009

Consumer Reports has released its annual auto issue and scorecard, and the results are hardly shocking. CR loves them some Toyota, Honda and Subaru, singling out the big H as building the most reliable lineup of vehicles (Element excepted). Toyota came in second, with the Prius winning top spot in CR’s new “value” ranking. Only Toyota’s Yaris and FJ Cruiser were unable to earn a “recommend” grade from the report. Mercedes has improved its reliability, reckons CR, but European brands are still lagging. On the American front, Ford is singled out as the high point among the American automakers, as “some Ford models now rival their competitors” from Japan. Too bad they’re the F-150 and Flex, which compete in shrinking market segments. Unfortunately, that’s as good as the news gets for Detroit.

Detroit only builds 19 percent of CR’s “recommended” vehicles, with efficiency and reliability lagging behind the Japanese competition. And as CR bluntly puts it, “the domestics don’t have any competitive small SUVs or small cars.” Buick Enclave, Cadillac CTS, Chevrolet Corvette, Chevrolet Malibu, Chevrolet Traverse, GMC Acadia, Pontiac G8, and Saturn Outlook fared the best of GM’s models. Chevy’s Avalanche was inexplicably named a pickup “top pick.”

And Chrysler? The less said the better. After tying with Suzuki for last place last year, Chrysler has elbowed the competition out, claiming the bottom spot for itself. Not a single Chrysler, Dodge, or Jeep product was recommended by CR. Chrysler’s vehicles “have noisy, inefficient, unrefined powertrains, subpar interiors, and poor visibility,” reckons CR. All of which has the Freep’s Mark Phelan wondering where it all went wrong. “The dismal showing raises serious questions about Cerberus’ management of the automaker it acquired in 2007 and the credibility of the company’s proposals as it seeks government loans to stay in business,” says the notorious Detroit booster. “Thank you sir, may I have another?”


73 Comments on “Consumer Reports Annual Auto Report: Winners And Losers...”


  • grog

    Let’s start the office pool on how long it takes the always sage Warren Brown to carp that CR has an “anti-American car bias” and therefore can’t be trusted.

    I pick 28 Feb, 1:36pm

  • McDoughnut

    This is an outrage!

    Everyone knows that GM makes over 795 models and each one has been judged the best car ever!

    Obviously this is a sign that until the American people wake up to this fact – GM will need a couple more Billion $ – a month….

  • crc

    And as CR bluntly puts it “the domestics don’t have any competitive small SUVs or small cars.”

    CR wouldn’t know a good, small SUV if it bit them in the ass.

  • bill h.

    Given the timelines involved, wouldn’t Chrysler’s poor rankings here be more a direct result of Daimler’s management over the past number of years, rather than Cerberus?

    What about the Korean nameplates, are they moving up?

  •

    Does the Prius even still make sense for the common consumer?

    It isn’t environmentally friendly at all, based on reports of its emissions.
    It is economical only as a fleet car for the government service sector, both local and federal.

    If I were 5’5” rather than 6’7”, I’d prefer to buy a car for less than $20,000 and make up my fuel use with the money I saved on the car itself.

  •

    Old wine, new bottle.

    As with past auto issues, this one is based on a reliability survey that was conducted nearly a year ago, and the reliability scores are the same ones released last October or November. Somehow this is never made clear, by CR or anyone else…

    In reality, the reliability data on which these ratings are based was gathered when all of these cars were a year younger, and does not reflect any changes that might have occurred in the interim.

    As regular readers here are aware, there’s only one place to get car reliability information based on how the cars have been faring recently:

    http://www.truedelta.com/car-reliability.php

    FWIW, TTAC is the only place I’ll be posting this critique. I’ve been advised that it’s not considered appropriate, and even “below the belt,” elsewhere. So it falls to others in the media to make it.

  • crc

    Thank goodness for Michael Karesh.

  • KixStart

    Flashpoint: “It isn’t environmentally friendly at all, based on reports of its emissions.”

    Where did you get that bit of nonsense? Wherever you might have “learned” this particular factoid, did you give even a passing thought to the idea that a vehicle’s tailpipe emissions are roughly proportional to its fuel consumption (in other words, a gallon of gas becomes a fixed amount of CO2 and other emissions), and that there aren’t many other vehicles on the road that can touch the Prius’ 48 mpg?
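
    KixStart’s point, that tailpipe CO2 scales directly with fuel burned, is easy to check with back-of-the-envelope arithmetic. A minimal sketch (the ~8,887 g of CO2 per gallon of gasoline is the EPA’s standard estimate; the comparison mpg figure is illustrative):

    ```python
    # Tailpipe CO2 is roughly proportional to fuel consumption.
    GRAMS_CO2_PER_GALLON = 8887  # EPA estimate for one gallon of gasoline

    def co2_grams_per_mile(mpg: float) -> float:
        """CO2 emitted per mile for a given fuel economy."""
        return GRAMS_CO2_PER_GALLON / mpg

    for name, mpg in [("Prius (~48 mpg)", 48), ("typical midsizer (~25 mpg)", 25)]:
        print(f"{name}: {co2_grams_per_mile(mpg):.0f} g CO2/mile")
    # Prius: ~185 g/mile vs. ~355 g/mile -- roughly half the CO2 per mile.
    ```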

  • Potemkin

    CR, like our government, Harbour Reports, JD Power, et al., is bought and paid for by those it favors. Blogs seem to be the only reliable source of unbiased info these days, and even some of them are co-opted.

  •

    American manufacturers and people have never done the small thing well. Look at the sprawl of North American cities. Compare this to the density of a Japanese city and the different mindsets aren’t difficult to see. Yes, the market share for larger vehicles might be shrinking, but the F-150 and Silverado are still the No. 1 and No. 2 selling vehicles in the States. The Detroit boys might be better off just building larger vehicles and leaving the small ones to other manufacturers. That way, no one would have to drive a Chevy Aveo.

  • jmo

    “It isn’t environmentally friendly at all, based on reports of its emissions.”

    What reports are these? The ones that have been totally and utterly discredited?

  • carguy

    The Prius as best value and the Avalanche was top pick for pickup trucks?

    That sort of does call into question the rest of their findings.

  • tedward

    “Chrysler’s vehicles have noisy, inefficient, unrefined powertrains…”

    Only if you completely ignore the 5.7-liter Hemi. Although I will grant that the automatic they stick it with is completely uninspired (as bad as Mercedes gets, in fact; it’s almost Toyota-esque). I’d agree that the rest of the lineup deserves the criticism, and that Cerberus ownership hasn’t improved it so far.

  •

    Just checked the Phelan article. He might not be aware that the “new” ratings are based on data that are nearly a year old, and so don’t much reflect Cerberus’ management.

  • MattVA

    Mr. Karesh can correct me if I’m wrong, but this press release from Consumer Reports has almost NOTHING to do with reliability. These results and accompanying rankings are for the subjective evaluations CR does on all vehicles, NOT the reliability. Otherwise, I doubt Volkswagen, BMW, and Mercedes-Benz would rank higher than Ford.

    Here’s all you need to know about how good CR’s subjective rankings are:
    They say the new Acura TL is better than the 3-series, the CTS, the IS, and the C-class. And not just a little better. A lot better.

  • geeber

    On the American front, Ford is singled out as the high point among the American automakers, as “some Ford models now rival their competitors” from Japan. Too bad they’re the F-150 and Flex, which compete in shrinking market segments. Unfortunately, that’s as good as the news gets for Detroit.

    The magazine also praises the Fusion/Milan for excellent reliability.

    MattVA: The magazine’s recommendations are based on a combination of reliability survey results and test scores. A vehicle must score “average” or above in reliability, and over a certain number in its road test, to earn a recommendation.

  • mtymsi

    I thought the domestic manufacturers did give up the small car business to the Asians years ago.

    It is hard to believe how bad Chrysler’s offerings are, and that would be Daimler’s fault, not Cerberus’s. The Sebring is the poster child for everything that’s wrong with Chrysler’s products. Not only is the styling hideous, it ranks as one of the poorest-built vehicles offered for sale in this country. Quite an accomplishment, Daimler….

    Whether you agree with CU’s ratings and methodology or not, it’s still the basis a lot of people use for their buying decisions.

  •

    geeber is correct.

    Their recommendations are based on a combination of survey results and test scores. Sometimes they’re talking about one, sometimes the other, sometimes both. Many people get confused by this.

    For example, they have no reliability info on the Ford Flex. So when they praise it their praise is based entirely on the road test evaluation.

  • ajla

    Chevy’s Avalanche was inexplicably named a pickup “top pick.”

    The Avalanche rocks as a personal-use vehicle. It’s a Tahoe with the useless 3rd row replaced by a pickup bed. The mid-gate is a great innovation that gives a lot more utility than the usual crew-cab truck. Plus, the two V8s offered are really good.

  • tonycd

    Potemkin writes,

    “CR …is bought and paid for by those who they favor.”

    There are plenty of critical things to be said about Consumer Reports, but this isn’t one of them. In direct contrast to, say, Consumers Digest’s Best Buys or the Motor Trend Advertiser of the Year, CR doesn’t even accept advertising. In fact, it is so insistent on remaining unsullied that it has repeatedly gone to court and fought fiercely against letting the brands it likes quote it by name.

    Now, if you’d said “CR wouldn’t know a performance car if it bit ’em in the shorts,” that’d be a different kettle of fish.

  •

    I generally agree with ajla, except for one thing: the Avalanche is actually a Suburban (same long wheelbase), with the third row removed, a useful bed substituted, and a lower price.

  • no_slushbox

    Re: crc:

    CR wouldn’t know a good, small SUV if it bit them in the ass.

    That is true, they completely slandered the Samurai, which was a pretty damn good small SUV.

  •

    tonycd:

    I don’t believe that CR has been bought. But they are increasingly willing to let their name be used. During media presentations at NAIAS this year both Toyota and Subaru mentioned and/or displayed CR in their presentations, and the CR people in attendance looked the other way.

  • psarhjinian

    …singling out the big H as building the most reliable lineup of vehicles (Element excepted).

    Surprising, but I guess the compromises it requires knock it off the list.

    Toyota came in second, with the Prius winning top spot in CR’s new “value” ranking.

    Probably the same as above. Remember that CR has three criteria before a car can be recommended:
    * Safe (per crash test performance)
    * Reliable (per historical data)
    * Decent (per ratings)

    The Element, Yaris and the FJ are safe and reliable, but they don’t rate well. The opposite example would be, say, Volkswagen, whose products are safe and well-rated, but not reliable.

    CR wouldn’t know a good, small SUV if it bit them in the ass.

    They actually did, sort of, like the Vue. If you can get over the fuel economy. They also liked the Escape, back when the current model first debuted. In 2000.

    CR, like our government, Harbour Reports, JD Power, et al., is bought and paid for by those it favors. Blogs seem to be the only reliable source of unbiased info these days, and even some of them are co-opted.

    Schwaaaa?

    Blogs? Unbiased? Blogs are biased, but I suppose you could make the tenuous claim that because they either don’t claim objectivity, or the claims of such are blatantly false, objectivity isn’t an issue.

    This sounds like the kind of rationale libertarians trot out when they haven’t had to deal with reality: yes, government has been co-opted. So has industry. So has religion. So has everyone, even individuals, in some way, shape or form. Everyone has an agenda; it’s just that the agendas and biases of some people and/or groups align with yours, and thus you might not see them.

    CR is about as close to objective as you’re going to get in this arena because they don’t depend on funding from those they evaluate (which even JD Power does).

    As with past auto issues, this one is based on a reliability survey that was conducted nearly a year ago, and the reliability scores are the same ones released last October or November. Somehow this is never made clear, by CR or anyone else…

    This is a fair point, but it’s not necessarily a big deal. CR never recommends models that are new and do not yet have a track record. They also tend to rate fairly conservatively, so you can generally trust them.

    I think they make a good accompaniment to your service as they’re easier to parse, don’t (until recently) include actual owner comments that could colour matters, and benefit from a broader sampling base, especially in mainstream cars. They’re a good go-to when starting your search as you can eliminate a lot of questionable models quickly.

    The Prius as best value and the Avalanche was top pick for pickup trucks?

    The Prius gets good mileage, holds people and stuff in comfort, rides well and is reliable. Nothing wrong with that. If I were in the market for a midsizer, it’s where I, and a lot of others, would go. It’s possible that CR is weighting fuel economy more heavily than perhaps is currently warranted, but the Prius’ TCO is still very, very low for a midsize car.

    The Avalanche is actually a good truck. Unless you need the longer bed, it’s a better holistic choice than a “normal” GMT900, F-150, Ram or Tundra, and it’s more capacious than an Explorer Sport Trac. It may be odd to see CR rate it so highly, but if you think about its virtues, it really is pretty good.

  • tedward

    CR is the place to go if you’re uninterested and uninformed about cars, but want to cite an expert authority as justification for a purchase. Otherwise it’s where those people go to start looking, as they don’t know how to find other sources of information.

    I’ve heard that they actually do have competent drivers on staff. I think that they rate fwd’ers highly because of pressure to dumb down their reviews to the expectations of the average consumer. The end result is a wasted expert opinion.

    Anyone who takes their bare-minimum-of-data-to-the-reader approach to reporting reliability seriously is a moron.

    Tonycd is correct on the integrity thing though, it’s not fair to say they are bought and paid for.

  • ajla

    @Michael Karesh:

    Whoops, I stand corrected on the Tahoe/Avalanche relation.

  • MattVA

    geeber and Michael,
    So is there any way to know how much of the scores is based on the reliability survey and how much on their subjective (and seemingly off-target) reviews?

  •

    I have issues with CR, particularly after they said the Jeep Compass was better than the Jeep Patriot.

    Patriot got a 42. Compass got a 51.

    I would also prefer a Patriot to any of the ugly RAV-4 or CR-V things the Japanese put out, even if it is considered less reliable. The new F-150 is hands-down better than the Tundra, too.

    Never mind though. With regards to reliability, I think they’re generally spot on. I don’t think the domestics have it with reliability yet. But other than that, I think CR leaves much to be desired, at least for anybody that’s interested in an interesting car rather than just an appliance.

  •

    psarhijinian,

    You wrote:

    “This is a fair point, but it’s not necessarily a big deal. CR never recommends models that are new and do not yet have a track record.”

    You seem to be missing the point, as is just about everyone else beyond TTAC. They wait so long to recommend a model not so they can accumulate enough data on it, but so that even a little bit of early data can wend its way through their slow, annual process.

    So they sometimes do recommend models on which they have just a few months of early data. It’s just not obvious because the results aren’t put in Auto Issue form until nearly a year after the data are collected. The car in question might have been out for over a year, so people might think CR has over a year of data on it. In truth, they might have even less data than JD Power uses for its IQS.

  •

    MattVA,

    I haven’t looked into how the new “value” scores are calculated. In the past, CR has refused to divulge its formulas, claiming that they are proprietary. We’ve joked about the “secret formulas” here in the past. Oddly, mainstream car magazines have no such issue with showing how overall scores are calculated in their own comparison tests.

    With their recommendations, a car must pass both “tests” with an “average” or better. In other words, a C in both will do.

  • psarhjinian

    You seem to be missing the point, as is just about everyone else beyond TTAC. They wait so long to recommend a model not so they can accumulate enough data on it, but so that even a little bit of early data can wend its way through their slow, annual process.

    No, I get the point, I just don’t think it’s as big an issue as you make it out to be. They don’t recommend models within a year of age, regardless of why it takes them so long to do so. That’s good practice. Full stop.

    When you consider their audience, that’s not really a bad thing.

  • MattVA

    Michael,
    I did a little work and added up all the “scores” of every model Honda and Acura makes (except the TSX, which I couldn’t find). If you take the average of all of these models, you get an average score of 77.1.
    I can’t help but think that the numbers they give to rank the manufacturers are simply the average score of all the models.

  •

    I’m clearly missing something.

    How is it better to wait a year to recommend a model, if this recommendation is based on no more information than if you’d recommended the vehicle nearly a year earlier?

    If TTAC drove a car for a few days, then waited a year to provide our evaluations, would this be better because it might imply that we’d been driving the car for the entire past year? “When we first drove this car ten months ago…”

    I’m surprised by your comment, psarhjinian, because I cannot recall another time when a comment by you simply did not make sense.

  •

    In that case reliability isn’t part of the scores. At least they’ve never said that reliability is part of the road test scores.

    Update: looking at CR’s press release, they say:

    “The final marks are based on a composite of our overall road-test score and predicted reliability Rating averaged from all tested models of that automaker.”

    and

    “Hyundai and Suzuki were the only automakers that showed improvement in all three measures: overall score, average test score, and reliability.”

    Still not clear how the “overall score” differs from the “average test score.”

  • Qwerty

    Can someone explain to me how Jeep consistently makes crap? Jeeps do not seem to be all that complicated a vehicle, and they have been making them forever. Why haven’t the bugs been worked out?

  •

    The name has been around forever, but the vehicles get redesigned every few years.

    In TrueDelta’s surveys, the reported repair rates for 2007 and 2008 Jeep Patriot, Compass, and Wrangler have actually been about average, even a bit better than average:

    http://www.truedelta.com/car-reliability.php?stage=pt&bd=Jeep&mc=146&email=Guest

    The redesigned minivans and Dodge Journey have much higher repair rates.

    The Wrangler does poorly in CR’s road test because it’s evaluated as an on-road passenger car.

  • psarhjinian

    I’m surprised by your comment, psarhjinian, because I cannot recall another time when a comment by you simply did not make sense.

    Oh, I don’t make sense all the time. You just get my best work, here.

    I think what we have is a disconnect: I’m talking about reliability rankings, you’re talking about competency. I agree that they’re behind the times when it comes to evaluations, but not unduly so for what is effectively a dead-tree medium that strives to give a complete, instrumented evaluation of every model available.

    Imagine if TTAC had to do the same. It would be slower to press, too.

    Take their review of the Venza, which just showed up in dealers this month (give or take). They give an initial set of impressions and some publicly available stats. They don’t recommend it without full testing. I think that’s fair, given their mission—people who are going to buy a car without the kind of data they provide aren’t going to be swayed by their recommendation anyways.

    Again, I see your point, but I just disagree that, within the context of their audience and mission, it’s a big deal. The same would apply to, say, Phil Edmonston’s Lemon-Aid, which is even later to the game than CR.

  • psarhjinian

    Can someone explain to me how Jeep consistently makes crap? Jeeps do not seem to be all that complicated a vehicle, and they have been making them forever. Why haven’t the bugs been worked out?

    There are two things happening here:
    * Jeeps suck as general-purpose vehicles. We all know this, and CR is a magazine that tilts (?) towards centre.
    * There’s a difference between robust and reliable. Jeeps can do the Rubicon, but they might not be able to do your daily commute, day in and day out.

    Unfortunately, if you want a reliable Jeep-a-like, Suzuki (the Samurai) and Toyota (the old FJ and/or the pre-luxobarge Land Cruiser) aren’t options for new-car buyers.

  •

    MattVA:

    I can no longer edit my earlier response, but just found this:

    “The overall score is calculated from a carmaker’s average test score and average predicted-reliability Rating.”

    The scores just happen to be the same for Honda. Average two identical numbers, and you get the same number.

  •

    psarhjinian,

    I’m actually saying nothing here about their competency, and am talking about their reliability rankings. In talking about the reliability rankings, the road tests you mention are not relevant. Contrary to what some (many?) people assume, they do not use the road tests to evaluate reliability, but only to evaluate the same things TTAC and other reviewers evaluate.

    To be clearer, here’s a timeline:

    April 2008: CR sends out its survey

    October 2008: CR releases reliability results for the April survey

    March 2009: CR re-releases the same reliability results, now packaged with the road test evaluations, some (but not most) of which were conducted only recently. People treat the reliability information as “new.”

    The only 2009 model in their current results, IIRC, is the Nissan Murano, which went on sale in late January 2008. Respondents to their survey bought these vehicles in January, February, March, or April, then responded based on their experience so far, in some cases just a few weeks or even days of ownership.

    Seeing the reliability verdict for the Murano now, you might think that CR waited until the vehicle had been on the market for over a year to provide a reliability rating. Instead, their reliability information on the Murano does not reflect owner experiences for the last ten or so months, and includes a shorter length of ownership than JD Power’s IQS.

    Clearer?

    With the Murano, TrueDelta can report that having another eight months of data does not substantially affect the result. But this is not always the case, and cannot safely be assumed to be the case. Sometimes cars require very few repairs for the first few months, then suddenly have a common problem or two as the miles accumulate.

  • Luther

    The new Equinox and Cruze should help…And they best fast-track the new Impala! and get a proper RWD G6! and quit dicking around with Buick!

    If I am going to be Co-Owner-Against-My-Will then they better listen to me…Dammit.

  • Mike S

    I’m no expert on CR’s methods, but I’ve seen enough anomalies to add a large pinch of salt to everything I read from them. Some years ago I compared the ratings of the Toyota Matrix and Pontiac Vibe (almost identical; they are even built at the same plant). The Matrix was “above average,” the Vibe “below average.”

    I’ve noticed this anomaly on other occasions where badge-engineered twins had different ratings. Sampling issues, owner perceptions, or whatever else causes these problems surely should be culled from the reports before they are released as “definitive” statistics. The impact they have on public opinion and vehicle sales (when there actually were sales) is probably significant.

  • don1967

    I can tolerate CR’s Toyota/Honda love affair – it was hard-earned after all – but crowning the Prius as our new Messiah as the oil bubble is deflating all around us only proves that the editors are anything but “consumers”. They are tree-huggers on a mission.

  • King Bojack

    Consumer Reports is retarded for many reasons, several of which are elaborated on at TrueDelta.

    They used to recommend Toyota’s new models; now they don’t. I think they only still do this for Honda, despite mounting quality issues. To remain unbiased they’d better actually BE unbiased as much as possible, which means NO ONE gets a “recommended” on a new model until stats have been pulled.

  • RNader

    I don’t want to stick up for Cerberus too much, but they were handed a baby just before it spit, threw up, and shit on them.

    All these cars were brought to market when Daimler & Dr. Z were in charge. They were built on a shoestring with old underbodies (Mitsubishi/Benz), and all had injection-molded interiors. And the quirky “in your face” styling came under the stewardship of Trevor Creed.

    What were they thinking when they drove these cars around the test track? Do they not have the competitors’ product right there to compare it to?
    Who got in the Sebring and said, “it’s a go”? Who took a look at the Caliber and said, “perfect, let’s build it”? For Christ’s sake, it makes the Aztek look f*cking good!

    When 50 Cent & Snoop Dogg were rolled out to advertise to the gangsta market segment, you knew the company was running on fumes.

  • Robert.Walter

    Flashpoint; 6’7″; slamming the Prius; knowledgeable about fleet sales …uh… you’re not that reverse-Rumpelstiltskin GM guy, Rick Wagoner, are you?

  • ivyinvestor

    CR might not be the best authority, but such is the case with any other “source,” including our beloved TTAC (how many times have folks debated cross-comparisons of vehicle classes as a function of the star ratings?). We’re all biased to a certain extent, oftentimes more inclined to side with whatever preconceptions we might have about something or someone. Open-minded folks listen, weigh, interpret, and decide: there’s nothing wrong with reading CR, MT, PopSci, TTAC, etc. to learn about what interests us; there’s no need for the acts to be mutually exclusive.

    Mike S: I’m not going to argue that one example affirms a trend, but your comments about American/Japanese vehicles built on similar (or the same) lines suggest there should be few or no qualitative differences: but there can be.

    Back in ’86, my folks wanted to “buy American” so they opted for a Chevy Nova rather than a Corolla. CR had suggested that the Nova was an acceptable performer but warned that its reliability would lag behind that of the Corolla despite sharing most of the platform with TM manufacturing. Long story short, certain elements of the GM-TM partnership allowed for substitutions on the lines for Novas, including transmission linkages, gaskets, carpeting, etc. (according to several mechanics whom they got to know too well). Ultimately, a few friends of mine in HS pounded the hell out of their ’86-’90 ‘Rollas without problems…And my folks, probably the most easy-going, conservative drivers, experienced repeated brake problems, two head gasket failures, several transmission difficulties, and broken window winders (*manual* windows). What junk. Their last “American” vehicle – imports since: they couldn’t be happier.

    King Bojack: What are your sources for “mounting quality issues” at Honda? After much experience with statistics, I agree that the baseline should be uniform, but I don’t think a magazine can be “retarded” because of what was, until the Camry’s and Avalon’s difficulties, a reasonable prediction based on short-term (0-2 year) reputation.

  • Johnster

    Mike S: I’m no expert on CR’s methods, but I’ve seen enough anomalies to add a large pinch of salt to everything I read from them. Some years ago I compared the ratings of the Toyota Matrix and Pontiac Vibe (almost identical; they are even built at the same plant). The Matrix was “above average,” the Vibe “below average.”

    I’ve noticed this anomaly on other occasions where badge-engineered twins had different ratings. Sampling issues, owner perceptions, or whatever else causes these problems surely should be culled from the reports before they are released as “definitive” statistics. The impact they have on public opinion and vehicle sales (when there actually were sales) is probably significant.

    Actually the Matrix is built at a Toyota plant in Cambridge, Ontario, Canada while the Vibe is built at the NUMMI plant in Fremont, California that is jointly owned by GM and Toyota.

    When a car is built at plants in more than one country I usually hear stories that ones from Japan are the best, followed by the ones built in Canada, then the U.S. and then Mexico. Stories about the superiority of manual transmissions, in terms of feel, installed in cars built in Japan are especially persistent.

    Usually such differences are attributable to different demographics of the owners. Matrix owners tend to be older, to have higher incomes and more education, and to drive less and to be more likely to follow regular maintenance schedules than Vibe owners.

    Similar situations exist in other badge-engineered cars, notably between the Chryslers and Dodges where Chryslers frequently have better reliability records than the Dodges built with the same parts on the same assembly line.

  • Ferrygeist

    “Chrysler’s vehicles have noisy, inefficient, unrefined powertrains…”

    “Only if you completely ignore the 5.7liter hemi.”

    I’d argue that a motor like that is the very definition of inefficient and unrefined. Chrysler/Dodge et al. never seem to have any solution for high power other than heavy, brute force via massive displacement. Two valves per cylinder and twin plugs is 1960s technology. It was awesome in 2.0-liter Porsche flat-6 motors producing 220 hp in 1968. It’s unimpressive in a 5.7-liter V8 motor producing 390 hp in 200x. That’s a specific power of ~68 hp per liter.

    Give the nod to Honda: its first generation S2000 motor is 2.0 liters, four cylinders, and produces 240 hp, for a specific power output of 120 hp per liter. That’s more specific power than most Ferrari street motors.

    Oh, and the 5.7 Hemi also weighs an astronomical 485 lbs.
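
    Ferrygeist’s specific-output figures are straight division; a quick sketch reproducing them (engine numbers as stated in the comment above):

    ```python
    # Specific power output: horsepower per liter of displacement.
    engines = {
        "Chrysler 5.7L Hemi V8": (390, 5.7),
        "Honda S2000 2.0L I4":   (240, 2.0),
    }

    for name, (hp, liters) in engines.items():
        print(f"{name}: {hp / liters:.0f} hp/L")
    # Chrysler 5.7L Hemi V8: 68 hp/L
    # Honda S2000 2.0L I4: 120 hp/L
    ```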

  • Mike S

    @Johnster
    “Actually the Matrix is built at a Toyota plant in Cambridge, Ontario, Canada while the Vibe is built at the NUMMI plant in Fremont, California that is jointly owned by GM and Toyota.”

    You are correct, sir; for some reason I thought both were built at NUMMI.

    “Usually such differences are attributable to different demographics of the owners.”

    There’s probably some truth to this. Car A is a garage queen, Car B is hammered and not maintained. One gets a good reliability rating, the other gets a poor one. However, this doesn’t help the person using the reviews because they are not aware of the caveats. In this case, the usual “some information is better than none at all” doesn’t hold true.

  • snabster

    Funny story for Michael Karesh.

    I’ve been arguing with my dad about CR ratings for 25 years. It goes back to when I was trying to convince him to get me an Audi 4000Q. Even then, CR’s definition of reliability was suspect to me. Fast forward 25 years. He called me to tell me about something he read on the internet about how CR’s ratings were crap. He didn’t remember the name, but I was able to tell him: TrueDelta. He ended up getting a Lexus after that. Sigh.

    Like USNWR’s college rankings, or even the BCS, CR’s car rankings mislead you badly. Here is a counterargument: CR has played a huge role in making new cars work until 100K. Honda and Toyota are the benchmark. Quality is vastly better than 20 or even 10 years ago. (With modern electronics, that statement may not be true.)

    But in today’s age of responsible drivers, we need to recognize that all cars need some tender loving care. Treat your car like an airplane. That way it will get to 200K, instead of 100K.

    I tried this argument out when my dad wanted to get an extended warranty with the Lexus. I lost again: he bought it. Sigh.

  •

    That is a funny story. People who buy a car because the brand has a rep for being reliable and then buy an extended warranty (my father and his partner did the same) probably also…well, a good analogy escapes me.

    I don’t think CR is, as you put it, crap. But with any data source–including TrueDelta–you need to look at how the results were determined, including what specific questions were asked on the survey. To this end, I provide more information than others do, for those who really want to know what’s behind the ratings.

    The key problems I have with CR, for those who are new to the discussion:

    1. The survey has people report problems “you considered serious.” This opens the door wide to respondent bias. For example, people who generally like their car are less likely to report a repair, because they’re less likely to consider a repair as “serious.” CR doesn’t itself bias the reliability results, but respondents might.

    2. Just dots, without any way to easily tell how many more repairs one model requires compared to another.

    3. Updated slowly once a year, and then presented as if the information is “new” twice a year. TrueDelta’s results average over nine months ahead of CR’s.

    None of this renders their results “crap,” just not nearly as good as they could be.

  • Monty

    The mistake with Consumer Reports (full disclosure: I am a subscriber) is relying entirely on its judgement as the final word on the subject (not just cars, but any product or service that’s reviewed).

    Any halfway intelligent buyer would begin with CR, eliminating some vehicles, and then continue researching using vehicle-specific forums and blogs, speaking to other owners, test-driving all of their remaining choices multiple times, and then making their own decision based on anecdotal and empirical evidence. I spent almost an entire year researching the car that I eventually purchased for my wife; the end result was a loaded ’05 Ford Focus ZX5 SES (every available factory option except the automatic transmission) that she and I are very happy with. CR puts a lot of love on the Civic and the Corolla; I disliked the Corolla and found the Civic to be uninspired and boring, but I did check them out based on CR’s recommendation.

    Of course, most buyers don’t buy cars in this manner. It’s usually an impulse buy, or one borne of necessity, resulting in too high a price paid for what is more than likely a total POS.

    Consumer Reports is nothing more than one of the many tools at my disposal, and is certainly valuable in that regard, but it’s not the only tool I use. Too many people, though, rely on CR as their only guide to a car purchase.

  • golden2husky

    CR is the place to go if you’re uninterested and uninformed about cars, but want to cite an expert authority as justification for a purchase. Otherwise it’s where those people go to start looking, as they don’t know how to find other sources of information.…

    Well said. If you are knowledgeable and passionate about some product, CR is NOT the place you look. Camera buff? Into skiing? Bikes, cars, whatever: if you are knowledgeable about it, you realize how lame CR reviews are. Conversely, if you view an automobile as a Maytag and grudgingly deal with car purchases, CR fills that void.

    The automotive press offers a much better look into the realm of the automobile. And since you typically can find numerous reviews of the same car, you can take perceived biases into account.

  • FrustratedConsumer

    “The survey has people report problems “you considered serious.” This opens the door wide to respondent bias.”

    Respondent bias is much more correctable than the self-selection bias of the TrueDelta website.

    Anybody with a stats background would know that.

  • Pch101

    Just dots, without any way to easily tell how many more repairs one model requires compared to another.

    It’s a relative ranking, equivalent to grading on a curve, with the least troublesome products within the pool getting a higher ranking. That’s an acceptable way of reporting results.

    You count visits, CR reports trouble areas. The effort to argue that one method is inherently better than the other on that basis is fallacious. They’re different data points, both are useful and a smart user will consider both, rather than turn it into a pissing contest based upon the type of data array that is chosen.

    The survey has people report problems “you considered serious.”

    They add some clarification as to how to define it, so I believe that you are being a bit misleading. Their language makes it clear that they don’t want routine maintenance and accident repair to be included in the result.

    Updated slowly once a year

    The upside to that is that CR places a low burden on the respondent pool, which increases the likelihood of getting a large sample size, which they do. Where CR beats everyone handily is with the size of the sample, and that factor alone should improve accuracy.

    Another benefit of the CR survey is that it is short, simple and easy to complete. Long surveys tend to encourage respondents to gloss over points, not necessarily due to dishonesty but just so that they can get it over with.

    I have to wonder about some of the details that JD Power provides, for I can’t imagine that they have an easy time actually gathering some of their detailed data. I have no doubt that they ask the questions, but whether people diligently answer them is another issue.

  •

    I agree that visits vs. trouble areas is trivial. Which is why I’ve never made an issue out of it. No idea why you seem to think I have.

    On the rest you seem to have some blind spots, willful or not.

    My “pissing contest,” as you put it, is in their not reporting the actual problem rates. I believe that most people, when viewing just the dots, vastly overestimate the size of the differences between competing car models. This makes those differences seem more meaningful than they actually are in many, even most cases. This affects thousands of purchase decisions. At which point it’s not trivial.

    For a 2008 model year car, how large do you think the minimum difference is between an “average” car and a “much better than average” car, in CR’s results? These are separated by an entire “better than average” rating, and are essentially the difference between an A and a C.

    My critique of the “you considered serious” wording is not at all misleading. The way they qualify the question has not substantially changed (I’m not sure it has changed at all). They’ve always given some guidance, and told people to exclude maintenance. But they don’t provide clear guidance.

    For example, some people will report an alternator failure; others will not, thinking “x brand is generally reliable. This must have been a fluke, and it was covered by the warranty.” I frequently follow up with people to close gaps in their responses. You would not believe some of the repairs some owners consider “not serious.”

    Meanwhile, CR has an entire category for “rattles and squeaks.” How many of those would be reported at all with a reasonable definition of “serious?”

    Bottom line: permitting each user to decide what counts as serious, without clear guidance concerning what should count ($x, y days, etc.), adds variables such as general satisfaction with the car, past experience with the brand, the general reputation of the brand, and dealer service quality to the analysis, without making it clear that these affect the results and without any attempt to adjust for them.

    To repeat the question above, in clearer terms: what do you think the minimum difference in problems per hundred cars per year is between an “average” 2008 model and a “much better than average” one?

  •

    Frustratedconsumer wrote:

    “Respondent bias is much more correctable than the self-selection bias of the TrueDelta website. Anybody with a stats background would know that.”

    Two points:

    1. CR’s results are more subject to self-selection bias than TrueDelta’s. They don’t use a random sample, either, and unlike TrueDelta do not require continuous participation or limit reported problems to those that occur after joining (with the limited exception of the current month).

    Some all-too-common situations: With CR, an unhappy owner can join and report everything that has happened in the past year. With TrueDelta, they cannot. Similarly, with CR people can respond only when they have a problem to report, and ignore the survey otherwise, and have their responses count every time. With TrueDelta, most such responses will be excluded from the analysis.

    2. I have a stats background. Yet, without asking questions CR does not ask, it’s not clear to me how to correct for the respondent biases in their data. Since you suggest this is easily done, how would you suggest they do it?

    Easy or not, they don’t appear to perform any such correction.
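
    To make point 1 above concrete, here is a minimal sketch of those two corrections (the data structures and field names are hypothetical, not TrueDelta’s actual schema): repairs dated before a member joined are excluded, and members who skipped survey waves are dropped entirely.

    ```python
    from datetime import date

    # Hypothetical panel records: join date, survey waves owed and answered,
    # and dated repair reports.
    members = [
        {"joined": date(2008, 3, 1), "waves_due": {1, 2, 3}, "waves_answered": {1, 2, 3},
         "repairs": [(date(2008, 1, 15), "alternator"), (date(2008, 6, 2), "water pump")]},
        {"joined": date(2008, 5, 1), "waves_due": {2, 3}, "waves_answered": {2},
         "repairs": [(date(2008, 7, 9), "transmission")]},
    ]

    def usable_repairs(member):
        # Correction 1: require continuous participation; a member who only
        # shows up when there's a problem to vent about is excluded.
        if member["waves_answered"] != member["waves_due"]:
            return []
        # Correction 2: count only repairs that occurred after joining, so a
        # bad prior year can't be back-reported in one unhappy burst.
        return [r for r in member["repairs"] if r[0] >= member["joined"]]

    for m in members:
        print(usable_repairs(m))
    # Member 1: only the June water pump counts (January predates joining).
    # Member 2: excluded entirely for skipping a survey wave.
    ```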

  • FrustratedConsumer

    “they don’t appear to perform any such correction”

    I admit I’m confused.

    You admit that TrueDelta has just as much of a self-selection problem AND you admit there is no real way to correct for that.

    AND you admit you have no access to CR’s algorithms (or their data), and thus can’t possibly offer an opinion on what they do (other than critiquing their survey questions).

    Yet, you claim your process is superior.

    I understand you have a PhD, which explains some of your attitude, but that doesn’t change the truth. Just because TrueDelta asks different questions at different time points doesn’t make it more statistically significant. Especially when your 33K sample is what, a third of CR’s?

  • golden2husky

    Mr Karesh:

    CR’s data, if I am correct, creates reliability ratings that are relative to the other samples, correct? So, if that is true, what happens in this hypothetical case:

    Only two brands exist, say Toyota and Chevy. And say that Toyota is much better than average and Chevy is much worse. Now, over the course of one year, Chevy manages to make its problem rate match exactly what Toyota has. So, if all the ratings are relative to one another, what is the result? For all of Chevy’s progress, they would move up in CR to average, no more. Toyota, with no loss in reliability, would drop to average. Is this a correct assessment? Of course, there are way more makers than just two, but as the difference in reliability continues to narrow, wouldn’t this become more of an issue? According to most of the raw data that I have looked at, Chrysler’s reliability, compared to earlier Chryslers, has improved quite a bit. But because all automobiles as a group have improved more, Chryslers come across as getting worse, which is not true. Comments?
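
    golden2husky’s hypothetical is easy to simulate. A sketch with made-up cutoffs (CR treats its actual thresholds as proprietary, so the percentages below are illustrative assumptions only):

    ```python
    def relative_rating(problem_rate: float, pool_mean: float) -> str:
        """Grade a problem rate on a curve around the pool average.

        The 50%/80%/120%/150% breakpoints are invented for illustration;
        CR does not publish its real cutoffs.
        """
        ratio = problem_rate / pool_mean
        if ratio <= 0.5:
            return "much better than average"
        if ratio <= 0.8:
            return "better than average"
        if ratio < 1.2:
            return "average"
        if ratio < 1.5:
            return "worse than average"
        return "much worse than average"

    # Year 1: Toyota at 10 problems/100 cars, Chevy at 40.
    # Year 2: Chevy improves to match Toyota; Toyota is unchanged.
    for year, (toyota, chevy) in {1: (10, 40), 2: (10, 10)}.items():
        mean = (toyota + chevy) / 2
        print(f"Year {year}: Toyota is {relative_rating(toyota, mean)}, "
              f"Chevy is {relative_rating(chevy, mean)}")
    # Year 1: Toyota "much better", Chevy "much worse".
    # Year 2: both land at "average" -- Toyota appears to drop with no
    # actual change in its reliability.
    ```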

  •

    Frustratedconsumer:

    My attitude? I fear you’re projecting. Though I must admit that some of my own frustration is bleeding through–no matter what I write, some people act as if I’ve written whatever happens to be in their heads.

    For example: where did I say there was no way to correct for the self-selection problem? What I actually provided was TWO ways TrueDelta corrects for self-selection that others do not.

    What I did say was that I do not see how CR would correct for respondent bias given the data they have. I know what data they have because I know the questions they ask. And the common anomalies in their results–despite large sample sizes–suggest that they’ve not done nearly enough to adjust for extraneous variables.

    Since you said it’s relatively easy to correct for respondent bias, I asked you how this might be done. Instead of providing an answer, you attack my “attitude,” which is generally done by someone who knows the facts aren’t on their side or doesn’t really understand what they’re talking about.

    Going back to methodology, you trivialize the questions asked and the timepoints. Well, anyone who knows anything about surveys knows that the way questions are worded is far from trivial. Ask the wrong question, and you’ll get the wrong answer, no matter how many people you ask.

    As for the timepoints–I already gave some simple examples illustrating why these matter. Having people report what happens after they sign up removes a major potential source of distortion.

    One pet peeve: “statistically significant” is horribly overused and misused. To say that a result is “statistically significant” only means that a basic statistical calculation has found that the difference in question is not zero, given a certain level of acceptable error. It says absolutely nothing about the quality of the data, the validity of the analysis, or the meaningfulness of the result.

    A result can be “statistically significant,” and still incorrect or, more often, meaningless. Given a large enough sample, even the most minute difference can be “statistically significant.” It only has to be larger than zero. So, oddly enough, a result can be both significant in statistical terms and insignificant by any other measure. But because the lay public has been trained to look for this phrase, insignificant differences can be made to seem “significant.”
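
    The pet peeve above can be seen in a standard two-proportion z-test. A sketch with made-up numbers: a one-repair-per-hundred-cars gap that no owner would notice becomes “statistically significant” once the samples get big enough.

    ```python
    import math

    def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
        """z statistic for the difference between two proportions (pooled SE)."""
        p = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # 21 vs. 20 problems per 100 cars -- a trivial real-world difference.
    for n in (200, 5_000, 100_000):
        z = two_proportion_z(0.21, n, 0.20, n)
        verdict = "significant" if abs(z) > 1.96 else "not significant"
        print(f"n = {n:>7} per model: z = {z:.2f} -> {verdict} at the 5% level")
    # Not significant at n=200 or n=5,000, but "significant" at n=100,000,
    # while the practical difference stays exactly as trivial.
    ```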

    So, where’s your fix for respondent bias in CR’s data that doesn’t involve changing their questionnaire?

  •

    golden2husky:

    Your thought process is correct. What we have seen in CR’s results is that fewer and fewer models, even Hondas and Toyotas, earn a “much better than average” rating. This despite the fact that the rating thresholds are set as a percentage of the average, so that as the average declines, the differences between ratings have also declined.

    Earlier I asked about the minimum difference in problems per 100 cars between “average” and “much better than average” 2008s. No answers yet.

  • Pch101

    I believe that most people, when viewing just the dots, vastly overestimate the size of the differences between competing car models.

    I don’t see why that should be the case, when CR provides a legend that associates the ranking with the percentage of owners who reported problems.

    This affects thousands of purchase decisions.

    It’s a relative ranking. If I demand a well-above-average family sedan, then that limits my choices. If I broaden the field to include average or better, I get more choices. The user is provided with the meaning of the relative ranking, as well as a legend that converts terms such as “well above average” into a percentage, and is allowed to make his own decision.

    In answer to your question, “For a 2008 model year car, how large do you think the minimum difference is between an ‘average’ car and a ‘much better than average’ car, in CR’s results?”, that answer is provided in their survey results.

    If all the cars in a class were exceptional, then yes, the curve would lose some meaning. That would be a bit like comparing the IQs of a room full of geniuses — everyone’s a winner, some just win a bit more than others, and the discrepancies may not be enough to matter. But just so long as the spread between the highly reliable and not-so-reliable remains statistically significant, as remains the case today, the ranking is still useful.

    They’ve always given some guidance, and told people to exclude maintenance. But they don’t provide clear guidance.

    I don’t know how much clearer they can make it. They made it simple and fast to read, in order to encourage people to respond to and comprehend the survey. If they added a long list of caveats, most readers would gloss over them or misinterpret them anyway, and the result would likely be no better. The large sample size should help to filter out that sort of noise.

    Meanwhile, CR has an entire category for “rattles and squeaks.” How many of those would be reported at all with a reasonable definition of “serious?”

    Those are an area of annoyance for customers, and one that often doesn’t fall neatly into the other categories that they provide, given the numerous possible causes. This appears to be similar to JD Power’s effort to measure some of the less tangible issues in their ranking.

    To repeat the question above, in clearer terms: what do you think the minimum difference in problems per hundred cars per year is between an “average” 2008 model and a “much better than average” one?

    Again, that data is provided in the survey. I’m not sure why you claim that it isn’t. CR does explain its ranking system in its releases.

  •

    I don’t see why that should be the case, when CR provides a legend that associates the ranking with the percentage of owners who reported problems.

    You’re not the first person to believe they provide such a legend. But they don’t. And never have for overall scores.

    It’s a relative ranking. If I demand a well-above-average family sedan, then that limits my choices. If I broaden the field to include average or better, I get more choices. The user is provided with the meaning of the relative ranking, as well as a legend that converts terms such as “well above average” into a percentage, and is allowed to make his own decision.

    The only percentage provided is the percentage difference from the average. What isn’t even provided as far as I can tell (I’ve looked), though it was sometimes in the past: the actual size of the average. So you might think you know the size of the difference between the cars you feel are acceptable and those you’ve decided are not acceptable–but you probably don’t.

    In answer to your question, “For a 2008 model year car, how large do you think the minimum difference is between an ‘average’ car and a ‘much better than average’ car, in CR’s results?”, that answer is provided in their survey results.

    Actually, it’s not provided, which is my point.

    If all the cars in a class were exceptional, then yes, the curve would lose some meaning. That would be a bit like comparing the IQs of a room full of geniuses — everyone’s a winner, some just win a bit more than others, and the discrepancies may not be enough to matter. But just so long as the spread between the highly reliable and not-so-reliable remains statistically significant, as remains the case today, the ranking is still useful.

    As explained in my response to Frustratedconsumer, “statistically significant” does not mean that a difference is meaningful.

    I don’t know how much clearer they can make it. They made it simple and fast to read, in order to encourage people to respond to and comprehend the survey. If they added a long list of caveats, most readers would gloss over them or misinterpret them anyway, and the result would likely be no better. The large sample size should help to filter out that sort of noise.

    It is hard to make a survey both clear and sufficiently simple. If there’s anything I’m painfully aware of, it’s this. You can see TrueDelta’s survey for the way I prefer to do it: with entirely objective terms. For example, “Was the car towed to the shop?” is both simple and free of subjective interpretation. I get a bit more subjective with “could the car have been dependably driven for another week,” but this is short of asking people to report a problem they considered to be serious based on an unspecified amount of downtime, unspecified cost, and so forth.

    Those [rattles] are an area of annoyance for customers, and one that often doesn’t fall neatly into the other categories that they provide, given the numerous possible causes. This appears to be similar to JD Power’s effort to measure some of the less tangible issues in their ranking.

    Actually, I think rattles and squeaks probably refer to, well, rattles and squeaks. People who count these as serious generally have a much looser definition of “serious” than those who consider anything that doesn’t prevent the car from running as not serious. JD Power measures things like the difficulty of using BMW’s iDrive, which is a design issue, not a mechanical issue that can be fixed. Two very different things.

    Again, that data is provided in the survey. I’m not sure why you claim that it isn’t. CR does explain its ranking system in its releases.

    I claim it isn’t because it isn’t. If it is, just give me the number. I’ve never found such a number in CR.

  • Pch101

    You’re not the first person to believe they provide such a legend.

    I don’t “believe” it, I’ve read it. It’s published in their materials. If I was a CR subscriber and had it handy, I’d type it out verbatim, but as I’m not, I’ll try to check into that later.

    Actually, I think rattles and squeaks probably refer to, well, rattles and squeaks.

    Yes, but if you are acquainted with CR’s survey, then you know that it uses categories such as “body hardware”, “suspension”, “engine” (or such similar terminology), etc.

    Squeaks and rattles can come from a wide number of components, including those working in concert with one another to annoy the user. In addition, they often end up going undiagnosed, with the techs shrugging their shoulders and the problems never being isolated to their source.

    Defining this as a category allows the user to complain about an annoyance that doesn’t fall neatly under one of the other labels. I just don’t see the problem here if the information is obtained more easily and reliably by posing the question.

    You can see TrueDelta’s survey for the way I prefer to do it: with entirely objective terms. For example, “Was the car towed to the shop?” is both simple and free of subjective interpretation.

    It’s a fair question to ask, but it may have been a subjective choice on the part of the owner that led to the answer provided to that objective question. Towing the car for what turned out to be a dead battery would be quite different than having it towed because it left its transmission on the side of the road, for example. Some people drive cars that should have been towed in, while others tow unnecessarily, so the value of the question would depend upon what information that you’re trying to get. I’d say it’s a good question for determining whether the owner thought that it was worth towing, not necessarily for determining whether it needed to be towed.

    All data has some noise in it, and the sample size should help to resolve some of it. CR has a million survey respondents, so the noise filter should be pretty decent.

    Getting more to the heart of the matter, CR constructs a pretty accurate and trustworthy survey. The fact that it correlates well with other survey data suggests that it is not an oddball outlier in the world of surveys.

    As a member of the public, I am more interested in hearing reasons why I should consider your data to be at least on par with it than I am in getting a faulty critique of a rival. Your service is not improved one iota by whatever flaws CR has or doesn’t have, and as a data user, I am more interested in what is in it for me than in your attempts to demean the competition.

  • avatar

    FWIW, I’ve taken another look at CR’s Car Reliability FAQ. It’s here, which may be members only:

    http://www.consumerreports.org/cro/cars/new-cars/auto-test/consumer-reports-car-reliability-faq-8-06/overview/0608_consumer-reports-carreliability-faq_ov.htm

    In this FAQ they make a few evasive attempts to respond to my critiques.

    For example, the age of the data:

    1.5. How current is the data?
    All our reliability information is completely updated annually. We begin sending out each year’s survey in the spring. By late summer, we have collected and organized responses, and we complete our analysis and update the information online by mid October. The new information first appears in print in the Consumer Reports Best & Worst New Cars, on newsstands in November. Subsequent auto publications, such as the New Car Buying Guide, also use this new information. In the pages of Consumer Reports, we update Predicted Reliability and Recommendations in the vehicle Ratings beginning in the road tests in the November issue. Changes to new car recommendations are published in the December CR issue and used car results are published in the following April issue of CR. All reliability information we publish is based on subscribers’ experiences with cars in the 12-month period immediately preceding the survey.

    What they don’t say in clear terms: the data are from a survey sent out in April 2008, which was nearly a year ago. There’s a reason the FDA requires clear expiration dates and not wordy paragraphs on milk cartons.

    6.5. Since the average number of problems is small for most models, is Consumer Reports overemphasizing differences that may not be important?
    Beyond statistical significance, we believe these differences are also meaningful to car buyers. We think that car buyers should expect a new car to be entirely problem-free in its first months or years of service. While the difference between a [red dot] and a [half-red dot] may be small, a pattern of several less-than-perfect trouble spots in a brand new car should be cause for concern and does not bode well for a model’s long-term reliability. We have not yet seen a single model in our survey that is entirely problem-free. More than that, one of the worst new models in the 2008 survey, the redesigned for 2008 Chrysler Sebring Convertible, has nearly four times as many problems as the average model, and 20 times as many problems as the Scion xD, which debuted in the same year. Those differences among models are important for car buyers to consider in choosing a car. We present these scores for trouble spots primarily to allow consumers to compare the relative incidence of problems among models. While there are no guarantees, you can improve your odds of buying a reliable car if you choose a model that has had a lower rate of problems in the past.

    First, note that they distinguish between “statistically significant” and “meaningful,” recognizing that these are two different things. To prove that the differences are meaningful, they first imply that even a single problem in a sample of 100 cars is meaningful, then compare the two extreme models, and never provide any absolute problem rates. Generalizing from a comparison between two extreme cases to conclude that all differences are meaningful is simply poor logic. Overall, this is called “evading the question.” For other examples, see just about any Presidential debate or press conference.
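
    The gap between “statistically significant” and “meaningful” is easy to show with invented numbers. A quick Python sketch, where neither problem rate comes from CR’s data:

        import math

        def two_proportion_z(p1, n1, p2, n2):
            # z statistic for the difference between two observed problem rates.
            pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
            se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
            return (p1 - p2) / se

        # Hypothetical: model A at 5.0% problems, model B at 4.0%,
        # with 5,000 survey responses each.
        z = two_proportion_z(0.05, 5000, 0.04, 5000)
        print(f"z = {z:.2f}")  # about 2.4, past the usual 5% significance bar

    The difference clears the significance bar, yet in absolute terms it amounts to one extra problem per hundred cars. Significance tells you a difference is probably real; it says nothing about whether it matters to a buyer.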

    I only paste the full paragraphs here because the number of words used to say very little is part of my point, and because they might be inaccessible to non-members. If someone can confirm that the above link works for non-members, the quotes can be removed.

  • avatar

    PCH101:

    I’d love to never mention CR, and am generally taking this tack elsewhere. I discuss CR here because TTAC’s readership is more tolerant of facts that differ from “what everybody knows.”

    Why mention them at all? Because misperceptions of what CR provides are so ingrained, not only in the general public but in the media. As a result, people think they’re getting something different from what they’re actually getting, and so don’t see the need to seek out what they only think they already have. People are unaware of the problem, so they don’t recognize the need for a solution. Unless I help make people aware of the problem (no one else is doing it), a better mousetrap by itself won’t do the trick.

    TrueDelta’s results have two major advantages over CR’s:

    1. Actual repair rates, so you can see the true size of the differences between models.

    2. Results updated promptly four times a year, so on average they are over nine months fresher (see the sketch below). Is there any other kind of information where one source is over nine months ahead, yet you use the slower source?
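
    The “over nine months” figure is back-of-the-envelope arithmetic. A sketch, with publication lags that are estimates rather than official schedules:

        # Annual survey: fielded in April, published mid-October, then used
        # until the next update. The newest data point is roughly 6 months
        # old at publication and roughly 18 months old just before the update.
        annual_avg_age = (6 + 18) / 2        # about 12 months

        # Quarterly survey: results posted within weeks of each quarter's
        # close, so the newest data runs from roughly 1.5 to 4.5 months old.
        quarterly_avg_age = (1.5 + 4.5) / 2  # about 3 months

        print(f"freshness gap: {annual_avg_age - quarterly_avg_age:.0f} months")

    On those assumptions, the gap averages roughly nine months.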

    The legend you remember reading, and so believe still exists, stated what range of repair rates each system-level dot (engine, transmission, and so on) represented. No such legend was ever provided for the overall reliability scores. And a few years ago they tossed the absolute scale for the system-level scores as well.

    This is covered in the FAQ here:

    4.3. How has this approach differed from the way it was done in previous years?
    CR has changed the way it presents reliability data, beginning with the 2005 survey.

    In previous surveys, the symbol for each trouble spot represented a specific range of problem rates. This allowed readers to make direct comparisons in the rate at which people reported problems in different trouble spots and in different ages of cars. However, this previous approach made it difficult for readers to make sense of whether a particular trouble spot was better or worse than average, and in some cases, it limited our ability to identify unusually reliable or unreliable cars. Also, the Used Car Verdict and Predicted Reliability are relative values that compare a vehicle’s reliability record to the average model of the same age. The absolute scale of the trouble spots and the relative scale of the Verdict caused some confusion and frequent questions. So we changed our analysis in order to represent the data in a way that would be more clear and useful to readers.

    To summarize: all of the ratings are now relative, and no actual problem rates are stated, in a legend or otherwise. This permits smaller and smaller differences to be reported as “meaningful.”
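
    Here is what an all-relative presentation can do, sketched in Python with invented repair rates; the rates and the cutoffs are illustrative assumptions, not CR’s actual thresholds:

        # Invented annual repair rates (percent of cars repaired).
        rates = {"Model A": 4.8, "Model B": 5.0, "Model C": 5.2}
        average = sum(rates.values()) / len(rates)

        def relative_rating(rate, avg):
            # Bucket a model purely by comparison to the average,
            # discarding the absolute rate itself.
            ratio = rate / avg
            if ratio < 0.98:
                return "better than average"
            if ratio > 1.02:
                return "worse than average"
            return "average"

        for model, rate in rates.items():
            print(f"{model}: {rate}% -> {relative_rating(rate, average)}")

    All three invented models sit within 0.4 points of one another, yet the relative-only labels make one look “better” and another “worse.” Without the absolute rates, a reader has no way to see how small the spread really is.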

  • avatar

    CR’s survey gives specific examples for each problem area. Here’s what it says in the “rattle and squeak” area:

    BODY INTEGRITY (Squeaks or rattles): Seals, and/or weather stripping, loose interior trim and moldings, air and water leaks, wind noise.

    Water leaks can be serious. The rest, not so much. What I’ve never understood is why a survey that supposedly asks only for serious problems even lists problem types that are never serious.

    I don’t think their survey was as detailed in the past, and you might be recalling a past survey.

  • avatar

    The question I’d REALLY like someone to answer:

    Why are misperceptions and misinterpretations of CR so prevalent, and so deeply believed?

    I honestly don’t understand this, and need to. It’s almost like discussing religion or politics. Why?

    People tell me that I hate CR. Well, I don’t. I actually have positive feelings about CR as a whole. Where I get emotional (yes, it happens) is when people firmly believe that CR does things it does not do.

    If someone says they want reliability information that is a year old, when they can have information that is much more recent, then I have no problem with that. Where I have a problem is when people think they’re getting one thing, but are really getting another.

    In this I can sympathize with General Motors, though GM’s case is decidedly mixed. Some perceptions of GM continue to have a basis, others do not. All of them are difficult to change.

  • avatar
    golden2husky

    Michael Karesh:

    You are fighting an uphill battle. In all my discussions with people about cars, most bring up CR as the “Bible.” These people also know the least about cars. They turn up their noses at a car that has a half-red circle, or just an average mark. They refuse to admit that the “red dot” supercar they bought 8 years ago would likely only rate average today. Couple that thinking with the bias that brand X has to suck, closed-minded fanboys, etc. I really would like to know what the real difference is, just like you. Too bad this thread has been bumped off the home page, into the ether, never to be seen again…wish RF would consider a page two for bumped stuff.

  • avatar

    There’s a page two, and a page 22. At the bottom of each page there’s a “previous page” link.

    But it might as well not be there, as few people use it.

  • avatar
    golden2husky

    Did not know that…thanks for the tip!

  • avatar
    tedward

    Ferrygeist
    I actually don’t disagree with anything you’ve said…I certainly won’t try to defend the Hemi as technologically relevant. However, despite the valve count, this is definitely a refined, if not efficient, engine. It just doesn’t belong on a list of crap product.

    The S2000 is one of my top 3 favorite cars, so I can’t argue against it, but I’d like to point out that it’s making about 150 lb-ft at 6,500 rpm. The two engines are as “apples and oranges” as two normally aspirated engines could possibly be.
