By Michael Karesh on November 3, 2010

In Part 1, we found that, despite its large overall sample size, Consumer Reports has serious gaps in its coverage. But what about the reliability ratings they can provide? An FAQ touts CR’s ability to split results by engine, drive type, and so forth. At first glance, this appears valuable, as CR’s reliability scores often differ from powertrain to powertrain. But are these differences valid? Should you avoid the V6 in the Camry, or insist that your Flex be EcoBoosted?

In his review of CR’s latest results last week, Jack Baruth noted that the 2010 V6 Camry is rated worse than average. Dig a little deeper, and this rating appears to be based on problems with squeaks and rattles, power accessories, and the audio system…all involving parts shared with the other Camry variants. The implied problems with the V6 powertrain? They don’t exist. All of the powertrain-related systems receive top marks.

The Ford Flex EcoBoost that leads its class is predicted to have reliability 60 percent better than average. Not mentioned in the press release: the Ford Flex AWD sans-boost is predicted to have reliability 16 percent worse than average—nearly bad enough for the half-black blob and a non-recommend. A 76-point difference is huge. A solid red blob and a solid black blob, best and worst ratings, are 90 points apart. The source of this massive difference? It’s not a pair of turbos.
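For clarity, here is the arithmetic behind that spread. The percent-of-average scores come from CR’s published predictions; treating them as points on a single scale is my reading, not CR’s documented math:

```python
# Point spread between the two Flex predictions described above.
ecoboost = +60     # predicted 60 percent better than average
sans_boost = -16   # predicted 16 percent worse than average
print(ecoboost - sans_boost)  # 76-point spread

# For scale, CR's best and worst ratings (solid red and solid black
# blobs) sit 90 points apart; symmetric endpoints are my assumption.
best, worst = +45, -45
print(best - worst)  # 90
```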

CR’s predictions are based on however many of the three most recent model years they have sufficient data for. The EcoBoost was new for 2010, so the prediction for 2011 is based entirely on the 2010. In contrast, the prediction for the sans-boost also incorporates data on the 2009. Should first-year glitches unrelated to the powertrain have any more bearing on the 2011 sans-boost than on the 2011 EcoBoost? In CR’s formula, they do.
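A sketch of how this plays out, assuming (my assumption; CR does not publish its formula) that the prediction simply averages the problem rates of the model years with sufficient data. All rates below are invented for illustration:

```python
# Hypothetical problems per 100 vehicles, by variant and model year.
problem_rate = {
    ("sans_boost", 2009): 40,  # first-year glitches, non-powertrain
    ("sans_boost", 2010): 25,
    ("ecoboost",   2010): 18,  # EcoBoost new for 2010: one year of data
}

def predict(variant, years):
    """Average the problem rates of whichever years have data."""
    rates = [problem_rate[(variant, y)] for y in years
             if (variant, y) in problem_rate]
    return sum(rates) / len(rates)

print(predict("ecoboost", [2009, 2010]))    # 18.0 -- 2010 data only
print(predict("sans_boost", [2009, 2010]))  # 32.5 -- 2009 glitches included
```

The first-year glitches drag down only the variant that existed in that first year, even though both 2011 variants share the same non-powertrain parts.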

Second, even looking at only the 2010s, the sans-boost fares much less well than the EcoBoost. This would justify a more pessimistic prediction, except that, just as with the Camry (and nearly every other case I checked), the differences between the powertrain-based variants have little or nothing to do with the powertrains. Owners of the 2010 sans-boost Flex reported far more problems with squeaks and rattles, body hardware, power equipment, and the audio system—all involving parts shared with the EcoBoost.

This is far from an isolated anomaly. For every case like the BMWs with the turbocharged six (with its notoriously unreliable fuel pump), there are a number of others like the Toyota Camry, the Ford Flex, the Hyundai Genesis (a lower V8 score can be traced to the Technology Package, which is offered with both engines but more often ordered with the V8), and the Mercedes-Benz C300 (where non-powertrain problems common enough to earn the 2010 a solid black blob go away when AWD is added). Key takeaway: the differences in CR’s ratings for different powertrains often have nothing to do with powertrain-related parts. When such differences appear, it’s critical to check the system-level blobs.

In an FAQ, CR provides an explanation for a similar (though only half as large) discrepancy between the very closely related Chevrolet Equinox and GMC Terrain [brackets mine]:

The Terrain had slightly [about 40 percent] more [reported] electrical, audio, and paint and exterior trims [sic] problems… We believe, though, in the accuracy of our data, and we have a commitment to report the experiences our subscribers share with us. In some cases, they report different reliability experiences with closely related models.

In fewer words: our data are accurate because we believe in the accuracy of our data.

Unless Ford performs far more thorough quality control on the boosted Flex, an unexplained 76-point difference should not happen. A minuscule sample size might explain it, but CR’s sample size isn’t small. The problem, then, is their methods. Ask the wrong question, and it doesn’t matter how many people answer it.

The problem with CR’s key question: it asks car owners to report problems they considered serious. Letting each respondent decide whether or not a problem is serious enough to report opens the door wide for bias. Not CR’s bias, at least not directly, but any bias the car owner might hold, however honestly. Love the car? Treated well by the dealer? Warranty paid for the repair? Then even a failed transmission might not seem “serious.” Especially not if it happened almost a year ago—the impact of this subjective wording is magnified by the annual (in)frequency of the survey. At the other extreme, many CR subscribers report minor problems like rattles and squeaks. If EcoBoost owners love their Flex considerably more than sans-boost owners do, a large difference in reliability—as reported by CR—might result.
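To see how much damage this subjectivity can do, consider a toy simulation (all numbers invented for illustration): two groups of owners whose cars have exactly the same true problem rate, but different thresholds for what feels “serious” enough to report.

```python
import random

def reported_rate(n_owners, true_rate, report_prob, seed=0):
    """Fraction of owners reporting a problem, given a true problem
    rate and the probability an owner deems a real problem 'serious'."""
    rng = random.Random(seed)
    reports = sum(1 for _ in range(n_owners)
                  if rng.random() < true_rate
                  and rng.random() < report_prob)
    return reports / n_owners

# Identical cars; one group shrugs off most problems as not "serious."
sans = reported_rate(10_000, true_rate=0.30, report_prob=0.9)
boost = reported_rate(10_000, true_rate=0.30, report_prob=0.4)
print(sans, boost)  # the first group "looks" far more troublesome
```

Despite a large sample and identical hardware, the reported rates diverge dramatically, because the divergence is in the owners, not the cars.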

In response to a blog comment critical of CR’s methods, a staff member recently argued:

The reliability survey asks if the owner had a problem requiring repair. The way it is constructed, it is objective… To use your example, if Fox News questioned viewers about political views, it would yield a certain, slanted response. However, if Fox News asked viewers if their TVs needed repairs in the past year, either they did or didn’t, regardless of political persuasion.

This would be a valid defense—but only if the survey is truly constructed to maximize objectivity. TrueDelta’s is. CR’s is not. The way their key survey question has actually been worded—for years—introduces so much subjective variation that even large sample sizes cannot compensate. A massive 76-point difference can be elicited where none objectively exists—and then be ignored when reporting results. Most models differ from one another by much less.
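A quick sketch of the statistics involved (illustrative numbers, not CR’s actual figures): sampling error shrinks as the square root of the sample size, but the bias introduced by a subjective question does not shrink at all.

```python
import math

true_rate = 0.20   # hypothetical true problem rate
bias = 0.08        # hypothetical systematic reporting bias

for n in (100, 10_000, 1_000_000):
    # Standard error of a reported proportion: sqrt(p*(1-p)/n).
    se = math.sqrt(true_rate * (1 - true_rate) / n)
    print(f"n={n:>9,}: random error ~{se:.4f}, bias still {bias:.2f}")
```

At a million responses the random error is negligible, yet the bias is exactly as large as it was at a hundred.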

Michael Karesh owns and operates TrueDelta, an online source of automotive pricing and reliability data.


35 Comments on “2010 Consumer Reports Survey Analysis: Part Two: EcoBoost Oddity...”


  • avatar
    Paul Niedermeyer

Thank you, Michael, for shedding light on this one aspect of CR’s stats that has always bothered me. Over the years, I came to the same conclusion: CR’s differences between different engine versions of the same model couldn’t be trusted without further information.

  • avatar
    John Horner

    This analysis makes a lot of sense, and pointing to the survey question’s invitation to subjectivity as the root cause of unreasonable results seems to be right on the mark.
     

  • avatar
    Contrarian

    I don’t trust CR for car ratings, and I don’t need them to tell me which toaster or vacuum cleaner to buy so I really don’t have any use for them at all. You can learn more about products on the internet than reading CR. I would guess that many of their remaining subscribers are elderly and/or technophobes. Sort of the same demographic that buys the Toyotas that they tout so highly.

    • 0 avatar
      Quentin

      I have a Toyota that automatically starts playing Pandora radio streamed wirelessly through the bluetooth connection on my iphone 4 when I start the car.  I don’t fear a ghost in the machine, either.  I guess there is a flyer in every dataset, though. 

  • avatar
    geeber

    Dig a little deeper, and this rating appears based on problems with squeaks and rattles, power accessories, and the audio system…all involving parts shared with the other Camry variants.

    Except that those parts may not be installed on all Camry variants. I would imagine that the V-6 models come with more standard equipment (particularly power accessories) or are more likely to be ordered with power accessories, including a sunroof. A sunroof, for example, increases the likelihood that the owner will hear squeaks and rattles. People ordering base, four-cylinder Camrys are unlikely to spring for a sunroof (if it’s even available).

    The problem, as it appears to me, is not that the results are different, but that Consumer Reports doesn’t take the extra step and explain WHY the results are different. If Camry sunroofs are more likely to cause squeaks and rattles, and V-6 Camrys are more likely to be equipped with sunroofs, then that explains part of the disparity. This is why one must actually look at the charts, and not just the blurb in the press release.  

    • 0 avatar

In the analysis I note that this is clearly (well, as clearly as it gets with CR’s blobs) the case with the Genesis. And perhaps with the Camry as well, though they do offer the four in the highest trim level. But what about the Flex? The EcoBoost is only available on the upper trims, and likely tends to be ordered with more options. Yet EcoBoost owners reported far fewer problems with accessories.

    • 0 avatar
      Quentin

      Toyota’s audio integration for ipods isn’t the most elegant.  When scrolling through songs on my ipod via the headunit controls, it starts playing the song you are “hovering” on.  It doesn’t bother me, but it drives some people on the 4Runner forum insane.  I’m sure that the ipod integration is standard, or generally equipped, on the V6 Camry.  That could explain the difference. 

  • avatar
    rtfact32

    “This would be a valid defense—but only if the survey is truly constructed to maximize objectivity. TrueDelta’s is. CR’s is not.”
    While long-time readers may know who Mr. Karesh is, I don’t, and didn’t until the very end of the article. I feel misled, and I feel the credibility is destroyed by the sentence above.
I agree 100% with the points that are made in the article, and will take CR’s survey analysis with a grain of salt now. But I also feel that TTAC has given Mr. Karesh a forum for free advertisement. Objectivity is important, as is full disclosure.
    I’m very disappointed.

    • 0 avatar

      Frequent readers are very well aware of who I am and what I do. And this is fully disclosed following the article.

      If you agree 100 percent with the points made, and can find absolutely no fault with my arguments, then what’s the issue? In what respect is the analysis itself lacking in objectivity?

      The reason I referred to my own survey is because it’s worded the way they claim theirs is, but isn’t.

      You learned valid information from the article, but then suggest that TTAC shouldn’t have posted it. Problem is, you’re not going to get this information anywhere else. If other people were providing critiques like this one, I wouldn’t have to. But they aren’t.

    • 0 avatar
      Brendon from Canada

rtfact32 – Had the disclaimer appeared above the article, would your opinion change?  Michael’s tagline isn’t hidden at all, and all other TTAC disclaimers appear at the bottom (usually in relation to gas/flights/accommodations provided during a review).

The issue I take with your complaint is that while the article does provide some advertising (I’m not sure what percentage of TTAC readers are converted to TD participants), it also furthers TTAC’s purpose – to get to the truth.  If CR is doing a snow job on its clients (i.e., consumers), I see no problem with a competitor calling them out if they have a justifiable argument – as long as the disclaimer is in place, so that we can evaluate any opinions or conclusions presented.

      My personal disclaimer: I browse and contribute to TrueDelta (most months!).  Frankly it hasn’t influenced my purchasing decisions as reliability is lower on my relative priority scale, however I understand the value of collecting the data – a raw data version of TTAC. 

    • 0 avatar
      Bob12

      @rtfact32:
      I can see how you might think the article could have the appearance of free advertising. Perhaps having the customary disclaimer appear at the top of such articles instead of the bottom would be a consideration for future posts of this type, but that’s a decision for Ed Niedermeyer.
That said, the essential issue (IMO) is to determine who puts together a better analysis. While I’ve read criticisms of CR’s methods, I’ve never read a criticism of TrueDelta that was backed up by math. If someone can explain how Michael Karesh utilizes inappropriate or incorrect statistical analysis methods or data-gathering techniques, then that would diminish the value of TrueDelta IMO. I haven’t seen that happen yet.
      P.S.: I have no financial or personal relationship with Michael Karesh or TrueDelta, not even as a paying customer.

    • 0 avatar

My methods have their critics, focusing on my lack of a random sample. A random sample would be so expensive that I could not hope to primarily serve car buyers.  Like J.D. Power, I’d have to focus on serving the OEMs. Luckily I have found that a random sample is not essential as long as the survey is worded as objectively as possible–I’ve offered the same defense CR’s staffer did, with the difference that my survey is actually worded objectively.

  • avatar
    peteinsonj

Market research has a qualitative aspect to it — one that you can’t overlook.  Nor discount.
     
I may forgive some small glitches with the radio in my cheapo train-station car that I would not in my $50,000 visit-a-client car.  And the mechanic might not say something is as severely wrong as my brother might, who doesn’t ever lift a wrench on his car.
(or perhaps it’s the inverse?)
     
    The customer’s perspective IS always right — and that’s why the CR ratings work pretty well.  If I met the same guy who reported a problem with his V6 Camry to CR at the doughnut shop — he would tell me the car was a piece of crap — and that would be an impression I’d file.

    • 0 avatar

      One of CR’s own staffers explained why the survey should be worded as objectively as possible, in a quote included in the article. They just don’t do it.

      When you can have a 76 point difference between two powertrains that isn’t at all related to the powertrains, something most certainly is not working well.

      If you want to know how satisfied people are with their cars, CR has a separate survey for that. But a satisfaction survey (or any opinion survey) that doesn’t use a random sample is prone to inaccuracy.

  • avatar
    donkensler

    I stopped paying attention to CR’s reliability ratings years ago, in the time of the first generation Ford Probe (’89-’92), when it allegedly had much worse reliability than the Mazda MX-6, and the differences were almost entirely in the drive train, which was entirely common between the Probe and the MX-6.  Let’s see – same platform, same drive train options, same assembly plant, but one’s more reliable than the other.  Oh, really?  It was just an unbelievable result, and I chose not to believe it and downgraded CR’s ratings ever after.
    At the time, I put it down to a difference in reputation leading to a difference in behavior of the buyers.  Ford buyers had an expectation that Fords had reliability problems, so were sensitive to every hiccup in the engine and transmission, while Mazda was protected by the Japanese halo.  Buyers figured, “Gee, Japanese cars are really reliable, so these problems must not be too much.  I guess I’ll just forget about them.”

    • 0 avatar
      Robert.Walter

      Didn’t probe have a Ford oval on the intake manifold or valve cover?  Maybe the Blue Oval Glue used to hold it on was different from the Mazuda (as they say in Hiroshima) glue?  ;O)

    • 0 avatar
      7

If I remember right, the Hondas produced in Rover’s factory (the Concertos) were subjected to more quality controls than their Rover counterparts (the 2xx series). Maybe it was the same for the MX-6 and the Probe.

    • 0 avatar
      ponchoman49

I agree about how the Asian-car-buying public perceives their cars as better because they used to own that proverbial 1985 Toyota Corolla that went 90 billion miles without ever needing a tune-up or any work performed. That was then and this is now. How can a 2009 Corolla that has needed a water pump, alternator, and new brake rotors and pads, and that has suffered a recent unexplained mileage loss, at but 40K miles, be considered more reliable than my 2008 Impala, which now has 71K miles and has only needed the ISS lubed, plus tires, brake pads and other normal wear items? Bet the Corolla would somehow be spun as more reliable by CR if we filled out the same report.

  • avatar
    Conslaw

    How does the F.A.A. gather information regarding aircraft mechanical problems?  From an outsider’s perspective, the maintenance data for aircraft seems to be fairly precise. Maintenance intervals, including engine changes and rebuilds must be rigorously followed and signed off by an FAA-certified mechanic.  Because of this mandated maintenance, catastrophic failures in flight are very rare, even in light aircraft.  Maybe the car industry can learn from the aviation industry.

  • avatar
    Russycle

    I assume the question at the end of your article is rhetorical, but why indeed?  CR is not a major advertiser with any car-related publication, so why isn’t anyone else questioning their methods?  Are they so used to carrying water for the auto manufacturers that it doesn’t occur to them to actually investigate and critique, or do they fear that any critique with teeth in it will  scare off their advertisers?

    • 0 avatar

Three reasons, as far as I can tell:

      1. CR has invested a lot of money and effort into cultivating relationships with media over the years. They’re all friends at this point.

      2. Media have a lot invested in CR’s credibility, after relying on them as a trusted source for years. Undermine CR’s credibility, and they’d undermine their own.

      3. Journalists tend to have a lot of respect for CR’s refusal to accept advertising. But if the wall between advertising and editorial is so solid at their own publications, then why do they feel this should count for so much?

  • avatar
    TrailerTrash

    Michael,
    THIS is  why I no longer subscribe to CR.
    My personal experience with purchases never seemed to match their ratings.
    I knew it couldn’t be me.
     
    And THIS is why I reply to TrueDelta emails.
    It doesn’t matter if I am lazy or not…and I really am lazy…IF I want a place to get the clearest available data, I need to do my part.

    You, we, have to put in the information! 
    Knowing that even TrueDelta can have data error or weaknesses, it is right now the best we have and the only hope for true(est) future data.
    It’s almost a personal responsibility we all have to participate.
If we don’t, fellow auto enthusiasts, then who can we blame but ourselves for not participating in the cause?
     

  • avatar
    HoldenSSVSE

    Great series of reads.

So I have to ask the question: should things like squeaks or rattles even be counted as “problems” when you are reviewing the reliability of a vehicle?

Yes, I hate squeaks and rattles as much as any other adult with reasonable hearing.  But if mechanically everything works, yet the glovebox rattles when I hit a bridge divider, is that even a RELIABILITY issue?  I’m thinking no in my book.

    Curious on what Mike K. has to say about this because I don’t seem to remember on my True Delta reports being asked, “hey, does your car squeak?”

    • 0 avatar

TrueDelta’s survey asks whether the car was taken to the shop for anything. If you took the car to the shop for a squeak, and they fixed it, it counts.

      “Reliability” isn’t the perfect term for all problems. Except that there isn’t a better one.

      This month an improved, nine-point question on problem severity will replace a couple of yes/no questions on the survey. So starting in February 2012 we’ll be able to report stats weighted for problem severity in addition to the regular stats. Sooner if we go back and recode earlier responses.

      I wrote about this here:

      http://www.truedelta.com/blog/?p=663

      If anyone has any input on this proposed (and very soon actual) question, now is the time to speak up.

  • avatar
    Steven02

This reminded me of a similar problem with the Buick Enclave recently.  The FWD version was rated below average, but the AWD version was rated average.  The rest of the vehicle is the same.  Only one engine option.  Interiors are the same, but the AWD rated better.  Since then, they have both been rated average, but I haven’t seen it in this year’s reporting.  It will be interesting to see how this turns out.

  • avatar
    mtypex

It’s true that CR points out that V6 or turbo models, etc., have problems without saying why, but like someone said, the donut-shop example is the one people use.  If it’s not because of the engine per se but because some trim levels are assembled in other locations, or on Mondays, or are typically equipped with other features that break … the end result is the same.

    • 0 avatar

      Yes, this is CR’s explanation. But it doesn’t hold water.

      You’re assuming that the cars reported as having more problems actually have more problems, and that there isn’t a serious flaw in CR’s methodology. I’m arguing that this assumption is incorrect.

If there’s a flaw in my argument, it’s my too-quick rejection of the possibility that CR’s sample sizes are too small. In an earlier, longer version of this piece I also discussed CR’s tendency to split hairs much finer than their data permit. This increases the likelihood of large errors.

  • avatar
    findude

    I’m puzzled by the whole car survey thing, whether it’s Consumer Reports or Truedelta (disclaimer–I have participated since early on with Truedelta).
     
    My approach to buying new or used cars is simpler if somewhat more time consuming: I read the forums exhaustively and do my own research on the street. I don’t buy “new” models or drive trains–I play it conservative and identify and purchase the tried ones that have at least a couple years of street cred.
     
ASK people how they like their cars!  Stop them in parking lots. Leave a note on a windshield saying that you’re thinking of buying a car like theirs and ask them to call you to say what they think.
     
    There is a wealth of information in the forums.  Just go to your favorite search engine and look for Manufacturer Model Forum then spend an hour or so on each of the several top hits.
     
    It’s nuts to buy a car based on inputs from one data point.  Look at CR. Look at Truedelta. Read “reviews” in the glossy car mags. Ask people. Chat up a service writer or mechanic next time your car is in for service–they know if it’s better to get the 4- or 6-cylinder model and they don’t have skin in the game like the people on the sales floor.
     
    It’s not just cars. I did this a few weeks ago when we needed a new dishwasher.

  • avatar
    richmaluga

I mostly agree with Michael (and have no affiliation with TrueDelta, but do subscribe to CR and truedelta.com). When shopping for a Ford Flex, I read through individual consumer reviews on Edmunds.com (there are over 100 for the 2009 non-EcoBoost model). You know what I found? People who gave the car ratings of between 8.5 and 9.5, even though their reviews mentioned brake and quality issues that required them to bring the car back to the dealer — sometimes for multiple visits. This shows that people’s perceptions of whether their cars are “reliable” are skewed by their love of their car. The rating for the car simply did not match the problems they were describing (at least not in my subjective opinion). For them to give it a bad rating, the car’s wheels would have to stop turning altogether.

However, this is not limited to the Flex. Many other brands have similar fan clubs (e.g. the Honda Odyssey Soccer Mom Fan Club) that not only gloss over problems, but also answer bad reviews they see with counter-reviews that try to bolster the brand back up. It is inconceivable to me that CR can give EcoBoost an excellent reliability rating with just one year in the books. I’d be interested in seeing consumer satisfaction reviews in three years, when the bumper-to-bumper warranty runs out. In addition, while I don’t know what the actual sales figures are, Ford always indicated that it expected EcoBoost to represent only about 10% of all Flex sales. If this holds true, then the survey sample for the Flex EcoBoost must be a small one.

In CR’s defense, I think the ratings are a good tool for avoiding cars that have been found problematic. While some drivers can be just as subjective as to what constitutes bad quality, I appreciate those types of drivers more than the ones who are blind to the crap they’ve bought. In the bigger scheme of things, I don’t expect CR to guarantee reliability via their survey. We’re all grown-ups and have to make choices with the most information we can get.
I’ve bought things (not cars) recommended by CR and found more of my purchases to be reliable than not. I hope Michael’s feedback will one day be incorporated into CR’s survey.

    • 0 avatar
      Zackman

IF I recall correctly, when Chrysler had the sludging and failure issues with the 2.7L engines, I did see CR noting the “reliability” with half- or full-black dots, but at the time not explaining precisely what the issue(s) was/were, and THAT is what ticks me off about CR. No detailed information as to WHY a car, or one or more of its components, isn’t “reliable,” which could lead someone who was leaning toward the purchase of a vehicle to get stuck with an extremely expensive clunker. Yes, Toyota, Honda and others had the same issues. Same thing with transmissions.

      Just seeing a series of dots doesn’t fully do it for me, although, admittedly, they do give a hint of a car’s seeming overall grade of pass, fair or fail.

    • 0 avatar

You can now pay $13 for some additional information on some of the dots. I paid the $13, and found the additional information very limited. For the 2002 Dodge Intrepid they add “engine rebuild or replacement” to the half-black dot for “Engine Major.” That’s as much detail as the $13 will get you.

      I was planning to write a third piece about this, but it’s not really on the same level as the first two pieces. Just noting that the additional details lack detail, and involve items like brake pads and the timing belt far more often than truly serious repairs.

  • avatar
    Mr. Sparky

    Having participated in both surveys, I feel much more comfortable with Consumer Reports methodology than True Delta.  I have also looked at the results of both, and based on my personal observations, True Delta has similar sorts of oddities.

Based on the tone of this series, my opinion of CR has not been diminished, but my opinion of True Delta has.  I believe I have completed my last True Delta survey.

    • 0 avatar

      TrueDelta has some results that are off. The difference is that these are almost always associated with small sample sizes, and usually with sample sizes below the minimum such that the results are asterisked and not visible to the general public. I place a comment on results I suspect are not accurate–hover over the results to see these. When two results are so far apart that you must wonder about both of them, I don’t put one in a press release and ignore the existence of the other.

      What is the basis of your comfort with CR’s methodology? Have you looked into it in any detail, or do you simply trust them?

      My apologies for the tone. I fear it follows from my frustration in getting some very basic points across. And I can see that it could be better.

      But should tone be a higher priority than the quality of the information provided?

  • avatar
    SSLByron

    Seems to me like the real lesson here is that you shouldn’t base your buying decision on a single resource, especially if you don’t take the time to evaluate the information it offers. It’s not too difficult to dig deeper into CR’s rankings and see the root of various discrepancies. Some are probably valid. Some are probably not.
    And for all CR’s supposed love from the media, they’re just as often a whipping boy for enthusiasts.

  • avatar
    ciddyguy

As far as reliability goes, I like a reliable car as much as the next person, but I think many may confuse reliability with build quality (although the two aren’t mutually exclusive, as each reflects directly or indirectly on the other and on the vehicle as a whole).
     
That means a car with so-so build quality can have a very robust drivetrain that is reliable, easy to fix, lasts a long time and performs well to boot, while the overall body integrity and fit and finish may not be stellar. Or the reverse can be true: a car that uses expensive-looking materials and is well put together, but can’t stay reliable for any length of time and thus out of the mechanic’s garage long enough to be enjoyed. Often one reflects upon the other to one degree or another.
     
But as far as reliability factors are concerned, I am most concerned with how the engine, transmission, steering and suspension components hold up, and with how effective the cooling system is: does it adequately prevent overheating in hot weather? I also look at the secondary components, such as the electrics: wiring harnesses, electronic components, switchgear, door and window controls and motors. Any and/or all of them can send a car or truck/van to the mechanic to be repaired or replaced, and how often does this tend to take place?
     
I know there will be the occasional clunker part, but the issue is how often this occurs over a model year (or whether it keeps occurring), or whether it was an isolated incident where a one-off bad batch of parts made it into production and didn’t show until much later (it can happen). At the end of the day, does the car start reliably, run reasonably smoothly, perform well, and stay adequately powered, enjoyable and reliable enough to remain out of the dealer’s garage and on the road to be enjoyed — especially in its first 5 years or so, or until it hits 100K miles or above? That to me is a sign of reliability, and it holds for the mechanical and the electrical alike. Once a car gets over 5-7 years old and/or 100K miles, it IS expected for parts to begin to wear out, window motors to get slow, interiors to get a little shabby and so on. But in the car’s first 5 years at least, it is critical that the car show itself to be reliable, as this will often indicate, if it is taken care of, how reliable it’ll remain as it ages beyond 100K miles.

And I should say that I tend to agree with the others that one should not rely on one source, but on a variety of sources, as an overall picture of the product will form: first off, do people enjoy it (yes, no), and is it reliable (again, yes, no, and to what degree)? Cars that have proven to be reliable do not necessarily reflect owners’ enjoyment, as factors such as ride quality, noise, vibration, interior usefulness and all that can negatively affect an owner’s overall satisfaction with said car.
     

