This article first appeared in the PB December 2005 issue of Pro Builder.

Imagine setting out to deliberately design one of the nation's largest home building companies with a primary goal to never please your customers. You might think you would be some kind of industry pioneer with this strategy. But it has already been done — and by more than one builder. If you spend some time studying the nationwide results of the J.D. Power and Associates New-Home Builder Customer Satisfaction Study for the last four years, you will find several large builders who have virtually never had a single one of their operations finish above the mean.

That is not merely incredible and hard to imagine. Statistically, it is near impossible. To achieve this dubious distinction, low customer satisfaction has to be a strategic goal and an operational imperative. Perhaps they just didn't realize what their strategies, policies and practices really meant — or perhaps they did. Now here's the scary part. If you and I can go to J.D. Power and figure this out, then so can prospective homebuyers.

Should you really care about these rankings? Yes. At Professional Builder's recent Benchmark Conference, Paul Cardis, president of NRS, one of the industry's top customer satisfaction research firms, presented the results of his most recent national customer survey. One of the key findings was that 95 percent of homebuyers planned to seek out builders' customer satisfaction scores to help them make a purchase decision. So you have to care. It is no longer optional.

More than five years ago, I wrote a column about the decision of J.D. Power to measure customer satisfaction in home building and publicize the results. I predicted it would forever change our industry.

No one paid any attention for the first two years, and my prediction looked premature at best. But as J.D. Power has extended the survey to more markets, builders' interest, and their uneasiness, have grown.

Now, with the J.D. Power survey covering 30 U.S. markets as well as Toronto, the level of interest and concern over customer satisfaction scores approaches paranoia.

J.D. Power does not publish an overall U.S. ranking, and for good reason. It is almost impossible to calculate meaningful national scores directly from builders' individual scores in each market. For example, if you earn a #1 award in City A with a score of 125 and a #3 finish in City B with a score of 140, how do the two compare? Which one is better? How do you compare a builder who operates in two markets to a builder who operates in 50? What about builders who do business under multiple names in a single market?

Two years ago I thought I'd give it a try. I wrote a column wherein I took the J.D. Power scores from all U.S. builders showing up in five or more locations. I calculated a rank using a very simple system that I described in detail. It was generous to firms with operations finishing below the mean because I simply gave them no score instead of the negative numbers they deserved.
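To make that comparison problem concrete, here is a minimal sketch, in Python, of the kind of market-relative aggregation described here: score each builder against its market mean, then sum those differences across markets, either flooring below-mean finishes at zero (the generous approach from that column) or keeping them negative (the more aggressive pass described below). The markets, builders, and scores are hypothetical, and this is an illustration of the idea rather than the exact system used in the original analysis.

# Illustrative sketch only. Market names, builders, and index scores are hypothetical.
from statistics import mean

# Hypothetical J.D. Power-style index scores, by market: {market: {builder: score}}
scores = {
    "City A": {"Builder X": 125, "Builder Y": 118, "Builder Z": 110},
    "City B": {"Builder X": 140, "Builder Y": 128, "Builder Z": 122},
}

def builder_totals(scores, floor_at_zero=True):
    """Sum each builder's distance from its market mean across all markets.

    floor_at_zero=True mimics the generous approach (below-mean finishes count
    as zero); False keeps the negative values, as in the more aggressive pass
    described later in the column.
    """
    totals = {}
    for market, by_builder in scores.items():
        market_mean = mean(by_builder.values())
        for builder, score in by_builder.items():
            delta = score - market_mean
            if floor_at_zero:
                delta = max(delta, 0)
            totals[builder] = totals.get(builder, 0) + delta
    return totals

print(builder_totals(scores))                       # no penalty for below-mean finishes
print(builder_totals(scores, floor_at_zero=False))  # below-mean finishes count against you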

That column brought me nothing but grief. Unless they finished first, builders did not want to be listed at all. One CEO yelled at me about how my article was going to tank his stock price. I explained that there is no evidence Wall Street cares how well builders are regarded by their customers. (I would love to be proven wrong about that, but I have yet to see an analyst report that weighs customer standing as heavily as signups, closings, backlog, assets, turns, gross margin and the rest.)

My angry CEO's big point was that the J.D. Power survey had to be rigged, because many of his divisions that showed top customer satisfaction on the company's internal survey scored poorly with J.D. Power. I soon discovered that the CEO had tied huge financial incentives for his field managers to high marks on that internal survey. He began to wonder whether some of his guys were manipulating the results. I told him he didn't have to wonder; I could guarantee it.

Author and researcher Jim Collins writes eloquently about the reluctance of firms to "face the brutal facts." But as reluctant as firms are to do that internally, they positively loathe doing it in public. When the latest J.D. Power results were published in September, I told myself the industry needed to take Collins's advice, and that this year I would do the analysis again and write it up. After spending a few hours scanning the results by both city and company, it appeared that not much had changed in terms of who was doing well and who was faltering. My staff crunched the numbers a little more aggressively this time and assigned negative values to below-average scores.

The results are volatile, and as an industry we may not be ready to discuss them objectively. For example, several large builders who tout their high customer satisfaction didn't just score low; in some cases they pulled negative scores.

I am not going to include the entire list. As fortune, and the numbers, would have it, there is a statistical break in the data that separates the top five from the rest. No matter how you calculate it, you will come up with the same top five we did. Next month, I will name those companies and describe what they do that sets them apart. Meanwhile, to those who are under-performing, I offer one of the most profound quotes of all time, this one courtesy of Deming disciple Brian Joiner: "Every system, and every company, is perfectly designed to produce the results it is getting."

SCOTT@TRUEN.COM
