My wife has a Dilbert cartoon on her office door in which one of the characters says: “If you have any trouble sounding condescending, find a Unix user to show you how.” She’s a Mac user and they were worse even before they all became Unix users too.
Or maybe not. But finding out whether the average Mac user really is smarter than the rest of us isn’t so easy. Part of the problem is that even if you matched the admissions test results for a graduate school with individual PC or Mac preferences to discover a strong positive correlation, people would argue that the Mac users are exceptional for other reasons, that the tests don’t measure anything relevant, and that it’s unethical to do this in the first place.
In fact, it’s pretty clear that this topic is sufficiently emotionally loaded that you’d get shouted down by one side or another no matter how you did the research; and that’s too bad because a clear answer one way or the other would be interesting.
I doubt it’s possible to get a definitive answer, but as long as you don’t take any of it too seriously you can have a lot of fun playing with proxies such as the average user’s ability to read and write his or her native language. This isn’t necessarily a reasonable measure of intelligence (mainly because intelligence has yet to be defined) but almost everyone agrees that a native English speaker’s ability to write correct English correlates closely with that person’s ability to think clearly.
Measuring Written English
In other words, if we knew that Mac users, as a group, were significantly better users of written English than PC users, then we’d have a presumptive basis for ranking the probable “smartness” of two people about whom we only know that one uses a Mac and the other a PC.
So how can we do that? As it happens, Unix has been useful for text processing and analysis virtually from the beginning. In fact, the very first Unics application offered text processing support for the patent application process at Bell Labs — in 1971 on a PDP-11 with 8 KB of RAM and a 500-KB disk.
By coincidence, Interleaf, the first GUI-based document-processing package, was the first major commercial package available on Sun — in 1983, well before Microsoft “invented” Windows and well ahead of the first significant third-party applications for the Apple Lisa.
During the 12 years between those two applications, text processing and related research became one of the hallmarks of academic Unix use. By the early eighties, therefore, most Unix releases, whether BSD- or AT&T-derived, came with the AT&T Writer’s Workbench — a collection of useful text processing utilities.
One of those was a thing called style. Style is somewhat out of style these days but is on many Linux “bonus” CDs and downloadable from gnu.org as part of the diction package.
Style produces readability metrics on text. Forget for the moment what the ratings mean and look at the numbers. For comparison, here’s what style says about the first 1,000 words in what is arguably the finest novel ever published in English, The Golden Bowl. Readability grades:
Kincaid: 18.2
ARI: 22.2
Coleman-Liau: 9.8
Flesch Index: 46.7
Fog Index: 21.7
Lix: 64.4 = higher than school year 11
SMOG-Grading: 13.5
Of course, that’s Henry James at the top of his form.
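Scores like these come straight from the command line. Here’s a minimal sketch, assuming the GNU diction package (which provides style) is installed; the sample.txt file and its contents are hypothetical stand-ins for whatever text you collect:

```shell
# Create a small hypothetical text sample to analyze.
cat > sample.txt <<'EOF'
The quick brown fox jumps over the lazy dog. It does this
every single day, apparently without tiring of the routine.
EOF

# Run style if it is available; it prints readability grades
# (Kincaid, ARI, Flesch, Fog, Lix, SMOG) like those shown above.
if command -v style >/dev/null 2>&1; then
    style sample.txt
else
    echo "style not found; install the GNU diction package"
fi
```

On Debian-derived systems the package typically installs with apt-get install diction.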
Slashdot and Other Style
For a more realistic and interesting baseline, I collected about 2,800 lines of Slashdot discussion contributions and ran style against them to get the following ratings summary (a lot of detail data is omitted here):
Kincaid: 7.7
ARI: 8.0
Coleman-Liau: 9.7
Flesch Index: 72.4
Fog Index: 10.7
Lix: 37.1 = school year 5
SMOG-Grading: 9.8
Notice that these results apply to comments from Slashdotters, not to the text on which they’re commenting. Look at the source articles and you get very different results because, of course, most are professionally written or edited. There is an interesting oddity, though: ratings for files made up by pasting together stories posted by “Michael” are consistently at least one school year higher than comparable accumulations made from postings (other than press releases) by “CowboyNeal.”
Comments posted to discussion groups aren’t usually professional productions; you’d expect news articles to rate considerably higher, and they do. Here, for example, is the summary from running style against five articles taken from today’s online edition of The Christian Science Monitor:
Kincaid: 10.4
ARI: 12.5
Coleman-Liau: 12.9
Flesch Index: 59.5
Fog Index: 13.3
Lix: 48.8 = school year 9
SMOG-Grading: 11.6
Lots of smart people have put effort into arguing that these readability scores are either meaningless or meaningful, a choice that apparently depends rather more on the writer’s agenda than on research. Most of the more credible would probably agree, however, that higher rankings are mainly useful as a rough guide to the writer’s expectations about his or her audience, while lower rankings do correlate directly with the writer’s education in English and indirectly with intelligence.
So what happens if we treat the Slashdotters, a mixed bunch if there ever was one, as a median and then compare the ratings shown above with results from “pure play” Mac and PC communities?
The PC Community
I tried running style against text collected from various PC sites. The very lowest ratings came from text collected from an MSN forum host, but I only got about 600 lines because the forums suffer the Wintel design disease of requiring a click for each new text contribution, and I get bored easily.
Kincaid: 2.9
ARI: 1.9
Coleman-Liau: 8.0
Flesch Index: 89.5
Fog Index: 6.0
Lix: 21.5 = below school year 5
SMOG-Grading: 7.1
The highest PC-oriented ratings came from a sample of about 2,500 lines taken from reader comments hosted by PC Magazine:
Kincaid: 5.9
ARI: 5.9
Coleman-Liau: 9.0
Flesch Index: 79.3
Fog Index: 9.0
Lix: 32.2 = below school year 5
SMOG-Grading: 8.8
Notice that both sets score well below the level of Slashdot’s contributors.
And the Mac Users?
So do Mac users differ? You bet. Here’s the ratings summary based on about 3,000 lines of text taken from reader comments hosted by the Macintouch site:
Kincaid: 8.9
ARI: 9.4
Coleman-Liau: 10.0
Flesch Index: 67.8
Fog Index: 12.0
Lix: 40.5 = school year 6
SMOG-Grading: 10.7
Not only were these ratings significantly higher than those given Slashdot’s contributors, and thus better than those given text from the PC sites, but the vocabulary was larger too. Without collapsing words to their root forms, but after removing punctuation (including capitalization) and numbers, the Macintouch stuff had 870 unique words to only 517 for the combined PC sites.
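A vocabulary count like that one can be reproduced with a short standard pipeline: fold case, squeeze every run of non-letters into a newline, then count the unique word forms that remain. The comments.txt file here is a hypothetical stand-in for the collected site text:

```shell
# Hypothetical sample of collected comment text.
printf 'Great point! I agree -- 100%% agree, truly great.\n' > comments.txt

tr 'A-Z' 'a-z' < comments.txt |  # remove capitalization
    tr -cs 'a-z' '\n' |          # punctuation and numbers become newlines
    sort -u |                    # one line per unique word
    grep -c .                    # count them: prints 5
```

The five unique words in this toy sample are “agree,” “great,” “i,” “point,” and “truly”; note that this counts surface forms, not root forms, exactly as described above.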
Overall, the results are pretty clear: Mac users might not actually be smarter than PC users, but they certainly use better English and a larger vocabulary to express more complex thinking.
Paul Murphy, a LinuxInsider columnist, wrote and published The Unix Guide to Defenestration. Murphy is a 20-year veteran of the IT consulting industry, specializing in Unix and Unix-related management issues.