The results were, unsurprisingly, inconclusive; it turns out that spam filters, while pretty good at recognizing the attempts of computers to get humans to pay attention how and to what they shouldn’t, are not very good at recognizing the attempts of humans to get humans (including themselves) to pay attention how and to what they shouldn’t. Nevertheless the “final results of the Bayesian classifier” are worth considering:
Words that suggest an article is bullshit, in order of the strength of indication, and case-sensitive, based on my own opinions of articles I found on the internet: entire, truth, No, upon, You, head, required, sources, widely, doesn’t, John, explanation, needs, step, 11, exactly, North, added, defend, completely, word, faith, willing, mentioned, 7, practice, again, thinks, attempt, multiple, meaning, established, dark.
Words that suggest an article is legit, in order of the strength of indication, and case-sensitive, based on my own opinions of articles I found on the internet: there.
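The classifier described above can be sketched as a tiny naive Bayes word model. This is a minimal illustration, not the author's actual code: the training "corpus," labels, and smoothing choice here are invented placeholders, standing in for the hand-labeled articles the original experiment used.

```python
from collections import Counter
import math

def train(docs):
    """docs: list of (word_list, label) pairs; labels are 'bullshit' or 'legit'."""
    counts = {"bullshit": Counter(), "legit": Counter()}
    for words, label in docs:
        counts[label].update(words)  # case-sensitive, as in the original experiment
    return counts

def strength(word, counts):
    """Log-odds that a word indicates 'bullshit', with crude add-one smoothing."""
    p_b = (counts["bullshit"][word] + 1) / (sum(counts["bullshit"].values()) + 2)
    p_l = (counts["legit"][word] + 1) / (sum(counts["legit"].values()) + 2)
    return math.log(p_b / p_l)

# Toy stand-in corpus; the real experiment labeled whole articles by hand.
docs = [
    (["entire", "truth", "exactly", "faith"], "bullshit"),
    (["there", "data", "there"], "legit"),
]
counts = train(docs)
print(strength("truth", counts) > 0)   # positive log-odds: leans "bullshit"
print(strength("there", counts) < 0)   # negative log-odds: leans "legit"
```

Ranking every word by `strength` and taking the extremes would yield exactly the kind of ordered word lists quoted above.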
In other words, the best easily visible sign that an argument is bad science is that it puts a lot of effort into convincing you that it’s good science: it jabbers on about the “entire” “truth,” about how its “explanation” is “exactly” what its “widely” surveyed “sources” say, how it’s “defend[ing]” against the “dark” of “established” blind “faith” the real “meaning” of it all. Whereas real science doesn’t use rhetorical tropes to convince you; it just tries to get at what’s “there.”
But of course, this doesn’t teach us how to see through rhetoric; it just reminds us of our existing cultural conventions. “We” trust scientists (when they’re doing science, at least), and the scientists know how to see through any rhetoric used to mask bad arguments about their areas of expertise; this is what it means for them to be experts. So an anti-rhetorical ethos develops in the scientific community: don’t wax eloquent about defending the truth about whatever, just talk about whatever, about what’s “there.” Anyone who rejects the scientific community’s claim to authority will tend to reject this ethos. This, not anything about the styles themselves, prevents a great scientific advance from being published in the style of Time Cube, or the disjointed delusions of a madman from being published in the style of a technical paper about encryption.
In fact, how, besides the cultural-rhetorical cues, do we even know that Time Cube should be considered nonsense? Seen from a certain angle, it looks more like an extremely eccentric, yet somewhat coherent, work of poetical philosophy. Perhaps Time Cube has, in fact, succumbed to our scientific-nonsense-filter because it wanted to do so. If anyone were actually trying to evade our scientific-nonsense-filter, it would be extremely easy. How lucky for us that no one tries!
People rarely write scientifically plausible pseudo-science, not because it’s impossible, but because no one has any incentive to do so. For it is written:
If the climate skeptics want to win me over, then the way for them to do so is straightforward: they should ignore me, and try instead to win over the academic climatology community, majorities of chemists and physicists, Nobel laureates, the IPCC, National Academies of Science, etc. with superior research and arguments.
“The academic climatology community”–and “the scientific community” more broadly–names a spirit, an “invisible hand,” a force guiding scientists to do science, and not just pretend to do so, by making “doing science” a necessary byproduct of pursuing one’s own self-interest. Why write up a scientific-sounding argument that’s not ‘real science,’ if it won’t be taken seriously by scientists until the relevant experts in the scientific community evaluate it? Why imitate the style of scientists if they choose to ignore that imitation?
In a real sense we can speak of a “spirit of science” which knows more science than any particular scientist is even capable of comprehending. A physicist does not know that the theory of evolution, say, is true, not really; but he knows that the biologists, rather than the skeptics, are the ones whose arguments should be trusted prima facie, while the skeptics’ arguments should be ignored. The spirit of science has whispered this knowledge in his ear.
The spirit of science is a spirit of censorship. Every pseudo-scientist who has had the doors of academia slammed in his face before the porters even heard his “argument” knows this.
“Censorship,” as we all know, “is evil.” But this is patent nonsense. We believe it because we don’t want other people telling us what to read; we can, we feel, perfectly well discern the good from the bad on our own. But the feeling and the ability are not the same thing, and may well be anticorrelated. How prideful is it to think that we are smart enough to see through the apparent healthfulness of the most appealing poison out there?
Opposition to censorship comes down to intellectual solipsism–the belief that “what I find persuasive” bears a direct relationship to “what is true.” I call this solipsism because it assumes either that what other persons find persuasive is irrelevant, or that they necessarily find persuasive the same things I do. The former makes it difficult to justify listening to other people at all: why not just make up arguments with a random text generator, and evaluate them? The latter makes it difficult to avoid thinking everyone you disagree with is, not just misguided, but evil: if they don’t actually find that argument persuasive (which they can’t, since I can see through it), they must be trying to trick me.
Arguments, like all words, do not come to us from the ether; they come from other people, and evaluating them is an inherently social activity. Reading a book is like accepting a gift: why do it if you have no reason to think it’s not a Trojan horse? As Helen Andrews writes in First Things:
The perfect public sphere in which ideas compete freely until the truth emerges may be real or it may be mythical, but certainly that is not how it works in the individual human brain. I for one do not trust that my mind will arrive at the correct conclusion if I only jam it full of as many ideas as I can manage. Bad books are like bad company—they don’t make error inevitable, but they make it difficult to guard against. When the Index was abrogated in 1966, the assumption was that in the absence of a list of specific forbidden books, individuals would use the same basic rules to make their own judgments about what was prudent for them to read. How many of us can say we have been conscientious in that duty?
The difficulty comes in finding a censorship regime that can actually provide a useful filter. This is an instance of the principal-agent problem: Given a job for which evaluating the quality of the job done requires the same expertise as doing the job, how can we delegate the job to someone else without getting cheated?
For the hard scientists, the problem is difficult, but can be solved through requiring scientists to make empirically verifiable claims, and penalizing them for making false ones, thus giving scientists an incentive to maximize their claims’ veracity. With other topics, this solution does not work; not necessarily because no empirical claims are involved, but because, if they exist at all, they are few enough and far enough between that they cannot be used to steer our collective mind in the right direction–just as evolution cannot produce adaptations if children do not bear sufficient resemblance to their parents. This is especially true for things like literature, which do not contain arguments for us to evaluate at all; and yet we must evaluate, or perish. Poetic difficulty is just a special case of the difficulty of all evaluation.
The principal-agent problem only arises if agent and principal do not share a relationship of trust. For this reason it seems like a plausible solution to read what your friends recommend. But how do your friends decide what to read? Either through their own friends’ recommendations, or through newspapers, blogs, academic syllabi, etc. Just channeling everything through friendship doesn’t solve the problem; it just slows down the rate of propagation for both knowledge and disinformation.
Better than reading what your friends recommend, is reading based on your friends’ recommendations. This is better, first, because the recommendations will themselves contain reasons, which you can evaluate; being told why a book is worth reading is much more valuable than being told to read it. Second, because, since you know your friends, you can evaluate their reasons in light of their character: their weaknesses as well as their strengths.
I take this to be the kernel of truth behind Blaise Pascal’s argument regarding how to approach texts that are difficult to understand:
If one of two persons, who are telling silly stories, uses language with a double meaning, understood in his own circle, while the other uses it with only one meaning, any one not in the secret, who hears them both talk in this manner, will pass upon them the same judgment. But if afterwards, in the rest of their conversation one says angelic things, and the other always dull commonplaces, he will judge that the one spoke in mysteries, and not the other; the one having sufficiently shown that he is incapable of such foolishness, and capable of being mysterious; and the other that he is incapable of mystery, and capable of foolishness. (Pascal, Pensées, #690)
If you think that someone is wise and good, and you don’t understand part of what they say–then maybe you’re the one who’s missing something.
Nevertheless, even this strategy does not solve the problem; and it’s not clear that any strategy can solve the problem–not even adequately, let alone perfectly. For, to put it somewhat mystically, truth does not differ from falsehood in any particular feature, since every feature can be falsified; but only in its participation in the whole of Truth.
This post is something of a follow-up to my post last month about the power of words; and, in particular, to my use of the word “cult” to describe the LessWrong community. I’ve spent a decent (inordinate?) amount of time trawling their site so you don’t have to, as several links in this post demonstrate. The difficulty is that much of what they write has a great deal of value, even while certain aspects (and not only their treatment of religion) I believe to be woefully misguided; how, then, to cite them responsibly? The word “cult” is my attempt at prudent censorship; it communicates that they should be viewed with suspicion, and perhaps avoided (especially if you have heretical tendencies, as do so many of us in the present day), but also that they perhaps have something of value to offer. As Ross Douthat argued recently, perhaps the world needs more cults.
This is, perhaps, why books are so often given as Christmas presents; and why there is something curious about giving as a gift a book you have not yourself read.
Though note that this incentivizes scientists to make boring, almost-certainly-true claims; the system is thus far from maximally efficient. But every large machine wastes much of its energy in the form of heat.