Discussion about this post

Cody Kommers:

2] When you judge your model in terms of its simplicity, you will tend to study things which lend themselves to simple explanations. It’s not that mathematical explanations are overly simple—perhaps theoretical elegance is the preferred term here—but that this approach can only investigate phenomena that hold together well when you reduce them to their essence. That is true of the kinds of things studied by computational cognitive scientists: they can point to the equation and say, “Here, at core, is what is happening.”

Novels, like experience itself, don’t work that way. They don’t hold together well when you reduce them to a bare-bones plot; their essence dissipates. Likewise, there is no fundamental computational model that can distill experience into its primordial form. Unlike computational models, novels are not their essence. That’s why the first thing you learn in high school English is not to write “plot summary.” If the summary were all that was needed to get the author’s point across, the author would’ve just written that to begin with.

Some aspects of cognition may lend themselves to the reductionism of computational models. But humans, in their holistic experience, are not simple.

Occam’s razor only takes you so far.

Cody Kommers:

1] And here’s another thing to consider… Cognitive scientists don’t actually evaluate models solely on whether they are empirically validated. Most models languish in scientific obscurity. The p value may be close to zero, but so is the citation count.

Rather, only a small subset of the models that go through the rigors of experimental verification are ultimately deemed useful by the larger scientific community. These are the ones that get cited.

So besides simple popularity metrics such as citations, how do cognitive scientists determine whether a model is any good? There’s a simple answer: interpretation. Someone with the relevant expertise—someone who has thought critically about other models that attempt to do the same thing—must look closely at the model and determine what works and what’s lacking. Then other people with similar expertise must debate whether that evaluation is correct.

I think it’s somewhat provocative that these are essentially no different from the mechanisms we have for evaluating novels. There are simple popularity metrics such as book sales, which probably tell us at least something about whether the story resonates with readers, whether they find the model useful in one capacity or another. Then, beyond that, we have to rely on the in-depth consideration of a community of thoughtful, well-trained readers.
