July 19, 2005  ·  Cass Sunstein

Odd: Some people have objected to my little post about Judge Clement, but apparently its substance was right. People were indeed participating in an informational cascade. Unfortunately, I ended up joining that cascade (tentatively). The confident view that the President had chosen Judge Clement, like the confident view that the Chief Justice was about to retire, was clearly a process in which many people were confidently relying on unreliable people, to the point where the number of (confident) people was misleadingly high. That’s a (bad) cascade. With respect to the confirmation hearings, I predict we’ll see at least one other bad cascade in the next two months. Let’s watch for it.

  • http://sethf.com/ Seth Finkelstein

    I suspect the objections may be from a certain frustration with the topic:

    “On two occasions I have been asked [by members of Parliament], “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” – Charles Babbage

    But I see an underlying idea in some evangelism that now, we can put into the “machine” wrong figures, and through WISECROWDS, the right answers *will* come out.

    I would suggest this is a “teachable moment” that, as the saying goes, “Garbage In, Garbage Out”, and usually aggregating a bunch of wrong answers leads to a wrong answer.

    There are indeed some extraction procedures which can find a signal amidst noise. But accurate information can’t be created if it was never there in the first place. Even using a Wiki.

  • http://www.ericd.net ericd

    Information “cascade” != valid.

    How did Judge Roberts graduate from Harvard a conservative? That is quite an achievement!

  • http://www.funender.com/music/enigmapond SWL

Blame the media, in part, as usual, for the cascade. I don’t agree with the President’s decision; then again, I don’t agree with most of his decisions.

  • http://sethf.com/ Seth Finkelstein

    Very good post:


“In other words, there’s strong reason to suspect that this case doesn’t support Lindgren’s more general claims about the superiority of prediction markets vis-a-vis experts; in this case the markets are arguably being manipulated by people with insider knowledge that isn’t available to the experts. The reason that markets are doing better than experts ‘without first-hand knowledge’ is most likely that they’re being used by experts with first hand knowledge to make money from those who don’t have such knowledge. This is a very bad case to test the efficacy (or lack of same) of prediction markets in aggregating dispersed public knowledge into a usable metric; it seems to me rather unlikely that this sort of aggregation is what is in fact happening here.”

  • Siva Vaidhyanathan

    Hey, Cass. I nominated YOU for the Supreme Court. See the July issue of Reason.

    – Siva

  • http://www.technoutopia.blogspot.com WillCurtis

I’m currently doing some research on decentralized estimation, which is really the engineering manifestation of these sorts of information aggregation issues.
I’m finding in some situations that, to avoid irrationally high confidence (technically, improperly small error covariance matrices) in the group estimate, it becomes critical to indicate provenance. In other words, imagine a network of people trying to estimate some value. Each person uses a sensor with a certain accuracy and communicates with neighboring nodes/people in the network. As the information flows across the network, it is possible for a well-connected person’s sensor data to be “double counted”. The solution is to tag the shared information with metadata indicating whom you’ve recently communicated with.
    A real world example would be the publishing of research in an academic journal. The references provided in the bibliography act as provenance metadata, and they prevent an unnaturally strong consensus forming from the results of only one or two papers.
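    The double-counting problem and its provenance fix can be sketched in a few lines. This is only an illustrative toy (the node names, network topology, and the simple averaging rule are my assumptions, not anything from the comment): each node tags its reading with a source id, so a well-connected node’s measurement merges into the pool once no matter how many paths it travels.

    ```python
    # Toy sketch of provenance-tagged information sharing (hypothetical example).
    # Each node holds a dict of {source_id: measurement}; merging by source id
    # means a reading that arrives via several paths is never counted twice.
    from statistics import mean

    class Node:
        def __init__(self, node_id, measurement):
            # Tag the local sensor reading with its origin: the provenance metadata.
            self.beliefs = {node_id: measurement}

        def receive(self, shared_beliefs):
            # Keying by source deduplicates re-broadcast information.
            self.beliefs.update(shared_beliefs)

        def estimate(self):
            return mean(self.beliefs.values())

    # Three nodes measure the same true value 10.0 with different sensor errors.
    a, b, c = Node("a", 9.0), Node("b", 10.0), Node("c", 11.0)

    # "b" is well connected: both "a" and "c" hear from it, then talk to each other.
    a.receive(b.beliefs)
    c.receive(b.beliefs)
    a.receive(c.beliefs)  # b's reading arrives at "a" a second time, but is not re-counted

    print(a.estimate())  # 10.0 — each sensor counted exactly once
    ```

    Without the source tags, naive averaging at the last step would weight b’s reading twice, shrinking the apparent error for no real informational gain, which is exactly the overconfidence the covariance remark describes.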

  • Corey

    “A real world example would be the publishing of research in an academic journal.”

Perhaps, but the problem there is that including certain well-respected AUTHORS in your bibliography can influence the respect given to the CONTENT. But even great authors sometimes produce ill-received works, and those may be the ones built on.

    The electronic network nodes in your model differ from people in that people can actually add analysis as they pass data around. In your model, prominence is related to connectedness only. In the human model, connectedness matters (some people like Lessig or Sunstein have prebuilt audiences), but content also matters.

    I think that information cascades occur when the nodes/people do not have independent ability to judge the content and decide if it is worth passing on. This is almost certainly related to the degree of openness concerning the process being evaluated. Open processes have wider dispersion of expert knowledge.
