August 26, 2004  ·  Richard Posner

…is the name of a 1998 novel by Dan Brown, the author of The Da Vinci Code. Digital Fortress is a cyberthriller about the National Security Agency (NSA), which monitors and intercepts electronic communications worldwide. In the book as in real life, the agency is concerned with encryption technologies that can prevent it from decoding the communications that it intercepts. (One of the triumphs of modern technology is the unbreakable code; it used to be that even the cleverest codes could, with enough time and effort, be decoded.) The agency would like all such technologies to contain a “backdoor” that would enable it and only it to decode all intercepted messages.

The book has a number of unrealistic features (I very much doubt, for example, that the NSA employs hit men), but it flags a genuine problem, which is that privacy is an equivocal good. This statement will shock many people, for whom “privacy,” like “liberty” and “justice,” signifies an unalloyed good. In fact all that “privacy” means, in the case of communications at any rate, is concealment, which obviously can serve bad as well as good purposes; few civil libertarians are so doctrinaire as to deny that there are some situations in which wiretapping of phone conversations is legitimate. So what if telephone or other electronic communications are so effectively encrypted that wiretapping (or wireless tapping) is impossible? It would be another example, analytically symmetrical with that of the use of encryption to protect (and extend) copyright protection, of technology upsetting a balance deliberately struck by the law, in this case between freedom and safety. Hence the case for the back door. The problem is how to control the back door. In the case of conventional, nonencrypted phone conversations, the government has to obtain a warrant to wiretap. The (unspoken) assumption, however, is that evidence of criminal activity can usually be obtained without wiretapping, then used as the basis for applying to a judicial officer for a warrant to obtain further conclusive evidence. But in the case of foreign intelligence surveillance, the assumption is that winnowing an enormous mass of unfiltered communications may be the only way of obtaining evidence of some terrorist or other enemy threat, and if so then it would be dangerous to forbid the NSA to read intercepted communications without a warrant. But if the NSA has unlimited authority to read communications, then no communications are really private.
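The mechanics of such a back door can be made concrete with a toy key-escrow sketch. This is purely illustrative, not any actual NSA or industry scheme; the XOR “cipher” stands in for a real algorithm, and all the names are invented. The idea is that each message is encrypted under a fresh session key, and that session key is stored wrapped twice: once for the recipient, and once for an escrow authority, which can therefore read the message without the recipient’s cooperation.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher'; a stand-in for a real encryption algorithm."""
    return bytes(d ^ k for d, k in zip(data, key))

def encrypt_with_escrow(message: bytes, recipient_key: bytes, escrow_key: bytes):
    # A fresh random session key encrypts the message itself.
    session_key = secrets.token_bytes(len(message))
    ciphertext = xor_bytes(message, session_key)
    # The session key is wrapped twice: once for the recipient,
    # once for the escrow authority -- this second wrap is the "back door".
    wrapped_for_recipient = xor_bytes(session_key, recipient_key[:len(session_key)])
    wrapped_for_escrow = xor_bytes(session_key, escrow_key[:len(session_key)])
    return ciphertext, wrapped_for_recipient, wrapped_for_escrow

def escrow_decrypt(ciphertext: bytes, wrapped_for_escrow: bytes, escrow_key: bytes) -> bytes:
    # The authority recovers the session key on its own, then the message.
    session_key = xor_bytes(wrapped_for_escrow, escrow_key[:len(wrapped_for_escrow)])
    return xor_bytes(ciphertext, session_key)

msg = b"meet at noon"
recipient_key = secrets.token_bytes(64)
escrow_key = secrets.token_bytes(64)
ct, wrapped_recipient, wrapped_escrow = encrypt_with_escrow(msg, recipient_key, escrow_key)
assert escrow_decrypt(ct, wrapped_escrow, escrow_key) == msg
```

The sketch also shows where the policy problem lives: whoever holds `escrow_key` can read everything, so the legal question of who controls the back door collapses into the technical question of who holds that one key.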

My inclination–it is only that; I am not an expert in these matters–would be to let the NSA have its back door. I think that people who worry a lot about invasions of communicative privacy sometimes overlook the fact that communications are never really private. There is always the possibility that the person at the other end of the communication, the person you trust not to disclose the contents of the communication to anyone else, will betray you, or that he will make a copy of the communication and it will come into the hands of someone who wishes you ill. In the case of email, we all know by now that an email message is likely to sit, forever, on several servers and terminals. So communicative privacy is inherently qualified, imperfect, incomplete; and the questions are whether knowledge that your communications may be decoded, scanned, and perhaps stored by the NSA is going to inhibit you, or inflict psychological distress; and the answer to both probably is no.

I don’t doubt that there are potential dangers from allowing government surveillance. Think now of the NSA’s interceptions being filed under the names of the participants in the intercepted communication and placed in a database along with other information about each individual, including for example his commuting patterns gleaned from the E-Z Pass database. Eventually there would be an incredibly detailed dossier on every person in the U.S. The value of such dossiers for preventing terrorism and detecting crime would be immense; but so would be the potential political and psychological consequences if every person knew that the government was in effect tracking his every move.

August 25, 2004  ·  Richard Posner

At last, high-level Administration acknowledgment that global warming is real, and that human activity (mainly the burning of fossil fuels, principally oil, natural gas, and coal, and deforestation in Third World countries) is a principal cause because such activity emits carbon dioxide. (See also Times article.)

Greenhouse gases, such as carbon dioxide, in the atmosphere trap heat reflected from the earth and by doing so maintain a temperate climate. But since the Industrial Revolution and in particular since about 1970, economic and population growth has resulted in greatly increased emissions of carbon dioxide, resulting in greatly increased atmospheric concentrations of the gas (the effect of emissions is largely cumulative, because it takes a long time for carbon dioxide to be removed from the atmosphere, by absorption by the oceans), producing in turn higher global temperatures. As I explain in my forthcoming book Catastrophe: Risk and Response, because the global climate equilibrium is fragile, abrupt global warming is possible, though unlikely, in the near future. It would not be as abrupt as depicted in “The Day After Tomorrow,” but it might be abrupt enough to have catastrophic consequences within a decade or even less, consequences that might include a rise in ocean levels that would inundate most of the world’s coastal areas, where most of the largest cities and much of the world’s population are found.
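The point that emissions are largely cumulative can be illustrated with a minimal stock-flow sketch. The numbers here are rough illustrative assumptions, not measured values: atmospheric concentration is treated as a stock that each year gains current emissions and loses only a small fraction of its excess over the pre-industrial baseline to ocean absorption, so even constant annual emissions keep the concentration climbing for decades.

```python
def co2_trajectory(initial_ppm: float, annual_emissions_ppm: float,
                   absorption_rate: float, years: int) -> list:
    """Yearly concentration path: the stock gains emissions and loses a
    small fraction of its excess over the pre-industrial baseline."""
    baseline = 280.0  # rough pre-industrial concentration, in ppm
    ppm = initial_ppm
    path = [ppm]
    for _ in range(years):
        ppm += annual_emissions_ppm                 # this year's emissions add to the stock
        ppm -= absorption_rate * (ppm - baseline)   # slow removal by the oceans
        path.append(ppm)
    return path

# Illustrative run: constant emissions, 1% of the excess absorbed per year.
path = co2_trajectory(initial_ppm=375.0, annual_emissions_ppm=2.0,
                      absorption_rate=0.01, years=50)
```

Because removal is slow relative to emissions, the concentration in this toy model rises every single year even though emissions never increase; that is what “largely cumulative” means.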

The current global-warming problem is an artifact of technology (though not of the newest technology), which has not only made carbon the basis of most of our energy but has contributed to a great increase in the number and wealth of people, and hence to a great increase in the demand for energy. But technology may bail us out, either by developing feasible, economical substitutes for carbon-based energy sources, or, by advances in nanotechnology (molecular-scale engineering), creating carbon-dioxide-devouring nanomachines to cleanse carbon dioxide out of the atmosphere. Unfortunately, as is so generally the case, technology has a downside; for example, concern has been expressed that the weaponization of nanotechnology could further destabilize the geopolitical system, and even that nanomachines might accidentally be created that were incredibly voracious self-replicators–superweeds that might devour all organic matter on the planet. See Nanotechnology.

August 25, 2004  ·  Richard Posner

First example: how technology will bring us to the world of The Matrix.

The Matrix is a video online world that is so realistic that if one’s “avatar” (one’s electronic self, the player in the video world) is killed, one dies of shock. The current video online worlds, in which you create and manipulate your avatar by means of a computer screen and a mouse or joystick, are insufficiently realistic to cause many deaths; I know of only one, described in a great article by James Meek: ‘In October 2002 a 24-year-old man, Kim Kyung-jae, died of a DVT-like illness after playing an online game, Mu, virtually nonstop for three and a half days. “I told him not to spend so much time on the internet,” his mother told the BBC. “He just said, ‘Yes, Mum’, but kept on playing.” (According to Lance Stites of NCsoft the company has taken steps to encourage players to keep the distinction between real and virtual worlds clear. Now, messages appear periodically on screen reminding subscribers to “stretch your legs and see the sunshine once in a while”.)’ But already there is a video game in which you wear a headset that enables you to manipulate your avatar by brain waves. More matrix-like still is a technology under development whereby chips implanted in the brains of paralyzed people will enable them to operate computers by thought alone: they ‘will have a cable sticking out of their heads to connect them to computers, making them look something like characters in “The Matrix.”’ See Implants.

Even in the current, primitive stage of online video world technology, literally millions of people are participating, many obsessively; the use of real money to purchase game money with which to buy equipment, clothing, and other assets in the video world is already a big business. A few years hence, people will be interacting in the video world by brainwaves alone, and in that “no hands” context they may forget who and where they are. The social consequences could be immense, and the political as well if government obtains control of the chips implanted in people’s brains to enable them to play and of the signals communicated to those chips. It will take many years to create a video online world as complex as that of The Matrix, where millions of avatars interact in a stunningly realistic simulation of a 20th century big city. But short of that, people will find it increasingly difficult to distinguish between the actual and virtual worlds in which they participate.

The law is slowly beginning to notice the video online world phenomenon; there is even a recent case in China in which an online player sued the video game company for allowing a hacker to steal the player’s virtual possessions!

The big question–what, if any, social controls should be placed on the evolution of video online worlds–is baffling and, as far as I am aware, has attracted little attention.

August 25, 2004  ·  Richard Posner

As Larry Lessig has long and presciently emphasized, law and technology are substitute methods of protecting an interest. You can sue a trespasser; but it may be cheaper just to put up a strong fence. We used to think that if the technological substitute was adequate, it would be superior to the legal; and in fact the law often imposes self-help requirements to discourage lawsuits. And we never (or rarely) used to think that technology could upset a balance struck by the law; we thought law could cope with any technological changes. The dizzying advances of modern technology have destroyed these assumptions.

File sharing is the obvious example. On the one hand, encryption technology and Internet distribution (that is, selling directly to the consumer rather than through a dealer, enabling the seller to impose by contract additional restrictions on the use of his product beyond those imposed by copyright law) may progress to a point at which the fair use privilege of copyright law is extinguished (and so Lydia Loren has made the interesting suggestion that it should be presumptively deemed copyright misuse for a copyright holder to impose by contract (or, presumably, by encryption) restrictions over and above those authorized by copyright law). It would be like having a fence and gate so secure that the fire department couldn’t enter one’s premises to fight a fire; in such a case the fence would be giving the homeowner greater rights than trespass law, which would permit such entry.

On the other hand, Grokster-like services greatly reduce the cost of infringing copyright. The copyright owners retain (even if the Ninth Circuit’s Grokster decision stands) their right to sue the direct infringers, i.e., the people downloading recordings of copyrighted songs, without a license, into their computers, but this imposes litigation costs that the copyright owners did not have to bear when unauthorized copying of recordings was sufficiently costly to discourage most infringers without having to threaten them with a lawsuit.

We are in the presence of an arms race between encryption and copying technologies; if the latter prevails in this competition, copyright law will be ousted from one of its domains.

With all due respect for the interests of the recording industry and the file sharers, I regard this particular interaction of law and technology as relatively trivial in its overall social consequences. I am much more concerned about the ability, or rather inability, of the law and other policy instruments to cope with the issues thrown up by the relentless progress of science and technology. I’ll give examples in subsequent postings.

August 25, 2004  ·  Richard Posner

A further thought, prompted in part by the release yesterday of the Schlesinger panel’s report of its investigation of the Abu Ghraib scandal.

Under the present system of intelligence, the CIA, although it is not the largest intelligence agency, is the leading agency, and its director is understood to be the government’s senior intelligence officer; he briefs the President, and is responsible for keeping the President and the other top officials informed. If a National Intelligence Director is layered on top of the CIA, its director, and the other agencies, as recommended by the 9/11 Commission, and if in addition, as suggested by Senator Roberts, the CIA is broken up into three parts, who will brief the President? The NID will be too busy supervising 18 agencies, which will mean worrying about spy-satellite launchings, creating “back doors” to encrypted Internet communications, monitoring the Coast Guard’s intelligence activities, etc., etc. So will the responsibility for keeping the President informed devolve on the head of one of the CIA fragments? But won’t he be too low-level an official to be able to marshal all the intelligence resources of government?

The basic problem with the recommendations is the attempt to solve managerial problems with structural solutions. This was recognized by the Schlesinger panel. Its report explains that the Abu Ghraib interrogation fiasco was the result of specific mistakes in planning, analysis, training, deployment, supervision, and personnel, made by specific individuals up and down the chain of command, who are named. The mistakes were not the product of a deficient structure. For the most part, this is likewise the case with respect to the failure to detect Al Qaeda’s 9/11 plot and respond to the attacks. Inadequate screening of visa applicants, deficiencies in building-evacuation plans, misunderstood rules regarding sharing of intelligence between criminal investigators and intelligence officers–the list of remediable management failures goes on and on, but the closest to a structural failure that I discern is the lodging of domestic terrorist surveillance in the FBI, which seems to have a deep-seated prosecutorial mindset that is inconsistent with effective preventive surveillance of potential terrorists.

August 25, 2004  ·  Richard Posner

Doug Lichtman, a very able IP professor at the University of Chicago Law School, took sharp issue with my brief note on patent fair use, emailing me that my “quick reference to patent fair use…is problematic for the simple reason that, often, the key market for research tools is to sell those tools to other researchers. If a researcher’s use of a patented research tool is fair use, that would significantly degrade the incentive to create those research tools in the first place. Moreover, even if your approach works, it is in sharp conflict with the Bayh-Dole instinct that society might very well be better off in a world where academic researchers patent their work. As you know, that legislation was passed in response to evidence that university breakthroughs were sitting on the shelves both because (a) they could not be owned exclusively under old NIH rules; and (b) universities had too little incentive to bring their work to the attention of industry. Overall, patent fair use and the research exception are an important topic, but your short sentence seems to unfairly duck the many hard issues.”

These are difficult issues, to which I can’t do full justice here. Lichtman and I differ on the importance of patents as motivators of research. The effects of patents on innovation are extremely complex, an important consideration being that when a field becomes blanketed by patents, as is happening with research tools, inventors are forced into what can be costly and protracted negotiations for licenses in order to be able to use and build on previous innovations. So we have to consider carefully what alternatives there are to patents for motivating innovation in pharmaceutical and other research. It turns out that there are many alternatives, including government grants, university grants (universities have their own resources–Harvard has an endowment of $20 billion), the commercial advantages of a head start, and trademarks.

And are we really better off in a world in which academic researchers can patent their work? Maybe so, but a countervailing factor is that the patentability of academic research deflects academic researchers from basic to applied research, which may have long-run consequences for innovation that are adverse.

August 24, 2004  ·  Richard Posner

Here is a very worrisome problem concerning fair use. It has to do with a dichotomy long noted by legal thinkers between the law on the books and the law in action. They often diverge. And fair use is an example of this divergence. As I said in an earlier posting, fair use often benefits rather than harms the copyright holder. However, it doesn’t always; moreover, even if a copyright holder is not going to lose, and is even going to gain, sales from a degree of unlicensed copying, if he thinks he can extract a license fee, he’ll want to claim that the copying is not fair use; and finally, because the doctrine has vague contours, copyright owners are inclined to interpret it very narrowly, lest it expand by increments.

The result is a systematic overclaiming of copyright, resulting in a misunderstanding of copyright’s breadth. Look at the copyright page in virtually any book, or the copyright notice at the beginning of a DVD or VHS film recording. The notice will almost always state that no part of the work can be reproduced without the publisher’s (or movie studio’s) permission. This is a flat denial of fair use. The reader or viewer who thumbs his nose at the copyright notice risks receiving a threatening letter from the copyright owner. He doesn’t know whether he will be sued, and because the fair use doctrine is vague, he may not be altogether confident about the outcome of the suit.

The would-be fair user is likely to be an author, movie director, etc., and he will find that his publisher or studio is a strict copyright policeman. That is, since a publisher worries about expansive fair uses of the books he publishes, he doesn’t want to encourage such uses by permitting his own authors to copy from other publishers’ works. So you have a whole “law in action” invented by publishers, including ridiculous rules such as that any quotation of more than two lines of a poem requires a copyright license.

Here’s a reductio ad absurdum of folding in the face of copyright overclaiming: “While interviewing students for a documentary about inner-city schools, a filmmaker accidentally captures a television playing in the background, in which you can just make out three seconds of an episode of ‘The Little Rascals.’ He can’t include the interview in his film unless he gets permission from the copyright holder to use the three seconds of TV footage. After dozens of phone calls to The Hal Roach Studios, he is passed along to a company lawyer who tells him that he can include the fleeting glimpse of Alfalfa in his nonprofit film, but only if he’s willing to pay $25,000. He can’t, and so he cuts the entire scene.” Jeffrey Rosen, “Mouse Trap: Disney’s Copyright Conquest,” New Republic, Oct. 28, 2002, p. 12 (emphasis added). Clearly, copying the three-second “fleeting glimpse” was fair use, but who knows how the studio would have responded if the filmmaker hadn’t cut the scene?

What to do about such abuses of copyright? One possibility, which I raised hypothetically in my opinion in WIREdata, pp. 11-12, is to deem copyright overclaiming a form of copyright misuse, which could result in forfeiture of the copyright. For a fuller discussion, see the very interesting paper by Kathryn Judge, not available online but obtainable by emailing her at

The underlying problems are two: the asymmetry in stakes in disputes between owners of valuable copyrights and people who are either public domain publishers or don’t anticipate that the works they’re creating will have great commercial value; and the vagueness of the fair-use doctrine. I have suggested that this vagueness can be reduced by a categorical approach, under which types of use are given essentially blanket protection from claims of copyright infringement. If only one could define “glimpse”!

August 24, 2004  ·  Richard Posner

Many great comments on my fair use posts; can’t discuss them all, but let me make a few points in response:

With regard to the Patry-Posner proposal for creating a new fair-use defense for unauthorized copying of old copyrighted works if the copier was unable with reasonable effort to discover the name and address of the current holder of the copyright, several commenters point out that one of the objections to the pre-1976 system, under which failure to renew forfeited the copyright, was that people often just forgot to renew or botched the renewal application. No doubt there were such unfortunate incidents. But people are careful with property that they think valuable; failure to renew, even if inadvertent, is pretty good evidence that the copyright had little remaining value.

Another commenter asked, what’s to prevent someone from registering copyright on a private registry without actually owning it, in order to extract a license fee? That’s an excellent question. The conduct would be fraudulent, and should be punishable–severely. Another problem raised by the commenter is that a copyright owner may not know that he is one–he may be the remote heir of a forgotten writer. But that means he’s not likely to benefit from his ownership. Better that his copyright be forfeited than that the work remain in a limbo, where no one can use or copy it because no one can find the owner. In the law of physical property, such a work would rightly be regarded as abandoned, and so should intellectual property, in similar circumstances.

But what all this means is that our proposal is likely to propel into the public domain the works new creative artists, publishers, etc. are least likely to want to copy! That’s a fair criticism, but, in the face of Eldred, I don’t see any way to meet it. Just to be clear, although I think there is a case to be made for allowing continued propertization of valuable copyrights indefinitely, I do think that on balance the Sonny Bono Act is unsound.

On the broader issue of the scope of fair use, a commenter asked why the law should distinguish between parodies and satires. The fair-use defense is broadly available to parodies, but, in general, not to satires. What’s the difference between these terms and should the law recognize it? A parody is a work of criticism or ridicule. A satire is a humorous version of a work that doesn’t criticize it but may use it as a vehicle for criticism of something else. (This is not the only possible definition of the word, but it’s the definition that points up the legally relevant difference.) A parody may destroy the market for the original work, but it does so by criticism rather than by offering itself as a substitute; and obviously copyright law shouldn’t be used to stifle criticism. A satire on the other hand trades on the popularity of the original and may indeed be a substitute. An example is the movie “Abbott and Costello Meet Frankenstein.” This is a humorous version of three earlier horror movies, “Dracula,” “Frankenstein,” and “The Wolf Man.” The movie is not critical; and it offers itself as a substitute for people who want three in one, plus laughs. I had occasion recently to watch the original (1931) “Dracula,” which is the version satirized in “Abbott and Costello Meet Frankenstein.” I was disappointed; it seemed to me the satire had stolen the thunder of the original movie. Another great vampire comedy is “Love at First Bite,” and, again, it is not criticizing any earlier vampire movies, but merely offering a humorous version. Satires, in short, are classic derivative works, which belong to the owner of the copyrighted original; parodies are derivative works too, but are protected by a critic’s privilege that is a part of the doctrine of fair use. For completeness, I note that no one can copyright the idea of the vampire (ideas are not copyrightable; only expression is), but most of the vampire movies incorporate specific expressive features of “Dracula.”

One commenter asked for an example of fair-use counterparts in patent law. Perhaps the clearest is the experimental-use exception, but there are others. For example, a generic drug manufacturer is permitted to use the patented drug to demonstrate that its generic equivalent is indeed therapeutically equivalent (the “testing” exception created by the Hatch-Waxman Act). More broadly, an inventor can use the information in the patent to try to invent around the patent. And Landes and I advocate an expansion of the patent fair-use principle to allow scientists to use patented research tools (such as the oncomouse) without license–provided the scientists aren’t allowed to use the tools to produce their own patented products!

August 24, 2004  ·  Richard Posner

Many excellent comments on my posting. I can’t respond to all of them, but I do want to respond to two of them.

One commenter said (I’m paraphrasing): why would breaking up the CIA be a big deal? It accounts for only 12 percent of the national intelligence budget. What that overlooks is that high-tech intelligence agencies, like the NSA (surveillance of communications worldwide) and the NRO (develops and launches spy satellites), are very expensive because they are capital-intensive as well as requiring substantial staffs, but much of their intelligence output is input into the analytical and operational divisions of the CIA, the FBI’s counterterrorist division, and the State Department’s Bureau of Intelligence and Research. It is important not to disrupt those analytical and operational activities.

It’s also important to recognize the importance of the phenomenon that economists refer to as “path dependence”: where you end up may depend on where you started from, rather than on optimal system design as an original matter. If we were starting afresh, we might well configure the intelligence agencies differently. But imagine the transition costs involved in a from-the-ground-up reorganization of our 15 intelligence agencies.

The second comment I want to respond to may seem unrelated to the first, yet turns out to be closely related. This commenter takes issue with a statement that I once made to the effect that I thought the Supreme Court had made the correct decision in the Korematsu case, when it refused to invalidate an army order, approved by President Roosevelt (and by Earl Warren, who at the time was the governor of California), removing persons of Japanese extraction from the west coast in 1942, shortly after Pearl Harbor. In hindsight, it is apparent that the order was erroneous–that the Japanese-Americans did not pose a threat to the nation and that the order was influenced by racism. But the wisdom of hindsight is treacherous. In March of 1942 when the order was issued, just three months after Pearl Harbor, there was not only fear that Japan would attack the continental United States, but also a need to demonstrate resoluteness in a war for which the nation was not prepared.

The wisdom of hindsight infected the 9/11 Commission’s report and the reaction to it by Senator Roberts and others. Hindsight is omniscient. In hindsight we know that Al Qaeda planned to attack the United States by infiltrating its operatives to learn to fly commercial aircraft and take over and crash those aircraft into buildings. The natural reaction is, since we know it now, why didn’t we know it then? We must have been asleep at the switch, and so we have to revamp our intelligence structure from the ground up. There are two non sequiturs here. First, that if you’re surprised by something, it shows you were culpable. Second, that if there is a system failure, the solution is to change the table of organization.

August 23, 2004  ·  Richard Posner

Enough for the moment on fair use; I’ll get back to that.

I’m interested in the report of the 9/11 Commission on the intelligence failures that led up to the 9/11 attack. I was asked to do a book review of it by the New York Times, and I agreed (the review will appear in next Sunday’s New York Times book review section, but as the Sunday Times Book Review is published the preceding Monday, my review was actually published today) because of my interest in how the nation should be responding to catastrophic risks (this turns out to be, to a considerable extent, a law and science issue). My book Catastrophe: Risk and Response will be published in November by Oxford University Press, and besides taking the opportunity created by my guest blogging to plug the book shamelessly, I am going to be discussing some of the issues raised in it.

One is how to defend against terrorism. Although the 9/11 Commission’s report is a good read, and has other virtues as well, one of its greatest weaknesses is its failure to address, other than in passing, terrorist risks that are even greater than that of another 9/11: in particular the risks of bioterrorism, nuclear terrorism, and cyberterrorism. The Commission’s recommendations are concerned essentially with preventing a more or less exact repetition of the 9/11 attacks, which are in any event the least likely form of a future terrorist attack, since surprise has been lost. We give our adversaries little credit if we suppose that the only attack they can launch is the one we’ve anticipated. They didn’t make that mistake on 9/11; why should they now?

The Commission’s report, because of its timing in relation to the election and because of the unusual promotional (and self-promotional) efforts of the Commission’s members, has attracted enormous attention, particularly with respect to its marquee recommendation (as one commentator put it) for the creation of a new position of National Intelligence Director.

With all due respect for Senator Roberts, his proposal, at least as described by the New York Times, seems to me to highlight the problems with the Commission’s report rather than to solve them. He proposes to break the CIA into three parts, and also to create a new agency from within the DIA (Defense Intelligence Agency), and to place all the intelligence agencies under the direction of the new NID (National Intelligence Director). It is unclear what he wants to do with the uniformed intelligence services (Army, Navy, etc.), so it’s unclear just how many intelligence agencies he envisages the NID directing, but it is at least 15 and could go as high as 18.

The agencies are highly diverse. One for example designs and launches spy satellites; another analyzes intelligence for the State Department; another does satellite mapping; another is the FBI’s counterterrorist division; and so on and on. No one seems to have considered whether it makes good management sense to place all these agencies under the direction of a single official. There is some optimum span of control, which the proposal seems to exceed. It would be highly unusual in any organization for 18 divisions to report directly to the president, especially if they were as diverse as our intelligence agencies. Probably the agencies should be grouped. Well, they are grouped; most of them are part of the Defense Department. The proposal would spin them off, then put them all under the NID. Sounds pretty dubious.

There is also a point that readers of Larry’s blog should particularly appreciate: the dangers of centralization. Of course to have multiple overlapping intelligence agencies creates a degree of chaos (and there are limits, which breaking up the CIA might well exceed); but it also creates competition among organizations having different organizational cultures, personnel policies, traditions, methods, etc. Were the agencies to be welded into a single organization, the inevitable tendency would be the substitution of uniformity for heterogeneity and centralized control for a competitive “market” in which rival agencies strive to get their views accepted by the President. Would there be a net improvement? Neither the 9/11 Commission, with its less ambitious project of centralization, nor Senator Roberts, with his more ambitious one, has explored this vital question in depth, at least so far as an outsider is able to judge.

Against all this it can be argued that we have to do something, because we were surprised on 9/11, so we must have been doing something wrong. Not necessarily. Yesterday my wife and I happened to attend a woman’s surprise 60th birthday party, thrown by her husband. He had thrown a surprise party for her on her 50th birthday. Nevertheless, she was completely surprised–indeed, stunned–by the party yesterday.

It is easy to surprise people, including entire nations.