March 22, 2007  ·  Lessig

Another Philadelphia court has struck down another effort by Congress to regulate “harmful to minors” speech. (ACLU v. Gonzales). No surprise. Though it has taken almost a decade, it is the right answer given the flaws in the statute.

The core of the court’s rationale was the effectiveness of filters. We should remember the ACLU’s own warnings about a world filled with private filters. They were right then; the warnings are more valid now.

As it happens, I have just completed the third of my legislative recommendations to Congress. As it happens, it is about regulating “harmful to minors” material. My friends won’t like it. My not-friends don’t like me. But here it is anyway. You can download it here. Or you can watch it on Google Video below:

  • Seth Finkelstein

    “We should remember the ACLU’s own warnings about a world filled with private filters.”

    Yes indeed – but the Faustian bargain continues :-(.

  • Michael Leuchtenburg

    While your proposal, indeed, does not imply government *restriction* of speech – that is, it doesn’t prevent people from saying certain things – it does present a governmental *requirement* of speech – that is, it requires people to say certain things. Specifically, it requires people to say, whenever they make a post, whether it might be considered “harmful to minors”. I think that this is a significant constraint on speech, even if you don’t consider it to be a restraint.

    I also think that it presents significant technical challenges in the context of arenas such as forums and other sites wherein user-contributed content is the norm. Many of these sites do not allow users to enter arbitrary HTML or, indeed, any HTML. While the need to get support for the tag into browsers puts a requirement on a small number of software authors, the requirement for support in every piece of software allowing user-contributed content is much broader. Certainly not impossible to overcome, but again, it presents a requirement for certain speech on the part of programmers.
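
    The tag-support problem raised here can be sidestepped without letting users write raw HTML. A minimal sketch of one approach, assuming a BBCode-style marker (the `[h2m]` syntax and function name are invented for illustration, not from any real forum package):

```python
import html

def render_post(text: str) -> str:
    """Escape all user-supplied HTML, then translate [h2m] markers into tags."""
    escaped = html.escape(text)
    # Only the site's own converter emits the real tag, so users never
    # need (or get) the ability to enter arbitrary HTML.
    escaped = escaped.replace("[h2m]", "<h2m>")
    escaped = escaped.replace("[/h2m]", "</h2m>")
    return escaped
```

    Under this scheme the burden on forum software is one extra substitution pass, though the burden on contributors to decide what to mark remains.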

    This also requires content contributors on such sites to consider, for every comment they make, whether it might be considered “harmful to minors”. Currently, I go through most days without ever thinking about whether such-and-such thing I am saying is harmful to minors. The sites which I frequent are generally used primarily by adults, so I simply don’t worry about it. If I had to consider this with every post I made, I would make fewer posts. In your talk, you said this was not an undue burden. I disagree. This need to always act as if children might be listening, and decide which parts of my speech may or may not be heard by children, is an undue burden.

    You might argue that, well, some sites will simply decide to mark all content as potentially harmful to minors. And still others might decide that, rather than provide access to the h2m tag, they would simply disallow such content. This results in the same “internet ghetto” possibilities implied by previous marking schemes.

    While allowing content to be marked in a more granular way than on the page or site level is an improvement, it still has several negative results which you don’t mention in your talk. Instead, you present an overly cheery view of the world under your proposed regulatory scheme. As I’m sure you’re aware, ignoring the possible criticisms of the scheme does not make them cease to be valid. I wonder why you chose to do so in your presentation.

  • Logical Extremes

    Conceptually, it sounds quite reasonable. It’s really kind of like the visible NSFW tag in common use. The market solution today (private censorship) isn’t working, and can be compared to secret government laws, neither of which we should want as a free society.

    Some web sites will move to less stringent locales, circumventing the intent and taking away US business. As a practical matter, blocking all IP addresses of non-certified countries would block much beneficial speech.

    The proposed law would also certainly need some enforcement teeth to have any impact.

    I’m curious to see a response to Michael Leuchtenburg’s undue burden issue. It’s no problem for large sites who can make a call one way or another, or who can police their contributed content. But another matter for more open, collaborative-content sites.

  • Brent Royal-Gordon

    Remember, this is “harmful to minors” in the legal sense. I would assume that doesn’t include every swear word or reference to a sexual act.

    As for message boards and YouTube, a checkbox for the entire post would do fine. The law would probably need to include a safe harbor provision for providers who include such a checkbox and make sure it generates whatever code is needed; the liability should then fall on the user who generated the content.

    Honestly, I’m not sure how necessary making it a law actually is. The pornography industry claims that it’s in their interest to keep kids out, so if the HTML standards include such a simple mechanism, they have no excuse not to use it.

    And finally, from a technical standpoint, there’s no need for a new tag. All linking and including tags (like img and a) have a rel="" attribute which is intended to describe the current page’s relationship to the linked content. (For example, one of my sites has all links to the outside marked with rel="external"; a bit of JavaScript then runs through the page and marks them to be opened in a new window.) The field can include any attribute of the linked content, so you could easily include an “adult” attribute (for example) in it. Perhaps HTML could be expanded to allow rel="" on any tag.
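
    As a rough sketch of how a filter might consume such a rel token, using only the standard-library HTML parser (the "adult" token is the hypothetical attribute value from the comment above, not part of any HTML standard):

```python
from html.parser import HTMLParser

class RelScanner(HTMLParser):
    """Collect tags whose rel attribute carries a hypothetical 'adult' token."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        # rel is a space-separated list of tokens, so split before matching.
        rel = dict(attrs).get("rel", "")
        if "adult" in rel.split():
            self.flagged.append(tag)

scanner = RelScanner()
scanner.feed('<a rel="external adult" href="http://example.com">x</a>')
```

    A browser-side filter built this way would need no new element, only agreement on the token.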

  • Lumiere

    Original post is here.

    This proposal requires the content producer to make a judgment call in labeling as to whether content is “harmful to minors”. In this, it differs from requiring the use of ICRA, PICS, RDF or similar standards to describe what the content contains, e.g., “contains nudity”.

    The key difference is in where the inference from “this content contains such-and-such” to “this content is harmful to minors” is made: by the producer when the content is made, or by the consumer when it is viewed. Since consumers differ across time and location and will make different judgment calls, having the consumer make that decision is more robust, allowing the content to be accessible without a change in markup to more people for more time.

    This does not take away from the undue burden issue raised in other comments; if anything, it strengthens that claim, as the list of possible descriptors, worldwide, is huge. Requiring content producers to mark up their content with descriptors for each audience puts an undue burden on the content producer.

    Moreover, any such scheme is doomed to technological obsolescence: eventually technology will advance to the point that producing such content descriptors, and the legal judgment calls based thereon, will be largely automated. Any legal framework for content-based access restrictions should account for and handle this.

  • Federal Farmer

    Why isn’t a law like this vulnerable under the compelled speech cases?

  • poptones

    So what you seem to be saying is “the law” here would only enable the technology and that those who feel they have been unfairly marked “h2m” could sue?

    Isn’t that what we have already? It may not be so easy with images, but every browser already has a means of setting “load images for originating website only” which would pretty much guarantee kids aren’t going to be seeing images when they’re browsing the forums at facebook. Compelling me to wrap everything I do in some sort of ratings marker is onerous beyond reason; there is no reason to involve the law in this sort of personal decision if parents would simply take some responsibility for their kids. Why should I have to risk being dragged into court simply for saying “fuck that” instead of “<h2m>fuck that</h2m>”? Especially when many parents I know would not consider either version “h2m.”

    “The market” need not be so balkanized as it is. As Wikipedia proves, collaborative filters can work; it seems to me the only thing that’s needed is for “the market” to decide it worthwhile to attempt a Wikipedia of net nannies. Provide a standard API for the lists and, within a larger filtering community, “the market” could allow smaller communities of parents to decide amongst themselves who has the best ideas and provides the most “fair” list.
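
    A sketch of what merging such community-maintained lists might look like (all list contents, names, and the vote threshold are made up for illustration):

```python
from collections import Counter

def merged_blocklist(lists, min_votes=2):
    """Keep sites that at least min_votes communities independently listed."""
    votes = Counter(site for lst in lists for site in set(lst))
    return sorted(s for s, n in votes.items() if n >= min_votes)

# Three hypothetical community lists, shared over a common API.
community_a = ["badsite.example", "spam.example"]
community_b = ["badsite.example"]
community_c = ["spam.example", "badsite.example"]
blocked = merged_blocklist([community_a, community_b, community_c])
```

    Parents could then pick which communities' votes to trust, rather than accepting one vendor's secret list.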

    No laws needed… not even new software.

    How’s this for a novel idea: how about we pass a law requiring parents take responsibility for their own kids?

  • Ben Curtis

    I agree with your central premise that doing nothing results in an environment that is potentially worse than doing something, even something that is incomplete. However, it might be worth stating more strongly that your proposed technology solution is illustrative and not prescriptive — the method itself shows how to think about such things, but does not work as it is described. Allow me to explain.

    Once a page is viewed, it can be saved to the hard drive. There, the HTML may be changed in any way that the viewer wishes; since the image or text or hyperlink labeled as h2m is in the HTML, it may be plainly viewed or the h2m tags removed to allow the browser to render it. The browser could filter the content out of the saved version, but a very valid service that converts any HTML page into something for the blind to read, using XSLT, could also be used instead to filter out the h2m tags before the browser even sees them — and certainly blocking such services would be illegal. Plus, simple JavaScript programs called bookmarklets would be readily available that, with a single click, change the HTML in the browser without the user needing to know anything technical — and the only solution would be to either ban JavaScript from browsers that filter, or cause the h2m to receive special handling unlike every other XML tag.
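
    To illustrate how little effort the stripping step takes, a one-function sketch (the `<h2m>` element is the hypothetical tag under discussion):

```python
import re

def unwrap_h2m(page: str) -> str:
    """Delete <h2m>/</h2m> markers, keeping the content they wrapped."""
    return re.sub(r"</?h2m[^>]*>", "", page)

print(unwrap_h2m("<p>ok <h2m><img src='x.jpg'></h2m></p>"))
# <p>ok <img src='x.jpg'></p>
```

    The equivalent logic fits easily into a bookmarklet or a page-rewriting proxy, which is the circumvention risk described above.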

    Hang around a 13 year old talking about his MySpace page, and you’ll see they have the skill to do it.

    A smart response to this from the browser makers would be that all h2m content is available (since I’ve just shown it cannot be made truly unavailable), but hidden behind a click. Moreover, every click will store a copy of the material such that it can be reviewed (by the parent, etc.). This prevents accidental viewing, and since the downside (that the browser is “snitching” on them) is hidden from the viewer, it is less likely to be circumvented. But it would require such amendments to the tag as “alt” text or alternative content to display until the click is made.

    However, this could be a lot of work to implement. I believe that the undue burden test must be applied to the publishers of media content, and not just the consumers. It would be unreasonable for a publisher of, say, Playboy to surround each item in its print magazine with little labels that indicate it is h2m. Although technically feasible and easy to print such, it would be incredibly cumbersome for the editor, layout designer, plate reviewer, and teams of lawyers to consult on whether each item needed it. Instead, the entire magazine is deemed h2m, and then publishing is not unduly burdensome. Similarly, a whole domain could be indicated as h2m, perhaps in the domain name records (a la SPF anti-spam measures), or in some root file with a consistent name (such as robots.txt, favicon.ico, or Google’s sitemaps).
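
    A sketch of the robots.txt-style variant (the one-directive file format and helper names here are invented for illustration; no such standard exists):

```python
def parse_h2m_file(text: str):
    """Return the list of path prefixes a site declares h2m."""
    prefixes = []
    for line in text.splitlines():
        line = line.strip()
        if line.lower().startswith("h2m:"):
            prefixes.append(line.split(":", 1)[1].strip())
    return prefixes

def is_h2m(path: str, prefixes) -> bool:
    """A path is h2m if any declared prefix covers it."""
    return any(path.startswith(p) for p in prefixes)

rules = parse_h2m_file("H2M: /\n")   # "/" marks the whole domain h2m
```

    One fetch per domain, rather than markup on every element, is what keeps the publisher's burden closer to the magazine-cover model described above.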

    But this makes the point that censorship from the publisher may still be worse than nothing. For example, Wikipedia would never be able to guarantee compliance 100%, and out of self-preservation would need to mark the whole domain as h2m, effectively blocking a substantial source of research for students. This may be a good thing, if you believe the statistics on plagiarism, but in general I would think it not so good for a society whose next generation is learning that knowledge is something you look up, and then do something interesting with. But then you wind up with Wikipedia being right next to in the cyber-ghetto.

    My fear is that once compliance is mandated, then it is a small step (once legislators see a low level of compliance) to dramatically increase the consequences of non-compliance. Non-profits could not withstand a single accidental slip-up, and would over-comply. We see this with CAN-SPAM compliance now, where legitimate emailers find it nearly impossible to run an opt-in email newsletter due to email service over-reaction to every spam complaint. This year, a client of mine has paid $200 in fines for spam complaints made by people who do not remember giving their business card to my client, even though she has the physical card.

    It is a difficult subject, with no clear answer. Anyone can pick apart a proposal, so I applaud your efforts. I think you would find it more viable if you build in safeguards for the publishers as well — any legal tool is subject to abuse, and good laws will limit themselves to solving the problem without becoming a new one. For example, a publisher that follows certain rules (e.g., has a conspicuous “contact us” page or email address, or publicly labels the whole site “Intended For Adults”) might have 10 days to suitably mark up or remove content that is complained about in some official manner.

  • Gavin Baker

    1. If a large part of the concern over censorware is with secret censored lists, then why not mandate that censorware disclose the list of banned sites? This would not be the first government action in this regard: one of the past DMCA exemptions granted was for research on censorware blacklists. It seems reasonable to require disclosure of the blacklists, at least to a. the owners of the banned sites and b. the customers of the censorware, if not requiring disclosure to the general public. There is perhaps a political feasibility argument here but I think it could be won.

    2. If a large part of the concern over national regulation is that other countries don’t participate, wouldn’t this be an appropriate place to look to an international treaty? It could be problematic if e.g. the Chinese start requesting that more and more content be tagged to allow filtering, but if this could be avoided — if the discussion were limited to pornography alone — then one might be able to make some headway here.

    3. What about Internet technologies other than the Web? How do we tag files on an FTP server or BitTorrent? Will we mandate that the filetype specs be rewritten to include a bit for “h2m”?

    4. Is “h2m” just pornography, or is it something more? How can this be defined statutorily?

    5. What happens when someone doesn’t tag content as “h2m”? Is this a civil or a criminal offense? What are the penalties? Most importantly, how does a publisher defend herself when arguing that her content is not, in fact, h2m? If the Parents’ Council starts sending out h2m violation notices, who will be willing to stand up to them in court, at great potential expense to the publisher? Won’t it be easier to settle by tagging everything h2m, thereby putting us in a world where the Net is overcensored, just as in the reign of censorware?

    6. If Uncle Sam is to mandate that pornography is to be tagged, why not require more tagging? After all, it’s not a mandate that the content be censored, only that it be tagged to ease in private censoring. You know, a public-private partnership to make the Web safe for children.

    7. Why is government action necessary here? I don’t think the video makes a convincing case. This can be done in the absence of a government mandate. Publishers can tag their own content (some already do), develop standard tags, and browsers can choose to implement filters that read these tags. You may not achieve 100% compliance, but it might make enough of a dent, and I’m not convinced that government action could ever achieve 100% compliance (or close to that).

  • Gavin Baker

    To clarify, my intent is not to “pick apart” the proposal so as to shoot it down, but — this is an incredibly complex subject. It warrants a lot more discussion than 15 minutes of slides, so I just want to continue the conversation.

  • nedu

    Professor Lessig,

    It sounds like you have an extremely fascinating idea. As a first practical step, you should lobby Congress to direct NIST to study adopting RDF as a FIPS.

  • Crosbie Fitch

    I’m with Michael Leuchtenburg.

    (At first I thought I was still reading Seth Finkelstein – and was surprised I wasn’t.)

    A minor’s brain has evolved over billions of years to voraciously seek out and understand everything concerning its environment, precisely to maximise its own survival – and people insist on misrepresenting the issue as one of harm prevention.

    Why not be honest about it and call it ‘destabilising to parental indoctrination and instillation of traditional family values’?

    The Internet insubordinates those used to enjoying the control of information – and this includes those used to controlling the flow of information to their children.

    The Web is unmoderated. If you wish to preserve your children’s ignorance for as long as possible, then supervise their use at all times – or simply prevent their access entirely.

    It’s no longer a case of cutting out the pages from the reproductive system sections of encyclopaedias and letting kids see only the pages on weapons and war. Now, Pandora’s box has been opened and copies of those pages are legion.

    The lid cannot be closed – unless, you shut down the Internet…

  • Niels Elgaard Larsen

    You were right. I do not like it.

    I do not live in the US. But I just do not get why parents would let the state determine what is harmful to their children. Some parents think that teenagers seeing naked bodies is OK but that violence, religious fanaticism, and anorexic models in commercials are bad.

    And I do not believe that a state controlled filter would be voluntary for long. Libraries, schools, etc would somehow be forced to use them.

    You would need some heavyhanded control for this to work.

    Live CDs (Knoppix, Ubuntu) would have to be marked or regulated; otherwise teenagers would just download a live CD and bypass the censorship.
    Even if Knoppix were marked H2M, most teenagers would soon have a CD in their pocket from older friends, BitTorrent, FTP, etc.
    Trusted Computing and DRM would be the way to stop that.

    Parents would have to stop their kids from using Free Software or they would just fix their software.

    And what would you do about non HTTP traffic?

  • Crosbie Fitch

    There is a useful thought experiment to evaluate this and other similar proposals.

    Imagine two Internets:
    A) Unadulterated
    Unregulated, unfiltered, prohibited from access by, or supply to, minors.

    B) Family-safe
    All publishers must submit themselves to certification before they obtain permission to publish. All content must be guaranteed family-safe and adhere to strict guidelines. All user submitted content must be vetted before publication. No anonymous use is permitted of this web. Any publicly accessible web terminals must obtain ID from all users. Severe penalties are applied according to any potential harm caused due to negligence, contempt or malice.

    If a family-safe Internet is sufficiently attractive to the market (parents & their children) and thus commercially viable then it will come.

    Technically it’s a doddle. Simply create a new protocol ‘httpc’ which like https is subject to a strictly controlled regime of digital certification. Certified publishers can then easily markup the subset of their content that is family-safe. Moreover, parents can apply for user licenses for the benefit of their children and promise to ensure no unvetted use of their terminals (by wicked uncles, etc.). Similarly for libraries and creches.

    Commercially, it’s a joke.

  • nedu

    Crosbie, there’s no need to imagine

    President Bush signed the Dot Kids Implementation and Efficiency Act of 2002, bringing us all —ta da— – Play, Learn and Surf….

  • J. Cook

    Regarding Niels Elgaard Larsen’s comments:

    1. The state would not determine what’s harmful to children. The content publishers would. If a parent disagrees with a publisher, they can give their children access through several separate mechanisms (including, hopefully, a mature implementation of this in browsers that permits the user to specify sites to block completely and sites to ignore this tag on).

    2. New versions of major live CDs (Knoppix, etc.) probably wouldn’t allow circumvention, as they tend to use major browsers like Firefox or Konqueror, and I’d expect these projects to implement this tag relatively quickly.

    Old versions would allow it, but there’s nothing that can be done about that. There’s always a way around things. Trying to plug all means of circumvention is an egregious and futile waste of time, as demonstrated by DRM and similar technologies.

    The best we can do is provide a sensible and useful system to give parents control over their child’s media intake. The state can’t *be* their parents … we can’t regulate what they place in their CD drive, both because we *can’t* and because doing so would be really, really bad. : )

  • Crosbie Fitch

    Ah, Nedu, thanks.

    I’m glad to see the beginnings of a case study in futility.

    All they need to do is upgrade it into a PKI based protocol as I suggest with ‘httpc’ and we have ourselves a sterile white elephant. Exactly what all the hand-wringers asked for, and completely unused.

    It’s like creating alcohol-free lager for children. They don’t like it (tastes foul), nor do they buy it (expensive and difficult to obtain). The only people who actually drink it are adults desperate for a risk-free substitute.

    So, a family-safe web, if ever created, is destined for use within the coddled confines of the church and Disney – and pretty much no-one else.

  • Jessica Margolin

    Great! I think this is a very interesting approach to this issue. My questions:

    (1) International: why can’t international sites do the same, with a metatag on a site-by-site basis saying whether they’re compliant as well as the h2m (or whatever) tag? Is there an enforceability issue? Could there be a “white-list”? I’d hate to restrict on a country-by-country basis, though I can see how the way treaties go, this would be relevant. (For example, my teenager has just realized that the British have a *different* historical perspective than the US….)

    (2) A refinement: it would be actually *helpful* to enable a labeling system, where what you’re talking about is effectively NC-17, yet most parents have issues with specific aspects. Believe it or not, I have no problem with my teenaged son surfing nudity — I recognize that beyond titillation there’s a legitimate need to understand male health and human sexuality — but I’m absolutely not happy about other things (e.g., violence). I don’t know if this is reasonable since you’re talking about “harmful to minors” and I am talking about filtering, which could be construed as censorship (until you imagine a 7 year old getting a view of Saddam Hussein’s hanging).

    (3) Last, what to do about Google’s perpetual cache? Am I misunderstanding things, or are pages cached virtually forever? In this case wouldn’t it be somewhat impossible to retroactively label?

    This is, obviously, far afield from my expertise; I’m just asking out of curiosity, and have no plans to personally effect these changes, so if you don’t have time to reply, I certainly understand (and maybe someone else will). Thanks!

  • Henry EMrich

    Prof. Lessig, I can no longer take you seriously as a thinker.

    This is pathetic — even lower than your bungling of the “Eldred” case (thanks to you, twenty more years of Disney monopoly — good going Larry!)

    Your “modest” proposal is idiotic.

    1. You assume that “law” (and the attendant punishments for violating it) is somehow persuasive or effective. Sorry, wrong answer: the whole “p2p” (peer-to-peer) scene seems to revolve around people who, for various reasons, don’t give a shit about copyright or whether such-and-such file is “legal” to download. “Law” is only meaningful if it’s enforceable AND if people can be threatened badly enough by the penalties.
    But even the worst penalties aren’t themselves persuasive all of the time: “civil disobedience” anyone?

    Your proposal hinges on several really bad ideas:

    1. You accept the notion of “obscenity” in a legal context.
    2. You accept the (extremely questionable) idea that particular imagery or content can be ‘harmful to minors’. (Same argument made by every ‘well-meaning’ school board that bans Huckleberry Finn, by the way.)
    3. You advocate LAWS REQUIRING ‘tagging’ of this supposedly ‘harmful to minors’ material, so as to ‘help parents’ censor what their children see.

    Great job, Lessig — you’ve managed to make just about every error possible in this issue.

    First, whether or not there’s a law ‘requiring’ such tagging of ‘objectionable’ content is already moot beforehand, given the fact that at least SOMEBODY is going to fail to comply with that ‘law’ for philosophical or ideological reasons. ‘Civil disobedience’ in action.

    Secondly, who exactly gets to draw up the lists of content which gets ‘invisibly tagged’? Is it porn? What about ‘sexually explicit’ stories? What about news-feeds that happen to include footage of dead bodies (due to our wonderful continuous warfare of late)? Do we go for the ‘conservative’ route and censor sexual content, the ‘liberal’ route of censoring ‘hate-speech’ and ‘politically-incorrect’ content, or the ‘compromise’ route of censoring everything?

    This is pathetic, Lessig. Who’s to say, further, that OTHER organizations BESIDES ‘parents’ won’t use this wonderful ‘invisible tagging’ for their OWN purposes? There’s already a great example of your wonderful plan in action — two of them, in fact:

    The Chinese government has been attempting for several years to censor what its children (oops, I mean ‘citizens’) can view on the Internet. Thankfully, genuinely freedom-loving individuals are constantly creating workarounds and anti-censorship tools to thwart such oppression.

    The “Church of Scientology” had a ‘free internet-connectivity’ CD out for a while, for its members’ use. (The only problem was that they had placed the equivalent of your wonderful ‘invisible tagging’ technology — hidden DLLs — which prevented the web browser or any other application from displaying web pages or searches which were antithetical to the Church of Scientology.) Look it up, Professor.

    Further, your “let’s get the government to make a law to compel a ‘market response’” line is the same tired, old, recycled “there oughta be a law…” thing that EVERY advocate of further governmental expansion uses.

    Professor Lessig, I really hoped you were a capable and sincere person — reading your book “Free Culture” was a real eye-opener. But first, you completely muff the “Eldred” case because you failed to understand how to approach it — thanks for the twenty-year copyright extension, by the way — and THEN you advocate government ‘intervention’ to ‘help parents’ by mandating censorship technology?

    Thankfully, even if this monstrous proposal of yours WOULD get taken seriously, there are BETTER people than you who will promptly devise “tag-muting” to thwart such censorious bullshit.

    The funny part is, I actually had respect for you at one time.

  • Lara Spencer

    Why not have a la carte ports? We could ask web publishers to publish “h2m” material on one range of port numbers and other material on ports that are suitable for children. This would be more like the hardcopy world and zoning.

    Any user then could purchase from their ISP access to all ports service, some ports, or just general content ports (not h2m ports). If the block is made at the ISP level, it can’t be circumvented on the home computer by all but the extremely skilled techie kid with a lot of expensive equipment.
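
    A sketch of the port-level decision an ISP might make under this scheme (the port number and plan names are invented for illustration; no real port assignment for "h2m" content exists):

```python
H2M_PORT = 8000   # hypothetical port range reserved for h2m material

def allowed(dest_port: int, plan: str) -> bool:
    """plan is 'all' (full access) or 'general' (h2m ports blocked at the ISP)."""
    if plan == "all":
        return True
    return dest_port != H2M_PORT
```

    Because the check runs on the ISP's routers rather than the home machine, it cannot be undone by editing anything on the child's computer.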

    Some parents will choose to limit access to some Internet content. Everyone else will never notice any difference. It is like choosing not to enter a certain store because it does not sell what you want your kids to have; surely a parent has a right to do that without it being a First Amendment problem. It hardly seems unfair to allow parents who do have strong views on pornography to have a choice short of throwing the Internet out (hardly realistic in this society) and sitting next to their curious teenager 24/7. (Why don’t we leave issues like gun access, alcohol, drugs, etc. to parental control?) Is it unreasonable to facilitate some choice for these folks, if it is going to cost so little for everyone else?

    “Legitimate” pornographers who are not trying to trap or target kids should be fine with either the invisible label or the simple configuration to serve to a different port.

    Just as there are those posting Supreme-Court-defined “obscenity” now on the Internet, even though it is illegal, there will be those who will continue to feed extreme images without the labels or appropriate port designation. What would you do about those?

  • nedu


    Exactly what problem is your CP80 Internet Channel Initiative designed to solve that isn’t already addressed by PICS/RDF?

    Is the problem just that you would like the internet to be more like cable television? Or is it just that you would like to magically hand-wave some sort of silver bullet?

    Sorry if I sound a little adversarial—but exactly what problem is it that you’re trying to solve with this initiative?

  • Henry EMrich

    “[July 4, 2001 - LUBBOCK, TX.] Free speech is under siege at the
    margins of the Internet. Quite a few countries are censoring access
    to the Web through DNS [Domain Name Service] filtering. This is a
    process whereby politically incorrect information is blocked by
    domain address — the name that appears before the dot com suffix.
    Others employ filtering which denies politically or socially
    challenging subject matter based on its content.”

    The above is from a document titled the “Hacktivismo Manifesto”. It highlights the fact that INTERNET CENSORSHIP is already happening — and that it is already rampant.

    The sad fact, Professor Lessig, is that it is pernicious little lawyer-worms like YOU who give aid and comfort to the pro-censorship forces by your concessions. You’ve pretty much given them every bit of ground you possibly could on this issue while still being able to (hypocritically) continue to classify yourself as an advocate of “freedom of speech”.

    The valid social policy is NOT to aid “families” in any of their attempts at “thought-control”. If you wouldn’t support (for example) the banning of “Huckleberry Finn” by a school district, then why in HELL would you support the electronic equivalent? Mr. Lessig, if anything, the local school boards that ban Twain’s book because of the word “nigger” are infinitely LESS odious than your capitulation on this issue, if only because they legally mandate censorship on a merely LOCAL level. The idea of adding pro-censorship code to HTML — for whatever reason, covering ANY content whatsoever — is egregious. It’s not whether the technology censors out ‘too much’ or not — it is, more fundamentally, the perception that the State has ANY ROLE WHATSOEVER in “helping” parents inculcate their own narrow-mindedness into the younger generation. The State has NO business aiding families in that manner.

    Creeping totalitarianism is very easily sugarcoated — especially when it’s “for the chi-i-i-i-ldren!”


  • Brandon Yarbrough

    There’s something in this presentation that doesn’t sit well with me. Here you argue that the reason current filtering software is harmful to speech is that it blocks too much legitimate content. However, when you point out that this law could not affect websites in other countries, you offer as a solution blocking all international content until they adopt similar legislation. Isn’t that the same problem times 1000? Surely blocking everything the rest of the world has to say is a much greater problem than Net Nanny accidentally blocking breast cancer awareness sites.

    I like the h2m tag idea anyway, though I’m not sure I like mandating it. It seems too web-specific. What do we do with websites built largely around Flash? Or email? Or Usenet or IRC? There are plenty of other protocols out there, and if the h2m system were a glowing success, I’d be worried parents would assume they could make the entire Internet safe by enabling a check for it in the web browser alone.

  • poptones

    At least two local ISPs offer “family safe” service. It doesn’t cost extra and people are free to choose it or not. If one doesn’t like the service at company A they can always go for company B; I go for company C because I want none of it.

    Seems to me this is an entirely non-issue save for lawmakers and politicians looking to make some money for themselves and heighten their control over the market and the people.

  • Andrew Radley

    There are some strong arguments here on a number of different fronts.

    However, the main thing is that the person paying for the service should be able to decide what is appropriate for their household or business. That is simply not what the ISPs are delivering, and therefore proposals like CP80 start to gain ground.

    I have felt for a long time that there are a number of things that we as technologists can do to improve people’s experience of the Internet:

    1. Most people do not need or use all the protocols on the Internet, so these shouldn’t be enabled for residential subscribers until they ask. This is not to deny them anything, but to limit their exposure to things they cannot deal with. As an example, there have been many cases of parents becoming liable for the music and movie content that their children have downloaded over P2P. Given that in many homes the children are more technically literate than the parents, it seems only logical to give out the basics as a starting point for a service (HTTP, SMTP, POP3, etc.) so that the parent is in charge of what is going on inside their home.

    2. Allow them to define what they consider acceptable in terms of web content via an easy-to-use interface, with the ability to override the categorisation engine that is providing the service. Once again, this is entirely down to the subscriber to define, although there’s nothing wrong with providing subscribers with some starting templates. Simple-to-understand models are already out there in film classifications; they just need applying to this ‘new’ context.

    3. Provide a network-based anti-malware service so that subscribers can be safe from the start of using the service, rather than the current model of allowing a subscriber to download software if they want. This is the biggest weakness in subscriber security, as it provides a level of choice that they are typically unable to make. Once again, this can be turned off if the subscriber wishes, but it should be there, turned on by default, when the subscriber signs up.

    None of this affects freedom of speech any more than a parent’s choice of which channels to buy from their cable provider, or which books to buy their children.

    The technology is out there, it’s just a matter of making the business case to the service providers that this is what the subscribers want, and in a lot of cases, need.

  • Pat Gunn


    It would be a lot easier to take you seriously (or even read your whole post) if you didn’t use all those cliches and other signs of trolling. There is little room for all the CAPITAL LETTERS and “oops, but you’re an idiot so you disagree with me”-type phrasing in polite discussion. It’s possible to disagree in a way that continues discussion and won’t turn people off from listening to you.

  • Gram

    I guess the question you have to ask is:

    Is content anarchy on the internet more important than giving some choice to parents and individuals (a choice that exists in all other forms of media except the internet)?

    If content anarchy is more important to you, then any new idea or attempt to fix a worldwide mental health crisis is going to be a waste of time with you, so you can go back to your porn and self-abuse sessions.

    If giving some choice to parents and individuals is more important, then you must realize that there is a real problem, and, in my opinion, a problem that must be solved with legislation and technology. Professor Lessig, thanks for taking the time to offer a real solution to the problem.

  • marcus

    I have two reactions to this topic. The first is that parents who try to control by technology what content their children *can* access are doing something dangerous in the first place. In my opinion, the more sensible way is to educate children so that they can deal with this problem sensibly on their own responsibility. A child who is determined to access certain content will find one way or another to get at it.

    The second reaction is that, supposing the child does not want to access this content, we still need a way to suppress it. Take the number in your presentation: 66% of the children exposed to pornography did not want to be. In fact, this is just a particular case of filtering information on the Internet generally, and not specific to children. What if I, as an adult, do not want to be exposed to flame wars, pornography, or graphic violence, or, worse in my opinion, advertisement or brainwashing mainstream media “news reporting”? How can any user of the Internet sensibly access the information he wants and filter out information he doesn’t want to see? I have a strong suspicion that the same solution should work for advertisement as for pornography. This seems a good test case, and I am not sure that your proposal fits the bill.

    There is a huge problem here. The freedom of the Internet creates a vacuum of content filtering. This vacuum wants to be filled, but in my opinion not because we require one party (parents) to control what another party (children) sees, but because every one of us wants to control what we see for ourselves. Unfortunately, I don’t have a good proposal for that.

  • GtRl

    > Exactly what problem is your CP80 Internet Channel Initiative designed to solve that isn’t already addressed by or by

    Kids aren’t the only people who would like a porn-free Internet experience, and the CP80 initiative covers all technologies that exist and will exist, as well as dealing with forums, blogs and social communities.

    > Is the problem just that you would like the internet to be more like cable television? Or is it just that you would like to magically hand-wave some sort of silver bullet?

    What exactly is your point, Nebu? Do you think that the Internet is a force of nature or a physical/unalterable law of science? It is a man-made creation that can be evolved to better serve the needs of the people who use it. If it can be fashioned so that you can get your porn and I can block it, then why not?

    > Sorry if I sound a little adversarial…but exactly what problem is it that you’re trying to solve with this initiative?

    Two problems: 1) it brings order and accountability to an otherwise chaotic and irresponsible community; and 2) it allows individuals to choose whether or not they want access to adult content.

  • Henry Enrich

    You really ARE a worm, aren’t you?
    Since other countries wouldn’t be beholden to your wonderful “governmental action” you advocate BLOCKING content from other countries? That’s pathetic. You really are worthless, Larry. “Great Firewall of U.S.A’ here we come, courtesy of the pathetic “free culture” blowhard himself!

    There is very literally no way that you could EVER possibly redeem yourself to me now. The idea that the State should BLOCK CONTENT from areas that it cannot control is fucking TOTALITARIAN BULLSHIT — and you actually posture as an advocate of “free culture”.

    I detest you. This isn’t just a ‘dispute’ — this is wholehearted advocacy that the Nanny-State FORCIBLY take over the Internet, mandate ‘content filtering’, and block access to ALL CONTENT outside its jurisdiction.

    (And I thought Google’s decision to help the Chinese censor the ’net was bad. You advocate something far worse, and you do so in the name of “protecting children.” Pathetic.)

    (Nobody should be surprised here, however — many ‘educated’ men supported Hitler’s Germany AND the former Soviet Union. This time, totalitarian despotism will at least wear a “family friendly” smile.)


  • nedu


    The CP80 proposal is just another tagging and filtering proposal. While superficially plausible, it does not make engineering sense.

    I do hope you’re aware that a “port” is nothing more than a field within the TCP, UDP, or other transport layer header. That is, it’s just a number.

    How do you expect your tagging and filtering solution to cope with various forms of IP-over-IP tunneling — specifically, IPv4-over-IPv6 and IPv6-over-IPv4? You’d need deep packet inspection, and at that point you’re no better than any other filtering solution. On the other hand, if you propose simply to prohibit the technologies we need to transition to IPv6, then your proposal is DOA.
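    The point that a port is “just a number” is easy to make concrete: the first four bytes of a TCP header are nothing more than the 16-bit source and destination port fields. A small illustration (the header bytes here are fabricated):

```python
import struct

# A "port" is just a 16-bit field in the transport-layer header.
# The first four bytes of a TCP header hold the source and destination
# ports, in big-endian ("network") byte order.
tcp_header = bytes.fromhex("01bb0050")  # fabricated: 0x01bb, 0x0050
src_port, dst_port = struct.unpack("!HH", tcp_header)
print(src_port, dst_port)  # 443 80
```

    Once that packet is wrapped inside another IP packet (IPv6-over-IPv4, say), these four bytes become opaque payload to any filter that reads only the outer headers — which is why port-based filtering needs deep packet inspection to survive tunneling.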

    Anyhow, you’re just handwaving over the problems of tagging and filtering, while claiming to have a general solution. That makes CP80 look like vaporware snake-oil.

  • nedu

    > I have felt for a long time that there are a number of things that we as technologists can do to improve people’s experience of the Internet: [1 2 3]

    Andrew Radley,

    Broadly speaking, the essence of your itemized list can be summed up as moving the subscriber’s security perimeter from customer premises equipment to internet access provider equipment.

    What about subscribers on shared media? Particularly cable modems? Is expanding their security perimeter really such a good idea?

  • Adrian Lopez

    Why should “harmful to minors” speech be regulated at all? Before I support any sort of regulation I’d like to be certain that this sort of speech is, in fact, harmful to minors. Without scientific proof, regulating “harmful to minors” material in any way is no different from legislating morality.

    The fact that filters suck is no reason to embrace regulation. Let’s instead get rid of regulations that mandate the use of filters, and enact laws that enable site owners to sue for libel when the content of their websites is misrepresented by filtering software vendors.

  • Patrick

    Labeling is certainly one way to go about filtering material that is “harmful to minors”, but my concern is what happens to people who make a mistake in labeling. Are they to spend time in prison and/or pay a fine because a prosecutor and judge have a different interpretation of the law? The law is pretty clear about content like pornography being harmful to minors, but what about content whose harmfulness to minors isn’t as clear? It seems like a surefire way to encourage the more litigious among us to sue or file charges every time there is any doubt whether content is harmful or not. The tags themselves may not be much of a burden, but the threat of prison time or expensive lawsuits certainly could be, unless there is a less subjective way of identifying content that needs to be tagged.

    I am sorry some here can’t seem to argue their point without personal attacks. Calling someone a worm or a despot accomplishes nothing but making this an emotional shouting match.

  • poptones

    “The Law” has also made “perfectly clear” the rules for one man owning another, and for one man putting himself above another, and for beating one’s wife. The law tells us that homosexuality is deviant behavior, that sodomy is harmful to a society that tolerates it, and that you can marry that 14-year-old with her parents’ consent, but you had better not take any honeymoon photos…

    The law is an ass. Asking “how do we make the global internet better reflect our narrow perspective” simply won’t get you a reasonable answer no matter how hard you try to spin it. There are huge cultural differences even within one city; catering to political whims to overwhelm anything which threatens the most vocal minority’s ability to “regulate” all of society simply cannot serve liberty.

  • Alan Green

    I may have missed some detail (I’m better at reading than watching videos) but here are a few issues that I think would need addressing before a law came into being:

    1. Is a search engine based in the US responsible for wrapping h2m tags around foreign, h2m content?

    2. Will all web pages created pre-enactment need to be updated with h2m tags, as appropriate?

    3. Society’s standards change over time. Will web content need to be periodically re-fitted with h2m tags?

    4. Services such as MySpace, LiveJournal, Digg, Wikipedia, and their smaller brethren create web content from information entered by diverse groups of individuals. Will these individuals be responsible for indicating the h2m state of their content? If not, isn’t it likely that these services will wrap all user-generated content in h2m tags, because that’s the simplest way to ensure they stay on the right side of the law?

    5. What about video, PDF, BitTorrent and so forth? What about the next big protocol after HTML? I suppose the law, rather than mentioning a specific piece of technology like the h2m tag, could create a regulatory authority that addresses each technology.

    6. Perhaps I misunderstood what you were saying about the specifics of the law being open to challenge in court. Courts are of no use to 99% of content creators. We don’t have the money, and even if we had the money, we wouldn’t have the time.
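    For what it’s worth, the browser-side half of the proposal would need very little machinery. The sketch below assumes a hypothetical `<h2m>` element (the tag is proposed in the presentation, not part of any HTML standard) and drops everything nested inside it, using Python’s stdlib parser:

```python
from html.parser import HTMLParser

class H2MFilter(HTMLParser):
    """Drop any text nested inside a (hypothetical) <h2m> tag."""
    def __init__(self):
        super().__init__()
        self.depth = 0  # how many <h2m> tags we are currently inside
        self.out = []   # text kept after filtering

    def handle_starttag(self, tag, attrs):
        if tag == "h2m":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "h2m" and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0:  # keep only text outside all <h2m> tags
            self.out.append(data)

f = H2MFilter()
f.feed("news <h2m>adult material</h2m> weather")
print("".join(f.out))
```

    The same simplicity cuts the other way, though: a service worried about liability (point 4 above) could wrap every user post in the tag with one template change, and a child-mode filter like this one would then drop all of it.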

  • Adrian Lopez

    This book should be required reading for anybody who suggests that indecent speech be regulated on the basis that it’s harmful to minors.

  • Andrew Radley


    The access medium is orthogonal to the nature of the service I’m thinking of.

    It’s worth looking at the anti-virus market to see how well the ‘end-user is responsible for all’ model works. Today anti-virus sales are at their highest. Anti-virus software installs are at their highest. But virus infections are also at their highest.

    This indicates that there is a problem with the model: entrusting security to the average end-user is doomed to failure, because end-users cannot absorb the level of detail required to protect themselves.

    What’s required is for ISPs to offer Internet access services where the content is cleaned up before the subscriber gets it. Just like the water system. It doesn’t preclude people also taking their own brand of water filter into their home, but it does mean that the service is safe at the point of use. The responsibility for running the technical elements for the majority of users is taken up, at a cost, by their ISP. The operations team of an ISP is far more able to run a good-quality anti-malware service, or any other content security service, than probably 95% of their end-users.

    My other point is that all these things must be configurable by the end-subscriber. I’m advocating a subscriber-led Internet, rather than a publisher/conduit-led Internet. Nothing more, nothing less.

  • hedora

    If the US government would stop blocking the proposal to add .XXX domains, we’d get most of the advantages of the proposal without passing a single law. The idea with .XXX domains is that porn sites would register a .XXX domain name. Then censoring routers, browsers, and/or software firewalls would refuse to look up .XXX domain names. Technological provisions (reverse DNS lookups) would allow existing porn sites to keep their current addresses without bypassing the filters.

    The porn industry has been lobbying for this for years. It’s more reliable than current filters, and they’d prefer self-regulation to new classes of legal liability. The sites have an incentive to register .XXX domains, since they don’t enjoy receiving complaints from angry parents.
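    The resolver-side mechanism described here reduces to a single suffix rule rather than a per-site blocklist. A minimal sketch (the function name is invented for illustration):

```python
# Sketch of the .XXX idea: a filtering resolver needs only one rule,
# "refuse to look up names under the .xxx top-level domain", instead
# of maintaining a list of individual sites.
def allow_lookup(name):
    labels = name.lower().rstrip(".").split(".")
    return labels[-1] != "xxx"  # False means the resolver refuses

print(allow_lookup("example.xxx"))  # False: lookup refused
print(allow_lookup("example.org"))  # True: lookup proceeds
```

    The trade-off is obvious from the code: the filter is only as good as publishers’ willingness to register under .xxx, which is why the incentive argument in the next paragraph matters.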

    Of course, neither the .XXX nor the h2m proposal handles the really harmful stuff on the Internet, nor does either provide parents with much control over other types of content.

    Are hate speech groups really going to censor their own sites? Should they be expected to? What about sex-ed sites? Religious historians? Ultimately, local communities and parents are going to want to answer these questions; the producers of the content aren’t qualified to do so.

  • Silence is golden

    The irony is that a web site like is free speech, while a penis entering a vagina is harmful to minors. For a 12-year-old, buying into the ideas of hate speech is far more harmful than seeing pornography, and I doubt such a site would adopt the H2M tag any time soon. The bottom line is that no amount of law or code can replace proper parenting. As for blocking international sites altogether, all I can say is this is not the Lessig I thought I knew.

  • John Swanson

    I dislike negative stuff! Instead of someone declaring content harmful, why not say it is “Kid Safe”? Maybe with an age qualification. Then someone who puts a “Kid Safe” tag on porn content did something deliberate, not just an “oops, I forgot to place an h2m tag”.


  • Kyra D. Gaunt, Ph.D.

    I work in childhood studies and once viewed the video and recommendations 2 years ago. It’s no longer available above. The links do not work. Why? And how can I access this vital info again?