Times Withholds Web Article in Britain – New York Times

Here’s an interesting note about how the Times blocked an article from IP addresses within Britain because it contained information that might prejudice an English jury. Kudos to the Times for coming right out and admitting the block.

The problem with this action is the type of precedent it might set. Would I be liable for the same offense if I posted the same information on my blog, which is just as available to all of England? What about a more prominent blogger? It's interesting to think about what, in this case, separates the Times as an institutional speaker from less notable (but, in theory, just as publicly accessible) publishers.

Free speech case to watch

In a story that appears to be little reported in the U.S. media, the BBC reports:

A US businessman has been charged with offering broadcasts of Hezbollah’s al-Manar satellite television station to customers in the New York area.

It appears from this report that this individual is being charged with “doing business with a terrorist entity” because of his rebroadcasting of Hezbollah news broadcasts. What is interesting is that the broadcasts are news (perhaps not balanced, but certainly political speech), and that the speaker is not an individual but a business (and since it is a rebroadcast, it is not the individual’s own speech). It’s also not immediately clear from the reports whether any money was given to Hezbollah, or whether the spreading of its message was itself considered the offense. More charges are said to be pending, so there will certainly be more about this soon.

Edit: it looks as though some of the US media has picked up on it…just not my RSS aggregator.

Implementing ‘institutional review’ in collaborative editing?

Can the Germans fix Wikipedia?

The Wikipedia vandalism problem has pushed the model closer to restricting editing in a given area to expert authors, a point at which it would start to resemble a regular encyclopedia.

From what I can tell, the system still allows anyone (or perhaps only logged-in users) to edit a page, but a “trusted” or “experienced” user then reviews the changes before they go live.

This change makes perfect sense.

Technically, it sounds a lot like Slashdot’s moderation system or Google’s PageRank™. Outside the technological realm, it sounds quite a bit like how review works in publishing, or even in the newsroom. Institutions (whether commercial or nonprofit) have a number of inherent factors that help ensure content quality: review processes, history, reputation, legal liability, and internal debate all keep institutional speech from being completely uninhibited. By mirroring this in its own way, Wikipedia stands to become more reputable while still capitalizing on the benefits of worldwide collaborative editing.
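The edit-then-review model described above can be sketched in a few lines. This is an illustrative sketch only, not Wikipedia’s actual implementation; all class and user names here are hypothetical:

```python
# Sketch of a "flagged revisions" review model: anyone may submit an
# edit, but it only becomes the live version after a trusted user
# approves it.

class ReviewedPage:
    def __init__(self, text):
        self.live_text = text   # version shown to readers
        self.pending = []       # edits awaiting review, oldest first

    def submit_edit(self, author, new_text):
        """Any user may propose a change; it is queued, not published."""
        self.pending.append((author, new_text))

    def review(self, reviewer, trusted_users, approve=True):
        """A trusted user publishes or rejects the oldest pending edit."""
        if reviewer not in trusted_users:
            raise PermissionError("only trusted users may review")
        author, new_text = self.pending.pop(0)
        if approve:
            self.live_text = new_text

page = ReviewedPage("Initial article text")
page.submit_edit("anonymous", "Vandalized text!")
page.submit_edit("anonymous", "Improved article text")

trusted = {"experienced_editor"}
page.review("experienced_editor", trusted, approve=False)  # reject vandalism
page.review("experienced_editor", trusted, approve=True)   # publish the fix
```

The key design point is simply that publishing and editing are decoupled: vandalism never reaches readers, at the cost of a review bottleneck.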

Google’s Library

In case you don’t have time to read Battelle’s The Search, a pretty good substitute might be an article in today’s Washington Post (Search Me?). The article discusses a number of issues around Google’s book project, which got me thinking: could a bit of creative lawyering, calling Google a “library,” offer an escape from the project’s legal liability? The article makes the point that:

To index the Web, Google first sends out software programs called “crawlers” that explore the online universe, link by link, making copies of every site they find — just as Book Search makes a digital copy of every book it can lay its hands on. Web sites are protected by copyright, so if you don’t want your site indexed by Google and its search brethren, you can “opt out,” usually by employing a nifty technological watchdog (a file called robots.txt) that tells search engines to bug off.

Ditto for books, Google argues: Publishers and authors can opt out by informing Google that they don’t want their books scanned and made searchable.
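For readers who haven’t seen one, the robots.txt “watchdog” the Post mentions is just a plain-text file at a site’s root. A minimal example that asks all crawlers to stay out of the entire site:

```text
# Ask every crawler to skip the whole site
User-agent: *
Disallow: /

# Or single out one crawler while leaving the rest alone:
# User-agent: Googlebot
# Disallow: /
```

Note that this is purely an opt-out convention: well-behaved crawlers honor it, but nothing technically enforces it — which is part of why the opt-out framing is contested for books.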

It looks as though the “transformative use” argument is being made, but I wonder whether it could be extended to the notion that, much like a traditional library, all Google is doing is collecting material and making it available for a limited public use (a few pages, as opposed to a limited-time loan). I’m not saying this argument is necessarily persuasive, or even right; I’m just drawing an analogy between a data warehouse and a physical book warehouse. As it is, copyright’s basis in “the copy” pretty much makes the argument untenable (though that basis is itself up for debate).

…in other stories grabbing my attention today

Michael Geist is picking up on the emerging Blackboard v. D2L patent fight. I’m not exactly sure what to make of this yet, as a large part of my job involves working with D2L software (for one of its earliest and largest clients, I’ll note). For more, keep an eye on Michael Feldstein’s blog.

Finally, the Times has an article about Paparazzi 2.0, in which anyone with a cell-phone camera stands to profit from selling quick pics to tabloids. Celebrity privacy can now be added to the list of areas of media law that individuals may need to “be more aware of” in the world of anybody-can-publish (something Ashlee Simpson knows well). It appears that, at least in these cases, there are still organizational checks within news and tabloid firms to keep the issue from falling entirely on the photographer.

Finally finally, here’s a good note on “Astroturfing 2.0” from Ars. If my old pro-market commenters on net neutrality were in fact involved in this, they might like to know that their efforts were appreciated: they really forced me to further examine and clarify my own views. And if they were commenting legitimately, thanks for the same reason!