The White House and Congress are trying to restrict use of public domain photos and videos. 


As two of the three branches of the US government, Congress and the Administration have key roles in creating and enforcing our copyright law.  So why are they trying to restrict what people do with public domain material?
 
Believe it or not, copyright law actually has a specific section addressing the Federal Government’s ability to get a copyright. The section is pretty straightforward: the Federal Government does not get copyright on the works that it produces.  You don’t need to be a lawyer to understand the first part of 17 U.S.C. § 105:

Copyright protection under this title [which pertains to copyright] is not available for any work of the United States Government.



This means that works created by the US Government receive no copyright protection.  These works do not pass go nor do they collect $200 – they automatically enter the public domain the moment they are created, freely available for anyone to do whatever they want with them.  And yet strangely there are parts of the US government that do not seem to understand that.

This is not new.  Back in 2009 our friends over at Creative Commons and EFF pointed out that the official White House Flickr stream was using a CC-Attribution license – a license that requires some sort of underlying copyright to enforce.  To their credit, shortly after this concern was raised, the White House and Flickr responded to this criticism and made it clear that the works are in the public domain.



Those United States Government Work “licenses” still appear on White House Flickr photos.  But the licenses are not alone.  They are joined by a prominent alert:

This official White House photograph is being made available only for publication by news organizations and/or for personal use printing by the subject(s) of the photograph. The photograph may not be manipulated in any way and may not be used in commercial or political materials, advertisements, emails, products, promotions that in any way suggests approval or endorsement of the President, the First Family, or the White House.


What?  This extra language has been noted multiple times, but for some reason persists.  Whenever you see a restriction like this, the first question you should ask yourself is “or what?”  What happens if I use these photos outside of the scope of the restriction?  In most cases, if you saw this type of restriction the “or what” would be “you will be sued for copyright infringement for exceeding the scope of this license.” 

But without copyright protection, that “or what” is simply not available.  The White House is not explicitly claiming copyright on these photos (the license makes that clear), but this type of scary quasi-legal language gets awfully close to flirting with a bit of light copyfraud.  I could reproduce entire photos here on the PK blog – neither the site for a news organization nor my personal website – without fear of any sort of repercussion.  See:



I can even manipulate them in express violation of the alert:



The White House clearly understands its relationship to copyright.  The copyright policy of whitehouse.gov makes it clear that nothing that the White House generates for the site is protected by copyright.  And the White House YouTube channel makes it clear that its videos are in the public domain and even makes it easy for you to download them.



So what’s so special about photographs in the Flickr stream?

Unfortunately, the White House is not alone in this game.  The House Judiciary Committee streams and archives its hearings here and the page includes this restriction:

Use restriction: No portion of any recording may be used for a political purpose; no portion of a recording may be disseminated with commercial sponsorship except as part of a bona fide news program or public affairs documentary; no portion of a recording may be used in any commercial advertisement; and any redistribution must be subject to this same notice.



The House Government Oversight Committee does not have a lengthy use restriction on its YouTube page.  But instead of a public domain notice with a download button, it applies a “Standard YouTube License” to archived videos of its hearings.



With no underlying copyright to license, that license is meaningless – although it may stop someone who has not read section 105 from making use of video in the public domain.  The House Energy and Commerce Committee does the same thing.  Members are just as guilty.  Representatives Goodlatte, Watt, Blackburn, and Conyers – all of whom are heavily involved in copyright issues – slap licenses on videos that are not protected by copyright.

The Senate is no better.  Videos on the Senate Commerce Committee YouTube channel are licensed under a “Standard YouTube License.”  So are the Senate Budget Committee’s videos.  Ditto for videos from Senators involved in copyright policy like Senators Leahy, Hatch, and Feinstein.  All of them leave the public under the false impression that they need some sort of permission in order to make use of these videos.



This may all seem like legalistic quibbling, but it is not.  There are many members of Congress who think it is important to educate the public about copyright, but it seems that no one has thought to start with videos that Congress itself releases.  The same applies for photos released by the White House.  Worse, bogus use restrictions imply that the American public is not free to use the works that its government is producing on their behalf.

Fortunately, this is an easy one to fix.  Get rid of bogus use restrictions on photos and videos.  Make use of public domain licenses on online services.  And if an online service does not allow for a government work-type license, make use of the comments.  Tell the public that they are free to make use of the work however they want.  After all, that’s the law.

Expanding on The Switch’s 5 things neither side of the broadband debate wants to admit.


Over at The Switch today, Timothy B. Lee offered his list of 5 things neither side of the broadband debate wants to admit.  His list strikes me as mostly reasonable, although I think that you could find at least one side of the debate to endorse most of them.  In any case, I wanted to take a moment to add a bit of color to the list, to try and give you a sense of how we think about some of these things.  Here are Lee’s things, followed by a bit of commentary.

1.    American wireless service is working pretty well.

Especially when compared to the wired broadband market, this statement is fairly accurate.  We have four nationwide carriers, and some decisions (like offering earlier upgrades) by one carrier clearly push the other carriers to match.

I would add two additional data points to that statement, however.  Lee mentions that, in 2007, there was a great deal of concern about wireless carrier control over mobile software.  He then points out that Apple’s decision to open up the iPhone to third party developers rendered the concern “obsolete.”  While no one would dispute that the state of mobile software has improved massively since 2007, I don’t know that I would go so far as to say that concerns about network operator control are necessarily obsolete.  Even today we have carriers preventing some types of services from running on phones connected to their network.  And carriers are still working hard to prevent you from unlocking the phone that you own from their network.

Moreover, this state of working pretty well was not necessarily the wireless industry’s destiny.  As we have pointed out before, the FCC’s decisions to reject mergers and signal that it would support four competitive nationwide carriers have done a great deal to preserve the level of competition we have today.  That does not undercut Lee’s point, but it is worth keeping in mind.

2.    We’re falling behind on residential broadband.

It won’t come as a surprise that we’re quite willing to admit that one.  The theory of facilities-based competition between telephone companies, cable companies, satellite providers, and even power companies has turned out to be weak in practice.  While, as Lee points out, some like to point to DSL or satellite as viable competitors, the reality is that they are not.  Coming to terms with this state of affairs would go a long way towards developing rational broadband policies.

3.    We desperately need more broadband experimentation.

Again, we’ll admit this one too.  Our allies at the Institute for Local Self Reliance do fantastic work trying to help localities build their own local networks and push back against statewide bans on such experimentation.

On the flip side, we get wary that “experimentation” can also be interpreted as an excuse for existing ISPs to inject themselves into the value chain through data caps or special priority fast lanes.  In a world with limited broadband competition (see point 2), there are few market protections for consumers with ISPs who want to experiment by exploiting their control over customers.  This does not mean that ISPs should be prevented from experimenting.  But these types of concerns should be kept in mind when thinking about those experiments.

4.    Discrimination concerns are mostly about video streaming.

This strikes me as sort of, but not totally, true.  Video streaming gets a lot of attention these days.  In part, this is because video is one of the most data-intensive applications that most people will use on a regular basis.  Therefore it is an easy way for people to understand more general concerns about discrimination.  

Of course, there are also some video-specific discrimination concerns. As Lee points out, most Americans connect to the internet through their cable provider.  And the role of competitor to online video and keeper of a key ingredient to the success of online video can create some problems.

But video is not the only potential victim of discrimination.  The internet moves quickly and new applications can seemingly emerge overnight.  While I don’t know what it is, it is not hard to imagine a whole new generation of data-intensive applications that could be just as vulnerable to discrimination as online video is today.  So while the discussion is about video today, that doesn’t mean that it will be about video tomorrow.

5.    “Network neutrality” probably isn’t the answer.

Again, this is one that I agree with in part and disagree with in part.  I agree that network neutrality is not the only answer.  One of the reasons that we started WhatIsNetNeutrality.org was to remind people that net neutrality is actually a fairly specific thing, and that “net neutrality violation” was not just synonymous with “a bad thing happening on the internet.”  There are going to be a number of developments that raise concerns about internet access but have nothing to do with net neutrality.

That being said, net neutrality is an answer to some of those concerns.  Net neutrality rules helped to resolve the dispute surrounding AT&T’s decision to block the FaceTime app for some of its customers.  Perhaps more tellingly, last week Verizon explained that the FCC’s net neutrality rules were the only thing preventing it from trying to force some websites and services to pay to get special access to its customers.  I’m pretty confident that the existence of net neutrality rules is at least part of the reason that problematic behavior has migrated to other places in the network.


Those quibbles aside, Lee’s five things feel like they are in the right neighborhood to me.  Perhaps most importantly, they serve as an important reminder to move beyond the state of play in 2003.  We at Public Knowledge work hard to keep abreast of the evolving technical and business reality of the internet and to adjust our advocacy accordingly.  But it never hurts to get another reminder that things evolve and that we need to as well.

Original image by Flickr user SweetKaran.

Last week I had the opportunity to participate in the Open Hardware Summit, an event that is always a highlight of my conference year.  Now in its fourth year, the summit is a chance for the robust community that has grown up around open source hardware to come together, discuss what has been happening, and show off great advances.

By any measure, the open source hardware community is thriving.  Each year the summit gets bigger, the projects and products get more ambitious, and the barrier to entry is lowered.  But this year it did feel like the community was reaching an inflection point.  The world of open source hardware is expanding beyond its original borders, and that presents its own set of challenges and opportunities.  While I raised some of these during the panel that wrapped up the summit, I wanted to expand upon a few of them a bit more.

The State of Licensing and the Law

I touched on this in a blog post last year, and it was the topic of my presentation this year, but my discussions with people at the summit made me think about this further.  While open source hardware looks to open source software for inspiration and guidance, from a legal standpoint it must strike out on its own. Fundamentally this is because, in contrast to software, most hardware is not protected by any type of intellectual property.  

This can lead to a tension.  There are people who are interested in creating “sticky” licenses for open source hardware – licenses that would force people who build upon open source hardware to be open as well.  Unfortunately, without an intellectual property hook, those licenses simply are not enforceable.  

The way to resolve this tension is not to find a novel way to protect hardware with existing intellectual property law, or to create a new type of intellectual property law that is easier to apply to hardware.  For every person who used this right to share designs, there would no doubt be 100 or 1,000 who would use it to reduce sharing.

Instead, it is to find alternatives.  Clear, non-legal descriptions of expectations may not be legally binding, but they will make it easy for good actors to play by the rules.  Legal enforceability is nice, but it is not the only way forward.  There can be a lot of power in publicly calling someone out for violating the rules. 

Letting New People In and Allowing Existing People to Evolve

You cannot effectively enforce the rules until those rules are clear to everyone.  There have been important attempts to help codify what it means to be open source hardware.  Phillip Torrone has written the {unspoken} rules of open source hardware, a strong effort to document some of the informal understandings that have grown up over time.  The open source hardware definition and best practices, hosted and developed by the fantastic Open Source Hardware Association, are also a huge step forward.  We need more of this.

But to many people, the heart of open source hardware can still feel like a set of gut instincts, community expectations, and hidden rules that are all too easy to run afoul of.  If you are not someone who has already spent a good amount of time in the community, sometimes it can feel like there are unwritten rules just waiting to be inadvertently broken.

For those already deep into the open source hardware community, this may not feel like a problem.  But for a community interested in expanding and evolving, it could be.  

Here is one way, though certainly not the only one, that this lack of clarity can play out.  Over the course of the summit I spoke with a number of people who work for large companies.  These individuals (and presumably their companies) were excited, or at least intrigued, by open source hardware.  I had no reason to believe that their desire to help their companies go open source was not totally sincere.

But, for all their enthusiasm and interest, they were a bit concerned.  They understood that the open source hardware community is a passionate one, and that they would only get one chance to make a first impression.  But they were not totally sure how to make sure that first impression was a good one.  As a result, the fear of crossing a hidden line may keep them out of open source hardware entirely.

Depending on your perspective, this is either a good thing or a bad thing.  There are plenty of people who don’t really care if large companies engage with open source hardware.  And that is a totally reasonable position to have.  But for people who are at least intrigued by the idea of having large companies embrace open source hardware, this feels like a missed opportunity.  Giving newcomers – even corporate newcomers – greater certainty that the rules are clear will help expand the open source hardware world.

Moving Forward

The bad news is that neither of these challenges can be solved by an Arduino robot or flashing LEDs.  They are the kind of unglamorous infrastructure and community building work that feel a bit like documentation – always behind something else on the to-do list.

Furthermore, there is nothing that says that anyone has to do any of it.  The world doesn’t end if the open source hardware community does not figure out an alternative licensing solution.  Similarly, nothing explodes if the only way to really “do” open source hardware is to hang around in the community for a while first.  

Nonetheless, I think not working harder on those things would be a shame.  But I could be wrong.  And I’m happy to be wrong.  My real hope is that if neither of these things happens, it is because there was some sort of conscious decision not to let them happen.  While I think it would be a missed opportunity not to do them, the real missed opportunity would be to not do them without even realizing it.

According to Verizon, the FCC’s Open Internet Rules are the only thing preventing ISPs from becoming gatekeepers for the internet.  For background on yesterday’s hearing, start here; for a summary of the arguments, go here; and for a timeline of net neutrality, click here.


Yesterday Verizon explained, in the simplest terms possible, why net neutrality rules are so important: the rules are the only thing preventing ISPs from turning the internet into cable TV.

During yesterday’s oral argument, the judges and Verizon’s attorney discussed Verizon’s desire to enter into special commercial agreements with “edge providers.”  Edge providers are just another name for websites and services – everyone from Google, Netflix, and Facebook to the Public Knowledge policy blog.  

These types of agreements – where ISPs charge edge providers extra just to be able to reach the ISP’s subscribers – are exactly the types of agreements that raise network neutrality concerns.  If Verizon – or any ISP – can go to a website and demand extra money just to reach Verizon subscribers, the fundamental fairness of competing on the internet would be disrupted.  It would immediately make Verizon the gatekeeper to what would and would not succeed online.  ISPs, not users, not the market, would decide which websites and services succeed.

Fortunately, we have rules that prevent this type of behavior.  The FCC’s Open Internet rules are the only thing stopping ISPs from injecting themselves between every website and its users.  But you don’t need to take Public Knowledge’s word for it:




That’s Verizon’s attorney yesterday.  “These rules” are the FCC’s Open Internet rules.  “Those commercial arrangements” are arrangements that would force edge providers to pay ISPs a toll every time one of the ISP’s subscribers wanted to access the edge provider’s content.  In other words, if your ISP doesn’t have a special deal with the website you want to visit (or if the website you want to visit is in a “premium” tier that you haven’t paid for), it may not work.

The FCC’s Open Internet rules prevent that type of corrupting market from developing.  Again, Verizon’s attorney:



All of this is good news for those of us in favor of net neutrality. The FCC’s Open Internet rules really are the only thing preventing ISPs from installing themselves as the gatekeepers of the internet.   And if you don’t believe us, just ask Verizon.


Bonus:  The excerpts above come from a slightly longer exchange between Verizon and the judges anchored in a discussion of standing.  The full exchange can be found below.  Remember that a “two-sided market” is one in which, in addition to charging subscribers to access the internet, ISPs get to charge edge providers on the internet to access subscribers as well.



And here is a link to a recording of the entire argument.

Any discussion must be built on facts. But the FCC has not asked the questions, and ISPs have not provided answers.


Earlier this week, the Open Internet Advisory Committee – a group formed by the FCC to provide advice about the Commission’s Open Internet Order (also known as the net neutrality order) – released its first report.  The Committee examined a host of thorny issues left unresolved in the FCC’s 2010 Order.  The overall, but unstated, conclusion was clear: in the almost three years since the Order, the FCC has done almost nothing to improve its understanding of the issues in question.  It, and the public, are almost three years older but far from three years wiser.

We Don’t Know

Nowhere was this inaction more striking than in the report’s discussion of usage-based billing and data caps.  The report’s observation that “much information about user understanding of caps and thresholds is missing” should be the subheading for the entire data caps section.  The section is shot through with clauses like “may require future monitoring,” “lack of definitive data,” “no definitive standard,” “these questions require more information,” “questions cannot be answered because there is no quantitative evidence,” and “little public analysis.”

The Committee is diverse, with representatives from ISPs, content creators, edge providers, consumers, and more.  Finding consensus on an issue as divisive as data caps would be hard under any circumstances.  But doing so in a largely data-free environment was probably doomed from the outset.  With no data to test assertions, the discussion could have been little more than an exchange of competing claims.

It Did Not Have to Be This Way

Data caps, and concerns about data caps, are far from new.  As early as 2011 Public Knowledge, along with our allies, sent a pair of letters to then-Chairman Genachowski urging the Commission to start collecting simple data on how data caps were implemented and administered. 

Having received no response from the Commission, in 2012 Public Knowledge went directly to the CEOs of major ISPs asking them to explain how and why they implemented their data caps.

In the meantime, Public Knowledge released two reports raising concerns about data caps and urging the Commission to take steps to better understand their usage and impact. 

As the report indicates, none of this resulted in either the FCC or the ISPs shedding any light on data caps.

What We Do Know

In this information vacuum, the report does take steps to explain some of what is happening with data caps.  Although it does not provide a source, it asserts that ISPs set data caps at levels that impact only a small fraction (1-2%) of customers.  Unfortunately, there is nothing to indicate that the caps, once set, are ever reevaluated as usage patterns change.

The report also dismisses the once-popular theory that data caps can be effectively used to manage network congestion, rightly pointing out that caps “provide no direct incentive to heavy users to reduce traffic at peak times.”

An Obligation to Ask

In the Open Internet Order, the FCC committed to continue to monitor the internet access service marketplace.  This report suggests that monitoring has been, at a minimum, inadequate.  Almost three years since the Order was first released, most of the debates remain the same.  Advocates like Public Knowledge continue to raise concerns.  ISPs continue to explain why those concerns are not justified.  And, in the absence of actual information or action by the FCC, the debate pretty much stops there.

The FCC has not even taken steps to act on data caps when it has an obligation to do so.  Over a year ago, Public Knowledge filed a petition asking the FCC to look into the way that Comcast was treating internet video delivered to Xbox 360 consoles and TiVos.  With nothing to show for the year since, today Public Knowledge sent a follow-up letter demanding action.

This report serves as a useful introduction to the issues that it confronts, and the Chairs and participants should be commended for producing it in the absence of useful information.

Very little in the report should come as news to those following these issues closely, or to those tasked with regulating them.  Issues have been outlined.  Viewpoints have been explained.  Questions have been listed.

If the FCC wants to be taken seriously, it must now take the next step.  Advance the debate.  Gather, and make public, actual information.

Original image by Flickr user EssG.