Expanding on The Switch’s 5 things that neither side of the broadband debate wants to admit.


Over at The Switch today, Timothy B. Lee offered his list of 5 things neither side of the broadband debate wants to admit.  His list strikes me as mostly reasonable, although I think that you could find at least one side of the debate to endorse most of them.  In any case, I wanted to take a moment to add a bit of color to the list, to try and give you a sense of how we think about some of these things.  Here are Lee’s things, followed by a bit of commentary.

1.    American wireless service is working pretty well.

This statement is fairly accurate, especially when the wireless market is compared to the wired broadband market.  We have four nationwide carriers, and some decisions by one carrier (like offering earlier upgrades) clearly push the other carriers to match.

I would add two additional data points to that statement, however.  Lee mentions that, in 2007, there was a great deal of concern about wireless carrier control over mobile software.  He then points out that Apple’s decision to open up the iPhone to third-party developers rendered the concern “obsolete.”  While no one would dispute that the state of mobile software has improved massively since 2007, I don’t know that I would go so far as to say that concerns about network operator control are necessarily obsolete.  Even today we have carriers preventing some types of services from running on phones connected to their networks.  And carriers are still working hard to prevent you from unlocking the phone that you own from their network.

Moreover, this state of working pretty well was not necessarily the wireless industry’s destiny.  As we have pointed out before, the FCC’s decisions to reject mergers and signal that it would support four competitive nationwide carriers have done a great deal to preserve the level of competition we have today.  That does not undercut Lee’s point, but it is worth keeping in mind.

2.    We’re falling behind on residential broadband.

It won’t come as a surprise that we’re quite willing to admit that one.  The theory of facilities-based competition between telephone companies, cable companies, satellite providers, and even power companies has turned out to be weak in practice.  While, as Lee points out, some like to point to DSL or satellite as viable competitors, the reality is that they are not.  Coming to terms with this state of affairs would go a long way toward developing rational broadband policies.

3.    We desperately need more broadband experimentation.

Again, we’ll admit this one too.  Our allies at the Institute for Local Self Reliance do fantastic work trying to help localities build their own local networks and push back against statewide bans on such experimentation.

On the flip side, we get wary that “experimentation” can also be interpreted as an excuse for existing ISPs to inject themselves into the value chain through data caps or special priority fast lanes.  In a world with limited broadband competition (see point 2), there are few market protections for consumers when an ISP’s idea of experimenting is exploiting its control over its customers.  This does not mean that ISPs should be prevented from experimenting.  But these types of concerns should be kept in mind when thinking about those experiments.

4.    Discrimination concerns are mostly about video streaming.

This strikes me as sort of, but not totally, true.  Video streaming gets a lot of attention these days.  In part, this is because video is one of the most data-intensive applications that most people will use on a regular basis.  Therefore it is an easy way for people to understand more general concerns about discrimination.  

Of course, there are also some video-specific discrimination concerns.  As Lee points out, most Americans connect to the internet through their cable provider.  And a cable company’s dual role as competitor to online video and keeper of a key ingredient in online video’s success can create some problems.

But video is not the only potential victim of discrimination.  The internet moves quickly, and new applications can seemingly emerge overnight.  While I don’t know what it will be, it is not hard to imagine a whole new generation of data-intensive applications that could be just as vulnerable to discrimination as online video is today.  So while the discussion is about video today, that doesn’t mean that it will be about video tomorrow.

5.    “Network neutrality” probably isn’t the answer.

Again, this is one that I agree with in part and disagree with in part.  I agree that network neutrality is not the only answer.  One of the reasons that we started WhatIsNetNeutrality.org was to remind people that net neutrality is actually a fairly specific thing, and that “net neutrality violation” is not just synonymous with “a bad thing happening on the internet.”  There are going to be a number of developments that raise concerns about internet access but have nothing to do with net neutrality.

That being said, net neutrality is an answer to some of those concerns.  Net neutrality rules helped to resolve the dispute surrounding AT&T’s decision to block the FaceTime app for some of its customers.  Perhaps more tellingly, last week Verizon explained that the FCC’s net neutrality rules were the only thing preventing it from trying to force some websites and services to pay to get special access to its customers.  I’m pretty confident that the existence of net neutrality rules is at least part of the reason that problematic behavior has migrated to other places in the network.


Those quibbles aside, Lee’s five things feel like they are in the right neighborhood to me.  Perhaps most importantly, they serve as an important reminder to move beyond the state of play in 2003.  We at Public Knowledge work hard to keep abreast of the evolving technical and business reality of the internet and to adjust our advocacy accordingly.  But it never hurts to get another reminder that things evolve and that we need to as well.

Original image by Flickr user SweetKaran.

Last week I had the opportunity to participate in the Open Hardware Summit, an event that is always a highlight of my conference year.  Now in its fourth year, the summit is a chance for the robust community that has grown up around open source hardware to come together, discuss what has been happening, and show off great advances.

By any measure, the open source hardware community is thriving.  Each year the summit gets bigger, the projects and products get more ambitious, and the barrier to entry is lowered.  But this year it did feel like the community was reaching an inflection point.  The world of open source hardware is expanding beyond its original borders, and that presents its own set of challenges and opportunities.  While I raised some of these during the panel that wrapped up the summit, I wanted to expand upon a few of them a bit more.

The State of Licensing and the Law

I touched on this in a blog post last year, and it was the topic of my presentation this year, but my discussions with people at the summit made me think about this further.  While open source hardware looks to open source software for inspiration and guidance, from a legal standpoint it must strike out on its own. Fundamentally this is because, in contrast to software, most hardware is not protected by any type of intellectual property.  

This can lead to a tension.  There are people who are interested in creating “sticky” licenses for open source hardware – licenses that would force people who build upon open source hardware to be open as well.  Unfortunately, without an intellectual property hook, those licenses simply are not enforceable.  

The way to resolve this tension is not to find a novel way to protect hardware with existing intellectual property law, or to create a new type of intellectual property right that is easier to apply to hardware.  For every person who used such a right to share designs, there would no doubt be 100 or 1,000 who would use it to reduce sharing.

Instead, the answer is to find alternatives.  Clear, non-legal descriptions of expectations may not be legally binding, but they will make it easy for good actors to play by the rules.  Legal enforceability is nice, but it is not the only way forward.  There can be a lot of power in publicly calling someone out for violating the rules.

Letting New People In and Allowing Existing People to Evolve

You cannot effectively enforce the rules until those rules are clear to everyone.  There have been important attempts to help codify what it means to be open source hardware.  Phillip Torrone has written the {unspoken} rules of open source hardware, a strong effort to document some of the informal understandings that have grown up over time.  The open source hardware definition and best practices, hosted and developed by the fantastic Open Source Hardware Association, are also a huge step forward.  We need more of this.

But to many people, the heart of open source hardware can still feel like a set of gut instincts, community expectations, and hidden rules that are all too easy to run afoul of.  If you are not someone who has already spent a good amount of time in the community, sometimes it can feel like there are unwritten rules just waiting to be inadvertently broken.

For those already deep into the open source hardware community, this may not feel like a problem.  But for a community interested in expanding and evolving, it could be.  

Here is one way – though certainly not the only one – that the lack of clarity can play out.  Over the course of the summit I spoke with a number of people who work for large companies.  These individuals (and presumably their companies) were excited, or at least intrigued, by open source hardware.  I had no reason to believe that their desire to help their companies go open source was anything but sincere.

But, for all their enthusiasm and interest, they were a bit concerned.  They understood that the open source hardware community is a passionate one, and that they would only get one chance to make a first impression.  But they were not totally sure how to ensure that first impression was a good one.  As a result, the fear of crossing a hidden line may keep them out of open source hardware entirely.

Depending on your perspective, this is either a good thing or a bad thing.  There are plenty of people who don’t really care if large companies engage with open source hardware.  And that is a totally reasonable position to have.  But for people who are at least intrigued by the idea of having large companies embrace open source hardware, this feels like a missed opportunity.  Giving newcomers – even corporate newcomers – greater certainty that the rules are clear will help expand the open source hardware world.

Moving Forward

The bad news is that neither of these challenges can be solved by an Arduino robot or flashing LEDs.  They are the kind of unglamorous infrastructure and community-building work that feels a bit like documentation – always behind something else on the to-do list.

Furthermore, there is nothing that says that anyone has to do any of it.  The world doesn’t end if the open source hardware community does not figure out an alternative licensing solution.  Similarly, nothing explodes if the only way to really “do” open source hardware is to hang around in the community for a while first.  

Nonetheless, I think not working harder on those things would be a shame.  But I could be wrong.  And I’m happy to be wrong.  My real hope is that if neither of these things happens, it is because of some sort of conscious decision not to pursue them.  While I think it would be a missed opportunity not to do them, the real missed opportunity would be to not do them without even realizing it.

According to Verizon, the FCC’s Open Internet Rules are the only thing preventing ISPs from becoming gatekeepers for the internet.  For background on yesterday’s hearing, start here; for a summary of the arguments, go here; and for a timeline of net neutrality, click here.


Yesterday Verizon explained, in the simplest terms possible, why net neutrality rules are so important: the rules are the only thing preventing ISPs from turning the internet into cable TV.

During yesterday’s oral argument, the judges and Verizon’s attorney discussed Verizon’s desire to enter into special commercial agreements with “edge providers.”  “Edge provider” is just another name for a website or service – everyone from Google, Netflix, and Facebook to the Public Knowledge policy blog.

These types of agreements – where ISPs charge edge providers extra just to be able to reach the ISP’s subscribers – are exactly the types of agreements that raise network neutrality concerns.  If Verizon – or any ISP – can go to a website and demand extra money just to reach Verizon subscribers, the fundamental fairness of competing on the internet would be disrupted.  It would immediately make Verizon the gatekeeper to what would and would not succeed online.  ISPs, not users, not the market, would decide which websites and services succeed.

Fortunately, we have rules that prevent this type of behavior.  The FCC’s Open Internet rules are the only thing stopping ISPs from injecting themselves between every website and its users.  But you don’t need to take Public Knowledge’s word for it:




That’s Verizon’s attorney yesterday.  “These rules” are the FCC’s Open Internet rules.  “Those commercial arrangements” are arrangements that would force edge providers to pay ISPs a toll every time one of the ISP’s subscribers wanted to access the edge provider’s content.  In other words, if your ISP doesn’t have a special deal with the website you want to visit (or if the website you want to visit is in a “premium” tier that you haven’t paid for), it may not work.

The FCC’s Open Internet rules prevent that type of corrupting market from developing.  Again, Verizon’s attorney:



All of this is good news for those of us in favor of net neutrality. The FCC’s Open Internet rules really are the only thing preventing ISPs from installing themselves as the gatekeepers of the internet.   And if you don’t believe us, just ask Verizon.


Bonus:  The excerpts above come from a slightly longer exchange between Verizon and the judges anchored in a discussion of standing.  The full exchange can be found below.  Remember that a “two-sided market” is one in which, in addition to charging subscribers to access the internet, ISPs get to charge edge providers on the internet to access subscribers as well.



And here is a link to a recording of the entire argument.

Any discussion must be built on facts. But the FCC has not asked the questions, and ISPs have not provided answers.


Earlier this week, the Open Internet Advisory Committee – a group formed by the FCC to provide advice about the Commission’s Open Internet Order (also known as the net neutrality order) – released its first report.  The Committee examined a host of thorny issues left unresolved in the FCC’s 2010 Order.  The overall, but unstated, conclusion was clear: in the almost three years since the Order, the FCC has done almost nothing to improve its understanding of the issues in question.  It, and the public, are almost three years older but far from three years wiser.

We Don’t Know

Nowhere was this inaction more striking than in the report’s discussion of usage-based billing and data caps.  The report’s observation that “much information about user understanding of caps and thresholds is missing” should be the subheading for the entire data caps section.  The section is shot through with phrases like “may require future monitoring,” “lack of definitive data,” “no definitive standard,” “these questions require more information,” “questions cannot be answered because there is no quantitative evidence,” and “little public analysis.”

The Committee is diverse, with representatives from ISPs, content creators, edge providers, consumers, and more.  Finding consensus on an issue as divisive as data caps would be hard under any circumstances.  But doing so in a largely data-free environment was probably doomed from the outset.  With no data against which to test claims, the discussion could be little more than competing assertions.

It Did Not Have to Be This Way

Data caps, and concerns about data caps, are far from new.  As early as 2011 Public Knowledge, along with our allies, sent a pair of letters to then-Chairman Genachowski urging the Commission to start collecting simple data on how data caps were implemented and administered. 

Without a response from the Commission, in 2012 Public Knowledge went directly to the CEOs of major ISPs asking them to explain how and why they implemented their data caps.

In the meantime, Public Knowledge released two reports raising concerns about data caps and urging the Commission to take steps to better understand their usage and impact. 

As the report indicates, none of this resulted in either the FCC or the ISPs shedding any light on data caps.

What We Do Know

In this information vacuum, the report does take steps to explain some of what is happening with data caps.  Although it does not provide a source, it asserts that ISPs set data caps at levels intended to affect only a small fraction (1-2%) of customers.  Unfortunately, there is nothing to indicate that the caps are ever reevaluated as usage patterns change once they have been set.

The report also dismisses the once-popular theory that data caps can be effectively used to manage network congestion, rightly pointing out that caps “provide no direct incentive to heavy users to reduce traffic at peak times.”

An Obligation to Ask

In the Open Internet Order, the FCC committed to continue to monitor the internet access service marketplace.  This report suggests that monitoring has been, at a minimum, inadequate.  Almost three years since the Order was first released, most of the debates remain the same.  Advocates like Public Knowledge continue to raise concerns.  ISPs continue to explain why those concerns are not justified.  And, in the absence of actual information or action by the FCC, the debate pretty much stops there.

The FCC has not even taken steps to act on data caps when it has an obligation to do so.  Over a year ago, Public Knowledge filed a petition asking the FCC to look into the way that Comcast was treating internet video delivered to Xbox 360 consoles and TiVos.  With nothing to show for the year since, today Public Knowledge sent a follow-up letter demanding action.

This report serves as a useful introduction to the issues that it confronts, and the Chairs and participants should be commended for producing it in the absence of useful information.

Very little in the report should come as news to those following these issues closely, or to those tasked with regulating them.  Issues have been outlined.  Viewpoints have been explained.  Questions have been listed.

If the FCC wants to be taken seriously, it must now take the next step.  Advance the debate.  Gather, and make public, actual information.

Original image by Flickr user EssG.

Making public domain works available in a public domain way respects copyright and spreads culture.


Yesterday’s news from the Getty Museum that they were making high-resolution images of 4,600 works in their collection available for free download should be celebrated by anyone who cares about art and culture. And it should also be celebrated by anyone who cares about copyright and the public domain, and who is thinking about what it means to be a modern museum dedicated to bringing people into contact with art.

Let’s get the art and culture part out of the way first.  One of the great things about museums is that they allow people who are not, say, massively rich oil magnates to access culture.  And one of the great things about the internet is that it allows people who are not physically near something to experience it for themselves.  Combining the two makes all sorts of sense.

Museums like the Getty house art, and some art is protected by copyright.  And Getty should be commended for recognizing that just because some art is protected by copyright does not mean that all of it is.  A huge portion of the art in the Getty’s collection is in the public domain.  That means that it is no longer protected by copyright and that no one – not Getty, not you, not me – needs permission to make a copy of it.

But there is a difference between being legally able to make a copy and being organizationally willing to make a copy. And there is also a difference between being organizationally willing to make a copy and being willing to make that copy freely available to the public. Getty made all the right choices in making the files available to the public in an unrestricted way.

Public Domain Means Anyone Can Use It For Anything

To be clear, making high-resolution scans of public domain art does not bring it back under copyright protection.  That means that Getty does not have any copyright in the files that it is making available, even though it surely spent a great deal of time and money making them.

But not having copyrights in images does not always stop people or entities from trying to assert copyright-like control over files. It is not hard to imagine Getty making these files available for non-commercial use only, or in a way that required attribution to Getty for use. While these requests could not be enforced via copyright, they could be enforced (at least somewhat) as part of the Terms of Service for the site.

Getty declined to do that.  They recognized that public domain means freely available to everyone for any purpose and did not try to set up extra restrictions on use.  It is true that they ask why you are using the images when you download them.  And, in their announcement, they did request that users attribute the source of the files to Getty.  But there is no red light that goes off when a user indicates that she will use the image commercially, and no pop-up demanding attribution under penalty of lawsuit.

In making all of these decisions, Getty recognized that part of its mission is to share its collection with the public. It also expressed confidence that sharing its collection digitally would not mean that people would stop coming to the museum to see the original works in person.

Going Beyond Images to 3D Files

Getty is not the first museum to make digital files of its artwork available to the public, but as one of our nation’s most prestigious institutions, its decision will hopefully push other museums to follow suit.  And as they examine their collections, those institutions should not stop at paintings and drawings.  Thanks to the expanding availability of 3D scanning and 3D printing, they can make their sculptures and installations available as well. The Art Institute of Chicago, the Metropolitan Museum of Art, and the Brooklyn Museum have started to do just that.  

Pretty soon you will be able to print out a copy of a Cézanne still life and hang it over a 9th-century bust of Hanuman.

Digital image courtesy of the Getty’s Open Content Program.