Last week I had the opportunity to participate in the Open Hardware Summit, an event that is always a highlight of my conference year. Now in its fourth year, the summit is a chance for the robust community that has grown up around open source hardware to come together, discuss what has been happening, and show off great advances.
By any measure, the open source hardware community is thriving. Each year the summit gets bigger, the projects and products get more ambitious, and the barrier to entry is lowered. But this year it did feel like the community was reaching an inflection point. The world of open source hardware is expanding beyond its original borders, and that presents its own set of challenges and opportunities. While I raised some of these during the panel that wrapped up the summit, I wanted to expand upon a few of them a bit more.
The State of Licensing and the Law
I touched on this in a blog post last year, and it was the topic of my presentation this year, but my discussions with people at the summit made me think about this further. While open source hardware looks to open source software for inspiration and guidance, from a legal standpoint it must strike out on its own. Fundamentally this is because, in contrast to software, most hardware is not protected by any type of intellectual property.
This can lead to a tension. There are people who are interested in creating “sticky” licenses for open source hardware – licenses that would force people who build upon open source hardware to be open as well. Unfortunately, without an intellectual property hook, those licenses simply are not enforceable.
The way to resolve this tension is not to find a novel way to protect hardware with existing intellectual property law, or to create a new type of intellectual property right that is easier to apply to hardware. Any such right would be available to everyone, not just the open source community: for every person who used it to share designs, there would no doubt be 100 or 1,000 who would use it to restrict sharing.
Instead, the answer is to find alternatives. Clear, non-legal descriptions of expectations may not be legally binding, but they will make it easy for good actors to play by the rules. Legal enforceability is nice, but it is not the only way forward. There can be a lot of power in publicly calling out those who violate the rules.
Letting New People In and Allowing Existing People to Evolve
You cannot effectively enforce the rules until those rules are clear to everyone. There have been important attempts to help codify what it means to be open source hardware. Phillip Torrone has written the {unspoken} rules of open source hardware, a strong effort to document some of the informal understandings that have grown up over time. The open source hardware definition and best practices, hosted and developed by the fantastic Open Source Hardware Association, are also a huge step forward. We need more of this.
But to many people, the heart of open source hardware can still feel like a set of gut instincts, community expectations, and hidden rules that are all too easy to run afoul of. For someone who has not already spent a good amount of time in the community, it can feel like there are unwritten rules just waiting to be inadvertently broken.
For those already deep into the open source hardware community, this may not feel like a problem. But for a community interested in expanding and evolving, it could be.
Here is one way, though certainly not the only one, that this lack of clarity can play out. Over the course of the summit I spoke with a number of people who work for large companies. These individuals (and presumably their companies) were excited, or at least intrigued, by open source hardware. I had no reason to believe that their desire to help their companies go open source was anything but sincere.
But, for all their enthusiasm and interest, they were a bit concerned. They understood that the open source hardware community is a passionate one, and that they would only get one chance to make a first impression. But they were not totally sure how to make sure that first impression was a good one. As a result, the fear of crossing a hidden line may keep them out of open source hardware entirely.
Depending on your perspective, this is either a good thing or a bad thing. There are plenty of people who don’t really care if large companies engage with open source hardware. And that is a totally reasonable position to have. But for people who are at least intrigued by the idea of having large companies embrace open source hardware, this feels like a missed opportunity. Giving newcomers – even corporate newcomers – greater certainty that the rules are clear will help expand the open source hardware world.
Moving Forward
The bad news is that neither of these challenges can be solved by an Arduino robot or flashing LEDs. They are the kind of unglamorous infrastructure and community-building work that feels a bit like documentation – always behind something else on the to-do list.
Furthermore, there is nothing that says that anyone has to do any of it. The world doesn’t end if the open source hardware community does not figure out an alternative licensing solution. Similarly, nothing explodes if the only way to really “do” open source hardware is to hang around in the community for a while first.
Nonetheless, I think it would be a shame not to work harder on these things. But I could be wrong, and I'm happy to be wrong. My real hope is that, if neither of these things happens, it is because of a conscious decision not to pursue them. While I think skipping them would be a missed opportunity, the real missed opportunity would be to skip them without even realizing it.
But For These Rules…
According to Verizon, the FCC’s Open Internet rules are the only thing preventing ISPs from becoming gatekeepers for the internet. For background on yesterday’s hearing, start here; for a summary of the arguments, go here; and for a timeline of net neutrality, click here.
Yesterday Verizon explained, in the simplest terms possible, why net neutrality rules are so important: the rules are the only thing preventing ISPs from turning the internet into cable TV.
During yesterday’s oral argument, the judges and Verizon’s attorney discussed Verizon’s desire to enter into special commercial agreements with “edge providers.” Edge providers are just another name for websites and services – everyone from Google, Netflix, and Facebook to the Public Knowledge policy blog.
These types of agreements – where ISPs charge edge providers extra just to be able to reach the ISP’s subscribers – are exactly the types of agreements that raise network neutrality concerns. If Verizon – or any ISP – could go to a website and demand extra money just to reach its subscribers, the fundamental fairness of competing on the internet would be disrupted. It would immediately make Verizon the gatekeeper of what would and would not succeed online. ISPs – not users, not the market – would decide which websites and services succeed.
Fortunately, we have rules that prevent this type of behavior. The FCC’s Open Internet rules are the only thing stopping ISPs from injecting themselves between every website and its users. But you don’t need to take Public Knowledge’s word for it:
That’s Verizon’s attorney yesterday. “These rules” are the FCC’s Open Internet rules. “Those commercial arrangements” are arrangements that would force edge providers to pay ISPs a toll every time one of the ISP’s subscribers wanted to access the edge provider’s content. In other words, if your ISP doesn’t have a special deal with the website you want to visit (or if the website you want to visit is in a “premium” tier that you haven’t paid for), it may not work.
The FCC’s Open Internet rules prevent that type of corrupting market from developing. Again, Verizon’s attorney:
All of this is good news for those of us in favor of net neutrality. The FCC’s Open Internet rules really are the only thing preventing ISPs from installing themselves as the gatekeepers of the internet. And if you don’t believe us, just ask Verizon.
Bonus: The excerpts above come from a slightly longer exchange between Verizon and the judges anchored in a discussion of standing. The full exchange can be found below. Remember that a “two-sided market” is one in which, in addition to charging subscribers to access the internet, ISPs get to charge edge providers on the internet to access subscribers as well.
And here is a link to a recording of the entire argument.
After All These Years, We Still Don’t Know Much About Data Caps
Any discussion of data caps must be built on facts. But the FCC has not asked the questions, and ISPs have not provided answers.
Earlier this week, the Open Internet Advisory Committee – a group formed by the FCC to provide advice about the Commission’s Open Internet Order (also known as the net neutrality order) – released its first report. The Committee examined a host of thorny issues left unresolved in the FCC’s 2010 Order. The overall, but unstated, conclusion was clear: in the almost three years since the Order, the FCC has done almost nothing to improve its understanding of the issues in question. The Commission, and the public, are almost three years older but far from three years wiser.
We Don’t Know
Nowhere was this inaction more striking than in the report’s discussion of usage based billing and data caps. The report’s observation that “much information about user understanding of caps and thresholds is missing” should be the subheading for the entire data caps section. The section is shot through with clauses like “may require future monitoring,” “lack of definitive data,” “no definitive standard,” “these questions require more information,” “questions cannot be answered because there is no quantitative evidence,” and “little public analysis.”
The Committee is diverse, with representatives from ISPs, content creators, edge providers, consumers, and more. Finding consensus on an issue as divisive as data caps would be hard under any circumstances. But doing so in a largely data-free environment was probably doomed from the outset. With no data to test claims, the discussion could have been little more than competing assertions.
It Did Not Have to Be This Way
Data caps, and concerns about data caps, are far from new. As early as 2011 Public Knowledge, along with our allies, sent a pair of letters to then-Chairman Genachowski urging the Commission to start collecting simple data on how data caps were implemented and administered.
Without a response from the Commission, in 2012 Public Knowledge went directly to the CEOs of major ISPs asking them to explain how and why they implemented their data caps.
In the meantime, Public Knowledge released two reports raising concerns about data caps and urging the Commission to take steps to better understand their usage and impact.
As the report indicates, none of this resulted in either the FCC or the ISPs shedding any light on data caps.
What We Do Know
In this information vacuum, the report does take steps to explain some of what is happening with data caps. Although it does not provide a source, it asserts that ISPs set data caps so that they affect only a small fraction (1-2%) of customers. Unfortunately, there is nothing to indicate that those caps are ever reevaluated as usage patterns change once they have been set.
The report also dismisses the once-popular theory that data caps can be effectively used to manage network congestion, rightly pointing out that caps “provide no direct incentive to heavy users to reduce traffic at peak times.”
An Obligation to Ask
In the Open Internet Order, the FCC committed to continue to monitor the internet access service marketplace. This report suggests that monitoring has been, at a minimum, inadequate. Almost three years since the Order was first released, most of the debates remain the same. Advocates like Public Knowledge continue to raise concerns. ISPs continue to explain why those concerns are not justified. And, in the absence of actual information or action by the FCC, the debate pretty much stops there.
The FCC has not even taken steps to act on data caps when it has an obligation to do so. Over a year ago, Public Knowledge filed a petition asking the FCC to look into the way that Comcast was treating internet video delivered to Xbox 360 consoles and TiVos. With nothing to show for the year since, today Public Knowledge sent a follow-up letter demanding action.

This report serves as a useful introduction to the issues that it confronts, and the Chairs and participants should be commended for producing it in the absence of useful information.
Very little in the report should come as news to those following these issues closely, or to those tasked with regulating them. Issues have been outlined. Viewpoints have been explained. Questions have been listed.
If the FCC wants to be taken seriously, it must now take the next step. Advance the debate. Gather, and make public, actual information.
Original image by Flickr user EssG.
Getty Shows What it Means to be a Modern Museum
Making public domain works available in a public domain way respects copyright and spreads culture.
Yesterday’s news from the Getty Museum that they were making high-resolution images of 4,600 works in their collection available for free download should be celebrated by anyone who cares about art and culture. And it should also be celebrated by anyone who cares about copyright and the public domain, and who is thinking about what it means to be a modern museum dedicated to bringing people into contact with art.
Let’s get the art and culture part out of the way first. One of the great things about museums is that they allow people who are not, say, massively rich oil magnates to access culture. And one of the great things about the internet is that it allows people who are not physically near something to experience it for themselves. Combining the two makes all sorts of sense.
Museums like the Getty house art, and some art is protected by copyright. Getty should be commended for recognizing that just because some art is protected by copyright does not mean all of it is. A huge portion of the art in the Getty’s collection is in the public domain. That means it is no longer protected by copyright and that no one – not Getty, not you, not me – needs permission to make a copy of it.
But there is a difference between being legally able to make a copy and being organizationally willing to make a copy. And there is also a difference between being organizationally willing to make a copy and being willing to make that copy freely available to the public. Getty made all the right choices in making the files available to the public in an unrestricted way.
Public Domain Means Anyone Can Use it For Anything
To be clear, making high-resolution scans of public domain art does not bring it back under copyright protection. That means that Getty does not have any copyright in the files that it is making available, even though it surely spent a great deal of time and money making them.
But not having copyrights in images does not always stop people or entities from trying to assert copyright-like control over files. It is not hard to imagine Getty making these files available for non-commercial use only, or in a way that required attribution to Getty for use. While these requests could not be enforced via copyright, they could be enforced (at least somewhat) as part of the Terms of Service for the site.
Getty declined to do that. They recognized that public domain means freely available to everyone for any purpose and did not try to set up extra restrictions on use. It is true that they ask why you are using the images when you download them. And, in their announcement, they did request that users attribute the source of the files to Getty. But there is no red light that goes off when a user indicates that she will use the image commercially, and no pop-up demanding attribution under penalty of lawsuit.
In making all of these decisions, Getty recognized that part of its mission is to share its collection with the public. It also expressed confidence that sharing its collection digitally would not mean that people would stop coming to the museum to see the original works in person.
Going Beyond Images to 3D Files
Getty is not the first museum to make digital files of its artwork available to the public, but as one of our nation’s most prestigious institutions, its decision will hopefully push other museums to follow suit. And as they examine their collections, those institutions should not stop at paintings and drawings. Thanks to the expanding availability of 3D scanning and 3D printing, they can make their sculptures and installations available as well. The Art Institute of Chicago, the Metropolitan Museum of Art, and the Brooklyn Museum have started to do just that.
Pretty soon you will be able to print out a copy of a Cézanne still life and hang it over a 9th-century bust of Hanuman.
Digital image courtesy of the Getty’s Open Content Program.
3 Things We Learned About Publishers from the Apple E-Book Price Fixing Opinion
One good, one bad, and one undetermined thing about the world view of book publishers.
Yesterday, a federal court found Apple liable for antitrust violations in connection with the creation of its digital bookstore. The decision is full of interesting information about antitrust law and emerging markets. But in addition to that, the opinion – drawing on internal emails and in-court testimony – offers a compelling description of how publishers see their world.
At least three things jump out:
1. Everyone at the Top Understands That There is a Relationship Between Availability and Piracy
On an abstract level, this point will come as no surprise to regular readers of this blog or anyone involved in discussions around digital copyright for the past decade or so. Digital locks do not combat piracy. Suing your customers does not combat piracy. The best – and really only – way to combat piracy is to offer the public an easy way to buy your products at a reasonable price.
While this passes for common knowledge in many places, the opinion makes it clear that the people at the highest level of Apple (less surprising) and the publishing houses (more surprising) recognize the connection as well. The CEO of Macmillan described windowing – the practice of delaying the release of e-books for weeks or months after the physical version was released – as “really bad” because it encouraged piracy. The CEO of Penguin called windowing “entirely stupid” and admitted that it “actually makes no damn sense at all really.” An internal Penguin study showed that the sales of a windowed e-book never recovered – if a book was delayed people simply didn’t buy it. Macmillan and Random House also called windowing “a terrible, self-destructive idea.”
Steve Jobs pushed the publishers on this point, telling one executive “Without a way for customers to buy your ebooks, they will steal them. This will be the start of piracy and once started there will be no stopping it.”
While no one would argue that major publishers have fully come to terms with this lesson (see, e.g., their self-destructive attachment to DRM), it is encouraging that they are at least aware enough of it to discuss it.
2. Publishers Have Not Come to Terms with Pricing Digital Goods
A large part of the negotiations between Apple and the publishers was the tension between the publishers’ desire to increase the price of e-books and Apple’s recognition that high prices would lead to lower sales. While it is clear that both Apple and the publishers were happy to raise e-book prices above Amazon’s, it does appear that Apple was concerned with constraining what it viewed as the publishers’ instinct to raise prices as high as possible.
At least on some level, this tension could flow from each side’s relative comfort with the idea of digital marketplaces. Apple has been running iTunes and its App Store for years and recognizes that increased prices can reduce sales and ultimately lead to lower profits. While this is true in every market – physical or digital – that operates according to fundamental economic principles, it can be exacerbated in digital markets. Unlike with physical books, there is essentially no marginal cost to producing an extra e-book. Better still, an e-book does not even come into existence until someone has purchased it. With their rush to increase prices as much as Apple would let them, and their concern about protecting the “perceived value” of books among the public, the publishers seem unable to consider the possibility that lower prices could increase their profits in the long term.
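To make that point concrete, here is a minimal sketch in Python. The linear demand curve and every number in it are entirely hypothetical, invented for illustration rather than taken from the opinion; the only point it demonstrates is that when each additional copy costs nothing to produce, profit is just revenue, so a lower price wins whenever it attracts enough additional buyers.

```python
# Hypothetical illustration only: the demand curve and all numbers below are
# invented for the sake of the example; none of them come from the opinion.

def units_sold(price, max_buyers=100_000, walk_away_price=20.0):
    """Simple linear demand: each extra dollar of price scares off a fixed share of buyers."""
    return max(0, round(max_buyers * (1 - price / walk_away_price)))

def profit(price, marginal_cost=0.0):
    """For a digital good the marginal cost is roughly zero, so profit is essentially revenue."""
    return units_sold(price) * (price - marginal_cost)

for price in (14.99, 12.99, 9.99):
    print(f"${price:>5.2f}: {units_sold(price):>6,} copies, profit ${profit(price):>10,.0f}")

# Under this made-up curve, the cheapest price earns the most:
#   $14.99: 25,050 copies, profit ~$375,500
#   $12.99: 35,050 copies, profit ~$455,300
#   $ 9.99: 50,050 copies, profit ~$500,000
```

The specific numbers are beside the point; what matters is that, with nothing saved by selling fewer copies, “higher price” and “higher profit” are not the same thing.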
3. Publishers Are Very Concerned About Protecting Their Existing Physical Market
The terms of the deal between Apple and the publishers are expressed explicitly in terms of the price of physical books – especially hardcover new releases. Internal emails suggest that one of the publishers’ concerns about the Amazon pricing model was that it would undercut the prices and perceived value of physical books.
Depending on how you view the future of publishing, this is either savvy or short-sighted. If you assume that book publishing is going the way of music and movies before it – away from physical products and towards digital downloads – this concern about protecting the hardcover market reads as self-destructive. Don’t hobble the future to protect the past – you will just end up killing both!
On the other hand, if you think books are different, it can feel much more reasonable. If the future of books is a world where physical books exist side-by-side with e-books, it makes sense not to undermine the physical market for the sake of the digital one. That future may require a more thoughtful consideration of the relationship between physical and digital beyond “digital is 20% cheaper,” but we are still in early days. All this means that while the first two lessons are fairly easy to place, this one could end up being the most interesting.
Original image by Flickr user Biblioteken i Östergötland.