
Today the FCC released its Open Internet Notice of Proposed Rulemaking – basically a draft version of its net neutrality rules.  While there is plenty to say about them, in this post I’d like to focus on one part – fast lanes.  Read on for all (or at least some) of the reasons they are a bad idea.

Although the FCC appears to have backed away from its original embrace of internet fast lanes and slow lanes in response to massive public outcry, and Chairman Wheeler made a passionate presentation about the value of a free and open internet at today’s meeting, the proposed rules still imagine a world where some sort of bifurcation of the internet is allowed.  Specifically, they assume two tiers of service.  Within the “minimum level of guaranteed access” (something that could also be considered the “slow lane”), the proposed rules would prevent at least most blocking and discrimination.  But beyond that unknown minimum level of guaranteed access (think of it as the “fast lane”), discrimination would be allowed as long as it was “commercially reasonable.”

That is a huge problem. Fundamentally, this is because there is no real way to have an internet divided between fast lanes and slow lanes that also brings all of the benefits that we have come to expect from our current, single, open internet.  Why is this so?  Let’s consider the ways.

1. Internet for Haves, Internet for Have-Nots

A fast lane/slow lane internet adds a new, unwelcome element to innovation online.  Traditionally, new services and websites succeeded or failed based on the quality of their offering.  But in a fast lane/slow lane internet, success goes to the services and websites that can afford to pay off the biggest ISPs.  Service beyond a “minimum level” is often where innovation happens.  It shouldn’t only be available to some.

2. ISPs Get to Decide Who Wins

Of course, “available to some” assumes that ISPs are even interested in doing business with a new service or site.  The proposed rules would give ISPs a lot of flexibility to pick winners and losers online, and simply ignore some players altogether.  As long as its decision is commercially reasonable, an ISP could just freeze someone out of the fast lane before they even started competing.

3. The Slow Lane Will Always be Bad

This is just common sense.  If you are charging people to get into the fast lane, it has to be worth the money.  To put it another way, the slow lane has to be bad enough to justify paying to get out of it.  If the slow lane really is good enough for anything that you want to do online, why would anyone ever pay to get into the fast lane?  The result is that the slow lane will always be just inadequate enough to push a critical mass of users towards the (paid) fast lane.

4. Investment Flows to the Fast Lane

Going forward, ISPs will have a choice.  Should they invest in the fast lane or the slow lane?  Since they get to charge extra for one and not the other, that becomes an easy decision.  You can be sure that any new innovation that would make the network faster or more responsive will debut in the fast lane.  And it may not ever trickle down into the slow lane.

5. Hope You Enjoy the 2014 Internet – It Just Became the High Water Mark

The proposed rules suggest that we don’t need to worry about a fast lane, because the slow lane (the “minimum level of access”) will always be good enough to protect innovation.  In fact, splitting the internet would all but guarantee that “good” internet circa 2014 becomes the baseline well into the future.  Put another way, the slow lane is stuck at today’s average level of service.

That is because benchmarking an “adequate” slow lane becomes all but impossible once you have split the internet.  In the absence of a unified open internet, how could you set the slow lane standard?  One option would be to look to other countries.  But the United States is already well behind other countries in broadband speed, and even today there is always an apologist willing to explain why it is OK that the US is falling behind.  There is no reason to think that would change in the future.  Even if you were willing to use an international benchmark, what would it be?  “The United States shall always have at least the 16th fastest broadband in the world”?  That’s nothing to strive for.

Another way to approach it would be to assume some sort of annual rate of improvement, or some sort of rate that marked “adequate” broadband speed improvements.  But what would the rate be?  Until 2008, the FCC defined “broadband” as 200 kbps (that’s kbps, not Mbps).  In 2010, it updated that figure to 4 Mbps.  Those are not numbers that people should have been satisfied with, even at the time.  And even a speed increase rate that looks ambitious today could be rendered glacial by an unexpected innovation.

More importantly, since the slow lane would undermine the virtuous cycle of broadband innovation (high speeds encourage new services, which themselves encourage higher speeds, which starts the cycle over again), we may not even see the innovation that would push up speeds on a single, neutral network anymore.  As venture capitalist Fred Wilson memorably dramatized, we won’t even know what we are missing when innovative startups that push the network never get funded.  Those startups don’t file a complaint with the FCC before they fizzle out.  They just disappear.  You may never miss what you never know, but it will be a shame when the fast lane/slow lane internet settles into a comfortable dotage where real innovation is just too much trouble.

The Good News

The good news is that the FCC’s proposal is just that – a proposal.  There is still time to change it, and to convince the FCC to create real net neutrality rules that prevent paid prioritization and internet fast lanes.  But that opportunity won’t last forever.  So act now, and tell the FCC where you stand.


Last month during Sunshine Week, the White House Office of Science and Technology Policy released a memorandum directing federal agencies to develop a plan in the next six months to make their scientific collections more available to the public. This is a great move on its own – federal agencies collect all sorts of interesting information on behalf of the American people, and it is important to make that information as easy to access as possible. But more specifically, it could be a first step toward creating a central repository of all of the government’s 3D scans. And the government has a lot of things to be scanned.

Laser Cowboys and Fossilized Whales

First, the memo recognizes the pioneering work that the Smithsonian Institution’s “laser cowboys” have been doing in digitizing its physical collection. For the past few years, the Smithsonian has been creating detailed 3D scans of physical objects in its collection and making them available to the public for viewing and download. What they have managed to make available so far is a tantalizing taste of what kind of objects could possibly be available if the US Government digitized everything it had. But it is really only the tip of the iceberg (or, perhaps more accurately, the tip of the fossilized whale) of the Smithsonian’s collection (137 million artifacts and counting). And the Smithsonian’s collection is one of countless collections spread throughout the entire U.S. government.



This is my first “real” (beyond blinking an LED) Arduino project.  The problem: our bar blocks the light switch for the light over the living room table.  The solution: control the light by touching a picture around the corner.  Touch the picture once to turn the light on, and touch it again to turn the light off.

Supplies:

1 Arduino (I used Adafruit’s Arduino Micro without headers): $22.95

Bare Conductive electric paint: $24.95

1 330 kΩ resistor

1 usb power supply: $5.95

1 usb cable (A to micro B): $3.95

1 arduino-controllable relay (I used the Sparkfun Beefcake Relay Control Kit): $7.95

1 solderable breadboard: $4.95

1 washer (optional)

1 magnet (optional)

Arduino CapacitiveSensor (capsense) library
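Tying the supplies above together, here is a minimal sketch of the kind of code involved. It assumes the standard Arduino CapacitiveSensor library, with the resistor between the send and receive pins and the electric paint wired to the receive side. The pin numbers, touch threshold, and debounce delay are my assumptions for illustration and would need tuning for any real build.

```cpp
// Minimal sketch: toggle a relay-controlled light when a painted
// capacitive touch sensor is tapped. Requires the Arduino
// CapacitiveSensor library. Pin choices and threshold are assumptions.
#include <CapacitiveSensor.h>

const int RELAY_PIN = 7;            // pin driving the relay board (assumed)
const long TOUCH_THRESHOLD = 1000;  // raw reading that counts as a touch (tune this)

// Resistor between pin 4 (send) and pin 2 (receive);
// the conductive paint connects to the receive pin.
CapacitiveSensor touchSensor = CapacitiveSensor(4, 2);

bool lightOn = false;
bool wasTouched = false;

void setup() {
  pinMode(RELAY_PIN, OUTPUT);
  digitalWrite(RELAY_PIN, LOW);  // start with the light off
}

void loop() {
  long reading = touchSensor.capacitiveSensor(30);  // 30 samples per reading
  bool touched = reading > TOUCH_THRESHOLD;

  // Toggle only on the rising edge, so one touch = one toggle
  // rather than toggling continuously while a finger rests on the paint.
  if (touched && !wasTouched) {
    lightOn = !lightOn;
    digitalWrite(RELAY_PIN, lightOn ? HIGH : LOW);
    delay(250);  // crude debounce
  }
  wasTouched = touched;
}
```

The edge-detection (`touched && !wasTouched`) is the important design choice: without it, holding a finger on the picture would rapidly flip the relay on and off instead of toggling once per touch.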


This post originally appeared in Slate


It is a problem when Verizon’s decisions prevent subscribers from having a good Netflix experience. But the bigger problem may be the FCC’s ignorance about the underlying dispute.

Today brings us another round in what has been something of a long-simmering fight between Netflix and Verizon.  This morning, David Raphael posted a writeup of his interactions with his ISP, Verizon.  In it he documents how his Netflix streaming quality has become “awful compared to just a few weeks ago.”  After some investigation, Mr. Raphael traced the problem back to the Amazon commercial cloud service used by Netflix.  When he confronted a Verizon customer representative, the representative “admitted” that Verizon is limiting bandwidth from cloud services.

A Real Problem

What to make of all this?  First of all, while hugely problematic and growing in importance, this issue is not new and is probably not directly connected to the recent Open Internet decision.  As John Bergmayer pointed out in a blog post last June, there has been a brewing interconnection problem between Netflix and Verizon for some time.

Of course, the fact that the problem is not new does not mean that the problem is not a problem.  It is a huge problem.  Nor does it mean that the recent Open Internet decision will not embolden ISPs to expand behaviors they had been testing in the past.

As John pointed out, it is an ISP’s job to deliver traffic to their customers.  That is what they get paid to do.  And part of doing that job is investing in the facilities required to deliver customers the content that they want.  An abstractly fast internet connection is all well and good, but if it is slow in getting you the content you want the headline number doesn’t matter very much.

This problem is also thematically similar to net neutrality.  There are some possible policy distinctions (again, neatly summarized by John last summer) but at a consumer level the policy distinctions start to slip away.  For a consumer, the reality is that decisions made by her ISP prevent her from accessing the service of her choice.

Opacity Hinders a Market Solution

Even those who are inclined to seek a market solution to this problem should be concerned.  Let’s assume, at least for the sake of argument, that consumers have a number of ISPs to choose from.  In situations like this, it is very hard for the average consumer to figure out if switching ISPs would even help.  Without network engineering skills like Mr. Raphael’s it can be very hard to tell what the problem is.  Is it the ISP?  Is it Netflix?  Is it something else entirely?  Without an easy way to assign blame for the awful picture, how is a consumer supposed to know which part of the equation to switch? 

The same opacity also creates a disincentive for either Verizon or Netflix to invest in solving this problem.  If customers are not sure who to blame, it is unlikely that they will know who to reward for the investment that fixes the problem.  Transparency about business and technical practices won’t solve any underlying competition problems.  But, more information would at least help us get a handle on exactly what is going wrong.

Verizon Has an Incentive to Slow Netflix

Thus far, everything in this post could apply to any ISP.  But Verizon has two additional features that make its behavior even more worthy of a second look.  First, Verizon also sells a cable television package.  Second, Verizon co-owns Redbox Instant, an online video streaming service.

On at least some level, both of these services compete with Netflix.  That gives Verizon, the company that controls the connections that Netflix relies upon to reach Netflix’s subscribers, a huge incentive to make Netflix work not so well.  Every mediocre Netflix experience is one more reason to stay within the Verizon universe.  In light of that, any problems that Netflix customers have on Verizon (or any ISP that also has a TV offering) deserve extra scrutiny.

Where is the FCC?

But what is really going on here?  It can be hard to say from the outside.  This type of interconnection problem has been brewing for some time, and all indications are that it will continue to assert itself.  With that, interconnection issues start to feel a lot like data caps: net neutrality-related but also bigger than net neutrality, huge impact on services that compete with ISP video offerings, potential technical justifications that are hard for outsiders to evaluate.

But the biggest similarity is that the FCC has essentially ignored both of these issues. One might think that an issue as big as interconnection would warrant some sort of FCC attention.  Even investigating interconnection enough to be able to figure out how to apportion blame for this sort of problem would be a huge step forward.  But, thus far, we have seen nothing from the FCC.  Chairman Wheeler has expressed interest in taking hard looks at how ISPs are handling their network traffic, but has thus far not actually acted to do so.

This is the most recent manifestation of an interconnection problem, but it will not be the last. If the FCC wants to take its role protecting an open internet seriously, it would be well served to begin educating itself about what is actually happening.

Original image by Flickr user 24oranges.
