I like to listen to NPR when I work out but radio reception at my gym is horrible.  In order to resolve this problem I wanted to record NPR and download it onto my MP3 player.  Since I also travel a bit, I wanted to be able to access the recording on the road (I actually like finding local NPR stations, but sometimes I can’t get good reception).

If all you want to do is record a radio stream, this instructable is amazing, helpful, and all you really need.  If you want to go a bit nuts and go well beyond what’s necessary, here’s a more elaborate option.

Here are the steps I had in mind:

  • Record audio automatically
  • Edit out parts of the recording as needed
  • Transcode edited WAV recording into MP3
  • Make MP3 available anywhere with an internet connection
  • Automatically load edited MP3 onto MP3 player

Bonus challenge: I wanted to avoid making my (7+ year old) home laptop a core part of this setup because I didn’t want to have to keep it on all the time.  At one point I was using a local Ubuntu server I had running all of the time to do some of it, but it was a pretty underpowered VIA EPIA rig (good from an electricity standpoint, bad from a performance standpoint).  That made the MP3 conversion slow.  As a result, I moved the recording, transcoding, and editing to the cloud for $5/month.

Outline of process:

  1. Initiate WAV recording on remote server
  2. Stop WAV recording on remote server
  3. Initiate WAV recording on remote server (I’ll explain this below)
  4. Stop WAV recording on the remote server
  5. Combine the two WAV files created in steps 1–2 and 3–4 into a single WAV file
  6. Transcode the unified WAV file into a single MP3 file
  7. Download the MP3 file from the remote server to a local MP3 player
  8. Unmount the MP3 player

To put it a bit more simply - create the file on the remote server and then move it onto the local MP3 player.

What you will need (or at least what I used):

  • A cloud server running Ubuntu (I used Digital Ocean because they had great documentation to walk me through a process I didn’t fully understand.  You could just do this with a local computer). $5/mo
  • A Raspberry Pi (although this is not necessary if you have a Linux box that is always running - a Raspberry Pi has the advantage of having a low enough energy requirement that I don’t feel bad running it all the time). $40
  • An MP3 player.  I use a Sansa Clip for working out because it is small, relatively cheap, gets FM radio reception, and is much more sweatproof than my phone.  But anything that mounts as an external drive will work here. $40.

Step 1: Prepare the remote server

I’ve been running a local Ubuntu server for long enough that I almost kind of knew what I was doing.  Fortunately, Digital Ocean’s documentation was fantastic and helped me through the complicated parts.

After setting up the cheapest ubuntu server available (this is an embarrassingly low impact use of a remote server) there were a few things that I made sure were working:

First, set up SSH keys for access.  This is more secure and allows my local computer to log in automatically.  Tutorial here.  Life is slightly easier if you have generated the public keys for all of the local computers (including the Pi) you want to use to access the remote server ahead of time (the server setup will let you automatically include them), but only just.  It isn’t the end of the world if you do it after the server is set up.
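
If you haven’t made keys before, a minimal sketch of the process looks like this, run from each local machine (the server address is a placeholder for your own droplet’s IP):

# generate a key pair on the local machine (accepting the defaults is fine)
ssh-keygen -t rsa
# copy the public key to the remote server so future logins skip the password
ssh-copy-id root@IPAddressOfRemoteServer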

Second, make sure your server automatically installs security updates.  The server is a computer and it needs updates just like any other computer, but I’m not fooling myself into thinking that I will ever remember to do this.  Tutorial here.
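
On Ubuntu, one common way to do this (a sketch of the approach, not the only option) is the unattended-upgrades package:

# install the package that applies security updates automatically
sudo apt-get install unattended-upgrades
# turn it on (answer "Yes" when prompted)
sudo dpkg-reconfigure -plow unattended-upgrades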

Third, make sure lame, mplayer, and sox are installed on your server.
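
All three should be available from the standard Ubuntu repositories, so the install is quick (run as root, or prefix with sudo):

# refresh the package lists, then install the tools the scripts below rely on
apt-get update
apt-get install lame mplayer sox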

Step 2: Create some scripts

This section builds off of this amazing instructable from before.  As I mentioned above, I made this process a bit harder because I wanted to edit out about 10 minutes from the middle of the recording.  Why? Because it was a recurring segment that I didn’t really like and it’s my recording so I can do whatever I want.

Instead of recording everything and finding a way to automatically edit out the part I didn’t want, I decided it would be easier to record the first part, wait 10 minutes, and record the second part.  After that I pulled the two parts together and turned them from a (big) WAV file into a (small) mp3 file.  In order to do that, I needed to create 4 super small scripts.  (Note: there is probably a much more efficient way to do this).

streamrecord0 and streamrecord1

These are identical programs, except for the fact that they name their output “mystream0.wav” and “mystream1.wav” respectively.  One is used to record the first chunk and one is used to record the second chunk.  The entirety of the scripts is below.  Just copy each into a text editor, save it, and make it executable (as described here).

#!/bin/sh
NOW=$(date +"%b-%d-%y")
mplayer "http://wamu.org/streams/live/1/live.pls" -ao pcm:file=/tmp/mystream0.wav -vc dummy -vo null ;

and

#!/bin/sh
NOW=$(date +"%b-%d-%y")
mplayer "http://wamu.org/streams/live/1/live.pls" -ao pcm:file=/tmp/mystream1.wav -vc dummy -vo null ;

There is no line break in the line that starts with “mplayer.”  Replace the URL with the URL of the stream you want to record (this can often be found by looking for a streaming option called “MP3” or “PLS” - here’s WAMU’s page as an example.  It WILL NOT just be the main website of the station).  /tmp/mystream0.wav is the name of the output file and can be changed to whatever you want (just make sure you change the other scripts accordingly, and that you name the outputs of the two scripts different things or the second one will just save over the first one).
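
If you want to sanity-check a stream URL before scheduling anything, you can run the same mplayer command by hand (writing to a throwaway file name of my choosing, /tmp/test.wav) and stop it after a few seconds:

# record the stream to a temporary file, stop with Ctrl+C, then play back /tmp/test.wav to confirm it worked
mplayer "http://wamu.org/streams/live/1/live.pls" -ao pcm:file=/tmp/test.wav -vc dummy -vo null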

pkill

This script stops the streamrecord script, thus “finalizing” the file. It probably doesn’t even need to be a script.

pkill mplayer

soxer

This script takes the two recorded files, combines them, and turns them into a single mp3 file.

#!/bin/sh
pkill mplayer;
sox /tmp/mystream0.wav /tmp/mystream1.wav /tmp/mystream2.wav;
lame /tmp/mystream2.wav /home/mystreamB.mp3;

The first line ends any recording that is happening.  The second line (the sox line) takes the two recordings (mystream0.wav and mystream1.wav) and turns them into a single recording  (mystream2.wav).  The third line (the lame line) takes the single recording (mystream2.wav) and turns it into an mp3 (mystreamB.mp3).  Change that final directory (just /home/ in the example above) to wherever you want the file to go.
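
Before moving on to scheduling, double-check that all four scripts are executable.  Assuming you saved them in the /scripts/ directory that the cron entries in the next step use, that is one line:

# make the scripts executable so cron can run them
chmod +x /scripts/streamrecord0 /scripts/streamrecord1 /scripts/pkill /scripts/soxer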

Step 3: Schedule some scripts

Now you have some scripts, but they are just kind of hanging out.  They need to be scheduled to do us any good.  This is a job for cron.  Cron is a program that automatically runs scripts at scheduled intervals, which is exactly what you want to do.

Typing “crontab -e” in your command line will bring up the cron editor (there are also various programs that can guide you through the process).  My cron table looks like this:

30 4 * * * /scripts/streamrecord0
50 4 * * * /scripts/pkill
00 5 * * * /scripts/streamrecord1
30 6 * * * /scripts/soxer

The first number is minutes, the second is hours, and the next 3 are day of month, month, and day of week.  Since I want these to run every day, the last 3 are just *, which means “every time.”

As you can see, streamrecord0 (which lives in the /scripts/ directory) starts at 4:30am.  At 4:50am pkill stops it.  At 5:00am streamrecord1 starts.  At 6:30am soxer stops streamrecord1, merges the two WAV recordings, and turns the output into an mp3 (because that’s what is in the soxer script).  While everything else is pretty much instantaneous, transcoding almost two hours of WAV into mp3 takes about 3 minutes (which will vary by processor).  If you want to see how long your rig will take, just run “lame [combined wav file] [location and name of output file]” (like you see in the soxer script) from the command line and watch.
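
If you want an actual number instead of eyeballing it, prefixing the lame command with time will report how long the transcode took (the paths here match the soxer example, so adjust them if you changed anything):

# report how long the WAV-to-mp3 transcode takes on your server
time lame /tmp/mystream2.wav /home/mystreamB.mp3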

Congratulations!  Now you have the recording of your choice on a server far away.  It’s time to bring it home.

Step 4: Prepare your local computer (Raspberry Pi)

I explained why I am using a Raspberry Pi for this above, but you can use pretty much any computer that will be on when you need it for this.  If you are using a Raspberry Pi, set it up.  I followed Adafruit’s guide, specifically steps 1, 2, 3, 6, and 7 (although 7 just made my life a bit easier and isn’t strictly required).  If you didn’t when you set up the original server, you will also need to add the Pi’s public key (in ~/.ssh/id_rsa.pub) to the cloud server (in /root/.ssh/authorized_keys by default).
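
One way to do that from the Pi (assuming password logins to the server are still allowed at that point) is ssh-copy-id, which appends the key to authorized_keys for you:

# push the Pi's public key to the cloud server (placeholder IP), then confirm passwordless login works
ssh-copy-id root@IPAddressOfRemoteServer
ssh root@IPAddressOfRemoteServer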

Step 5: Create script on local computer

This one is pretty easy.  Assuming you have ssh set up correctly, you can just pull the file off of the cloud server automatically.  I also added a line to my script that unmounts my mp3 player so I can just pull it off of the pi in the morning (this means that if I don’t use the mp3 player that day, I need to detach and reattach it before I go to bed so it is actually mounted when the script runs).

Create this script the same way as the others (and don’t forget to make it executable).  I called it “pipull” because it pulls the file for the pi.

#!/bin/sh

scp USERNAME@IPAddressOfRemoteServer:/home/mystreamB.mp3 /media/0123-4567/PODCASTS/mystreamB.mp3;

umount /media/0123-4567;

In order for this to work for you, you need to replace “USERNAME” with your username for the remote server (hint: it is probably just “root”) and IPAddressOfRemoteServer with, you know, the IP address of your remote server.  Also, if you changed the output location of the mp3 you will need to change that part.

The /media/0123-4567/PODCASTS may be specific to your MP3 player as well.  Mine happens to mount at /media/0123-4567 and have a default directory of PODCASTS, but if yours does not you will need to make changes accordingly.
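
If you are not sure where your player mounts, plug it in and see what shows up under /media; the volume name you see there is what goes into the script:

# list mounted volumes to find the MP3 player's mount point
ls /media/
df -h | grep /media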

The scp part automatically transfers the mp3 from the server to your MP3 player (and overwrites any existing mp3 by the same name - like yesterday’s).  The umount unmounts the player.  Note: my MP3 player doesn’t recognize that the file is actually new and if I don’t tell it to play it from the beginning the file will begin playing where I ended the previous day. This can be slightly disorienting.

Step 6: Schedule your script

Just as with the server, the final step is to schedule your script to happen every day.  My crontab looks like this:

34 6 * * * /home/pi/scripts/pipull

Note that it runs at 6:34am.  That gives the remote server time to turn the WAV file into an MP3 before downloading.

And that’s it!  One final note.  In order to make my mp3 player mount automatically (instead of having it pop up a dialogue box asking me if I wanted to mount it) on the Pi, I opened the Pi’s file manager (via VNC) and went to Edit -> Preferences -> Volume Management.  There I checked “mount mountable volumes automatically” and “mount removable media automatically” and unchecked “show available options for removable media.”  I don’t know if all of those are strictly necessary, but they got the job done.



This post was originally published on Makezine.com.

Earlier this week news broke that the long running patent infringement lawsuit between 3D Systems and Formlabs is over. The two sides settled, agreeing to dismiss all claims and counterclaims and for each side to pay its own legal costs. Additionally, Formlabs will pay 3D Systems an 8% royalty on Formlabs sales. This development brings to an end one of the great legal dramas of the early desktop 3D printing era.  However, some questions remain.

The Original Suit

Around this time in 2012, 3D Systems – one of the largest and oldest players in 3D printing – sued Formlabs – then a startup desktop 3D printing company that had not yet fulfilled its Kickstarter orders. This was the first patent lawsuit of the desktop 3D printing era, and the first time that one of the established 3D printing companies decided to sue a desktop manufacturer.

As elaborated in this writeup of the original suit, the suit raised a number of questions. Unlike the lawsuit between Stratasys and Afinia that would come the following year, 3D Systems’ lawsuit against Formlabs did not necessarily have direct implications for other desktop printers. This is because Formlabs’ printer did not share its core process with most other desktop printers. But questions remained: did Formlabs find a way around 3D Systems’ patents? If so, would they be able to afford to defend themselves? If not, would 3D Systems use its patent portfolio to quash this next generation desktop 3D printing technology, or possibly enter the market themselves? Was this all just a way for 3D Systems to reduce the price of Formlabs before trying to buy the company?

We Wait

After the initial lawsuit was filed in South Carolina, not a lot happened in public. The parties went back and forth, eventually requesting and receiving a series of extensions from the court. Oftentimes, these extensions are granted when the parties are privately negotiating a settlement.

In November of 2013, 3D Systems voluntarily dismissed the case against both Formlabs and Kickstarter (oh yeah, for a period of time Kickstarter was involved in this suit as well. This could have had massive implications for Kickstarter as a platform for crowdfunding hardware, but nothing appears to have come of it so we can set it aside for now.), only to refile an amended complaint against Formlabs in the Southern District of New York. This amended complaint involved different patents, but the core of the complaint was the same. Very little of note happened publicly in the new case until now.

The other notable development during this time was the presence of a documentary camera crew. The crew was recording footage that would eventually be turned into the 3D printing documentary Print the Legend. A key storyline in Print the Legend is the suit between 3D Systems and Formlabs, and the interactions between 3D Systems CEO Avi Reichental and Formlabs CEO Max Lobovsky. Perhaps most interestingly in this context, at one point Avi Reichental explains to the camera that the entire lawsuit has caused 3D Systems to rethink their approach to intellectual property and that it was thinking about doing something new. Unfortunately, the film does not follow up on what that new approach could be and 3D Systems itself has said very little publicly to elaborate. What does it mean to rethink an approach to intellectual property, and what impact could that rethink have on this lawsuit and 3D printing more generally? We still don’t know.

The Settlement

It would be great if the settlement answered all of the questions raised by the lawsuit. Regrettably, on its face it does not. The public memorandum of the settlement is 3 pages long (most of which is taken up by headings, signatures, and whitespace) and almost totally devoid of information: the parties agreed to drop their claims, and everyone is paying their own expenses. Neither Formlabs nor 3D Systems saw fit to issue a press release or post a blog post in the wake of the settlement.

However, that settlement document was augmented by a filing quietly made by 3D Systems with the SEC.  In the filing 3D Systems disclosed that they granted Formlabs a license for the patents involved in the lawsuit in exchange for “8.0% of net sales of Formlabs products through the effective period.” The filing does not elaborate which products are included or how long the agreement will remain in place.

The combination of the settlement and the filing answers some questions, but leaves others. Perhaps first and foremost, we do not really know how strong 3D Systems’ case was against Formlabs. The parties could have settled because 3D Systems had a strong case and Formlabs knew it could not win. Alternatively, the parties could have settled because 3D Systems had a weak case but decided they would rather settle than make that weakness known to everyone.

We also do not know if this is part of some sort of new intellectual property strategy for 3D Systems. Maybe the 3D Systems patents are strong, but 3D Systems decided to license them to Formlabs as part of their new (undisclosed) strategy.  Maybe the 3D Systems patents are weak, but they are still offering some sort of cheap license as an insurance policy to Formlabs.  The combination of the significant-but-not-company-destroying 8% royalty rate with the possibility of 3D Systems trying a new intellectual property strategy makes it hard to read the settlement tea leaves with any precision.

Even with the uncertainty, we do know some things. Formlabs is not immediately shutting down as a result of this suit. Similarly, Formlabs was not acquired by 3D Systems. 3D Systems and Formlabs did not take this opportunity to announce a new partnership or joint venture. And 3D Systems did not roll out a new intellectual property strategy in conjunction with the settlement.

Looking Forward

Of course, the absence of an announcement does not mean that none of these things happened or will not happen in the near future. But the fact that they have not been announced yet, especially since the royalty rate was announced, is at least worth noting.

The Consumer Electronics Show is next month, and it might (or might not) be a place where we start to get more answers about how 3D Systems and Formlabs will walk away from this lawsuit, and how they will see each other going forward. Does either Formlabs or 3D Systems roll out something new? Does 3D Systems announce a desktop printer that competes directly with Formlabs? Does 3D Systems announce their new intellectual property policy?

Or does nothing happen, leaving us all to keep wondering?



This article was originally published on Makezine.com.

Recently, the Cooper Hewitt Smithsonian Design Museum released detailed 3D scan data for its home. And it has a pretty nice home. The museum, which is located in Manhattan and is dedicated to historic and contemporary design, is housed in the former mansion of Andrew Carnegie. Built around the turn of the last century when Carnegie was arguably the richest man in the world, the mansion itself makes Cooper Hewitt worth the visit. Now, you can just download the files and visit the mansion from the comfort of your sofa.

That in and of itself is a pretty cool thing and would probably be worth a stand alone blog post. Being able to download one of the world’s historic buildings, peek around at your leisure, and 3D print it however you want is fairly amazing. But that’s not really what this blog post is about.

Instead, this post is about how the Cooper Hewitt decided to make the mansion available to the public. Specifically, how they went out of their way to encourage people to do interesting things with the files without restriction. Hopefully, it will begin to serve as a model for other institutions working to make scans available to the public.

Let’s break down exactly what Cooper Hewitt did right:

Make it Clear That the File is Available

Let’s start at the top. This isn’t a look-but-don’t-touch page with some janky proprietary viewer that crashes your browser. Right out of the box, the page lets you know that the mansion is here for you to download – and use outside of Cooper Hewitt’s control.

Encourage Free Remix and Reuse


This is probably the best part of the entire site. A clear, unambiguous invitation for people not only to download the file and view it, but to remix and reuse it.

Give People the Data They Need, In Ways They Can Use It

Some people want as much data as they can get, with full color and texture. Why not set a level in your next videogame in the Carnegie Mansion? For them, there is a big FBX file available. Others are just looking to 3D print a model, and may not even have tools to easily work with the FBX file. Those users can download the STL file and get to printing.

License Permissively (Or, the One Part That Gives Me Pause)


Update: Seb Chan at Cooper Hewitt explains that they used CC0 because the status of 3D scan files is not necessarily clear in non-US jurisdictions and they wanted to make crystal clear that these files were not protected by copyright.  That’s a pretty good reason.

This is a super-permissive license. Why does it give me pause? Because, in all likelihood, Cooper Hewitt doesn’t really hold any copyright in the file to license in the first place. The building itself is not protected by copyright, and the state of the law right now does not give people who scan an object an independent copyright in that scan. As a result, even a super-restrictive license would probably not be enforceable.

But even then, Cooper Hewitt makes it easy to overlook that concern. First they use the Creative Commons Zero license, which effectively waives all rights in the file. I would argue that such a license is redundant, but at least it doesn’t introduce potentially unenforceable restrictions into the process.

Second, Cooper Hewitt makes a bunch of reasonable requests to users. Importantly, these are not requirements backed up by a legal threat. Instead, they are just telling you what they would appreciate you do. These requests are designed to make the dataset even more useful to everyone, let people know how Cooper Hewitt would appreciate they use the data, and make it easy for other people to track down the original files if they are interested.

Make Getting in Touch Easy


Making it easy for users to get in touch greatly increases the likelihood that Cooper Hewitt will learn about how they are using the models. That should make it easier for Cooper Hewitt to prioritize the scanning of other things, and to understand what is most useful to people.

It may also help get more things scanned and available. This may not be true for Cooper Hewitt, but often neat new things happen at institutions because a handful of people within the institution are passionately advocating for it to happen. In those cases, positive feedback and information about what users are doing with the files make it easier to push for even more scanning and releasing of files.

A Great Model

The Carnegie Mansion file release is a fantastic example of how to make architectural scans available to the public in a way that truly encourages engagement. Here’s to hoping that it is the first of many. Now stop reading this post and start downloading files. And be sure to let Cooper Hewitt know if you do something neat.

Top image credit: Flickr user Kent Wang



This post was originally published on Makezine.com.

As you may recall, back in November Stratasys (the company that owns MakerBot) sued Microboards Technology, LLC (the company that makes the Afinia desktop 3D printer) for patent infringement. Specifically, Stratasys accused Afinia of violating four of its patents.

This case is important beyond the fates of Stratasys and Afinia because the Stratasys patents could potentially cover many more desktop 3D printers. Last month, the court directed Stratasys to dismiss the accusation of infringement in relation to the patent that was related to controlling infill. In short, that means that one of the four patents from the complaint (the one that covers infill) is no longer in play. Furthermore, it is still possible that the patent could be invalidated entirely.

Background

In its original complaint, Stratasys accused Afinia of infringing on four of its patents: the ‘925 patent that related to controlling infill, the ‘058 patent that related to heated build environments, the ‘124 patent that related to Afinia’s extruder, and the ‘239 patent related to a seam layer concealment method.

In response to that complaint, Afinia generally challenged the validity of all four patents and accused Stratasys of abusing the patent system to try and monopolize the 3D printing market. The challenges to each of the patents were fact-specific, but this post will focus on the ‘925 patent because that is the one that was directed to be dismissed.

Discovery of Prior Art – An Old Stratasys Patent

The ‘925 patent covered ways to control the infill of a 3D printed object. In order to successfully receive a patent, an applicant must show that the invention is actually new. As a result, as part of the patent application process original applications are often narrowed in order to avoid “prior art.” Prior art can be anything that shows part of the proposed patent existing before the time of the patent application. If the proposed invention existed before the patent application, that shows that the invention wasn’t actually new (and therefore should not get a patent).

As part of its response, Afinia claimed to have found an example of prior art that should have prevented Stratasys from getting the ‘925 patent in the first place. But this wasn’t just any prior art. Afinia claimed that Stratasys itself already had a patent that included the invention that was being patented (again?) by Stratasys. That old patent should have prevented the ‘925 patent from ever being granted.

Accusation of Inequitable Conduct

Afinia didn’t stop there. If an old patent really did include the parts of what became the ‘925 patent, that would be enough to invalidate the ‘925 patent. However, the old patent isn’t just an old patent. It is Stratasys’ old patent. As such, Stratasys probably knew about it (or should have known about it) when it was filing the application for what became the ‘925 patent. Afinia claimed that withholding this information from the Patent Office constituted inequitable conduct and patent misuse. Essentially, Afinia was saying that Stratasys had a duty to tell the Patent Office about its old patent – and that failing to do so was a breach of good faith.

Stratasys Tries to Dismiss

After Afinia’s response, Stratasys decided to voluntarily dismiss the claims of infringement related to the ‘925 patent. However, Stratasys told Afinia that Stratasys would only dismiss the claims if Afinia also agreed to dismiss Afinia’s counterclaims – the ones where Afinia tried to have Stratasys’ patent declared invalid and accused Stratasys of inequitable conduct. Afinia declined this deal.

Soon thereafter, both Stratasys and Afinia sent short (2 page) letters to the Court explaining why Stratasys’ decision to voluntarily dismiss the infringement claims should or should not also require Afinia to withdraw the counterclaims.

Stratasys’ position was straightforward: since Stratasys was dismissing the original claim, Afinia should have to dismiss all of the counterclaims that flowed from the original claim.

Afinia responded that even if Stratasys withdrew the infringement claim, Afinia wanted to keep its counterclaims because 1) Afinia was worried that Stratasys could use the patent against them in the future (just because Stratasys withdraws the claim of infringement against this Afinia printer doesn’t mean that Stratasys couldn’t use it against a new printer – or a new defendant – in the future), 2) Afinia thought that exploring what happened with the ‘925 patent could help them uncover similar problems with the ‘058 patent, and 3) Afinia wanted to recover all of the attorney fees that had been billed in order to prepare the responses to Stratasys’ ‘925 complaint.

Stratasys Ordered to Dismiss

On July 11th, the Court directed Stratasys to voluntarily dismiss the claims related to the ‘925 patent. However, it did not order Afinia to dismiss its counterclaims. Instead, after Stratasys dismisses the ‘925 claims, the court will reconsider both arguments related to Afinia’s counterclaims. If nothing else, this removes a link between Stratasys’ complaint and Afinia’s counterclaim.

What Does This Mean?

The fact that Stratasys was willing to dismiss the claims related to the ‘925 patent at least suggests that they were worried about how Afinia responded to them. That might (but does not necessarily) suggest that there is some truth to Afinia’s counterclaims. At a minimum, right now we can be sure that the current Afinia printer will not be held to infringe the ‘925 patent.

However, at least right now Stratasys is free to use that patent against other printer manufacturers. That is why the Court’s decision on Afinia’s counterclaims will become so important. If the Court dismisses Afinia’s counterclaims, Afinia will not have an opportunity to invalidate the patent. However, if the Court allows the claims to go forward, it is at least possible that Afinia could succeed in invalidating Stratasys’ ‘925 patent. If the ‘925 patent was invalidated, Stratasys would not be able to use it against anyone.

In the original post about the Afinia response, one of the things that we did not know was how strong Afinia’s legal arguments were. As of now it appears that, at least in relation to the ‘925 patent, they were strong enough to convince Stratasys to dismiss the claims.

Unfortunately, we still don’t know how this lawsuit will impact the larger 3D printing industry and community. It increases the likelihood that Stratasys will not use its ‘925 patent against other companies, although it does not guarantee it. Beyond that, we do not know if Stratasys’ patents will be upheld, overturned, or if the issue will be settled without resolution. Furthermore, we do not know what Stratasys intends to do with any patents that survive this lawsuit.

One other thing that we know is that the full trial date is set for Dec. 1, 2015. Between now and then we can be pretty sure to see a ruling on Afinia’s ‘925-related counterclaims. And it is at least possible that we see more rulings on other Stratasys patents and Afinia counterclaims.

The final thing we know is that the growth of desktop 3D printing continues. And that’s a good thing.



Today Public Knowledge sent letters to AT&T, Sprint, T-Mobile, and Verizon as the first step in the process of filing open internet complaints against each of them at the FCC.  The letters address violations of the FCC’s transparency requirements, which are the only part of the open internet rules that survived court challenge.

Specifically, they call on AT&T, Sprint, and Verizon to make information available about which subscribers have their wireless data connections throttled and where that throttling happens.  The letter to T-Mobile calls on it to stop exempting speed test apps from its practice of throttling some users, thus preventing them from understanding actual network speeds available to them.

The transparency requirement imposes an obligation on ISPs to “publicly disclose accurate information regarding the network management practices … sufficient for consumers to make informed choices regarding use of such services.”  The carriers’ practices with regards to throttling fail to live up to that obligation.

This blog post explains Public Knowledge’s concerns with the policies of AT&T, Sprint, and Verizon first.  It then explains our concerns with T-Mobile.

AT&T, Sprint, and Verizon

Who is Eligible for Throttling?

All three target subscribers who use larger amounts of data each month.  All Sprint subscribers are eligible for throttling, while AT&T and Verizon limit throttling to those subscribers holding on to legacy unlimited data plans.  Once a subscriber hits a threshold, she may be throttled during times of network congestion.

AT&T sets that threshold at a specific level – either 3 GB per month or 5 GB per month, depending on the subscriber’s phone.  However, both Sprint and Verizon set that threshold at the far more opaque “top 5 percent of users.”  Sprint suggests that this number is around 5 GB a month, but admits that the actual number will fluctuate on a month-to-month basis.  Without access to network information, it is impossible for subscribers to translate “top 5%” into an actual data amount on their own.

In light of this, we are calling on both Sprint and Verizon to publish monthly information about where the 5% threshold is located.  Failure to do so prevents consumers from being able to make the informed choices regarding use of services referenced in the rule.

When and Where are Subscribers Throttled?

Regardless of the threshold, subscribers are not automatically throttled as soon as they reach it.  Instead, they are merely eligible for throttling.  Subscribers will only actually be throttled when they are attached to a congested part of the network.

Unfortunately, as with the 5% threshold, it is impossible for subscribers to know where those congested parts of the network might be.  That is why we are calling on AT&T, Sprint, and Verizon to publish real time information about network congestion events that would trigger throttling for eligible subscribers in order to comply with the rule.

Comparing Offerings

One of the reasons that transparency is such an important part of the open internet rules is that, to the extent consumers have competitive choices, it makes it possible to compare one carrier to another.  AT&T, Sprint, and Verizon’s current policies make that impossible.  If I am a heavy data user, I can’t easily compare how much data will actually trigger throttling.  Similarly, I can’t look at maps of the places that I frequent to determine if they are likely to be congested (and therefore throttled).  Transparency can fuel competition, which is why compliance with transparency rules is so important.

Open and Accessible Formats

In our letters, we emphasize the importance of making this information available in open and accessible formats.  This is important for at least two reasons.  First, it makes it easier for third parties to package that information in ways that are useful to subscribers.  Instead of being forced to rely on whatever alert system AT&T, Sprint, and Verizon decide to make available, releasing this information in an open and accessible format will allow outside developers to create tools that bring alerts to people in ways they prefer.

Second, open and accessible formats make it easier for outsiders to understand exactly how AT&T, Sprint, and Verizon are implementing network management practices.  This type of monitoring was one of the key drivers of the transparency rule in the first place.  As the FCC explained in its open internet order:

A key purpose of the transparency rule is to enable third-party experts such as independent engineers and consumer watchdogs to monitor and evaluate network management practices, in order to surface concerns regarding potential open Internet violations.

This type of evaluation is much easier when the data that fuels it is freely available.

T-Mobile

T-Mobile’s policies also raise transparency concerns, although those concerns flow from slightly different behavior.  When a T-Mobile customer reaches her clearly defined data cap, her connection is automatically throttled regardless of how congested the network is.  This sets T-Mobile apart from AT&T, Sprint, and Verizon.

However, that throttling is not universal.  In addition to exempting select music services, T-Mobile exempts speed testing services from throttling.  As a result, even when a customer is being throttled a speed test will indicate that she is connected to a fast 4G network.  Unfortunately, when she tries to use that network it is throttled to unknown “2G” speeds.

When a customer performs a speed test, she is rarely curious to explore the theoretical maximum speed of her network.  Instead, she wants to determine exactly what speed her network is actually providing her.  T-Mobile’s policy of exempting speed test apps makes it very hard for throttled T-Mobile subscribers to come by that information.

What Happens Now?

These letters are the first step in the open internet rule formal complaint process.  Once ten days have passed, we can file a formal complaint to the FCC.  At that point, AT&T, Sprint, T-Mobile, and Verizon will each have an opportunity to reply to our complaint, and we will have the opportunity to reply to that reply.

Of course, that process can stop at any time.  As soon as AT&T, Sprint, T-Mobile, and/or Verizon comply with the transparency rule, we will drop our complaint.

Image credit: Flickr user tiff_ku1