How Explaining Copyright Broke the Spotify Copyright System

This post originally appeared on the Engelberg Center blog.

This is a story of how Spotify’s sophisticated copyright filter prevented us from explaining copyright law.

It is strikingly similar to the story of how a different sophisticated copyright filter (YouTube’s) prevented us from explaining copyright law just a few years ago.

In fact, both incidents relate to recordings of the exact same event - a discussion between expert musicologists about how to analyze songs involved in copyright infringement litigation. Together, these incidents illustrate how automated copyright filters can limit the distribution of non-infringing expression. They also highlight how little effort platforms devote to helping people unjustly caught in these filters.

The Original Event

This story starts with a panel discussion at the Engelberg Center’s Proving IP Symposium in 2019. That panel featured presentations and discussions by Judith Finell and Sandy Wilbur. Ms. Finell and Ms. Wilbur were the musicologist experts for the opposing parties in the high profile Blurred Lines copyright infringement case. In that case the estate of Marvin Gaye accused Robin Thicke and Pharrell Williams of infringing on Gaye’s song “Got to Give it Up” when they wrote the hit song “Blurred Lines.”

The primary purpose of the panel was to have these two musical experts explain to the largely legal audience how they analyze and explain songs in copyright litigation. The panel opened with each expert giving a presentation about how they approach song analysis. These presentations included short clips of songs, both in their popular recorded version and versions stripped down to focus on specific musical elements.

The YouTube Takedown

After the event, we posted a video of the panel on YouTube and the audio of the panel in our Engelberg Center Live! podcast feed. The podcast is distributed on a number of platforms, including Spotify. Shortly after we posted the video, Universal Music Group (UMG) used YouTube’s ContentID system to take it down. This kicked off a review process that ultimately required personal intervention from YouTube’s legal team to resolve. You can read about what happened here.

The Spotify Takedown

A few months ago, years after we posted the audio to our podcast feed, UMG appears to have used a similar system to remove our episode from Spotify. On September 15, we received an email alerting us that our podcast had been flagged because it included third party content (recall that this content consists of clips of the songs the experts were analyzing for alleged infringement).

screenshot from the Spotify alert page with the headline "We found some third-party content in your podcast"

Using the Spotify review tool, we indicated that our use of the song was protected by fair use and did not need permission from the rightsholder.

screenshot from the Spotify alert page with the headline "We found some third-party content in your podcast" and information about challenging the accusation of infringement

We received a confirmation that our review had been submitted and hoped that would be the end of it.

screenshot from the Spotify alert page with the headline "Thank you for submitting this episode"

The Escalation

That was not the end of it. On October 12th, we received an email from Spotify stating that they were removing our episode because it was using unlicensed music and we had not responded to their inquiry.

screenshot from the Spotify alert email informing us that the episode has been removed from the service

The first part was true - we had not obtained a license to use the music. This is because our use is protected by fair use and we are not legally required to do so. The second part was not true - we had immediately responded to Spotify’s original inquiry. We immediately responded to this new message, noting that we had responded to their initial message, and asking if they needed anything additional from us.

Spotify Tries to Step Away

Four days later, Spotify responded by indicating that this was now our problem:

The content will remain taken down from the service until the provider reaches a resolution with the claimant. Both parties should inform us once they reach a resolution. We will make the content live upon the receipt of instructions from both parties and any necessary updates. If they cannot reach a resolution, we reserve the right to act at our discretion. The email address we have for the claimant is [redacted].

This is probably where most users would have given up (if they had not dropped off well before). However, since we are the center at NYU Law that focuses on things like online copyright disputes, we decided to push forward. In order to do that, we needed more information. Specifically, we needed the original notice submitted by UMG.

Why the Nature of the Notice is Relevant

We needed the original notice from UMG because our next step turned on the actual form it took.

Many people are familiar with the broad outlines of the notice and takedown regime that governs online platforms. Takedown actions initiated by rightsholders are sometimes called “DMCA notices” because a law called the Digital Millennium Copyright Act (or DMCA for short) created the process. While most of the rules are oriented towards helping rightsholders take things off the internet, there is a small provision - Section 512(f) - that can impose damages on a rightsholder who misrepresents that the targeted material is infringing (this provision was famously litigated in the “Dancing Baby” case).

In other words, the DMCA includes a provision that can be used to punish rightsholders who send baseless takedown requests.

We feel that the use of the song clips in our podcast is an exceptionally clear example of the type of use protected by fair use. As a result, if UMG ignored the likelihood that our use was protected by fair use when it filed an official DMCA notice against our podcast, we could be in a position to bring a 512(f) claim against them.

However, not all takedown notices are official DMCA notices. Many large platforms have established parallel, private systems that allow rightsholders to remove content without going through the formal DMCA process. These systems rarely punish rightsholders for overclaiming their rights. If UMG did not use an official DMCA notice to take down our content, we could not bring a 512(f) claim against them.

As a result, our options for pushing back on UMG’s claims were very different depending on the specific form of the takedown request. If UMG used an official DMCA notice, we might be able to use a different part of the DMCA to bring a claim against them. If UMG used an informal process created by Spotify, we might not have any options at all. That is why we asked Spotify to send us the original notice.

Spotify Ignores Our Request for Information

On October 12th, Spotify told us that in order to have our podcast episode reinstated we would need to work things out with UMG directly. That same day, we asked for UMG’s actual takedown notice so we could do just that.

We did not hear anything back. So we asked again on October 23rd.

And on October 26th.

And on October 31st.

On November 7th — 26 days after our episode was removed from the service — we asked again. This time, we sent our email to the same infringement-claim-response@ email address we had been attempting to correspond with the entire time, and added legal@. On November 9th, we finally received a response.

Spotify Asks Questions

Spotify’s email stated that our episode was “not yet subject to a legal claim,” and that if we wanted to reinstate our episode we needed to reply with:

  • An explanation of why we had the right to post the content, and
  • A written statement that we had a good faith belief that the episode was removed or disabled as a result of mistake or misidentification

This second element is noteworthy because it matches the language in Section 512(f) mentioned above.

We responded with a detailed explanation of the nature of the episode and the use of the clips, asserting that the material in question is protected by fair use and was removed or disabled as a result of a mistake (describing the removal as a “mistake” is fairly generous to UMG, but we decided to use the options Spotify presented to us).

Our response ended with another request for more information about the nature of the takedown notice itself. That request specifically asked if the notice was a formal notice under the DMCA, and explained that we were asking because we were considering our options under 512(f).

Clarity from Spotify

Spotify quickly replied that the episode would be eligible for reinstatement. In response to our question about the notice, they repeated that “no legal claim has been made by any third-party against your podcast.” “No legal claim” felt a bit vague, so we responded once again with a request for clarification about the nature of the complaint. The next day we finally received a straightforward answer to our question: “The rightsholder did not file a formal DMCA complaint.”

Takeaway

What did we learn from this process?

First, that Spotify has set up an extra-legal system that allows rightsholders to remove podcast episodes. This system does a very bad job of evaluating possible fair uses of songs, which probably means it removes episodes that make legitimate use of third party content. We are not aware of any penalties for rightsholders who target fair uses for removal, and the system does not provide us with a way to pursue penalties ourselves.

Second, like our experience with YouTube, it highlights how challenging it can be for regular users to dispute allegations of infringement by large rightsholders. Spotify lost our original response to the takedown request, and then ignored multiple emails over multiple weeks attempting to resolve the situation. During this time, our episode was not available on their platform. The Engelberg Center had an extraordinarily high level of interest in pursuing this issue, and legal confidence in our position that would have cost an average podcaster tens of thousands of dollars to develop. That cannot be what is required to challenge the removal of a podcast episode.

Third, it highlights the weakness of what may be an automated content matching system. These systems can only determine if an episode includes a clip from a song in their database. They cannot determine if the use requires permission from a rightsholder. If a platform is going to deploy these types of systems at scale, they should have an obligation to support a non-automated process of challenging their assessment when they incorrectly identify a use as infringing.

We do appreciate that the episode has finally been restored. You can listen to it yourself here, along with audio from all of the Engelberg Center’s events on our Engelberg Center Live! feed, wherever you get your podcasts (including, at least as of this writing, on Spotify). That feed also includes a special season on the unionization of Kickstarter, and on the Knowing Machines project’s exploration of the datasets used to train AI models.

This post originally appeared on the OSHWA blog.

Earlier this month OSHWA, along with Public Knowledge, the Digital Right to Repair Coalition, Software Freedom Conservancy, iFixIt, and scholars of property and technology law, filed a brief in the US Court of Appeals supporting the principle that owning something means that you get to decide how to use it. While that principle has been part of US (and, before there was a US, British) law for centuries, recent attempts to protect copyright have worked to undermine it.

We filed the brief in a case that EFF has brought on behalf of Dr. Matthew Green and Dr. bunnie Huang (someone who is well known to the open source hardware community) challenging the constitutionality of parts of the US law that prevent access to digital works. This issue is important to the open source hardware community because owning hardware is a critical part of building and sharing hardware.

The Issue

The case focuses on Section 1201 of the Digital Millennium Copyright Act (DMCA). The DMCA is probably best known for its Section 512 notice and takedown regime for works protected by copyright online (that’s the “DMCA” in a “DMCA Notice” or “DMCA Takedown” that removes videos from YouTube). Section 1201 is a different part of the law that creates legal protections for digital locks that limit access to copyright-protected works.

Basically, Section 1201 is a special law that makes it illegal to break DRM. And as long as DRM prevents you from using your toaster how you see fit, you don’t really own it.

These protections were originally designed to protect digital media – think the encryption of DVDs. However, since code is protected by copyright, and just about everything has code embedded in it, the 1201 protections undermine ownership rights in a huge range of things.

The brief illustrates how 1201-protected DRM undermines traditional rules of ownership in a number of different ways:

  • The right to repair: DRM blocks third-party parts or fixes, monopolizing the repair market or forcing consumers to throw away near-working devices.
  • The right to exclude: DRM spies on consumers and opens insecure backdoors on their computers, allowing malicious software to enter from anywhere.
  • The right to use: DRM prevents consumers from using their devices as they wish. A coffee machine’s DRM may prohibit the brewing of other companies’ coffee pods, for example.
  • The right to possess: Device manufacturers have leveraged DRM to dispossess consumers of their purchases, without legal justification.

The Challenge

This case is challenging Section 1201 on First Amendment grounds. As written, the law imposes content-based restrictions on speech. Tools for circumventing DRM can advise users on how and why to protect their property rights. Prohibiting them means that the law gives legal benefits to anti-ownership DRM software while criminalizing pro-ownership DRM-circumvention software.

Additionally, whatever one thinks about using DRM to protect digital media, the current law is not well tailored to achieve that goal. Today, DRM has been added to all sorts of devices that are very far from “digital media” in any reasonable sense. As the brief notes:

Devices like refrigerators have [DRM] not to stop rampant refrigerator copyright piracy, but so manufacturers can maintain market dominance, block competition, and force wasteful consumerism that boosts those manufacturers’ bottom lines.

These uses of DRM are protected by the current law but have nothing to do with protecting digital media.

What’s Next

This brief is part of an appeal in the U.S. Court of Appeals for the District of Columbia Circuit. It will be argued in the coming months. EFF’s page on the case is here.

We want to end this post with a huge thank you to Professor Charles Duan, the author of our brief. Professor Duan does a great job of bringing clarity to this important issue facing the open source hardware community. Plus, you always know any brief written by him will include citations reaching back centuries. This brief shows that case law reaching back to 1604 is still relevant to questions about ownership today!

Powerful ToS Hurt Companies and Lawyers, Not Just Users

I recently found myself reading Mark Lemley’s paper The Benefit of the Bargain while also helping a friend put together the Terms of Service (ToS) for their new startup. Lemley’s paper essentially argues that modern ToS - documents written by services to be one sided and imposed on users as a take-it-or-leave-it offer - should no longer be enforced as contracts because they have lost the fairness elements that make contracts contracts.

This argument, which I found fairly compelling, mostly focuses on the harm that the modern ToS regime does to users. ToS allow companies to impose a wide range of conditions on users that are beyond the scope of what users would ever reasonably agree to if they were offered a meaningful choice. That is in addition to the unreasonable expectation that everyday people are reading the millions of words worth of contracts they agree to in any given week.

Since I was reading this article while helping to draft ToS for a new service, I was also drawn to something the article did not mention: the ways in which these unilaterally imposed ToS hurt the other entities connected to them. Specifically, the lawyers who draft them and the companies that offer them.[1]

The Drafting Lawyers

I want to start with the most sympathetic characters in this drama: the lawyers hired to write ToS that are heavily skewed in favor of their client. Spare a thought!

It is possible to imagine such a lawyer who is pulled between two competing forces.

On one hand, they know various types of clauses in these agreements are Bad Policy, or at the least unfair to users. Such a lawyer might agree with Lemley that the world would be better with a default set of fair rules, and with a presumption that those rules could only change if the users made a meaningful choice. In their heart of hearts, they might want to draft ToS that they thought more fairly balanced the interests of the company and the company’s users.

On the other hand, that same lawyer is bound by some form of a duty to vigorously represent their client. These unbalanced terms are clearly in the company’s (at least short term) interest. Furthermore, they are essentially industry standard. As a result, this lawyer might worry that it could be a form of malpractice to fail to include the unbalanced terms in the ToS.

When faced with this tension, the lawyer might try to explain to their client that there are long term benefits to maintaining a balanced agreement with users, and therefore to leave out the most one-sided clauses. However, and there is no small irony here, it could be hard to give the client enough information so that they could meaningfully opt out of the tilted ToS arms race.

It is hard to understand the cost of giving up the short, medium, and long-term advantages provided by unbalanced ToS in service of a larger principle of social fairness. There are some startup founders who are interested in the discussion and have the bandwidth to actually process it. There are many more who will never prioritize the discussion enough to meaningfully consent to giving away the advantage.

Thus, the lawyer may face two options: (a) vigorously represent their client’s interest and support the bad equilibrium by writing an industry-standard, unbalanced ToS, or (b) get out of the ToS-writing business for anyone without the time and inclination to wade through the larger policy arguments.

Those feel like bad options!

The Company

This state of affairs can also harm the company, and not just in the “forcing your customers into unbalanced agreements is bad karma” kind of way.

As Lemley’s paper points out, in the offline world most businesses operate without any sort of formal written contracts at all (“you didn’t sign a contract governing the purchase of an apple from the grocery store.”). The ability to append a ToS to every digital transaction has helped to create an expectation that companies will do exactly that.

In order to do so, the companies need to take the time to write those ToS in the first place. Suddenly, instead of focusing on building and shipping their widgets, companies spend time with lawyers making sure that their ToS include all of the advantages they could possibly claim.

That’s probably a waste of time for just about everyone involved. This is made even more of a waste of time because, when faced with this new obligation, most small companies don’t hire lawyers (which would be one type of waste of resources). Instead, they tap someone without a legal background to semi-arbitrarily assemble their ToS from random corners of the internet (a slightly different type of waste of resources). And I suspect that person will increasingly outsource that task to generative AI (a third type of waste of resources).

These companies don’t really understand what unbalanced terms they are imposing on their users, what advantages they receive from them, and probably would not miss them if they were not there. They are just checking a box they don’t fully understand because it has ended up on the “things startups do” list.

I think all of these behaviors argue in favor of Lemley’s ultimate suggestion that we make a policy choice to move towards a default set of balanced rules and away from unbalanced ToS. Until then, the current system is so broken that it might make you feel bad for the lawyers and companies supposedly benefitting from it.

[1] I’ve been to enough “your paper should actually be my paper” peer reviews that I want to be clear that nothing in this post is intended to suggest that the Lemley paper is incomplete without including these points, or even that they did not occur to him. Word counts, and time, are limited in this life. Something that is interesting to me does not need to be interesting to everyone else.

hero image: a portion of Lawyers in dispute from the Met’s open access collection.

Why Can't You Own an Ebook?

Earlier this summer the Engelberg Center released a new study on ebook ownership. The study was motivated by a superficially simple question: “why can’t you own an ebook?”

It is very easy to pay money to access an ebook. However, if you want to own that ebook - and own in the traditional sense of ownership, giving you the ability to resell it, or give it away, or simply read it without someone tracking what you are up to - in the vast majority of cases you are out of luck. Instead, your money will buy you a license that gives you access to the ebook on a specific platform under terms that prevent you from doing the ownership things I just mentioned.

This is a fairly well documented problem. In fact, one of my co-authors co-wrote an entire book about it. And another one of my co-authors wrote an entire other book touching on the surveillance aspects of these types of agreements.

What is less well documented is the source of the problem. There are a number of different stakeholders in the world of ebooks, including publishers, authors, ebook platforms, readers, and libraries. If you talk to any one of those stakeholders about this market dynamic, you will often get two things. First, you will get a fairly detailed description of the incentives and constraints that influence how they come to the ebook market. Second, you will get less detailed projections about the incentives and constraints that the other stakeholders bring to the market.

One goal of our investigation was to pull together all of the detailed first person descriptions of incentives and constraints in order to replace the less detailed projections.

You can read the report yourself to see if we succeeded. Instead of rehashing our findings, I wanted to use this post to flag one other thing that emerged during the investigation (I will probably touch on this other thing during the conference the Engelberg Center will be hosting on this topic later on this month - you should come!).

The thing that emerged was this: the key question of why ebooks are licensed instead of sold was a pretty obscure one. When we talked to stakeholders, many of them did not fully understand what we were asking at first (this was not the fault of my third co-author, who did an amazing job with these interviews). They often understood some of the second order ramifications of licensing over ownership, and might have some thoughts about how that decision connected to something else they were worried about. However, they were rarely fluent in the specifics of licensing vs sales itself, and often needed some time to deeply engage with the questions before providing in-depth answers.

I am not mentioning this to belittle any of the stakeholders we talked to, or even to suggest that they should be fluent in the limitations of licensing as compared to sales. Instead, I think it is noteworthy because this lack of familiarity with the concepts might present an opportunity.

One of the ideas guiding our investigation was that all of the stakeholders involved were responding in good faith to the incentives and constraints they faced. No one was twirling their mustaches trying to eliminate ownership as an end in and of itself. Instead, the current (non-optimal, at least in my view) licensing-based market structure was the result of an equilibrium between all of those incentives and constraints.

The fact that the specifics of licensing vs ownership were secondary to so many of those stakeholders may be a sign of hope because it suggests that it is rarely part of anyone’s primary incentives and constraints. They don’t care about licensing vs ownership as an end. It just happens to be that licensing has become part of the way they achieve their primary goals. That could mean that there are other equilibria that balance everyone’s incentives that do not require licensing instead of ownership.

How likely is that? I’m not sure. But I’ll take the hope where I can get it.

AQI Sensor

The air quality in NYC has been . . . not great this summer. This presents an opportunity. Why settle for knowing that the air is not great in the city when you can know how not-great it is in your very own home?

Wow’d by Marty McGuire’s ability to check the air quality of his apartment on his phone, I decided I would copy him by building a worse implementation of his setup. The features of my version of this setup include:

  • Check the PM2.5 levels in my apartment
  • Check the local AQI
  • Display the PM2.5 and AQI levels with LEDs
  • Display the PM2.5 and AQI levels on a screen
  • Chart the current and historical PM2.5 levels on a website that I can access with my phone outside the house

In order to make this happen I needed:

The process is pretty straightforward. Every few minutes, the board checks the PM2.5 level. It then changes the LEDs at the top of the FunHouse accordingly, displays the number on the screen, and uploads the data to an Adafruit IO dashboard. At the same time, it also pulls the local AQI levels from the AQI API and updates the LEDs and screen accordingly.

The entire script is available in this repo. In addition to the script you will need:

  • The library files, which are also in the repo (make sure everything in the /lib folder in the repo is in the /lib folder on the board)
  • A secrets.py file to hold your wifi and Adafruit IO credentials. You can learn how to create that here.
  • A separate keys.py file. This is for the AQI API. I’m sure there’s a way to incorporate this into the secrets.py file, but I couldn’t quite figure out the syntax. In any event, the entire contents of the file are AQI_URL = "the_url_with_your_api_key". You can create your URL by playing around with the AirNow API.
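For illustration, a hypothetical keys.py is just that single line. The URL below is a made-up placeholder (note the example.com domain), not a working endpoint or key — substitute the query URL you build with the AirNow API:

```python
# keys.py - this is the entire file
# The URL below is a placeholder; replace it with your real AirNow query URL,
# which embeds your API key.
AQI_URL = "https://example.com/aq/observation/?format=application/json&API_KEY=YOUR_KEY"
```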

The Code

Here’s a walkthrough of the code.

This first block just imports all of the libraries and sets up the FunHouse object. If you are running into problems with libraries, make sure you have the library in your /lib folder on the device.

import time
import board
import busio
from digitalio import DigitalInOut, Direction, Pull
from adafruit_pm25.i2c import PM25_I2C
from adafruit_funhouse import FunHouse

#for the external API
import adafruit_requests as requests
import keys
import socketpool
import ssl
import wifi

#for the light sensor mapping
from adafruit_simplemath import map_range


reset_pin = None

funhouse = FunHouse(default_bg=None)

The next chunk creates a few more objects, turns on wifi, and sets up variables for the AQI download. It uses the existing funhouse network elements to set up the requests object.

# Create library object, use 'slow' 100KHz frequency!
i2c = board.I2C()
# Connect to a PM2.5 sensor over I2C
pm25 = PM25_I2C(i2c, reset_pin)

print("Found PM2.5 sensor, reading data...")

# Turn on WiFi
funhouse.network.enabled = True
print("wifi on")
# Connect to WiFi
funhouse.network.connect()
print("wifi connected")

#these variables set up the requests object
pool = socketpool.SocketPool(wifi.radio)
requests = funhouse.network._wifi.requests

These are the variables for the various sensor readings.

#IO Stuff
FEED_2_5 = "2pointfive"
TEMP_FEED = "temp"
HUM_FEED = "humidity"
TEMPERATURE_OFFSET = (
    3  # Degrees C to adjust the temperature to compensate for board produced heat
)

These are the RGB color values as variables to make them slightly easier to work with.

#Colors
BLACK = (0,0,0)
GREEN = (0,228,0)
YELLOW = (255, 255, 0)
ORANGE = (255,40,0)
RED = (255,0,0)
PURPLE = (143,63,151)
MAROON = (126,0,35)

This next bit creates the text blocks that will be used to display the readings. The first and third ones are the reading labels. The second and fourth are the actual readings. They are much larger. The last line pushes them to the screen.

#text
funhouse.display.show(None)
pm_label = funhouse.add_text(
    text_scale = 2, text_position = (10,10), text_color = 0x606060
)
pm_value = funhouse.add_text(
    text_scale = 12, text_position = (90,60), text_color = 0x606060
)
aqi_label = funhouse.add_text(
    text_scale = 2, text_position = (10,110), text_color = 0x606060
)
aqi_value = funhouse.add_text(
    text_scale = 12, text_position = (60,180), text_color = 0x606060
)
funhouse.display.show(funhouse.splash)

With all of that set up, the rest of the code is in a While loop that just runs forever.

First, it reads the PM2.5 data from the sensor

    try:
        aqdata = pm25.read()
        # print(aqdata)
    except RuntimeError:
        print("Unable to read from sensor, retrying...")
        continue

Then it pushes the PM2.5, temp, and humidity data to Adafruit IO. The temp and humidity come from sensors that are built into the FunHouse.

# Push to IO using REST
    try:
        funhouse.push_to_io(FEED_2_5, aqdata["pm25 env"])
        funhouse.push_to_io(TEMP_FEED, funhouse.peripherals.temperature - TEMPERATURE_OFFSET)
        funhouse.push_to_io(HUM_FEED, funhouse.peripherals.relative_humidity)
        print("data pushed")
    except:
        print("error uploading data, moving on")

This section downloads the AQI data from the API. It reads the target URL from the keys.py file, downloads the payload, parses the JSON, and assigns the AQI value to a new variable. The AQI API website is not the most user friendly UX in the world, but I did end up narrowing my query down to a single monitoring station. currentAQI will be set to 0 if there is an error, which serves as a signal that something is wrong.

    # get remote AQI data
    #https://learn.adafruit.com/adafruit-funhouse/getting-the-date-time
    target_URL = keys.AQI_URL
    
    try:
        response = requests.get(target_URL, timeout = 10)
        #print(response)
        jsonResponse = response.json()
        print(jsonResponse[0]["AQI"])
        currentAQI = jsonResponse[0]["AQI"]
    except:
        currentAQI = 0
        print('request failed')
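Off the device, the parsing step can be sketched against a hardcoded AirNow-style payload. The sample values and the extract_aqi helper name below are invented for this sketch; the script only relies on the payload being a list whose first observation has an "AQI" key:

```python
# Hypothetical sample of an AirNow-style response: a list of observations.
# Only the "AQI" field matters to the script; the rest is illustrative.
sample_response = [{"ParameterName": "PM2.5", "AQI": 42}]

def extract_aqi(payload):
    """Pull the AQI value from the first observation; 0 signals an error,
    matching the error behavior of the try/except in the script."""
    try:
        return payload[0]["AQI"]
    except (IndexError, KeyError, TypeError):
        return 0

print(extract_aqi(sample_response))  # 42
print(extract_aqi([]))               # 0 - empty payload treated as an error
```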

The next section sets the text on the display. The labels are just one line each to set the text.

The actual reading display is more complicated. Using the AirNow AQI calculation data sheet, the if/elif statements set the color of the reading to match the alert color.

    #text stuff
    #set the label
    funhouse.set_text("PM 2.5", pm_label)
    #set the color for the pm2.5 reading
    if aqdata["pm25 env"] <= 12.0:
        funhouse.set_text_color(GREEN, pm_value)
    elif 12.0 < aqdata["pm25 env"] <= 35.4:
        funhouse.set_text_color(YELLOW, pm_value)
    elif 35.4 < aqdata["pm25 env"] <= 55.4:
        funhouse.set_text_color(ORANGE, pm_value)   
    elif 55.4 < aqdata["pm25 env"] <= 150.4:
        funhouse.set_text_color(RED, pm_value)
    elif 150.4 < aqdata["pm25 env"] <= 250.4:
        funhouse.set_text_color(PURPLE, pm_value)
    elif 250.4 < aqdata["pm25 env"] <= 500.4:
        funhouse.set_text_color(MAROON, pm_value)
    #set the reading
    funhouse.set_text(aqdata["pm25 env"], pm_value)
    #set the aqi label
    funhouse.set_text("AQI", aqi_label)
    #set the aqi color
    if currentAQI <= 50.0:
        funhouse.set_text_color(GREEN, aqi_value)
    elif 50.0 < currentAQI <= 100:
        funhouse.set_text_color(YELLOW, aqi_value)
    elif 100 < currentAQI <= 150:
        funhouse.set_text_color(ORANGE, aqi_value)   
    elif 150 < currentAQI <= 200:
        funhouse.set_text_color(RED, aqi_value)
    elif 200 < currentAQI <= 300:
        funhouse.set_text_color(PURPLE, aqi_value)
    elif 300 < currentAQI <= 500:
        funhouse.set_text_color(MAROON, aqi_value)
    funhouse.set_text(currentAQI, aqi_value)
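As a side note, these if/elif ladders encode the AirNow breakpoints, and the same mapping could be written once as a threshold table. A sketch (with the color tuples repeated so it runs on its own; color_for is a name invented here):

```python
# Color tuples repeated from the script above for self-containment
GREEN = (0, 228, 0)
YELLOW = (255, 255, 0)
ORANGE = (255, 40, 0)
RED = (255, 0, 0)
PURPLE = (143, 63, 151)
MAROON = (126, 0, 35)

# (upper bound, color) pairs per the AirNow technical assistance document
PM25_BREAKPOINTS = [(12.0, GREEN), (35.4, YELLOW), (55.4, ORANGE),
                    (150.4, RED), (250.4, PURPLE), (500.4, MAROON)]
AQI_BREAKPOINTS = [(50, GREEN), (100, YELLOW), (150, ORANGE),
                   (200, RED), (300, PURPLE), (500, MAROON)]

def color_for(value, breakpoints):
    """Return the color for the first breakpoint the value falls under."""
    for upper, color in breakpoints:
        if value <= upper:
            return color
    return MAROON  # off-scale readings get the worst color

print(color_for(10, PM25_BREAKPOINTS))  # (0, 228, 0) - green
print(color_for(75, AQI_BREAKPOINTS))   # (255, 255, 0) - yellow
```

This keeps the breakpoints in one place, so adjusting a threshold or adding a band only touches the table.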

After working through the display, things move on to the five LEDs built into the top of the FunHouse. First I create variables and set them all to off.

    #LED Stuff
    #https://www.airnow.gov/sites/default/files/2020-05/aqi-technical-assistance-document-sept2018.pdf
    #set all of the LEDs to black by default
    led_0 = BLACK
    led_1 = BLACK
    led_2 = BLACK
    led_3 = BLACK
    led_4 = BLACK
    print ("2.5 = " + str(aqdata["pm25 env"]))

Then the first two are updated based on the local PM2.5 reading and the last two are updated based on the local AQI.

    #update first two leds depending on the 2.5 reading
    if aqdata["pm25 env"] <= 12.0:
        led_0 = GREEN
        led_1 = GREEN
    elif 12.0 < aqdata["pm25 env"] <= 35.4:
        led_0 = YELLOW
        led_1 = YELLOW
    elif 35.4 < aqdata["pm25 env"] <= 55.4:
        led_0 = ORANGE
        led_1 = ORANGE   
    elif 55.4 < aqdata["pm25 env"] <= 150.4:
        led_0 = RED
        led_1 = RED 
    elif 150.4 < aqdata["pm25 env"] <= 250.4:
        led_0 = PURPLE
        led_1 = PURPLE
    elif 250.4 < aqdata["pm25 env"] <= 500.4:
        led_0 = MAROON
        led_1 = MAROON

    #update the last two LEDs based on AQI
    if currentAQI <= 50.0:
        led_3 = GREEN
        led_4 = GREEN
    elif 50.0 < currentAQI <= 100:
        led_3 = YELLOW
        led_4 = YELLOW
    elif 100 < currentAQI <= 150:
        led_3 = ORANGE
        led_4 = ORANGE   
    elif 150 < currentAQI <= 200:
        led_3 = RED
        led_4 = RED
    elif 200 < currentAQI <= 300:
        led_3 = PURPLE
        led_4 = PURPLE
    elif 300 < currentAQI <= 500:
        led_3 = MAROON
        led_4 = MAROON

Finally, the new colors are pushed to the LEDs themselves

    #update the LEDs
    funhouse.peripherals.set_dotstars(led_0, led_1, led_2, led_3, led_4)

The LEDs are pretty bright. That’s helpful during the day, but it is a bit much at night. The next bit dims the LEDs based on ambient light. It uses the light sensor built into the FunHouse and maps the readings to a 0-1 scale, which is the scale used to control the brightness of the LEDs.

It is possible to control the brightness of the LEDs individually (the syntax is (R,G,B,Brightness)), but in this case I want all of them to be the same level.

    #set LED brightness so they aren't super bright at night
    #map_range works (inputnumber, orig min, orig max, new min, new max)
    #light reading bounds appear to be ~1800-54000 (real world is closer to 1800-5000)
    #goal here is to make the lights bright when it is bright and dim when it is dark
    brightness = map_range(funhouse.peripherals.light, 1800, 6000, 0, 1)
    print(brightness)
    funhouse.peripherals.dotstars.brightness = brightness
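For anyone reading along without the hardware, map_range does a clamped linear interpolation. A pure-Python sketch of the equivalent behavior (on the device you would use the real adafruit_simplemath.map_range, which as I understand it also constrains the output to the target range):

```python
def map_range(x, in_min, in_max, out_min, out_max):
    """Linearly map x from [in_min, in_max] to [out_min, out_max],
    clamping the result to the output range."""
    mapped = (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min
    lo, hi = min(out_min, out_max), max(out_min, out_max)
    return max(lo, min(hi, mapped))

# Using the same bounds as the brightness line above:
print(map_range(1800, 1800, 6000, 0, 1))  # 0.0 - dark room, dimmest
print(map_range(3900, 1800, 6000, 0, 1))  # 0.5 - halfway
print(map_range(6000, 1800, 6000, 0, 1))  # 1.0 - bright room, full brightness
```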

Finally, everything just waits for 2 minutes before starting over again.

    time.sleep(120)