New Met Partnership Shows Open Access Can Coexist With Revenue?

Last month the Metropolitan Museum of Art announced a series of partnerships that put famous works from the Met’s collection on limited edition items from a range of brand partners. This announcement may end up being one of the most important developments for museum open access this year (and it’s been a busy year for museum open access).

In 2017 the Met made all images of public domain works in its collection available under a Creative Commons Zero (CC0) license. As commenters quickly pointed out, this open access program means that none of the just-announced brand partners needed to partner with the Met in order to incorporate works from the Met’s collection into their products.

And yet, these brands still decided to enter into a formal partnership with the Met to put the Met’s collection on their products. In doing so, they validated an important path for open access success. These new partnerships show how the power of an institution’s brand can help it monetize an open access program.

A Recurring Open Access Question - Can it Make Money?

Many cultural institutions have revenue-related internal discussions when they are considering creating open access programs. While many people argue that open access should be a core part of the mission of cultural institutions regardless of its impact on revenue, it is clear that revenue implications are part of many institutions’ evaluations of open access.

Oftentimes, the discussion is tied to concerns that open access will eliminate revenue that the institution currently receives from licensing its collection. Within this context, it is first important to remember that the role of revenue from image licensing is a disputed one. A Mellon Foundation study from 2004 suggests that many institutions actually lose money on licensing because the costs of operating the program exceed any revenues associated with it. Also, the idea of licensing works already in the public domain strikes many as problematic, if not illegitimate.

Nonetheless, it can be helpful for open access proponents to have a way to address revenue concerns related to open access programs. While there are a number of ways to approach this question, many of the easiest options to conceptualize rely on leveraging the institution’s brand. Unlike the works themselves, which are made publicly available under an open access program, the institution’s brand remains under its control even when the works are available to everyone.

The Unicorn Rests in the Garden

As a result, while anyone can put The Unicorn Rests in the Garden on a tote bag, only BAGGU can tout their bags as being part of “A limited collection created in collaboration with the Metropolitan Museum of Art”. In many cases, that formal association with the institution’s brand is valuable.

How Will This Apply to Others?

Admittedly, not all cultural institutions have brands that are as prominent as the Met’s. However, there is likely to be a correlation between the amount of revenue an institution historically saw from image licensing and the value of that institution’s brand. That is what makes the brand licensing path a promising one in the context of open access, and why these partnerships are especially noteworthy.

These partnerships show that working with the Met because of its brand - as opposed to obtaining access to its collection - is attractive to a wide range of companies. Will it be successful enough to sustain their attention? Will success translate to institutions with brands that are not as strong as the Met’s? It is too early to say. Either way, it is encouraging to see the Met continue to show leadership with open access.

Feature image: A Goldsmith in His Shop by Petrus Christus

Simulating Firefly Flashes with CircuitPython and Neopixels

Update 10/11/20: I did figure out how to use classes to automatically scale this to n lights! Updated post is here.

This post is a walkthrough for having neopixels (individually addressable LEDs) flash in firefly patterns. The script is written in CircuitPython and implements three flash patterns from a National Park Service website. It should be very easy to add additional patterns as you see fit. The full script can be found here.

One quick note before getting started. The current version of the script has 90% of the functionality I want it to have. I have a strong suspicion that the last 10% will require a full rewrite and more complex code. I’m putting up this post with simple code for anyone who wants to avoid the more complex version (if it ever ends up existing).

Here’s the full code:

#https://www.nps.gov/grsm/learn/nature/firefly-flash-patterns.htm

import board
import digitalio
import time
import neopixel
import random



#variables to hold the color that the LED will blink
neo_r = 255
neo_g = 255
neo_b = 0

# variable to hold the number of neopixels
number_of_lights = 7

#create the neopixel object. auto_write=False means changes only show up when pixels.show() is called (auto_write=True would push changes automatically, at the cost of speed)
pixels = neopixel.NeoPixel(board.NEOPIXEL, number_of_lights, brightness = 0.2, auto_write=False)

# automatically spins up the seed reset times for each light
reset_time_dict = {}

# seeds each reset time with the current counter value
for i in range(0, number_of_lights):
    var_name = 'resetTime' + str(i)
    reset_time_dict[var_name] = time.monotonic()


print(reset_time_dict)

def on(light_num):
    pixels[light_num] = (neo_r, neo_g, neo_b)
    pixels.show()
def off(light_num):
    pixels[light_num] = (0, 0, 0)
    pixels.show()

def brimleyi(reset_time_input, light_number):
    #calculates how much time has passed since the new zero
    time_from_zero = time.monotonic() - reset_time_input
    # creates the carry over reset_time variable so that it can be returned even if it is not updated in the last if statement
    reset_time = reset_time_input

    # on flash
    if 5 <= time_from_zero <= 5.5:
        on(light_number)
    elif 15 <= time_from_zero <= 15.5:
        on(light_number)

    # reset (includes 10 seconds after second flash - 5 on the back end and 5 on the front end)
    elif time_from_zero > 20:
        off(light_number)
        reset_time = time.monotonic() + random.uniform(-3, 3)

    # all of the off times
    else:
        off(light_number)

    return reset_time

def macdermotti (reset_time_input, light_number):
    #calculates how much time has passed since the new zero
    time_from_zero = time.monotonic() - reset_time_input
    # creates the carry over reset_time variable so that it can be returned even if it is not updated in the last if statement
    reset_time = reset_time_input

    # on flash
    if 3 <= time_from_zero <= 3.5:
        on(light_number)
    elif 5 <= time_from_zero <= 5.5:
        on(light_number)
    elif 10 <= time_from_zero <= 10.5:
        on(light_number)
    elif 12 <= time_from_zero <= 12.5:
        on(light_number)

    elif time_from_zero > 14.5:
        off(light_number)
        reset_time = time.monotonic() + random.uniform(-3, 3)

    else:
        off(light_number)

    return reset_time

def carolinus(reset_time_input, light_number):
    time_from_zero = time.monotonic() - reset_time_input
    # creates the carry over reset_time variable so that it can be returned even if it is not updated in the last if statement
    reset_time = reset_time_input

    if 0 <= time_from_zero <= 0.5:
        on(light_number)
    elif 1 <= time_from_zero <= 1.5:
        on(light_number)
    elif 2 <= time_from_zero <= 2.5:
        on(light_number)
    elif 3 <= time_from_zero <= 3.5:
        on(light_number)
    elif 4 <= time_from_zero <= 4.5:
        on(light_number)
    elif 5 <= time_from_zero <= 5.5:
        on(light_number)
    elif 6 <= time_from_zero <= 6.5:
        on(light_number)

    elif time_from_zero >= 15:
        off(light_number)
        reset_time = time.monotonic()

    else:
        off(light_number)

    return reset_time


while True:

    reset_time_dict["resetTime2"] = brimleyi(reset_time_dict["resetTime2"], 2)
    reset_time_dict["resetTime3"] = brimleyi(reset_time_dict["resetTime3"], 3)
    reset_time_dict["resetTime4"] = macdermotti(reset_time_dict["resetTime4"], 4)
    reset_time_dict["resetTime5"] = carolinus(reset_time_dict["resetTime5"], 5)
    reset_time_dict["resetTime6"] = carolinus(reset_time_dict["resetTime6"], 6)





    #briefly pauses the loop to avoid crashing the USB bus. Also makes it easier to see what is happening.
    time.sleep(0.25)

At a high level, the script creates three functions (one for each type of firefly flash pattern) and then assigns each pattern to one or more lights. The patterns are based on timing, so the script uses the monotonic() function to keep track of time. There is no real-time clock on a board like this, so monotonic() just counts up from the moment the board turns on.

#https://www.nps.gov/grsm/learn/nature/firefly-flash-patterns.htm

import board
import digitalio
import time
import neopixel
import random

The first part of the code imports the libraries used by the script.

#variables to hold the color that the LED will blink
neo_r = 255
neo_g = 255
neo_b = 0

The next part holds the color for the LED. The current color is yellow, although you could make it whatever you want. This script uses the same color for all of the lights, regardless of their pattern.
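For example, if you wanted something a little greener, the change might look like this (these particular RGB values are just an illustration I picked, not something from the firefly chart):

#example alternative color - a greener yellow (arbitrary values, change to taste)
neo_r = 150
neo_g = 255
neo_b = 0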

# variable to hold the number of neopixels
number_of_lights = 7

This variable holds the number of lights you are using. It is moderately useful to have this as a variable now, and likely very useful when the script is fully functional and can automatically populate n lights.

#create the neopixel object. auto_write=False means changes only show up when pixels.show() is called (auto_write=True would push changes automatically, at the cost of speed)
pixels = neopixel.NeoPixel(board.NEOPIXEL, number_of_lights, brightness = 0.2, auto_write=False)

This line initializes the neopixels. I developed this on an Adafruit Circuit Playground board, so you may need to change this line depending on your setup. The other thing to point out here is that the brightness variable is set to 0.2. Neopixels are bright, so I toned things down during development. You might want to make them brighter for your final installation.
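As a hedged example, if you were driving an external strip instead of the built-in neopixels, the only change should be the pin passed to the constructor. The line might look something like this, assuming a strip wired to pin A1 (the pin is just an illustration - use whatever your strip is actually wired to, in the same spot in the script):

#hypothetical variant for an external strip wired to pin A1 - adjust the pin to match your wiring
pixels = neopixel.NeoPixel(board.A1, number_of_lights, brightness=0.2, auto_write=False)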

# automatically spins up the seed reset times for each light
reset_time_dict = {}

This creates a dictionary to hold the reset times for each light. Each light resets its timer at the end of a cycle, so you need a variable to hold the reset time for each individual light.

# seeds each reset time with the current counter value
for i in range(0, number_of_lights):
    var_name = 'resetTime' + str(i)
    reset_time_dict[var_name] = time.monotonic()

This automatically sets the starting reset time for each light by iterating based on the number_of_lights variable from above.

print(reset_time_dict)

This print line was just for troubleshooting. I should probably just delete it.

def on(light_num):
    pixels[light_num] = (neo_r, neo_g, neo_b)
    pixels.show()
def off(light_num):
    pixels[light_num] = (0, 0, 0)
    pixels.show()

These two little functions define the neopixel being on and being off. Each pattern function needs to turn lights on and off, so it was easier to define that behavior once and reuse it as a function.

def brimleyi(reset_time_input, light_number):
    #calculates how much time has passed since the new zero
    time_from_zero = time.monotonic() - reset_time_input
    # creates the carry over reset_time variable so that it can be returned even if it is not updated in the last if statement
    reset_time = reset_time_input

    # on flash
    if 5 <= time_from_zero <= 5.5:
        on(light_number)
    elif 15 <= time_from_zero <= 15.5:
        on(light_number)

    # reset (includes 10 seconds after second flash - 5 on the back end and 5 on the front end)
    elif time_from_zero > 20:
        off(light_number)
        reset_time = time.monotonic() + random.uniform(-3, 3)

    # all of the off times
    else:
        off(light_number)

    return reset_time

This is the first blinking function. It takes two arguments. The reset_time_input is the counter start time. The light_number is which neopixel it is controlling.

Without a real clock, all of the flash functions are controlled by a counter. You can think of the counter starting at 0 for the first loop (it doesn’t actually start at 0 the first time, but ignore that for a minute).

time_from_zero = time.monotonic() - reset_time_input figures out how long it has been since the start of the counter. In the example first loop, the reset_time_input would be 0. If it has been 2 seconds since the counter started counting, the time_from_zero would equal 2.

That value is then compared to a bunch of if statements that determine if the light is on or off. In this first function, the light goes on if the time_from_zero is between 5 and 5.5 seconds, and between 15 and 15.5 seconds. Because the default state of things is that the light is off, we only need if triggers for when the light needs to be on.

Once the time_from_zero exceeds 20 seconds, the counter resets. That reset is based on the current time (time.monotonic()) with a bit of random variation (random.uniform(-3, 3)) so that the different lights are not all in sync (the carolinus() function does not include this random variation because the carolinus bugs flash in unison).

As soon as the cycle is complete, the function returns a new reset_time. Remember that there is only one counter on the board, and it just keeps counting up. The first time through the cycle, reset_time_input might be 0. The second time through, the cycle ‘starts’ closer to 20. Similarly, instead of being 2 the first time around, time.monotonic() will be 22 the second time around. The time_from_zero calculation normalizes all of this, because 2-0, 22-20, and 82-80 are all the same value. That allows the function to keep working over time.

The macdermotti() and carolinus() functions work the same way. If you want to make a new function for a new pattern, just duplicate it, rename it, and change the if statements.
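As an illustration, here is what a hypothetical fourth pattern function could look like - a copy of brimleyi() with different timing windows. The name and the on/off windows below are made up for the example rather than taken from the NPS chart:

def hypotheticus(reset_time_input, light_number):
    #calculates how much time has passed since the new zero
    time_from_zero = time.monotonic() - reset_time_input
    # carry over the reset_time so it can be returned even if it is not updated below
    reset_time = reset_time_input

    # on flashes (made-up windows - change these to match whatever pattern you want)
    if 2 <= time_from_zero <= 2.5:
        on(light_number)
    elif 4 <= time_from_zero <= 4.5:
        on(light_number)

    # reset once the cycle is over, with a little random drift to keep lights out of sync
    elif time_from_zero > 10:
        off(light_number)
        reset_time = time.monotonic() + random.uniform(-3, 3)

    # all of the off times
    else:
        off(light_number)

    return reset_time

From there, you would give it a line in the while loop just like the others - something like reset_time_dict["resetTime1"] = hypotheticus(reset_time_dict["resetTime1"], 1).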

while True:

    reset_time_dict["resetTime2"] = brimleyi(reset_time_dict["resetTime2"], 2)
    reset_time_dict["resetTime3"] = brimleyi(reset_time_dict["resetTime3"], 3)
    reset_time_dict["resetTime4"] = macdermotti(reset_time_dict["resetTime4"], 4)
    reset_time_dict["resetTime5"] = carolinus(reset_time_dict["resetTime5"], 5)
    reset_time_dict["resetTime6"] = carolinus(reset_time_dict["resetTime6"], 6)

Now that all of the functions work, this while loop will just keep running them forever.

reset_time_dict["resetTime2"] starts with the reset time for light #2 that we automatically generated at the top of the script. brimleyi(reset_time_dict["resetTime2"], 2) calls the brimleyi() function, using that reset time. Because the functions all return their ‘new’ reset time at the end (even if it was not updated that cycle), the reset time in the dictionary will always be the one you want to work with.

#briefly pauses the loop to avoid crashing the USB bus. Also makes it easier to see what is happening.
    time.sleep(0.25)

This last line just rests for 0.25 seconds. Before I added it, the looping was flooding the USB bus and creating all sorts of problems. Briefly pausing everything just makes it easier to work with.


At the top of this post I mentioned that the script did 90% of what I want it to do. The remaining 10% has to do with everything in the while loop.

You might have noticed that the reset times are automatically generated for each light at the start of the loop. However, you need to manually create an entry for every light in the while loop.

Ideally, this script would automatically create the entries for the lights in the while loop and randomly assign them a flash pattern. Unfortunately, I think doing so will probably require turning the pattern functions into classes. Or at least that’s what the Coding Train’s Nature of Code series has me thinking about these days.
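For what it is worth, here is a rough, untested sketch of the shape I imagine that automation taking without classes - a list that maps each light index to a randomly chosen pattern function. The patterns and pattern_map names are my own placeholders, and nothing like this exists in the working script above:

# untested sketch: randomly assign one of the existing pattern functions to each light at startup
patterns = [brimleyi, macdermotti, carolinus]
pattern_map = [random.choice(patterns) for i in range(number_of_lights)]

while True:
    for i in range(number_of_lights):
        var_name = 'resetTime' + str(i)
        reset_time_dict[var_name] = pattern_map[i](reset_time_dict[var_name], i)
    #briefly pauses the loop to avoid crashing the USB bus
    time.sleep(0.25)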

Classes or no classes, I still haven’t figured out how to fully automate things yet - the sketch above is a starting point at best. Once I do, I’ll post some updated code. Until then, hopefully this is useful to someone else.

header image: Case (Inrō) with Design of Fireflies in Flight and Climbing on Stone Baskets and Reeds at the Shore

Preserving Glam3D.org

This post originally appeared on the Engelberg Center blog.

We anticipate being able to support Glam3D.org, our newly launched site focusing on creating Open Access 3D cultural resources, well into the future. Nonetheless, we also recognize that technology evolves, priorities change, and that there may be a day where the site is no longer viable. As a result, we have taken steps to make it easier to preserve the site if and when it is no longer active.

In many situations, planning to preserve digital resources can be an elaborate task. Sites built on complex content management systems (CMS) relying on proprietary software can simply disappear. Even the most ubiquitous technologies can fade over time (see, for example, attempts to preserve Flash-based web media now that Flash is no longer actively supported). Although organizations like the Internet Archive work hard to preserve some digital resources, even their powers have limits.

This makes it important to build digital preservation planning into the structure of a site from the beginning. While it is impossible to anticipate every eventuality, it is possible to follow best practices that help maximize the chance that the resource will be accessible well into the future.

Ease of preservation was one of the reasons we decided to build Glam3D.org with the Jekyll framework. Jekyll uses a static site approach. Among other things, that means that there is no traditional backend (such as WordPress or Drupal) holding all of the content in a format that is inaccessible to the site’s visitors. Instead, Jekyll uses Markdown files - which are human-readable text files - to make each page of the site. It draws images from a folder that is helpfully called “images”.

This approach ensures that the heart of the site - the text, images, links, and structure - is simply a series of text files, images, and folders that are easy to navigate. Furthermore, all of the files are publicly available (and freely licensed) on the Engelberg Center GitHub page. Anyone can go to the GitHub repo and download all of the files required to make the site. The repo also contains all of the prior versions of the site.

Jekyll itself is open source software. That makes it easy for someone to reproduce the entire site by downloading Jekyll, downloading our repo, and just putting them together. It also means that someone attempting to reproduce the site well in the future will be able to find the version of Jekyll we used to build the site today and modify it to work on whatever computers they have access to.

As part of our digital preservation efforts, we will use Conifer, a web archiving tool by Rhizome, to collect and capture the interactivity of navigating the site as dynamic web content. At moments of substantive updates, we plan to create screen capture tutorials with commentary to provide users of the site an overview of the components. These videos, hosted on the publication site with a Creative Commons Attribution 4.0 license, will also serve as another means to preserve the publication.

Preservation is an evolving practice, so please contact us with your recommendations about other forms of preservation that we should consider.

Sarah Goehrke’s recent article on the University of Tennessee’s new ‘patent pending’ 3D printed face shield made me wonder - “what is going on here?” There are a huge number of open source 3D printed face shields out there, and the value of a patent on an ‘innovation’ in the field would probably be pretty small. Why bother paying to get a patent in this case?

After exploring that question, the most interesting things I found were:

  • UT’s application is for a design patent, not a utility patent. That means that if the patent were ever issued it would only cover the design elements of the face shield - not any of the functional elements. In spite of this, UT only lists functional features among the benefits of the shield. UT was unable to point to any notable design elements (the only things a design patent would actually protect) that the shield may possess.

  • UT claims that the shield was made without any reference to the numerous open source face shields that are already publicly available.

This story really starts when the University of Tennessee (UT) released a story about the “UT-Shield,” a 3D printed face shield designed by Professor Maged Guerguis to help protect people from COVID. This in and of itself is (amazingly) not that unique - at this point a number of 3D printable face shields and other types of PPE have been released publicly.

However, two things about the announcement struck me as strange. The first was the ‘patent pending’ part of the announcement. The second was the license that UT planned to release the shield under.

‘Patent Pending’

When many people read ‘patent pending’ they think it means something like ‘I am about to get a patent.’ While that could be true, all ‘patent pending’ really means is ‘I have paid to create and file a patent application at the Patent Office.’ There is a long road between filing a patent application and being granted a patent, and the fact that an application has been filed is far from a guarantee that a patent will ever be granted.

More interestingly, ‘patent pending’ is usually used in the context of utility patents. However, the application number touted by UT indicates that they have applied for a design patent, not a utility patent (thanks to my colleague Chris Morten for pointing this out to me).

Utility patents are most likely the types of patents you think of when you hear the word ‘patent’. They are designed to protect inventions and other functional items. Design patents - as the name suggests - are not designed to protect utility or functionality. Instead they protect ornamental aspects of functional items.

The result is an announcement that is less than it may appear at first. UT has applied for a patent but not yet received it (and may never receive it). And the patent they applied for is for the ornamental aspects of the face shield, not any of the functional elements.

This raises the question - what are the ornamental aspects of the face shield that are worth protecting with a patent? After all, they don’t give patents away for free. I asked UT if they would elaborate about what they intended to protect with the patent. The UT media team was incredibly responsive, and told me:

Here is what makes his design distinctive.

  • Headband is optimized to minimize material use and weighs only 1 ounce, significantly reducing manufacturing time
  • Does not need to be held on with a rubber band, thereby reducing parts that can get contaminated
  • Headrest follows forehead profile curvature for long comfortable periods of use.
  • Keeps the curvature of the clear visor around the face, reducing exposure to contaminants along the sides of the face.
  • Visor spaced to provide maximum clearance for glasses or other wearable medical equipment
  • Provides cover to prevent contaminants from entering from above.
  • Ergonomic temple tips for comfortable sliding that don’t catch on hair or loose objects

These are all great functional features of a face shield! Unfortunately UT is not applying for a utility patent on any of them. Their patent can only protect decorative elements of the shield. What are they trying to protect? We won’t know until the application is released, which will not happen any time soon. What I do know is that UT’s inability to point to a decorative feature that might be protected if the design patent is ever granted does raise questions about the usefulness of the endeavor.

The License (or, a completely sui generis face shield?)

The second interesting part of UT’s announcement had to do with the license they were offering the UT-Shield under. Setting aside the fact that without an issued patent UT had very few (if any) rights that actually needed a license, I was curious how they planned to structure the license.

The license defines the shield in part as being “developed (i) without the use of any open source designs…”.

As noted earlier, the internet is currently awash with open source, 3D printable face shield designs. Even the most cursory Google search would turn up scores of options, and it seems unlikely that anyone would start designing a new 3D printable face shield without trying to develop some understanding of what already existed.

I asked the UT media contact if they could clarify the term in the license. Their response was “Our position is that his design was made from scratch and is distinctive.”

I suppose there are a lot of ways to understand that response. One charitable way is to read “from scratch” as meaning “designed from an empty CAD environment instead of modifying an existing file.” That reading would define “use” incredibly narrowly in a way that excludes referring to existing designs. That strikes me as an exceedingly tortured parsing and an unlikely path to creation, but it may be their best option.

An alternative explanation is that UT’s position misrepresents the development of the shield, and that the definition of the shield in the license excludes any shield that exists in the real world.

Why Bother With The Patent At All?

The good news in all of this is that - at least as of now - it appears that the rest of the world has very little to fear from the UT-Shield. They have no patent today, and if they ever get a patent it is unlikely to cover anything functional about the shield. In fact, the majority of the value of any patent ever issued for the UT-Shield will probably be as a decoration on Professor Guerguis’ wall and a line on his CV (which is . . . fine?).

UT also gets whatever PR bump it gets from this feel good story. But, as Goehrke’s recent article documents, UT’s version of this story has now resulted in some criticism from the larger 3D printed face shield community. That criticism is completely due to the way UT handled the patent part of it. This raises the question - why bother with the patent at all?

Keep 3D Printers Unlocked

update 6/23/20: a version of this post is now also up on Make

tl;dr: I need your help to keep 3D printers unlocked. If you know of a 3D printer that:

requires you to purchase printing material (filament, powder, resin, etc.) from the printer manufacturer (or approved vendor)

AND

uses something besides a microchip to verify the source of the material,

please email me at hello@michaelweinberg.org or DM me on Twitter @mweinberg2D. Feel free to send me this information anonymously if you prefer. Please let me know soon, because the deadline to alert the Copyright Office is the end of July.

What’s Happening

Every three years, the US Copyright Office gets to make it legal to break Digital Rights Management (DRM, also known as digital locks) in certain situations. The default rule in the United States is that breaking digital locks on copyright-protected works is illegal, so the Copyright Office process is designed to create exemptions for groups with good reasons to break those digital locks.

In the past, these groups have included media studies professors who want to show video clips in class, people who want to jailbreak cell phones, visually impaired users who need speech-to-text technology to access ebooks, and - importantly for this blog post - people who want to use the printing material of their choice in 3D printers.

The Copyright Office can only grant these exemptions for three years, so every three years everyone needs to go back and ask for the exemptions to be renewed. The last exemptions were granted in 2017, so now that it is 2020, we need to renew the request.

Renew and . . . Expand?

The text of the 2017 exemption for 3D printers was pretty good. It defined the exemption as being for:

“Computer programs that operate 3D printers that employ microchip-reliant technological measures to limit the use of feedstock, when circumvention is accomplished solely for the purpose of using alternative feedstock and not for the purpose of accessing design software, design files, or proprietary data.”

Therefore, the first thing we will be doing in 2020 is asking for this exemption to be renewed.

However, you may notice that the exemption does have a caveat:

` . . . that employ microchip-reliant technological measures . . . `

This language was proposed with an eye towards the kind of chip-based verification measures 2D printers use for ink and some 3D printers use for filament.

The question this time around is whether there are technologies used to limit the source of 3D printing material that do not fall within that definition. In other words, are there technologies that do not rely on microchips to verify the source of the printing material? If there are, we can use them as evidence for eliminating the caveat. If there are not, we can just ask for a renewal of the existing rule.

Do you know of a technology that does not fall within the definition outlined above? If you do, please email me at hello@michaelweinberg.org or DM me on Twitter @mweinberg2D. That way I can use it as an example in the petition to expand the exemption.