Unlocking Our Shared Cultural Resources

This post originally appeared on the Engelberg Center blog.

Today the Engelberg Center is launching Glam3D.org, a guide for digitizing and making available 3D cultural resources as part of an Open Access program.

Digital versions of cultural resources have never been more important. Huge swaths of the world are social distancing, with cultural institutions large and small closed (or severely limited) for the foreseeable future. That makes physically accessing cultural resources impossible for many of us, leaving digital versions as our only options.

Fortunately there is an increasing number of high-resolution digital assets of cultural resources in the form of two dimensional images. Even better, more of these digital assets are available under Open Access terms - free for anyone to use without copyright or other restrictions.

There are far fewer three dimensional digital assets of cultural resources. While many GLAM (Gallery, Library, Archive, and Museum) institutions have operated two dimensional imaging programs for years, they are just starting to explore what it means to create three dimensional digital assets.

Glam3D.org is designed to accelerate that process. Co-created by Sketchfab Cultural Heritage Lead Thomas Flynn, Engelberg Center Fellow Neal Stimler, and Engelberg Center Executive Director Michael Weinberg, Glam3D.org brings together current best practices for digitizing, storing, documenting, licensing, and distributing 3D models of cultural resources. Just as importantly, it presents them in the context of Open Access. That means that as our shared cultural resources are digitized they are immediately available for exploration, inspiration, scholarly research, and commercial use - all without having to ask permission.

Glam3D.org is a guide for GLAM professionals and Open Access advocates interested in 3D digitization. It walks users through the process of creating an Open Access digitization program, including selecting objects, running the scanners, archiving the data, licensing the files, and making them available online. The site collects examples and best practices from institutions from around the world to show that these things are possible today.

We are at the beginning of the 3D digitization process. Although Glam3D.org is filled with today’s best practices, we know that those practices are rapidly evolving. That is why Glam3D.org is a collaborative online resource (you can read more about why we decided to make it a website in this companion post). We welcome suggestions and additions from the community, and hope to regularly update the site as new technologies emerge and old practices fall away.

While the site will continue to evolve, there is already lots to explore today. Dive in, take a look, and let us know what you think.

Why Glam3D.org is a Website, not a PDF

This post originally appeared on the Engelberg Center blog.

Today the Engelberg Center launched Glam3D.org, a website that guides GLAM (Gallery, Library, Archive, and Museum) institutions through the process of creating an Open Access program for the three dimensional objects in their collections. You can read more about the site in our announcement post. This post will explain why we decided to build Glam3D.org as a website.

The project that would become Glam3D.org started as a more traditional whitepaper co-authored by Sketchfab Cultural Heritage Lead Thomas Flynn, Engelberg Center Fellow Neal Stimler, and Engelberg Center Executive Director Michael Weinberg. Engelberg Center whitepapers are designed to explore topics for audiences beyond academia, while providing depth of expertise that exceeds a blog post or shorter article. Because of this, the whitepaper format seemed to lend itself to a guide for creating a 3D Open Access program. That meant thinking about it as a document that would primarily exist as a PDF.

However, once we started writing the paper, we realized that we might also want to make an online version of the paper. One of our primary motivations for this was to make it easier to navigate. As the paper grew longer we realized that many elements were deeply connected to each other. Although there are centuries of formatting tricks that make it easier for readers to navigate complex paper documents, the linking structure of websites can make that process much more efficient.

We also wanted to take advantage of the fact that we were talking about 3D objects. Our original draft was full of flat screenshots of 3D models. Why not use the 3D models themselves? One of the points of the project was that it is easy to make 3D models available to people online. A website allowed us to do that.

Finally, we knew that we were capturing a fast moving process in its early stages. Although the document captures today’s best practices, we also assume that those best practices will evolve in the coming months and years. Furthermore, we anticipate (hope?) that the 3D Open GLAM community will contribute ideas and improvements to this project as those changes happen. Tracking that evolution over a series of PDF documents can be challenging. Maintaining a website may be a bit more straightforward.

For most of the drafting process we assumed that we would produce both a PDF and website version of the paper. However, as we got closer to finalizing things, we realized that subtle differences between the two formats were forcing us to maintain two substantially different documents. Hyperlinks in the online version needed to be additional footnotes in the PDF. Citations built into 3D viewers online needed to be captions below 2D images in the document. These differences compounded as we imagined tracking improvements and community suggestions across two different, increasingly divergent, formats.

That is why, fairly late in the process, we decided to go online-only. Doing so greatly simplified the finalization of the document and allowed us to assume that all readers could interact with 3D objects and easily follow outside links. We also hope it will make it easier to maintain the site going forward.

We recognize that this decision comes with some tradeoffs. There are still many people who prefer to read longer documents as digital PDFs or printed on paper (I count myself among them). We have designed the site so that it can be easily printed in most circumstances, which we hope will soften that blow.

It can also be harder to recognize changes between versions. Updating a PDF is often a fairly involved, obvious process. Changes to a website can be more subtle. While it is easy to tell people that they can check out our GitHub repo for changes to the site, many people find GitHub to be a foreign, complicated platform that is hard to navigate. We will include ‘last updated’ information in the footer of Glam3D.org, which hopefully will provide some indication of when changes occur.
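For readers comfortable with a command line, git itself offers a more direct window into a site’s history than the GitHub web interface. The sketch below uses a throwaway local repository as a stand-in for the real site’s repo (the actual Glam3D.org repository URL is not shown here, and the file name and commit message are illustrative):

```shell
# Create a local demo repository standing in for the site's repo.
git init -q demo-site
echo "updated scanning guidance" > demo-site/scanning.md
git -C demo-site add scanning.md
git -C demo-site -c user.name=demo -c user.email=demo@example.com \
    commit -qm "Update scanning best practices"

# 'git log --oneline' lists every change with a one-line summary, newest first.
git -C demo-site log --oneline

# 'git show --stat' summarizes which files the latest change touched.
git -C demo-site show --stat HEAD
```

The same two read-only commands, pointed at a cloned copy of a real repository, produce a readable changelog without requiring any familiarity with GitHub’s interface.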

Like the GLAM sector’s adoption of 3D scanning that it documents, Glam3D.org is an experiment in its early days. We are excited to hear what you think about the project, and about the decision to make it a digital resource. We certainly plan to provide updates as the practice, and the site, evolve.

The Good Actor/Bad Actor Approaches to Licensing

All things being equal, do you want to make it easy for good actors to be good or to be able to punish bad actors for being bad?

This post is about two different ways to approach licensing questions: focusing on good actors and focusing on bad actors. Fundamentally, it is about how to weigh various tradeoffs inherent in making licensing decisions. This choice is not intended to be a dichotomy. Instead, it is more like a framework for understanding the choices that need to be made.

Recently I have found this framework to be especially useful in two contexts. The first is in situations where it is not entirely clear how the intellectual property rights you have map onto the thing you are nominally licensing. This can often be the situation in open source hardware, where some aspects of the hardware are protected (or protectable) by an IP right, some aspects are not, and it may or may not be possible to engineer around the protected parts to recreate the underlying functionality. In other words, the thing you have is protectableish in ways that are not super clean.

The second is in situations where you are hoping to engage a large number of legally unsophisticated users. This can often be the case in the GLAM Open Access world, where you are making works available to the public in the hopes that they will make use of them. In these types of situations you want as many people as possible to make use of what you are releasing. In doing so you need to assume that the vast majority of your users are casual, and very few of them will have access to lawyers that can help them navigate the specifics of the license terms.

I am not suggesting that there is anything revolutionary about this approach. It has just been especially useful recently so it seemed worth writing down in case it is helpful to someone else.

Good Actor/Bad Actor

The core concept of this framework is to optimize your licensing structure and presentation to target the actor you are most concerned about. In this context ‘good actors’ are users following your rules and behaving in ways that you approve of. ‘Bad actors’ are users breaking your rules and behaving in ways you disapprove of.

Given the choice, which one of those is your priority?

Good Actor Approach

In order to support a good actor, you want to prioritize facilitating use and removing barriers. That means you want to use broadly permissive licenses that minimize the number of obligations and restrictions imposed on licensees. You also want to avoid extensive legal language and disclaimers (even if they are substantively reasonable!) that can intimidate users and undermine confidence that they really are allowed to use the licensed works as you intend.

Good Actor Pros

  • Makes it maximally easy for users to make use of whatever it is you are licensing
  • Gives users confidence that you are inviting them to make use of whatever it is you are licensing, and that the terms are unlikely to change in the future
  • Builds goodwill among user community

Good Actor Cons

  • Likely prevents you from reserving every possible legal remedy to use against bad actors
  • Allows bad actors to break some rules without facing direct legal consequences

Bad Actor Approach

In order to maximize your ability to punish a bad actor, you want to provide legal language that defines the types of permitted behavior as specifically as possible. Although there is not an inherent conflict between legal specificity and clean writing (see, for example, the Blue Oak Model License), in many cases that specificity will come with additional legal language and restrictions. Even the most user-friendly legal language can intimidate legally unsophisticated users, and each additional restriction can dissuade some (potentially good) users from making use of the thing.

Bad Actor Pros

  • Reserves as many legal tools as possible to punish users who break the rules
  • Maximally shields you from legal liability

Bad Actor Cons

  • Can inadvertently discourage welcome uses that users (incorrectly) perceive as beyond the scope of the rules
  • May create an ongoing need to engage with lawyers to revise rules and pursue violations

Lawyers are often trained to be conservative pessimists. As a result, we tend to undervalue the costs associated with adding one more clause to an agreement or one more disclaimer on a site. It is also true that adding the additional clause or disclaimer can allow you to prevail in litigation.

However, it is also true that many projects are not built on adversarial relationships with users. In many cases there may be real benefits to a more minimal approach that outweigh the cost of leaving additional clauses out.

None of this means that you should be legally irresponsible in how you approach your licensing. Instead, I have found that weighing the goal of empowering good actors against the goal of punishing bad actors is a useful way to develop an approach that helps everyone achieve their goals - while giving myself permission not to add that one more clause or disclaimer.

Specifically, when I think about empowering good actors as a goal of a licensing regime, it gives me something to weigh against my impulse to add additional legal language. Instead of being lazy, taking a more minimal approach is actually doing a better job of achieving the project’s goal.

Feature Image: Excerpt from The Temptation in the Smithsonian’s Open Access Judge Magazine archive. I had to crop out many geographic features of silver. There’s really nothing better than golden age political cartoons.

(the lack of) Official Guidance and the Maker Response to Covid-19

There have been a fantastically large number of maker and open source hardware responses to Covid-19. These responses are largely focused on developing stopgap solutions to shortages of medical supplies critical to fighting the outbreak. While there will be a number of lessons to be learned from studying this movement, I want to flag one in this post: the (unexpectedly?) important role that authorities should play in channeling maker energy in important directions. Unfortunately, that role’s importance is becoming clear precisely because authorities have been slow to play it.

Grass Roots Initiatives Emerge to Create Stopgap Solutions

For reasons I will not dwell on here, over the past few weeks public health authorities have been announcing a dire shortage of medical supplies. Although everyone hopes that traditional manufacturing capacity will eventually scale up to meet demand, in the meantime a huge number of distributed efforts have emerged to try and fill the gaps. These efforts largely started by focusing on designing inexpensive and easy-to-produce alternatives to critical medical supplies such as ventilators, masks, and other personal protective equipment (PPE).

These community-driven initiatives did not purport to be better than the traditional version of the equipment. Rather, in the absence of the traditionally sourced version, they attempted to be better than nothing.

These initiatives are tapping into a deep well of engineering capacity and a desire to help in an emergency, both of which are inspiring. Because they were often engineering-driven, they focused on solving the many engineering challenges of developing and manufacturing equipment in a distributed environment.

Unfortunately, these efforts often did not also include public health experts. That is totally understandable, as many public health experts were active in the primary effort to combat the virus. However, it meant that in many cases the projects were focusing on solutions that made sense from an engineering standpoint but were not necessarily optimal from a public health standpoint.

At a minimum, this had the potential to make inefficient use of the engineering and community-building expertise that the initiatives did have. At worst, they could create solutions that were actually worse than having nothing at all. Naomi Wu was one of the first people I saw raising critical questions about the value of some of these initiatives (and also pointing out that the least sexy of the solutions may be the best).

The Role of Authorities

This is the point where you might expect authorities to step in with guidance. After all, public health authorities do have the expertise required to evaluate options created by distributed communities. By elevating some of those options, public health authorities can drive efforts toward the most effective ones.

Authorities in the United States have been slow to do this. Again, this is understandable - medical regulators have safety and evaluation standards for a reason, and their institutional culture rightly makes it hard to pivot from ‘this passes a formal review process’ to ‘this is a non-horrible option in a pinch.’

What those authorities do not appear to fully appreciate is that their silence does not prevent the grassroots projects from going forward. For better or worse, grassroots enthusiasm can be channeled but it cannot easily be stopped. By avoiding endorsing any of the projects, they are failing to channel the capacity to design, manufacture, and distribute stopgap solutions in productive directions.

In a crisis it may be worth conferring slightly too much validity on a solution that is only good enough. That is especially true if doing so focuses efforts away from solutions that fail to even meet the ‘good enough’ standard. Official imprimatur can also activate even more capacity to support the approved solutions. It is likely that at least some manufacturing capacity is waiting for some sort of official guidance before jumping in to help.


This dynamic is evolving. Some larger projects have recruited enough public health experts to create their own informal medical review boards. The US government has also started to release files for PPE that have been reviewed for clinical use on the NIH 3D Print Exchange.

These are all encouraging signs. However, these efforts could have been much more effective if systems had been in place to help grassroots initiatives quickly focus on the areas of highest need that distributed design and manufacturing could plausibly address. In addition to identifying the most promising solutions, such systems could also define testing protocols that allow manufacturers to verify that the objects they create match the specifications as intended.

This piece is not intended to be a criticism of authorities at this moment. The government and the public health community are full of people acting in good faith to triage and address a firehose of challenges. I am in no position to second-guess those prioritization decisions. Instead, this piece is intended to serve as a reminder that there are roles for the government to play in coordinating even informal networks, and it is written in the hope that we plan to play them more effectively next time.

list image: Getting Out One of the Large Buoys for Launching

Earlier this month CERN (yes, that CERN) announced version 2.0 of their open hardware licenses (announcement and additional context from them). Version 2.0 of the license comes in three flavors of permissiveness and marks a major step forward in open source hardware (OSHW) licensing. It is the result of seven (!) years of work by a team led by Myriam Ayass, Andrew Katz, and Javier Serrano. Before getting to what these licenses are doing, this post will provide some background on why open source hardware licensing is so complicated in the first place.

OSHW Licenses are Hard 1: Hardware Involves Multiple Types of IP

While the world of open source software licensing is full of passionate disputes, everyone more or less agrees on one basic point: software is fully protected by copyright. Software is ‘born closed’ because the moment it is written it is automatically protected by copyright. If the creator of that software wants to share it with others, she needs to affirmatively give others permission to build on it. In doing so she can be confident that her license covers 100% of the software.

At least at an abstract level, that makes open source software licenses fairly binary: either there is no license or there is a license that covers everything.

Things are not as clean in open source hardware. Hardware sometimes includes software. It also includes the physical hardware itself, along with documentation that is distinct from both. Hardware’s software is protected by copyright. The hardware itself could be protected by an idiosyncratic mix of rights (more on that in a second) that includes copyright, patent, and even trademark. The result is that, at a minimum, an OSHW license needs to account for the many moving intellectual property pieces connected to a single piece of hardware - a fairly stark contrast to open source software’s ‘everything covered by copyright’ situation.

OSHW Licenses are Hard 2: Coverage is Hard to Generalize

The (at least superficially) straightforward relationship between software and copyright makes it easy to give generalized advice about licensing and to develop licenses that are useful in a broad range of situations. A lawyer can be fairly confident that the advice “you need a copyright license” is correct for any software package even without having to look at the software itself. That, in turn, means it is safe for non-lawyers to adopt “I need a copyright license for my software” as a rule of thumb, confident that it will be correct in the vast majority of cases. It also means that software developers can be confident that the obligations they impose on downstream users - like an obligation to share any contributions to the software - are legally enforceable.

As suggested above, hardware can be much more idiosyncratic. The physical elements of hardware might be protected by copyright - in whole or in part - or they might not. That means that the hardware might be born closed like software, or it might be born open, free of automatic copyright protection, and available for sharing without the need for a license. The flip side of this ambiguity is that a creator may be able to enforce obligations on future users (such as the classic copyleft sharing obligations) for some hardware, but not for other hardware. Expectations misalignment with regards to these kinds of obligations can create problems for creators and users alike.

All of this means that it can be hard to create a reliable software-style licensing rule of thumb for OSHW creators. Many OSHW creators end up following the practices of projects that went before them and hoping for the best. In fact, this ‘follow others’ model is the premise for the educational guidance that the Open Source Hardware Association (OSHWA) makes available.

OSHWA’s Approach

One of the many questions all of this sets up is whether to take a bundled or a broken-out approach to licensing. Is it better to try to create an omni-license that covers the IP related to software, hardware, and documentation for OSHW, or to suggest users pick three licenses - one for software, one for hardware, and one for documentation? A creator could make very different choices about sharing the three elements, so the omni approach could get complicated fast. At the same time, having three distinct licenses is more complicated than having just one.

OSHWA ultimately decided to go with the three license approach in its certification program. This was driven in part by the realization that there were already mature licenses for software (OSI-approved open source software licenses) and documentation (Creative Commons licenses). That allowed OSHWA to take a “don’t do anything new if you can avoid it” approach to licensing education. It also required OSHWA to recommend licenses for hardware.
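In practice, the three-license approach often shows up as a simple division of a project repository, with each part carrying its own license. A hypothetical layout (the directory names and license choices below are illustrative, though the SPDX identifiers are real):

```
my-oshw-project/
├── firmware/      # software: an OSI-approved license, e.g. MIT
├── hardware/      # hardware: a hardware license, e.g. CERN-OHL-W-2.0
├── docs/          # documentation: a Creative Commons license, e.g. CC-BY-4.0
└── LICENSE.md     # states which license applies to which directory
```

Keeping a top-level file that maps licenses to directories makes the creator’s intent legible to users who never read past the repository root.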

Existing OSHW Licenses

While many open source hardware creators use software licenses (such as the GPL) or documentation licenses (Creative Commons) for hardware, neither type of license was really written with hardware in mind. Fortunately, there were three existing hardware licenses. OSHWA provided a quick comparison between the three: CERN 1.2, Solderpad, and TAPR. Although all of these licenses were good first steps, they were developed fairly early in the history of open source hardware. Solderpad and TAPR in particular were essentially designed to add hardware wrappers to existing open source software licenses.

CERN 2.0

CERN’s 2.0 licenses have been informed by all of the developments and thinking around open source hardware and licensing in the seven years between the release of 1.2 and today. In recognition that creators may be interested in multiple types of openness and obligations on downstream users, they come in three flavors: the strongly reciprocal S variant, the weakly reciprocal W variant, and the permissive P variant. While this structure makes it hard to mix reciprocities (by, for example, requiring strong reciprocity on documentation and weak reciprocity on the hardware itself), the variants provide a clear way for hardware creators to license the hardware portion of their projects. This is a deeply reasonable approach.

CERN’s ‘Available Components’

One evergreen question for open source hardware is ‘open down to what?’ Your design may be open, but does that mean that all of your components have to be open as well? Did you have to use open source software to create the design? Running on an open source operating system? Running on open source silicon?

OSHWA’s certification program addressed this question with the concept of the ‘creator contribution.’ The idea is that the creator must make available and openly license everything within her power to make available and open. Generally those will be her designs, code, and documentation. It is fine to include components sourced from third parties (even non-open components) as long as they are generally available without requiring an NDA to access.

CERN’s ‘available component’ definition achieves much the same goal. As long as a component is generally and readily available, and described with enough information to understand its interfaces, it does not itself have to be open. Of course, the contours of both the creator contribution and the available component may vary from hardware to hardware. Hopefully time and experience will give us all a better sense of how to draw the lines.

Let’s See How it Goes

This post has mostly focused on the CERN license’s role in helping make ‘born closed’ components more open through licensing. There is a flip side to all of this: what happens when a license is used on a ‘born open’ piece of hardware? That can give both users and creators a distorted sense of their obligations when using a piece of hardware. However, that is probably a problem for public education, not license design.

This is an exciting time for open source hardware. CERN’s new license is a big step forward in licensing. As it is adopted and used we will learn what works, what doesn’t, and what tweaks might be helpful. The best way to do that is to use it yourself and see how it fits.