
Behavioral Licenses Aren’t Open Source and Are Unenforceable

DeepMake · 5 min read

DeepMake is dedicated to supporting AI education and expanding AI access for everyone through its software. As we say on our website, "As an open core company, we believe that open source software democratizes AI development and promotes accessibility, leading to rapid iteration and scaling. True open source, available to anyone regardless of background, industry, or location, enables experimentation without constraints and drives innovation."

When we say we're committed to remaining open source, we mean it. Our software carries no behavioral use restrictions, the kind imposed by so-called ethical use licenses.

Furthermore, we don't believe in behavioral licenses. We don't think they're appropriate for open source software and we don't believe they're enforceable. We're here to develop and promote AI tools and techniques, not control users' actions.

What Are Behavioral Licenses?

Behavioral licenses are restrictions that spell out how you can behave or act while using a particular piece of software. Restrictions may limit the software to non-commercial use, forbid redistributing or sharing it, or prohibit reverse engineering it. They may also spell out activities users must not engage in, such as using the software to do harm or to break the law.

Some of these restrictions are common in proprietary software. They are less common in open source software but do sometimes show up. For example, the license for Stable Diffusion, a deep-learning text-to-image model, includes behavioral restrictions. Attachment A of the license agreement lists the ways you can't use the software, including violating laws, defaming or harming others, and spreading false information. 

Are these restrictions enforceable? In theory, yes. Developers can restrict anything they want; we once saw a software license that said the product could only be used with "green" energy. In practice, however, many developers don't have the resources or the desire to investigate and enforce these clauses.

Why We Don't Like Behavioral Licenses 

Some people argue that there are good reasons to have behavioral licenses. For example, suppose your software is licensed so it can't be used for child pornography. That sounds reasonable, right? Yes, of course. 

Why stop there? What about restricting its use for terrorism?

Again, that sounds good, until you remember that one person's terrorist is another person's freedom fighter. Where do you draw the line?

Suppose you support Ukraine's resistance against Russia in 2023. Looking ahead to 2027, how would you feel if Ukraine started using terrorist tactics? They're the good guys, right? Does your support for their cause equate to support for any unethical tactics they might use? If not, does that mean they can't use software that's restricted against unethical use?

Now we're getting to the heart of the problem. These are hypothetical examples, and we're definitely not advocating using software for illegal or terrorist activities, but when you attempt to outline exactly what ethical and unethical use looks like, things get very muddy.

Our point is simply that we don't like the practice of regulating how software can be used based on a developer's ideas of what's right or wrong, precisely because humans are fallible, context matters, and the definition of "ethical" varies from person to person. We're on the side of keeping open source open. We feel that the intent of open source technology is to make it available so everyone can enjoy it, and that behavioral licenses reverse that intent.

Why Does This Matter?

This whole debate is on everyone's mind right now. The Open Source Initiative (OSI), the organization that defines what open source is, is currently working to define what open source means for AI. The organization wants to establish a shared set of principles to guide the AI community, much like they did for software.

OSI is still asking for input and plans to release its initial ideas later this year. We don't know what they're going to recommend, but at DeepMake, we're recommending they stay true to the promise of open source, including higher-quality, flexible, lower-cost technology, with no predatory vendor lock-in.

Frankly, we don't think behavioral licenses are even necessary. The open source community already operates by its own code of ethics, with users holding developers and each other accountable even when nothing is written into a license. When members of the FaceSwap community were interviewed about ethics in open source, they said the community has and enforces its own code of ethics by refusing support to, and banning, users who violate it.

Simply put, we're not adding behavioral clauses to DeepMake software. We do support the ethical use of our open source Deepfake technology, of course. We spell it out in our Ethical Manifesto and discuss it further in our article about why open source needs a code of ethics. However, we won't use a license to enforce our standards. Ultimately, that's up to you, the user.