Why open source projects need a code of ethics


AI development has made such impressive strides lately largely because of open source software. This isn’t just some niche trend — everyone’s using open source AI, from programmers to academics to everyday users.

But here’s the thing: with great power comes great responsibility. Now that AI is everywhere, it’s time to think not just about what it can do, but what it should do. Suddenly, there are endless options, and there is little oversight.

The big question is: How can we reduce the chances of AI being used with malicious intent?

The answer is a Code of Ethics.

As open source AI licenses stand today, anyone can use the software for any purpose. And while that’s kind of the point of open source, it’s also a potential nightmare.

Sure, most people will act responsibly. But there are always bad apples out there.

Take deepfakes, for example. Used ethically, deepfakes can be fun and educational. Yet they were initially used to place actresses’ faces on porn performers’ bodies without consent — not exactly an above-board use of innovation.

And then there are people who, despite their good intentions, unknowingly create harmful technologies. Misuse of AI has serious consequences, such as when predictive law enforcement perpetuates bias and inequality, when facial recognition invades privacy, or when a biased algorithm decides who gets life-saving medical care.

Now, don’t get the wrong idea; the technology itself is not the issue. No technology is inherently bad or good. It comes down to how people use it.

Let’s look at some examples of AI and open source companies that operated with questionable ethics and caused real harm.

Let’s start with Amazon. After building an AI-powered tool designed to assess job applicants and streamline the hiring process, it discovered the software favored men over women. After attempts to edit the bias out of the program, the company scrapped the whole thing in 2018 without ever using it to evaluate candidates (thankfully).

But wait, it gets better. Google, the supposed champion of responsible AI use, fired not one but two AI ethics researchers for simply pointing out problems in large language models. Oh, the irony. Google hired researchers to study the ethics of AI, then turned around and terminated them when they found ethical problems in the language models Google uses.

Don’t forget the messy legal battle between Arduino LLC and Arduino SRL. Originally there was just one company. Then an Arduino LLC board member formed Arduino SRL and took the trademarks with it. After that ploy was discovered, the two companies tried to collaborate on an open source hardware and software project. That didn’t work out, and they ended up in a bitter ethical dispute over royalties, trademarks, and the status of open source licenses. After months of legal wrangling, the two companies split completely in 2016, leaving customers confused about which “Arduino” was the real one. To customers’ relief, the companies have since recombined.

It’s time to talk about implementing a clear code of ethics in technology, especially in the realm of AI. AI is still the new kid on the block, so the sooner we set down some ground rules for ethical use, the sooner we head off further individual, social, and corporate damage.

Establishing ethical guidelines builds trust between users and developers, and that’s when real advancements happen. Without those guidelines, users are in the dark about what’s right and wrong, and the potential for abuse skyrockets.

Right now, we’re at a crossroads. AI has the power to impact millions of people every day, and it’s not going anywhere. Stabilizing this technology with a code of ethics will encourage growth and development in a positive direction — and it may be all that’s standing between an AI world where we create amazing things and a world where we create all kinds of problems.

DeepMake doesn’t just support AI development; we live and breathe it. We want our users to take a hands-on approach and dive headfirst into the world of AI. That’s why we’re open source. But we also expect our users to apply our software responsibly.

We feel so strongly about this, we’ve published an Ethical Manifesto to communicate our expectations for the use of our software:

  • DeepMake technology is not for creating inappropriate content.
  • DeepMake technology is not for use without the consent of those affected or with the intent of hiding its use.
  • DeepMake technology is not for any illicit, unethical, or questionable purposes.
  • DeepMake technology exists for experimenting with and discovering AI techniques, for social or political commentary, for movies, and for any number of other ethical and reasonable uses.

Since we’re open source, you’re free to use our software as you see fit. But make no mistake: We have zero tolerance for anyone using our software for any kind of unethical or harmful reason. We expect you to respect our policy and our software.

We’re not alone here. Other tech organizations are on board, too. The IEEE, an organization dedicated to advancing technology worldwide, has its own AI development guidelines. More are sure to follow.

It’s time to hit this again: Any technology can be abused. Technology itself isn’t the problem; how it’s used is what can be problematic. So keep that in mind, go forth, and have fun learning AI. But always keep one eye open to the potential fallout from what you build.