Since ChatGPT's launch in 2022, much of the discussion around AI has centered on AI-generated content. The technology is driving leaps in efficiency and innovation across the software development, marketing, and entertainment industries.
When the term "deepfake" was coined in 2017, describing a type of AI that can create fake but convincing images, audio, and video, the media began speculating about how deepfakes could change the world. Fake videos came with the possibility of widespread ethical issues and misuse. If you believed everything you read six years ago, you would have thought we were headed for a deepfake apocalypse: a world where seeing is no longer believing and video footage can no longer be trusted.
While AI is one of the hottest trends of the year, it has a dirty little secret: it lies. Okay, it's not really a secret. If you've worked with an AI chatbot like ChatGPT or a text-to-image generator like Stable Diffusion, you've probably seen it hallucinate, generating results that are completely unexpected or even blatantly untrue.
AI made a big splash this year as the next big thing in technology. Now that AI is famous, it has become the target of lawsuits over copyright infringement.
On September 19, George R.R. Martin, the Authors Guild, and a handful of other authors filed a lawsuit against OpenAI, the creator of ChatGPT. This follows other lawsuits, such as one headlined by Sarah Silverman (which also targeted Meta). In both cases, the authors allege that OpenAI's ChatGPT and Meta's LLaMA were trained on datasets containing illegally acquired copies of the plaintiffs' works. The lawsuits claim the books were obtained from pirate websites and that the authors never consented to the use of their copyrighted work as training material.
As a brand-new startup, we found ourselves in need of a company logo. Unfortunately, the graphic designer we had been relying on had recently moved on, leaving us with no in-house design expertise.