Read the news on any given day and you’ll see media firestorms about the dangers of AI and fearmongering over robots being developed for warfare. Simultaneously, countries are diving headfirst into AI development, and new software is emerging daily. The race for AI dominance has pulses rising, reminiscent of the nuclear arms race of the twentieth century.
AI development has made such impressive strides lately largely because of open source software. This isn’t just some niche trend — everyone’s using open source AI, from programmers to academics to everyday users.
You wouldn’t expect a behemoth like Google, which has its hands in almost every aspect of technology, to even care about open source and open core technology. But it turns out, it does care. And it’s afraid. Very afraid.
Deepfakes are everywhere these days, from Tom Cruise TikTok impersonations to fake videos of former president Donald Trump resisting arrest. Because the AI engine behind deepfakes is relatively complex and technical, there is a fair amount of misinformation about how these videos are created.
Unless you live under a rock, you’ve heard about deepfakes. Deepfakes are videos that blend reality with fiction by training a generative neural network to replace one person’s face with another’s. These videos are making news, and not always in a good way. They can create the impression that a person — usually a celebrity or political figure — is doing something they did not do. And sometimes, they are made and distributed with malicious intent.
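The face-replacement idea mentioned above is often built on a shared encoder with a separate decoder per identity. The toy sketch below illustrates only that structural idea — the names, shapes, and linear "networks" are illustrative assumptions, not a real deepfake pipeline, and the training step is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(face, W_enc):
    """Compress a face vector into a shared latent code (pose, expression)."""
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    """Reconstruct a face vector from the latent code (identity lives here)."""
    return latent @ W_dec

# One shared encoder, plus one decoder per identity (A and B).
# Dimensions are arbitrary stand-ins for real image features.
dim, latent_dim = 64, 8
W_enc = rng.normal(size=(dim, latent_dim))
W_dec_A = rng.normal(size=(latent_dim, dim))
W_dec_B = rng.normal(size=(latent_dim, dim))

# Training (omitted) would fit encode + decode_A on A's faces and
# encode + decode_B on B's faces, so the shared latent space learns
# pose and expression while each decoder learns one identity.

# The swap: encode a face of person A, then decode with B's decoder,
# yielding B's identity rendered in A's pose and expression.
face_of_A = rng.normal(size=dim)
swapped = decode(encode(face_of_A, W_enc), W_dec_B)
print(swapped.shape)
```

Real deepfake tools apply this same encode-with-shared-weights, decode-with-the-other-identity trick frame by frame, using deep convolutional networks rather than the linear maps sketched here.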
As a brand-new startup, we found ourselves in need of a company logo. Unfortunately, the graphic designer we had been relying on had recently moved on, leaving us with no internal design expertise.