
Why DeepMake Prefers Cloudless Open Source AI

5 min read

In the year since OpenAI's ChatGPT debuted, it's taken the world by storm. Reportedly, nearly 49% of companies and over two million developers use the service.

Yet, here at DeepMake, we're not jumping on the ChatGPT bandwagon --- for several reasons. Chief among them: ChatGPT only runs in the cloud, which makes the model easy to use and widely accessible but adds many complexities. We'd rather integrate a cloudless open-source AI model like Stable Diffusion that can run both locally and in the cloud.

Cloudless open source AI --- using open source AI technologies and models within a cloudless computing environment --- removes any constraints on where developers can deploy applications. This gives developers the flexibility to host their AI applications and workloads wherever suits them best --- be it on-premises, at the edge, or in the public or private cloud. We feel the combination of cloudless computing and open source AI makes AI models more accessible by providing greater flexibility and choice at lower cost for developers and users. There's an inherent trade-off between the convenience of cloud-only AI and the privacy, flexibility, and robustness of cloudless AI.
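To make that flexibility concrete, here's a purely illustrative sketch: an application built on a cloudless model can choose its inference target at deployment time instead of being hard-wired to one vendor's servers. The function and configuration keys below are hypothetical, not a real DeepMake API.

```python
# Illustrative only: choose where inference runs based on deployment config.
# A cloudless model can back any of these targets; a cloud-only model
# supports just the last one.
def pick_inference_target(config):
    if config.get("local_gpu"):    # on-premises workstation or server
        return "local"
    if config.get("edge_device"):  # e.g. an on-set render box
        return "edge"
    return "cloud"                 # public or private cloud fallback

print(pick_inference_target({"local_gpu": True}))  # -> local
print(pick_inference_target({}))                   # -> cloud
```

The point isn't the three-line function itself, but that with a cloudless model this decision belongs to the developer's configuration, not the vendor.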

Why we prefer integrating cloudless Stable Diffusion over cloud-only ChatGPT

Let's look at what we like about cloudless AI like Stable Diffusion, the open-source image synthesis model. It generates unique photorealistic images in response to text and image prompts and runs on desktops or laptops equipped with a GPU. Beyond static imagery, users can leverage Stable Diffusion to create dynamic content like videos and animations. Anyone can easily download and use Stable Diffusion thanks to its permissive license, which places little restriction on how the model can be modified or redistributed. Plus, you don't have to install Stable Diffusion locally if you don't want to --- there are many online services you can use to run the model, like DreamStudio and Hugging Face.

Stable Diffusion allows anyone to access the source code and model weights, so users can run the model locally and make changes as needed. You could even train your own models based on your own dataset to get them to generate the specific kind of images you want. Stable Diffusion excels at VFX like style transitions, changing backgrounds, animating videos, and other edits while maintaining visual continuity throughout the video sequence. Stable Video, a framework based on Stable Diffusion, allows users to generate realistic and temporally coherent videos from text prompts and enables text-driven consistency-aware video editing.
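Because the weights are public, running Stable Diffusion on your own machine takes only a few lines with Hugging Face's `diffusers` library. This is a minimal sketch, assuming `diffusers` and `torch` are installed and a CUDA-capable GPU is available; the model ID is one of the publicly released checkpoints.

```python
def generate_locally(prompt, model_id="runwayml/stable-diffusion-v1-5"):
    """Generate one image entirely on local hardware, with no API calls.

    Assumes the `diffusers` and `torch` packages are installed and a
    CUDA-capable GPU is available; the weights download on first use.
    """
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")  # inference stays on your own machine
    return pipe(prompt).images[0]  # a PIL.Image you can save or edit
```

Because the pipeline is just local Python objects, you can point `model_id` at weights fine-tuned on your own dataset --- exactly the kind of control a cloud-only API doesn't give you.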

Finally, Stable Diffusion gives users full control over the entire stack. You can add or remove features at any time. You can debug and resolve issues by yourself as soon as you wish. That's how we feel AI should work.

Now let's take a look at ChatGPT, a cloud-only AI model. It runs on cloud servers owned by OpenAI, limiting its overall utility. ChatGPT needs a constant internet connection, creating a hassle for users in environments with limited connectivity. Then there's the issue of latency: communication delays between user devices and the cloud servers hosting the model hurt real-time interactivity.

The cloud nature of models like ChatGPT also results in privacy concerns, since transmitting user inputs to external servers raises questions about data security. On top of that, scalability is typically capped depending on your pricing plan, which may limit the number and length of messages you can send within a given timeframe. Cloud-only AI models are simply impractical for many users and applications that require data security, offline functionality, or local deployment.

As if that weren't enough, there's the issue that despite the "open" in OpenAI, ChatGPT is closed source --- you can't access or modify the source code of the model. The company rarely releases open source models and maintains ownership of its neural network weights, which define the core functionality of its AI models. Even though you can now create a custom version of ChatGPT, you can't do much with it because you still have little to no control over the internal architecture. In other words, you can't advance or extend it. 

Lastly, because ChatGPT is closed, users have virtually no control over it and no say in how it will evolve. What works today may not work next week. You could request new features, but you'll have to wait indefinitely --- if they're addressed at all. When there are bugs, you either wait for the OpenAI team to fix them or find workarounds. And don't forget that OpenAI could modify the model at any moment, leaving you no choice but to adapt and adjust accordingly.

We're not against cloud-only AI models --- we just prefer cloudless

As a cloud-only model, ChatGPT has undoubtedly made strides in generative AI, but it's closed source and its limitations are evident. Cloudless Stable Diffusion, on the other hand, is open source and it offers greater privacy, flexibility, and robustness --- which is why we prefer it. 

We're not saying that we're only going to allow cloudless AI and not cloud-only models. We're not playing gatekeeper here at DeepMake. Rather, we're saying we prefer cloudless open source AI. As a result, we're not going to put any effort into cloud-only models like ChatGPT.