How to Deploy Craiyon (DALL·E Mini) to Production
November 08, 2022

Deprecated: This blog article is deprecated. We strive to rapidly improve our product, and some of the information contained in this post may no longer be accurate or applicable. For the most current instructions on deploying a model like Craiyon to Banana, please check our updated documentation.
Let's walk through how to run Craiyon (DALL·E Mini) in production on Banana's serverless GPU platform. We're using this model source code for the tutorial.
Let's begin!
What is Craiyon aka DALL·E Mini?
To clarify: Craiyon is a text-to-image generation model that was formerly called DALL·E Mini. OpenAI asked the team to change the model's name to avoid confusion with OpenAI's DALL·E project, which is why you'll see Craiyon and DALL·E Mini used interchangeably on the Internet.
The intended use of Craiyon is to generate images from text for personal consumption and research. Teams are building many use cases on top of this model: for example, leveraging it to inspire creativity, create humorous content, or power educational tools.
Here is a list of business ideas that you could build with text-to-image models like Craiyon and Stable Diffusion.
What's better? Comparing Craiyon vs Stable Diffusion
It's a good question, and quite frankly Stable Diffusion is the most popular text-to-image model on the market right now (2022). More teams are building with Stable Diffusion than with Craiyon. That said, consider what your planned use case is for a text-to-image model and assess which model fits your specific needs. For example, some niches of image generation may be better suited to the style output of Craiyon than to Stable Diffusion.
We recommend trying both models to understand what best fits your use case. Here is the fastest and easiest method to deploy Stable Diffusion to production.
How to Deploy Craiyon (DALL·E Mini) to Production
1. Fork Banana's Serverless Framework Repo
The first thing you'll need to do is fork Banana's Serverless Framework into a private repo on your GitHub account. Our serverless framework is repository-based and will be your base repo for deploying Craiyon to Banana. It's worth noting that our repository framework can be used to deploy any custom model to Banana, not just Craiyon.
2. Customize Repository to run Craiyon (DALL·E Mini)
Time to customize the repository you just forked. Our serverless framework runs BERT as the demo model, so we'll need to change that to Craiyon. Make sure you read our documentation on the Serverless Framework to understand how each file and the important code blocks function within this repo.
To simplify, you'll need to:
- Modify download.py to download Craiyon
- Load Craiyon within the init() block
- Adjust the inference() block to run Craiyon
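As a rough sketch, the customized entrypoint file might take the shape below. The init()/inference() pattern comes from Banana's serverless framework; the load_craiyon() helper and the "prompt"/"image_base64" keys are hypothetical placeholders you'd replace with the actual dalle-mini loading and generation code.

```python
# Sketch of a Banana serverless entrypoint customized for Craiyon.
# load_craiyon() is a placeholder, not real dalle-mini code.

model = None

def load_craiyon():
    # Placeholder: in the real repo this would load the Craiyon weights
    # that download.py fetched (e.g. from a local cache) onto the GPU.
    return object()  # stands in for the loaded model

def init():
    # Runs once when the serverless worker boots, so each request
    # doesn't pay the model-loading cost.
    global model
    model = load_craiyon()

def inference(model_inputs: dict) -> dict:
    # Runs per request: read the prompt, generate, and return
    # JSON-serializable output.
    prompt = model_inputs.get("prompt")
    if prompt is None:
        return {"message": "No prompt provided"}
    # Placeholder for the real generation call, which would typically
    # return base64-encoded image data.
    image_b64 = f"<base64 image for: {prompt}>"
    return {"image_base64": image_b64}
```

The key design point is the split between init() (one-time model load) and inference() (per-request work); keep anything slow out of the request path.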
3. Create Banana Account and Deploy Craiyon (DALL·E Mini)
Once you have customized your repo with Craiyon, we advise testing your code before deploying to production. The simplest way to test your code is to use Brev (follow this tutorial).
Once you've verified everything is working, log in to your Banana Dashboard and click the "New Model" button.
A popup will appear:
Select "GitHub Repo", and choose your Craiyon repository from the list. Click "Deploy" and the model begins to build. The build process can take upwards of an hour so please be patient.
You can monitor Model Status to check when the build is complete. You will see the status change from "Building" to "Deployed" when it's ready to be called.
You can also check the status of your build in the Model Logs tab.
4. Call your Craiyon (DALL·E Mini) Model
After your model builds on Banana, it is ready to run in production! To run your model, choose your language (Python, Node, or Go) and call it using the Banana SDK.
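A minimal sketch of a Python-side call is shown below. It assumes the 2022-era banana_dev SDK, whose run(api_key, model_key, model_inputs) call was the standard entrypoint at the time; the "prompt" input key is an assumption and must match whatever your inference code expects. Check the current SDK docs before relying on this shape.

```python
# Hedged sketch of calling a deployed Craiyon model with Banana's
# Python SDK (`pip install banana-dev`, 2022-era API).

def build_inputs(prompt: str) -> dict:
    # "prompt" is an assumed key; use whatever key your deployed
    # inference code reads from model_inputs.
    return {"prompt": prompt}

def call_craiyon(api_key: str, model_key: str, prompt: str) -> dict:
    # Import here so the module loads even without the SDK installed.
    import banana_dev as banana
    # Sends the request to your deployed model and returns its JSON output.
    return banana.run(api_key, model_key, build_inputs(prompt))

# Example (requires real keys and network access):
# result = call_craiyon("YOUR_API_KEY", "YOUR_MODEL_KEY",
#                       "a watercolor painting of a banana")
```

Your API key and model key are both available in the Banana Dashboard once the model shows as "Deployed".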
Great job! You are now running Craiyon (DALL·E Mini) on serverless GPUs.
Wrap Up
How did that go? We'd love to hear your questions and chat with you about Craiyon. The best place to do that would be on our Discord or by tweeting us on Twitter. Ping us with requests for more tutorial content and we'll make it happen!