How to Deploy GPT-J to Production (the easy way)
June 20, 2022

Deprecated: This blog article is deprecated. We strive to rapidly improve our product, and some of the information contained in this post may no longer be accurate or applicable. For the most current instructions on deploying a model like GPT-J to Banana, please check our updated documentation.
In this tutorial you'll learn the easiest way to deploy EleutherAI's GPT-J model, pulled from Hugging Face, to production on serverless GPUs.
We will take you step-by-step from setting up your dev environment all the way to actually using your production GPT-J model for inference.
This tutorial has been broken up into four videos and in total will take around 45 minutes of your time. Let's begin!
Refer to our GPT-J Serverless Repository (referenced in video).
Did you know Banana offers a 100% free API for GPT-J with unlimited usage? It's true! If you're a hacker or hobbyist who wants to try GPT-J, head over to the free APIs on our website and follow the usage instructions.
Part 1:
Part 1 of this tutorial series is focused on setting up your dev environment, cloning the Banana serverless template repo from GitHub, and verifying that the "hello world" BERT model works before we swap it out for GPT-J.
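If you'd like a rough preview of what Part 1 walks through, here's a minimal sketch. The repo URL, file names, and commands are assumptions based on the template at the time of writing, so follow the video if anything differs:

```sh
# Clone Banana's serverless template repo (URL assumed from the repo link above)
git clone https://github.com/bananaml/serverless-template.git gptj-tutorial
cd gptj-tutorial

# Install the template's Python dependencies
pip install -r requirements.txt

# Start the local dev server, then hit it with the template's test script
# (server.py / test.py are the file names the template used at time of writing)
python server.py &
python test.py
```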
Part 2:
Part 2 of this tutorial series is focused on loading the GPT-J model from Hugging Face into your template repo and making the code changes needed to verify that the model works properly when run locally.
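For reference alongside the video, here's a minimal sketch of the kind of model-loading code this part covers, using the transformers library. The model ID and the float16 revision are the ones EleutherAI publishes on Hugging Face; where exactly this code lives in your repo (for example, the template's init hook) is covered in the video:

```python
import torch
from transformers import AutoTokenizer, GPTJForCausalLM

# Half-precision weights still need roughly 12 GB of memory,
# so a CUDA-capable GPU is assumed here
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",        # EleutherAI's fp16 checkpoint branch
    torch_dtype=torch.float16,
).to("cuda")

# Quick local smoke test: generate a short completion from a prompt
inputs = tokenizer("My favorite programming language is", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_length=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```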
Part 3:
Part 3 of this tutorial series is focused on Dockerizing your GPT-J model repo, testing the Docker build, and pushing your GPT-J repo to production.
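As a rough outline of the commands this part covers (the image tag, port, and git-push deploy flow are illustrative assumptions; the video shows the exact steps for your account):

```sh
# Build the image from the template's Dockerfile (tag name is illustrative)
docker build -t gptj-banana .

# Sanity-check the container locally; the template served on port 8000
# at time of writing, and --gpus all assumes a local NVIDIA GPU
docker run --gpus all -p 8000:8000 gptj-banana

# Deploys went through GitHub at time of writing: commit your changes,
# push, then connect the repo in the Banana dashboard
git add -A
git commit -m "Swap BERT for GPT-J"
git push
```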
Part 4:
Part 4 of this tutorial series is the wrap-up, where we run your GPT-J model in production on Banana's serverless GPUs!
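Here's a sketch of what calling your production model can look like from Python using the banana-dev client. The keys and input schema below are placeholders, and the client signature may have changed since this was written, so check our docs:

```python
# pip install banana-dev
# Keys are placeholders from your Banana dashboard, and the model_inputs
# schema is whatever your app.py's inference code expects
import banana_dev as banana

api_key = "YOUR_API_KEY"
model_key = "YOUR_MODEL_KEY"
model_inputs = {"prompt": "My favorite programming language is"}

# Runs inference on Banana's serverless GPUs and returns the JSON output
out = banana.run(api_key, model_key, model_inputs)
print(out)
```

From here, the output is plain JSON, so you can wire it straight into whatever app you're building.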
Wrap Up:
Hopefully you enjoyed this tutorial. If you have any questions, send us a message on our Discord. Let us know on Twitter what machine learning model you'd like to see a deployment tutorial on next and we'll make it happen!