From the course: Hands-On AI: Building LLM-Powered Apps

Solution: Adding an LLM to the Chainlit app - Python Tutorial

- [Instructor] Welcome back. I hope you enjoyed building your own simple version of ChatGPT. Let's walk through the exercises.

For the first exercise, we define the model right here. We set the model to gpt-3.5-turbo-16k-0613, and we set streaming to true so we can see the tokens streaming back one by one. That helps a lot with the user experience.

Now let's go down to the next exercise: the prompt template. Here is my version. We start with a system prompt: "You are Chainlit GPT, a helpful assistant." Then we add the user prompt, marked "human," and here I define my prompt template variable as question. So whenever we call the model with a question variable, it gets substituted in here.

The next step is to set up the chain. We set up the chain right here: for the large language model, we use the model we just created, and for the prompt, we use the prompt we just created. Then we save the chain into the user session. Whenever we receive a message, we retrieve the chain from the user session. And here is where we integrate back with Chainlit: we pass question equal to message.content, which means we grab the message content and put it into our question variable, so it gets substituted into the prompt.

And this completes the application. Now we can run it using chainlit run app/app.py -w, and this should bring up our chat application. Here it is. Right now it is using GPT-3.5 to respond, so, say we type in, "Hello world." It will say, "Hello! How can I assist you today?"

So that's it. We just built our own very simple version of ChatGPT. Feel free to play around with the application we just built, and next we will discuss the limitations of large language models.
