From the course: Hands-On AI: Building LLM-Powered Apps

Solution: Set up prompting

- [Instructor] Okay, I hope you enjoyed the lab. Now let's go through how to navigate the application to find and extract the exact prompt. First, we go to app.py, find RetrievalQAWithSourcesChain, and Command-click (or Ctrl-click) on it to jump to the RetrievalQAWithSourcesChain source code. Scrolling through, we don't see anything interesting about prompt templates here, so let's navigate to its base class, BaseQAWithSourcesChain. Since we initialized the chain with from_chain_type, we look at that method and see a chain_type_kwargs parameter, the keyword arguments for the chain type we pass in. Inside this method there is a call to load_qa_with_sources_chain, so let's click on it and jump to that function. Navigating down, we see that it uses the _load_stuff_chain function. Click on _load_stuff_chain and voila, here is the prompt: a prompt argument typed as BasePromptTemplate, and a document_prompt, also a BasePromptTemplate, with defaults of stuff_prompt and EXAMPLE_PROMPT. If we click through to stuff_prompt, we can see the full prompt template used in our current chat-with-PDF application.

So let's copy the whole thing into our prompts.py, do a little bit of cleanup so it's clean, and save. Then we go to app.py and import the prompt templates into our application; it's PROMPT and EXAMPLE_PROMPT that we need to import. When we initialize our chain, we can then pass them in. As a reminder, let's go back to BaseQAWithSourcesChain: it uses chain_type_kwargs, so we use that parameter and set prompt to PROMPT and document_prompt to EXAMPLE_PROMPT. And with that, we are done enabling our application to have its own prompt template.

Then we give it a spin just to make sure everything runs: chainlit run app/app.py -w, and hopefully everything loads. Once the application is loaded, we know it is working. We open it in the browser, browse files, select the NVDA file from Downloads, and it is now running. Next, we will dive into the tooling that helps our prompt engineering efforts.
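[Editor's note] Below is a minimal sketch of what prompts.py and the app.py wiring might look like after this step, assuming a LangChain version where PromptTemplate and RetrievalQAWithSourcesChain.from_chain_type are importable from the paths shown. The template wording here is an abbreviated stand-in, not the library's exact text (copy the real template verbatim from the stuff_prompt module as shown in the video), and the names llm and docsearch stand in for the chat model and vector store already defined elsewhere in the course's app.py.

```python
# prompts.py -- sketch of the copied templates.
# Note: the template string below is an abbreviated stand-in; copy the exact
# wording from langchain's stuff_prompt module as demonstrated in the video.
from langchain.prompts import PromptTemplate

template = """Given the following extracted parts of a long document and a question, \
create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""

# Main question-answering prompt: fills in the retrieved summaries and the user question.
PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"])

# Per-document prompt: controls how each retrieved chunk and its source are formatted.
EXAMPLE_PROMPT = PromptTemplate(
    template="Content: {page_content}\nSource: {source}",
    input_variables=["page_content", "source"],
)


# app.py (excerpt) -- wiring the prompts into the chain via chain_type_kwargs.
from langchain.chains import RetrievalQAWithSourcesChain

from prompts import PROMPT, EXAMPLE_PROMPT  # module path assumed for this sketch

chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,                                # chat model defined elsewhere in app.py (assumption)
    chain_type="stuff",
    retriever=docsearch.as_retriever(),     # `docsearch` stands in for the vector store (assumption)
    chain_type_kwargs={
        "prompt": PROMPT,                   # overrides the default stuff-chain prompt
        "document_prompt": EXAMPLE_PROMPT,  # overrides the default per-document prompt
    },
)
```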
