From the course: Hands-On AI: Building LLM-Powered Apps
Challenge: Putting it all together - Python Tutorial
- [Instructor] In the previous labs, we enabled PDF document upload and processing, and we built a search engine to ingest the documents. In this lab, we are going to put everything together and create a full chat-with-PDF application using LangChain. Let's first navigate to app.py. In on_chat_start, we first ask the user to upload a PDF file. Then we load and chunk the PDF file and ingest the chunks into our Chroma search engine. Next, we have to create our retrieval QA chain; this is the retrieval augmented generation portion. When we retrieve, we usually want to return sources as well, so that users have grounding in, and trust of, the documents and the answers we provide. This is super exciting. Let's get to work.
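The flow described above (chunk a document, ingest it into a search engine, then retrieve chunks and answer with sources) can be sketched in plain Python. This is a minimal, hypothetical stand-in, not the course's actual LangChain/Chroma/Chainlit code: `chunk_text`, `ToyRetriever`, and `answer_with_sources` are illustrative names, and a keyword-overlap retriever replaces the real embedding search.

```python
def chunk_text(text: str, chunk_size: int = 200) -> list[str]:
    """Split a document into fixed-size character chunks
    (stand-in for the PDF loader and text splitter)."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

class ToyRetriever:
    """Keyword-overlap retriever standing in for the Chroma vector store."""
    def __init__(self) -> None:
        self.chunks: list[tuple[str, str]] = []  # (source_id, chunk)

    def ingest(self, source_id: str, text: str) -> None:
        # Chunk the uploaded document and index each chunk with its source.
        for chunk in chunk_text(text):
            self.chunks.append((source_id, chunk))

    def retrieve(self, query: str, k: int = 2) -> list[tuple[str, str]]:
        # Rank chunks by how many query terms they share (toy scoring).
        terms = set(query.lower().split())
        scored = sorted(
            self.chunks,
            key=lambda sc: len(terms & set(sc[1].lower().split())),
            reverse=True,
        )
        return scored[:k]

def answer_with_sources(query: str, retriever: ToyRetriever) -> dict:
    """Assemble an LLM prompt from retrieved chunks and report
    which sources grounded the answer."""
    hits = retriever.retrieve(query)
    context = "\n".join(chunk for _, chunk in hits)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return {"prompt": prompt, "sources": sorted({src for src, _ in hits})}
```

Usage mirrors the lab's steps: `retriever.ingest("report.pdf", pdf_text)` on upload, then `answer_with_sources(question, retriever)` per chat message; in the real app, the returned sources are what give users grounding and trust in the answer.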
Contents
- Retrieval augmented generation (3m 30s)
- Search engine basics (2m 32s)
- Embedding search (3m)
- Embedding model limitations (3m 15s)
- Challenge: Enabling load PDF to Chainlit app (48s)
- Solution: Enabling load PDF to Chainlit app (5m 4s)
- Challenge: Indexing documents into a vector database (1m 50s)
- Solution: Indexing documents into a vector database (1m 43s)
- Challenge: Putting it all together (1m 10s)
- Solution: Putting it all together (3m 17s)
- Trying out your chat with the PDF app (2m 15s)