From the course: Introduction to AI Orchestration with LangChain and LlamaIndex

Solution: Local LLM task offloading

Welcome back. How did it go? Were you able to get an elegant solution in place? Let's take a look at what I came up with. The prompt section is pretty straightforward: I just added a weather task that takes a city, and I gave a description here of what it does. The real action comes in the REPL loop. To start off, we look at the response from the LLM. If it starts with our task string, we check further whether it's a time request or a weather request. If it's a time request, we set the variable observation to what that function returns; if it's a weather request, we set observation to what the weather function returns. Otherwise, we report an unknown task and set the observation to None. That observation gets reused back at the top of the loop: if there is an observation, we use it as the input instead of asking the user to type something. That's how we get the multiple conversational turns…
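The flow described above, dispatching on the LLM's response and feeding the observation back into the loop, might be sketched roughly like this. Note this is a minimal sketch, not the course's actual code: the `TASK:` prefix, the `dispatch`, `get_time`, and `get_weather` helpers, and the `llm` callable are all hypothetical names chosen for illustration.

```python
from datetime import datetime


def get_time():
    # Hypothetical tool: return the current time as a string.
    return datetime.now().strftime("%H:%M")


def get_weather(city):
    # Hypothetical tool: a real implementation would call a weather API.
    return f"Sunny in {city}"


def dispatch(response):
    """Map an LLM response like 'TASK: weather Paris' to a tool call.

    Returns the tool's result as the observation, or None for
    non-task responses and unknown tasks.
    """
    if not response.startswith("TASK:"):
        return None
    task = response[len("TASK:"):].strip()
    if task.startswith("time"):
        return get_time()
    if task.startswith("weather"):
        city = task[len("weather"):].strip()
        return get_weather(city)
    # Unknown task: no observation to feed back.
    return None


def repl(llm, max_turns=10):
    """Drive the conversation; llm is any callable from str to str."""
    observation = None
    for _ in range(max_turns):
        # If the last turn produced an observation, feed it back to the
        # LLM instead of asking the user to type something.
        user_input = observation if observation else input("> ")
        response = llm(user_input)
        print(response)
        observation = dispatch(response)
```

Separating the dispatch step from the loop keeps the tool-routing logic testable on its own, independent of the interactive input handling.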
