An LLM workflow is the underlying structure that makes RAG (retrieval-augmented generation) and AI agents possible. It is the "recipe" that guides the AI to solve a problem, consisting of a series of steps that happen before and after an LLM call.
Building these smart workflows requires three key components: input, process, and output.
The first step is input, which is where prompt engineering comes in.
Getting the right output from an LLM begins with providing clear, specific instructions. A useful framework is the 5Ws and H (What, Where, Who, When, Why, and How), a checklist that keeps prompts detailed and focused.
For example:
- Instead of asking: "Give me some title tag ideas,"
- A more effective prompt would be: "Generate five SEO-friendly title tags for a white jeans category page. The audience is potential jeans buyers, and the tone should be engaging and descriptive. Each title should be under 50 characters, optimized for search results."
This method makes it far more likely that the LLM delivers a relevant result on the first try.
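To make this concrete, here is a minimal sketch of sending that prompt through an LLM API. It assumes the official OpenAI Python client and the model name `gpt-4o-mini` purely for illustration; any chat-completion API would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The 5Ws-and-H prompt from above: what (title tags), where (category page),
# who (jeans buyers), how (engaging, descriptive, under 50 characters).
prompt = (
    "Generate five SEO-friendly title tags for a white jeans category page. "
    "The audience is potential jeans buyers, and the tone should be engaging "
    "and descriptive. Each title should be under 50 characters, optimized "
    "for search results."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; swap in whatever you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```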
The next component is the process, which requires the LLM to return structured output.
For a workflow to function correctly, the system needs to parse the LLM's response before it can take the next step. This is achieved by asking the LLM for a specific output format, such as a table or a JSON object, which the system can then reliably parse and act upon.
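As a sketch of that idea: the prompt below asks for a JSON array, and the workflow parses it before moving on. The schema (a `title` and `length` field) is invented for illustration; the key point is that the format is agreed on up front so the response is machine-readable.

```python
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Generate five SEO-friendly title tags for a white jeans "
            "category page. Respond with only a JSON array of objects, "
            'each with "title" and "length" keys, and no other text.'
        ),
    }],
)

# Parse the structured output so the next workflow step can act on it.
try:
    titles = json.loads(response.choices[0].message.content)
except json.JSONDecodeError:
    titles = []  # in a real workflow: retry, or re-prompt with the error

for item in titles:
    print(item["title"])
```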
The final step is the output, which involves integrating the LLM's response with other systems.
This is where the workflow uses tools and APIs to make things happen in the real world, like sending an email or updating a record in a database.
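Continuing the sketch, the parsed titles could be pushed into another system over HTTP. The endpoint and payload below are entirely hypothetical, standing in for whatever CMS, CRM, or email API the workflow actually targets.

```python
import requests

# Hypothetical internal endpoint; replace with your real CMS or CRM API.
CMS_ENDPOINT = "https://cms.example.com/api/pages/white-jeans/title"

def publish_title(title: str) -> None:
    """Send one approved title tag to the (hypothetical) CMS."""
    resp = requests.post(CMS_ENDPOINT, json={"title": title}, timeout=10)
    resp.raise_for_status()  # surface failures so the workflow can react

# Take the first parsed title from the previous step and act on it.
if titles:
    publish_title(titles[0]["title"])
```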
A case study illustrates how these components build on each other.
Consider the simple business problem of finding a company's return policy.
- A plain LLM prompt can only offer a generic answer, because the model has never seen the company's actual policy.
- By adding a RAG component, the workflow first finds the specific policy document and then sends that context to the LLM to generate an accurate answer.
- An AI agent can take this a step further by using planning and tool use. It can find the policy, look up the customer's purchase history, check if the item is still returnable, and then send the customer a pre-populated return label (sketched in code after this list).
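The sketch below compresses that progression into code. The retrieval step is a naive keyword match over two in-memory documents, and every tool (`get_purchase_history`, `is_returnable`, `send_return_label`) is a hypothetical stub. The plan is also hard-coded for readability, whereas a true agent would let the LLM decide which tools to call; still, the shape (retrieve, reason, then act through tools) is the same.

```python
import re

from openai import OpenAI

client = OpenAI()

# --- RAG step: a toy retriever over two in-memory policy documents ---
DOCUMENTS = [
    "Return policy: items may be returned within 30 days of delivery "
    "for a full refund, provided they are unworn.",
    "Shipping policy: standard shipping takes 3 to 5 business days.",
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str) -> str:
    """Pick the document with the most words in common with the query."""
    return max(DOCUMENTS, key=lambda doc: len(tokens(query) & tokens(doc)))

# --- Tool stubs the agent can call (all hypothetical) ---
def get_purchase_history(customer_id: str) -> dict:
    return {"item": "white jeans", "days_since_delivery": 12}

def is_returnable(purchase: dict) -> bool:
    return purchase["days_since_delivery"] <= 30

def send_return_label(customer_id: str) -> None:
    print(f"Pre-populated return label emailed to customer {customer_id}.")

# --- Agent-style flow: retrieve context, reason with the LLM, then act ---
def handle_return_request(customer_id: str, question: str) -> str:
    policy = retrieve(question)                   # RAG: ground the answer
    purchase = get_purchase_history(customer_id)  # tool use: fetch facts
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Policy: {policy}\nPurchase: {purchase}\n"
                f"Customer question: {question}\nAnswer concisely."
            ),
        }],
    ).choices[0].message.content
    if is_returnable(purchase):                   # act in the real world
        send_return_label(customer_id)
    return answer

print(handle_return_request("C-123", "What is your return policy?"))
```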
This progression shows that LLM workflows sit on a gradient of complexity. Businesses can start with simple prompts and gradually add more sophisticated components like RAG and agents as their needs grow, which addresses the pain point of "inadequate expertise" by showing a clear and manageable path to adoption.