Overview
Prompt engineering is the strategic crafting of prompts to guide large language models (LLMs) such as GPT-4 toward accurate, desired outputs.
Before prompt engineering
Before you follow along, we assume that you:
- have a first draft of your prompt
- know the audience that you are tailoring your prompt to
- have some benchmark to measure prompt improvements
- have some example inputs and desired outputs to test your prompts with
We recommend taking a moment to brainstorm on these points to make the most of the suggestions that follow; a minimal sketch of such a benchmark harness appears below.
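To make the last two prerequisites concrete, a benchmark can start as small as a list of input/expected-output pairs and an exact-match score. This is an illustrative sketch, not a prescribed tool: call_model is a hypothetical placeholder for whichever model client you use, and the sentiment task is made up.

```python
# Minimal prompt-evaluation harness: score one prompt template
# against a handful of example inputs and desired outputs.

# Hypothetical placeholder -- swap in your actual LLM client here.
def call_model(prompt: str) -> str:
    raise NotImplementedError("wire up your model client")

PROMPT_TEMPLATE = "Classify the sentiment as positive or negative: {text}"

EXAMPLES = [
    {"input": "I love this product!", "expected": "positive"},
    {"input": "Terrible support experience.", "expected": "negative"},
]

def score_prompt(template: str, examples: list[dict]) -> float:
    """Return the fraction of examples the prompt answers correctly."""
    correct = 0
    for ex in examples:
        output = call_model(template.format(text=ex["input"]))
        # Exact match is the simplest metric; substitute a stricter
        # or fuzzier comparison as your task requires.
        if output.strip().lower() == ex["expected"]:
            correct += 1
    return correct / len(examples)

# Usage: score = score_prompt(PROMPT_TEMPLATE, EXAMPLES)
```

Re-running this score after each prompt revision turns "the prompt feels better" into a number you can compare.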
How to prompt engineer
- Be specific and clear (first sketch below)
- Use structured formats (first sketch below)
- Leverage role-playing (first sketch below)
- Implement few-shot learning (second sketch below)
- Use constrained outputs (third sketch below)
- Use Chain-of-Thought prompting (third sketch below)
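A minimal sketch of the first three techniques together: a clear, specific instruction, a role for the model to play, and a structured format with explicit delimiters. The XML-style tags, the role wording, and the example document are illustrative choices, not required syntax.

```python
# One prompt combining a role, a specific instruction, and a
# structured format with explicit delimiters.
document = "Quarterly revenue rose 12% while churn fell to 3%..."

prompt = f"""You are a financial analyst writing for non-experts.

Summarize the document below in exactly three bullet points,
each under 20 words, using plain language.

<document>
{document}
</document>

Respond with only the three bullet points."""

print(prompt)
```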
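Few-shot learning means showing the model a handful of worked input/output pairs before the real input, so it can infer the pattern. A sketch with made-up support-ticket examples:

```python
# Few-shot prompt: demonstrate the desired mapping with examples,
# then append the real input in the same format.
few_shot_examples = [
    ("The package arrived two days late.", "shipping"),
    ("I was charged twice this month.", "billing"),
    ("The app crashes when I open settings.", "technical"),
]

new_ticket = "My invoice shows the wrong address."

lines = ["Classify each support ticket as shipping, billing, or technical.", ""]
for text, label in few_shot_examples:
    lines.append(f"Ticket: {text}")
    lines.append(f"Category: {label}")
    lines.append("")
lines.append(f"Ticket: {new_ticket}")
lines.append("Category:")

prompt = "\n".join(lines)
print(prompt)
```

Ending the prompt with "Category:" nudges the model to complete the pattern with a label rather than free-form prose.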
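Constrained outputs and Chain-of-Thought prompting combine naturally: ask the model to reason step by step, then emit its final answer in a fixed, machine-parseable shape. A sketch assuming a JSON answer format:

```python
# Chain-of-Thought with a constrained final answer: request
# step-by-step reasoning, then a JSON object with fixed keys.
question = "A store sells pens at $2 each, 3-for-$5. What do 7 pens cost?"

prompt = f"""Solve the problem below. Think through it step by step,
then give your final answer as a JSON object with exactly these keys:
{{"answer": <number>, "unit": "<string>"}}

Problem: {question}

Reasoning:"""

print(prompt)
```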
When to start prompt engineering?
- Start from the beginning. It’s never too early to think about how your prompt will affect the output.
- When you are refining model outputs to meet your expectations.
- When you are expanding features and need the model to adapt to new use cases.
- When you want to optimize cost and performance by reducing token usage, lowering latency, and improving output quality.
Why prompt engineer?
- Get more accurate and relevant responses.
- Get responses that follow specific instructions, styles, or formats.
- Reduce API costs by decreasing the number of tokens used per request.
- Avoid inappropriate or biased outputs.
- Get consistent and reliable responses across different interactions.
- Improve user experience with more helpful and concise responses.