10 Prompts and When to Use them
Your Go-To Cheat Sheet for Prompting with Purpose
Start learning AI in 2025
Everyone talks about AI, but no one has the time to learn it. So, we found the easiest way to learn AI in as little time as possible: The Rundown AI.
It's a free AI newsletter that keeps you up-to-date on the latest AI news, and teaches you how to apply it in just 5 minutes a day.
Plus, complete the quiz after signing up and they’ll recommend the best AI tools, guides, and courses – tailored to your needs.

Hello everyone and welcome to my newsletter where I discuss real-world skills needed for the top data jobs. 👏
This week I’m providing you with 10 prompts and when to use each. 👀
Not a subscriber? Join the informed. Over 200K people read my content monthly.
Thank you. 🎉
Prompting is how you directly interact with Large Language Models (LLMs), whether that’s through AI tools like ChatGPT or when building your own LLM-powered applications.
If you want more control over an LLM’s outputs, you need to master prompt engineering — the art of crafting clear, effective prompts to guide the model’s responses. With the right prompt engineering techniques, you can boost an LLM’s accuracy, consistency, creativity, and overall usefulness for your specific use cases.
In this article, we’ll explore 10 proven prompt engineering techniques and show you when and how to use them to get better results from LLMs and build more effective AI applications.
1. Zero-Shot Prompting
How to use: Give clear instructions without examples.
When to use: When the task is simple and straightforward, like translations or factual queries.
PROMPT: Translate the English phrase 'Flowers on the road' to Spanish.
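If you're sending prompts from code instead of a chat UI, a zero-shot prompt is just a single user message. Here's a minimal sketch assuming the OpenAI Python SDK with an API key set in the environment; the model name is a placeholder, and any chat-style LLM API follows the same pattern.
CODE (Python):
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # Zero-shot: one clear instruction, no examples
        {"role": "user", "content": "Translate the English phrase 'Flowers on the road' to Spanish."},
    ],
)
print(response.choices[0].message.content)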
2. One-Shot Prompting
How to use: Give clear instructions and add a single example to demonstrate the task or desired output.
When to use: When one example is needed (and enough) to clarify the task or output format for the model.
PROMPT: In uppercase, return the Spanish translation of the English word 'basket' only.
EXAMPLE:
English word (input): River
Spanish translation (output): RÍO
3. Few-Shot Prompting
How to use: Give clear instructions with a few examples in the prompt.
When to use: When you need to adapt the model to a specific task or domain without fine-tuning, and when you need more consistent and accurate outputs.
PROMPT: Return only the sentiment of this statement: 'The lecture was quite boring'. Respond with either Positive, Negative, or Neutral.
EXAMPLES:
'This movie was great!': Positive,
'I hated the service.': Negative,
'I don't know how I feel about it.': Neutral
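When you're calling an API rather than typing into a chat box, a common way to pass few-shot examples is as alternating user/assistant message pairs ahead of the real input, so the model sees the exact pattern to imitate. A sketch under the same OpenAI SDK assumption as above:
CODE (Python):
from openai import OpenAI

client = OpenAI()

task = "Return only the sentiment (Positive, Negative, or Neutral): "

# Few-shot: demonstrate the task with a few input/output pairs, then ask the real question
messages = [
    {"role": "user", "content": task + "'This movie was great!'"},
    {"role": "assistant", "content": "Positive"},
    {"role": "user", "content": task + "'I hated the service.'"},
    {"role": "assistant", "content": "Negative"},
    {"role": "user", "content": task + "'I don't know how I feel about it.'"},
    {"role": "assistant", "content": "Neutral"},
    {"role": "user", "content": task + "'The lecture was quite boring'"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)  # expected: Negative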
4. Role Prompting
How to use: Give clear instructions and assign a specific persona to the model.
When to use: When the task is open-ended and the output needs to reflect a particular perspective, personality, or tone.
PROMPT: Write a short blog (500 words) with 4 points about college hacks.
ROLE: Act as a sweet college girl who uses a lot of Gen Z slang.
5. Style Prompting
How to use: Specify the desired style, tone, or genre in the prompt.
When to use: When the output needs to match a particular style or tone.
PROMPT: Write a brief formal email requesting a raise.
6. Emotion Prompting
How to use: Add an emotionally charged sentence or phrase to the prompt.
When to use: When the task involves creative text generation such as storytelling and poetry.
PROMPT: Write a poem about my lost imaginary friend who never gave up. I still miss my friend.
7. Contextual Prompting
How to use: Provide background or custom information before giving clear instructions in the prompt.
When to use: When background or domain-specific details are needed to make the response more accurate or relevant (e.g., in RAG chatbots).
CONTEXT: My name is Jennifer Luke and I'm a marketing manager at JL firm.
PROMPT: Write an email to the team about the upcoming campaign.
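In an application such as a RAG chatbot, contextual prompting usually comes down to assembling the background (retrieved documents, user details) into the prompt ahead of the instruction. A minimal sketch; here the context is hard-coded, but in practice it would come from a retriever or a user profile store:
CODE (Python):
from openai import OpenAI

client = OpenAI()

# In a real RAG app, this would be retrieved from a vector store or user profile
context = "My name is Jennifer Luke and I'm a marketing manager at JL firm."
instruction = "Write an email to the team about the upcoming campaign."

# Contextual prompting: background first, then the actual instruction
prompt = f"Context:\n{context}\n\nTask:\n{instruction}"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)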
8. Rephrase and Respond (RaR)
How to use: Tell the LLM to rephrase the question into a better prompt before generating the final answer.
When to use: For higher accuracy on complex tasks or when you need to evaluate the LLM’s understanding of a task.
PROMPT: Rephrase and expand the following question, and then answer it: What is the difference between correlation and causation?
9. Re-reading (RE2)
How to use: Begin with an instruction or question, add the phrase “Read the question again:” immediately after, and then repeat the original instruction/question.
When to use: For complex tasks that involve reasoning.
PROMPT:
A farmer has a rectangular field that is 3 times as long as it is wide. The perimeter of the field is 400 meters. What are the dimensions of the field?
Read the question again: "A farmer has a rectangular field that is 3 times as long as it is wide. The perimeter of the field is 400 meters. What are the dimensions of the field?"
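Since RE2 is just a prompt transformation, it's easy to wrap in a small helper that repeats the question after the "read again" cue. The function below is my own illustrative helper, not part of any library:
CODE (Python):
def re2_prompt(question: str) -> str:
    # Re-reading (RE2): state the question, then repeat it after a "read again" cue
    return f'{question}\nRead the question again: "{question}"'

question = (
    "A farmer has a rectangular field that is 3 times as long as it is wide. "
    "The perimeter of the field is 400 meters. What are the dimensions of the field?"
)
print(re2_prompt(question))
(For reference, the field works out to 50 by 150 meters, since the perimeter is 2(w + 3w) = 8w = 400.)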
10. System Prompting
How to use: Give the LLM high-level instructions or context that it will follow throughout the entire interaction.
In ChatGPT, you can do this using the ‘customize GPT’ feature.
When building LLM apps, you can do this by providing the context or instructions as the ‘system prompt’.
When to use: When you need to set the overall behavior and tone of the LLM in a conversational setting.
SYSTEM PROMPT: You are a helpful assistant that provides factual responses in a concise tone.
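When you're building with an API, the system prompt is simply the first message with its role set to "system"; the rest of the conversation follows it. Same OpenAI SDK assumption as the earlier sketches:
CODE (Python):
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # System prompting: high-level behavior that applies to the whole conversation
        {"role": "system", "content": "You are a helpful assistant that provides factual responses in a concise tone."},
        {"role": "user", "content": "Explain prompt engineering in two sentences."},
    ],
)
print(response.choices[0].message.content)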
There’s no magic formula or one-size-fits-all approach to prompt engineering. Every model behaves differently, so the best way to get the results you want is to experiment. You’ll often need to adjust your instructions, add more context, or combine multiple techniques — just like in some of the examples we covered above.
Thanks everyone and have a great day. 👏