Prompt Engineering — Getting the Best from LLMs
Prompt engineering is the art of writing inputs that reliably elicit high-quality outputs from LLMs. With a GPT-4-class model, a poorly written prompt often gets a mediocre response, while a well-structured prompt gets an exceptional one: same model, dramatically better output.
Advanced Prompting Techniques
from openai import OpenAI
client = OpenAI() # requires OPENAI_API_KEY env var
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# 1. ZERO-SHOT: No examples — relies purely on model knowledge
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
def zero_shot(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a sentiment analysis expert. Respond with only: POSITIVE, NEGATIVE, or NEUTRAL"},
            {"role": "user", "content": f"Classify: '{text}'"},
        ],
        temperature=0,  # deterministic for classification
        max_tokens=10,
    )
    return response.choices[0].message.content.strip()
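# Hypothetical usage (the review string is an illustrative assumption):
print(zero_shot("The battery lasts all day, I love it!"))  # expect POSITIVE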
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# 2. FEW-SHOT: 3-5 examples teach the expected format
#    Often boosts accuracy noticeably over zero-shot, especially for niche formats
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
few_shot_prompt = """Classify the sentiment of these reviews:
Review: "The product quality was amazing!" → POSITIVE
Review: "Delivery took 3 weeks and item was damaged" → NEGATIVE
Review: "It's okay, does what it says" → NEUTRAL
Review: "{text}" → """
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# 3. CHAIN-OF-THOUGHT (CoT): a dramatic improvement for multi-step reasoning
#    Prompting "think step by step" elicits intermediate reasoning; the biggest
#    gains are reported in large models (roughly 100B+ parameters)
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
def chain_of_thought(problem: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a math tutor. Think through problems step by step before giving the final answer."},
            {"role": "user", "content": f"{problem}\n\nLet's think through this step by step:"},
        ],
        temperature=0,
        max_tokens=500,
    )
    return response.choices[0].message.content
# Example: "A store sells apples for $0.75 each. Alice buys 7 apples and pays with $10. How much change does she get?"
# CoT answer walks through: 7 × $0.75 = $5.25 → $10 - $5.25 = $4.75
# Asking for only the final answer fails more often on multi-step arithmetic; CoT is markedly more reliable
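# Hypothetical usage with the worked example above:
print(chain_of_thought(
    "A store sells apples for $0.75 each. Alice buys 7 apples "
    "and pays with $10. How much change does she get?"
))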
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# 4. STRUCTURED OUTPUT — JSON mode for reliable parsing
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
import json
def extract_entities(text: str) -> dict:
    """Reliably extract structured data from unstructured text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": """Extract entities from the text.
Respond with ONLY valid JSON:
{"people": [], "organizations": [], "locations": [], "dates": []}"""},
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},  # forces valid JSON output
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)
result = extract_entities("Elon Musk founded SpaceX in 2002 in Hawthorne, California.")
print(json.dumps(result, indent=2))
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# 5. SYSTEM PROMPT ENGINEERING — define the model's persona
# ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
system_prompts = {
    "Code reviewer": """You are a senior software engineer specializing in Python code review.
Analyze code for: bugs, security issues, performance, readability.
Format your response as: ISSUES | SUGGESTIONS | RATING (1-10)""",
    "Medical assistant": """You are a medical information assistant.
IMPORTANT: Always recommend consulting a qualified doctor.
Provide factual, evidence-based information.
Never diagnose conditions or prescribe treatments.""",
    "Data analyst": """You are a data analyst.
When given data, always:
1. Identify patterns and trends
2. Highlight anomalies
3. Suggest actionable insights
4. Present findings clearly with bullet points""",
}
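# A minimal sketch of applying one of the personas above (the helper name and
# the sample code snippet are illustrative assumptions):
def ask_with_persona(persona: str, user_content: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompts[persona]},
            {"role": "user", "content": user_content},
        ],
        temperature=0.2,  # slight variety suits free-form review prose
    )
    return response.choices[0].message.content

print(ask_with_persona("Code reviewer", "def add(a,b): return a+b"))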
Tip
Practice these prompting techniques in small, isolated examples before integrating them into larger projects. Breaking concepts into small experiments builds genuine understanding faster than reading alone.
Better prompts = better AI output. Structure, examples, and constraints matter.
Practice Task
(1) Write a working example of one of these prompting techniques from scratch without looking at notes. (2) Modify it to handle an edge case (empty input, null value, or error state). (3) Share your solution in the Priygop community for feedback.
Common Mistake
A common mistake in prompt engineering is skipping edge case testing: empty inputs, null values, and unexpected data types. Always validate boundary conditions to write robust, production-ready AI code.
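As a minimal sketch (the guard and fallback value are illustrative assumptions, not part of any API), a thin wrapper can validate input before spending an API call:

def safe_zero_shot(text: str | None) -> str:
    # Guard against None or empty/whitespace-only input before calling the model.
    if not text or not text.strip():
        return "NEUTRAL"  # illustrative fallback for empty input
    return zero_shot(text)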