GenAI Prompt Engineering Basics

A prompt is the instruction given to a generative AI model. Prompt engineering is the skill of crafting prompts that produce the most accurate, relevant, and useful responses. A well-designed prompt can dramatically improve output quality — even without changing the model itself.

Why Prompt Engineering Matters

The same model given different prompts for the same task produces very different results. Consider this example:

Weak Prompt:   "Write about climate change."
Strong Prompt: "Write a 150-word explanation of climate change for a
                10-year-old student. Use simple vocabulary, one real-world
                example, and a hopeful tone."

Weak prompt result:  Generic overview, unpredictable length and tone
Strong prompt result: Targeted, age-appropriate, concise explanation

The model's capability did not change. Only the instruction changed — but the output is vastly more useful.

The Anatomy of a Good Prompt

A well-structured prompt typically includes four components. Not every prompt needs all four, but including the relevant ones consistently improves results.

Component            | What It Does                                      | Example
---------------------+---------------------------------------------------+-------------------------------------------------------
Role / Persona       | Sets the model's perspective and expertise level  | "Act as a senior software engineer"
Task / Instruction   | States exactly what the model should do           | "Review this Python function and identify bugs"
Context              | Provides background information the model needs   | "The function is part of a payment processing system"
Format / Constraints | Specifies how the output should look              | "Respond in bullet points, max 5 items"
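The four components can also be assembled programmatically when prompts are built inside an application. A minimal sketch, assuming a hypothetical `build_prompt` helper (the function and its argument names are illustrative, not from any particular library):

```python
def build_prompt(task, role=None, context=None, fmt=None):
    """Assemble a prompt from the four components; only the task is required."""
    parts = []
    if role:
        parts.append(role)      # e.g. "Act as a senior software engineer."
    if context:
        parts.append(context)   # background the model needs
    parts.append(task)          # the instruction itself
    if fmt:
        parts.append(fmt)       # output format / constraints
    return "\n".join(parts)

prompt = build_prompt(
    task="Review this Python function and identify bugs.",
    role="Act as a senior software engineer.",
    context="The function is part of a payment processing system.",
    fmt="Respond in bullet points, max 5 items.",
)
```

Keeping the components as separate arguments makes it easy to reuse the same task with a different role or format.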

Core Prompt Engineering Techniques

1. Zero-Shot Prompting

Give the model a task with no examples. This works well for simple, well-defined tasks.

Prompt: "Classify the sentiment of this review as Positive, Negative,
         Neutral, or Mixed: 'The delivery was late but the product is excellent.'"

Output: "Mixed (Positive for product, Negative for delivery)"
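Because a zero-shot prompt gives the model no examples, the reply format can drift (extra punctuation, parenthetical explanations, and so on). A small normalizer, sketched below with an illustrative `LABELS` set, makes such replies usable downstream:

```python
# Zero-shot replies often carry extra text around the label; normalize them.
LABELS = {"positive", "negative", "neutral", "mixed"}

def normalize_label(reply):
    """Map a free-form reply such as 'Mixed (Positive for product, ...)'
    onto a known label, or None when no label is recognized."""
    words = reply.strip().split()
    if not words:
        return None
    first = words[0].strip(".,:;()").lower()
    return first.capitalize() if first in LABELS else None
```

For example, the reply shown above normalizes to "Mixed", while an off-topic reply yields None and can be retried.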

2. Few-Shot Prompting

Provide a few examples before the actual task. This shows the model the format and pattern expected.

Prompt:
"Classify sentiment:
 Review: 'I love this product!' → Positive
 Review: 'Terrible quality.' → Negative
 Review: 'It works as expected.' → Neutral
 Review: 'Fast shipping but cracked screen.' → ?"

Output: "Mixed"
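Few-shot prompts like the one above are usually generated from a list of labeled examples rather than written by hand. A sketch, with a hypothetical `few_shot_prompt` helper mirroring the arrow format shown:

```python
def few_shot_prompt(examples, query):
    """Build a few-shot classification prompt from (review, label) pairs."""
    lines = ["Classify sentiment:"]
    for review, label in examples:
        lines.append(f" Review: {review!r} → {label}")
    lines.append(f" Review: {query!r} → ?")          # the actual task
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("I love this product!", "Positive"),
     ("Terrible quality.", "Negative"),
     ("It works as expected.", "Neutral")],
    "Fast shipping but cracked screen.",
)
```

This keeps the examples in data, so they can be swapped or extended without rewriting the prompt text.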

3. Chain-of-Thought Prompting

Ask the model to reason step by step before giving a final answer. This improves accuracy on complex reasoning, math, and logic tasks.

Without CoT:
  Prompt:  "A shop has 24 apples. It sells 7 in the morning and 9 in the
             afternoon. How many are left?"
  Output:  "8" (a bare answer; on harder problems it may be rushed or wrong)

With CoT:
  Prompt:  "Solve this step by step: A shop has 24 apples..."
  Output:  "Step 1: Start with 24.
             Step 2: Sold 7 in the morning → 24 - 7 = 17
             Step 3: Sold 9 in the afternoon → 17 - 9 = 8
             Answer: 8 apples remain."
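In code, chain-of-thought is typically just a wrapper that prepends the step-by-step instruction, paired with a parser that pulls the final answer out of the reasoning text. A sketch, assuming the reply ends in an "Answer: ..." line as in the example above (real model output varies):

```python
import re

def with_cot(prompt):
    """Prepend the step-by-step instruction to any prompt."""
    return "Solve this step by step: " + prompt

def extract_answer(reply):
    """Pull the final integer from a reply ending in e.g. 'Answer: 8 apples remain.'"""
    match = re.search(r"Answer:\s*(-?\d+)", reply)
    return int(match.group(1)) if match else None
```

The arithmetic itself checks out: 24 - 7 - 9 = 8, matching the model's final line.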

4. Role Prompting

Assigning a role to the model changes the style and depth of its output.

Without role: "Explain how the heart works."
With role:    "Explain how the heart works. You are a cardiologist
               talking to a patient with no medical background."
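Programmatically, role prompting is a one-line transformation. A minimal sketch (the `with_role` helper is illustrative):

```python
def with_role(prompt, role):
    """Append a persona sentence to a task, as in the example above."""
    return f"{prompt} You are {role}."
```

The same task can then be rendered for different audiences by swapping only the role string.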

5. Output Format Specification

Specify exactly how the output should be structured — plain text, bullet points, tables, JSON, or markdown.

Prompt: "List the top 3 benefits of exercise. Format as a numbered list.
         Each point should be one sentence long."

Output:
1. Regular exercise strengthens the heart and reduces the risk of cardiovascular disease.
2. Physical activity releases endorphins, improving mood and reducing stress.
3. Exercise supports healthy weight management by burning calories efficiently.
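Format specification pays off most when the output feeds into code. A common pattern is to request JSON and validate the reply before using it; a sketch with an illustrative instruction and validator (the sample replies are hypothetical):

```python
import json

FORMAT_INSTRUCTION = (
    "Respond ONLY with a JSON array of exactly 3 strings, "
    "each one sentence long."
)

def parse_benefits(reply):
    """Validate a model reply against the requested format; None if it fails."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None
    if isinstance(data, list) and len(data) == 3 and all(isinstance(s, str) for s in data):
        return data
    return None
```

A None result signals that the model ignored the format and the request should be retried or repaired.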

Prompt Structure Diagram

┌────────────────────────────────────────────────────────┐
│                     PROMPT STRUCTURE                   │
├────────────────────────────────────────────────────────┤
│ [Role]     → "You are a data analyst..."               │
│ [Task]     → "Analyze this sales data and find trends" │
│ [Context]  → "This is Q4 2024 data from a retail store"│
│ [Format]   → "Respond with 3 bullet points, under 50w" │
└────────────────────────────────────────────────────────┘
                          │
                          ▼
                     AI Model
                          │
                          ▼
┌────────────────────────────────────────────────────────┐
│                  QUALITY OUTPUT                        │
│  • Sales peaked in December (holiday season)           │
│  • Electronics outperformed clothing by 34%            │
│  • Weekend sales averaged 2x weekday volume            │
└────────────────────────────────────────────────────────┘

Common Prompt Engineering Mistakes

Mistake                           | Problem                                          | Fix
----------------------------------+--------------------------------------------------+--------------------------------------------------
Vague task description            | Model guesses what was wanted                    | Be specific about the exact task
No format specified               | Output length and structure are unpredictable    | State expected format explicitly
Too much information              | Model loses focus on what matters                | Trim context to only what is relevant
Asking multiple questions at once | Partial or jumbled answers                       | Break into separate prompts or numbered questions
Negative instructions only        | Model focuses on what not to do, misses what to do | State what to do, not just what to avoid

System Prompts vs User Prompts

In API-based applications, prompts are often divided into two parts:

  • System prompt: Sets the model's overall behavior, tone, and role for the entire session. Written by the application developer.
  • User prompt: The actual input from the end user — a question, request, or command.

System Prompt (developer-written, invisible to user):
"You are a helpful customer support agent for a software company.
 Always be polite, concise, and refer users to documentation when relevant."

User Prompt (from end user):
"How do I reset my password?"

Model Response:
"To reset your password, visit the login page and click 'Forgot Password.'
 You'll receive an email with a reset link within 5 minutes.
 For more help, visit our documentation at support.example.com."
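This split maps directly onto the chat-style message lists used by most LLM APIs, where each message carries a "role" and "content". A sketch in the OpenAI-style format (the client call is omitted):

```python
def build_messages(system_prompt, user_prompt):
    """Chat-style message list: the system prompt sets behavior for the
    session, the user prompt carries the end user's request."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a helpful customer support agent for a software company. "
    "Always be polite, concise, and refer users to documentation when relevant.",
    "How do I reset my password?",
)
# A real application would pass `messages` to its LLM client here.
```

Because the system prompt lives in application code, end users can never see or overwrite it directly.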

Prompt engineering builds the interface between humans and AI models. The next topic applies this skill directly to the most common use case: text generation — creating written content of all kinds using LLMs.
