Prompt Engineering

Prompt engineering is the skill of communicating effectively with AI models. It's less about tricks and more about clarity.

Core Principles

Be specific. Vague prompts get vague results:

# Bad
"Make this code better"

# Good
"Refactor this function to:
1. Replace the nested loops with a dict lookup
2. Add type hints for all parameters
3. Handle the case where `user_id` is None"

Show, don't tell. Examples are worth more than descriptions:

# Instead of explaining the format...
"Convert these to the same format as this example:

Input: 'John Smith, 2024-01-15, $500'
Output: {'name': 'John Smith', 'date': '2024-01-15', 'amount': 500}

Now convert:
'Jane Doe, 2024-02-20, $750'
'Bob Wilson, 2024-03-01, $300'"
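Few-shot prompts like the one above are often assembled programmatically. A minimal sketch, where the helper name `few_shot_prompt` is hypothetical and the example data mirrors the text above:

```python
def few_shot_prompt(examples, new_inputs):
    """Assemble an instruction, worked examples, and new inputs into one prompt."""
    parts = ["Convert these to the same format as the examples:"]
    for inp, out in examples:
        # Each worked example shows the model the exact input/output shape.
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append("Now convert:")
    parts.extend(new_inputs)
    return "\n\n".join(parts)

examples = [
    ("'John Smith, 2024-01-15, $500'",
     "{'name': 'John Smith', 'date': '2024-01-15', 'amount': 500}"),
]
prompt = few_shot_prompt(examples, ["'Jane Doe, 2024-02-20, $750'"])
```

Keeping examples in a list makes it easy to add more when the model misreads an edge case.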

Set constraints. Tell the model what NOT to do:

"Add error handling to this Django view.
- Do NOT change the existing logic
- Do NOT add logging
- Only handle ValueError and DoesNotExist
- Return JSON error responses with appropriate status codes"

System Prompts

System prompts set the AI's role and constraints for an entire conversation:

You are a senior Django developer reviewing pull requests.

Rules:
- Focus on correctness, security, and performance
- Ignore style issues (we have linters for that)
- Flag any raw SQL queries
- Check for missing migrations
- Verify all new views have tests
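In practice the system prompt travels as a separate field alongside the conversation. A sketch of the request shape, loosely following the Anthropic Messages API; the model name is illustrative and the request is only built here, not sent:

```python
SYSTEM_PROMPT = """You are a senior Django developer reviewing pull requests.

Rules:
- Focus on correctness, security, and performance
- Ignore style issues (we have linters for that)
- Flag any raw SQL queries"""

def build_request(user_message: str) -> dict:
    return {
        "model": "claude-sonnet-4-20250514",  # illustrative model name
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,  # applies to every turn of the conversation
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_request("Please review this pull request diff.")
```

Because the system prompt sits outside the message list, it persists across turns without being repeated in every user message.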

Structured Output

Define a tool with a JSON Schema so the model returns predictable, machine-parseable responses instead of free-form prose:

tools = [{
    "name": "analyze_code",
    "description": "Analyze code quality",
    "input_schema": {
        "type": "object",
        "properties": {
            "issues": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "severity": {"enum": ["low", "medium", "high"]},
                        "line": {"type": "integer"},
                        "description": {"type": "string"},
                        "fix": {"type": "string"}
                    }
                }
            }
        }
    }
}]
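Once the model responds, the structured data arrives inside a tool-use block. A sketch of pulling it out, where `mock_response` imitates the shape of a tool-use reply (in real use it would come from the API, not be hard-coded):

```python
def extract_tool_input(response: dict, tool_name: str) -> dict:
    """Return the input of the first tool_use block matching tool_name."""
    for block in response.get("content", []):
        if block.get("type") == "tool_use" and block.get("name") == tool_name:
            return block["input"]
    raise ValueError(f"No tool_use block named {tool_name!r} in response")

# Hard-coded stand-in for an API reply, shaped like a tool-use response.
mock_response = {
    "content": [{
        "type": "tool_use",
        "name": "analyze_code",
        "input": {"issues": [{"severity": "high", "line": 12,
                              "description": "Raw SQL in view",
                              "fix": "Use the ORM instead"}]},
    }]
}
issues = extract_tool_input(mock_response, "analyze_code")["issues"]
```

Because the schema constrains `severity` to an enum, downstream code can branch on it without string normalization.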

Anti-Patterns

  • Prompt stuffing: cramming too much into one prompt. Break it up.
  • Ambiguous pronouns: "fix it" / "make it work". Fix what exactly?
  • Over-constraining: too many rules make the model focus on the rules instead of the task.
  • Ignoring the context window: very long prompts dilute the important instructions.
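The fix for prompt stuffing is to split one overloaded request into a chain of focused ones, feeding each step's output into the next. A minimal sketch, where the step texts are illustrative and `run` stands in for an actual model call:

```python
STEPS = [
    "List the functions in this module that lack type hints:\n{text}",
    "For each function listed below, propose type hints:\n{text}",
    "Write a one-paragraph summary of the proposed changes:\n{text}",
]

def chain(run, source: str) -> str:
    """Run each focused prompt in turn, threading the output forward."""
    text = source
    for template in STEPS:
        text = run(template.format(text=text))
    return text

# With a stub model that just echoes the last line of the prompt,
# the chain threads the source text through every step unchanged.
result = chain(lambda prompt: prompt.splitlines()[-1], "def f(x): return x")
```

Each call stays small enough that its single instruction dominates, instead of competing with a dozen others.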