Use structured, specific prompts with clear context and requirements to achieve consistent, high-quality AI outputs.
### Rationale

| Dimension | Justification |
| --- | --- |
| Business Value | Better prompts reduce iterations and produce more usable outputs |
| Technical Foundation | AI output quality closely tracks prompt quality |
| Risk Mitigation | Vague prompts lead to incorrect assumptions and flawed implementations |
| Human Agency | Well-crafted prompts ensure AI operates within intended boundaries |
### Implications

```mermaid
flowchart LR
    Context["Context<br/><i>Background, tech stack, existing code</i>"]
    Requirement["Requirement<br/><i>What to do, specific, detailed</i>"]
    Constraints["Constraints<br/><i>Boundaries, limitations, style rules</i>"]
    Examples["Examples<br/><i>Show patterns, expected format, I/O</i>"]
    Context --> Requirement --> Constraints --> Examples
```
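The four parts translate directly into a prompt template. A minimal sketch in TypeScript; `buildPrompt`, its field names, and the section labels are all illustrative, not a prescribed format:

```typescript
// Hypothetical helper that assembles a prompt from the four parts above.
// The section labels are illustrative; any consistent structure works.
interface PromptParts {
  context: string;      // background, tech stack, existing code
  requirement: string;  // what to do, specific and detailed
  constraints: string;  // boundaries, limitations, style rules
  examples: string;     // expected patterns, format, I/O
}

function buildPrompt(parts: PromptParts): string {
  return [
    `## Context\n${parts.context}`,
    `## Requirement\n${parts.requirement}`,
    `## Constraints\n${parts.constraints}`,
    `## Examples\n${parts.examples}`,
  ].join("\n\n");
}

const prompt = buildPrompt({
  context: "TypeScript service; user records live in the existing auth.utils module.",
  requirement: "Write an authenticate(username, password) function returning { success, message }.",
  constraints: "Hash the password before comparison; log failed attempts; no new dependencies.",
  examples: 'authenticate("ada", "wrong") -> { success: false, message: "Invalid credentials" }',
});
```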
#### Prompt Quality Comparison

| ❌ Poor | ✅ Good |
| --- | --- |
| "Write a function for authentication" | "Write a function that:<br/>• Takes username (string) and password (string)<br/>• Validates against the user store<br/>• Returns { success: boolean, message: string }<br/>• Hashes password before comparison<br/>• Logs failed attempts<br/>• Uses existing auth.utils module" |
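For illustration, the good prompt above is specific enough that the result should look roughly like this sketch; `findUser`, `hashPassword`, and `logFailedAttempt` are assumed stand-ins for the existing auth.utils module:

```typescript
// Rough shape of the output the specific prompt above is asking for.
// findUser, hashPassword, and logFailedAttempt are assumed to exist in
// the project's auth.utils module; their signatures are illustrative,
// and a real implementation would use a salted, timing-safe comparison.
import { findUser, hashPassword, logFailedAttempt } from "./auth.utils";

interface AuthResult {
  success: boolean;
  message: string;
}

export function authenticate(username: string, password: string): AuthResult {
  const user = findUser(username);
  // Hash before comparison, per the prompt's constraint.
  if (!user || user.passwordHash !== hashPassword(password)) {
    logFailedAttempt(username); // log failed attempts, per the prompt
    return { success: false, message: "Invalid credentials" };
  }
  return { success: true, message: "Authenticated" };
}
```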
| Area | Implication |
| --- | --- |
| Development | Prompts include context, requirements, constraints, and examples |
| Governance | Production prompts reviewed and version controlled (see the loader sketch below) |
| Skills | Prompt engineering included in training curriculum |
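One way to make the governance row concrete, assuming production prompts live as files in the repository (the directory layout and naming convention are assumptions):

```typescript
// Sketch: production prompts stored in the repo (e.g., prompts/<name>.md),
// so they pass through normal code review and carry version history.
import { readFileSync } from "node:fs";
import { join } from "node:path";

export function loadPrompt(name: string): string {
  // Reviewed, version-controlled prompt text; edits show up in diffs.
  return readFileSync(join("prompts", `${name}.md`), "utf8");
}

const authPrompt = loadPrompt("generate-auth-function");
```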
Review AI-generated code with at least as much rigor as human-written code, using structured checklists for understanding, correctness, quality, and security.
### Rationale

| Dimension | Justification |
| --- | --- |
| Business Value | Reviews catch defects before they reach production |
| Technical Foundation | AI-generated code can contain subtle errors masked by correct syntax |
| Risk Mitigation | Security vulnerabilities and logic errors require human detection |
| Human Agency | Review ensures humans understand and can maintain generated code |
### Implications

#### Code Review Checklist

**1. UNDERSTANDING - "Can I explain this code?"**

| Check | Criteria |
| --- | --- |
| ☐ | Logic comprehension - Can I explain the algorithm? |
| ☐ | Pattern recognition - Do I recognize the patterns? |
| ☐ | Modification ability - Could I modify this if needed? |
| ☐ | Behavior prediction - Can I predict edge case behavior? |

⚠️ IF ANY UNCHECKED: DO NOT APPROVE
**2. CORRECTNESS - "Does it do what was requested?"**

| Check | Criteria |
| --- | --- |
| ☐ | Functional correctness - Does the logic appear correct? |
| ☐ | Edge case handling - Are boundaries handled? |
| ☐ | Error handling - Are errors handled appropriately? |
| ☐ | Return values - Are returns correct and consistent? |
**3. QUALITY - "Does it meet standards?"**

| Check | Criteria |
| --- | --- |
| ☐ | Style compliance - Follows project style guide? |
| ☐ | Naming clarity - Names clear and consistent? |
| ☐ | Complexity - Is there unnecessary complexity? |
| ☐ | Documentation - Comments where needed? |
**4. SECURITY - "Is it secure?"**

| Check | Criteria |
| --- | --- |
| ☐ | Input validation - All inputs validated? (see the sketch below) |
| ☐ | Authentication/AuthZ - Proper access control? |
| ☐ | Data protection - Sensitive data handled correctly? |
| ☐ | Known vulnerabilities - No OWASP Top 10 issues? |
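As a concrete instance of the input-validation row, a reviewer might expect boundary checks along these lines. A sketch using zod purely as an example; the schema shape and handler are hypothetical, and any validation approach the project standardizes on is fine:

```typescript
// The kind of validation a reviewer should look for at an input boundary.
// zod is used only for illustration; the schema shape is hypothetical.
import { z } from "zod";

const LoginRequest = z.object({
  username: z.string().min(1).max(64),
  password: z.string().min(8),
});

export function handleLogin(body: unknown) {
  const parsed = LoginRequest.safeParse(body);
  if (!parsed.success) {
    // Reject malformed input at the edge instead of passing it downstream.
    return { status: 400, message: "Invalid request" };
  }
  const { username } = parsed.data;
  return { status: 200, message: `Login attempt recorded for ${username}` };
}
```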
| Area | Implication |
| --- | --- |
| Development | All AI-generated code reviewed using structured checklist |
| Governance | Review completion required before merge |
| Skills | Train reviewers on AI-specific review considerations |
| Tools | Integrate review checklists into code review tools (see the dangerfile sketch below) |
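One way to wire the checklist into review tooling, sketched here with danger.js; it assumes the checklist sections above are embedded in the PR description as markdown checkboxes, which is a convention choice rather than part of the principle:

```typescript
// dangerfile.ts - fail the review when checklist items are left unchecked.
// Assumes a PR template embedding the checklist as markdown checkboxes
// ("- [ ]" unchecked, "- [x]" checked).
import { danger, fail } from "danger";

const body = danger.github.pr.body ?? "";
const unchecked = (body.match(/- \[ \]/g) ?? []).length;

if (unchecked > 0) {
  fail(
    `${unchecked} review checklist item(s) are unchecked. ` +
      "Per the checklist rule: if any item is unchecked, do not approve."
  );
}
```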
### Related Principles

- DM-001: Pipeline Integration (review as quality gate)
## DC-004: Agentic Development

### Statement
Deploy autonomous AI agents only for appropriate use cases with defined boundaries, human oversight points, and rollback capabilities.
### Rationale

| Dimension | Justification |
| --- | --- |
| Business Value | Agents accelerate repetitive tasks and large-scale changes |
| Technical Foundation | Agentic AI operates at L3 (partial automation), which requires oversight |
| Risk Mitigation | Unbounded agents can make widespread, difficult-to-reverse changes |
| Human Agency | Human defines boundaries; agent operates within them; human approves results |
### Implications

#### Appropriate vs Inappropriate Use Cases

| ✅ Appropriate | ❌ Inappropriate |
| --- | --- |
| Rapid prototyping | Production deploys |
| Multi-file refactor | Security-sensitive |
| Boilerplate generation | Complex business logic |
| Test generation | Compliance-critical |
| Documentation updates | Unsupervised runs |
```mermaid
flowchart LR
    subgraph Pre["Pre-Session"]
        P1["✓ Define Scope"]
        P2["✓ VCS Baseline"]
        P3["✓ Backup Ready"]
    end
    subgraph During["During Session"]
        D1["✓ Monitor"]
        D2["✓ Intervene if needed"]
        D3["✓ Create Checkpoints"]
    end
    subgraph Post["Post-Session"]
        O1["✓ Review All Changes"]
        O2["✓ Test Thoroughly"]
        O3["✓ Approve or Rollback"]
    end
    Pre --> During --> Post
```
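The VCS-baseline and rollback steps could look like this sketch, assuming a git repository; the tag naming is an illustrative convention, not a requirement:

```typescript
import { execSync } from "node:child_process";

const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

// Pre-session: commit a baseline so every agent change is diffable
// and reversible. The tag name is illustrative; use your own convention.
export function createBaseline(sessionId: string): void {
  run("git add -A");
  run(`git commit -m "baseline: before agent session ${sessionId}" --allow-empty`);
  run(`git tag agent-baseline-${sessionId}`);
}

// Post-session: either keep the changes for review or restore the baseline.
export function rollbackToBaseline(sessionId: string): void {
  run(`git reset --hard agent-baseline-${sessionId}`);
  run("git clean -fd"); // remove untracked files the agent created
}
```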
#### Control Mechanisms

- Time limits on agent sessions
- File/directory scope restrictions
- Tool access limitations
- Mandatory checkpoints
- Rollback capability (see the config sketch below)
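These mechanisms could be captured in a single session configuration. A minimal sketch; every field name here is hypothetical and would map onto whatever your agent tooling actually supports:

```typescript
// Hypothetical session config covering the control mechanisms above.
interface AgentSessionConfig {
  maxDurationMinutes: number;      // time limit on the session
  allowedPaths: string[];          // file/directory scope restrictions
  deniedTools: string[];           // tool access limitations
  checkpointEveryNChanges: number; // mandatory checkpoints
  baselineTag: string;             // rollback target (see sketch above)
}

const session: AgentSessionConfig = {
  maxDurationMinutes: 60,
  allowedPaths: ["src/components/", "src/utils/"],
  deniedTools: ["shell", "network"],
  checkpointEveryNChanges: 10,
  baselineTag: "agent-baseline-2024-01-15",
};
```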
| Area | Implication |
| --- | --- |
| Development | Agent sessions have defined scope and supervision |
| Governance | Approval required for agent deployment; boundaries documented |
| Skills | Train developers on agent capabilities and limitations |
| Tools | Configure agent tools with appropriate restrictions |
### Maturity Alignment

| Level | Requirements |
| --- | --- |
| Base (L1) | Agents for low-risk tasks only; constant human supervision |
| Medium (L2) | Defined use case catalog; checkpoint-based workflows |
| High (L3) | Broader autonomy for proven scenarios; automated guardrails |
### Governance

#### Compliance Measures

- Agent use cases documented and approved
- Scope boundaries defined before each session
- Baseline committed before agent runs
- Human review required for all agent output
- Rollback tested and available
#### Exception Process

| Condition | Approval Required | Documentation |
| --- | --- | --- |
| Extended agent session | Tech Lead | Checkpoint schedule |
| Broader scope | Architect | Risk assessment |
| Production-adjacent | Director + Security | Isolation measures |
### Related Principles

- TSI-003: Protocol Adoption (MCP/A2A for agent coordination)