v0.0.1 • In Peer Review

Testing & Quality Control (TQC) Principles

Enterprise Architecture principles for quality assurance in AI-assisted development.

Category: TQC
Principles: 4
Focus: Quality Assurance for AI-Assisted Development
Status: 🔍 Under Peer Review

Category Overview

```mermaid
flowchart TB
    subgraph QA["Quality Assurance Lifecycle"]
        direction LR
        Define["Define<br/>Tests first"] --> Generate["Generate<br/>AI Code"]
        Generate --> Test["Test<br/>Execute tests"]
        Test --> Review["Review<br/>Quality review"]
        Review --> Validate["Validate<br/>Security validate"]
    end

    subgraph Principles["TQC Principles"]
        TQC001["TQC-001: AI-Aware Testing"]
        TQC002["TQC-002: Security Practices"]
        TQC003["TQC-003: Quality Framework"]
        TQC004["TQC-004: Continuous Validation"]
    end

    QA --> Principles
```

Key Concerns:

  • Test-driven development with AI implementation
  • Security validation of AI-generated code
  • Quality metrics and standards enforcement
  • Continuous validation throughout development

Principles in This Category

| ID | Principle Name | Statement Summary |
|---|---|---|
| TQC-001 | AI-Aware Testing | Testing strategies for AI-generated code |
| TQC-002 | Security Practices | Security standards for AI development |
| TQC-003 | Quality Framework | Quality standards and metrics |
| TQC-004 | Continuous Validation | Ongoing validation throughout lifecycle |

Relationship to Other Categories

```mermaid
flowchart TB
    DC["DC: Development & Coding<br/><i>Generated code to be tested</i>"]
    PS["PS: Planning & Strategy<br/><i>Quality reqs feed back to planning</i>"]
    TQC["TQC: Testing &<br/>Quality Control<br/><i>Quality gates for AI-assisted dev</i>"]
    DM["DM: Deployment & Maintenance<br/><i>Quality gates in pipeline</i>"]
    GSC["GSC: Governance & Security<br/><i>Compliance and audit requirements</i>"]

    DC --> TQC
    PS <--> TQC
    TQC --> DM
    TQC --> GSC
```

TQC-001: AI-Aware Testing

Statement

Write tests before AI code generation and verify that tests pass against generated code without AI modification of the tests.

Rationale

| Dimension | Justification |
|---|---|
| Business Value | Tests define requirements upfront, ensuring AI generates fit-for-purpose code |
| Technical Foundation | AI excels at implementing to spec; tests provide the specification |
| Risk Mitigation | Pre-written tests prevent AI from generating tests that match flawed code |
| Human Agency | Humans define expected behavior; AI implements to meet those expectations |

Implications

```mermaid
flowchart TB
    subgraph Step1["Step 1: Write Test Specification (Human)"]
        S1A["Define expected behaviors"]
        S1B["Write test cases (happy path, errors, edge cases)"]
        S1C["Establish boundaries and assumptions"]
    end

    subgraph Step2["Step 2: Share Context with AI"]
        S2A["Tests that must pass"]
        S2B["Project patterns and conventions"]
        S2C["Constraints and requirements"]
    end

    subgraph Step3["Step 3: Generate Implementation (AI)"]
        S3A["AI generates code to satisfy the tests"]
    end

    subgraph Step4["Step 4: Validate Output (Human)"]
        S4A["✓ All tests pass (RUN them!)"]
        S4B["✓ Tests actually test the code"]
        S4C["✓ No test modifications by AI"]
        S4D["✓ Code follows project patterns"]
        S4E["✓ Security review complete"]
    end

    Step1 --> Step2 --> Step3 --> Step4
```

| Area | Implication |
|---|---|
| Development | Tests written by humans before AI generates implementation |
| Governance | Test modification by AI prohibited without explicit approval |
| Skills | Train developers on TDD patterns for AI collaboration |
| Tools | Configure AI tools to implement against provided tests |
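
As an illustration of the tests-first flow, the human-authored specification might look like the sketch below. The `slugify` function and its behavior are hypothetical examples, not part of this framework; a minimal implementation of the kind the AI might later produce is included only so the sketch runs end to end.

```python
# test_slugify.py -- human-authored specification, written BEFORE any
# AI code generation. Happy path, edge cases, and boundaries are pinned
# down here; the AI's job is to make these pass without modifying them.

def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("C# & Go!") == "c-go"

def test_empty_input():
    assert slugify("") == ""

# A minimal implementation of the kind the AI might later generate,
# included here only so this sketch is self-contained and runnable.
import re

def slugify(text):
    # Lowercase, keep alphanumeric runs, join them with single hyphens.
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))
```

In practice the tests would import `slugify` from the AI-generated module, and the human would run the suite to confirm all cases pass.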

Maturity Alignment

| Level | Requirements |
|---|---|
| Base (L1) | Tests required before generation; human runs and validates |
| Medium (L2) | Automated test execution; coverage requirements enforced |
| High (L3) | AI suggests missing test cases; mutation testing |

Governance

Compliance Measures

  • Tests exist before AI code generation
  • Test runs documented with results
  • AI-modified tests flagged for review
  • Test coverage meets thresholds
  • Edge cases explicitly tested
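
One way to implement the "AI-modified tests flagged for review" measure is to snapshot test-file digests before handing context to the AI and diff them afterward. This is a sketch: it assumes tests follow the common `test_*.py` naming convention, and the function names are illustrative.

```python
import hashlib
import pathlib

def snapshot(test_dir):
    """Map each test file under test_dir to its SHA-256 digest."""
    return {
        p: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in pathlib.Path(test_dir).rglob("test_*.py")
    }

def modified_tests(before, after):
    """Files whose digest changed (or that disappeared) after generation.

    Any path returned here should be flagged for human review before merge.
    """
    return sorted(str(p) for p in before if after.get(p) != before[p])
```

A CI job could record the `before` snapshot at the start of an AI-assisted task and fail the build if `modified_tests` is non-empty without an explicit approval.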

Exception Process

| Condition | Approval Required | Documentation |
|---|---|---|
| Exploratory code | None | Tests before merge |
| AI test suggestions | Tech Lead | Human review of tests |
| Coverage exception | Manager | Risk acknowledgment |
Related Principles

  • DC-001: AI-Human Collaboration (tests define collaboration)
  • DC-003: Code Review Practices (tests inform review)
  • DM-001: Pipeline Integration (tests as quality gates)

TQC-002: Security Practices

Statement

Apply security engineering practices to all AI-generated code, treating AI as an untrusted contributor requiring the same security validation as external code.

Rationale

| Dimension | Justification |
|---|---|
| Business Value | Security breaches are costly; prevention is cheaper than remediation |
| Technical Foundation | AI can generate code with vulnerabilities, including subtle security flaws |
| Risk Mitigation | Treating AI as untrusted ensures security controls are consistently applied |
| Human Agency | Humans remain accountable for security; AI is the assistant |

Implications

Five Security Pillars

| # | Pillar | Description |
|---|---|---|
| 1 | You Are the Developer | Full control and responsibility |
| 2 | Best Practices Apply | No shortcuts for AI code |
| 3 | Be Security-Conscious | Assume AI code has vulnerabilities |
| 4 | Guide the AI | Specify security requirements |
| 5 | AI Self-Review | Recursive criticism and improvement |

Security Validation Checklist

| Input Validation | Authentication/Authorization |
|---|---|
| ☐ All inputs validated | ☐ No hardcoded credentials |
| ☐ Format/length checked | ☐ Secure auth libraries |
| ☐ Parameterized queries | ☐ Role-based access enforced |
| ☐ Output encoding | ☐ Constant-time comparison |

| Data Protection | Dependency Security |
|---|---|
| ☐ Secrets in vaults | ☐ Dependencies scanned |
| ☐ Encryption at rest | ☐ Known vulnerabilities checked |
| ☐ Encryption in transit | ☐ License compliance verified |
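
Two of the checklist items can be shown in a short sketch (the function names are illustrative; real code would use the project's database and auth layers): parameterized queries keep user input out of the SQL text, and `hmac.compare_digest` compares secrets in constant time.

```python
import hmac
import sqlite3

def find_user(conn, username):
    # Parameterized query: the username is bound as data via the "?"
    # placeholder, so it can never be interpreted as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def tokens_match(expected, presented):
    # Constant-time comparison: avoids timing side channels that leak
    # how many leading characters of the secret matched.
    return hmac.compare_digest(expected, presented)
```

An attempted injection such as `"'; DROP TABLE users; --"` simply becomes a username that matches no row, rather than executable SQL.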
| Area | Implication |
|---|---|
| Development | Security requirements included in all AI prompts |
| Governance | Security review mandatory for AI-generated code |
| Skills | Security training for all AI-assisted developers |
| Tools | SAST/DAST integrated; dependency scanning enabled |

Maturity Alignment

| Level | Requirements |
|---|---|
| Base (L1) | Manual security review; basic security checklist |
| Medium (L2) | Automated SAST/DAST; security gates in CI/CD |
| High (L3) | AI-assisted vulnerability detection; predictive risk analysis |

Governance

Compliance Measures

  • Security requirements in prompts
  • Security review completed before merge
  • No hardcoded credentials
  • Dependency vulnerabilities addressed
  • Security scans passing
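
A naive sketch of the "no hardcoded credentials" check is below. Real enforcement should use a dedicated secret scanner; the regex, `SECRET_NAME`, and `flag_secrets` here are illustrative assumptions showing only the shape of the gate.

```python
import re

# Flag assignments of a string literal to a suspiciously named variable.
# Deliberately simple: real scanners also check entropy, known key
# formats, commit history, and many more credential patterns.
SECRET_NAME = re.compile(r"(password|secret|api_key|token)\s*=\s*['\"]", re.I)

def flag_secrets(source):
    """Return 1-based line numbers that look like hardcoded credentials."""
    return [
        i for i, line in enumerate(source.splitlines(), 1)
        if SECRET_NAME.search(line)
    ]
```

Note that reading the value from the environment (e.g. `token = os.environ["TOKEN"]`) is not flagged, since no literal follows the assignment.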

Exception Process

| Condition | Approval Required | Documentation |
|---|---|---|
| Security warning bypass | Security Team | Risk acceptance form |
| Deprecated dependency | Tech Lead | Migration plan |
| Legacy code integration | Security + Architect | Security architecture |

TQC-003: Quality Framework

Statement

Apply identical quality standards to AI-generated code as human-written code, with human accountability for all quality outcomes.

Rationale

| Dimension | Justification |
|---|---|
| Business Value | Consistent quality standards ensure reliable, maintainable software |
| Technical Foundation | AI code can have subtle quality issues that compound over time |
| Risk Mitigation | "AI wrote it" is not an excuse for quality shortcuts |
| Human Agency | Humans remain accountable for approving and maintaining all code |

Implications

Quality Principles

| # | Principle | Description |
|---|---|---|
| 1 | Same Standards | No shortcuts for AI code |
| 2 | Human Accountability | Humans responsible for outcomes |
| 3 | Verification > Trust | Verify rather than trust AI |

Quality Dimensions

| Functional Correctness | Code Quality | Performance |
|---|---|---|
| Meets requirements | Style compliant | Meets benchmarks |
| Handles edge cases | Maintainable | Scalable |
| Correct logic | Documented | Efficient |
| Error handling | Testable | Resource-aware |

Quality Gates Flow

```mermaid
flowchart LR
    Lint["Lint<br/>Pass"] --> Test["Test<br/>Pass"]
    Test --> Coverage["Coverage<br/>>80%"]
    Coverage --> Review["Review<br/>Approved"]
    Review --> Security["Security<br/>Pass"]
    Security --> Deploy["Deploy"]
```

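The gate sequence above can be sketched as an ordered runner that stops at the first failure. This is a sketch: each check here is a stand-in callable, where a real pipeline would invoke the linter, test runner, coverage report, review system, and security scanner.

```python
def run_gates(gates):
    """Run (name, check) pairs in order; stop at the first failing gate.

    Returns (passed_all, failed_gate_name). Ordering matters: cheap,
    fast gates (lint) run before expensive ones (security scans).
    """
    for name, check in gates:
        if not check():
            return False, name
    return True, None
```

For example, `run_gates([("lint", run_linter), ("test", run_tests)])` would short-circuit on a lint failure and never start the test run.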
| Area | Implication |
|---|---|
| Development | All code meets same standards regardless of origin |
| Governance | Quality metrics tracked and enforced |
| Skills | Train developers on quality standards for AI code |
| Tools | Automated quality gates in CI/CD pipeline |

Maturity Alignment

| Level | Requirements |
|---|---|
| Base (L1) | Manual quality review; documented standards |
| Medium (L2) | Automated quality gates; metrics dashboards |
| High (L3) | AI-assisted quality assessment; predictive quality analysis |

Governance

Compliance Measures

  • Quality standards documented
  • All code passes lint/style checks
  • Test coverage meets thresholds
  • Code review completed and approved
  • Quality metrics tracked and reported

Exception Process

| Condition | Approval Required | Documentation |
|---|---|---|
| Coverage exception | Tech Lead | Technical debt ticket |
| Style override | Team Lead | Documented rationale |
| Performance exception | Architect | Performance analysis |
Related Principles

  • DC-003: Code Review Practices (review enforces quality)
  • DM-001: Pipeline Integration (quality gates in pipeline)
  • GSC-001: Governance Framework (quality governance)

TQC-004: Continuous Validation

Statement

Validate AI-assisted deliverables continuously throughout the development lifecycle, not just at completion.

Rationale

| Dimension | Justification |
|---|---|
| Business Value | Early detection of issues reduces cost of remediation |
| Technical Foundation | AI can introduce subtle issues that compound when left unchecked |
| Risk Mitigation | Continuous validation prevents accumulation of technical debt |
| Human Agency | Regular checkpoints maintain human awareness and control |

Implications

Validation Checkpoints Flow

```mermaid
flowchart LR
    PreGen["Pre-Gen<br/><i>Requirements validation</i>"] --> DuringGen["During Gen<br/><i>Incremental validation</i>"]
    DuringGen --> PostGen["Post-Gen<br/><i>Full test execution</i>"]
    PostGen --> PreMerge["Pre-Merge<br/><i>Gate checks</i>"]
    PreMerge --> PostDeploy["Post-Deploy<br/><i>Monitor production</i>"]
```

Validation Activities by Phase

| Phase | Activities |
|---|---|
| Pre-Generation | Requirements complete and clear |
| | Tests written and failing as expected (empty implementation) |
| | Architecture documented |
| During Generation | Incremental test runs |
| | Checkpoint reviews |
| | Early issue detection |
| Post-Generation | Full test suite execution |
| | Code review completion |
| | Security validation |
| Pre-Merge | Quality gates pass |
| | Approval obtained |
| | Documentation updated |
| Post-Deploy | Production monitoring |
| | Error tracking |
| | Performance validation |
| Area | Implication |
|---|---|
| Development | Validation activities integrated throughout workflow |
| Governance | Checkpoint completion required for progression |
| Skills | Train developers on continuous validation practices |
| Tools | Automated validation at each checkpoint |
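
The "checkpoint completion required for progression" rule can be enforced with a minimal guard. This is a sketch under stated assumptions: `PHASES` and `may_enter` are illustrative names, and in practice the set of completed checkpoints would come from the team's sign-off records.

```python
# Lifecycle phases in order; a phase may begin only after every earlier
# phase's checkpoint has been recorded as complete.
PHASES = [
    "pre-generation",
    "during-generation",
    "post-generation",
    "pre-merge",
    "post-deploy",
]

def may_enter(phase, completed):
    """True if all checkpoints preceding `phase` are signed off."""
    idx = PHASES.index(phase)
    return all(p in completed for p in PHASES[:idx])
```

A CI job or workflow engine could call `may_enter("pre-merge", completed)` and block the merge until the earlier checkpoints are documented.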

Maturity Alignment

| Level | Requirements |
|---|---|
| Base (L1) | Manual checkpoints; basic validation at key phases |
| Medium (L2) | Automated validation gates; metrics-driven progression |
| High (L3) | AI-assisted validation; predictive issue detection |

Governance

Compliance Measures

  • Validation checkpoints defined per phase
  • Checkpoint completion documented
  • Issues tracked and resolved before progression
  • Validation metrics tracked
  • Continuous improvement from validation learnings

Exception Process

| Condition | Approval Required | Documentation |
|---|---|---|
| Skip checkpoint | Tech Lead | Risk acknowledgment |
| Reduced validation | Manager | Scope limitations |
| Emergency bypass | Director | Post-incident review |
Related Principles

  • DM-001: Pipeline Integration (validation gates)
  • DM-002: Observability First (post-deploy validation)
  • DC-001: AI-Human Collaboration (validation maintains control)

Category Summary

Principle Maturity Matrix

| Principle | Base (L1) | Medium (L2) | High (L3) |
|---|---|---|---|
| TQC-001 AI-Aware Testing | Tests first, human runs | Automated coverage | AI test suggestions |
| TQC-002 Security Practices | Manual review | SAST/DAST gates | AI-assisted detection |
| TQC-003 Quality Framework | Documented standards | Automated gates | AI quality assessment |
| TQC-004 Continuous Validation | Manual checkpoints | Automated gates | Predictive detection |

Key Takeaways

  1. Tests before code - Write tests first; AI implements against specifications
  2. Same standards apply - No quality shortcuts because "AI wrote it"
  3. Security is non-negotiable - Treat AI as untrusted contributor
  4. Validate continuously - Don’t wait until the end to find issues
  5. Human accountability remains - Developers own all quality outcomes

Next Steps

| Action | Link |
|---|---|
| View all principles | Principles Index |
| Related: Deployment | DM Principles |
| Related: Governance | GSC Principles |
| Maturity assessment | Maturity Model |
