Project Adoption checklist, guidelines and use cases

To evaluate whether using AI would benefit a project, given its existing codebase and platform, and to manage client expectations, here is a comprehensive checklist and set of guidelines derived from our learnings.

Checklist

Tech Stack Considerations

  • Well-known/Prevalent Tech Stacks: Is the project primarily using widely adopted programming languages (e.g., JavaScript, Python, PHP) and frameworks (e.g., ReactJS)? AI generally performs better with more prevalent tech stacks due to richer training data.

  • Niche or Unfamiliar Technologies: Does the project involve highly specialized, proprietary, or less common technologies? AI may struggle to understand and generate accurate code for these, potentially leading to inaccurate or fabricated outputs.

  • Specific Language/Framework Compatibility: Does the project align with the known strengths of specific AI tools? For example, Cline is efficient for PHP (Laravel/Drupal), while Cursor AI is noted as more effective for React and Node.js projects.

Code Quality and Structure

  • Code Consistency and Modularity: Is the codebase well-structured with consistent formatting, clear separation of concerns, and modular architecture? AI performs better with well-factored codebases, leading to higher quality outputs and easier AI integration.

  • Technical Debt and Legacy Code: Does the project involve a large, old, or highly complex legacy codebase with significant technical debt, intertwined code, or deep object hierarchies? AI can struggle significantly with such code, potentially producing inconsistencies, duplication, and making seemingly obvious mistakes. AI cannot magically replace well-documented and well-automated setup for older applications and stacks.

  • Existing Test Coverage: Is there comprehensive test coverage, especially for critical paths? This is a crucial prerequisite for effective AI use, as tests act as essential feedback loops and guardrails for AI-generated code.
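As a hedged illustration of tests acting as guardrails, consider the following minimal sketch. The `slugify` function and its expected behaviour are hypothetical, not drawn from any specific project; the point is that if an AI-generated refactor silently changes behaviour, such a test fails immediately.

```python
# Hypothetical example: a unit test acting as a guardrail for AI-edited code.
# If an AI-generated refactor changes slugify's behaviour, these tests fail fast.

def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug (illustrative implementation)."""
    return "-".join(title.lower().split())

def test_slugify_collapses_whitespace():
    assert slugify("Hello   World") == "hello-world"

def test_slugify_lowercases():
    assert slugify("AI Adoption Checklist") == "ai-adoption-checklist"

if __name__ == "__main__":
    test_slugify_collapses_whitespace()
    test_slugify_lowercases()
    print("guardrail tests passed")
```

Tests like these, run automatically on every change, give AI-generated edits a fast, deterministic pass/fail signal rather than relying solely on human review.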

Problem/Task Characteristics

  • Simplicity and Isolation: Are the tasks or problems typically simple, isolated, and with well-defined boundaries? AI performs most effectively in such scenarios.

  • Commonplace/Repetitive Patterns: Does the project involve tasks that are common, repetitive, or boilerplate in nature (e.g., generating API contracts, scripts, standard forms, unit tests, or basic CRUD operations)? AI excels at these, potentially saving 30-50% of the time spent.

  • Complexity of Logic/Refactoring: Do tasks frequently involve complex business logic or significant refactoring? AI often struggles with these, requiring deep contextual understanding that it may lack, and can introduce overly complex or verbose solutions.

  • Need for Precision/Security: Do tasks involve high-stakes areas like security fixes or sensitive data handling? Developers should be highly cautious and prefer manual or deterministic tooling, as AI suggestions may lack precision or introduce vulnerabilities.

  • Clear and Concrete Instructions: Can tasks be broken down and described with clear, concrete instructions and specific acceptance criteria? This significantly enhances AI's success. Vague instructions can lead to misunderstandings and rework.
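To make "boilerplate" concrete, the kind of repetitive pattern AI handles well looks like this minimal in-memory CRUD store. This is a hypothetical sketch in Python; the names and structure are illustrative, not from any particular codebase.

```python
# Hypothetical sketch: the kind of repetitive CRUD boilerplate AI generates well.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ItemStore:
    """A minimal in-memory CRUD store keyed by an auto-incremented integer id."""
    _items: dict = field(default_factory=dict)
    _next_id: int = 1

    def create(self, data: dict) -> int:
        item_id = self._next_id
        self._items[item_id] = dict(data)
        self._next_id += 1
        return item_id

    def read(self, item_id: int) -> Optional[dict]:
        return self._items.get(item_id)

    def update(self, item_id: int, data: dict) -> bool:
        if item_id not in self._items:
            return False
        self._items[item_id].update(data)
        return True

    def delete(self, item_id: int) -> bool:
        return self._items.pop(item_id, None) is not None

store = ItemStore()
new_id = store.create({"name": "widget"})
store.update(new_id, {"price": 9.99})
```

Because this pattern is simple, isolated, and well represented in training data, it is exactly the kind of task where AI-generated output tends to need only a light review.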

Documentation Availability

  • Quality and Currency of Documentation: Is internal documentation (e.g., READMEs, API docs, coding standards) reliable, findable, and up to date? This serves as valuable context for AI, helping it provide better support and insights. Outdated documentation can lead AI to indiscriminately reproduce incorrect information.

Guidelines

Areas Where AI Excels (Good Candidates for AI Use)

  • Code Generation: Boilerplate, prototypes, scripts, new features/fields.

  • Test Generation: Unit tests, test data, initial failing tests for TDD, suggestions for edge cases.

  • Code Refinement & Minor Enhancements: Context-aware edits, inline documentation, polishing/documenting existing code, small-scale refactoring (e.g., renaming parameters, simple function extraction), performance improvements, edge case handling.

  • Information Retrieval & Explanation: Explaining code snippets, frameworks, tools, architectural concepts, drafting documentation, understanding unfamiliar code, summarizing scripts.

  • Debugging and Troubleshooting: Identifying problems, proposing accurate solutions, streamlining the debugging process, and handling basic errors.

  • Onboarding Support: Bridging the gap between documentation and implementation by answering codebase questions and generating example code.

  • Planning and Task Management: Brainstorming in "Plan Mode", creating detailed "Improvement Plan Documents" (checklists), and breaking down tasks into smaller, manageable steps.

Areas Where AI Struggles (Avoid or Use with Extreme Caution)

  • Very Simple Bug Fixes/Minor Code Changes: The overhead of interacting with AI may outweigh the benefit; manual changes can be faster.

  • Projects with Very Low Error Margins: Such as security fixes or vulnerability patches, where precision and attention to detail are paramount. Developers should prefer manual work or existing deterministic tools.

  • Generating Tests for Poorly Structured Code: AI may produce non-viable test suggestions, overuse mocking, or fail to handle complex data structures, leading to NullPointerExceptions.

  • Incident Response/Real-time Diagnostics: Current AI tools are generally not suited for real-time diagnostic work across complex, interconnected systems.

  • When AI Becomes "Delusional" or Over-engineers: AI can sometimes generate code that is not truly required, is overly complex, verbose, or redundant, or introduces unnecessary dependencies and parameters. This can lead to increased maintenance costs.

  • Handling Long Conversations/Context Overflow: AI's performance can degrade in very long coding sessions, leading it to lose track of what it was doing or make seemingly obvious mistakes.

  • Inconsistent Outputs: The same prompt can yield different or even contradictory results, making reliance difficult.

Essential Practices for Successful AI Integration (Governance & Human Factors)

  • Human Oversight and Review:

    • Vigilant Review: Emphasise that all AI-generated code must be carefully reviewed, as it is rarely perfect and can introduce subtle issues or errors.

    • Human-in-the-Loop Philosophy: Reinforce that AI augments, rather than replaces, human capabilities, with developers retaining ultimate responsibility for commits.

    • Awareness of Cognitive Biases: Educate teams about automation bias, the framing effect, the anchoring effect, and the sunk cost fallacy to prevent over-reliance and review complacency. Consult the AI Playbook to learn more about these.

    • Knowing When to Quit: Encourage developers to abandon AI-generated solutions that do not quickly prove valuable; instead, revert changes, refine prompts, or proceed with manual coding ("artisanal coding").

    • Task Sizing: Break down complex tasks into smaller, concrete steps (e.g., 1-2 hour atomic tasks for parallel agents, or 4-8 hour Kanban work items for human teams) to facilitate review and prevent AI from doing "too much upfront work".

    • Persistent Context: For larger tasks spanning multiple sessions, implement methods for AI to maintain context (e.g., through detailed improvement plan documents or "memory banks") to avoid repetition and loss of focus.

  • Testing and Feedback Loops:

    • Fast and Reliable Feedback: Ensure quick and reliable feedback loops (e.g., IDE integration, linters, automated tests, human pairing) to validate AI outputs promptly and reduce rework.

    • Test-Driven Development (TDD): Train developers to use AI for generating tests first to maintain TDD principles, enhance code quality, and provide systematic validation of AI-generated code.

    • Success Metrics: Define clear KPIs (e.g., adoption rate, productivity, code quality, developer satisfaction) to measure the impact of AI adoption and guide future decisions.

    • Iterative Feedback Cycles: Conduct regular surveys and retrospectives (e.g., quarterly) to gather feedback on AI tools and adapt the adoption strategy. Share your learnings with the wider organisation and other teams.
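The test-first flow described under Testing and Feedback Loops can be sketched as follows. The `word_count` function and its test are hypothetical stand-ins for AI-generated artefacts, written in Python purely for illustration.

```python
# Hypothetical TDD sketch: the test is written (e.g. by AI) before the
# implementation exists, fails first, and then just enough code is written
# to make it pass.

def test_word_count():
    # Step 1: this test is authored before word_count is implemented, so it
    # initially fails, giving the AI-generated code a clear, checkable target.
    assert word_count("one two  three") == 3
    assert word_count("") == 0

# Step 2: implement just enough to make the test pass.
def word_count(text: str) -> int:
    """Count whitespace-separated words in text."""
    return len(text.split())

if __name__ == "__main__":
    test_word_count()
    print("TDD cycle complete: test now passes")
```

Keeping the test as the fixed target and treating the implementation as replaceable is what makes AI-generated code systematically verifiable rather than reviewed on trust.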