AI Usage: PR Checklist + Prompt Cookbook + Cursor Rules
PR Checklist / Policy: Using AI Tools Responsibly
Consider making this a page in your Engineering Handbook. Engineers should run through this checklist when using Copilot or Cursor for features, refactors, or bug fixes.
Pull Request / Code Change Policy
Capture Context Before AI Use
In your PR, or before prompting the AI, clearly define:
• The goal (feature / fix / refactor)
• The constraints (performance, memory, dependencies, DB schema, API contracts)
• Existing patterns and style (coding conventions, error handling, package structure)
• What must / must not change (e.g. avoid breaking public APIs)
This ensures the AI’s suggestions are aligned with architecture/style and reduces dangerous drift.
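A minimal sketch of what this context block might look like at the top of a PR description or prompt (the feature, files, and constraints here are purely hypothetical):

```
Goal: Add rate limiting to the /login endpoint (brute-force mitigation).
Constraints: No new dependencies; public API of AuthController must not change;
  p99 latency budget for the endpoint is 50 ms.
Existing patterns: Follow the middleware style in src/middleware/; errors are
  raised via our AppError class.
Must not change: DB schema, session token format.
```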
Use Good Prompts / Rules
Use structured prompts:
• State the goal
• Supply relevant context (file snippet, types/interfaces)
• Ask for test coverage, error handling, or security review as needed (see the Prompt Cookbook below)
• Limit the scope (one file, or one small change)
Better prompts → higher-quality suggestions that are easier to review.
Review AI Suggestions Like Code Reviews
When you see AI-generated or AI-assisted code:
• Read it line by line
• Check correctness, edge cases, security, resource use, and database effects
• Ensure readability, maintainability, naming, and consistency with team style
• If using Copilot, cycle through the alternative suggestions and pick the best one
Prevents introducing bugs or messy code. Ensures AI is augmenting, not replacing, human oversight.
Tests, Error & Boundary Cases
Always ensure:
• Unit tests / integration tests cover new or changed logic, especially edge cases
• Error handling and failure modes are considered (invalid input, timeouts, nulls, etc.)
• Database transactions are safe and schema compatibility is maintained
Many AI outputs fail at boundary conditions. Testing ensures code works as expected.
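As a sketch of the kind of boundary coverage to insist on, here is a hypothetical TypeScript/Jest example. The `parsePositiveAmount` function is invented for illustration; `describe`, `it`, and `expect` are Jest globals:

```typescript
// Hypothetical function under test: parses a user-supplied amount string.
export function parsePositiveAmount(input: string | null): number {
  if (input === null || input.trim() === "") {
    throw new Error("amount is required");
  }
  const value = Number(input);
  if (!Number.isFinite(value) || value <= 0) {
    throw new Error("amount must be a positive number");
  }
  return value;
}

// Boundary coverage that AI-generated code frequently misses.
describe("parsePositiveAmount", () => {
  it("parses a valid amount", () => {
    expect(parsePositiveAmount("42.50")).toBe(42.5);
  });

  it("rejects null, empty, and whitespace-only input", () => {
    expect(() => parsePositiveAmount(null)).toThrow("amount is required");
    expect(() => parsePositiveAmount("   ")).toThrow("amount is required");
  });

  it("rejects zero, negatives, and non-numeric strings", () => {
    expect(() => parsePositiveAmount("0")).toThrow("positive");
    expect(() => parsePositiveAmount("-5")).toThrow("positive");
    expect(() => parsePositiveAmount("abc")).toThrow("positive");
  });
});
```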
Security / Sensitive Data Review
• Don’t paste secrets or credentials into prompts or code.
• Use static analysis / code scanning for security vulnerabilities.
• Review for injection risks, permission leaks, and untrusted inputs.
• Keep modules that access sensitive data under tighter human review.
AI may suggest insecure patterns. Avoid accidental data exposure.
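One recurring insecure pattern in AI suggestions is string-built SQL. Here is a minimal sketch of the safe alternative using the Node `pg` client; the `users` table and `findUserByEmail` function are hypothetical:

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* environment variables

// BAD: user input interpolated into SQL, vulnerable to injection:
//   pool.query(`SELECT * FROM users WHERE email = '${email}'`)

// GOOD: parameterized query; the driver passes the value separately from the SQL.
export async function findUserByEmail(email: string) {
  const result = await pool.query(
    "SELECT id, email, name FROM users WHERE email = $1",
    [email],
  );
  return result.rows[0] ?? null;
}
```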
Fit With Existing Architecture & Dependencies
• Make sure suggested code plays nicely with existing services, modules, and patterns.
• Avoid duplicating functionality or violating modular boundaries.
• Keep dependency versions current and compatible.
Keeps maintainability high and avoids technical debt.
Document Changes & Rationale
If you accept an AI suggestion that changes logic or design:
• State in the PR description what was changed and why.
• Indicate what trade-offs were made.
• If the AI followed specific rules or patterns, reference them.
Helps reviewers understand why AI was used; aids future maintenance.
Code Review Signoff & Ownership
• Even if AI wrote a lot of the code, a human must own the change.
• Reviewers should verify code quality, tests, and performance.
• Do not use “AI did it” as justification for skipping review.
Preserves accountability and quality.
Monitor Post-Merge Behavior
After merging:
• Track fallout: bugs, regressions, customer issues.
• If the change increases errors or QA rejections, revisit how AI was used.
• Retrospect on what worked and what didn’t.
Data ensures learning and iteration; metrics reflect reality.
Continuous Improvement
• Periodically share the best prompt examples internally.
• For Cursor: update rules or the style guide based on bad suggestions and common failure modes.
• Run occasional audits of AI-assisted PRs to see where suggestions consistently fail.
• Collect peer feedback.
Helps reduce noise and improve quality over time.
Prompt Templates / Examples
Here are prompt patterns you or your team can adopt. Use or adapt them in your AI assistant.
Feature Implementation
“Implement a <feature> in <component / service> that does <behavior>. Use <libraries> if relevant. Write unit tests as well. Ensure error handling for invalid inputs, timeouts, and DB failures. Match existing coding style.”
Include the goal, the component or module name, test requirements, and error/edge conditions, and ask for the existing style to be matched.
Refactor / Clean Up
“Refactor method <methodName> in <file> to simplify logic. Preserve existing behavior and API contract. Add tests if coverage is low. Ensure naming and error handling follow team conventions.”
Helps preserve correct functionality and triggers deeper reasoning about the change.
Bug Fix
“Fix bug where <describe bug> in <module> (e.g. “when user submits empty field, backend 500s”). Include tests reproducing bug and fix. Also review for similar occurrences elsewhere.”
Including a description and reproduction steps helps. Asking the AI to scan for similar patterns improves code hygiene.
Performance / Optimization
“Optimize the query in <module> retrieving <data> from Postgres. Consider indexes, limit/pagination, avoid N+1, use appropriate joins. Retain readability and maintainability. Show before/after rationale.”
Encourages thinking about performance tradeoffs.
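To make the N+1 concern concrete, here is a hypothetical sketch using the Node `pg` client; the `orders` / `order_items` schema is invented for illustration:

```typescript
import { Pool } from "pg";

const pool = new Pool();

// BEFORE (N+1): one query for the orders, then one more query per order for its items.
//   const orders = await pool.query("SELECT id FROM orders WHERE user_id = $1", [userId]);
//   for (const order of orders.rows) {
//     await pool.query("SELECT * FROM order_items WHERE order_id = $1", [order.id]);
//   }

// AFTER: limit to the 50 most recent orders first, then join their items in one
// round trip. Limiting in the subquery paginates orders, not joined rows.
export async function getRecentOrdersWithItems(userId: number) {
  const result = await pool.query(
    `SELECT o.id AS order_id, o.created_at, i.sku, i.quantity
       FROM (SELECT id, created_at FROM orders
              WHERE user_id = $1
              ORDER BY created_at DESC
              LIMIT 50) o
       LEFT JOIN order_items i ON i.order_id = o.id
      ORDER BY o.created_at DESC`,
    [userId],
  );
  return result.rows;
}
```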
Security / Sanitization
“Add input validation, sanitization, and secure practices for <endpoint / service> that takes user input. Ensure prevention of SQL injection, XSS, unauthorized access. Write tests for invalid inputs.”
Promotes safer code.
Code Review Helper
“Review this diff / file: <paste snippet>; point out issues with readability, maintainability, test coverage, security, and style mismatches. Suggest small fixes or improvements.”
Useful to catch oversights before PR.
Writing Tests
“Write unit test(s) for <component / service> that cover success cases, invalid inputs, and error paths. Use <test framework>.”
Ensures test coverage.
Database Schema Changes / Migrations
“Suggest a migration to introduce <new column / table> while preserving existing data. Write migration script. Update service layer and repository accordingly. Ensure backward compatibility.”
Important for safe database evolution.
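A hedged sketch of such a migration using Knex; the `users` table, `display_name` column, and backfill choice are all hypothetical, so adapt it to your own migration tool:

```typescript
import type { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  // Step 1: add the column as nullable so existing rows and old code keep working.
  await knex.schema.alterTable("users", (table) => {
    table.string("display_name").nullable();
  });

  // Step 2: backfill existing rows so reads see a sensible value.
  await knex("users")
    .whereNull("display_name")
    .update({ display_name: knex.ref("email") });

  // Step 3 belongs in a *later* migration, once every writer sets the column:
  // tighten it to NOT NULL. Splitting the steps keeps deploys backward compatible.
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable("users", (table) => {
    table.dropColumn("display_name");
  });
}
```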
Cursor Rules
Below is a sample set of rules you can keep in a repository (or global) so Cursor (or any similar AI tool) has project-level constraints and guidance.
Note: .cursorrules will be deprecated in the future, so consider defining rules directly in Cursor Settings -> Rules (or as project rules under .cursor/rules).
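A minimal example of what such a rules file might contain (the stack and conventions named here are placeholders; swap in your own):

```
# Sample project rules for AI assistance (adapt to your stack)

- This is a TypeScript/Node service using Express and PostgreSQL (via pg).
- Follow the existing module layout: routes/ -> services/ -> repositories/.
- Never build SQL with string interpolation; always use parameterized queries.
- All new or changed logic must ship with unit tests (Jest), including edge cases.
- Handle errors with the shared AppError class; never swallow exceptions silently.
- Do not add new dependencies without calling it out explicitly in your answer.
- Do not change public API contracts or DB schemas unless the task asks for it.
- Prefer small, focused diffs over large rewrites.
```

Keep the rules short and specific; a handful of concrete constraints tends to steer suggestions better than a long style essay.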
Summary / Usage Guidelines
For each PR, assume parts of it are AI-assisted. Always treat suggestions as drafts.
Use the checklist above as gating criteria. If any step fails (e.g. no tests, no error handling), request changes.
Prompt templates and rules help keep suggestions consistent and aligned with the codebase.