# AI usage - PR checklist + Prompt Cookbook + Cursor Rules

### PR Checklist / Policy: Using AI Tools Responsibly

Consider making this a page in your Engineering Handbook. Engineers should run through this checklist when using Copilot or Cursor for features, refactors, or bug fixes.

***

#### Pull Request / Code Change Policy

<table><thead><tr><th width="148.25">Step</th><th width="391.13671875">What to do</th><th>Why it matters</th></tr></thead><tbody><tr><td><strong>Capture Context Before AI Use</strong></td><td>In your PR or before prompting AI, clearly define:<br>• The goal (feature/fix/refactor)<br>• The constraints (performance, memory, dependencies, DB schema, API contracts)<br>• Existing patterns/style (coding conventions, error handling, package structure)<br>• What must / must not change (e.g. avoid breaking public APIs)</td><td>This ensures the AI’s suggestions are aligned with architecture/style and reduces dangerous drift.</td></tr><tr><td><strong>Use Good Prompts / Rules</strong></td><td>Use structured prompts:<br>• State the goal<br>• Supply relevant context (file snippet, types/interfaces)<br>• Ask for test coverage/error handling/security as needed (see Prompt Cookbook below)<br>• Limit the scope (one file, or one small change)</td><td>Better prompt → higher quality suggestions. Helps make suggestions easier to review.</td></tr><tr><td><strong>Review AI Suggestions Like Code Reviews</strong></td><td>When you see AI generated or assisted code:<br>• Read it line-by-line<br>• Check correctness, edge cases, security, resource use, database effects<br>• Ensure readability, maintainability, naming &#x26; consistency with team style<br>• If using Copilot: experiment with alternative suggestions; pick the best one.</td><td>Prevents introducing bugs or messy code. Ensures AI is augmenting, not replacing human oversight.</td></tr><tr><td><strong>Tests, Error &#x26; Boundary Cases</strong></td><td>Always ensure:<br>• Unit tests / integration tests cover new or changed logic, especially for edge cases<br>• Error handling, failure modes are considered (invalid input, timeouts, nulls etc.)<br>• Database transactions are safe, schema compatibility maintained.</td><td>Many AI outputs fail at boundary conditions. 
Testing ensures code works as expected.</td></tr><tr><td><strong>Security / Sensitive Data Review</strong></td><td>• Don’t paste secrets or credentials into prompts or code.<br>• Use static analysis / code scanning for security vulnerabilities.<br>• Review for injection risks, permission leaks, untrusted inputs.<br>• Keep modules that access sensitive data under tighter human review.</td><td>AI may suggest insecure patterns. Avoid accidental data exposure.</td></tr><tr><td><strong>Fit With Existing Architecture &#x26; Dependencies</strong></td><td>• Make sure suggested code plays nicely with existing services, modules, patterns.<br>• Avoid duplicating functionality or violating modular boundaries.<br>• Keep dependency versions consistent and compatible.</td><td>Keeps maintainability high and avoids technical debt.</td></tr><tr><td><strong>Document Changes &#x26; Rationale</strong></td><td>If you accept an AI suggestion that changes logic/design:<br>• Describe in the PR what was changed and why.<br>• Indicate what trade-offs were made.<br>• If AI used rules / patterns, reference them.</td><td>Helps reviewers understand why AI was used; aids future maintenance.</td></tr><tr><td><strong>Code Review Signoff &#x26; Ownership</strong></td><td>• Even if AI wrote a lot, a human must own the change.<br>• Reviewers should verify code quality, tests, performance.<br>• Do not use “AI did it” as justification for skipping review.</td><td>Accountability + high quality.</td></tr><tr><td><strong>Monitor Post-Merge Behavior</strong></td><td>After merging:<br>• Track fallout: bugs, regressions, customer issues.<br>• If the change increases errors or QA rejects, revisit how AI was used.<br>• Hold a retrospective on what worked and what didn’t.</td><td>Data ensures learning and iteration; metrics reflect reality.</td></tr><tr><td><strong>Continuous Improvement</strong></td><td>• Periodically share best prompt examples internally.<br>• For Cursor, update rules or the style guide based on common failure patterns in suggestions.<br>• Run occasional audits of AI-assisted PRs to see where suggestions consistently fail.<br>• Collect peer feedback.</td><td>Reduces noise and improves quality over time.</td></tr></tbody></table>

***

### Prompt Templates / Examples

Here are good prompt patterns you or your team can adopt. Use or adapt them in your AI assistant of choice.

<table><thead><tr><th width="157.4765625">Task</th><th width="368.25">Prompt Template</th><th>Notes / Things to Include</th></tr></thead><tbody><tr><td><strong>Feature Implementation</strong></td><td><em>“Implement a <code>&#x3C;feature></code> in <code>&#x3C;component / service></code> that does <code>&#x3C;behavior></code>. Use <code>&#x3C;libraries></code> if relevant. Write unit tests as well. Ensure error handling for invalid inputs, timeouts, and DB failures. Match existing coding style.”</em></td><td>Include the goal, the component or module name, test requirements, error / edge conditions, and the style to match.</td></tr><tr><td><strong>Refactor / Clean Up</strong></td><td><em>“Refactor method <code>&#x3C;methodName></code> in <code>&#x3C;file></code> to simplify logic. Preserve existing behavior and API contract. Add tests if coverage is low. Ensure naming and error handling follow team conventions.”</em></td><td>Helps preserve correct functionality and prompts deeper reasoning.</td></tr><tr><td><strong>Bug Fix</strong></td><td><em>“Fix bug where <code>&#x3C;describe bug></code> in <code>&#x3C;module></code> (e.g. “when user submits empty field, backend 500s”). Include tests reproducing bug and fix. Also review for similar occurrences elsewhere.”</em></td><td>A clear description and reproduction steps help. Asking to scan for similar patterns improves code hygiene.</td></tr><tr><td><strong>Performance / Optimization</strong></td><td><em>“Optimize the query in <code>&#x3C;module></code> retrieving <code>&#x3C;data></code> from Postgres. Consider indexes, limit/pagination, avoid N+1, use appropriate joins. Retain readability and maintainability. Show before/after rationale.”</em></td><td>Encourages thinking about performance tradeoffs.</td></tr><tr><td><strong>Security / Sanitization</strong></td><td><em>“Add input validation, sanitization, and secure practices for <code>&#x3C;endpoint / service></code> that takes user input. Prevent SQL injection, XSS, and unauthorized access. Write tests for invalid inputs.”</em></td><td>Promotes safer code.</td></tr><tr><td><strong>Code Review Helper</strong></td><td><em>“Review this diff / file: <code>&#x3C;paste snippet></code>; point out issues for readability, maintainability, test coverage, security, style mismatches. Suggest small fixes or improvements.”</em></td><td>Useful for catching oversights before opening a PR.</td></tr><tr><td><strong>Writing Tests</strong></td><td><em>“Write unit test(s) for <code>&#x3C;component / service></code> that cover success cases, invalid inputs, error paths. Use <code>&#x3C;test framework></code>.”</em></td><td>Ensures test coverage.</td></tr><tr><td><strong>Database Schema Changes / Migrations</strong></td><td><em>“Suggest a migration to introduce <code>&#x3C;new column / table></code> while preserving existing data. Write migration script. Update service layer and repository accordingly. Ensure backward compatibility.”</em></td><td>Important for DB-safe evolution.</td></tr></tbody></table>

***

### Cursor Rules

Below is a sample set of rules you can keep in a repository (or globally) so that Cursor (or any similar AI tool) has project-level constraints and guidance.

{% hint style="warning" %}
`.cursorrules` will be deprecated in the future, so consider defining rules directly in **Cursor Settings -> Rules** instead.
{% endhint %}

```markdown
# .cursorrules — project / team style & constraints

# General
- Use strict typing everywhere. Do not use `any` or equivalent without explicit justification.
- Always follow the project’s naming conventions for files, classes, methods, interfaces. (e.g. React: components PascalCase, hooks useCamelCase, utils camelCase; Java: classes PascalCase etc.)
- Document public / exported functions and classes with JSDoc / JavaDoc style comments.
- Error handling must log or handle expected and unexpected error paths.
- Avoid copying/pasting code. Use shared modules / utilities.

# React / TypeScript
- Prefer functional components + hooks; avoid class components in new code.
- Use ESLint + Prettier rules; enforce linting pre-commit / CI.
- For UI components: follow existing component library styles, props naming, theming tokens etc.
- Write unit tests (Jest / React Testing Library) for any component with logic (not purely presentational).

# Java / Spring Boot
- Use dependency injection; avoid `new` for dependencies in services/controllers except when encapsulated.
- Use REST controllers / service / repository separation.
- Methods should have a single, clear responsibility; keep them small.
- Use logging (both info + error levels); exceptions should carry context.
- For database interactions (JPA or JDBC): ensure transactions handled; check for SQL injection risks; avoid raw SQL unless necessary.

# Database / Postgres
- Use parameterized queries or ORM to avoid injection.
- Follow performance best practices: indexes, limits, avoiding N+1 queries.
- Use migrations; track schema evolution; avoid destructive changes without migration plan.
- Partition or shard only when needed; run EXPLAIN on heavy queries.

# Security / Quality
- No secrets or credentials in code or prompts.
- Use static analysis / linters / security scanning on generated code.
- Tests required for any change with business logic, error handling, external API or DB calls.
- Peer review required for changes > ~50 LOC or touching more than 1 module / package / domain boundary.

```

***

### Summary / Usage Guidelines

* For each PR, assume parts of it may be AI-assisted. Always treat suggestions as drafts.
* Use the checklist above as gating criteria. If a PR fails any step (e.g. no tests, no error handling), request changes.
* Prompt templates and rules help keep suggestions consistent and aligned with the codebase.
