Best practices for AI usage

General AI coding assistant practices

Start with a clear intent, then constrain the task

How to do it

  • State goal, constraints, definition of done, and the exact files or interfaces involved.

  • Prefer short iterative prompts that build on the current diff.

Example: “Add optimistic updates to IssueList.tsx using React Query. Keep existing typing. Update useIssues.ts to invalidate the issues key only on success. Provide a minimal diff and a 3-item test plan.”

Why: Specific context reduces ambiguity and improves suggestion quality. GitHub’s prompt engineering guidance stresses being explicit about functions, files, and context (GitHub Docs).
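For reference, a minimal sketch of the kind of diff such a prompt might produce, assuming TanStack React Query; the Issue type, endpoint, and hook name are placeholders rather than part of the original example.

```tsx
// Hypothetical mutation hook that useIssues.ts might export after the change.
import { useMutation, useQueryClient } from "@tanstack/react-query";

type Issue = { id: string; title: string; done: boolean };

export function useToggleIssue() {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: (issue: Issue) =>
      fetch(`/api/issues/${issue.id}`, { method: "PATCH", body: JSON.stringify(issue) }),
    // Optimistically update the cached list before the request settles.
    onMutate: async (issue) => {
      await queryClient.cancelQueries({ queryKey: ["issues"] });
      const previous = queryClient.getQueryData<Issue[]>(["issues"]);
      queryClient.setQueryData<Issue[]>(["issues"], (old = []) =>
        old.map((i) => (i.id === issue.id ? issue : i))
      );
      return { previous };
    },
    // Roll back to the snapshot if the request fails.
    onError: (_err, _issue, context) => {
      queryClient.setQueryData(["issues"], context?.previous);
    },
    // Per the prompt: invalidate the issues key only on success.
    onSuccess: () => queryClient.invalidateQueries({ queryKey: ["issues"] }),
  });
}
```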

Always review, test, and secure

How to do it

  • Ask the assistant to write unit tests first for risky changes, then code to make them pass.

  • Run linters, code scanning, secret scanning, and CI before merge.

Copilot prompt example: “Generate Jest unit tests for calculateDiscount.ts that cover 0, boundary, and invalid inputs. Do not stub business rules. Next, propose code changes to make all tests pass.”

Why: GitHub recommends combining Copilot with tests and automated scanning, and its docs show step-by-step test generation flows. This ties usage to shipped quality instead of raw acceptance. A sketch of what such tests might look like follows the checklist below.

Security checklist to apply during review

  • Validate inputs on the server, never trust user input.

  • Do not log secrets or tokens.

  • Guard auth, authorization, and output encoding paths.

Reference OWASP quick checks for input validation.
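As a sketch of what the test-first prompt above might produce; calculateDiscount’s signature and the 10% rule are assumptions, not real business logic.

```ts
import { calculateDiscount } from "./calculateDiscount";

describe("calculateDiscount", () => {
  it("returns 0 for a zero subtotal", () => {
    expect(calculateDiscount(0)).toBe(0);
  });

  it("applies the discount exactly at the qualifying boundary", () => {
    // Assumed rule: subtotals of 100 or more get 10% off.
    expect(calculateDiscount(100)).toBe(10);
  });

  it("rejects invalid inputs", () => {
    expect(() => calculateDiscount(-5)).toThrow();
    expect(() => calculateDiscount(Number.NaN)).toThrow();
  });
});
```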

Make metrics reflect shipped quality

How to do it

  • Pair activity metrics with outcomes like merged PRs, test pass trends, rework, and defects.

  • Read acceptance together with language or area, not in isolation. Your Copilot and Cursor pages provide acceptance and language breakdowns; use them with outcome signals to avoid vanity measures.

Keep developers accountable

How to do it

  • Require a human LGTM for AI-assisted diffs.

  • Keep architecture and security decisions in the team’s hands.

  • GitHub’s secure use guidance and enterprise docs reinforce human review and policy controls.

Document decisions inside the PR

How to do it

  • Ask the tool to draft the PR description and test plan, then add your rationale, edge cases, and rollout plan.

Example PR note template: “Intent: add optimistic updates for issues. Constraints: preserve cache keys and typing. Tests: 5 unit, 1 E2E. Risk: stale cache on error. Rollback: feature flag optimisticIssues.”

Copilot’s cheat sheet includes commands for explaining code and creating tests that help populate this PR template.

GitHub Chat Cheat Sheet

Protect data and IP

How to do it

  • Do not paste secrets or proprietary payloads unless your enterprise configuration explicitly allows it.

  • Use approved enterprise settings for logging and data boundaries.

Build a learning loop

How to do it

  • When output is off, explain why and ask for a corrected version that follows your examples.

  • Maintain a small internal “prompt cookbook” of patterns that work for your stack. GitHub’s official tips emphasize setting context, making asks specific, and iterating.

The GitHub Blog - a fun and useful read with more examples and tips.


GitHub Copilot - usage best practices with examples

Pick the right Copilot surface

When to use inline

  • Completing a function, adding parameters, small refactors.

When to use Copilot Chat

  • Explaining unfamiliar code, writing tests, multi-file changes, or generating a migration plan. The docs show test generation and E2E examples directly from Chat.

Chat prompts you can copy

  • “Explain how useRetryingFetch works and list failure modes.”

  • “Generate table-driven Jest tests for parseDateRange including invalid ranges.” (see the sketch after this list)

  • “Propose a minimal diff to replace legacy crypto with the Web Crypto API (crypto.subtle) and update tests.”
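parseDateRange is a hypothetical helper, so the cases below only sketch the table-driven shape the second prompt asks for, not a real contract.

```ts
import { parseDateRange } from "./parseDateRange";

describe("parseDateRange", () => {
  const cases = [
    { name: "well-formed range", input: "2024-01-01..2024-01-31", valid: true },
    { name: "single-day range", input: "2024-01-01..2024-01-01", valid: true },
    { name: "end before start", input: "2024-02-01..2024-01-01", valid: false },
    { name: "malformed input", input: "not-a-range", valid: false },
  ];

  it.each(cases)("$name", ({ input, valid }) => {
    // Assumption: the helper throws on invalid ranges and returns a value otherwise.
    if (valid) {
      expect(parseDateRange(input)).toBeDefined();
    } else {
      expect(() => parseDateRange(input)).toThrow();
    }
  });
});
```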

Review acceptance with intent

Practice

  • Review the first two or three alternative suggestions. Accept the smallest correct diff. Reject verbose or speculative code.

Inline example

  • “Show two simpler alternatives with fewer allocations and explain tradeoffs.” Why: GitHub encourages scanning alternatives and giving acceptance feedback to shape future suggestions.

Test first for risky changes

Scenario: You are adding rate limiting to an API.

Chat sequence (a sketch of the first step follows this list)

  • “Write Jest tests for rateLimiter.ts that cover normal, burst, and blocked user scenarios.”

  • “Generate implementation changes that pass the tests. Keep public API unchanged.”

  • “Add an E2E Playwright test for the 429 response path.”
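A sketch of step one, assuming a hypothetical createRateLimiter factory with a per-user limit; the real rateLimiter.ts API may differ.

```ts
import { createRateLimiter } from "./rateLimiter";

describe("rateLimiter", () => {
  it("allows requests under the limit", () => {
    const limiter = createRateLimiter({ limit: 5, windowMs: 1000 });
    expect(limiter.isAllowed("user-1")).toBe(true);
  });

  it("blocks a burst that exceeds the limit", () => {
    const limiter = createRateLimiter({ limit: 5, windowMs: 1000 });
    for (let i = 0; i < 5; i++) limiter.isAllowed("user-1");
    expect(limiter.isAllowed("user-1")).toBe(false);
  });

  it("unblocks a blocked user after the window resets", () => {
    jest.useFakeTimers(); // assumes the limiter reads Date.now()
    const limiter = createRateLimiter({ limit: 1, windowMs: 1000 });
    limiter.isAllowed("user-1");
    expect(limiter.isAllowed("user-1")).toBe(false);
    jest.advanceTimersByTime(1000);
    expect(limiter.isAllowed("user-1")).toBe(true);
    jest.useRealTimers();
  });
});
```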

Use Copilot in code review without skipping human judgment

  • Ask: “Summarize this diff. Flag potential N+1 queries and missing input validation.”

  • Keep your checklist items for secrets, validation, and authorization. GitHub’s secure practice pages advocate combining Copilot with automated scanning and human review.

Keep Copilot scoped

  • Provide only the files needed and the exact function signatures.

  • Never include secrets. Use masked env values in examples.

Interpret Copilot metrics with care

  • Track suggestions accepted by language and area of code, then correlate with merged PRs, test pass rate, rework, and defects.

  • If acceptance is high but rework rises, slow down and tighten review. This mirrors your “expanded metrics” emphasis on segmentation and context.


Cursor - usage best practices with examples

Seed Cursor with rules

  • Keep each rule focused and composable, under 500 lines.

  • Include language patterns, error handling rules, test frameworks, and things to avoid. Cursor’s docs spell out how to write effective rules.

  • Keep in mind that the legacy .cursorrules file still works but is being deprecated; prefer project rules in .cursor/rules (a sketch follows this list).
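A minimal sketch of what such a project rule could look like, assuming Cursor’s .cursor/rules format with frontmatter metadata; the conventions listed are placeholders for your own.

```
---
description: React/TypeScript conventions for this repo
globs: ["src/**/*.ts", "src/**/*.tsx"]
alwaysApply: false
---

- Use functional components with typed props; avoid `any`.
- Tests use Jest and React Testing Library, colocated as `*.test.tsx`.
- Prefer React Query for server state; do not introduce new state libraries.
- Never log tokens or write auth state directly to localStorage.
```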

Work in small, reviewable steps

Scenario: You need to add a debounce to a search box.

Composer prompt: “Update SearchBox.tsx to debounce input by 300 ms using useCallback and setTimeout. Show a minimal diff. Add a test for rapid keystrokes.”

Follow-up: “Explain the diff and provide one simpler alternative.”

Why: Cursor guidance and community tips encourage iterative edits and minimal diffs.
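A sketch of the kind of diff the Composer prompt might yield; the onSearch prop and component shape are assumptions.

```tsx
import { type ChangeEvent, useCallback, useRef, useState } from "react";

export function SearchBox({ onSearch }: { onSearch: (query: string) => void }) {
  const [value, setValue] = useState("");
  const timer = useRef<ReturnType<typeof setTimeout> | null>(null);

  // Debounce: only forward the query after 300 ms without further keystrokes.
  const handleChange = useCallback(
    (e: ChangeEvent<HTMLInputElement>) => {
      setValue(e.target.value);
      if (timer.current) clearTimeout(timer.current);
      timer.current = setTimeout(() => onSearch(e.target.value), 300);
    },
    [onSearch]
  );

  return <input value={value} onChange={handleChange} placeholder="Search" />;
}
```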

Make Cursor refactor your own diffs

Scenario: You wrote a pagination helper.

Chat prompt: “Review paginate.ts. Reduce allocations and avoid off-by-one errors. Propose a smaller version with identical behavior and add tests for edge pages.”
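The shape of the smaller helper being asked for might look like this; the 1-based signature is an assumption.

```ts
export function paginate<T>(items: readonly T[], page: number, pageSize: number): T[] {
  if (page < 1 || pageSize < 1) return [];
  const start = (page - 1) * pageSize; // page is 1-based, so subtract 1 to avoid the off-by-one
  return items.slice(start, start + pageSize); // slice clamps at the end, so the last page just shortens
}

// Edge pages worth covering in tests: page 1, the exact last page, one page past the end.
// paginate([1, 2, 3], 2, 2) -> [3]; paginate([1, 2, 3], 3, 2) -> [].
```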

Use context intentionally

Practice

  • Reference only files that are required for the change.

  • For multi-module edits, ask Cursor to first list affected files and functions before it edits. This reduces irrelevant changes.

Interpret Cursor metrics with care

Practice

  • Read AI vs manual lines and acceptance rates next to merged PRs and rework.

  • Sample AI-assisted PRs weekly for maintainability and security. Your Cursor docs mention capture limits, so qualitative review stays important.

Close the loop in the PR

Prompt: “Draft a PR description summarizing intent, constraints, test plan, and risk. Keep it under 120 words.” Then you add links to tickets and any rollout notes.


Safeguards for Responsible AI Use

  • Require tests for AI-assisted changes above a size threshold. Example GitHub Action: block merge if the diff touches more than N lines across M files and the PR has zero new or updated tests. Copilot docs recommend automated checks and testing around AI output. A rough script sketch follows this list.

  • Block merges on security failures. Example: run secret scanning, SAST rules, and dependency checks on every PR; reject if any high-severity issue appears. Use OWASP input validation reminders in review (owasp.org).

  • Count only code that lands and passes CI when discussing “AI impact”. Tie assistant usage to merged PRs and green pipelines rather than raw suggestion counts. This aligns with your metrics pages that caution against surface-only numbers.

  • Weekly sampling of AI-assisted PRs. Have tech leads sample two AI-assisted PRs per team each week for security, clarity, and maintainability. Record findings and feed them back into rules or the prompt cookbook. Cursor rules are designed to evolve with feedback.
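For the first safeguard, a rough sketch of a check a CI job could run before allowing merge; the thresholds, base ref, and test-file pattern are all assumptions to adapt.

```ts
// check-diff-has-tests.ts — hypothetical merge gate for large AI-assisted diffs.
import { execSync } from "node:child_process";

const MAX_LINES = 200; // N: assumed size threshold
const MAX_FILES = 5;   // M: assumed file-count threshold

// Compare against the PR base branch; adjust BASE_REF to what your CI provides.
const base = process.env.BASE_REF ?? "origin/main";
const numstat = execSync(`git diff --numstat ${base}...HEAD`, { encoding: "utf8" });

const rows = numstat
  .trim()
  .split("\n")
  .filter(Boolean)
  .map((line) => {
    const [added, deleted, file] = line.split("\t");
    return { added: Number(added) || 0, deleted: Number(deleted) || 0, file };
  });

const totalLines = rows.reduce((sum, r) => sum + r.added + r.deleted, 0);
const touchesTests = rows.some((r) => /\.(test|spec)\.[jt]sx?$/.test(r.file));

if (totalLines > MAX_LINES && rows.length > MAX_FILES && !touchesTests) {
  console.error(
    `Diff of ${totalLines} lines across ${rows.length} files has no new or updated tests.`
  );
  process.exit(1);
}
```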
