Marco - Code Review Tool
Getting Started with Hivel Code Review
Welcome to your new AI-powered code review companion. This guide will help you configure your environment and understand how to get the most out of hivel-marco, our intelligent review agent.
Configuration & Setup
Control how the AI interacts with your code through the Hivel dashboard.
1. Organizational Settings
Admins can manage review behavior by navigating to:
app.hivel.ai/settings/code-review
In this section, you can define:
Specific Instructions: Custom guidelines the AI should follow (e.g., "Ensure all functions have JSDoc").
Global Filters: Files or directories to include or exclude across all projects.
Default Branches: Set the standard target branches for reviews across your organization.
Note: These settings serve as your "baseline" but can be overridden at the individual repository level for maximum flexibility.
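For instance, with the example instruction above ("Ensure all functions have JSDoc"), Marco would expect new functions to carry doc comments like this (the function name and types here are hypothetical):

```typescript
/**
 * Converts an amount using a fixed exchange rate.
 *
 * @param amount - The amount in the source currency.
 * @param rate - The exchange rate to apply.
 * @returns The converted amount in the target currency.
 */
function convertCurrency(amount: number, rate: number): number {
  return amount * rate;
}
```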
2. Repository Level Configs
For the highest quality reviews, we recommend indexing your repository when enabling it for the first time. While Marco will still review PRs without this step, indexing allows the AI to understand your entire codebase contextually.
Analyze/Reanalyze: Trigger a manual scan of your repo to update the AI's mental model of your project.
Note: We recommend reanalyzing your repo at the end of each sprint, or whenever a sizable revamp or feature is added to it. The repository owner makes the final call on what counts as a sizable change.
Override Configs: Org-level configs such as instructions, file filters, and branches can also be set here at the repo level. Any repo-level modifications override the org-level configs.
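File filters like these are typically expressed as glob patterns. A minimal sketch of how include/exclude matching might behave (the pattern syntax and helper names below are illustrative, not Hivel's actual config format):

```typescript
// Convert a simple glob ("*" matches within a path segment, "**" across
// segments) into a RegExp. Illustrative only.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "§§")               // placeholder for "**"
    .replace(/\*/g, "[^/]*")              // "*" stays within one segment
    .replace(/§§/g, ".*");                // "**" spans segments
  return new RegExp(`^${escaped}$`);
}

// True when a file path matches any exclude pattern.
function isExcluded(path: string, excludes: string[]): boolean {
  return excludes.some((g) => globToRegExp(g).test(path));
}

// e.g. keep generated and minified files out of review
const excludes = ["dist/**", "**/*.min.js"];
```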
Working in GitHub
Once enabled, the review process is entirely automated. Look for the hivel-marco bot in your Pull Requests.
Automated Triggers
New PRs: When a PR is raised against a configured target branch, a review is automatically triggered.
Incremental Updates: If you push new code to an existing open PR, Marco performs an incremental review. It only analyzes the new changes to avoid repeating previous suggestions.
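Conceptually, an incremental review only considers commits pushed since the last reviewed state; earlier changes are skipped so suggestions are not repeated. A sketch of that selection logic (the types and function names are assumptions, not Hivel's implementation):

```typescript
interface Commit {
  sha: string;
}

// Return only the commits added after the last reviewed commit.
// A null marker means the PR has never been reviewed: review everything.
function commitsSinceLastReview(
  all: Commit[],
  lastReviewedSha: string | null,
): Commit[] {
  if (lastReviewedSha === null) return all; // first review: full PR
  const idx = all.findIndex((c) => c.sha === lastReviewedSha);
  return idx === -1 ? all : all.slice(idx + 1); // only the new commits
}
```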
The Review Report
Every review includes four core components:
| Feature | Description |
| --- | --- |
| Security Check | First Priority: We scan for Secrets or PII. These are sanitized before reaching the AI to keep your data safe. If detected, remove them before merging! |
| PR Summary | A concise overview of what the PR intends to accomplish. |
| File Breakdown | A contextual map of changes, categorized by file. |
| Complexity Score | An intelligent estimate of the review effort required (see below). |
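To illustrate the sanitization idea in the Security Check row: known secret shapes can be masked before any text reaches the model. The patterns below are common public token formats used purely as examples, not Hivel's actual detection rules:

```typescript
// Illustrative secret masking: redact recognizable token shapes
// before the diff is sent anywhere.
const SECRET_PATTERNS: RegExp[] = [
  /ghp_[A-Za-z0-9]{36}/g, // GitHub personal access token shape
  /AKIA[0-9A-Z]{16}/g,    // AWS access key ID shape
];

function sanitize(text: string): string {
  return SECRET_PATTERNS.reduce(
    (acc, pattern) => acc.replace(pattern, "[REDACTED]"),
    text,
  );
}
```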
Understanding Complexity Scores
We look beyond "Lines of Code" to calculate how difficult a PR is to digest:
| Score | Description |
| --- | --- |
| 1.0 – 2.0 | Minor tweaks, typos, or simple config updates. |
| 2.1 – 4.0 | Straightforward logic changes or small features. |
| 4.1 – 6.0 | Standard feature work or moderate refactoring. |
| 6.1 – 8.0 | Significant logic changes; requires deep focus. |
| 8.1 – 10.0 | High-risk architectural changes or massive diffs. |
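As a purely illustrative sketch (not Hivel's actual model), a score "beyond Lines of Code" might blend diff size with how widely and how deeply the change cuts, then clamp the result to the 1.0–10.0 band above:

```typescript
interface DiffStats {
  linesChanged: number;
  filesTouched: number;
  logicRatio: number; // 0..1 share of changed lines that alter logic
}

// Hypothetical heuristic: weighted blend of size, spread, and logic
// density, mapped onto a 1.0–10.0 scale. Weights are made up.
function complexityScore(s: DiffStats): number {
  const size = Math.min(s.linesChanged / 200, 1); // saturate at ~200 lines
  const spread = Math.min(s.filesTouched / 20, 1); // saturate at ~20 files
  const raw = 1 + 9 * (0.5 * size + 0.3 * spread + 0.2 * s.logicRatio);
  return Math.round(Math.min(raw, 10) * 10) / 10;
}
```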
Actionable Suggestions
Marco doesn't just critique; it provides solutions. Suggestions are posted directly on the relevant lines of code within the GitHub UI.
Criticality Levels: Suggestions are tagged as Low, Medium, High, or Critical to help you prioritize fixes.
Explanations: Each suggestion includes why the change is needed and how to implement the fix.
One-Click Fixes: You can accept suggestions directly within GitHub.
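The criticality tags above map naturally onto a triage order when a review comes back with many comments. A trivial sketch (the labels come from this guide; the data shape and ordering are assumptions):

```typescript
type Criticality = "Low" | "Medium" | "High" | "Critical";

// Lower number = handle first.
const PRIORITY: Record<Criticality, number> = {
  Critical: 0,
  High: 1,
  Medium: 2,
  Low: 3,
};

interface Suggestion {
  file: string;
  line: number;
  level: Criticality;
}

// Return suggestions sorted most-critical first, leaving the input intact.
function triage(suggestions: Suggestion[]): Suggestion[] {
  return [...suggestions].sort((a, b) => PRIORITY[a.level] - PRIORITY[b.level]);
}
```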
Pro Tip: Use the "Batch Suggestions" feature in GitHub's "Files Changed" tab to review and commit multiple AI suggestions in a single go.
Disclaimer: While our AI agent is highly advanced, it may occasionally make mistakes. Always re-test your code after committing AI-suggested changes.