# Hivel AI Impact Screen: Overview & Insights

### Overview

Hivel’s **AI Impact** gives teams a clear, practical signal for reasoning about AI usage over time. It is designed to help teams see patterns and trends in shipped code.

While tools like Cursor and Copilot track AI usage (e.g., code suggestions or completions), they don’t measure whether the AI-generated code actually reaches **production**. In contrast, **Hivel’s Code Telemetry** tracks AI-generated code that is **merged and shipped** to production, providing a real-world measure of AI adoption and its impact on developer productivity.

### AI Impact

The **AI Impact** view provides a directional measure of AI contribution by analyzing code changes and classifying code blocks as **AI-generated** or **human-written**.

#### What you’ll see

* **AI vs. human classification** for code at the PR level
* **Confidence score** for the classification
* **Aggregated trends** for a clearer team-level view

<div data-with-frame="true"><figure><img src="/files/wnLJduR5yBC5N71a8IYL" alt=""><figcaption></figcaption></figure></div>

### Why teams use this view

Many AI dashboards measure tool activity. Hivel goes a step further by measuring **production outcomes**: it focuses on code that has moved through review and been merged.

<div data-with-frame="true"><figure><img src="/files/4HLT6jvK6DHIQUlBgDeY" alt=""><figcaption></figcaption></figure></div>

### User adoption categories

To help you quickly understand adoption at the individual level, Hivel groups users into categories based on AI usage within the selected time range.

* **Inactive**: 0%
* **Occasional**: 1–30%
* **Moderate**: 31–70%
* **Heavy**: 71–100%

By default, the analysis covers the **last 2 days**, with an option to run a **historical sync** to understand past trends.
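As a sketch, the category thresholds above can be expressed as a simple mapping (assuming usage is the percentage of a user's code classified as AI-generated in the selected time range):

```python
def adoption_category(ai_usage_pct: float) -> str:
    """Map an AI-usage percentage to an adoption category.

    Illustrative only; mirrors the published thresholds
    (0% / 1-30% / 31-70% / 71-100%).
    """
    if ai_usage_pct == 0:
        return "Inactive"
    if ai_usage_pct <= 30:
        return "Occasional"
    if ai_usage_pct <= 70:
        return "Moderate"
    return "Heavy"
```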

<div data-with-frame="true"><figure><img src="/files/nvmw3nspSWYBadDBvRLA" alt="" width="563"><figcaption></figcaption></figure></div>

### Trends over time

The screen also highlights how adoption changes over time.

* **Team AI usage trends**: understand which teams are leading adoption and where additional enablement may help.

<div data-with-frame="true"><figure><img src="/files/puYDpFIN3yyCSIACksmK" alt="" width="563"><figcaption></figcaption></figure></div>

* **Developer distribution trends**: see how usage is distributed across developers and how it shifts over time.

<div data-with-frame="true"><figure><img src="/files/WXVk1PgnJ1VLpF62NTb1" alt="" width="563"><figcaption></figcaption></figure></div>

### Common questions

#### Why might Hivel show lower AI usage than other dashboards?

In most cases, discrepancies are caused by one of the following:

1. **Unmerged pull requests** (only merged code is included).
2. **Processing delays** for very large PRs.
3. **Unprocessed commit details**, which block telemetry collection.
4. **Unsynced historical data**, which can limit comparisons.

### How the confidence score is summarized

* Code blocks within a PR are analyzed individually.
* At the PR level, confidence generally scales with the amount of code: larger PRs tend to yield higher confidence, smaller PRs lower.
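One plausible way to roll block-level results up to a PR-level score is a size-weighted average of per-block confidences. This is a hypothetical sketch for illustration only; Hivel's actual aggregation formula is not published here, and this simple average does not by itself capture the size effect described above:

```python
def pr_confidence(blocks: list[tuple[int, float]]) -> float:
    """Size-weighted average of per-block confidence scores.

    blocks: (lines_of_code, confidence) pairs for a single PR.
    Hypothetical aggregation, not Hivel's documented formula.
    """
    total_lines = sum(lines for lines, _ in blocks)
    if total_lines == 0:
        return 0.0
    return sum(lines * conf for lines, conf in blocks) / total_lines
```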

<div data-with-frame="true"><figure><img src="/files/vxAdoxHBMA0u8tMOKb39" alt="" width="353"><figcaption></figcaption></figure></div>

### Conclusion

Hivel’s AI Impact Screen provides a practical, production-focused way to understand AI usage across teams and developers, so you can scale AI tooling with clarity and confidence.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.hivel.ai/ai-adoption/hivel-ai-impact-screen-overview-and-insights.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
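A minimal sketch of such a query, using only Python's standard library (the URL is the one shown above; `build_ask_url` and `ask_docs` are illustrative helper names):

```python
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = ("https://docs.hivel.ai/ai-adoption/"
            "hivel-ai-impact-screen-overview-and-insights.md")

def build_ask_url(question: str) -> str:
    # URL-encode the question so spaces and punctuation are safe.
    return f"{BASE_URL}?{urlencode({'ask': question})}"

def ask_docs(question: str) -> str:
    # Perform the GET request and return the response body as text.
    with urlopen(build_ask_url(question)) as resp:
        return resp.read().decode("utf-8")
```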
