Team-Level Hivel Score Calculation
What is the Team-Level Hivel Score?
The Team-Level Hivel Score gives you a focused view of how a specific team - and its sub-teams - are performing across Speed, Quality, and Throughput. It uses the same scoring framework as the org level, but scoped to the team you select, letting you drill deeper into performance patterns within your org.
Setting Up Your Configuration
The same configuration used at the org level applies here. If you've already set your bucket weights (Speed, Quality, Throughput totalling 100%), you're all set. Scores are calculated only after configuration is saved and are visible for completed months only.

How the Team-Level Hivel Score is Calculated
The score is built bottom-up - from individual metric scores, to bucket scores, to a final team score.
Step 1: Metric Scores (0–100 scale)
Each metric (e.g., Commits, Review Time) is scored by comparing a sub-team's value against all sub-teams' performance over a rolling 6-month window, including the current month. This keeps scores contextual and reflective of recent trends.
To ensure scores aren't skewed by extreme outliers or inactive periods, the following data cleaning is applied before scoring:
Values of 0 are excluded (treated as no activity)
The top and bottom 2% of values are removed
The remaining values set the scoring range (min to max). Each sub-team's current month value is then plotted within this range to produce a 0–100 score. Sub-teams with no activity (zero value) score 0, while those at the very top of the range score 100.
Example - Metric: Commits, Month: January
Here's how 6 months of Commits data across sub-teams translates into January scores:
| Sub-Team | January Commits | January Score |
| --- | --- | --- |
| A | 0 | 0 (no activity) |
| B | 120 | 36.6 |
| C | 89 | 21.46 |
| D | 300 | 100 (top outlier) |
| E | 72 | 13.17 |
| F | 0 | 0 (no activity) |
After removing zeros and 2% outliers from all 6 months of data, the scoring range is set at Min = 45, Max = 250. Each sub-team's January value is scored within this range. Sub-team D's value of 300 falls above the top 2% threshold, so it automatically scores 100.
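The cleaning and scoring steps above can be sketched as follows. This is a minimal illustration, not Hivel's actual implementation: the function name and the exact rounding used for the 2% trim are assumptions.

```python
def metric_score(value, history):
    """Score one sub-team's current-month metric value on a 0-100 scale.

    history: all sub-teams' values for this metric over the rolling
    6-month window, including the current month.
    """
    if value == 0:
        return 0.0  # no activity scores 0

    # Data cleaning: drop zeros, then trim the top and bottom 2%
    # (how the 2% cut is rounded is an assumption here).
    cleaned = sorted(v for v in history if v != 0)
    cut = round(len(cleaned) * 0.02)
    trimmed = cleaned[cut:len(cleaned) - cut]

    # The trimmed values set the scoring range (min to max).
    lo, hi = trimmed[0], trimmed[-1]

    # Values beyond the range clamp to the 0-100 endpoints,
    # e.g. sub-team D's 300 above the top threshold scores 100.
    if value >= hi:
        return 100.0
    if value <= lo:
        return 0.0
    return (value - lo) / (hi - lo) * 100
```

With the example's scoring range (min = 45, max = 250), sub-team B's 120 commits interpolate to (120 − 45) / (250 − 45) × 100 ≈ 36.6, matching the table.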
Step 2: Bucket Scores
Within each bucket, metrics carry predefined weights. A sub-team's bucket score is the weighted average of its individual metric scores.
Example - Throughput bucket with Commits (60%) and PRs Merged (40%): A sub-team scoring 80 on Commits and 70 on PRs Merged would receive a Throughput score of 76.
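The bucket calculation is a plain weighted average; a sketch using the example's numbers (function and metric names are illustrative, and weights are expressed as fractions summing to 1.0):

```python
def bucket_score(metric_scores, metric_weights):
    """Weighted average of a sub-team's metric scores within one bucket."""
    return sum(metric_scores[m] * w for m, w in metric_weights.items())

# Throughput bucket from the example: Commits 60%, PRs Merged 40%
throughput = bucket_score(
    {"Commits": 80, "PRs Merged": 70},
    {"Commits": 0.60, "PRs Merged": 0.40},
)
# 80 * 0.60 + 70 * 0.40 = 76.0
```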
Step 3: Final Team Hivel Score
The three bucket scores are combined using your configured weights to produce the final Hivel Score for each sub-team.
Example - with Speed 30%, Quality 40%, Throughput 30%: A sub-team with Speed = 60, Quality = 75, Throughput = 76 would receive a Hivel Score of 70.8.
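The final combination is the same weighted average applied one level up, now over the three bucket scores. A sketch under the example's configured weights (names are illustrative):

```python
def final_hivel_score(bucket_scores, bucket_weights):
    """Combine bucket scores using the configured bucket weights."""
    return sum(bucket_scores[b] * w for b, w in bucket_weights.items())

score = final_hivel_score(
    {"Speed": 60, "Quality": 75, "Throughput": 76},
    {"Speed": 0.30, "Quality": 0.40, "Throughput": 0.30},
)
# 60 * 0.30 + 75 * 0.40 + 76 * 0.30 = 70.8
```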
What You'll See on the Dashboard
The team-level dashboard mirrors the org view, scoped to your selected team:
Hivel Score for the selected month, with a 6-month trend
Score Trend Graph - how the team's score has moved over the past 6 months
Category Breakdown - Speed, Quality, and Throughput scores with expandable metric detail, including each metric's score, weight, and change vs. the 6-month average
Sub-Team Ranking Table - all sub-teams ranked by score, showing % change vs. 6-month average
If the selected team has no sub-teams, the dashboard will display a "No sub-teams available" message.
A Few Things to Keep in Mind
Scores use a rolling 6-month window, so they naturally adapt as your team's performance evolves
Sub-teams with no activity on a metric receive a score of 0 for that metric
Scores are only visible for completed months - no mid-month previews
If no data is available, the score will show as 0
For questions or help with your configuration, please reach out to our support team.