Reviewing AI Auto-Scoring Results

Written by Justine Rosenblat
Updated this week

The candidate assessment report is your primary tool for making data-driven hiring decisions. While manual review is often a key part of the process, Canditech’s AI Auto-Scoring provides an immediate layer of insight, offering a standardized evaluation of candidate responses for Open-Text and Video questions.

Understanding how to read these results, explore the underlying logic, and apply your own judgment is essential for a seamless grading experience.


Understanding Your AI Auto-Scoring Options

Before diving into the results, it’s helpful to know which "lens" the AI is using to evaluate your candidates. Canditech offers three distinct auto-scoring modes: Pre-built Agents for specific skills (e.g., Conflict Resolution, Language Proficiency), Example Answers, and Custom Agents, which allow you to define your own specific rubric.

For an in-depth guide to the AI Auto-Scoring options offered by Canditech, click here.


Reviewing AI Auto-Scoring Results on a Candidate's Session

When you open a candidate’s assessment report, scroll to the specific question you wish to review. If AI Auto-Scoring was enabled, you will see a dedicated AI Results section below the candidate's answer.

If you find a question with only the standard 5-star rating and no AI Results row, it simply means the question was not set for auto-scoring and requires your manual input.

A single question can be evaluated by multiple AI models simultaneously (e.g., Conflict Resolution and Language Score). In these cases, the system distributes the weight equally: if two methods are used, each accounts for 50% of the question's total grade. If three are used, each contributes a third.
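The equal-weight split described above can be sketched in a few lines. This is an illustrative example only, not Canditech's actual implementation; the function name and the 0.0–1.0 score scale are assumptions for the sketch.

```python
def combine_method_scores(method_scores, total_points):
    """Combine scores from multiple AI methods (each a 0.0-1.0 fraction)
    into a question grade. Each method carries an equal share of the
    weight: 50% each for two methods, a third each for three, and so on."""
    weight = 1 / len(method_scores)
    fraction = sum(score * weight for score in method_scores)
    return fraction * total_points

# Two methods on a 20-point question: Conflict Resolution scored 100%,
# Language Score scored 50% -> each accounts for half the grade.
print(combine_method_scores([1.0, 0.5], 20))  # 15.0
```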

For more information on reviewing candidates, click here.


Expanding the Logic: Why the AI Scored This Way

Transparency is at the heart of our AI. By clicking on any of the scoring methods, you can expand it to see a detailed 1-5 star breakdown and a written justification. This rationale explains the "why" behind the grade. For example, if a candidate receives a lower score on a language model, the AI will specify whether it found significant grammatical errors or if the tone was simply inconsistent with professional standards. This allows you to understand the AI's perspective before you decide to accept or challenge it.

Interpreting the Custom AI Agent

For those using the Custom Agent options, the interface becomes even more granular. Instead of just a general star rating, you will see a list of specific, binary rules that you defined for the role.

Next to each rule, a green checkmark or a red X will clearly indicate whether the candidate met that specific requirement. Just like the general models, each rule includes a detailed explanation of the rationale used to reach that conclusion. This format makes it incredibly easy to see exactly where a candidate excelled and where they fell short of your specific benchmarks.


The Power to Override: Human-in-the-Loop

While the AI provides a strong baseline, we believe the final hiring decision should always remain in human hands. If you disagree with an AI determination, overriding it is simple and intuitive.

For general star ratings: You can simply click on the number of stars (1–5) you believe is correct. This will immediately update the score to reflect your judgment.

For the Custom AI Agent: Because the score is built on specific rules, overrides are done at the rule level. If you disagree with the AI's "Pass" or "Fail" for a specific point, click the Thumbs Down icon. This will flip the result (e.g., changing a "Fail" to a "Pass") and automatically recalculate the total weighted score. You can always click the Thumbs Up icon to revert to the AI's original logic.
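To illustrate the rule-level mechanics, here is a minimal sketch of flipping one rule and recalculating the score. It assumes each binary rule carries equal weight, which is an assumption of the sketch rather than a documented Canditech behavior; function and field names are hypothetical.

```python
def score_rules(rules, total_points):
    """Score a Custom Agent question as the fraction of passed rules,
    applied to the question's total points (equal weight per rule is
    an assumption of this sketch)."""
    passed = sum(1 for rule in rules if rule["passed"])
    return passed / len(rules) * total_points

def override_rule(rules, index):
    """Thumbs Down: flip the AI's pass/fail verdict for one rule,
    so the total is recalculated on the next scoring pass."""
    rules[index]["passed"] = not rules[index]["passed"]
    return rules

rules = [
    {"rule": "Acknowledges the customer's frustration", "passed": True},
    {"rule": "Offers a concrete resolution", "passed": False},
]
print(score_rules(rules, 20))                     # 10.0
print(score_rules(override_rule(rules, 1), 20))   # 20.0
```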

The star rating system converts your rating into a fixed percentage of the question’s total points. Each star corresponds to a percentage of the score, and that percentage is applied to the total points allotted for the question.

For example, if a question is worth 20 points:

  • 1 star = 0% → 0 points

  • 2 stars = 25% → 5 points

  • 3 stars = 50% → 10 points

  • 4 stars = 75% → 15 points

  • 5 stars = 100% → 20 points

This ensures consistent scoring across all manually reviewed questions, such as open-text and video responses.
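The star-to-points mapping above follows a simple linear rule, sketched below for clarity. The function name is hypothetical; the formula simply reproduces the example table (1 star = 0%, each additional star adds 25%).

```python
def stars_to_points(stars, total_points):
    """Convert a 1-5 star rating into points: 1 star = 0% of the
    question's total, 5 stars = 100%, in 25% steps."""
    return (stars - 1) / 4 * total_points

# Reproduces the 20-point example above.
for stars in range(1, 6):
    print(f"{stars} stars -> {stars_to_points(stars, 20)} points")
```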

