Scoring & Review Process

Our Methodology

Every score on AIToolsRecap is the result of a structured evaluation process. Here is exactly how we test, score, and publish AI tool reviews.

Scoring Dimensions

Each tool is scored across four dimensions, equally weighted to produce an overall score out of 10.

Accuracy (25%): Does the tool do what it claims? How consistent and reliable is its output quality?
Ease of Use (25%): How quickly can a new user get productive? Quality of documentation, UI clarity, onboarding.
Value (25%): Does the pricing reflect the capability delivered? How does it compare to alternatives?
Support (25%): Quality of documentation, community, customer support response, and issue resolution.
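
For concreteness, the arithmetic is a simple mean of the four equally weighted dimension scores. The sketch below is illustrative; the function and key names are ours for this example, not our internal tooling.

```python
# Minimal sketch: overall score is the mean of four equally weighted
# dimension scores, each on a 0-10 scale. Names are illustrative.
DIMENSIONS = ("accuracy", "ease_of_use", "value", "support")

def overall_score(scores: dict[str, float]) -> float:
    """Average the four dimension scores into an overall score out of 10."""
    return round(sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS), 1)

# Example: overall_score({"accuracy": 8.5, "ease_of_use": 9.0,
#                         "value": 7.5, "support": 8.5}) -> 8.4
```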

Rating Scale

What each score range means in plain terms.

9–10 (Exceptional): Best-in-class. Sets the standard for the category. Recommended without reservation.
8–8.9 (Excellent): Strong performer with minor limitations. Recommended for most use cases.
7–7.9 (Good): Solid tool with notable trade-offs. Worth considering depending on your needs.
6–6.9 (Average): Functional but outperformed by alternatives. Situational recommendation.
Below 6 (Below Average): Significant weaknesses. Consider alternatives before committing.
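
As a minimal sketch, the band boundaries above translate directly into a lookup like this (illustrative code, not our production system):

```python
# Map an overall score (0-10) to the rating bands defined above.
def rating_band(score: float) -> str:
    if score >= 9.0:
        return "Exceptional"
    if score >= 8.0:
        return "Excellent"
    if score >= 7.0:
        return "Good"
    if score >= 6.0:
        return "Average"
    return "Below Average"

# rating_band(8.4) -> "Excellent"
```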

Our Review Process

How a review goes from assignment to publication.

Tool Selection

We prioritise tools with significant user bases, active development, and relevance to our community. Community nominations are considered. We do not accept review requests from vendors.

Hands-on Testing Period

Reviewers use the tool for a minimum of two weeks across real-world tasks relevant to the category. For LLMs and coding tools, this includes structured benchmark tasks. For creative tools, this includes production-grade usage.

Benchmark Cross-reference

Where available, we cross-reference our qualitative findings against published benchmarks (MMLU, HumanEval, SWE-bench, etc.) and note any discrepancies between benchmark performance and real-world behaviour.
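
A hypothetical sketch of what such a discrepancy check might look like, assuming reported benchmark pass rates on a 0–1 scale and our hands-on accuracy score rescaled to match; the benchmark names shown and the gap threshold are assumptions for illustration, not our actual tooling:

```python
# Hypothetical discrepancy check; scales and the gap threshold are
# illustrative assumptions, not our internal process.
def discrepancy_notes(accuracy_score: float,
                      benchmarks: dict[str, float],
                      gap: float = 0.15) -> list[str]:
    """Compare our hands-on accuracy score (0-10, rescaled to 0-1) with
    reported benchmark pass rates (0-1) and list any large divergences."""
    observed = accuracy_score / 10
    return [f"{name}: reported {rate:.0%} vs hands-on ~{observed:.0%}"
            for name, rate in benchmarks.items()
            if abs(rate - observed) > gap]

# discrepancy_notes(8.0, {"MMLU": 0.86, "HumanEval": 0.52})
# -> ["HumanEval: reported 52% vs hands-on ~80%"]
```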

Structured Scoring

Each reviewer completes a structured scoring rubric independently. Scores are averaged and reviewed editorially. Any score that differs significantly from benchmark evidence requires a written justification.
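
A rough sketch of this aggregation step, assuming each reviewer's rubric is a mapping of dimension to score; the tolerance value is an assumption for illustration, not our actual editorial threshold:

```python
from statistics import mean

# Rough sketch: average each dimension across independent reviewer
# rubrics, then flag the review for written justification if the
# overall result sits far from the benchmark-implied expectation.
def aggregate_scores(rubrics: list[dict[str, float]],
                     benchmark_expectation: float,
                     tolerance: float = 1.5) -> tuple[dict[str, float], bool]:
    dimensions = rubrics[0].keys()
    averaged = {d: round(mean(r[d] for r in rubrics), 1) for d in dimensions}
    needs_justification = abs(mean(averaged.values()) - benchmark_expectation) > tolerance
    return averaged, needs_justification
```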

Editorial Review

All reviews are checked by a second editor for factual accuracy, scoring consistency, and editorial tone before publication. We do not share reviews with vendors prior to publication.

Living Documents

AI tools evolve rapidly. We revisit and update reviews when major model versions are released, pricing changes significantly, or community feedback identifies factual errors. Update dates are shown on all reviews.

Independence Policy

AIToolsRecap accepts no payment, gifted access, or other consideration in exchange for reviews or favourable coverage. Where we use free tiers or trial accounts for testing, this is disclosed in the review.

We have no affiliate relationships with any AI tool vendor. Our only revenue comes from membership subscriptions from our reader community.

If you believe a review contains a factual error or conflicts with our stated methodology, please contact us. We take correction requests seriously and respond to all substantive claims.