1. Who Can Contribute
AIToolsRecap accepts reviews from anyone with genuine hands-on experience with the tool they are reviewing. You do not need to be a professional AI researcher or engineer — but you must have used the tool substantively.
All contributors must create a free account. Reviews submitted by new accounts are held for editorial review before publication.
You may not review a tool if you are employed by, have a financial interest in, or have received compensation from the tool's developer.
2. What to Review
We accept reviews of publicly available AI tools across our six categories: Large Language Models, Image AI, Code Tools, Voice & Audio, AI Agents, and Data & Analytics.
The tool must be accessible to others (not internal/private tooling), and you must have used it for at least two weeks before submitting a review.
We do not accept reviews of tools that are not yet publicly available, or reviews written primarily based on marketing materials.
3. Review Structure
All reviews should follow this structure:
- Introduction — What the tool is, who makes it, and what problem it solves. (1–2 paragraphs)
- Core Capabilities — What it does well. Specific, concrete examples. (2–4 paragraphs)
- Limitations — What it does poorly, or where it falls short of claims. Honest assessment. (1–3 paragraphs)
- Pricing — Current pricing tiers with concrete context. Is it worth the cost? (1 paragraph)
- Verdict — Who should use it, and under what conditions. (1 paragraph)
4. Scoring Your Review
Each review requires four sub-scores (1.0–10.0 in 0.1 increments) which are averaged to produce the overall score:
- Accuracy — Consistency and reliability of the tool's claimed capabilities
- Ease of Use — Onboarding, documentation, UI clarity
- Value — Price-to-capability ratio vs. alternatives
- Support — Documentation quality, community, customer service
Scores must be justified by the content of your review. A score of 9+ requires explicit evidence of exceptional performance. Editorial reviewers may adjust scores that are inconsistent with the written content.
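The scoring rule above can be sketched in Python. The 1.0–10.0 bounds and 0.1 increments come from these guidelines; rounding the mean to one decimal place is an assumption, since the guidelines do not specify how the averaged overall score is displayed.

```python
def overall_score(accuracy, ease_of_use, value, support):
    """Average the four sub-scores into an overall score.

    Each sub-score must lie in 1.0-10.0 in 0.1 increments, per the
    guidelines. Rounding the mean to one decimal is an assumption.
    """
    subs = (accuracy, ease_of_use, value, support)
    for s in subs:
        if not 1.0 <= s <= 10.0:
            raise ValueError(f"sub-score {s} outside 1.0-10.0")
        # Check the 0.1-increment rule, tolerating float noise.
        if abs(round(s * 10) - s * 10) > 1e-9:
            raise ValueError(f"sub-score {s} is not a 0.1 increment")
    return round(sum(subs) / len(subs), 1)
```

For example, sub-scores of 8.0, 7.0, 9.0, and 6.0 average to an overall score of 7.5.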
5. Writing Standards
Tone
Write like a knowledgeable practitioner briefing a colleague — direct, precise, and honest. Avoid hype and avoid unnecessarily harsh dismissals. If a tool is mediocre, say so clearly and explain why.
Specificity
Vague statements like "the AI responses are impressive" are not useful. Cite specific tasks, outputs, or benchmarks. "On a 10-task coding evaluation, it successfully completed 8 without intervention" is useful. "The coding is great" is not.
Currency
Specify the version or date of the tool you tested. AI tools change rapidly — a review of GPT-4 from 2023 may not reflect current performance.
6. Dos and Don'ts
✓ Do
- Test the tool yourself for at least two weeks
- Include specific examples of outputs
- Reference published benchmarks where relevant
- Disclose if you used a free trial
- Compare to direct alternatives
- Update your review when the tool changes
- State your use case and experience level
✗ Don't
- Review tools you haven't personally used
- Copy from official documentation or press releases
- Make claims you can't support with evidence
- Review tools from your employer
- Submit AI-generated review content
- Use affiliate links in your review
- Attack developers personally
7. Submission Process
Once you have a free contributor account, you can submit reviews through the member dashboard. All reviews are submitted with status "Pending" and enter our editorial queue.
Editorial review typically takes 5–10 business days. You will be notified by email of the outcome — published, returned for revision, or rejected with feedback.
If your review is returned for revision, you will receive specific notes from an editor. Most revisions are minor (adding specificity, adjusting score justification, or clarifying a claim).
Questions about the review process? Contact us.