AI Evaluation Platforms - AI Tools

  • Freeplay
    Freeplay: The All-in-One Platform for AI Experimentation, Evaluation, and Observability

    Freeplay provides comprehensive tools for AI teams to run experiments, evaluate model performance, and monitor production systems, streamlining the development process.

    • Paid
    • From $500
  • Arize
    Arize: Unified Observability and Evaluation Platform for AI

    Arize is a comprehensive platform designed to accelerate the development and improve the production performance of AI applications and agents.

    • Freemium
    • From $50
  • Future AGI
    Future AGI: World’s first comprehensive evaluation and optimization platform to help enterprises achieve 99% accuracy in AI applications across software and hardware.

    Future AGI is a comprehensive evaluation and optimization platform designed to help enterprises build, evaluate, and improve AI applications, aiming for high accuracy across software and hardware.

    • Freemium
    • From $50
  • Evidently AI
    Evidently AI: Collaborative AI observability platform for evaluating, testing, and monitoring AI-powered products

    Evidently AI is a comprehensive AI observability platform that helps teams evaluate, test, and monitor LLM and ML models in production, offering data drift detection, quality assessment, and performance monitoring capabilities; a minimal usage sketch follows this entry.

    • Freemium
    • From $50
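
    To give a concrete flavor of this category, here is a minimal drift-check sketch using Evidently's open-source Python library. This assumes the Report API from the pre-0.7 releases; the two DataFrames are illustrative placeholders, not real data.

```python
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Illustrative placeholders: in practice, use a training-time sample
# (reference) and a recent production sample (current) with the same schema.
reference = pd.DataFrame({"feature": [0.1, 0.2, 0.3, 0.4, 0.5]})
current = pd.DataFrame({"feature": [0.6, 0.7, 0.8, 0.9, 1.0]})

# Compare the two datasets column by column for distribution drift.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)

# Write an interactive HTML report for review.
report.save_html("drift_report.html")
```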
  • Lisapet.ai
    Lisapet.ai: AI prompt testing suite for product teams

    Lisapet.ai is an AI development platform designed to help product teams prototype, test, and deploy AI features efficiently by automating prompt testing.

    • Paid
    • From $9
  • Coherence
    Coherence: AI-Augmented Testing and Deployment Platform

    Coherence provides AI-augmented testing for evaluating AI responses and prompts, alongside a platform for streamlined cloud deployment and infrastructure management.

    • Freemium
    • From $35
  • Braintrust
    Braintrust: The end-to-end platform for building world-class AI apps.

    Braintrust provides an end-to-end platform for developing, evaluating, and monitoring Large Language Model (LLM) applications. It helps teams build robust AI products through iterative workflows and real-time analysis; a minimal evaluation sketch follows this entry.

    • Freemium
    • From $249
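
    As an illustration, here is a minimal offline evaluation sketch following the quickstart pattern documented for Braintrust's Python SDK. The project name, toy dataset, and task are illustrative placeholders; it assumes the braintrust and autoevals packages are installed and a Braintrust API key is configured in the environment.

```python
from braintrust import Eval
from autoevals import Levenshtein

# Toy eval: scores a trivial "task" against expected outputs.
# Requires BRAINTRUST_API_KEY to be set; all names are placeholders.
Eval(
    "Say Hi Bot",  # project name (hypothetical)
    data=lambda: [{"input": "Foo", "expected": "Hi Foo"}],  # eval cases
    task=lambda input: "Hi " + input,  # the "model" under test
    scores=[Levenshtein],  # string-similarity scorer from autoevals
)
```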
  • Gentrace
    Gentrace: Intuitive evals for intelligent applications

    Gentrace is an LLM evaluation platform designed for AI teams to test and automate evaluations of generative AI products and agents. It facilitates collaborative development and ensures high-quality LLM applications.

    • Usage Based
  • Humanloop
    Humanloop: The LLM evals platform for enterprises to ship and scale AI with confidence

    Humanloop is an enterprise-grade platform that provides tools for LLM evaluation, prompt management, and AI observability, enabling teams to develop, evaluate, and deploy trustworthy AI applications.

    • Freemium
  • HoneyHive
    HoneyHive: AI Observability and Evaluation Platform for Building Reliable AI Products

    HoneyHive is a comprehensive platform that provides AI observability, evaluation, and prompt management tools to help teams build and monitor reliable AI applications.

    • Freemium
  • LastMile AI
    LastMile AI: Ship generative AI apps to production with confidence.

    LastMile AI empowers developers to seamlessly transition generative AI applications from prototype to production with a robust developer platform.

    • Contact for Pricing
    • API
  • Autoblocks
    Autoblocks: Improve your LLM Product Accuracy with Expert-Driven Testing & Evaluation

    Autoblocks is a collaborative testing and evaluation platform for LLM-based products that automatically improves through user and expert feedback, offering comprehensive tools for monitoring, debugging, and quality assurance.

    • Freemium
    • From $1,750
  • Hegel AI
    Hegel AI: Developer Platform for Large Language Model (LLM) Applications

    Hegel AI provides a developer platform for building, monitoring, and improving large language model (LLM) applications, featuring tools for experimentation, evaluation, and feedback integration.

    • Contact for Pricing
  • Scorecard.io
    Scorecard.io: Testing for production-ready LLM applications, RAG systems, agents, and chatbots.

    Scorecard.io is an evaluation platform designed for testing and validating production-ready Generative AI applications, including LLMs, RAG systems, agents, and chatbots. It supports the entire AI production lifecycle from experiment design to continuous evaluation.

    • Contact for Pricing
  • Langtrace
    Langtrace: Transform AI Prototypes into Enterprise-Grade Products

    Langtrace is an open-source observability and evaluations platform designed to help developers monitor, evaluate, and enhance AI agents for enterprise deployment.

    • Freemium
    • From $31
  • Maxim
    Maxim: Simulate, evaluate, and observe your AI agents

    Maxim is an end-to-end evaluation and observability platform designed to help teams ship AI agents reliably and more than 5x faster.

    • Paid
    • From $29
  • Distributional
    Distributional: The Modern Enterprise Platform for AI Testing

    Distributional is an enterprise platform for AI testing, designed to give teams confidence in the reliability of their AI and ML applications. It offers a proactive approach to mitigate the risks associated with unpredictable AI systems.

    • Contact for Pricing
  • aixblock.io
    aixblock.io: Productize AI using Decentralized Resources with Flexibility and Full Privacy Control

    AIxBlock is a decentralized platform for AI development and deployment, offering access to computing power, AI models, and human validators. It ensures privacy, scalability, and cost savings through its decentralized infrastructure.

    • Freemium
    • From $69
  • Basalt
    Basalt: Integrate AI in your product in seconds

    Basalt is an AI building platform that helps teams quickly create, test, and launch reliable AI features. It offers tools for prototyping, evaluating, and deploying AI prompts.

    • Freemium
  • teammately.ai
    teammately.ai: The AI Agent for AI Engineers that autonomously builds AI Products, Models and Agents

    Teammately is an autonomous AI agent that iteratively refines AI products, models, and agents to meet specified objectives, applying scientific methodology and comprehensive testing beyond what human-only workflows can cover.

    • Freemium
  • TradingPlatforms.ai
    TradingPlatforms.ai: Your Guide to Reviews of AI Trading Platforms, Bots, and Tools

    TradingPlatforms.ai is a comprehensive review platform that provides detailed analysis and evaluations of AI trading platforms, bots, and tools to support traders and investors in making informed decisions.

    • Free
  • GenesisAI
    GenesisAI: Global AI API Marketplace to Discover, Test, and Connect

    GenesisAI is a global marketplace enabling users to discover, compare, test, and integrate state-of-the-art AI APIs for various applications.

    • Usage Based
  • Okareo
    Okareo: Error Discovery and Evaluation for AI Agents

    Okareo provides error discovery and evaluation tools for AI agents, enabling faster iteration, increased accuracy, and optimized performance through advanced monitoring and fine-tuning.

    • Freemium
    • From $199
  • AI2 Playground
    AI2 Playground: Explore and interact with AI models from the Allen Institute for AI.

    AI2 Playground offers an interactive platform to experiment with various artificial intelligence models developed by the Allen Institute for AI.

    • Free
  • EvalsOne
    EvalsOne: Evaluate LLMs & RAG Pipelines Quickly

    EvalsOne is a platform for rapidly evaluating Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) pipelines using various metrics.

    • Freemium
    • From $19
  • Conviction
    Conviction: The Platform to Evaluate & Test LLMs

    Conviction is an AI platform designed for evaluating, testing, and monitoring Large Language Models (LLMs) to help developers build reliable AI applications faster. It focuses on detecting hallucinations, optimizing prompts, and ensuring security.

    • Freemium
    • From $249
  • Relari
    Relari: Trusting your AI should not be hard

    Relari offers a contract-based development toolkit to define, inspect, and verify AI agent behavior using natural language, ensuring robustness and reliability.

    • Freemium
    • From $1,000
  • OpenLIT
    OpenLIT: Open Source Platform for AI Engineering

    OpenLIT is an open-source observability platform designed to streamline AI development workflows, particularly for Generative AI and LLMs, offering features like prompt management, performance tracking, and secure secrets management; a minimal setup sketch follows this entry.

    • Other
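
    Because OpenLIT is open source, instrumentation is largely a one-call setup. A minimal sketch, assuming a local OTLP collector endpoint and the OpenAI SDK as the instrumented client; the endpoint URL, application name, and model are illustrative placeholders:

```python
import openlit
from openai import OpenAI

# One-call setup: auto-instruments supported LLM SDKs and exports
# traces/metrics over OTLP. Endpoint and app name are placeholders.
openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    application_name="demo-app",
)

# Subsequent LLM calls are traced automatically by the instrumentation.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```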
  • Contentable.ai
    Contentable.ai: End-to-end Testing Platform for Your AI Workflows

    Contentable.ai is an innovative platform designed to streamline AI model testing, ensuring high-performance, accurate, and cost-effective AI applications.

    • Free Trial
    • From $20
    • API
  • User Evaluation
    User Evaluation: Streamline your data discovery with AI-curated user interviews

    User Evaluation is an AI-powered platform that transforms customer data into actionable insights through advanced transcription, analysis, and reporting tools, supporting 57+ languages and multiple data formats.

    • Freemium
    • From $19