Arize Phoenix (Arize AI) vs. LayerLens
About: Arize Phoenix
Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. It allows AI engineers and data scientists to quickly visualize their data, evaluate performance, track down issues, and export data for improvement. Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, together with a set of core contributors.

Phoenix works with OpenTelemetry and OpenInference instrumentation. The main Phoenix package is arize-phoenix, and several helper packages cover specific use cases: a semantic layer that adds LLM telemetry to OpenTelemetry, and packages that automatically instrument popular libraries. Phoenix supports tracing for AI applications, via manual instrumentation or through integrations with LlamaIndex, LangChain, OpenAI, and others. LLM tracing records the paths taken by requests as they propagate through the multiple steps or components of an LLM application.
About: LayerLens
LayerLens is an independent AI model evaluation platform for understanding how models perform, with verified results across standard and agentic benchmarks, prompt-level breakdowns, and audit-ready comparisons across vendors. It helps teams compare more than 200 AI models side by side, using transparent benchmarks, model comparison tools, and consistent evaluation methods for accuracy, latency, behavior, and real-world applicability. LayerLens is built for deep model analysis through Spaces, where teams can group benchmarks and evaluations, explore task strengths, and track performance patterns in context. It supports continuous evaluation by running ongoing evals across model versions, prompt changes, judge updates, and live traces, helping teams detect quality regressions, drift, silent failures, contamination, and policy issues before they affect production.
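The continuous-evaluation idea above, catching quality regressions across model versions before they reach production, can be sketched in a few lines. This is a hypothetical illustration, not LayerLens's API: the benchmark names, scores, and tolerance threshold are all made up for the example.

```python
# Hypothetical sketch of regression checking in continuous evaluation:
# compare benchmark scores between a baseline model version and a
# candidate, and flag any benchmark whose score drops beyond a
# tolerance. Names and numbers are illustrative, not LayerLens data.
def find_regressions(baseline: dict, candidate: dict, tolerance: float = 0.02):
    """Return benchmarks where the candidate scores worse than the
    baseline by more than `tolerance` (absolute score points)."""
    regressions = {}
    for bench, base_score in baseline.items():
        cand_score = candidate.get(bench)
        if cand_score is None:
            continue  # benchmark not run for the candidate version
        if base_score - cand_score > tolerance:
            regressions[bench] = (base_score, cand_score)
    return regressions


baseline = {"mmlu": 0.78, "gsm8k": 0.85, "humaneval": 0.62}
candidate = {"mmlu": 0.79, "gsm8k": 0.80, "humaneval": 0.61}
print(find_regressions(baseline, candidate))
# gsm8k dropped by 0.05, beyond the 0.02 tolerance, so it is flagged
```

In practice such checks would run automatically on every model version, prompt change, or judge update, which is what turns one-off benchmarking into continuous evaluation.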
Platforms Supported (both products)
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Audience: Arize Phoenix
AI engineers seeking a tool to enhance the observability and performance of their AI and LLM applications

Audience: LayerLens
AI engineering and governance teams that need transparent, continuous evaluations to compare models, monitor production behavior, and reduce risk before deployment
Support (both products)
Phone Support
24/7 Live Support
Online
API (both products)
Offers API
Pricing: Arize Phoenix
Free
Free Version
Free Trial

Pricing: LayerLens
No information available.
Free Version
Free Trial
Training (both products)
Documentation
Webinars
Live Online
In Person
Company Information: Arize Phoenix
Arize AI
United States
docs.arize.com/phoenix

Company Information: LayerLens
LayerLens
United States
stratix.layerlens.ai/
Integrations (both products)
Databricks
Mistral AI
OpenAI
APIFuzzer
Amazon Web Services (AWS)
Anthropic
Cohere
Conda
DeepSeek
Google AI Mode