An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API, using state-of-the-art evaluation models to score and critique LLM responses.

Available tools:

1. evaluate_llm_response
Evaluate an LLM's response to a prompt against a single evaluation criterion, returning a score and a critique.
2. evaluate_llm_response_on_multiple_criteria
Evaluate an LLM's response to a prompt across multiple evaluation criteria, returning a score and a critique for each criterion.
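Since MCP clients invoke tools over JSON-RPC 2.0 via the `tools/call` method, a request to the first tool above might look like the following sketch. The argument names (`prompt`, `response`, `evaluation_criteria`) are illustrative assumptions, not the server's confirmed schema:

```python
import json

# Hypothetical shape of the JSON-RPC 2.0 "tools/call" request an MCP
# client would send to invoke evaluate_llm_response. Argument names are
# assumptions for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "evaluate_llm_response",
        "arguments": {
            "prompt": "Summarize the causes of World War I in one sentence.",
            "response": "WWI arose from entangled alliances, militarism, "
                        "imperial rivalry, and the assassination of "
                        "Archduke Franz Ferdinand.",
            "evaluation_criteria": "Is the summary accurate and concise?",
        },
    },
}

# Serialize the request as it would appear on the wire.
payload = json.dumps(request)
print(payload)
```

In practice an MCP client library (such as the official Python SDK) constructs and sends this message for you; the sketch only shows the wire-level shape of a tool invocation.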