Solo open-source projects address challenges of agentic AI
The rush to adopt agentic AI presents significant challenges for enterprises, particularly around governance, security, and ensuring reliability in production environments. Two new open-source projects created by Solo.io, Agent Registry and Agent Evals, aim to solve these critical adoption hurdles.
The Agent Registry was open-sourced at KubeCon Atlanta and subsequently donated to the Cloud Native Computing Foundation as a sandbox project in Amsterdam. It addresses the enterprise need to curate and govern approved AI agents, MCP tools, and agent skills. By serving as a central hub for hosting these artifacts, it provides governance and intelligent search, and lets developers easily build, push, and run agents in environments such as Kubernetes. The agents themselves are highly customizable, supporting multiple frameworks, including the declarative YAML-based Kagent as well as Agent Core, Azure, and Google ADK. Users can configure agent instructions, skills, MCP tools, and model settings. That need, said Lin Sun, director of open source at Solo.io, is how and why Agent Registry was built.
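To give a sense of what declarative agent configuration looks like, here is a rough Kagent-style sketch. The `apiVersion` and field names are illustrative assumptions, not details from the article; consult the kagent documentation for the real schema.

```yaml
# Hypothetical, illustrative sketch of a declarative agent definition
# in the style of Kagent's Kubernetes resources. Field names and
# apiVersion are assumptions -- see the kagent docs for the actual CRD.
apiVersion: kagent.dev/v1alpha1
kind: Agent
metadata:
  name: k8s-troubleshooter
spec:
  description: Diagnoses Kubernetes deployment and networking issues
  systemMessage: |
    You are a Kubernetes troubleshooting assistant. Inspect cluster
    state and suggest fixes; never apply changes without approval.
  modelConfig: default-model-config   # reference to model settings
  tools:
    - type: McpServer
      mcpServer:
        name: kubernetes-mcp          # hypothetical MCP tool reference
```

A definition like this is what a registry would host, version, and govern as an approved artifact.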
“As we were running agents at Solo, we use Kagent a lot to help us troubleshoot Kubernetes environment deployment issues, networking configuration issues. Because they are not deterministic, some agents are a little bit more reliable with certain models with certain prompts,” Lin said. “So we feel there’s a strong need to be able to ship agents with reliability and confidence in mind.”
A separate project, Agent Evals, was announced to enable the reliable shipping of agents. It grew out of that internal experience: because agents are non-deterministic, teams need a way to build reliability and confidence. Agent Evals provides tooling to benchmark agents by leveraging open standards such as OpenTelemetry. It collects real-time metrics and traces as the agent runs, scores performance and inference quality, and produces a report that helps users understand their agent's reliability. That assessment is crucial for determining the level of human intervention required, whether fully autonomous, human-in-the-loop, or human-on-the-loop. Agent Evals works in conjunction with other observability tools that support OpenTelemetry standards.
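To make the benchmarking idea concrete, here is a minimal Python sketch of how per-run results, of the kind that could be reconstructed from traces, might be aggregated into a reliability summary. This is a conceptual illustration, not Agent Evals' actual API; the record format and the pass/fail criterion are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgentRun:
    """One benchmarked agent run (hypothetical record shape)."""
    passed: bool       # did the agent reach the expected outcome?
    latency_ms: float  # wall-clock duration of the run

def reliability_report(runs: list[AgentRun]) -> dict:
    """Aggregate per-run results into a simple reliability summary."""
    total = len(runs)
    passes = sum(1 for r in runs if r.passed)
    return {
        "runs": total,
        "pass_rate": passes / total if total else 0.0,
        "avg_latency_ms": sum(r.latency_ms for r in runs) / total if total else 0.0,
    }

# Four hypothetical runs of the same agent against the same task
runs = [
    AgentRun(True, 820.0),
    AgentRun(True, 910.0),
    AgentRun(False, 1500.0),
    AgentRun(True, 780.0),
]
print(reliability_report(runs))  # pass_rate of 0.75 across 4 runs
```

A report like this is what lets a team decide whether an agent can run autonomously or still needs a human reviewing its output.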
Moving beyond individual developer laptops into full production requires robust security and governance. Solo is addressing this by solving problems such as securing agent communication with LLMs and MCP tools. Agent Gateway provides a critical piece here, offering centralized policy enforcement, security, and observability for agent traffic. This includes "context layer enforcement," which can be configured to put guardrails on responses, for instance stripping out sensitive data such as credit card or bank account numbers as traffic travels through the gateway. Furthermore, Agent Gateway is being integrated into Istio as an experimental data plane option in Istio's ambient mode, helping mediate agent traffic without requiring changes to the agents or MCP tools themselves.
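The redaction guardrail described above can be illustrated with a small Python sketch. This is a conceptual stand-in for what a gateway might do to a response body in flight, not Agent Gateway's actual implementation, and the single regex shown is deliberately simplistic compared with real sensitive-data detectors.

```python
import re

# Illustrative guardrail: redact card-like digit sequences from a
# response before it leaves the gateway. Production systems use far
# more robust detection (Luhn checks, typed detectors) than one regex.
CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def redact_sensitive(text: str) -> str:
    """Replace card-like numbers (13-16 digits) with a placeholder."""
    return CARD_PATTERN.sub("[REDACTED]", text)

response = "Charge card 4111 1111 1111 1111 for the order."
print(redact_sensitive(response))
# prints: Charge card [REDACTED] for the order.
```

In a gateway deployment, a filter like this would run on every response, so neither the agents nor the MCP tools behind the gateway need to change.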
Collectively, these tools, with Agent Registry for governance, Agent Evals for reliability, and Agent Gateway for security, fill in the pieces needed to run agentic AI in production with confidence. For critical work, however, human involvement remains necessary; the guiding philosophy is to view the agent as a growing co-worker that still benefits from supervision and peer review.
The post Solo open-source projects address challenges of agentic AI appeared first on SD Times.