The opportunities are significant, but so are the new safety, security, and trust challenges. Our approach to innovation has always been clear: responsible, not reactionary. We design for people first, creating AI that lightens the cognitive load and works in partnership with human expertise. With this in mind, we’re evolving our AI by Design framework in response to recent developments in the field of agentic AI. In the process, we hope to create an environment where autonomy and safety grow in lockstep, helping ensure that as AI takes on more responsibility, trust and accountability remain uncompromised.
AI by Design Until Now
This update to our principles for developing responsible AI marks the latest step in a multiphase journey that began with our Secure by Design framework, which guides how we approach security and cyber resiliency at SolarWinds. Built on several key tenets, it creates a more secure environment and builds systems centered on transparency and visibility. The first iteration of AI by Design, rolled out in 2024, applied the ideas of Secure by Design to the unique challenges of developing artificial intelligence. The original structure consisted of four distinct principles:
- Principle 1: Privacy and Security
- Principle 2: Accountability and Fairness
- Principle 3: Transparency and Trust
- Principle 4: Simplicity and Accessibility
Now, we’re revising and expanding the framework to address the shifting imperatives of autonomous AI systems, integrating the latest insights from the OWASP Agentic AI Security Initiative, the EU AI Act, and NIST’s AI Risk Management Framework.
The Updated Principles
As agentic AI evolves, the foundations of responsible design must evolve with it. Our updated principles build on what we’ve already established, revisiting familiar concepts like privacy, security, and fairness in light of the new realities of autonomous systems.
Privacy and Security now goes beyond protecting data to governing agent behavior. That means limiting what agents can access, monitoring their actions in real time, and having clear recovery protocols in place when outcomes don’t go as expected.
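To make this concrete, here is a minimal sketch of what scoped agent behavior could look like in practice: an explicit tool allowlist, an audit trail for every action, and a rollback hook when an action fails. The `ScopedAgent` class and tool names are illustrative assumptions, not an actual SolarWinds implementation.

```python
# Hypothetical sketch: agent access limits, real-time action logging,
# and a recovery (rollback) protocol. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ScopedAgent:
    # Explicit allowlist: the agent may only invoke tools named here.
    allowed_tools: set[str]
    audit_log: list[str] = field(default_factory=list)

    def invoke(self, tool: str, action: Callable[[], str],
               rollback: Callable[[], None]) -> str:
        if tool not in self.allowed_tools:
            self.audit_log.append(f"DENIED {tool}")
            raise PermissionError(f"Agent may not use tool: {tool}")
        try:
            result = action()
            self.audit_log.append(f"OK {tool}")
            return result
        except Exception:
            rollback()  # recovery protocol when the outcome goes wrong
            self.audit_log.append(f"ROLLED_BACK {tool}")
            raise

agent = ScopedAgent(allowed_tools={"read_metrics"})
print(agent.invoke("read_metrics", lambda: "cpu=42%", lambda: None))
```

The key design choice is that denial, success, and rollback all leave an audit entry, so the monitoring record is complete regardless of outcome.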
Accountability and Fairness extends past bias checks during model training to real-world behavioral oversight. In an agentic world, fairness is monitored at runtime, with defined escalation paths and controls whenever autonomous decisions require human review.
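One simple form such an escalation path could take is a runtime gate: decisions below a confidence threshold, or touching sensitive categories, are routed to a human reviewer instead of being auto-approved. The threshold value and category names here are assumptions for illustration only.

```python
# Illustrative runtime escalation gate, not a SolarWinds API:
# low-confidence or sensitive autonomous decisions go to human review.
def route_decision(decision: dict, confidence: float,
                   sensitive_categories: set[str],
                   threshold: float = 0.9) -> str:
    if confidence < threshold or decision.get("category") in sensitive_categories:
        return "escalate_to_human"
    return "auto_approve"

print(route_decision({"category": "billing"}, 0.95, {"access_control"}))
print(route_decision({"category": "access_control"}, 0.95, {"access_control"}))
```

The first call is auto-approved; the second escalates because its category is flagged as sensitive, regardless of confidence.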
Transparency and Trust moves from explaining single-model outputs to tracing entire decision chains. Every plan, tool call, and step an agent takes is logged, giving customers the clarity to understand not just what happened but why.
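A decision chain like this can be traced with something as simple as a structured event log keyed by a trace ID, so the full "why" can be reconstructed afterward. This `DecisionTrace` class and the sample events are hypothetical, sketched only to show the shape of such a record.

```python
# Minimal decision-chain trace (illustrative): every plan step, tool
# call, and result is recorded under one trace_id for later review.
import json
import uuid
from datetime import datetime, timezone

class DecisionTrace:
    def __init__(self) -> None:
        self.trace_id = str(uuid.uuid4())
        self.events: list[dict] = []

    def record(self, kind: str, detail: str) -> None:
        self.events.append({
            "trace_id": self.trace_id,
            "ts": datetime.now(timezone.utc).isoformat(),
            "kind": kind,     # e.g. "plan", "tool_call", "result"
            "detail": detail,
        })

trace = DecisionTrace()
trace.record("plan", "diagnose high latency on node-7")
trace.record("tool_call", "query_metrics(node='node-7')")
trace.record("result", "latency spike correlated with recent deploy")
print(json.dumps(trace.events, indent=2))
```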
Simplicity and Accessibility continues to guide how we deliver advanced capabilities. Features roll out in guided modes so users can choose to opt in, opt out, or override actions at any stage, ensuring autonomy is never at odds with usability.
Finally, Autonomy Boundaries and Safety is a new principle focused on defining and enforcing the limits of agentic systems. By setting strict capability constraints and validating them continuously, we can ensure agents act reliably without slowing the pace of innovation.
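As a rough sketch of what continuously validated capability constraints might look like, consider hard bounds checked on every step an agent takes: a step budget and a limit on how many hosts it may touch. The class name and limit values are assumptions for illustration, not SolarWinds specifics.

```python
# Hypothetical autonomy bounds, validated on every action: the agent
# halts the moment its step budget or blast radius is exceeded.
class AutonomyBounds:
    def __init__(self, max_steps: int, max_hosts: int) -> None:
        self.max_steps = max_steps
        self.max_hosts = max_hosts
        self.steps = 0
        self.hosts: set[str] = set()

    def check(self, host: str) -> None:
        self.steps += 1
        self.hosts.add(host)
        if self.steps > self.max_steps:
            raise RuntimeError("step budget exceeded; halting agent")
        if len(self.hosts) > self.max_hosts:
            raise RuntimeError("blast radius exceeded; halting agent")

bounds = AutonomyBounds(max_steps=3, max_hosts=2)
bounds.check("web-1")  # within limits
bounds.check("web-2")  # still within limits
```

Because the check runs before each action rather than once at startup, drift in the agent's behavior is caught immediately instead of after the fact.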
Looking Ahead
This is just an overview. In the coming months, we’ll publish in-depth analyses of each principle, exploring how we’re implementing them, what we’ve learned, and how we’re adapting as both the technology and the corresponding regulations evolve. Agentic AI represents a profound shift in how machines and humans work together. By revisiting our principles now, we’re working to ensure that shift benefits everyone, whatever challenges emerge along the way.