Our Company
Building trusted agents
WHY VIJIL NOW
The most important problem in enterprises today is that 95% of AI projects fail to reach production because AI developers do not deliver agents that stakeholders can trust. Vijil is the trust infrastructure that enterprises need to use AI agents with reliability, security, and safety. The platform enables developers to build agents with hardened components, cutting time-to-trust by 4x. It helps risk officers enforce policies to ensure governance. And it lets business owners continuously improve resilience.
Founded in 2023 by senior leaders from AWS, Vijil is backed by legendary investors at BrightMind, Gradient, and Mayfield. The company was named a Gartner® Cool Vendor™ in 2025 and one of CB Insights' Most Innovative AI Startups of 2025. Vijil is used in production by SmartRecruiters, DuploCloud, and agent developers at DigitalOcean.
Leadership

AI Researcher | Assistant Professor at the University of Toronto | Rhodes Scholar
Assistant Professor of Statistical Sciences at the University of Toronto, a Faculty Member at the Vector Institute for Artificial Intelligence, and a Faculty Affiliate at the Schwartz Reisman Institute for Technology and Society.
Team

Previously at Capital One, evaluating LLMs for company-wide use. Has worked in responsible AI since 2019, building explainability solutions, establishing responsible AI processes, and publishing interdisciplinary research at venues such as FAccT. Tries to spend at least one week a year walking in the mountains.
Advisors
Values
Three principles that define our approach:
Custom
Rapid customization for enterprise-specific trust requirements, not generic security solutions
Continuous
Trust throughout the entire agent lifecycle, from development through production deployment
Inside-Out
Building trust by design within agents, not just applying external protection
Testimonials
Join Us
Evolve Trusted Agents
We're building a platform to accelerate the evolution of AI agents with reliability, security, and safety. If you're curious, relentless, and contrarian, we'd love to hear from you.
Open positions
You will draw on the latest research methods in trustworthy ML and generative AI to build and deliver cloud services for AI developers who are customizing agents and LLMs. You will collaborate with applied scientists to create capabilities for testing, tuning, and deploying agents and LLMs with security and safety. Your day-to-day responsibilities include:
- Solve problems facing enterprises that want to deploy agents and LLMs in production
- Implement ML techniques to detect LLM vulnerabilities to attacks
- Implement ML techniques to assess LLM propensities for harm
You will be responsible for research and development of novel machine learning techniques that improve the security and safety of large language models. You will continuously survey state-of-the-art research in adversarial ML, red-teaming, data curation, training, and fine-tuning of LLMs. You will pursue independent research to develop patents and write papers for peer-reviewed publication at top-tier conferences. You will develop small-scale prototypes (production-quality code) and collaborate with ML engineers and cloud-services developers to transform those prototypes into large-scale cloud services. Importantly, you will focus obsessively on the needs of customers (AI developers at enterprises and startups who are customizing LLMs to build agents) and let their needs guide the direction of your research.