our company
Trusted AI guardians building the future of agent security
how it started
Before Vijil, our founders spent over a decade building large-scale AI systems at AWS (Amazon SageMaker and Bedrock) and developing responsible AI practices at companies like AT&T, Splunk, and Twitch. When generative AI agents emerged, they immediately recognized a critical gap: enterprises had powerful new technology but no way to trust it in production. They founded Vijil in 2024 to bridge that trust gap.
Leadership
Our Team

Previously at Capital One, evaluating LLMs for company-wide use. Working in the field of responsible AI since 2019, including building explainability solutions, establishing responsible AI processes, and publishing interdisciplinary research at venues like FAccT. Tries to spend at least one week a year walking in the mountains.
Advisors
Values
Three principles that define our approach:
Custom
Rapid customization for enterprise-specific trust requirements, not generic security solutions
Continuous
Trust throughout the entire agent lifecycle, from development through production deployment
Inside-Out
Building trust by design within agents, not just applying external protection
Join Our Mission
Help us end the intern era for AI agents
We're building the platform that transforms AI agents from supervised interns into autonomous professionals. If you're passionate about accelerating time to trust for enterprise AI, we'd love to hear from you.
Open Positions
You will build on the latest research in trustworthy ML and generative AI to deliver cloud services for AI developers who are customizing agents and LLMs. You will collaborate with applied scientists to build capabilities for testing, tuning, and deploying agents and LLMs securely and safely. Day to day, you will:
- Solve problems facing enterprises that want to deploy agents and LLMs in production
- Implement ML techniques to detect LLM vulnerabilities to attacks
- Implement ML techniques to assess LLM propensities for harm
You will be responsible for researching and developing novel machine learning techniques that improve the security and safety of large language models. You will continuously survey state-of-the-art research in adversarial ML, red-teaming, data curation, and LLM training and fine-tuning. You will pursue independent research to develop patents and write papers for peer-reviewed publication at top-tier conferences. You will also build small-scale prototypes in production-quality code and collaborate with ML engineers and cloud service developers to turn those prototypes into large-scale cloud services. Most importantly, you will focus obsessively on the needs of customers, the AI developers at enterprises and startups who customize LLMs to build agents, and let them guide the direction of your research.