Our Company

Trusted AI guardians building the future of agent security

How It Started

Before Vijil, our founders spent over a decade building large-scale AI systems at AWS (Amazon SageMaker and Bedrock) and developing responsible AI practices at companies like AT&T, Splunk, and Twitch. When generative AI agents emerged, they immediately recognized a critical gap: enterprises had powerful new technology but no way to trust it in production. They founded Vijil in 2024 to bridge that trust gap.

Our Mission


We accelerate your time to trust.

Our mission is simple: graduate AI agents from supervised interns to autonomous professionals.

We believe the POC era has ended. It's time for agents that are hardened from the inside out, not just protected from the outside in.

Leadership

Vin Sharma
Co-Founder & CEO

Previously GM and Director of Engineering at Amazon SageMaker. 30 years across AI/ML, data, cloud, OS, and security; 11 AWS AI services, 30 products, 10 patents, 5 papers

Subho Majumdar
Co-Founder & Head of AI

Responsible AI leader; 10+ years in data science; co-author of Trustworthy ML (O'Reilly); 40 papers, 20 patents; key contributor to open-source projects including Garak, AVID, and AI Village

Zdravko Pantic
Co-Founder & Head of Engineering

AWS AI senior leader; 20 years in ML systems and graphics; led the PyTorch, TensorFlow, and AWS SageMaker Training teams

Radina Mihaleva
Head of Business Development

Previously COO at Astronomer; helped scale Lacework from $1M to $100M ARR; 20 years of GTM strategy and partnerships in cybersecurity; background in consulting and investment banking; Harvard alum

Our Team

Pradeep Das
Senior Staff Machine Learning Engineer

Previously at Amazon Music, Oracle, and Viiv Labs; co-founder and CTO of Adya (acquired by Qualys). Passionate about designing and building large-scale ML systems with a focus on NLP/LLMs. Enjoys reading, hiking, cooking, and doing nothing.

Akrura Gordillo
Staff Software Engineer

Previously at Riva Health, Viiv Labs, Solvvy, and Polycom. Over 20 years of software engineering experience. Most recently, led threat modeling and cybersecurity analysis of a medical device to prepare for FDA approval. University of California, Berkeley

Leif Hancox-Li
Senior Applied Scientist

Previously at Capital One, evaluating LLMs for company-wide use. Working in the field of responsible AI since 2019, including building explainability solutions, establishing responsible AI processes, and publishing interdisciplinary research at venues like FAccT. Tries to spend at least one week a year walking in the mountains.

Giuliana Gesto
Frontend Developer

UX/UI designer and front-end developer, previously at bitlogic.io. Based in Córdoba, Argentina. Instituto Superior Politécnico de Córdoba.

Subaru Ueno
AppSec Engineer

Previously at Amazon, Oracle, and Accenture. Working on AI/ML security engineering since 2019. Most recently, led red-teaming for Amazon AI models. Indiana University

Varun Cherukuri
Software Development Engineer

Cloud infrastructure engineer. Most recently at MIST (acquired by Juniper), built the conversational interface to Marvis Virtual Network Assistant, designed to diagnose and resolve networking issues. University of Illinois at Urbana-Champaign

Anuj Tambwekar
Machine Learning Engineer

Previously at Microsoft. Research interests in trustworthy AI, ML for human safety, and autonomous vehicles. University of Michigan

Vele Tosevski
Senior Applied Scientist

Previously at Lorica Cybersecurity, designed and deployed privacy-preserving machine learning products; expertise in fully homomorphic encryption and trusted execution environments for LLMs. University of Toronto

Advisors

Leon Derczynski
LLM security pioneer. Professor, ITU Copenhagen; Principal Scientist, NVIDIA; Lead, OWASP LLM Top 10

Founder of Garak and NeMo Guardrails

Joe Spisak

Product Director & Head of Generative AI Open Source, Meta

Previously Google, Amazon

Ruslan Salakhutdinov

Deep learning pioneer; UPMC Professor of Computer Science at CMU

Senior Fellow of CIFAR; Alfred P. Sloan Research Fellow; Microsoft Research Faculty Fellow

Google Faculty Award; NVIDIA Pioneers of AI Award

Bratin Saha

Chief Product and Technical Officer, DigitalOcean

Former VP and GM, AWS AI

Values

Three principles that define our approach:

Custom

Rapid customization for enterprise-specific trust requirements, not generic security solutions

Continuous

Trust throughout the entire agent lifecycle, from development through production deployment

Inside-Out

Building trust by design within agents, not just applying external protection

Testimonials

“Our enterprise customers demand trust verification before deploying AI in hiring workflows. Vijil helps us ship AI agents in six weeks instead of six months while dramatically lowering compliance costs.”

Michal Nowak
Senior Vice President, Engineering, SmartRecruiters

“By adapting the Google Responsible Generative AI Toolkit to the needs of enterprises in various industries, Vijil provides critical capabilities for AI developers to preserve the privacy, security and safety of custom models downstream with the same rigor that went into their original release.”

Manvinder Singh
Director of Product Management, Google

Join Our Mission

Help us end the intern era for AI agents

We're building the platform that transforms AI agents from supervised interns into autonomous professionals. If you're passionate about accelerating time to trust for enterprise AI, we'd love to hear from you.

Contact Us

Open Positions

Full-time
US$150K - US$200K + Equity + Benefits
Remote - US/Canada

You will build on the latest research in trustworthy ML and generative AI to deliver cloud services for AI developers who are customizing agents and LLMs. You will collaborate with applied scientists to build capabilities for testing, tuning, and deploying agents and LLMs securely and safely. Your day-to-day responsibilities include:

  • Solve problems facing enterprises that want to deploy agents and LLMs in production
  • Implement ML techniques to detect LLM vulnerabilities to attacks
  • Implement ML techniques to assess LLM propensities for harm

Apply Now

Full-time
US$150K - US$200K + Equity + Benefits
Remote - US/Canada

You will be responsible for research and development of novel machine learning techniques to improve the security and safety of large language models. You will continuously survey state-of-the-art research in adversarial ML, red-teaming, data curation, training, and fine-tuning of LLMs. You will pursue independent research to develop patents and write papers for peer-reviewed publication at top-tier conferences. You will develop small-scale prototypes (production-quality code) and collaborate with ML engineers and cloud service developers to transform your prototypes into large-scale cloud services. Importantly, you will focus obsessively on the needs of customers, the AI developers at enterprises and startups who are customizing LLMs to build agents, and let them guide the direction of your research.

Apply Now