
Risk-Aware AI
The objective of this research is to develop safe and reliable AI models by leveraging advanced reasoning and autonomy algorithms for risk-aware decision-making under uncertainty. In particular, such algorithms, designed to handle non-Gaussian and nonlinear models, are well-suited to enhancing the robustness and reliability of AI models, which inherently exhibit nonlinear behavior and produce non-Gaussian output distributions.


Risk-Aware Autonomy: The goal is to design system parameters (such as control inputs, trajectories, or policies) to bound the probability of failure, where failure means violating safety constraints or failing to achieve planning objectives in the presence of uncertainty.
Risk-Aware AI: The goal is to design model parameters (such as the architecture and learned parameters of neural networks) to bound the probability of failure, where failure means violating safety constraints or failing to meet the intended design objectives when the AI model is deployed.
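Both definitions instantiate the same chance-constrained design template. The formulation below is a minimal sketch with illustrative notation (design variables θ, cost J or loss L, safe sets S and Y_safe, risk bound Δ), not notation drawn from a specific publication.

```latex
% Both problems share one chance-constrained template (illustrative notation).

% Risk-aware autonomy: choose controls, trajectories, or policies \theta so that
% the uncertain closed-loop state x_\theta(t,\omega) stays in the safe set with
% high probability.
\min_{\theta} \; J(\theta)
\quad \text{s.t.} \quad
\Pr\!\left[\, x_{\theta}(t,\omega) \notin \mathcal{S} \ \text{for some } t \le T \,\right] \;\le\; \Delta

% Risk-aware AI: choose the architecture and weights \theta of a model f_\theta so
% that its outputs remain acceptable under input perturbations \delta.
\min_{\theta} \; L(\theta)
\quad \text{s.t.} \quad
\Pr\!\left[\, f_{\theta}(u + \delta) \notin \mathcal{Y}_{\text{safe}} \,\right] \;\le\; \Delta
```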
Risk-Aware AI

Risk-Aware AI systems reason about uncertainty and assess risk to provide formal safety assurances in the presence of imperfect information, perturbations, and adversarial conditions. For example, risk-aware AI enables a deep reinforcement learning agent to quantify the likelihood that its neural policy maintains safety despite input noise and external disturbances. It also allows a generative model to estimate the risk of producing undesirable outputs in the presence of adversarial perturbations.
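As a concrete illustration of this kind of risk quantification, the sketch below uses plain Monte Carlo sampling to estimate the probability that a toy neural policy drives a simple stochastic system out of a safe set under observation and process noise. The policy weights, dynamics, noise levels, and safe set are all illustrative assumptions, not a method described in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "neural" policy: a fixed two-layer network mapping state -> control (assumed).
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)

def policy(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def step(x, u, w):
    # Simple linear stochastic dynamics (illustrative).
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    return A @ x + (B @ u).ravel() + w

def is_unsafe(x):
    # Assumed safety constraint: the position component must satisfy |x[0]| <= 1.
    return abs(x[0]) > 1.0

def estimate_failure_probability(x0, n_samples=10_000, noise_std=0.05, horizon=20):
    """Monte Carlo estimate of P(trajectory leaves the safe set)."""
    failures = 0
    for _ in range(n_samples):
        x = x0.copy()
        for _ in range(horizon):
            u = policy(x + rng.normal(scale=noise_std, size=2))  # noisy observation
            x = step(x, u, rng.normal(scale=noise_std, size=2))  # process noise
            if is_unsafe(x):
                failures += 1
                break
    return failures / n_samples

if __name__ == "__main__":
    p_fail = estimate_failure_probability(np.array([0.2, 0.0]))
    print(f"Estimated failure probability: {p_fail:.4f}")
```

In practice, naive Monte Carlo would typically be replaced by variance-reduced estimators or formal verification techniques, but the estimated failure frequency plays the same role as the bounded failure probability in the definitions above.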
What Do Risk-Aware Autonomy and Generative AI Have in Common?
Both aim to achieve desired probability distributions.
- Risk-Aware Autonomy focuses on steering the distributions of autonomous systems to ensure safe and optimal behavior under uncertainty (e.g., stochastic optimal control). This involves optimizing trajectories, actions, or policies to remain within acceptable risk bounds.
- Generative AI models data distributions to generate realistic outputs, transforming simple initial distributions (e.g., Gaussian noise) into complex data distributions through learned mappings.
In both cases, the core challenge is controlling the evolution of probability distributions—whether to ensure safety in autonomous systems or to synthesize high-quality, diverse data in generative AI.
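To make this shared distributional view concrete, the sketch below propagates a particle approximation of an autonomous system's state distribution through controlled stochastic dynamics with Euler-Maruyama steps and reports the empirical risk of leaving a safe region, i.e., the quantity a risk-aware controller would aim to bound. The linear dynamics, feedback gain, and box constraint are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_state_distribution(n_particles=5000, horizon=50, dt=0.05,
                                K=np.array([[1.2, 1.6]]), sigma=0.1):
    """Propagate particles through controlled stochastic dynamics (Euler-Maruyama)."""
    # Double-integrator-like dynamics with assumed feedback control u = -K x.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    # Initial state distribution (assumed): particles concentrated near x = [0.8, 0].
    X = rng.normal(loc=[0.8, 0.0], scale=0.05, size=(n_particles, 2))
    violated = np.zeros(n_particles, dtype=bool)
    for _ in range(horizon):
        U = -(K @ X.T).T                               # feedback policy per particle
        drift = (A @ X.T).T + (B @ U.T).T              # deterministic part of dynamics
        X = X + dt * drift + np.sqrt(dt) * sigma * rng.normal(size=X.shape)
        violated |= np.abs(X[:, 0]) > 1.0              # safety constraint |position| <= 1
    return X, violated.mean()

if __name__ == "__main__":
    X_final, risk = simulate_state_distribution()
    print(f"Final mean state: {X_final.mean(axis=0)}, empirical risk: {risk:.4f}")
```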

[Right] Generative AI systems are at the forefront of the recent AI revolution, driving advancements across multiple domains. The primary objective of GenAI is to model complex data distributions, either explicitly, through methods such as variational autoencoders (VAEs), autoregressive models, normalizing flows, and diffusion models, or implicitly, as in generative adversarial networks (GANs). For instance, denoising diffusion and flow matching approaches have emerged as state-of-the-art techniques in various fields, including image generation (e.g., Stable Diffusion), video generation (e.g., Sora), and scientific applications (e.g., AlphaFold3). These models generate new data by transforming simple initial distributions (e.g., Gaussian noise) into complex data distributions, which the sampling process then uses to produce realistic outputs. To achieve this, they construct and learn a mapping, represented by neural ordinary differential equations (ODEs) or stochastic differential equations (SDEs), that iteratively refines noise into meaningful data.
[Left] Risk-aware autonomy involves transforming the initial distribution of an autonomous system's states, governed by ODEs or SDEs, into desired probability distributions that represent the system's safe and optimal behavior. Under these distributions, the autonomous system must satisfy safety constraints and achieve optimal behavior with high probability, ensuring risk-bounded performance.
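The sketch below illustrates the noise-to-data mapping in its simplest form: Euler integration of a velocity field that transports standard Gaussian samples toward a target two-mode Gaussian mixture. Here the velocity field is computed in closed form from straight-line (flow-matching-style) interpolation paths rather than learned by a neural network, and all distributions and parameters are illustrative assumptions, not an implementation of any specific diffusion or flow-matching model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target data distribution: a simple 2D two-mode Gaussian mixture (illustrative).
def sample_target(n):
    centers = np.array([[-2.0, 0.0], [2.0, 0.0]])
    idx = rng.integers(0, 2, size=n)
    return centers[idx] + 0.3 * rng.normal(size=(n, 2))

data = sample_target(1000)  # stand-in for the "complex data distribution"

def velocity_field(x, t):
    """Marginal velocity of straight-line paths x_t = (1 - t) x0 + t x1 with
    x0 ~ N(0, I) and x1 ~ empirical data: v(x, t) = E[(x1 - x) / (1 - t) | x_t = x]."""
    s = max(1.0 - t, 1e-3)                          # std of x_t given x1
    logw = -np.sum((x - t * data) ** 2, axis=1) / (2.0 * s**2)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                    # posterior weights over data points
    return (w[:, None] * (data - x)).sum(axis=0) / s

def sample_with_flow(n_samples=300, n_steps=100):
    """Transform Gaussian noise into data-like samples by Euler-integrating the ODE."""
    dt = 1.0 / n_steps
    x = rng.normal(size=(n_samples, 2))             # simple initial distribution
    for k in range(n_steps):
        t = k * dt
        v = np.array([velocity_field(xi, t) for xi in x])
        x = x + dt * v
    return x

if __name__ == "__main__":
    samples = sample_with_flow()
    left = (samples[:, 0] < 0).mean()
    print(f"Fraction of samples near each mode: left={left:.2f}, right={1 - left:.2f}")
```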
Related Links:
- Co-Chair of Special Track on Safe, Robust and Responsible AI, 38th Annual AAAI Conference on Artificial Intelligence, 2024
  Co-Chairs: Ashkan Jasour (NASA/Caltech JPL), Prof. Chuchu Fan (MIT), Prof. Tatsunori Hashimoto (Stanford University), Prof. Reid Simmons (Carnegie Mellon University), Prof. Balaraman Ravindran (Indian Institute of Technology Madras)
- Co-Chair of Special Track on Safe and Robust AI, 37th Annual AAAI Conference on Artificial Intelligence, 2023
  Co-Chairs: Ashkan Jasour (NASA/Caltech JPL), Prof. Chuchu Fan (MIT), Prof. Reid Simmons (Carnegie Mellon University)
- Guest Editor of Special Issue on Risk-Aware Autonomous Systems: Theory and Practice, Artificial Intelligence Journal (AIJ), 2023
  Editors: Ashkan Jasour (NASA/Caltech JPL), Prof. George Pappas (University of Pennsylvania), Prof. Luca Carlone (MIT), Prof. Sara Bernardini (University of Oxford), Prof. Andreas Krause (ETH), Prof. Brian Williams (MIT), Prof. Yisong Yue (Caltech)
- "Generative AI Foundations: Algorithms and Architectures", Ashkan Jasour, 2024-2025
  Course Website: https://jasour.github.io/generative-ai-course