
Risk-Aware AI

Safe AI: The Intersection of Risk-Aware Autonomy and AI

Risk-Aware Autonomy: The goal is to design system parameters—such as control inputs, trajectories, or policies—to bound the probability of failure, where failure means violating safety constraints or failing to achieve planning objectives in the presence of uncertainty.
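One way to make this concrete is to estimate the failure probability of a candidate design by Monte Carlo rollouts and check it against a risk bound. The sketch below assumes a hypothetical one-dimensional stochastic system and an arbitrary bound epsilon = 0.05; it is an illustration of the general idea, not a specific method from this page.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stochastic system: x_{k+1} = x_k + u_k + w_k with
# Gaussian process noise w_k. Safety constraint: |x_k| <= x_max at every step.
x_max = 2.0        # assumed safety bound
noise_std = 0.1    # assumed noise level
horizon = 20
n_rollouts = 10_000

def failure_probability(u_seq):
    """Monte Carlo estimate of P(any |x_k| > x_max) for an open-loop input sequence."""
    x = np.zeros(n_rollouts)
    failed = np.zeros(n_rollouts, dtype=bool)
    for u in u_seq:
        x = x + u + rng.normal(0.0, noise_std, size=n_rollouts)
        failed |= np.abs(x) > x_max
    return failed.mean()

# A cautious design (zero input) should keep the estimated risk below epsilon.
epsilon = 0.05
p_fail = failure_probability(np.zeros(horizon))
print(f"estimated failure probability: {p_fail:.4f} (bound: {epsilon})")
```

In a full risk-aware planner, the input sequence would be optimized subject to this probabilistic constraint rather than fixed in advance.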

Risk-Aware AI: The goal is to design model parameters—such as the architecture and learned parameters of neural networks—to bound the probability of failure, where failure means violating safety constraints or failing to meet intended design objectives when AI models are deployed.


Risk-Aware AI systems reason about uncertainty and assess risk to provide formal safety assurances in the presence of imperfect information, perturbations, and adversarial conditions. For example, risk-aware AI enables a deep reinforcement learning agent to quantify the likelihood that its neural policy maintains safety despite input noise and external disturbances. It also allows a generative model to estimate the risk of producing undesirable outputs in the presence of adversarial perturbations.
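For instance, the likelihood that a policy stays safe under input noise can be estimated empirically. In the sketch below, a fixed linear map stands in for a trained neural policy, and the safety predicate, noise level, and threshold are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed linear map stands in for a trained neural policy (assumed).
W = np.array([[-0.8]])

def policy(obs):
    return obs @ W.T  # action = W @ observation

# Assumed safety predicate: |action| must stay below a_max even when the
# observation is corrupted by Gaussian sensor noise.
a_max = 1.5
nominal_obs = np.array([[1.0]])
n_samples = 20_000
noise = rng.normal(0.0, 0.3, size=(n_samples, 1))  # input perturbations

actions = policy(nominal_obs + noise)
risk = np.mean(np.abs(actions) > a_max)  # estimated probability of an unsafe action
print(f"estimated risk of unsafe action: {risk:.4f}")
```

The same recipe applies to adversarial perturbations by replacing the random noise model with worst-case or attack-generated inputs.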

What Do Risk-Aware Autonomy and Generative AI Have in Common?

Both aim to achieve desired probability distributions.

  • Risk-Aware Autonomy focuses on steering the distributions of autonomous systems to ensure safe and optimal behavior under uncertainty (e.g., stochastic optimal control). This involves optimizing trajectories, actions, or policies to remain within acceptable risk bounds.

  • Generative AI models data distributions to generate realistic outputs, transforming simple initial distributions (e.g., Gaussian noise) into complex data distributions through learned mappings.

In both cases, the core challenge is controlling the evolution of probability distributions—whether to ensure safety in autonomous systems or to synthesize high-quality, diverse data in generative AI.
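This shared view can be illustrated with a toy stochastic differential equation: simulating an Ornstein-Uhlenbeck process steers samples from a standard Gaussian toward a known target Gaussian, exactly the kind of distribution evolution both fields manipulate. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck SDE,
#   dx = theta * (mu - x) dt + sigma dW,
# whose stationary distribution is N(mu, sigma^2 / (2 * theta)).
theta, mu, sigma = 2.0, 3.0, 1.0   # illustrative parameters
dt, n_steps, n_particles = 0.01, 2_000, 20_000

x = rng.normal(0.0, 1.0, size=n_particles)  # simple initial distribution
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_particles)
    x += theta * (mu - x) * dt + sigma * dW

# Theory: mean -> mu = 3.0, variance -> sigma^2 / (2 * theta) = 0.25
print(f"empirical mean: {x.mean():.3f}, empirical variance: {x.var():.3f}")
```

In risk-aware autonomy the drift would come from a controller designed to satisfy a risk bound; in generative modeling it would come from a learned score or velocity network.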

Risk-Aware Autonomy and AI

[Right] Generative AI systems are at the forefront of the recent AI revolution, driving advancements across multiple domains. The primary objective of GenAI is to model complex data distributions, either explicitly—through methods such as variational autoencoders (VAEs), autoregressive models, normalizing flows, and diffusion models—or implicitly, as in generative adversarial networks (GANs). For instance, denoising diffusion and flow matching have emerged as state-of-the-art techniques in fields including image generation (e.g., Stable Diffusion), video generation (e.g., Sora), and scientific applications (e.g., AlphaFold3). These models generate new data by transforming simple initial distributions (e.g., Gaussian noise) into complex data distributions, which are then sampled to produce realistic outputs. To achieve this, they construct or learn a mapping—represented by neural ordinary differential equations (ODEs) or stochastic differential equations (SDEs)—that iteratively refines noise into meaningful data.

[Left] Risk-aware autonomy transforms the initial distribution of an autonomous system's states—represented by ODEs or SDEs—into desired probability distributions that characterize the system's safe and optimal behavior. Under these distributions, the autonomous system satisfies safety constraints and achieves optimal behavior with high probability, ensuring risk-bounded performance.
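The iterative refinement of noise can be sketched with a closed-form flow: for a one-dimensional Gaussian target, the velocity field below transports N(0, 1) at t = 0 to N(m, s^2) at t = 1, and Euler integration plays the role a neural ODE solver would play in a learned model. The field and parameters are hand-derived for illustration, not taken from any particular generative model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Exact velocity field of a toy probability-flow ODE transporting N(0, 1)
# at t=0 to N(m, s^2) at t=1 along the affine path x(t) = (1 + t*(s-1))*x0 + t*m.
m, s = 4.0, 0.5  # illustrative target mean and standard deviation

def velocity(x, t):
    scale = 1.0 + t * (s - 1.0)
    return (s - 1.0) * (x - t * m) / scale + m

# Euler integration: iteratively refine noise samples into target samples.
x = rng.normal(0.0, 1.0, size=50_000)  # start from pure Gaussian noise
n_steps = 500
dt = 1.0 / n_steps
for k in range(n_steps):
    x += velocity(x, k * dt) * dt

# Theory: mean -> 4.0, std -> 0.5
print(f"empirical mean: {x.mean():.3f}, empirical std: {x.std():.3f}")
```

In a trained diffusion or flow-matching model, the hand-derived `velocity` would be replaced by a neural network fitted to the data distribution.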
