Symbolic Parallel Adaptive Importance Sampling for Probabilistic Program Analysis

Probabilistic software analysis methods extend classic static analysis techniques to consider the effects of probabilistic uncertainty, whether explicitly embedded within the code – as in probabilistic programs – or externalized in a probabilistic input distribution. Analogously to their classic counterparts, these analyses aim at inferring the probability of a target event occurring during execution, e.g., reaching a program state or triggering an exception. Besides helping understand the behaviour of software programs that interact with uncertain, real-world inputs, they also have a profound impact as verification tools for safety-critical systems. Providing efficient, scalable algorithms that generalize across applications remains one of the main challenges preventing the wide adoption of these analysis tools. In our recent paper published at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE ‘21), we introduce a new algorithm that combines advances in symbolic execution and probabilistic inference to enable these analysis tools to be applied to more complex and general problems.


Probabilistic Symbolic Execution

Probabilistic symbolic execution (PSE) is a static analysis technique aiming at quantifying the probability of a target event occurring during execution. It uses a symbolic execution engine to extract conditions on the values of inputs or specified random variables that lead to the occurrence of the target event. It then computes the probability of such constraints being satisfied given a probability distribution over the inputs or specified random variables. These constraints are called path conditions because they uniquely identify the execution path induced by an input satisfying them.

To illustrate with an example, consider the following snippet for a probabilistic program modelling the behaviour of a safety controller for a flying vehicle.

// Probabilistic profile p(x)
altitude ::= Gaussian(8000, 100);
obstacle_x, obstacle_y ::= Gaussian(
    [-2, -2],
    [[0.2, 0.1], [0.1, 0.2]]);
// Controller program
if (altitude <= 9000) { ...
    if (Math.pow(obstacle_x, 2) + Math.pow(obstacle_y, 2) <= 1) {
        callSupervisor();
    }
} else { callSupervisor(); }

The purpose of the program is to detect environmental conditions – excessive altitude or collision distance of an obstacle – that may compromise the crew’s safety and call for a supervisor’s intervention. The purpose of the analysis is to estimate the probability of invoking callSupervisor at any point in the code. Safety-critical applications may require this probability to be very small (e.g., $< 10^{-7}$) and to be estimated with high accuracy. The symbolic execution of the snippet, where random variables are marked as symbolic, would return the following two path conditions (PCs), corresponding to the possible invocations of callSupervisor:

  • $PC_0$: $\texttt{altitude} > 9000$; and
  • $PC_1$: $\texttt{altitude} \leq 9000 \land \texttt{obstacle\_x}^2 + \texttt{obstacle\_y}^2 \leq 1$.

The probability of satisfying a path condition $PC$ can be computed as $$ \begin{aligned} p_{PC} &:= \Pr(\mathbf{x} \models PC) = \int_{\mathbf{x}} \mathbb{1}_{PC}(\mathbf{x}) p(\mathbf{x}) \, d\mathbf{x} \\ &\approx \frac{1}{N} \sum_{i=1}^N \mathbb{1}_{PC}(\mathbf{x}^{(i)}) =: \hat{p}^{DMC}, \; \text{where} \, \mathbf{x}^{(i)} \sim p(\mathbf{x}) \end{aligned} $$

where $\mathbb{1}_{PC}(\mathbf{x})$ denotes the indicator function, equal to 1 if $\mathbf{x}$ satisfies $PC$ and 0 otherwise. For clarity, we will use $\bar{p}(\mathbf{x})$ to denote the (unnormalized) truncated distribution satisfying the constraints, i.e., $\bar{p}(\mathbf{x}) := \mathbb{1}_{PC}(\mathbf{x}) p(\mathbf{x})$.

Illustration of the probabilistic program analysis pipeline.

Monte Carlo Methods

Because analytical solutions to the integral are in general intractable, Monte Carlo methods are used to approximate $p_{PC}$. A naive approach is Direct Monte Carlo (DMC): we generate independent samples $\mathbf{x}^{(i)}$ from the prior $p(\mathbf{x})$ and estimate $p_{PC}$ as the fraction of samples that satisfy the constraints. Unfortunately, this method is very inefficient for rare-event problems. When the target event has a probability of less than $10^{-6}$, obtaining even a single sample that satisfies the constraints requires millions of draws, making accurate estimates of the event probability impractical. Symbolic execution methods, on the other hand, can find feasible solutions without brute-force sampling, but symbolic methods alone cannot provide probabilistic quantification. In our work, we are interested in designing algorithms that combine the best of both worlds.
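To make the inefficiency concrete, here is a minimal DMC sketch (not the paper's code) for the controller snippet above, assuming Gaussian(8000, 100) denotes mean 8000 and standard deviation 100:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Sample inputs from the probabilistic profile p(x) in the snippet above,
# treating Gaussian(8000, 100)'s second parameter as the standard deviation.
altitude = rng.normal(8000.0, 100.0, size=N)
obstacle = rng.multivariate_normal(
    mean=[-2.0, -2.0],
    cov=[[0.2, 0.1], [0.1, 0.2]],
    size=N)

# Indicator of each path condition.
pc0 = altitude > 9000
pc1 = (altitude <= 9000) & (obstacle[:, 0] ** 2 + obstacle[:, 1] ** 2 <= 1)

# DMC estimate: the fraction of samples that satisfy either path condition.
p_hat = (pc0 | pc1).mean()
```

Even with a million samples, the estimate is dominated by sampling noise: $PC_0$ is a ten-sigma event that essentially no sample will ever hit, so the variance of $\hat{p}^{DMC}$ makes it useless for the accuracy targets above.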


Borges et al.1 proposed qCoral, a compositional Monte Carlo method to estimate the probability of satisfying a path condition over non-linear numerical constraints with arbitrarily small estimation variance. qCoral combines insights from program analysis, interval constraint propagation, and stratified sampling to mitigate the complexity of the integration problem and reduce the variance of its estimates. To reduce the variance of the probability estimate of each independent constraint, qCoral uses interval constraint propagation and branch-and-bound methods to find a disjoint union of n-dimensional boxes that reliably enclose all the solutions of a constraint, where $n$ is the number of variables in the constraint. Regions of the input domain outside the boxes are guaranteed to contain no solutions of the constraint ($\mathbb{1}_{PC}(\mathbf{x})=0$). A box is classified as either an inner box, which contains only solutions, or an outer box, which may contain both solutions and non-solutions.

Stratified sampling with interval constraint propagation can lead to a significant variance reduction in the aggregated estimate by confining the uncertainty to the regions of the domain enclosed within the outer boxes, avoiding sampling from potentially large regions of the domain that can be analytically determined to contain only solutions or none. However, qCoral can only produce scalable and accurate estimates of the satisfaction probability for constraints that

  • have low dimensionality or can be reduced to low-dimensional subproblems via constraint slicing,
  • are amenable to scalable and effective interval constraint propagation, and
  • have input distributions whose CDFs are available in analytical form and allow efficient sampling from their truncated distributions.
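The box decomposition can be sketched on a one-dimensional toy constraint. The boxes below are hand-picked stand-ins for what an interval solver might return, and the input distribution is uniform so that box masses are trivially computable:

```python
import numpy as np

rng = np.random.default_rng(0)

def constraint(x):
    return x ** 2 <= 1.0  # toy constraint; the solution set is [-1, 1]

# Input distribution: x ~ Uniform(-10, 10), so a box [a, b] has mass (b - a) / 20.
def box_mass(a, b):
    return (b - a) / 20.0

# Hand-picked stand-ins for the boxes an interval solver might return:
inner_boxes = [(-0.5, 0.5)]               # contain only solutions
outer_boxes = [(-1.2, -0.5), (0.5, 1.2)]  # may contain non-solutions

# Inner boxes contribute their full probability mass; no sampling needed.
p_hat = sum(box_mass(a, b) for a, b in inner_boxes)

# Outer boxes: sample from the input distribution truncated to the box
# (uniform on [a, b]) and weight the satisfaction rate by the box's mass.
n = 10_000
for a, b in outer_boxes:
    x = rng.uniform(a, b, size=n)
    p_hat += box_mass(a, b) * constraint(x).mean()

# Regions outside all boxes are guaranteed to contain no solutions, so they
# contribute nothing. The exact answer is P(-1 <= x <= 1) = 2/20 = 0.1.
```

Inner boxes are integrated exactly and sampling effort is spent only inside the outer boxes, which is where the variance reduction comes from.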


In our paper, we take an alternative approach based on importance sampling2 (IS) to reduce the variance in our estimates of rare-event probabilities. IS draws samples from a proposal distribution $q(\mathbf{x})$ and corrects for the mismatch with $p(\mathbf{x})$ by weighting each sample with the ratio $p(\mathbf{x})/q(\mathbf{x})$. IS has been successfully applied in many areas such as reliability analysis3. An advantage of IS over the stratified sampling approach used in qCoral is that we are not restricted to input distributions with tractable CDFs: an importance-sampling-based approach can be applied to any problem with tractable PDFs, including distributions with correlated dimensions. With a properly chosen $q(\mathbf{x})$, IS also offers better variance reduction than stratified sampling.

While any distribution with $q(\mathbf{x}) > 0$ over the entire domain guarantees that the estimate will eventually converge to the correct value, the choice of $q(\mathbf{x})$ determines the convergence rate of the process and its practical efficiency. In our context of estimating the probability of satisfying a path condition $PC$, the optimal proposal distribution $q^*(\mathbf{x})$ is exactly the truncated, normalized distribution $p(\mathbf{x})$ satisfying $PC$, $$ q^*(\mathbf{x}) = \frac{1}{p_{PC}}p(\mathbf{x})\mathbb{1}_{PC}(\mathbf{x}). $$ In general, it is infeasible to sample from $q^*(\mathbf{x})$ as it requires the calculation of $p_{PC}$, which is exactly our target. For practical applications, specialized proposals are hand-crafted to achieve high sample efficiency in domain-specific settings. In our work, we are interested in designing an automated procedure that iteratively finds a proposal distribution tailored to the given probabilistic program.
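As a minimal illustration of the estimator, with a hand-picked proposal rather than the adaptive one SYMPAIS constructs, consider estimating the tail probability $\Pr(x \geq 4)$ under a standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Rare event: Pr(x >= 4) under p(x) = N(0, 1), roughly 3.2e-5.
# Proposal q(x) = N(4, 1), hand-centred on the feasible region.
n = 100_000
x = rng.normal(4.0, 1.0, size=n)  # samples from q

# IS estimator: average 1_PC(x) * p(x) / q(x) over samples drawn from q.
weights = normal_pdf(x) / normal_pdf(x, mu=4.0)
p_hat = np.mean((x >= 4.0) * weights)
```

Centring the proposal on the feasible region turns a roughly $3 \times 10^{-5}$ event into one that about half the samples hit; the importance weights then correct the bias so the estimator remains unbiased.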

To do this, we adapt the PI-MAIS algorithm4 and tailor it to the problem of probabilistic program analysis. PI-MAIS is a probabilistic inference algorithm that runs a Markov Chain Monte Carlo (MCMC) algorithm (e.g., random-walk Metropolis-Hastings or Hamiltonian Monte Carlo) to sample from an intractable posterior distribution: in our case, the intractable $\bar{p}(\mathbf{x})$. The samples from the Markov chains are used to construct an adaptive proposal distribution for the program analysis above. Concretely, given the states $\boldsymbol{\mu}_i$ of the $N$ parallel chains at the $t$-th iteration, we use the Mixture-of-Gaussians (MoG) $$ q(\cdot) = \frac{1}{N} \sum_{i=1}^{N} \mathcal{N}(\cdot; \boldsymbol{\mu}_i, \Sigma) $$ as the importance proposal. The proposal distribution is multi-modal and, with a sufficient number of chains, can approximate an arbitrary $q^*(\mathbf{x})$. The figure below gives a graphical illustration of SYMPAIS.
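A minimal sketch of such a mixture proposal, with hypothetical chain states $\boldsymbol{\mu}_i$ standing in for the output of the parallel MCMC chains (here placed near the obstacle mode of the running example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chain states mu_i; in SYMPAIS these come from N parallel
# MCMC chains targeting the truncated distribution p_bar(x).
mu = np.array([[-2.1, -1.9], [-1.8, -2.2], [-2.0, -2.0], [-2.3, -1.7]])
N, d = mu.shape
sigma = 0.5  # shared isotropic covariance Sigma = sigma^2 * I

def mog_sample(n):
    """Draw n samples from the equal-weight Gaussian mixture q."""
    idx = rng.integers(0, N, size=n)  # pick a component uniformly at random
    return mu[idx] + sigma * rng.standard_normal((n, d))

def mog_logpdf(x):
    """Log-density of q, a uniform mixture over the N Gaussian components."""
    diff = x[:, None, :] - mu[None, :, :]            # shape (n, N, d)
    log_comp = (-0.5 * np.sum(diff ** 2, axis=-1) / sigma ** 2
                - d * np.log(sigma * np.sqrt(2.0 * np.pi)))
    return np.logaddexp.reduce(log_comp, axis=1) - np.log(N)
```

With `mog_sample` and `mog_logpdf` in hand, the importance weights $p(\mathbf{x})/q(\mathbf{x})$ can be computed exactly, which is what makes the mixture a valid IS proposal.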

Graphical illustration of SYMPAIS on the torus problem

SYMPAIS further utilizes information from the symbolic execution of the program to improve sample efficiency.

First, the MCMC algorithm requires feasible solutions to initialize the chains. In the Bayesian inference literature, this is typically done by sampling from the prior $p(\mathbf{x})$. However, generating diverse initial solutions that cover the support of $\bar{p}(\mathbf{x})$ by sampling from the prior is prohibitively expensive for rare events. In SYMPAIS, we address this by using an Interval Constraint Propagator (ICP) to generate diverse initial solutions. These initial solutions are then re-sampled by an importance-reweighting procedure that reduces the number of warm-up iterations required for the MCMC samples to converge to the typical set. In addition, we use the contracted interval bounds from the ICP to truncate the modes of the MoG proposal distribution.

These optimizations significantly improve SYMPAIS’s sample efficiency, making it a practical algorithm for high-dimensional applications without sacrificing the convergence guarantees established in the original PI-MAIS paper.


We provide an open-source implementation of SYMPAIS. The open-source code includes a general implementation of PI-MAIS as well as code specializing PI-MAIS to perform probabilistic program analysis as done in SYMPAIS. We also provide notebooks for you to interactively explore SYMPAIS.

The reference Python implementation uses Google’s JAX library. JAX’s performance and support for SPMD parallelism allow us to implement a parallel-chain MCMC sampler that runs hundreds of Markov chains easily on a laptop CPU.
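The parallel-chain pattern can be sketched with plain NumPy vectorization (the released implementation uses JAX, but the idea is the same): every array operation below advances all chains at once. The standard-normal target is a stand-in for $\bar{p}(\mathbf{x})$.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log-density of the target; a standard normal here,
    # standing in for the truncated distribution p_bar(x).
    return -0.5 * x ** 2

n_chains, n_steps, step = 256, 2000, 1.0
x = rng.normal(size=n_chains)  # one current state per chain

# Random-walk Metropolis, vectorized over chains: each array operation
# advances all 256 chains in one step, the same SPMD pattern that
# jax.vmap exploits on accelerators.
samples = np.empty((n_steps, n_chains))
for t in range(n_steps):
    proposal = x + step * rng.standard_normal(n_chains)
    log_accept = log_target(proposal) - log_target(x)
    accept = np.log(rng.uniform(size=n_chains)) < log_accept
    x = np.where(accept, proposal, x)
    samples[t] = x
```

Because the accept/reject step is data-parallel across chains, swapping NumPy for JAX arrays and jit-compiling the loop body gives the hundreds-of-chains throughput mentioned above.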

We evaluated SYMPAIS on a range of benchmark tasks, including artificial examples such as high-dimensional spheres and non-convex constraints such as the torus. We also report performance on problems from the volComp benchmark, as well as on constraints arising from quantifying the adversarial robustness of ACAS Xu neural networks. Please refer to our paper for a complete discussion of the evaluation and results.


Performance of SYMPAIS compared to baselines on the sphere task.
In this task we consider the $d$-dimensional sphere: $C := \left\lVert\mathbf{x} - \mathbf{c}\right\rVert^2 \le 1$, where $\mathbf{x} \in [-10, 10]^d$ is the input domain and $\mathbf{c} \in \mathbb{R}^d$ is the center of the sphere. We use $p(\mathbf{x}) = \mathcal{N}(0, I)$, i.e., an uncorrelated Gaussian, as the input distribution and set $\mathbf{c} = \mathbf{1}$. Despite its simplicity, this problem illustrates the challenges faced by direct Monte Carlo methods as well as qCoral in high-dimensional problems where $C$'s satisfaction probability is small. We evaluate all methods under the same sampling budget and compare them by computing the Relative Absolute Error (RAE) with respect to an accurate estimate of the constraint probability. As can be seen from the figures, as $d$ increases, the probability of the event decreases, which makes estimation by DMC increasingly challenging. Moreover, the increase in $d$ also leads to a coarser paving of $d$-dimensional boxes, which reduces the effectiveness of variance reduction via stratified sampling.

The RAE results are illustrated in the figure above. As expected, DMC achieves the worst performance throughout all tests. For low-dimensional problems ($d \leq 4$), qCoral is the most efficient, while its performance deteriorates significantly as $d$ increases and RealPaver fails to prune out large portions of the domain that contain no solutions. SYMPAIS’s performance is comparable to qCoral in low dimensions, but up to one order of magnitude more accurate when the dimensionality grows ($d \geq 8$).

The figure on the right shows the convergence of the RAE for the different methods over sample size for $d=8$. SYMPAIS achieves the final RAE of DMC with $<5\%$ of the sampling budget and the final RAE of qCoral with $<10\%$. The improvement in sample efficiency becomes more significant for $d > 8$.


Performance of SYMPAIS compared to baselines on the torus task.
We also consider the torus, a non-convex constraint which we used in the paper as a running example. We evaluate the different methods for both independent and correlated inputs. qCoral cannot be applied to the correlated inputs since their CDF is analytically intractable. In both cases, SYMPAIS achieves an order of magnitude lower RAE than the baselines.


We introduce SYMbolic Parallel Adaptive Importance Sampling (SYMPAIS), a general algorithm for probabilistic program analysis that lifts some of the assumptions of prior work and is more scalable for higher-dimensional and non-linear problems. We believe that our work provides an interesting combination of symbolic execution with statistical inference for practical applications. While statistical methods provide better scalability compared to symbolic methods, incorporating information from a symbolic execution method can further improve sample efficiency. We look forward to exploring more interesting hybrid approaches that can combine the best of both worlds.


  1. Mateus Borges, Antonio Filieri, Marcelo D’Amorim, and Corina S. Păsăreanu. 2015. Iterative Distribution-Aware Sampling for Probabilistic Symbolic Execution. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering. ACM, 866–877. ↩︎

  2. Art B. Owen. 2013. Monte Carlo theory, methods and examples. ↩︎

  3. J. Beck and K. Zuev. Rare Event Simulation. arXiv. ↩︎

  4. L. Martino, V. Elvira, D. Luengo, and J. Corander. 2017. Layered Adaptive Importance Sampling. Statistics and Computing 27, 3 (May 2017), 599–623. ↩︎

Yicheng Luo
PhD Student