
Polynomial chaos expansions
Each of the quantities and auxiliary variables is considered to be random with a certain parametrized distribution, except for $\sigma^2$, which is chosen beforehand (see below), and $\nu = 0$. More precisely, the assumptions on parameters and distributions are as follows. The model evaluations $\mathcal{Y}$ are considered as i.i.d. realizations of a random variable parametrized by coefficients $y$ and a noise variance parameter $\sigma^2$: $p(\mathcal{Y} \mid y, \sigma^2) = \mathcal{N}(\mathcal{Y} \mid \Psi y, \sigma^2 \mathbb{1}_N)$ (likelihood).
The noise variance is a given (fixed) hyperparameter of the algorithm, while the coefficients $y$ are again random variables, each normally distributed with its own variance $\gamma_i$: $p(y_i \mid \gamma_i) = \mathcal{N}(y_i \mid 0, \gamma_i)$. The $\gamma_i$'s are i.i.d. following an exponential distribution with shared parameter $\lambda$: $p(\gamma_i \mid \lambda) = \mathrm{Exp}(\gamma_i \mid \frac{\lambda}{2})$. Finally, $\lambda$ is the last layer of the hierarchical framework, drawn from a Gamma distribution $p(\lambda \mid \nu) = \Gamma(\lambda \mid \frac{\nu}{2}, \frac{\nu}{2})$ with $\nu \to 0$ (resulting in an uninformative, improper prior $p(\lambda) \propto \frac{1}{|\lambda|}$).
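As a sketch, this generative hierarchy can be simulated top-down. All dimensions, the fixed value of $\lambda$ (whose hyperprior is improper, so a concrete value must be assumed), the noise level, and the regression matrix below are arbitrary illustrative choices, not values prescribed by the method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and hyperparameters for illustration only.
P, N = 10, 25          # number of basis terms, number of model evaluations
sigma2 = 0.01          # noise variance (fixed hyperparameter)
lam = 2.0              # an assumed value of lambda (its hyperprior is improper)

# gamma_i ~ Exp(lambda/2): exponential with rate lambda/2, i.e. mean 2/lambda
gamma = rng.exponential(scale=2.0 / lam, size=P)

# y_i | gamma_i ~ N(0, gamma_i): each coefficient has its own variance
y = rng.normal(0.0, np.sqrt(gamma))

# Y | y, sigma2 ~ N(Psi y, sigma2 * I), for some regression matrix Psi
Psi = rng.normal(size=(N, P))
Y = Psi @ y + rng.normal(0.0, np.sqrt(sigma2), size=N)
```

Small $\gamma_i$ draws concentrate the corresponding $y_i$ near zero, which is how the exponential layer induces sparsity in the coefficients.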
The goal of the algorithm is to compute the posterior distribution $p(y, \gamma, \lambda \mid \mathcal{Y})$ for a set of given model evaluations $\mathcal{Y}$. From this, the MAP estimate for $y$ can be determined. However, the prior distributions of BCS, chosen to encourage sparsity, are not conjugate, so an analytical solution of the problem is not feasible. Instead, a fast approximate algorithm is employed, which iteratively updates the different auxiliary quantities as well as the coefficient vector estimate $y$.
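For intuition, note that once $\gamma$ and $\sigma^2$ are held fixed, the conditional posterior of $y$ is Gaussian, and its mean (equal to the conditional MAP) follows the standard Bayesian linear-regression formulas. The sketch below uses arbitrary illustrative values throughout and shows only this conditional step, not the full iterative scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 30, 8
sigma2 = 0.05

# Synthetic data with sparse true coefficients (illustrative only).
Psi = rng.normal(size=(N, P))
y_true = np.zeros(P)
y_true[:2] = [1.5, -0.7]
Y = Psi @ y_true + rng.normal(0.0, np.sqrt(sigma2), size=N)

gamma = np.full(P, 1.0)   # assumed (fixed) coefficient variances

# Conditional posterior of y given gamma and sigma2 is Gaussian:
#   Sigma = (Psi^T Psi / sigma2 + diag(1/gamma))^{-1},  mu = Sigma Psi^T Y / sigma2
Sigma = np.linalg.inv(Psi.T @ Psi / sigma2 + np.diag(1.0 / gamma))
mu = Sigma @ Psi.T @ Y / sigma2   # conditional MAP = posterior mean
```

The difficulty the approximate algorithm addresses is that $\gamma$ and $\lambda$ are themselves unknown, so this conditional update must be interleaved with updates of the variances.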
It has been found that BCS performs especially well for high-dimensional problems and in
cases where only a small set of data is available (?).
[Figure: hierarchical diagram linking $\nu \to \lambda \to \gamma \to y \to \mathcal{Y}$ (with fixed $\sigma^2$), annotated with the distributions $p(\lambda \mid \nu) = \Gamma(\lambda \mid \frac{\nu}{2}, \frac{\nu}{2})$, $p(\gamma_i \mid \lambda) = \frac{\lambda}{2} \exp(-\frac{\lambda}{2} \gamma_i)$, $p(y \mid \gamma) = \prod_i \mathcal{N}(y_i \mid 0, \gamma_i)$, and $p(\mathcal{Y} \mid y, \sigma^2) = \mathcal{N}(\mathcal{Y} \mid \Psi y, \sigma^2 \mathbb{1})$.]
Figure 4: Illustration of the BCS setup by ?. All quantities are random variables with distributions parametrized by hyperparameters. The hierarchical structure and the choice of priors and hyperpriors result in a sparsity-encouraging posterior distribution for the coefficients $y$.
1.5.6.1 The BCS algorithm
The idea of the BCS algorithm as proposed by ? (called Fast Laplace) is to maintain a vector $\gamma$ of coefficient variances. If $\gamma_i = 0$, the associated coefficient $y_i$ must be zero as well, and the corresponding term in the basis is inactive. Otherwise, if $\gamma_i > 0$, the term is active. In each iteration, one regressor is chosen and either added to the basis ($\gamma_i$ is set to a value $> 0$), deleted from the basis ($\gamma_i = 0$), or reassessed by means of a re-estimation of its variance $\gamma_i$.
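The add/delete/re-estimate structure of such an iteration can be illustrated with a deliberately simplified greedy active-set loop. The selection rule (residual correlation) and the least-squares refit below are stand-ins for the actual Fast Laplace marginal-likelihood updates; this is a structural sketch, not the algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 40, 12
Psi = rng.normal(size=(N, P))
y_true = np.zeros(P)
y_true[[1, 4]] = [2.0, -1.0]           # sparse true coefficients
Y = Psi @ y_true + rng.normal(0.0, 0.05, size=N)

active = []                             # indices with gamma_i > 0 (active terms)
y_hat = np.zeros(P)
for _ in range(5):
    resid = Y - Psi @ y_hat
    # "add": pick the inactive regressor most correlated with the residual
    scores = np.abs(Psi.T @ resid)
    scores[active] = -np.inf
    active.append(int(np.argmax(scores)))
    # "re-estimate": refit coefficients on the active set (least squares
    # stands in for the variance re-estimation of the actual algorithm)
    coef, *_ = np.linalg.lstsq(Psi[:, active], Y, rcond=None)
    y_hat[:] = 0.0
    y_hat[active] = coef
    # "delete": drop regressors whose coefficient has collapsed to ~0
    active = [j for j in active if abs(y_hat[j]) > 1e-8]
```

In the real algorithm each candidate action is scored by its effect on the (approximate) marginal likelihood, so additions, deletions, and re-estimations compete against each other in every iteration.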
UQ[PY]LAB-V1.0-104 - 17 -