PC-KRIGING METAMODELING: MULTIPLE INPUT DIMENSIONS¶
This example illustrates different aspects of polynomial chaos-Kriging (PC-Kriging) metamodel construction using the three-dimensional Ishigami function as the full computational model.
Package imports¶
from uqpylab import sessions
import numpy as np
import matplotlib.pyplot as plt
Start a remote UQCloud session¶
# Start the session
mySession = sessions.cloud()
# (Optional) Get a convenient handle to the command line interface
uq = mySession.cli
# Reset the session
mySession.reset()
Processing .
.
done!
uqpylab.sessions :: INFO :: This is UQ[py]Lab, version 1.00, running on https://uqcloud.ethz.ch.
UQ[py]Lab is free software, published under the open source BSD 3-clause license.
To request special permissions, please contact:
- Stefano Marelli (marelli@ibk.baug.ethz.ch).
A new session (51525ba01676490f8d997d9820319786) started.
uqpylab.sessions :: INFO :: Reset successful.
Set the random seed for reproducibility¶
uq.rng(1,'twister');
Computational model¶
The Ishigami function is defined as:
$$Y(\mathbf{x}) = \sin(x_1) + 7 \sin^2(x_2) + 0.1 x_3^4 \sin(x_1)$$
where $x_i \in [-\pi, \pi], \; i = 1, 2, 3$.
This computation is carried out by the model function defined in ishigami.py, supplied with UQCloud. The function evaluates the inputs gathered in the $N \times M$ matrix X, where $N$ and $M$ are the numbers of realizations and input variables, respectively.
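For reference, a local implementation consistent with this definition might look as follows (a minimal sketch; the actual ishigami.py shipped with UQCloud may differ in its details):

import numpy as np

def model(X):
    """Ishigami function evaluated row-wise on an N-by-M array X (here M = 3)."""
    X = np.atleast_2d(X)  # accept both a single realization and a full sample
    return (np.sin(X[:, 0])
            + 7.0 * np.sin(X[:, 1])**2
            + 0.1 * X[:, 2]**4 * np.sin(X[:, 0]))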
Create a MODEL object from ishigami.py:
ModelOpts = {
'Type': 'Model',
'ModelFun': 'ishigami.model'
}
myModel = uq.createModel(ModelOpts)
Probabilistic input model¶
The probabilistic input model consists of three independent uniform random variables:
$$ X_i \sim \mathcal{U}(-\pi,\pi), \; i = 1, 2, 3$$
Specify these marginals:
InputOpts = {'Marginals': uq.Marginals(3, 'Uniform', Parameters=[-np.pi, np.pi])}
Create an INPUT object based on the specified marginals:
myInput = uq.createInput(InputOpts)
Experimental design¶
To compare different PC-Kriging metamodeling techniques, create a single experimental design of size $80$ using Latin hypercube sampling (LHS):
X = uq.getSample(myInput, 80, 'LHS')
Evaluate the computational model at the experimental design points:
Y = uq.evalModel(myModel, X)
PC-Kriging metamodels¶
Two different modes of the PC-Kriging metamodel are considered below: the Sequential mode and the Optimal mode. In the sequential mode, the polynomial trend is determined first and a Kriging model is then calibrated with this fixed trend; in the optimal mode, the selected polynomials are added to the trend one at a time and the combination with the smallest leave-one-out error is retained.
Sequential PC-Kriging metamodel¶
Select PCK as the metamodeling tool in UQCloud and sequential as its mode:
SeqPCKOpts = {
'Type': 'Metamodel',
'MetaType': 'PCK',
'Mode': 'sequential'
}
Provide the experimental design generated before:
SeqPCKOpts['ExpDesign'] = {
'X': X.tolist(),
'Y': Y.tolist()
}
Specify the options for the polynomial trend of the PC-Kriging metamodel; the trend is determined by LARS with an adaptive maximum polynomial degree between $3$ and $15$:
SeqPCKOpts['PCE'] = {
'Method': 'LARS',
'Degree': np.arange(3,16).tolist()
}
Use the Gaussian correlation family for the underlying Gaussian process model in the PC-Kriging metamodel:
SeqPCKOpts['Kriging'] = {
'Corr': {
'Family': 'Gaussian'
}
}
Create the Sequential PC-Kriging metamodel:
mySeqPCK = uq.createModel(SeqPCKOpts)
Processing .
done!
Optimal PC-Kriging metamodel¶
Use all the previous metamodeling options and select optimal PC-Kriging as the mode:
OptPCKOpts = SeqPCKOpts.copy()  # shallow copy, so the Sequential PC-Kriging options remain unchanged
OptPCKOpts['Mode'] = 'optimal'
Create the Optimal PC-Kriging metamodel:
myOptPCK = uq.createModel(OptPCKOpts)
Processing .
.
.
.
.
.
done!
Validation¶
The accuracy of the two metamodels is compared using a validation set consisting of a large Monte Carlo sample:
Xval = uq.getSample(myInput, 1e5)
Yval = uq.evalModel(myModel, Xval)
Xval[:5,:]
array([[-2.29527245, -2.99815866, -2.25637933],
       [ 0.97843287, -0.10009995, -2.96401361],
       [-1.26749897, -0.91870456,  0.35749473],
       [ 2.19588405, -1.94203018,  2.83170417],
       [ 1.36506581,  1.43574385, -1.14310414]])
The accuracy of the metamodels is compared in terms of the relative generalization error, which is defined as:
$$\widehat{err}_{gen} = \frac{\frac{1}{N}\sum_i \left( y^{(i)}-\widehat{y}^{(i)} \right)^2}{\text{Var}\left[ Y \right]}$$
where $y^{(i)}$ and $\widehat{y}^{(i)}$ are the true response value and the corresponding metamodel prediction value evaluated at the validation set points, respectively; and $\text{Var}$ denotes the variance.
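For readability, this error can also be written as a small helper function (a hypothetical convenience shown here for illustration; it is equivalent to the direct computation carried out further below):

def rel_gen_error(y_true, y_pred):
    # Mean squared prediction error normalized by the (unbiased) sample variance of the true response
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    return np.mean((y_true - y_pred)**2) / np.var(y_true, ddof=1)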
Hence, predict the response values of the validation set for each metamodel:
YvalSeqPCK = uq.evalModel(mySeqPCK, Xval)
YvalOptPCK = uq.evalModel(myOptPCK, Xval)
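As a side remark, Kriging-based metamodels also provide a local prediction variance. Assuming the PCK metamodel supports the same nargout mechanism as the Kriging metamodels in UQ[py]Lab (an assumption, not verified here), it could be retrieved alongside the mean prediction:

# Mean prediction and prediction variance of the Sequential PC-Kriging metamodel
# (assumes nargout=2 is supported for PCK, as it is for Kriging metamodels)
YvalSeqPCK_mu, YvalSeqPCK_var = uq.evalModel(mySeqPCK, Xval, nargout=2)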
Plot the true vs. predicted values:
f, (ax1, ax2) = plt.subplots(1, 2)
plt.subplots_adjust(left=0.1,bottom=0.1,right=0.9,top=0.9,wspace=0.4)
ax1.scatter(Yval, YvalSeqPCK, marker = '+')
ax1.plot([np.min(Yval), np.max(Yval)], [np.min(Yval), np.max(Yval)], color="black")
ax1.set_title('Sequential PCK', fontsize=14)
ax1.set_xlabel('$Y$')
ax1.set_ylabel('$Y_{PCK}$')
ax1.set_aspect('equal', 'box')
ax1.grid(True)
ax2.scatter(Yval, YvalOptPCK, marker = '+')
ax2.plot([np.min(Yval), np.max(Yval)], [np.min(Yval), np.max(Yval)], color="black")
ax2.set_title('Optimal PCK', fontsize=14)
ax2.set_xlabel('$Y$')
ax2.set_ylabel('$Y_{PCK}$')
ax2.set_aspect('equal', 'box')
ax2.grid(True)
Finally, compute the relative generalization error and print the results:
errGenSeqPCK = np.mean(np.power((Yval-YvalSeqPCK),2))/np.var(Yval,ddof=1)
errGenOptPCK = np.mean(np.power((Yval-YvalOptPCK),2))/np.var(Yval,ddof=1)
print('Sequential PC-Kriging model: {:5.4e}'.format(errGenSeqPCK))
print('Optimal PC-Kriging model: {:5.4e}'.format(errGenOptPCK))
Sequential PC-Kriging model: 4.5982e-05
Optimal PC-Kriging model: 1.6129e-04
Terminate the remote UQCloud session¶
mySession.quit()
uqpylab.sessions :: INFO :: Session 51525ba01676490f8d997d9820319786 terminated.
True