We consider an example for the application of the fuzzy extension principle to fuzzy input uncertainties to obtain a fuzzy output interval using three different methods: optimization of the objective function, optimization of a piecewise linear sparse grid surrogate, and optimization of a surrogate with B-splines on sparse grids.
This example is available in C++ and in Python.
First, we include all the necessary headers, including those of the sgpp::base and sgpp::optimization modules.
#include <sgpp_base.hpp>
#include <sgpp_optimization.hpp>

#include <iostream>
#include <vector>
Now, we can start with the main function.
int main() {
  // Disable SG++'s status output and seed the random number generator
  // (the seed value is illustrative).
  sgpp::base::Printer::getInstance().setVerbosity(-1);
  sgpp::base::RandomNumberGenerator::getInstance().setSeed(42);
Here, we define some parameters: the B-spline degree, the boundary parameter of the grids, the regular sparse grid level, the number of α-segments for the fuzzy extension principle, and the objective function with its dimensionality.
const size_t p = 3;                        // B-spline degree
const size_t b = 1;                        // boundary parameter of the grids
const size_t n = 5;                        // regular sparse grid level
const size_t numberOfAlphaSegments = 100;  // α-segments for the extension principle
// Branin01 objective function
sgpp::optimization::test_problems::Branin01 problem;
sgpp::base::ScalarFunction& f = problem.getObjectiveFunction();
const size_t d = f.getNumberOfParameters();
We use regular sparse grids for the sparse grid surrogates.
std::unique_ptr<sgpp::base::Grid> gridBSpline(
    sgpp::base::Grid::createBsplineBoundaryGrid(d, p, b));
std::unique_ptr<sgpp::base::Grid> gridLinear(
    sgpp::base::Grid::createLinearBoundaryGrid(d, b));
std::unique_ptr<sgpp::base::InterpolantScalarFunction> fInterpLinear;

std::cout << "Constructing the sparse grids...\n";

gridBSpline->getGenerator().regular(n);
gridLinear->getGenerator().regular(n);

// Evaluate the objective function at all grid points.
sgpp::base::GridStorage& gridStorage = gridBSpline->getStorage();
const size_t N = gridBSpline->getSize();
sgpp::base::DataVector functionValues(N);
sgpp::base::DataVector x(d);

for (size_t k = 0; k < N; k++) {
  gridStorage[k].getStandardCoordinates(x);
  functionValues[k] = f.eval(x);
}
For the hierarchization of the B-spline surrogate, we solve the corresponding system of linear equations and create the interpolant and its gradient.
std::cout << "Hierarchizing (B-spline coefficients)...\n";

std::unique_ptr<sgpp::base::InterpolantScalarFunction> fInterpBSpline;
std::unique_ptr<sgpp::base::InterpolantScalarFunctionGradient> fInterpBSplineGradient;
std::unique_ptr<sgpp::base::InterpolantScalarFunctionHessian> fInterpBSplineHessian;

{
  sgpp::base::HierarchisationSLE hierSLE(*gridBSpline);
  sgpp::base::sle_solver::Auto sleSolver;
  sgpp::base::DataVector surpluses(N);

  if (!sleSolver.solve(hierSLE, functionValues, surpluses)) {
    std::cout << "Solving failed, exiting.\n";
    return 1;
  }

  fInterpBSpline.reset(
      new sgpp::base::InterpolantScalarFunction(*gridBSpline, surpluses));
  fInterpBSplineGradient.reset(
      new sgpp::base::InterpolantScalarFunctionGradient(*gridBSpline, surpluses));
}
The piecewise linear interpolant is only continuous, but not continuously differentiable. Therefore, we do not use the discontinuous gradient for gradient-based optimization, but only use gradient-free optimization methods.
std::cout << "Hierarchizing (linear coefficients)...\n";

{
  sgpp::base::HierarchisationSLE hierSLE(*gridLinear);
  sgpp::base::sle_solver::Auto sleSolver;
  sgpp::base::DataVector surpluses(gridLinear->getSize());

  if (!sleSolver.solve(hierSLE, functionValues, surpluses)) {
    std::cout << "Solving failed, exiting.\n";
    return 1;
  }

  fInterpLinear.reset(
      new sgpp::base::InterpolantScalarFunction(*gridLinear, surpluses));
}
std::cout << "\n";
Now we define the fuzzy input intervals.
// A triangular fuzzy interval and a quasi-Gaussian fuzzy number
// (the concrete parameter values are illustrative).
sgpp::optimization::TriangularFuzzyInterval x0Fuzzy(0.25, 0.375, 0.125, 0.25);
sgpp::optimization::QuasiGaussianFuzzyNumber x1Fuzzy(0.5, 0.125, 3.0);
std::vector<sgpp::optimization::FuzzyInterval*> xFuzzy{&x0Fuzzy, &x1Fuzzy};
Finally, we can apply the fuzzy extension principle. First, we apply it directly to the objective function to obtain a reference solution (a fuzzy interval). Note that in real-world scenarios, the objective function is usually too expensive to be used directly.
// Gradient-free multi-start optimizer on the exact objective
// (the budget parameters are illustrative).
sgpp::optimization::optimizer::MultiStart optimizerExact(f, 10000, 100);
sgpp::optimization::FuzzyExtensionPrincipleViaOptimization extensionPrincipleExact(
    optimizerExact, numberOfAlphaSegments);
std::unique_ptr<sgpp::optimization::FuzzyInterval> yFuzzyExact(
    extensionPrincipleExact.apply(xFuzzy));
std::cout << "L2 norm of exact solution: " << yFuzzyExact->computeL2Norm() << "\n";
For the piecewise linear and for the B-spline solution, we compute the relative \(L^2\) error to the exact solution computed previously.
// Gradient-free multi-start optimizer on the piecewise linear surrogate
// (the budget parameters are illustrative).
sgpp::optimization::optimizer::MultiStart optimizerLinear(*fInterpLinear, 10000, 100);
sgpp::optimization::FuzzyExtensionPrincipleViaOptimization extensionPrincipleLinear(
    optimizerLinear, numberOfAlphaSegments);
std::unique_ptr<sgpp::optimization::FuzzyInterval> yFuzzyLinear(
    extensionPrincipleLinear.apply(xFuzzy));
std::cout << "Relative L2 error of piecewise linear solution: "
          << yFuzzyExact->computeRelativeL2Error(*yFuzzyLinear) << "\n";
For B-splines, we use gradient descent as our optimization method.
// Gradient-based optimizer on the B-spline surrogate, using the
// interpolant gradient created above.
sgpp::optimization::optimizer::AdaptiveGradientDescent optimizerBSpline(
    *fInterpBSpline, *fInterpBSplineGradient);
sgpp::optimization::FuzzyExtensionPrincipleViaOptimization extensionPrincipleBSpline(
    optimizerBSpline, numberOfAlphaSegments);
std::unique_ptr<sgpp::optimization::FuzzyInterval> yFuzzyBSpline(
    extensionPrincipleBSpline.apply(xFuzzy));
std::cout << "Relative L2 error of B-spline solution: "
          << yFuzzyExact->computeRelativeL2Error(*yFuzzyBSpline) << "\n";
return 0;
}
The example outputs something similar to the following:
Constructing the sparse grids...
Hierarchizing (B-spline coefficients)...
Hierarchizing (linear coefficients)...
L2 norm of exact solution: 0.500751
Relative L2 error of piecewise linear solution: 0.000161033
Relative L2 error of B-spline solution: 7.55287e-08
The exact output is not deterministic (despite setting the seed of the random number generator at the start of main), since the MultiStart optimizer calls are executed in parallel for the different confidence intervals.
We see that the relative \(L^2\) error is over three orders of magnitude smaller for the B-spline solution compared to the piecewise linear solution. This is due to the higher approximation quality of the B-splines and to the gradient-based optimization.