Low-rank tensor methods for PDEs with
uncertain coefficients and
Bayesian Update surrogate
Alexander Litvinenko
Center for Uncertainty
Quantification
http://sri-uq.kaust.edu.sa/
Extreme Computing Research Center, KAUST
Alexander Litvinenko: Low-rank tensor methods for PDEs with uncertain coefficients
The structure of the talk
Part I (Stochastic forward problem):
1. Motivation
2. Elliptic PDE with uncertain coefficients
3. Discretization and low-rank tensor approximations
4. Tensor calculus to compute QoI
Part II (Bayesian update):
1. Bayesian update surrogate
2. Examples
KAUST
I gained very rich collaboration experience as a co-organizer of:
3 UQ workshops,
2 Scalable Hierarchical Algorithms for eXtreme Computing
(SHAXC) workshops
1 HPC Conference (www.hpcsaudi.org, 2017)
My interests and collaborations
Motivation to do Uncertainty Quantification (UQ)
Motivation: there is an urgent need to quantify and reduce the
uncertainty in output quantities of computer simulations within
complex (multiscale-multiphysics) applications.
Typical challenges: classical sampling methods are often very
inefficient, whereas straightforward functional representations
are subject to the well-known Curse of Dimensionality.
My goal is systematic, mathematically founded, development of
UQ methods and low-rank algorithms relevant for applications.
UQ and its relevance
Nowadays computational predictions are used in critical
engineering decisions and thanks to modern computers we are
able to simulate very complex phenomena. But, how reliable
are these predictions? Can they be trusted?
Example: Saudi Aramco currently has a simulator,
GigaPOWERS, which runs with 9 billion cells. How sensitive
are the simulation results with respect to the unknown reservoir
properties?
Part I: Stochastic forward problem
Part I: Stochastic Galerkin method to solve
elliptic PDE with uncertain coefficients
PDE with uncertain coefficient and RHS
Consider
−div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R²,
u = 0 on ∂G,    (1)
where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive, one usually writes κ(x, ω) = e^{γ(x,ω)}.
For well-posedness see [Sarkis 09, Gittelson 10, H. J. Starkloff 11, Ullmann 10].
Further we assume that the covariance cov_κ(x, y) is given.
My previous work
After applying the stochastic Galerkin method, obtain:
Ku = f, where all ingredients are represented in a tensor format
Compute max{u}, var(u), level sets of u, sign(u)
[1] Efficient Analysis of High Dimensional Data in Tensor Formats,
Espig, Hackbusch, A.L., Matthies and Zander, 2012.
Investigate which ingredients influence the tensor rank of K
[2] Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats,
Wähnert, Espig, Hackbusch, A.L., Matthies, 2013.
Approximate κ(x, ω), stochastic Galerkin operator K in Tensor
Train (TT) format, solve for u, postprocessing
[3] Polynomial Chaos Expansion of random coefficients and the solution of stochastic
partial differential equations in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
Typical quantities of interest
Keeping all input and intermediate data in a tensor
representation one wants to perform different tasks:
evaluation for specific parameters (ω1, . . . , ωM),
finding maxima and minima,
finding ‘level sets’ (needed for histogram and probability
density).
Example of a level set: all entries of a high-dimensional tensor lying in the interval [0.7, 0.8].
Canonical and Tucker tensor formats
Definition and Examples of tensors
Canonical and Tucker tensor formats
[Pictures are taken from B. Khoromskij and A. Auer lecture course]
Storage: O(n^d) → O(dRn) and O(R^d + dRn).
Definition of tensor of order d
A tensor of order d is a multidimensional array over a d-tuple index set I = I₁ × · · · × I_d,
A = [a_{i₁...i_d} : i ∈ I] ∈ R^I,   I_ℓ = {1, ..., n_ℓ}, ℓ = 1, ..., d.
A is an element of the linear space
V_n = ⊗_{ℓ=1}^d V_ℓ,   V_ℓ = R^{I_ℓ},
equipped with the Euclidean scalar product ⟨·, ·⟩ : V_n × V_n → R, defined as
⟨A, B⟩ := Σ_{(i₁...i_d)∈I} a_{i₁...i_d} b_{i₁...i_d},   for A, B ∈ V_n.
Examples of rank-1 and rank-2 tensors
Rank-1:
f(x₁, ..., x_d) = exp(f₁(x₁) + ... + f_d(x_d)) = ∏_{j=1}^d exp(f_j(x_j)).
Rank-2: f(x₁, ..., x_d) = sin(Σ_{j=1}^d x_j), since
2i · sin(Σ_{j=1}^d x_j) = e^{i Σ_{j=1}^d x_j} − e^{−i Σ_{j=1}^d x_j}.
The rank-d function f(x₁, ..., x_d) = x₁ + x₂ + ... + x_d can be approximated by a rank-2 tensor with any prescribed accuracy:
f ≈ (1/ε) ∏_{j=1}^d (1 + ε x_j) − (1/ε) ∏_{j=1}^d 1 + O(ε),   as ε → 0.
Tensor and Matrices
Rank-1 tensor:
A = u₁ ⊗ u₂ ⊗ ... ⊗ u_d =: ⊗_{μ=1}^d u_μ,   A_{i₁,...,i_d} = (u₁)_{i₁} · ... · (u_d)_{i_d}.
A rank-1 tensor A = u ⊗ v corresponds to the rank-1 matrix A = u vᵀ (or A = v uᵀ), u ∈ Rⁿ, v ∈ Rᵐ.
A rank-k tensor A = Σ_{i=1}^k u_i ⊗ v_i corresponds to the rank-k matrix A = Σ_{i=1}^k u_i v_iᵀ.
The Kronecker product of n × n and m × m matrices A and B is the block matrix A ⊗ B ∈ R^{nm×nm} whose (i, j)-th block is [A_{ij} B].
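The tensor/matrix correspondences above can be checked directly with numpy; all sizes and values below are illustrative:

```python
import numpy as np

# Rank-1 tensor A = u ⊗ v corresponds to the rank-1 matrix u v^T.
u = np.array([1.0, 2.0])
v = np.array([3.0, 4.0, 5.0])
A_tensor = np.tensordot(u, v, axes=0)   # outer product, shape (2, 3)
A_matrix = np.outer(u, v)
assert np.allclose(A_tensor, A_matrix)

# Kronecker product of an n x n and an m x m matrix: an nm x nm block
# matrix whose (i, j)-th block is A[i, j] * B.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.eye(3)
K = np.kron(A, B)
assert K.shape == (6, 6)
assert np.allclose(K[0:3, 3:6], A[0, 1] * B)   # the (0, 1) block is A_01 * B
```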
Discretization of elliptic PDE
Now let us discretize our diffusion equation with
uncertain coefficients
Karhunen–Loève and Polynomial Chaos Expansions
Apply both:
Karhunen–Loève Expansion (KLE):
κ(x, ω) = κ₀(x) + Σ_{j=1}^∞ κ_j g_j(x) ξ_j(θ(ω)), where θ = θ(ω) = (θ₁(ω), θ₂(ω), ...),
ξ_j(θ) = (1/κ_j) ∫_G (κ(x, ω) − κ₀(x)) g_j(x) dx.
Polynomial Chaos Expansion (PCE):
κ(x, ω) = Σ_α κ^(α)(x) H_α(θ); compute ξ_j(θ) = Σ_{α∈J} ξ_j^(α) H_α(θ), where
ξ_j^(α) = (1/κ_j) ∫_G κ^(α)(x) g_j(x) dx.
Further compute ξ_j^(α) ≈ Σ_{ℓ=1}^s (ξ_ℓ)_j ∏_{k=1}^∞ (ξ_{ℓ,k})_{α_k}.
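A minimal sketch of a truncated KLE on a uniform 1-D grid, via the eigendecomposition of a Gaussian covariance matrix; the grid size, correlation length and number of terms are illustrative, not taken from the talk:

```python
import numpy as np

# Truncated KLE: eigenpairs of the covariance matrix give the spatial
# modes g_j; the M largest eigenvalues carry most of the variance.
n, M = 100, 5                       # grid points, number of KLE terms
x = np.linspace(0.0, 1.0, n)
cov_len = 0.1
C = np.exp(-((x[:, None] - x[None, :]) / cov_len) ** 2)  # Gaussian covariance
lam, g = np.linalg.eigh(C)
idx = np.argsort(lam)[::-1][:M]     # pick the M largest eigenvalues
lam, g = lam[idx], g[:, idx]

theta = np.random.default_rng(0).standard_normal(M)      # iid N(0, 1)
gamma = g @ (np.sqrt(lam) * theta)                       # truncated random field
assert gamma.shape == (n,)
assert lam[0] >= lam[-1] > 0        # leading eigenvalues positive, decreasing
```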
Final discretized stochastic PDE
Ku = f, where
K := Σ_{ℓ=1}^s K_ℓ ⊗ ⊗_{μ=1}^M Δ_{ℓμ},   K_ℓ ∈ R^{N×N}, Δ_{ℓμ} ∈ R^{R_μ×R_μ},
u := Σ_{j=1}^r u_j ⊗ ⊗_{μ=1}^M u_{jμ},   u_j ∈ R^N, u_{jμ} ∈ R^{R_μ},
f := Σ_{k=1}^R f_k ⊗ ⊗_{μ=1}^M g_{kμ},   f_k ∈ R^N, g_{kμ} ∈ R^{R_μ}.
(Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2011)
Examples of stochastic Galerkin matrices:
Computing QoI in low-rank tensor format
Now, we consider how to
find maxima in a high-dimensional tensor
Maximum norm and corresponding index
Let u = Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} ∈ T_r; compute
‖u‖_∞ := max_{i:=(i₁,...,i_d)∈I} |u_i| = max_{i:=(i₁,...,i_d)∈I} | Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i_μ} |.
Computing ‖u‖_∞ is equivalent to the following eigenvalue problem.
Let i* := (i*₁, ..., i*_d) ∈ I, #I = ∏_{μ=1}^d n_μ. Then
‖u‖_∞ = |u_{i*}| = | Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i*_μ} |   and   e^{(i*)} := ⊗_{μ=1}^d e_{i*_μ},
where e_{i*_μ} ∈ R^{n_μ} is the i*_μ-th canonical vector (μ ∈ N_{≤d}).
Then (with ⊙ the entrywise, i.e. Hadamard, product)
u ⊙ e^{(i*)} = ( Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} ) ⊙ ( ⊗_{μ=1}^d e_{i*_μ} )
= Σ_{j=1}^r ⊗_{μ=1}^d ( u_{jμ} ⊙ e_{i*_μ} )
= Σ_{j=1}^r ⊗_{μ=1}^d (u_{jμ})_{i*_μ} e_{i*_μ}
= ( Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i*_μ} ) ⊗_{μ=1}^d e_{i*_μ} = u_{i*} e^{(i*)}.
Thus we obtained an “eigenvalue problem”:
u ⊙ e^{(i*)} = u_{i*} e^{(i*)}.
Computing ‖u‖_∞, u ∈ T_r, by vector iteration
Define the diagonal matrix
D(u) := Σ_{j=1}^r ⊗_{μ=1}^d diag(u_{jμ})   (2)
with representation rank r; then D(u)v = u ⊙ v.
Now apply the well-known vector iteration (power) method, with rank truncation, to
D(u) e^{(i*)} = u_{i*} e^{(i*)},
and obtain ‖u‖_∞.
[Approximate iteration: Khoromskij, Hackbusch, Tyrtyshnikov 05], and [Espig, Hackbusch 2010]
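A dense sketch of this vector iteration: the matvec D(u)v is just the Hadamard product u ⊙ v. In the cited low-rank algorithms this is applied factor-by-factor with rank truncation; here, for a check only, the full tensor is assembled, and iterating on u² avoids sign issues (sizes and the number of iterations are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, r = 3, 4, 2
U = [rng.standard_normal((r, n)) for _ in range(d)]   # CP factors u_{j,mu}

# Assemble the full tensor sum_j ⊗_mu u_{j,mu} (feasible only for tiny d, n).
T = np.zeros((n,) * d)
for j in range(r):
    t = U[0][j]
    for mu in range(1, d):
        t = np.tensordot(t, U[mu][j], axes=0)
    T += t

v = np.ones_like(T)
for _ in range(50):              # vector iteration: v <- D(u)^2 v, normalized
    v = T * T * v
    v /= np.linalg.norm(v)
i_star = np.unravel_index(np.argmax(np.abs(v)), v.shape)
assert np.isclose(np.abs(T[i_star]), np.abs(T).max())  # found the max entry
```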
How to compute the mean value in CP format
Let u = Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} ∈ T_r; then the mean value ū can be computed as a scalar product:
ū = ⟨ Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} , ⊗_{μ=1}^d (1/n_μ) 1̃_μ ⟩ = Σ_{j=1}^r ∏_{μ=1}^d ⟨u_{jμ}, 1̃_μ⟩ / n_μ   (3)
= Σ_{j=1}^r ∏_{μ=1}^d (1/n_μ) Σ_{k=1}^{n_μ} (u_{jμ})_k,   (4)
where 1̃_μ := (1, ..., 1)ᵀ ∈ R^{n_μ}.
The numerical cost is O( r · Σ_{μ=1}^d n_μ ).
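Formula (4) in code: the mean of a CP tensor is computed from factor means alone, without ever assembling the tensor; the full tensor is built below only to check the result (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, r = 3, 5, 4
U = [rng.standard_normal((r, n)) for _ in range(d)]    # CP factors u_{j,mu}

# Mean from the factors: sum_j prod_mu mean(u_{j,mu}).
mean_cp = sum(np.prod([U[mu][j].mean() for mu in range(d)]) for j in range(r))

# Reference: assemble the full tensor (feasible only for tiny d, n).
T = np.zeros((n,) * d)
for j in range(r):
    t = U[0][j]
    for mu in range(1, d):
        t = np.tensordot(t, U[mu][j], axes=0)
    T += t
assert np.isclose(mean_cp, T.mean())
```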
Numerical Experiments
2D L-shape domain, N = 557 dofs.
The total stochastic dimension is M_u = M_κ + M_f = 20, and there are |J| = 231 PCE coefficients:
u = Σ_{j=1}^{231} u_{j,0} ⊗ ⊗_{μ=1}^{20} u_{jμ} ∈ R^{557} ⊗ ⊗_{μ=1}^{20} R³.
Level sets
Now we compute the level sets
sign(b ‖u‖_∞ 1 − u)
for b ∈ {0.2, 0.4, 0.6, 0.8}.
The tensor u has 3²⁰ · 557 ≈ 2 · 10¹² entries, i.e. ≈ 16 TB of memory.
The computing time for one level set was 10 minutes.
The intermediate ranks of sign(b ‖u‖_∞ 1 − u) and rank(u_k) stayed below 24.
Part II
Part II: Bayesian update
We will discuss the Gauss–Markov–Kalman filter for Bayesian updating of parameters in a computational model.
Mathematical setup
Consider
K(u; q) = f  ⇒  u = S(f; q),
where S is the solution operator.
The operator depends on parameters q ∈ Q, hence the state u ∈ U is also a function of q.
Measurement operator Y with values in Y:
y = Y(q; u) = Y(q, S(f; q)).
Examples of measurements:
y(ω) = ∫_{D₀} u(ω, x) dx, or values of u at a few points.
Random QoI
With the state u a random variable, the quantity to be measured,
y(ω) = Y(q(ω), u(ω)),
is also uncertain, i.e. a random variable.
Noisy data: ŷ + ε(ω),
where ŷ is the “true” value and ε a random error.
Forecast of the measurement: z(ω) = y(ω) + ε(ω).
Conditional probability and expectation
Classically, Bayes's theorem gives the conditional probability
P(I_q | M_z) = P(M_z | I_q) P(I_q) / P(M_z)   (or π_q(q|z) = p(z|q) p_q(q) / Z_s);
expectation with this posterior measure is the conditional expectation.
Kolmogorov starts from the conditional expectation E(·|M_z) and obtains the conditional probability from it via P(I_q | M_z) = E(χ_{I_q} | M_z).
Conditional expectation
The conditional expectation is defined as the orthogonal projection onto the closed subspace L²(Ω, P, σ(z)):
E(q | σ(z)) := P_{Q_∞} q = argmin_{q̃ ∈ L²(Ω,P,σ(z))} ‖q − q̃‖²_{L²}.
The subspace Q_∞ := L²(Ω, P, σ(z)) represents the available information.
The update, also called the assimilated value,
q_a(ω) := P_{Q_∞} q = E(q | σ(z)), is a Q-valued RV
and represents the new state of knowledge after the measurement.
Doob–Dynkin: Q_∞ = {ϕ ∈ Q : ϕ = φ ∘ z, φ measurable}.
Numerical computation of NLBU
Look for ϕ such that q(ξ) = ϕ(z(ξ)), z(ξ) = y(ξ) + ε(ω):
ϕ ≈ φ̃ = Σ_{α∈J_p} ϕ_α Φ_α(z(ξ)),
and minimize ‖q(ξ) − φ̃(z(ξ))‖²_{L²}, where the Φ_α are polynomials (e.g. Hermite, Laguerre, Chebyshev or others).
Taking derivatives with respect to ϕ_α:
∂/∂ϕ_α ⟨ q(ξ) − φ̃(z(ξ)), q(ξ) − φ̃(z(ξ)) ⟩ = 0   ∀α ∈ J_p.
Inserting the representation for φ̃, we obtain:
Numerical computation of NLBU
∂/∂ϕ_α E[ q²(ξ) − 2 Σ_{β∈J} q ϕ_β Φ_β(z) + Σ_{β,γ∈J} ϕ_β ϕ_γ Φ_β(z) Φ_γ(z) ]
= 2 E[ −q Φ_α(z) + Σ_{β∈J} ϕ_β Φ_β(z) Φ_α(z) ]
= 2 ( Σ_{β∈J} E[Φ_β(z) Φ_α(z)] ϕ_β − E[q Φ_α(z)] ) = 0   ∀α ∈ J.
Numerical computation of NLBU
Rewriting the last sum in matrix form, we obtain the linear system of equations A φ = b for the coefficients ϕ_β:
A_{αβ} = E[ Φ_α(z(ξ)) Φ_β(z(ξ)) ],   b_α = E[ q(ξ) Φ_α(z(ξ)) ],
where α, β ∈ J and A is of size |J| × |J|.
Numerical computation of NLBU
We can rewrite the system above in the compact form
[Φ] [diag(..., w_i, ...)] [Φ]ᵀ (..., ϕ_β, ...)ᵀ = [Φ] (w₀ q(ξ₀), ..., w_N q(ξ_N))ᵀ,
[Φ] ∈ R^{|J|×N},   [diag(..., w_i, ...)] ∈ R^{N×N}.
Solving this system, we obtain the vector of coefficients (..., ϕ_β, ...)ᵀ for all β.
Finally, the assimilated parameter q_a is
q_a = q_f + φ̃(ŷ) − φ̃(z),   (5)
z(ξ) = y(ξ) + ε(ω),   φ̃ = Σ_{β∈J_p} ϕ_β Φ_β(z(ξ)).
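A sampled sketch of this weighted least-squares system: monomials Φ_β(z) = z^β, equal weights w_i = 1/N, and a made-up linear forward map y = 2q + 1 with noise are purely illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 2000, 3                          # samples, surrogate degree
q = rng.standard_normal(N)              # prior parameter samples q(xi_i)
y = 2.0 * q + 1.0                       # forecast measurement y(xi_i)
z = y + 0.1 * rng.standard_normal(N)    # noisy forecast z = y + eps

Phi = np.vander(z, p + 1, increasing=True).T   # Phi[beta, i] = z_i^beta
w = np.full(N, 1.0 / N)
A = (Phi * w) @ Phi.T                   # A_{alpha,beta} ~ E[Phi_a Phi_b]
b = (Phi * w) @ q                       # b_alpha ~ E[q Phi_a]
phi = np.linalg.solve(A, b)             # surrogate coefficients phi_beta

q_hat = Phi.T @ phi                     # surrogate phi~(z) at the samples
assert np.corrcoef(q_hat, q)[0, 1] > 0.95   # recovers q well from z
```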
Example: Lorenz 1963 problem (chaotic system of ODEs)
ẋ = σ(ω)(y − x)
ẏ = x(ρ(ω) − z) − y
ż = xy − β(ω)z
The initial state q₀(ω) = (x₀(ω), y₀(ω), z₀(ω)) is uncertain.
Solve on t₀, t₁, ..., t₁₀, noisy measurement → UPDATE; solve on t₁₁, t₁₂, ..., t₂₀, noisy measurement → UPDATE; ...
IDEA of the Bayesian Update (BU): take q_f(ω) = q₀(ω).
Linear BU: q_a = q_f + K · (z − y).
Non-linear BU: q_a = q_f + H₁ · (z − y) + (z − y)ᵀ · H₂ · (z − y).
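An ensemble sketch of the linear update q_a = q_f + K(z − y) on Lorenz-63: the standard parameters σ=10, ρ=28, β=8/3 and all step, spread and noise sizes are illustrative, and K is the sample-covariance (Kalman) gain:

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.001):
    k1 = lorenz_rhs(s); k2 = lorenz_rhs(s + dt / 2 * k1)
    k3 = lorenz_rhs(s + dt / 2 * k2); k4 = lorenz_rhs(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(4)
ens = np.array([1.0, 1.0, 1.0]) + 0.5 * rng.standard_normal((200, 3))
for _ in range(100):                        # forecast ensemble up to t = 0.1
    ens = np.array([rk4_step(s) for s in ens])

y_fc = ens[:, 0]                            # forecast of the measurement: x
z_obs = y_fc.mean() + 0.1                   # synthetic noisy datum
c_qy = np.cov(ens.T, y_fc)[:3, 3]           # cross-covariance cov(q, y)
K = c_qy / (np.cov(y_fc) + 0.1 ** 2)        # gain, with noise variance 0.1^2
ens_a = ens + np.outer(z_obs - y_fc, K)     # linear Bayesian update
assert ens_a[:, 0].var() < ens[:, 0].var()  # uncertainty in x drops
```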
Trajectories of x, y and z in time. After each update (as new information arrives) the uncertainty drops. [O. Pajonk, B. V. Rosic, A. Litvinenko, and H. G. Matthies, 2012]
Example: Lorenz problem
Figure: quadratic BU surrogate, measure the state (x(t), y(t), z(t)).
Prior and posterior after one update.
Example: Lorenz Problem
Figure: Comparison of the posterior functions computed by linear and quadratic BU after the second update.
Example: Lorenz Problem
Figure: Quadratic measurement (x(t)², y(t)², z(t)²): comparison of the prior and the posterior for NLBU.
Example: 1D elliptic PDE with uncertain coeffs
−∇ · (κ(x, ξ) ∇u(x, ξ)) = f(x, ξ), x ∈ [0, 1],
plus Dirichlet random b.c. g(0, ξ) and g(1, ξ).
3 measurements: u(0.3) = 22 (s.d. 0.2), u(0.5) = 28 (s.d. 0.3), u(0.8) = 18 (s.d. 0.3).
κ(x, ξ): N = 100 dofs, M = 5, 35 KLE terms, beta distribution for κ, Gaussian cov_κ, cov. length 0.1, multivariate Hermite polynomials of order p_κ = 2;
RHS f(x, ξ): M_f = 5, 40 KLE terms, beta distribution, exponential cov_f, cov. length 0.03, multivariate Hermite polynomials of order p_f = 2;
b.c. g(x, ξ): M_g = 2, 2 KLE terms, normal distribution for g, Gaussian cov_g, cov. length 10, multivariate Hermite polynomials of order p_g = 1;
p_φ = 3 and p_u = 3.
Example: updating of the solution u
Figure: Original and updated solutions; mean value plus/minus 1, 2, 3 standard deviations.
[Graphics produced with the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig]
Example: Updating of the parameter
Figure: Original and updated parameter κ.
Future plans and possible collaboration
Future plans, Idea N1
Possible collaboration work with Troy Butler: To develop a
low-rank adaptive goal-oriented Bayesian update technique. The
solution of the forward and inverse problems will be considered as a
whole adaptive process, controlled by error/uncertainty estimators.
[Diagram: iterated forward solve and update — each forward solve yields the forecast y and the predicted measurement z = y + ε, and the parameter q is corrected via (y − z); all stages low-rank and adaptive. Error contributions: spatial discretization, stochastic discretization, low-rank approximation, inverse operator approximation.]
Future plans, Idea N2
A link between Green's functions in PDEs and covariance matrices.
Possible collaboration with the statistics groups of Doug Nychka (NCAR) and Håvard Rue.
Future plans, Idea N3
Data assimilation techniques, Bayesian update surrogate.
Develop non-linear, non-Gaussian Bayesian update
approximation for gPCE coefficients.
Possible collaboration with Jan Mandel, Troy Butler, Kody Law,
Y. Marzouk, H. Najm, TU Braunschweig and KAUST
Collaborators
1. Uncertainty quantification and Bayesian Update: Prof. H.
Matthies, Bojana V. Rosic, Elmar Zander, Oliver Pajonk
from TU Braunschweig, Germany,
2. Low-rank tensor calculus: Mike Espig from RWTH Aachen,
Boris and Venera Khoromskij from MPI Leipzig
3. Spatial and environmental statistics: Marc Genton, Ying
Sun, Raphael Huser, Brian Reich, Ben Shaby and David
Bolin.
4. Some others: UQ, data assimilation, high-dimensional
problems/statistics
Conclusion
Introduced low-rank tensor methods to solve elliptic PDEs
with uncertain coefficients,
Explained how to compute the maximum, the mean, level
sets,... in low-rank tensor format,
Derived a Bayesian update surrogate ϕ (as a linear, quadratic, cubic, etc. approximation), i.e. computed the conditional expectation of q given the measurement y.
Example: Canonical rank d, whereas TT rank 2
The d-Laplacian over a uniform tensor grid is known to have the Kronecker rank-d representation
Δ_d = A ⊗ I_N ⊗ ... ⊗ I_N + I_N ⊗ A ⊗ ... ⊗ I_N + ... + I_N ⊗ I_N ⊗ ... ⊗ A ∈ R^{N^d × N^d},   (6)
with A = Δ₁ = tridiag{−1, 2, −1} ∈ R^{N×N} and I_N the N × N identity. Notice that the canonical rank is rank_C(Δ_d) = d, while the TT-rank of Δ_d equals 2 for any dimension, due to the explicit representation
Δ_d = (Δ₁  I) × [[I, 0], [Δ₁, I]] × ... × [[I, 0], [Δ₁, I]] × [[I], [Δ₁]],   (7)
where the rank-product operation “×” is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of the tensor product. A similar bound holds for the Tucker rank: rank_Tuck(Δ_d) = 2.
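The Kronecker-sum representation (6) can be assembled directly for a small case (N and d here are illustrative):

```python
import numpy as np

# Canonical rank-d Kronecker sum for the d-dimensional Laplacian,
# Delta_d = A⊗I⊗...⊗I + I⊗A⊗...⊗I + ... + I⊗...⊗I⊗A, checked for d = 3.
N, d = 4, 3
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1))        # A = tridiag{-1, 2, -1}
I = np.eye(N)

Delta_d = np.zeros((N**d, N**d))
for k in range(d):                          # one rank-1 Kronecker term per k
    M = A if k == 0 else I
    for j in range(1, d):
        M = np.kron(M, A if j == k else I)
    Delta_d += M

assert Delta_d.shape == (N**d, N**d)
assert np.allclose(Delta_d, Delta_d.T)              # symmetric
assert np.all(np.linalg.eigvalsh(Delta_d) > 0)      # positive definite
```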
Advantages and disadvantages
Denote by k the rank, d the dimension, and n = # dofs in 1D:
1. CP: ill-posed approximation algorithm, O(dnk), approximations hard to compute.
2. Tucker: reliable arithmetic based on SVD, O(dnk + k^d).
3. Hierarchical Tucker: based on SVD, storage O(dnk + dk³), truncation O(dnk² + dk⁴).
4. TT: based on SVD, O(dnk²) or O(dnk³), stable.
5. Quantics-TT: O(n^d) → O(d log_q n).
How to compute the variance in CP format
Let u ∈ T_r and
ũ := u − ū ⊗_{μ=1}^d 1̃_μ = Σ_{j=1}^{r+1} ⊗_{μ=1}^d ũ_{jμ} ∈ T_{r+1},   (8)
where ū is the mean value of u; then the variance var(u) of u can be computed as
var(u) = ⟨ũ, ũ⟩ / ∏_{μ=1}^d n_μ = (1/∏_{μ=1}^d n_μ) ⟨ Σ_{i=1}^{r+1} ⊗_{μ=1}^d ũ_{iμ} , Σ_{j=1}^{r+1} ⊗_{ν=1}^d ũ_{jν} ⟩
= Σ_{i=1}^{r+1} Σ_{j=1}^{r+1} ∏_{μ=1}^d (1/n_μ) ⟨ũ_{iμ}, ũ_{jμ}⟩.
The numerical cost is O( (r + 1)² · Σ_{μ=1}^d n_μ ).
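The variance formula in code: append the rank-1 term −ū · 1⊗...⊗1 to the CP factors, then evaluate the double sum via factor Gramians; the full tensor is assembled only to check the result (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
d, n, r = 3, 5, 2
U = [rng.standard_normal((r, n)) for _ in range(d)]    # CP factors u_{j,mu}

mean_cp = sum(np.prod([U[mu][j].mean() for mu in range(d)]) for j in range(r))
# Centered tensor u~ of rank r+1: extra factor row is the all-ones vector,
# with the first-mode copy scaled by -mean.
Ut = [np.vstack([U[mu], np.ones((1, n))]) for mu in range(d)]
Ut[0][r] *= -mean_cp

G = [Ut[mu] @ Ut[mu].T for mu in range(d)]             # factor Gramians
var_cp = np.prod(np.stack(G), axis=0).sum() / n**d     # <u~, u~> / prod(n_mu)

# Reference: assemble the full tensor and compare.
T = np.zeros((n,) * d)
for j in range(r):
    t = U[0][j]
    for mu in range(1, d):
        t = np.tensordot(t, U[mu][j], axes=0)
    T += t
assert np.isclose(var_cp, T.var())
```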
Computing QoI in low-rank tensor format
Now we consider how to
find ‘level sets’,
for instance, all entries of the tensor u in an interval [a, b].
Definitions of characteristic and sign functions
1. To compute level sets and frequencies we need the characteristic function.
2. To compute the characteristic function we need the sign function.
The characteristic χ_I(u) ∈ T of u ∈ T in I ⊂ R is, for every multi-index i ∈ I, pointwise defined as
(χ_I(u))_i := 1 if u_i ∈ I, and 0 if u_i ∉ I.
Furthermore, sign(u) ∈ T is, for all i ∈ I, pointwise defined by
(sign(u))_i := 1 if u_i > 0, −1 if u_i < 0, and 0 if u_i = 0.
sign(u) is needed for computing χI(u)
Lemma
Let u ∈ T, a, b ∈ R, and 1 = ⊗_{μ=1}^d 1̃_μ, where 1̃_μ := (1, ..., 1)ᵀ ∈ R^{n_μ}.
(i) If I = R_{<b}, then χ_I(u) = ½ (1 + sign(b·1 − u)).
(ii) If I = R_{>a}, then χ_I(u) = ½ (1 − sign(a·1 − u)).
(iii) If I = (a, b), then χ_I(u) = ½ (sign(b·1 − u) − sign(a·1 − u)).
We compute sign(u), u ∈ T_r, via a hybrid Newton–Schulz iteration with rank truncation after each iteration.
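A dense sketch of the elementwise Newton–Schulz sign iteration v ← v(3 − v²)/2; in the low-rank setting each step is followed by rank truncation, omitted here, and the scaling and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
u = rng.uniform(-1.0, 1.0, size=(4, 4, 4))
v = u / np.abs(u).max()              # scale into [-1, 1] for convergence
for _ in range(40):                  # Newton-Schulz: v <- v (3 - v^2) / 2
    v = 0.5 * v * (3.0 - v * v)
assert np.allclose(v, np.sign(u), atol=1e-6)
```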
Level Set, Frequency
Definition (Level Set, Frequency)
Let I ⊂ R and u ∈ T. The level set L_I(u) ∈ T of u with respect to I is pointwise defined by
(L_I(u))_i := u_i if u_i ∈ I, and 0 if u_i ∉ I, for all i ∈ I.
The frequency F_I(u) ∈ N of u with respect to I is defined as
F_I(u) := # supp χ_I(u).
Computation of level sets and frequency
Proposition
Let I ⊂ R, u ∈ T, and χ_I(u) its characteristic. Then
L_I(u) = χ_I(u) ⊙ u   and   rank(L_I(u)) ≤ rank(χ_I(u)) · rank(u).
The frequency F_I(u) ∈ N of u with respect to I is
F_I(u) = ⟨χ_I(u), 1⟩,
where 1 = ⊗_{μ=1}^d 1̃_μ, 1̃_μ := (1, ..., 1)ᵀ ∈ R^{n_μ}.
Litvinenko low-rank kriging +FFT poster
Alexander Litvinenko
 
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Alexander Litvinenko
 
My PhD talk "Application of H-matrices for computing partial inverse"
My PhD talk "Application of H-matrices for computing partial inverse"My PhD talk "Application of H-matrices for computing partial inverse"
My PhD talk "Application of H-matrices for computing partial inverse"
Alexander Litvinenko
 
Response Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty QuantificationResponse Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty Quantification
Alexander Litvinenko
 
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
Alexander Litvinenko
 
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Alexander Litvinenko
 
Hierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matricesHierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matrices
Alexander Litvinenko
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
Alexander Litvinenko
 
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017) Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017)
Alexander Litvinenko
 
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Alexander Litvinenko
 
Scalable hierarchical algorithms for stochastic PDEs and UQ
Scalable hierarchical algorithms for stochastic PDEs and UQScalable hierarchical algorithms for stochastic PDEs and UQ
Scalable hierarchical algorithms for stochastic PDEs and UQ
Alexander Litvinenko
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
Alexander Litvinenko
 
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
Alexander Litvinenko
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
Alexander Litvinenko
 
Likelihood approximation with parallel hierarchical matrices for large spatia...
Likelihood approximation with parallel hierarchical matrices for large spatia...Likelihood approximation with parallel hierarchical matrices for large spatia...
Likelihood approximation with parallel hierarchical matrices for large spatia...
Alexander Litvinenko
 
Ad

Similar to Low-rank tensor methods for stochastic forward and inverse problems (20)

My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...
Alexander Litvinenko
 
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
Alexander Litvinenko
 
Hierarchical matrix techniques for maximum likelihood covariance estimation
Hierarchical matrix techniques for maximum likelihood covariance estimationHierarchical matrix techniques for maximum likelihood covariance estimation
Hierarchical matrix techniques for maximum likelihood covariance estimation
Alexander Litvinenko
 
Data sparse approximation of Karhunen-Loeve Expansion
Data sparse approximation of Karhunen-Loeve ExpansionData sparse approximation of Karhunen-Loeve Expansion
Data sparse approximation of Karhunen-Loeve Expansion
Alexander Litvinenko
 
Litvinenko, Uncertainty Quantification - an Overview
Litvinenko, Uncertainty Quantification - an OverviewLitvinenko, Uncertainty Quantification - an Overview
Litvinenko, Uncertainty Quantification - an Overview
Alexander Litvinenko
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
Alexander Litvinenko
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
Christian Robert
 
Tensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsTensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEs
Alexander Litvinenko
 
Lecture#9
Lecture#9Lecture#9
Lecture#9
Ali Shah
 
IRJET- Optimization of 1-Bit ALU using Ternary Logic
IRJET- Optimization of 1-Bit ALU using Ternary LogicIRJET- Optimization of 1-Bit ALU using Ternary Logic
IRJET- Optimization of 1-Bit ALU using Ternary Logic
IRJET Journal
 
top school in india
top school in indiatop school in india
top school in india
Edhole.com
 
The Fundamental theorem of calculus
The Fundamental theorem of calculus The Fundamental theorem of calculus
The Fundamental theorem of calculus
AhsanIrshad8
 
Chapter 06 rsa cryptosystem
Chapter 06   rsa cryptosystemChapter 06   rsa cryptosystem
Chapter 06 rsa cryptosystem
Ankur Choudhary
 
A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formula
Alexander Litvinenko
 
Fixed point and common fixed point theorems in complete metric spaces
Fixed point and common fixed point theorems in complete metric spacesFixed point and common fixed point theorems in complete metric spaces
Fixed point and common fixed point theorems in complete metric spaces
Alexander Decker
 
Matlab polynimials and curve fitting
Matlab polynimials and curve fittingMatlab polynimials and curve fitting
Matlab polynimials and curve fitting
Ameen San
 
Unit-1 Basic Concept of Algorithm.pptx
Unit-1 Basic Concept of Algorithm.pptxUnit-1 Basic Concept of Algorithm.pptx
Unit-1 Basic Concept of Algorithm.pptx
ssuser01e301
 
Application of parallel hierarchical matrices and low-rank tensors in spatial...
Application of parallel hierarchical matrices and low-rank tensors in spatial...Application of parallel hierarchical matrices and low-rank tensors in spatial...
Application of parallel hierarchical matrices and low-rank tensors in spatial...
Alexander Litvinenko
 
Practical and Worst-Case Efficient Apportionment
Practical and Worst-Case Efficient ApportionmentPractical and Worst-Case Efficient Apportionment
Practical and Worst-Case Efficient Apportionment
Raphael Reitzig
 
Proyecto grupal algebra parcial ii
Proyecto grupal algebra parcial iiProyecto grupal algebra parcial ii
Proyecto grupal algebra parcial ii
JHANDRYALCIVARGUAJAL
 
My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...
Alexander Litvinenko
 
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
Alexander Litvinenko
 
Hierarchical matrix techniques for maximum likelihood covariance estimation
Hierarchical matrix techniques for maximum likelihood covariance estimationHierarchical matrix techniques for maximum likelihood covariance estimation
Hierarchical matrix techniques for maximum likelihood covariance estimation
Alexander Litvinenko
 
Data sparse approximation of Karhunen-Loeve Expansion
Data sparse approximation of Karhunen-Loeve ExpansionData sparse approximation of Karhunen-Loeve Expansion
Data sparse approximation of Karhunen-Loeve Expansion
Alexander Litvinenko
 
Litvinenko, Uncertainty Quantification - an Overview
Litvinenko, Uncertainty Quantification - an OverviewLitvinenko, Uncertainty Quantification - an Overview
Litvinenko, Uncertainty Quantification - an Overview
Alexander Litvinenko
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
Alexander Litvinenko
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
Christian Robert
 
Tensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsTensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEs
Alexander Litvinenko
 
IRJET- Optimization of 1-Bit ALU using Ternary Logic
IRJET- Optimization of 1-Bit ALU using Ternary LogicIRJET- Optimization of 1-Bit ALU using Ternary Logic
IRJET- Optimization of 1-Bit ALU using Ternary Logic
IRJET Journal
 
top school in india
top school in indiatop school in india
top school in india
Edhole.com
 
The Fundamental theorem of calculus
The Fundamental theorem of calculus The Fundamental theorem of calculus
The Fundamental theorem of calculus
AhsanIrshad8
 
Chapter 06 rsa cryptosystem
Chapter 06   rsa cryptosystemChapter 06   rsa cryptosystem
Chapter 06 rsa cryptosystem
Ankur Choudhary
 
A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formula
Alexander Litvinenko
 
Fixed point and common fixed point theorems in complete metric spaces
Fixed point and common fixed point theorems in complete metric spacesFixed point and common fixed point theorems in complete metric spaces
Fixed point and common fixed point theorems in complete metric spaces
Alexander Decker
 
Matlab polynimials and curve fitting
Matlab polynimials and curve fittingMatlab polynimials and curve fitting
Matlab polynimials and curve fitting
Ameen San
 
Unit-1 Basic Concept of Algorithm.pptx
Unit-1 Basic Concept of Algorithm.pptxUnit-1 Basic Concept of Algorithm.pptx
Unit-1 Basic Concept of Algorithm.pptx
ssuser01e301
 
Application of parallel hierarchical matrices and low-rank tensors in spatial...
Application of parallel hierarchical matrices and low-rank tensors in spatial...Application of parallel hierarchical matrices and low-rank tensors in spatial...
Application of parallel hierarchical matrices and low-rank tensors in spatial...
Alexander Litvinenko
 
Practical and Worst-Case Efficient Apportionment
Practical and Worst-Case Efficient ApportionmentPractical and Worst-Case Efficient Apportionment
Practical and Worst-Case Efficient Apportionment
Raphael Reitzig
 
Proyecto grupal algebra parcial ii
Proyecto grupal algebra parcial iiProyecto grupal algebra parcial ii
Proyecto grupal algebra parcial ii
JHANDRYALCIVARGUAJAL
 
Ad

More from Alexander Litvinenko (20)

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
Alexander Litvinenko
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
Alexander Litvinenko
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
Alexander Litvinenko
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Alexander Litvinenko
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
Alexander Litvinenko
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
Alexander Litvinenko
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
Alexander Litvinenko
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Alexander Litvinenko
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
Alexander Litvinenko
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Alexander Litvinenko
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Alexander Litvinenko
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
Alexander Litvinenko
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
Alexander Litvinenko
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
Alexander Litvinenko
 
Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
Alexander Litvinenko
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
Alexander Litvinenko
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
Alexander Litvinenko
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Alexander Litvinenko
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
Alexander Litvinenko
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Alexander Litvinenko
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
Alexander Litvinenko
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Alexander Litvinenko
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Alexander Litvinenko
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
Alexander Litvinenko
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
Alexander Litvinenko
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
Alexander Litvinenko
 

Recently uploaded (20)

Cyber security COPA ITI MCQ Top Questions
Cyber security COPA ITI MCQ Top QuestionsCyber security COPA ITI MCQ Top Questions
Cyber security COPA ITI MCQ Top Questions
SONU HEETSON
 
MICROBIAL GENETICS -tranformation and tranduction.pdf
MICROBIAL GENETICS -tranformation and tranduction.pdfMICROBIAL GENETICS -tranformation and tranduction.pdf
MICROBIAL GENETICS -tranformation and tranduction.pdf
DHARMENDRA SAHU
 
MCQ PHYSIOLOGY II (DR. NASIR MUSTAFA) MCQS)
MCQ PHYSIOLOGY II (DR. NASIR MUSTAFA) MCQS)MCQ PHYSIOLOGY II (DR. NASIR MUSTAFA) MCQS)
MCQ PHYSIOLOGY II (DR. NASIR MUSTAFA) MCQS)
Dr. Nasir Mustafa
 
How to Change Sequence Number in Odoo 18 Sale Order
How to Change Sequence Number in Odoo 18 Sale OrderHow to Change Sequence Number in Odoo 18 Sale Order
How to Change Sequence Number in Odoo 18 Sale Order
Celine George
 
How to Manage Amounts in Local Currency in Odoo 18 Purchase
How to Manage Amounts in Local Currency in Odoo 18 PurchaseHow to Manage Amounts in Local Currency in Odoo 18 Purchase
How to Manage Amounts in Local Currency in Odoo 18 Purchase
Celine George
 
How To Maximize Sales Performance using Odoo 18 Diverse views in sales module
How To Maximize Sales Performance using Odoo 18 Diverse views in sales moduleHow To Maximize Sales Performance using Odoo 18 Diverse views in sales module
How To Maximize Sales Performance using Odoo 18 Diverse views in sales module
Celine George
 
Chemotherapy of Malignancy -Anticancer.pptx
Chemotherapy of Malignancy -Anticancer.pptxChemotherapy of Malignancy -Anticancer.pptx
Chemotherapy of Malignancy -Anticancer.pptx
Mayuri Chavan
 
GENERAL QUIZ PRELIMS | QUIZ CLUB OF PSGCAS | 4 MARCH 2025 .pdf
GENERAL QUIZ PRELIMS | QUIZ CLUB OF PSGCAS | 4 MARCH 2025 .pdfGENERAL QUIZ PRELIMS | QUIZ CLUB OF PSGCAS | 4 MARCH 2025 .pdf
GENERAL QUIZ PRELIMS | QUIZ CLUB OF PSGCAS | 4 MARCH 2025 .pdf
Quiz Club of PSG College of Arts & Science
 
Classification of mental disorder in 5th semester bsc. nursing and also used ...
Classification of mental disorder in 5th semester bsc. nursing and also used ...Classification of mental disorder in 5th semester bsc. nursing and also used ...
Classification of mental disorder in 5th semester bsc. nursing and also used ...
parmarjuli1412
 
YSPH VMOC Special Report - Measles Outbreak Southwest US 5-17-2025 .pptx
YSPH VMOC Special Report - Measles Outbreak  Southwest US 5-17-2025  .pptxYSPH VMOC Special Report - Measles Outbreak  Southwest US 5-17-2025  .pptx
YSPH VMOC Special Report - Measles Outbreak Southwest US 5-17-2025 .pptx
Yale School of Public Health - The Virtual Medical Operations Center (VMOC)
 
114P_English.pdf114P_English.pdf114P_English.pdf
114P_English.pdf114P_English.pdf114P_English.pdf114P_English.pdf114P_English.pdf114P_English.pdf
114P_English.pdf114P_English.pdf114P_English.pdf
paulinelee52
 
Rebuilding the library community in a post-Twitter world
Rebuilding the library community in a post-Twitter worldRebuilding the library community in a post-Twitter world
Rebuilding the library community in a post-Twitter world
Ned Potter
 
Antepartum fetal surveillance---Dr. H.K.Cheema pdf.pdf
Antepartum fetal surveillance---Dr. H.K.Cheema pdf.pdfAntepartum fetal surveillance---Dr. H.K.Cheema pdf.pdf
Antepartum fetal surveillance---Dr. H.K.Cheema pdf.pdf
Dr H.K. Cheema
 
INDIA QUIZ FOR SCHOOLS | THE QUIZ CLUB OF PSGCAS | AUGUST 2024
INDIA QUIZ FOR SCHOOLS | THE QUIZ CLUB OF PSGCAS | AUGUST 2024INDIA QUIZ FOR SCHOOLS | THE QUIZ CLUB OF PSGCAS | AUGUST 2024
INDIA QUIZ FOR SCHOOLS | THE QUIZ CLUB OF PSGCAS | AUGUST 2024
Quiz Club of PSG College of Arts & Science
 
2025 The Senior Landscape and SET plan preparations.pptx
2025 The Senior Landscape and SET plan preparations.pptx2025 The Senior Landscape and SET plan preparations.pptx
2025 The Senior Landscape and SET plan preparations.pptx
mansk2
 
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFAMCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
MCQS (EMERGENCY NURSING) DR. NASIR MUSTAFA
Dr. Nasir Mustafa
 
Conditions for Boltzmann Law – Biophysics Lecture Slide
Conditions for Boltzmann Law – Biophysics Lecture SlideConditions for Boltzmann Law – Biophysics Lecture Slide
Conditions for Boltzmann Law – Biophysics Lecture Slide
PKLI-Institute of Nursing and Allied Health Sciences Lahore , Pakistan.
 
The History of Kashmir Lohar Dynasty NEP.ppt

Low-rank tensor methods for stochastic forward and inverse problems

  • 1. Low-rank tensor methods for PDEs with uncertain coefficients and Bayesian update surrogate. Alexander Litvinenko, Center for Uncertainty Quantification, http://sri-uq.kaust.edu.sa/, Extreme Computing Research Center, KAUST.
  • 2. The structure of the talk. Part I (stochastic forward problem): 1. Motivation. 2. Elliptic PDE with uncertain coefficients. 3. Discretization and low-rank tensor approximations. 4. Tensor calculus to compute QoI. Part II (Bayesian update): 1. Bayesian update surrogate. 2. Examples.
  • 3. KAUST. I received very rich collaboration experience as a co-organizer of: 3 UQ workshops, 2 Scalable Hierarchical Algorithms for eXtreme Computing (SHAXC) workshops, and 1 HPC conference (www.hpcsaudi.org, 2017).
  • 4. My interests and collaborations.
  • 5. Motivation to do Uncertainty Quantification (UQ). Motivation: there is an urgent need to quantify and reduce the uncertainty in output quantities of computer simulations within complex (multiscale-multiphysics) applications. Typical challenges: classical sampling methods are often very inefficient, whereas straightforward functional representations are subject to the well-known curse of dimensionality. My goal is the systematic, mathematically founded development of UQ methods and low-rank algorithms relevant for applications. (Center for Uncertainty Quantification)
  • 6. UQ and its relevance. Nowadays computational predictions are used in critical engineering decisions, and thanks to modern computers we are able to simulate very complex phenomena. But how reliable are these predictions? Can they be trusted? Example: Saudi Aramco currently has a simulator, GigaPOWERS, which runs with 9 billion cells. How sensitive are the simulation results with respect to the unknown reservoir properties?
  • 7. Part I: Stochastic forward problem. Stochastic Galerkin method to solve an elliptic PDE with uncertain coefficients.
  • 8. PDE with uncertain coefficient and RHS. Consider −div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R², with u = 0 on ∂G, (1) where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive, usually κ(x, ω) = exp(γ(x, ω)). For well-posedness see [Sarkis 09, Gittelson 10, H.-J. Starkloff 11, Ullmann 10]. Further we assume that cov_κ(x, y) is given.
  • 9. My previous work. After applying the stochastic Galerkin method, we obtain Ku = f, where all ingredients are represented in a tensor format. Compute max{u}, var(u), level sets of u, sign(u): [1] Efficient Analysis of High Dimensional Data in Tensor Formats, Espig, Hackbusch, A.L., Matthies and Zander, 2012. Research on which ingredients influence the tensor rank of K: [2] Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Wähnert, Espig, Hackbusch, A.L., Matthies, 2013. Approximate κ(x, ω) and the stochastic Galerkin operator K in the Tensor Train (TT) format, solve for u, postprocessing: [3] Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
  • 10. Typical quantities of interest. Keeping all input and intermediate data in a tensor representation, one wants to perform different tasks: evaluation for specific parameters (ω₁, ..., ω_M), finding maxima and minima, finding 'level sets' (needed for histograms and probability densities). Example of a level set: all elements of a high-dimensional tensor from the interval [0.7, 0.8].
  • 11. Canonical and Tucker tensor formats. Definition and examples of tensors.
  • 12. Canonical and Tucker tensor formats. [Pictures are taken from the lecture course of B. Khoromskij and A. Auer.] Storage: O(n^d) → O(dRn) (canonical) and O(R^d + dRn) (Tucker).
  • 13. Definition of a tensor of order d. A tensor of order d is a multidimensional array over a d-tuple index set I = I₁ × ... × I_d: A = [a_{i₁...i_d} : i_ℓ ∈ I_ℓ] ∈ R^I, I_ℓ = {1, ..., n_ℓ}, ℓ = 1, ..., d. A is an element of the linear space V_n = ⊗_{ℓ=1}^d V_ℓ, V_ℓ = R^{I_ℓ}, equipped with the Euclidean scalar product ⟨·, ·⟩ : V_n × V_n → R, defined as ⟨A, B⟩ := Σ_{(i₁...i_d)∈I} a_{i₁...i_d} b_{i₁...i_d} for A, B ∈ V_n.
  • 14. Examples of rank-1 and rank-2 tensors. Rank-1: f(x₁, ..., x_d) = exp(f₁(x₁) + ... + f_d(x_d)) = ∏_{j=1}^d exp(f_j(x_j)). Rank-2: f(x₁, ..., x_d) = sin(Σ_{j=1}^d x_j), since 2i·sin(Σ_{j=1}^d x_j) = e^{i Σ_j x_j} − e^{−i Σ_j x_j}. The rank-d function f(x₁, ..., x_d) = x₁ + x₂ + ... + x_d can be approximated by a rank-2 tensor with any prescribed accuracy: f ≈ (1/ε) ∏_{j=1}^d (1 + εx_j) − (1/ε) · 1 + O(ε), as ε → 0.
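A quick numerical sanity check of the last claim (a sketch, not from the slides): the two rank-1 terms (1/ε)∏_j(1 + εx_j) and −(1/ε)·1 reproduce the sum x₁ + ... + x_d up to O(ε).

```python
import numpy as np

def sum_via_rank2(x, eps=1e-6):
    """Rank-2 approximation of f(x) = x1 + ... + xd:
    (1/eps) * prod_j (1 + eps*x_j)  minus  (1/eps) * 1."""
    x = np.asarray(x, dtype=float)
    return (np.prod(1.0 + eps * x) - 1.0) / eps

x = [0.3, -1.2, 2.5, 0.7]
# the approximation error is O(eps)
assert abs(sum_via_rank2(x) - sum(x)) < 1e-4
```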
  • 15. Tensors and matrices. Rank-1 tensor: A = u₁ ⊗ u₂ ⊗ ... ⊗ u_d =: ⊗_{μ=1}^d u_μ, with entries A_{i₁,...,i_d} = (u₁)_{i₁} · ... · (u_d)_{i_d}. A rank-1 tensor A = u ⊗ v corresponds to the matrix A = uv^T (or A = vu^T), u ∈ R^n, v ∈ R^m; a rank-k tensor A = Σ_{i=1}^k u_i ⊗ v_i corresponds to the matrix A = Σ_{i=1}^k u_i v_i^T. The Kronecker product of n × n and m × m matrices is a new block matrix A ⊗ B ∈ R^{nm×nm}, whose ij-th block is [A_{ij}B].
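The d = 2 correspondences above can be checked directly in NumPy (a minimal illustration, not part of the talk):

```python
import numpy as np

# Kronecker product: the ij-th block of A ⊗ B is A[i, j] * B.
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
K = np.kron(A, B)                              # in R^{4x4}
assert K.shape == (4, 4)
assert np.allclose(K[0:2, 2:4], A[0, 1] * B)   # the (1,2) block equals A_12 * B

# A rank-2 order-2 tensor u1 ⊗ v1 + u2 ⊗ v2 is the rank-2 matrix u1 v1^T + u2 v2^T.
u1, v1 = np.array([1., 0.]), np.array([1., 1.])
u2, v2 = np.array([0., 1.]), np.array([1., -1.])
M = np.outer(u1, v1) + np.outer(u2, v2)
assert np.linalg.matrix_rank(M) == 2
```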
  • 16. Discretization of the elliptic PDE. Now let us discretize our diffusion equation with uncertain coefficients.
  • 17. Karhunen-Loève and Polynomial Chaos Expansions. Apply both the Karhunen-Loève expansion (KLE) κ(x, ω) = κ₀(x) + Σ_{j=1}^∞ κ_j g_j(x) ξ_j(θ(ω)), where θ = θ(ω) = (θ₁(ω), θ₂(ω), ...) and ξ_j(θ) = (1/κ_j) ∫_G (κ(x, ω) − κ₀(x)) g_j(x) dx, and the polynomial chaos expansion (PCE) κ(x, ω) = Σ_α κ^{(α)}(x) H_α(θ); compute ξ_j(θ) = Σ_{α∈J} ξ_j^{(α)} H_α(θ), where ξ_j^{(α)} = (1/κ_j) ∫_G κ^{(α)}(x) g_j(x) dx. Further compute ξ_j^{(α)} ≈ Σ_{ℓ=1}^s (ξ_ℓ)_j ∏_{k=1}^∞ (ξ_{ℓ,k})_{α_k}.
  • 18. Final discretized stochastic PDE. Ku = f, where K := Σ_{ℓ=1}^s K_ℓ ⊗ ⊗_{μ=1}^M Δ_{ℓμ}, K_ℓ ∈ R^{N×N}, Δ_{ℓμ} ∈ R^{R_μ×R_μ}; u := Σ_{j=1}^r u_j ⊗ ⊗_{μ=1}^M u_{jμ}, u_j ∈ R^N, u_{jμ} ∈ R^{R_μ}; f := Σ_{k=1}^R f_k ⊗ ⊗_{μ=1}^M g_{kμ}, f_k ∈ R^N and g_{kμ} ∈ R^{R_μ}. (Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2011.) Examples of stochastic Galerkin matrices follow.
  • 19. Computing QoI in a low-rank tensor format. Now we consider how to find maxima in a high-dimensional tensor.
  • 20. Maximum norm and the corresponding index. Let u = Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} be of representation rank r; compute ‖u‖_∞ := max_{i=(i₁,...,i_d)∈I} |u_i| = max_{i∈I} |Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i_μ}|. Computing ‖u‖_∞ is equivalent to the following eigenvalue problem. Let i* := (i*₁, ..., i*_d) ∈ I, #I = ∏_{μ=1}^d n_μ. Then ‖u‖_∞ = |u_{i*}| with u_{i*} = Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i*_μ}, and define e^{(i*)} := ⊗_{μ=1}^d e_{i*_μ}, where e_{i*_μ} ∈ R^{n_μ} is the i*_μ-th canonical vector in R^{n_μ} (μ ∈ N_{≤d}).
  • 21. Then u ⊙ e^{(i*)} = (Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ}) ⊙ (⊗_{μ=1}^d e_{i*_μ}) = Σ_{j=1}^r ⊗_{μ=1}^d (u_{jμ} ⊙ e_{i*_μ}) = Σ_{j=1}^r ⊗_{μ=1}^d (u_{jμ})_{i*_μ} e_{i*_μ} = (Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i*_μ}) ⊗_{μ=1}^d e_{i*_μ} = u_{i*} e^{(i*)}, where ⊙ denotes the entrywise (Hadamard) product. Thus we obtained an 'eigenvalue problem': u ⊙ e^{(i*)} = u_{i*} e^{(i*)}.
  • 22. Computing ‖u‖_∞ by vector iteration. By defining the diagonal matrix D(u) := Σ_{j=1}^r ⊗_{μ=1}^d diag(u_{jμ}) (2) with representation rank r, we obtain D(u)v = u ⊙ v. Now apply the well-known vector iteration method (with rank truncation) to D(u)e^{(i*)} = u_{i*} e^{(i*)} to obtain ‖u‖_∞. [Approximate iteration: Khoromskij, Hackbusch, Tyrtyshnikov 05; Espig, Hackbusch 2010.]
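A minimal dense sketch of this idea (no low-rank truncation, which is the whole point of the tensor setting): since D(u)v = u ⊙ v, power iteration on D(u) converges to the canonical vector at the entry of largest modulus, and a Rayleigh-type quotient recovers ‖u‖_∞.

```python
import numpy as np

def max_abs_by_iteration(u, iters=200):
    """Power iteration on D(u): v <- u*v (entrywise), normalized.
    The iterate concentrates on the index i* with |u_{i*}| maximal;
    the Rayleigh quotient sum(u * v^2) then approximates u_{i*}."""
    v = np.ones_like(u, dtype=float)
    for _ in range(iters):
        v = u * v
        v /= np.linalg.norm(v)
    return abs(np.sum(u * v * v))

u = np.array([0.2, -3.5, 1.1, 2.9])
res = max_abs_by_iteration(u)   # close to ||u||_inf = 3.5
```

In the actual tensor algorithm the same iteration is carried out on CP/TT representations, with rank truncation after every entrywise product.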
  • 23. How to compute the mean value in the CP format. Let u = Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ}; then the mean value ū can be computed as a scalar product: ū = ⟨Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ}, ⊗_{μ=1}^d (1/n_μ) 1̃_μ⟩ = Σ_{j=1}^r ∏_{μ=1}^d ⟨u_{jμ}, 1̃_μ⟩/n_μ (3) = Σ_{j=1}^r ∏_{μ=1}^d (1/n_μ) Σ_{k=1}^{n_μ} (u_{jμ})_k, (4) where 1̃_μ := (1, ..., 1)^T ∈ R^{n_μ}. The numerical cost is O(r · Σ_{μ=1}^d n_μ).
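Formula (4) in code (an illustrative sketch): the mean of a CP tensor is a sum over terms of the product of factor means, never forming the full tensor.

```python
import numpy as np

def cp_mean(terms):
    """Mean over all entries of u = sum_j  u_{j,1} x ... x u_{j,d}.
    terms[j][mu] is the factor vector u_{j,mu}; cost O(r * sum n_mu)."""
    return sum(np.prod([f.mean() for f in term]) for term in terms)

# check against the full tensor for a small d = 2 example
rng = np.random.default_rng(0)
terms = [[rng.standard_normal(4), rng.standard_normal(5)] for _ in range(3)]
full = sum(np.multiply.outer(a, b) for a, b in terms)
assert np.isclose(cp_mean(terms), full.mean())
```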
  • 24. Numerical experiments. 2D L-shaped domain, N = 557 dofs. The total stochastic dimension is M_u = M_k + M_f = 20; there are |J| = 231 PCE coefficients: u = Σ_{j=1}^{231} u_{j,0} ⊗ ⊗_{μ=1}^{20} u_{jμ} ∈ R^{557} ⊗ ⊗_{μ=1}^{20} R³.
  • 25. Level sets. Now we compute the level sets sign(b‖u‖_∞ 1 − u) for b ∈ {0.2, 0.4, 0.6, 0.8}. The tensor u has 3²⁰ · 557 ≈ 2·10¹² entries ≈ 16 TB of memory. The computing time for one level set was 10 minutes. The intermediate ranks of sign(b‖u‖_∞ 1 − u) and rank(u_k) were less than 24.
  • 26. Part II: Bayesian update. We will speak about the Gauss-Markov-Kalman filter for the Bayesian updating of parameters in a computational model.
  • 27. Mathematical setup. Consider K(u; q) = f ⇒ u = S(f; q), where S is the solution operator. The operator depends on parameters q ∈ Q, hence the state u ∈ U is also a function of q. Measurement operator Y with values in Y: y = Y(q; u) = Y(q, S(f; q)). Examples of measurements: y(ω) = ∫_{D₀} u(ω, x) dx, or u at a few points.
  • 28. Random QoI. With the state u a random variable (RV), the quantity to be measured, y(ω) = Y(q(ω), u(ω)), is also uncertain, a random variable. Noisy data: ŷ + ε(ω), where ŷ is the 'true' value and ε a random error. Forecast of the measurement: z(ω) = y(ω) + ε(ω).
  • 29. Conditional probability and expectation. Classically, Bayes' theorem gives the conditional probability P(I_q|M_z) = P(M_z|I_q) P(I_q) / P(M_z) (or π_q(q|z) = p(z|q) p_q(q) / Z_s); the expectation with this posterior measure is the conditional expectation. Kolmogorov starts from the conditional expectation E(·|M_z) and from it obtains the conditional probability via P(I_q|M_z) = E(χ_{I_q}|M_z).
  • 30. Conditional expectation. The conditional expectation is defined as the orthogonal projection onto the closed subspace L²(Ω, P, σ(z)): E(q|σ(z)) := P_{Q_∞} q = argmin_{q̃∈L²(Ω,P,σ(z))} ‖q − q̃‖²_{L²}. The subspace Q_∞ := L²(Ω, P, σ(z)) represents the available information. The update, also called the assimilated value, q_a(ω) := P_{Q_∞} q = E(q|σ(z)), is a Q-valued RV and represents the new state of knowledge after the measurement. Doob-Dynkin: Q_∞ = {ϕ ∈ Q : ϕ = φ ∘ z, φ measurable}.
  • 31. Numerical computation of the NLBU. Look for ϕ such that q(ξ) = ϕ(z(ξ)), z(ξ) = y(ξ) + ε(ω): set ϕ ≈ ϕ̃ = Σ_{α∈J_p} ϕ_α Φ_α(z(ξ)) and minimize ‖q(ξ) − ϕ̃(z(ξ))‖²_{L²}, where the Φ_α are polynomials (e.g. Hermite, Laguerre, Chebyshev, or others). Taking derivatives with respect to ϕ_α: ∂/∂ϕ_α ⟨q(ξ) − ϕ̃(z(ξ)), q(ξ) − ϕ̃(z(ξ))⟩ = 0 for all α ∈ J_p. Inserting the representation for ϕ̃, we obtain:
  • 32. Numerical computation of the NLBU. ∂/∂ϕ_α E[q²(ξ) − 2 Σ_{β∈J} q ϕ_β Φ_β(z) + Σ_{β,γ∈J} ϕ_β ϕ_γ Φ_β(z) Φ_γ(z)] = 2 E[−q Φ_α(z) + Σ_{β∈J} ϕ_β Φ_β(z) Φ_α(z)] = 2 (Σ_{β∈J} E[Φ_β(z) Φ_α(z)] ϕ_β − E[q Φ_α(z)]) = 0 for all α ∈ J.
  • 33. Numerical computation of the NLBU. Now, rewriting the last sum in matrix form, we obtain the linear system of equations A ϕ = b for the coefficients ϕ_β: the matrix A has entries E[Φ_α(z(ξ)) Φ_β(z(ξ))], the unknown vector is (..., ϕ_β, ...)^T, and the right-hand side has entries E[q(ξ) Φ_α(z(ξ))], where α, β ∈ J and A is of size |J| × |J|.
  • 34. Numerical computation of the NLBU. We can rewrite the system above in the compact form [Φ][diag(..., w_i, ...)][Φ]^T (..., ϕ_β, ...)^T = [Φ](w₀ q(ξ₀), ..., w_N q(ξ_N))^T, with [Φ] ∈ R^{|J_α|×N} and [diag(..., w_i, ...)] ∈ R^{N×N}. Solving this system, we obtain the vector of coefficients (..., ϕ_β, ...)^T for all β. Finally, the assimilated parameter q_a will be q_a = q_f + ϕ̃(ŷ) − ϕ̃(z), (5) with z(ξ) = y(ξ) + ε(ω) and ϕ̃ = Σ_{β∈J_p} ϕ_β Φ_β(z(ξ)).
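The compact system [Φ] diag(w) [Φ]^T ϕ = [Φ] (w ⊙ q) can be sketched in a few lines (an illustrative toy, not the authors' sglib code; here Φ_β are plain monomials rather than Hermite polynomials, and w_i are sample weights):

```python
import numpy as np

def nlbu_coefficients(z, q, w, p=2):
    """Solve [Phi] diag(w) [Phi]^T phi = [Phi] (w*q) for the surrogate
    coefficients of phi_tilde(z) = sum_b phi_b z^b, b = 0..p."""
    Phi = np.vander(z, p + 1, increasing=True).T   # (p+1) x N, rows are z^b
    A = Phi @ np.diag(w) @ Phi.T                   # approximates E[Phi_a Phi_b]
    rhs = Phi @ (w * q)                            # approximates E[q Phi_a]
    return np.linalg.solve(A, rhs)

# Toy check: if q depends exactly linearly on z, the coefficients are recovered.
rng = np.random.default_rng(1)
z = rng.standard_normal(200)
q = 2.0 + 3.0 * z
w = np.full_like(z, 1.0 / z.size)
coeffs = nlbu_coefficients(z, q, w, p=1)
assert np.allclose(coeffs, [2.0, 3.0])
```

With the coefficients in hand, the update (5) is q_a = q_f + ϕ̃(ŷ) − ϕ̃(z) evaluated samplewise.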
  • 35. Example: Lorenz 1963 problem (a chaotic system of ODEs). ẋ = σ(ω)(y − x), ẏ = x(ρ(ω) − z) − y, ż = xy − β(ω)z. The initial state q₀(ω) = (x₀(ω), y₀(ω), z₀(ω)) is uncertain. Solve at t₀, t₁, ..., t₁₀, noisy measurement → UPDATE; solve at t₁₁, t₁₂, ..., t₂₀, noisy measurement → UPDATE; ... IDEA of the Bayesian update (BU): take q_f(ω) = q₀(ω). Linear BU: q_a = q_f + K·(z − y). Non-linear BU: q_a = q_f + H₁·(z − y) + (z − y)^T·H₂·(z − y).
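The linear BU q_a = q_f + K·(z − y) can be illustrated with a scalar ensemble toy (a hedged sketch, not the slides' Lorenz experiment): with the Kalman-type gain K = cov(q, y)/(var(y) + var(ε)), the posterior ensemble mean moves toward the measurement when the prior variance dominates the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
q_f = rng.normal(0.0, 1.0, 20000)           # prior (forecast) ensemble
y = q_f                                      # toy identity measurement operator
sig_eps = 0.1
z = y + rng.normal(0.0, sig_eps, y.size)     # forecast of the noisy measurement
z_hat = 1.0                                  # the actually measured value

K = np.cov(q_f, y)[0, 1] / (np.var(y, ddof=1) + sig_eps**2)
q_a = q_f + K * (z_hat - z)                  # linear Bayesian update, samplewise
# prior var (1) >> noise var (0.01), so the posterior mean is close to z_hat
assert abs(q_a.mean() - z_hat) < 0.05
```

The quadratic BU adds the term (z − y)^T H₂ (z − y), i.e. one more regression coefficient per component, fitted exactly as in the NLBU system above.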
  • 36. Trajectories of x, y and z in time. After each update (new information coming in) the uncertainty drops. [O. Pajonk, B. V. Rosic, A. Litvinenko, and H. G. Matthies, 2012]
  • 37. Example: Lorenz problem. Figure: quadratic BU surrogate, measuring the state (x(t), y(t), z(t)); prior and posterior densities of x, y, z after one update.
  • 38. Example: Lorenz problem. Figure: comparison of the posterior densities computed by linear and quadratic BU after the second update.
  • 39. Example: Lorenz problem. Figure: quadratic measurement (x(t)², y(t)², z(t)²); comparison of the prior and the posterior for the NLBU.
  • 40. Example: 1D elliptic PDE with uncertain coefficients. −∇·(κ(x, ξ)∇u(x, ξ)) = f(x, ξ), x ∈ [0, 1], plus Dirichlet random b.c. g(0, ξ) and g(1, ξ). 3 measurements: u(0.3) = 22 (s.d. 0.2), u(0.5) = 28 (s.d. 0.3), u(0.8) = 18 (s.d. 0.3). κ(x, ξ): N = 100 dofs, M = 5, 35 KLE terms, beta distribution for κ, Gaussian cov_κ, cov. length 0.1, multivariate Hermite polynomials of order p_κ = 2. RHS f(x, ξ): M_f = 5, 40 KLE terms, beta distribution, exponential cov_f, cov. length 0.03, order p_f = 2. b.c. g(x, ξ): M_g = 2, 2 KLE terms, normal distribution for g, Gaussian cov_g, cov. length 10, order p_g = 1; p_φ = 3 and p_u = 3.
  • 41. Example: updating of the solution u. Figure: original and updated solutions, mean value plus/minus 1, 2, 3 standard deviations. [Graphics are built in the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig.]
  • 42. Example: updating of the parameter. Figure: original and updated parameter κ.
  • 43. Future plans and possible collaboration ideas.
  • 44. Future plans, idea N1. Possible collaboration work with Troy Butler: develop a low-rank adaptive goal-oriented Bayesian update technique. The solution of the forward and inverse problems will be considered as a whole adaptive process, controlled by error/uncertainty estimators. [Diagram: forward solve (spatial discretization, stochastic discretization, low-rank approximation) feeding the update q_f → q_a via z − y, with errors from the low-rank/adaptive approximation of the inverse operator.]
  • 45. Future plans, idea N2. An edge between Green's functions in PDEs and covariance matrices. Possible collaboration with the statistics groups of Doug Nychka (NCAR) and Håvard Rue.
  • 46. Future plans, idea N3. Data assimilation techniques, Bayesian update surrogate. Develop a non-linear, non-Gaussian Bayesian update approximation for gPCE coefficients. Possible collaboration with Jan Mandel, Troy Butler, Kody Law, Y. Marzouk, H. Najm, TU Braunschweig and KAUST.
  • 47. Collaborators. 1. Uncertainty quantification and Bayesian update: Prof. H. Matthies, Bojana V. Rosic, Elmar Zander, Oliver Pajonk from TU Braunschweig, Germany. 2. Low-rank tensor calculus: Mike Espig from RWTH Aachen, Boris and Venera Khoromskij from MPI Leipzig. 3. Spatial and environmental statistics: Marc Genton, Ying Sun, Raphael Huser, Brian Reich, Ben Shaby and David Bolin. 4. Others: UQ, data assimilation, high-dimensional problems/statistics.
  • 48. Conclusion. Introduced low-rank tensor methods to solve elliptic PDEs with uncertain coefficients; explained how to compute the maximum, the mean, level sets, etc., in a low-rank tensor format; derived a Bayesian update surrogate ϕ (as a linear, quadratic, cubic, etc. approximation), i.e., computed the conditional expectation of q given the measurement y.
  • 49. Example: canonical rank d, whereas the TT rank is 2. The d-Laplacian over a uniform tensor grid is known to have the Kronecker rank-d representation Δ_d = A⊗I_N⊗...⊗I_N + I_N⊗A⊗...⊗I_N + ... + I_N⊗I_N⊗...⊗A, (6) with A = Δ₁ = tridiag{−1, 2, −1} ∈ R^{N×N} and I_N the N × N identity. Notice that for the canonical rank we have rank_C(Δ_d) = d, while the TT-rank of Δ_d equals 2 for any dimension, due to the explicit representation Δ_d = (Δ₁ I) × (I 0; Δ₁ I) × ... × (I 0; Δ₁ I) × (I; Δ₁), (7) where the rank product operation '×' is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of tensor products. A similar bound holds for the Tucker rank: rank_Tuck(Δ_d) = 2.
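The Kronecker-sum representation (6) is easy to build explicitly for small N and d (a dense sketch for illustration only; in practice one never assembles the full N^d × N^d matrix):

```python
import numpy as np

def laplacian_kron_sum(N, d):
    """Delta_d = sum_k I x ... x A x ... x I  with A in slot k,
    A = tridiag(-1, 2, -1) the 1D Laplacian stencil."""
    A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    I = np.eye(N)
    total = np.zeros((N**d, N**d))
    for k in range(d):
        term = np.array([[1.0]])
        for j in range(d):
            term = np.kron(term, A if j == k else I)
        total += term
    return total

L = laplacian_kron_sum(3, 2)
assert L.shape == (9, 9) and np.allclose(L, L.T)
assert np.isclose(np.trace(L), 2 * 2 * 3**2)   # trace(Delta_d) = 2 d N^d
```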
  • 50. Advantages and disadvantages. Denote by k the rank, by d the dimension, and by n the number of dofs in 1D: 1. CP: ill-posed approximation algorithm, storage O(dnk), approximations hard to compute. 2. Tucker: reliable SVD-based arithmetic, O(dnk + k^d). 3. Hierarchical Tucker: SVD-based, storage O(dnk + dk³), truncation O(dnk² + dk⁴). 4. TT: SVD-based, O(dnk²) or O(dnk³), stable. 5. Quantics-TT: O(n^d) → O(d log_q n).
  • 51. How to compute the variance in the CP format. Let u be of rank r and ũ := u − ū ⊗_{μ=1}^d 1̃_μ = Σ_{j=1}^{r+1} ⊗_{μ=1}^d ũ_{jμ}, (8) of rank r + 1; then the variance var(u) of u can be computed as var(u) = ⟨ũ, ũ⟩ / ∏_{μ=1}^d n_μ = (1/∏_{μ=1}^d n_μ) ⟨Σ_{i=1}^{r+1} ⊗_{μ=1}^d ũ_{iμ}, Σ_{j=1}^{r+1} ⊗_{ν=1}^d ũ_{jν}⟩ = Σ_{i=1}^{r+1} Σ_{j=1}^{r+1} ∏_{μ=1}^d (1/n_μ) ⟨ũ_{iμ}, ũ_{jμ}⟩. The numerical cost is O((r+1)² · Σ_{μ=1}^d n_μ).
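The centering trick in code (an illustrative sketch): subtracting the mean adds one rank-1 term, after which the variance is a double sum of products of factor inner products, again without forming the full tensor.

```python
import numpy as np

def cp_variance(terms):
    """Variance over all entries of u given in CP form.
    terms[j][mu] is the factor vector u_{j,mu} in R^{n_mu}."""
    ns = [len(f) for f in terms[0]]
    mean = sum(np.prod([f.mean() for f in t]) for t in terms)
    # u_tilde = u - mean * (ones x ... x ones): one extra rank-1 term
    centered = [list(t) for t in terms]
    centered.append([-mean * np.ones(ns[0])] + [np.ones(n) for n in ns[1:]])
    # <u_tilde, u_tilde> = sum_{i,j} prod_mu <u~_{i,mu}, u~_{j,mu}>
    tot = sum(np.prod([np.dot(a, b) for a, b in zip(ti, tj)])
              for ti in centered for tj in centered)
    return tot / np.prod(ns)

rng = np.random.default_rng(3)
terms = [[rng.standard_normal(3), rng.standard_normal(4)] for _ in range(2)]
full = sum(np.multiply.outer(a, b) for a, b in terms)
assert np.isclose(cp_variance(terms), full.var())
```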
  • 52. Computing QoI in a low-rank tensor format. Now we consider how to find 'level sets', for instance all entries of the tensor u from an interval [a, b].
  • 53. Definitions of the characteristic and sign functions. 1. To compute level sets and frequencies we need the characteristic function. 2. To compute the characteristic function we need the sign function. The characteristic χ_I(u) ∈ T of u ∈ T in I ⊂ R is, for every multi-index i ∈ I, pointwise defined as (χ_I(u))_i := 1 if u_i ∈ I, and 0 if u_i ∉ I. Furthermore, sign(u) ∈ T is for all i ∈ I pointwise defined by (sign(u))_i := 1 if u_i > 0; −1 if u_i < 0; 0 if u_i = 0.
  • 54. sign(u) is needed for computing χ_I(u). Lemma. Let u ∈ T, a, b ∈ R, and 1 = ⊗_{μ=1}^d 1̃_μ, where 1̃_μ := (1, ..., 1)^T ∈ R^{n_μ}. (i) If I = R_{<b}, then χ_I(u) = ½(1 + sign(b1 − u)). (ii) If I = R_{>a}, then χ_I(u) = ½(1 − sign(a1 − u)). (iii) If I = (a, b), then χ_I(u) = ½(sign(b1 − u) − sign(a1 − u)). Compute sign(u) for u of rank r via a hybrid Newton-Schulz iteration with rank truncation after each iteration.
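A dense sketch of the Newton-Schulz iteration for the entrywise sign (in the tensor algorithm every step is followed by rank truncation, which is omitted here):

```python
import numpy as np

def sign_newton_schulz(u, iters=50):
    """Entrywise sign via v <- v*(3 - v^2)/2, converging to sign(u_i)
    for every nonzero entry once |v_i| <= 1 after the initial scaling."""
    v = u / np.max(np.abs(u))        # scale so that |v_i| <= 1
    for _ in range(iters):
        v = 0.5 * v * (3.0 - v * v)
    return v

u = np.array([0.3, -2.0, 5.0, -0.01])
s = sign_newton_schulz(u)
assert np.allclose(s, np.sign(u), atol=1e-8)
```

Small entries converge slowest (the step multiplies them by roughly 3/2 until they reach O(1)), which is why the number of iterations depends on the spread of magnitudes in u.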
  • 55. Level set, frequency. Definition (level set, frequency). Let I ⊂ R and u ∈ T. The level set L_I(u) ∈ T of u with respect to I is pointwise defined by (L_I(u))_i := u_i if u_i ∈ I, and 0 if u_i ∉ I, for all i ∈ I. The frequency F_I(u) ∈ N of u with respect to I is defined as F_I(u) := # supp χ_I(u).
  • 56. Computation of level sets and the frequency. Proposition. Let I ⊂ R, u ∈ T, and χ_I(u) its characteristic. We have L_I(u) = χ_I(u) ⊙ u and rank(L_I(u)) ≤ rank(χ_I(u)) rank(u). The frequency F_I(u) ∈ N of u with respect to I is F_I(u) = ⟨χ_I(u), 1⟩, where 1 = ⊗_{μ=1}^d 1̃_μ, 1̃_μ := (1, ..., 1)^T ∈ R^{n_μ}.