
MASTER’S THESIS

2005:184 CIV

MAGNUS SVEDJEBRATT

Semi-analytical Solution of Physical Initial-value Problems

Using the Imprint Method

MASTER OF SCIENCE PROGRAMME Engineering Physics

Luleå University of Technology

Department of Mathematics


Sammanfattning

Numerical solutions of partial differential equations are often restricted to including parametric dependence through a series of computations. The imprint method is a semi-analytical spectral method. It takes advantage of today's powerful computers with symbolic computation and implements parametric dependence by allowing an arbitrary number of variables and parameters in the solution.

This Master's thesis treats the imprint method for both polynomial and Chebyshev bases. For evaluation, the Burgers equation is mainly used, since it permits comparative tests against an analytical solution.

Much of this work is also aimed at creating a flexible implementation in Maple.

Abstract

Numerical solution of partial differential equations is often restricted to including parametric dependence through a series of calculations. The imprint method, a semi-analytical spectral method that takes advantage of today's powerful symbolic computer mathematics, includes parametric dependence by allowing an arbitrary number of variables and parameters within the solution.

In this Master's thesis the imprint method is evaluated for both polynomial and Chebyshev basis functions. Benchmarking is done mainly against the Burgers equation, which allows comparison with exact solutions.

Much of this thesis work is also dedicated to producing a versatile implementation for the Maple platform.


Contents

1 Introduction
  1.1 Background
  1.2 Purpose
  1.3 Problem
2 Theory
  2.1 The imprint
  2.2 Approximation and boundary conditions
    2.2.1 Convergence of Chebyshev approximation
    2.2.2 Discrete Chebyshev approximation of a multivariate polynomial
    2.2.3 Stop criterion
    2.2.4 Approximation of a polynomial
  2.3 Other time advancement schemes
3 Imprint with Chebyshev expansion
  3.1 The imprint
  3.2 Chebyshev approximation of a Chebyshev series
4 Example - The viscous Burgers equation
  4.1 Exact solution
  4.2 Approximate solution with polynomial expansion for one variable and two parameters
    4.2.1 Stability
  4.3 Approximate solution with Chebyshev expansion for one variable and one parameter
  4.4 Conclusion - Polynomial or Chebyshev basis
5 Solution of a system of differential equations
  5.1 Example - The wave equation
  5.2 Example - The two-dimensional Burgers equation
    5.2.1 Approximate solution for two variables
6 Discussion
  6.1 Limitations
  6.2 Implementation
  6.3 Code
A Schematic diagram over the procedure structure
B Maple code reference - Polynomial expansion
  B.1 Defined global variables
  B.2 Defined procedures
C Maple code - Polynomial expansion
  C.1 Passive code - Procedures
  C.2 Active code - Choice of variables, parameters, equations etc.
    C.2.1 Code for the one-dimensional Burgers equation
    C.2.2 Code for the wave equation
    C.2.3 Code for the two-dimensional Burgers equation


Chapter 1

Introduction

1.1 Background

We use differential equations to describe physical behaviour in nature. When the model becomes too complicated, analytical solutions may not exist and we must rely on numerical methods to find solutions. But numerical simulation is often limited in that parametric dependence can only be included through a series of calculations, requiring large amounts of computing time.

With today's computers, which can handle symbolic calculations, a new method has been invented [1] in which semi-analytical solutions of partial differential equations (PDEs) are determined and expressed as polynomial expansions in physical variables and parameters, and which thus can include parametric dependence in the solution. This method, the imprint method, is an extension of spectral methods [2]. It can be classified as a time-step method where the solution is analytical in space and where each PDE is advanced as if it were an ordinary differential equation in time.

1.2 Purpose

The purpose of this thesis is to examine and improve the imprint method for semi-analytical solutions of physical initial value problems. The method could become important within, for example, fluid mechanics and fusion plasma physics when the physical behaviour has a strong parametrical dependence.

1.3 Problem

The imprint method needs to be evaluated and implemented to work with several variables and parameters in systems of differential equations, and for arbitrary explicit time advancement schemes. My goal has been to realize this by producing executable code for the Maple platform that uses an arbitrary number of variables and parameters. I extended the imprint method to operate with a Chebyshev basis and compared its accuracy against a regular polynomial basis. The approximation was modified from earlier implementations to work with an unexpanded function, i.e. solely with the coefficients, as well as with a multivariate stopping criterion.

The different variations were tested and compared, mainly against the viscous Burgers equation, a PDE containing both the effects of a convective nonlinearity and diffusion and possessing an exact analytical solution. Finally, a system of differential equations is solved for a few time-steps.


Chapter 2

Theory

Commonly used numerical methods for solving initial-value PDEs include finite difference [3], finite element [4] and spectral [2] or weighted residual (WRM) [5] methods.

Spectral methods can be divided into two groups: non-interpolating methods and interpolating (pseudospectral) methods [2]. Both use global basis functions, but different ways of minimizing the residual. In our case, the imprint method has much in common with pseudospectral methods, where an approximate analytical solution is assumed and the residual is minimized by Chebyshev approximation.

The system of PDEs that we wish to solve can be described symbolically as

\[
\frac{\partial U}{\partial t} = D \cdot U \tag{2.1}
\]

where U is the solution vector and D is any spatial linear or nonlinear operator, with a set of initial and boundary conditions to make the problem well-posed.

2.1 The imprint

We want to use a time-stepping technique and solve the PDE as an ordinary differential equation in time at each time-step. Thus no spatial grid needs to be introduced and the spatial algorithm would be exact. However, nonlinearity in the equation gives unmanageably large expressions, and a choice of a finite set of convergent expansion functions must be made.

Any time advancement scheme will work within the algorithm. Choosing an explicit scheme and expanding the solution in some basis functions, we can, for a specific differential equation, deduce an explicit expression that determines a new solution advanced one time-step from any previous state. This expression, transforming the solution from state i to i + 1, is called the imprint of the equation and is the heart of the algorithm.

One of the simplest (explicit) time advancement schemes is Euler’s forward scheme that for (2.1) looks like

\[
U^{i+1} = U^{i} + \Delta t \, D \cdot U^{i} \tag{2.2}
\]

where i denotes time step and ∆t is the time-step length.
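The forward Euler step (2.2) can be sketched numerically. The scalar test operator D(u) = −λu below is an assumption for illustration only; the imprint method applies the same step to series coefficients rather than to point values, and the thesis implementation is in Maple.

```python
import math

def euler_advance(u, D, dt, steps):
    """Advance u by `steps` forward-Euler steps of length dt: u <- u + dt*D(u)."""
    for _ in range(steps):
        u = u + dt * D(u)
    return u

# Test problem du/dt = -lam*u with exact solution u0*exp(-lam*t).
lam = 2.0
u0 = 1.0
u_num = euler_advance(u0, lambda u: -lam * u, dt=1e-3, steps=1000)
u_exact = math.exp(-lam * 1.0)
print(abs(u_num - u_exact))  # small discretization error, O(dt)
```

The same loop structure carries over when `u` is a vector of expansion coefficients and `D` is the imprint of the spatial operator.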


A polynomial basis may be used as representation of U. We may write it as
\[
U^{i}(x_1, x_2, \ldots; p_1, p_2, \ldots) = \sum_{j,k,\ldots,l,m,\ldots} a^{i}_{j,k,\ldots,l,m,\ldots}\, x_1^{j} x_2^{k} \cdots p_1^{l} p_2^{m} \cdots \tag{2.3}
\]
where the \(x_\alpha\) denote spatial coordinates, the \(p_\alpha\) denote parameters of the problem, and the \(a^{i}_{j,k,\ldots,l,m,\ldots}\) are the coefficients at time t = i∆t.

When \(U^{i}\) is inserted into (2.2) for a specific PDE and the expression for \(U^{i+1}\) is expanded as a new polynomial expansion, we get the imprint as the coefficients \(a^{i+1}_{j,k,\ldots,l,m,\ldots}\) expressed as functions of the \(a^{i}_{j,k,\ldots,l,m,\ldots}\).

2.2 Approximation and boundary conditions

In the previous section we concluded that, for nonlinear problems, the series calculated at each time-step will increase in order. By simply neglecting higher-order terms, information is lost. A better way to reduce the order is to use an accurate spectral approximation that can be cast in polynomial form, e.g. discrete Chebyshev approximation, at the end of each time-step. A Chebyshev approximation consists of a series of weighted Chebyshev polynomials [6].

Here we also have an opportunity to include the boundary conditions. By choosing a grid that exactly includes the boundary, the function values at these points can be set to the boundary values. The approximated function then satisfies the boundary conditions. It is then converted back to a polynomial, which is used in the next time-step calculation. Other ways of implementing boundary conditions include, for example, the spline method [1].

2.2.1 Convergence of Chebyshev approximation

Uniform convergence of the Chebyshev series expansion can be proved [7] for functions of one variable satisfying the Dini-Lipschitz condition. The result extends to functions of a finite number of variables.

This means that Chebyshev approximation can successfully approximate a wide range of functions, as long as they are sufficiently smooth and satisfy the Dini-Lipschitz condition. Hence we will assume that Chebyshev approximation suffices in our case.

2.2.2 Discrete Chebyshev approximation of a multivariate polynomial

Since the solution we deploy depends on an arbitrary number of variables, it is necessary to be able to approximate over several dimensions. We begin by studying the one-dimensional discrete Chebyshev approximation, which is the one usually found in the literature. This is then generalized to two dimensions; generalization to even higher dimensions is straightforward.

The Chebyshev polynomial of degree n is denoted \(T_n(x)\) and is defined as
\[
T_n(x) = \cos(n \arccos x). \tag{2.4}
\]
They can also be written as polynomials, where \(T_0(x) = 1\), \(T_1(x) = x\), \(T_2(x) = 2x^2 - 1\), \(T_3(x) = 4x^3 - 3x\), etc. These polynomials form a complete set and satisfy a continuous orthogonality relation [6] with respect to the weight function \(w(x) = (1 - x^2)^{-1/2}\), as well as a discrete one. If \(x_k\) (k = 1, \ldots, n) are the zeros of \(T_n(x)\), found from (2.4) as
\[
x_k = \cos\!\left(\frac{\pi (k - \tfrac{1}{2})}{n}\right), \qquad k = 1, 2, \ldots, n, \tag{2.5}
\]
the discrete orthogonality relation states
\[
\sum_{k=1}^{n} T_i(x_k)\, T_j(x_k) =
\begin{cases}
0 & i \neq j \\
n/2 & i = j \neq 0 \\
n & i = j = 0
\end{cases}. \tag{2.6}
\]
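The discrete orthogonality relation (2.6) on the nodes (2.5) is easy to check numerically; a minimal self-contained sketch:

```python
import math

def cheb_nodes(n):
    """Zeros of T_n: x_k = cos(pi*(k - 1/2)/n), k = 1..n, per (2.5)."""
    return [math.cos(math.pi * (k - 0.5) / n) for k in range(1, n + 1)]

def T(n, x):
    """Chebyshev polynomial T_n(x) = cos(n arccos x), per (2.4), for |x| <= 1."""
    return math.cos(n * math.acos(x))

n = 8
xs = cheb_nodes(n)

def dot(i, j):
    """Discrete inner product sum_k T_i(x_k) T_j(x_k) of (2.6)."""
    return sum(T(i, x) * T(j, x) for x in xs)

print(dot(2, 3))  # ~0      (i != j)
print(dot(3, 3))  # ~n/2    (i = j != 0)
print(dot(0, 0))  # n       (i = j = 0)
```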

The discrete Chebyshev approximation is an interpolation of Chebyshev polynomials between a finite set of points; at these points the approximation is exact. We choose the points according to (2.5) and let f(x) be a function on the interval [−1, 1] that we wish to approximate as
\[
f(x) \approx \sum_{i=0}^{n-1} c_i T_i(x) \tag{2.7}
\]
where the \(c_i\) are the coefficients to be determined.

At the points \(\{x_k\}\) we get
\[
f(x_k) = \sum_{i=0}^{n-1} c_i T_i(x_k). \tag{2.8}
\]

Then multiplying with \(T_j(x_k)\) and summing over k gives us
\[
\sum_{k=1}^{n} f(x_k) T_j(x_k) = \sum_{i=0}^{n-1} c_i \sum_{k=1}^{n} T_i(x_k) T_j(x_k) =
\begin{cases}
\dfrac{n}{2}\, c_j & j > 0 \\
n\, c_j & j = 0
\end{cases}. \tag{2.9}
\]

Remembering that the first coefficient is to be divided in half, \(c_j\) becomes
\[
c_j = \frac{2}{n} \sum_{k=1}^{n} f(x_k)\, T_j(x_k)
    = \frac{2}{n} \sum_{k=1}^{n} f\!\left(\cos\frac{\pi(k-\tfrac{1}{2})}{n}\right) \cos\frac{\pi j (k-\tfrac{1}{2})}{n}, \qquad j = 0, 1, \ldots, n-1. \tag{2.10}
\]
When the approximation is between two arbitrary boundaries [α, β], we make the change of variable
\[
s \equiv \frac{x - \tfrac{1}{2}(\beta + \alpha)}{\tfrac{1}{2}(\beta - \alpha)} \tag{2.11}
\]
and approximate f(x) by a Chebyshev polynomial in s.
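The one-dimensional scheme (2.7)-(2.10) fits in a few lines; the sketch below uses the "divide the first coefficient in half" convention and verifies that the interpolant is exact at the nodes (2.5).

```python
import math

def cheb_coeffs(f, n):
    """c_j = (2/n) * sum_k f(x_k) T_j(x_k), j = 0..n-1, per (2.10)."""
    return [
        (2.0 / n) * sum(
            f(math.cos(math.pi * (k - 0.5) / n))
            * math.cos(math.pi * j * (k - 0.5) / n)
            for k in range(1, n + 1)
        )
        for j in range(n)
    ]

def cheb_eval(c, x):
    """Evaluate sum_j c_j T_j(x) with c_0 divided in half."""
    total = 0.5 * c[0]
    for j in range(1, len(c)):
        total += c[j] * math.cos(j * math.acos(x))
    return total

f = math.exp                         # an illustrative smooth test function
c = cheb_coeffs(f, 10)
x1 = math.cos(math.pi * 0.5 / 10)    # first interpolation node
print(abs(cheb_eval(c, x1) - f(x1)))  # ~0: exact at the interpolation points
```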


Now turning to a two-dimensional function f(x, y), let \(N_x\) and \(N_y\) denote the number of interpolation points in x and y respectively. Firstly, in the x-direction (2.7) becomes
\[
f(x, y) \approx \sum_{i=0}^{N_x - 1} c_i(y)\, T_i(x) - \frac{c_0(y)}{2}, \tag{2.12}
\]
and then expanding the functions \(c_i(y)\) into a Chebyshev series in the y-direction yields
\[
c_i(y) \approx \sum_{j=0}^{N_y - 1} c_{i,j}\, T_j(y) - \frac{c_{i,0}}{2}. \tag{2.13}
\]
The coefficients \(\{c_i(y), c_{i,j}\}\) are derived from (2.10):
\[
c_i(y) = \frac{2}{N_x} \sum_{a=1}^{N_x} f\!\left(\cos\frac{\pi(a-\tfrac{1}{2})}{N_x},\, y\right) \cos\frac{\pi i (a-\tfrac{1}{2})}{N_x} \tag{2.14}
\]
\[
c_{i,j} = \frac{2}{N_y} \sum_{b=1}^{N_y} c_i\!\left(\cos\frac{\pi(b-\tfrac{1}{2})}{N_y}\right) \cos\frac{\pi j (b-\tfrac{1}{2})}{N_y}. \tag{2.15}
\]

2.2.3 Stop criterion

A stop criterion is used to end the approximation calculation when it has reached a given tolerance. An error estimate can be calculated since for all n we have \(\|T_n(x)\| \leq 1\). In general, the coefficients decrease rapidly and the error is dominated by the first of the neglected coefficients. Hence a common stopping criterion is to look for the first triple of consecutive coefficients whose sum of absolute values is below the given tolerance. In the case where the coefficients are functions, we use the supremum norm of the sum of absolute values.
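The triple-of-coefficients rule above can be sketched as follows for numeric coefficients (the function-valued case with the supremum norm is not covered here):

```python
def truncation_index(coeffs, tol):
    """Index of the first of three consecutive coefficients whose absolute
    values sum below tol; len(coeffs) if no such triple exists."""
    for i in range(len(coeffs) - 2):
        if abs(coeffs[i]) + abs(coeffs[i + 1]) + abs(coeffs[i + 2]) < tol:
            return i
    return len(coeffs)

# Rapidly decreasing coefficients, as is typical for Chebyshev series.
c = [1.0, 0.5, 0.1, 1e-4, 2e-5, 3e-6, 1e-7]
print(truncation_index(c, 1e-3))  # 3: the triple starting at c[3] is below tol
```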

2.2.4 Approximation of a polynomial

If we know that the function f(x) to approximate is a polynomial of finite order m with coefficients \(d_j\) (j = 0, 1, \ldots, m), we can insert it into (2.10) as an unexpanded sum:
\[
c_j = \frac{2}{n} \sum_{k=1}^{n} \sum_{a=0}^{m} d_a \left(\cos\frac{\pi(k-\tfrac{1}{2})}{n}\right)^{\!a} \cos\frac{\pi j (k-\tfrac{1}{2})}{n}. \tag{2.16}
\]

Using this expression instead of (2.10) eliminates the need to expand the coefficients received from the imprint into polynomial form, which is an advantage when programming in languages like ANSI C without symbolic mathematics, or when an expanded polynomial becomes too large to handle.
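A direct numeric rendering of (2.16): the Chebyshev coefficients are computed straight from the monomial coefficients, with no symbolic expansion anywhere.

```python
import math

def cheb_coeffs_of_poly(d, n):
    """c_j per (2.16) for f(x) = sum_a d[a]*x**a, sampled at the n nodes (2.5)."""
    c = []
    for j in range(n):
        s = 0.0
        for k in range(1, n + 1):
            xk = math.cos(math.pi * (k - 0.5) / n)
            fxk = sum(da * xk**a for a, da in enumerate(d))  # f(x_k), unexpanded
            s += fxk * math.cos(math.pi * j * (k - 0.5) / n)
        c.append(2.0 * s / n)
    return c

# x^2 = (T_0 + T_2)/2; with the halved-c_0 convention this means c_0 = 1, c_2 = 0.5.
c = cheb_coeffs_of_poly([0.0, 0.0, 1.0], 5)
print(c[0], c[2])  # ~1.0, ~0.5
```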

2.3 Other time advancement schemes

The stability of Euler's forward scheme is limited by the CFL condition [8], which means that the value of ∆t must not exceed a certain value. The Euler scheme has accuracy O(∆t). Other explicit schemes include, for example, the modified Euler scheme, where the accuracy is improved to O((∆t)²) but the stability is still limited by the CFL condition. The alternative is to use a fully implicit or a semi-implicit scheme, enabling stability for large time-steps, but requiring the solution of an additional system of linear equations at the end of each time-step.


Chapter 3

Imprint with Chebyshev expansion

Although it is possible to convert a Chebyshev approximation to a corresponding polynomial approximation, this may not be a desirable transformation: it requires many significant digits due to cancellation. A better solution is to work directly with an imprint for Chebyshev series.

3.1 The imprint

The solution to (2.1) will now be represented as a Chebyshev series:
\[
U^{i}(x_1, x_2, \ldots; p_1, p_2, \ldots) = \sum_{j,k,\ldots;\,l,m,\ldots} a^{i}_{j,k,\ldots;l,m,\ldots}\, T_j(x_1) T_k(x_2) \cdots T_l(p_1) T_m(p_2) \cdots \tag{3.1}
\]
where \(T_n\) is the nth Chebyshev polynomial as before, the \(a^{i}_{j,k,\ldots;l,m,\ldots}\) are the coefficients at time t = i∆t, the \(x_\alpha\) denote spatial coordinates and the \(p_\alpha\) denote parameters of the problem.

Using the same procedure as for the polynomial expansion, we insert (3.1) into (2.2). The new expression must then be expanded to form the formal imprint, which means converting it back to a single series in the chosen basis functions, in our case Chebyshev polynomials.

To this end we need to be able to multiply Chebyshev polynomials with other Chebyshev polynomials, as well as with the variable itself, and express the result as a sum of Chebyshev polynomials. We have the relations [7]:
\[
x\,T_n(x) = \tfrac{1}{2}\left(T_{n+1}(x) + T_{|n-1|}(x)\right), \qquad
T_m(x)\,T_n(x) = \tfrac{1}{2}\left(T_{m+n}(x) + T_{|m-n|}(x)\right). \tag{3.2}
\]
Further, we also need an expression for the derivative of a Chebyshev series. This is done by a transformation of the coefficients.
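The product relations (3.2) above are straightforward to spot-check numerically at an arbitrary point:

```python
import math

def T(n, x):
    """Chebyshev polynomial T_n(x) = cos(n arccos x)."""
    return math.cos(n * math.acos(x))

x = 0.37          # arbitrary test point in [-1, 1]
m, n = 4, 6

# x*T_n(x) = (T_{n+1}(x) + T_{|n-1|}(x)) / 2
lhs1 = x * T(n, x)
rhs1 = 0.5 * (T(n + 1, x) + T(abs(n - 1), x))

# T_m(x)*T_n(x) = (T_{m+n}(x) + T_{|m-n|}(x)) / 2
lhs2 = T(m, x) * T(n, x)
rhs2 = 0.5 * (T(m + n, x) + T(abs(m - n), x))

print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))  # both ~0
```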


If \(a_r\), r = 0, \ldots, n+1, are the coefficients that approximate a function f(x) as the sum
\[
f(x) = \sum_{r=0}^{n+1} a_r T_r(x) - \frac{a_0}{2} \tag{3.3}
\]
and \(a'_r\) are the coefficients of the Chebyshev series that approximates the derivative of f(x), then the recurrence relation for transforming the \(a_r\) is given by [7]:
\[
a'_{r-1} = a'_{r+1} + 2 r a_r, \qquad (r = n+1, n, \ldots, 1) \tag{3.4}
\]
with
\[
a'_{n+1} = a'_{n+2} = 0. \tag{3.5}
\]
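The recurrence (3.4)-(3.5) translates directly to a short loop; the example differentiates \(T_2(x) = 2x^2 - 1\), whose derivative is \(4x = 4T_1(x)\):

```python
def cheb_derivative(a):
    """Coefficients of f' from coefficients a_r of f (halved-a_0 convention of
    (3.3)): a'_{r-1} = a'_{r+1} + 2 r a_r, downward from r = n+1, per (3.4)."""
    n1 = len(a)              # coefficients a_r for r = 0 .. n+1
    d = [0.0] * (n1 + 2)     # two extra zeros implement (3.5)
    for r in range(n1 - 1, 0, -1):
        d[r - 1] = d[r + 1] + 2 * r * a[r]
    return d[:n1]

# f = T_2, so f' = 4*T_1 (note: the T_0 slot is stored doubled by convention).
print(cheb_derivative([0.0, 0.0, 1.0]))  # [0.0, 4.0, 0.0]
```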

3.2 Chebyshev approximation of a Chebyshev series

The same situation as for the polynomial expansion arises: any nonlinearity in the PDE causes an increase in the order of the resulting series. As before, we wish to perform a Chebyshev approximation to remedy this problem. In the continuous case, the best Chebyshev approximation of a Chebyshev series is a pure truncation, since the Chebyshev polynomials are orthogonal. In the discrete case, however, an interpolation can be made, as shown below. This can be convenient for applying the boundary conditions.

In two dimensions, using the fact that f(x, y) now is a Chebyshev series of the form (3.1), with \(M_x\) and \(M_y\) the highest degrees of Chebyshev polynomials that result from the imprint, we can rewrite the formulas (2.14)-(2.15) for calculating the Chebyshev coefficients by replacing f(x, y) with an unexpanded sum:
\[
\begin{aligned}
c_j(y) &= \frac{2}{N_x} \sum_{a=1}^{N_x} \left[ \sum_{p=0}^{M_x} \sum_{q=0}^{M_y} a^{i}_{p,q}\,
T_p\!\left( \frac{2\left( \cos\frac{\pi(a-\frac{1}{2})}{N_x} \frac{\beta_x-\alpha_x}{2} + \frac{\beta_x+\alpha_x}{2} \right) - (\beta_x+\alpha_x)}{\beta_x-\alpha_x} \right)
T_q\!\left( \frac{2y - (\beta_y+\alpha_y)}{\beta_y-\alpha_y} \right) \right] \cos\frac{j\pi(a-\frac{1}{2})}{N_x} \\
&= \frac{2}{N_x} \sum_{a=1}^{N_x} \left[ \sum_{p=0}^{M_x} \sum_{q=0}^{M_y} a^{i}_{p,q}\,
T_p\!\left( \cos\frac{\pi(a-\frac{1}{2})}{N_x} \right)
T_q\!\left( \frac{2y - (\beta_y+\alpha_y)}{\beta_y-\alpha_y} \right) \right] \cos\frac{j\pi(a-\frac{1}{2})}{N_x} \\
&= \frac{2}{N_x} \sum_{q=0}^{M_y} T_q\!\left( \frac{2y - (\beta_y+\alpha_y)}{\beta_y-\alpha_y} \right)
\sum_{a=1}^{N_x} \sum_{p=0}^{M_x} a^{i}_{p,q} \cos\frac{p\pi(a-\frac{1}{2})}{N_x} \cos\frac{j\pi(a-\frac{1}{2})}{N_x}
\end{aligned} \tag{3.6}
\]
which is just a new Chebyshev series in one dimension. Specifically, if we define the coefficients \(d_{j,q}\) as
\[
d_{j,q} = \frac{2}{N_x} \sum_{a=1}^{N_x} \sum_{p=0}^{M_x} a^{i}_{p,q} \cos\frac{p\pi(a-\frac{1}{2})}{N_x} \cos\frac{j\pi(a-\frac{1}{2})}{N_x} \tag{3.7}
\]
then (3.6) is
\[
c_j(y) = \sum_{q=0}^{M_y} d_{j,q}\, T_q\!\left( \frac{2y - (\beta_y+\alpha_y)}{\beta_y-\alpha_y} \right). \tag{3.8}
\]

Inserting (3.8) into (2.15) we get, for each j, an approximation in the y-direction:
\[
\begin{aligned}
c_{j,k} &= \frac{2}{N_y} \sum_{b=1}^{N_y} \sum_{q=0}^{M_y} d_{j,q}\,
T_q\!\left( \frac{2\left( \cos\frac{\pi(b-\frac{1}{2})}{N_y} \frac{\beta_y-\alpha_y}{2} + \frac{\beta_y+\alpha_y}{2} \right) - (\beta_y+\alpha_y)}{\beta_y-\alpha_y} \right)
\cos\frac{k\pi(b-\frac{1}{2})}{N_y} \\
&= \frac{2}{N_y} \sum_{b=1}^{N_y} \sum_{q=0}^{M_y} d_{j,q}\, T_q\!\left( \cos\frac{\pi(b-\frac{1}{2})}{N_y} \right) \cos\frac{k\pi(b-\frac{1}{2})}{N_y} \\
&= \frac{2}{N_y} \sum_{b=1}^{N_y} \sum_{q=0}^{M_y} d_{j,q} \cos\frac{q\pi(b-\frac{1}{2})}{N_y} \cos\frac{k\pi(b-\frac{1}{2})}{N_y}.
\end{aligned} \tag{3.9}
\]

As noted above, the Chebyshev terms in the coefficients (3.8) and (3.9) reduce to single cosine terms, and both (3.8) and (3.9) are easily and quickly determined in the implementation by summing values from a pre-calculated matrix containing the evaluated cosine terms \(\cos\frac{i\pi(j-\frac{1}{2})}{N_x}\) and \(\cos\frac{i\pi(j-\frac{1}{2})}{N_y}\).


Chapter 4

Example - The viscous Burgers equation

The viscous Burgers equation possesses both a nonlinearity and an exact analytical solution. It therefore makes a good choice of test problem for the imprint method.

For one dimension the viscous Burgers equation is
\[
\frac{\partial u}{\partial t} = -u \frac{\partial u}{\partial x} + \nu \frac{\partial^2 u}{\partial x^2} \tag{4.1}
\]
where ν is the viscosity. The first term on the right-hand side is a convective derivative with a quadratic nonlinearity, and the second term is a second-order dissipative term multiplied by the viscosity ν.

Let x be specified in the region 0 ≤ x ≤ 1 and assume homogeneous Dirichlet conditions on the boundary, u(0, t; ν) = u(1, t; ν) = 0. The initial state (t = 0) is given by u(x, 0; ν). We begin by deriving the exact solution for the time evolution and spatial variation in x, containing the variable ν [1]. This will then be compared with the solution derived from the imprint algorithm with polynomial or Chebyshev expansion and with ∆t, ν or both as parameters.

4.1 Exact solution

The exact solution can be obtained after a transformation of (4.1) to a simpler, linear PDE, which is solved with the Fourier method [9]. The Cole-Hopf transformation [8]
\[
u = -2\nu\, \frac{\partial \Phi / \partial x}{\Phi} \tag{4.2}
\]
where Φ = Φ(x, t; ν), converts (4.1) to the diffusion equation
\[
\frac{\partial \Phi}{\partial t} = \nu \frac{\partial^2 \Phi}{\partial x^2} \tag{4.3}
\]
and the boundary conditions become
\[
\left. \frac{\partial \Phi}{\partial x} \right|_{x=0} = \left. \frac{\partial \Phi}{\partial x} \right|_{x=1} = 0. \tag{4.4}
\]


Let ϕ(x; ν) denote the initial condition for (4.3). It is found from (4.2):
\[
u(x, 0; \nu) = -2\nu\, \frac{\partial \varphi / \partial x}{\varphi}. \tag{4.5}
\]
Applying the Fourier method to (4.3) with the boundary and initial conditions, we get the solution
\[
\Phi(x, t; \nu) = \sum_{k} A_k\, e^{-\nu k^2 \pi^2 t} \cos(k\pi x) \tag{4.6}
\]
where
\[
A_0 = \int_0^1 \varphi(x)\,dx, \qquad A_k = 2\int_0^1 \varphi(x)\cos(k\pi x)\,dx, \quad k \geq 1. \tag{4.7}
\]
The analytic solution to (4.1) becomes
\[
u(x, t; \nu) = 2\pi\nu\, \frac{\displaystyle\sum_k k A_k\, e^{-\nu k^2 \pi^2 t} \sin(k\pi x)}{\displaystyle\sum_k A_k\, e^{-\nu k^2 \pi^2 t} \cos(k\pi x)}. \tag{4.8}
\]
In numerical calculations one can typically evaluate the sums over the first 40 terms, somewhat depending on the value of ν.
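The exact solution (4.6)-(4.8) can be evaluated as follows; the particular initial profile u(x, 0) = u₀ sin(πx), and hence the ϕ built from it via (4.5), is an assumption chosen for illustration (the thesis uses its own initial state), and the sums are truncated at 40 terms as suggested in the text.

```python
import math

nu = 0.05
u0 = 0.25

def phi0(x):
    # phi chosen so that u(x,0) = u0*sin(pi*x) via (4.5): phi = exp(u0*cos(pi*x)/(2*pi*nu)).
    return math.exp(u0 * math.cos(math.pi * x) / (2.0 * math.pi * nu))

def fourier_coeffs(nterms, npts=2000):
    """A_0 and A_k of (4.7) by the trapezoidal rule on [0, 1]."""
    h = 1.0 / npts
    xs = [i * h for i in range(npts + 1)]
    ws = [h if 0 < i < npts else h / 2 for i in range(npts + 1)]
    ph = [phi0(x) for x in xs]
    A = []
    for k in range(nterms):
        fac = 1.0 if k == 0 else 2.0
        A.append(fac * sum(w * p * math.cos(k * math.pi * x)
                           for w, p, x in zip(ws, ph, xs)))
    return A

A = fourier_coeffs(40)

def u_exact(x, t):
    """Cole-Hopf solution (4.8), truncated at 40 terms."""
    num = sum(k * A[k] * math.exp(-nu * k * k * math.pi**2 * t)
              * math.sin(k * math.pi * x) for k in range(1, len(A)))
    den = sum(A[k] * math.exp(-nu * k * k * math.pi**2 * t)
              * math.cos(k * math.pi * x) for k in range(len(A)))
    return 2.0 * math.pi * nu * num / den

print(u_exact(0.5, 0.0))  # ≈ u0*sin(pi/2) = 0.25, recovering the initial state
```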

4.2 Approximate solution with polynomial expansion for one variable and two parameters

Let u depend on one spatial coordinate, x, and on the two parameters ν and ∆t. Expanded as a polynomial with \(\{N_x, N_\nu, N_t\}\) the maximum powers for x, ν and ∆t respectively, \(u^i\) will be
\[
u^{i} = \sum_{j=0}^{N_x} \sum_{k=0}^{N_\nu} \sum_{l=0}^{N_t} a^{i}_{j,k,l}\, x^{j} \nu^{k} \Delta t^{l} \tag{4.9}
\]
where the \(a^{i}_{j,k,l}\) are the coefficients. Inserting (4.9) into (2.2) for the Burgers equation we get:

\[
\begin{aligned}
u^{i+1} &= u^{i} + \Delta t\left( -u^{i} \frac{\partial u^{i}}{\partial x} + \nu \frac{\partial^2 u^{i}}{\partial x^2} \right) \\
&= \sum_{j,k,l} a^{i}_{j,k,l}\, x^{j} \nu^{k} \Delta t^{l}
 - \sum_{j,k,l} a^{i}_{j,k,l}\, x^{j} \nu^{k} \Delta t^{l+1} \sum_{J,K,L} J a^{i}_{J,K,L}\, x^{J-1} \nu^{K} \Delta t^{L}
 + \sum_{j,k,l} j(j-1)\, a^{i}_{j,k,l}\, x^{j-2} \nu^{k+1} \Delta t^{l+1} \\
&= \sum_{j,k,l} a^{i}_{j,k,l}\, x^{j} \nu^{k} \Delta t^{l}
 - \sum_{j,k,l} \sum_{J,K,L} J a^{i}_{j,k,l}\, a^{i}_{J,K,L}\, x^{j+J-1} \nu^{k+K} \Delta t^{l+L+1}
 + \sum_{j,k,l} j(j-1)\, a^{i}_{j,k,l}\, x^{j-2} \nu^{k+1} \Delta t^{l+1} \tag{4.10}
\end{aligned}
\]
where we have substituted the indices {j, k, l} by {J, K, L} in the second summation of the second term on the right-hand side. Now we produce the imprint


Figure 4.1: Approximate solution for Burgers equation with polynomial basis and ν = 0.05.

by expressing each coefficient \(a^{i+1}_{j,k,l}\) as
\[
\begin{aligned}
a^{i+1}_{j,k,l} = a^{i}_{j,k,l}
&- \sum_{J=\max(0,\,j-N_x+1)}^{\min(N_x,\,j+1)} \;\sum_{K=\max(0,\,k-N_\nu)}^{\min(N_\nu,\,k)} \;\sum_{L=\max(0,\,l-N_t-1)}^{\min(N_t,\,l-1)} J a^{i}_{j-J+1,\,k-K,\,l-L-1}\; a^{i}_{J,K,L} \\
&+ \begin{cases}
(j+2)(j+1)\, a^{i}_{j+2,\,k-1,\,l-1} & -2 \leq j \leq N_x - 2;\; 1 \leq k \leq N_\nu + 1;\; 1 \leq l \leq N_t + 1 \\
0 & \text{otherwise}
\end{cases}. \tag{4.11}
\end{aligned}
\]
We let ν vary between 0 and 0.05 and ∆t between 0 and 0.015, and set the error tolerance for the three-dimensional Chebyshev approximation to 0.005. The maximum degree of the polynomial solution is chosen as \(N_x = N_\nu = N_t = 6\) for x, ν and ∆t. The resulting function after 100 time-steps is shown for ν = 0.05 in figure 4.1, and the error, as compared to the exact solution (4.8), is shown in figure 4.2.
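A stripped-down numeric analogue of the update (4.10) can be sketched as follows. Here ν and ∆t are fixed numbers and the series is simply truncated at a maximum degree, whereas the thesis keeps ν and ∆t as symbolic parameters in Maple and reduces the order by Chebyshev approximation with boundary handling; the initial profile is an illustrative assumption.

```python
def polymul(a, b):
    """Product of two polynomials given as coefficient lists (lowest degree first)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polydiff(a):
    """Derivative of a polynomial coefficient list."""
    return [i * ai for i, ai in enumerate(a)][1:] or [0.0]

def burgers_step(a, nu, dt, nmax):
    """One Euler step u <- u + dt*(-u*u_x + nu*u_xx), truncated at degree nmax."""
    ux = polydiff(a)
    uxx = polydiff(ux)
    conv = polymul(a, ux)           # the quadratic nonlinearity u*u_x
    new = [0.0] * (nmax + 1)
    for i in range(nmax + 1):
        ai = a[i] if i < len(a) else 0.0
        ci = conv[i] if i < len(conv) else 0.0
        di = uxx[i] if i < len(uxx) else 0.0
        new[i] = ai + dt * (-ci + nu * di)
    return new

a = [0.0, 1.0, -1.0]                # u(x,0) = x - x^2, an illustrative choice
b = burgers_step(a, nu=0.05, dt=0.001, nmax=6)
print(b[:4])
```

Note that without the boundary handling of section 2.2 the boundary values drift slightly; in the thesis they are re-imposed at each approximation step.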

4.2.1 Stability

The CFL stability criterion can be derived for this example [3][11], relating the maximum value of ∆t to the viscosity ν and the resolution of the basis functions. Stability is achieved when the amplification for advancing in time is less than or equal to one, that is, if we have
\[
\left| \frac{u^{i+1}}{u^{i}} \right| \leq 1. \tag{4.12}
\]


Figure 4.2: Error for Burgers equation with polynomial basis and ν = 0.05.

We will investigate the stability for a linear approximation of the Burgers equation inserted into the forward Euler time advancement scheme (4.10) [11]:
\[
u^{i+1} = u^{i} + \Delta t\left( -u_0 \frac{\partial u^{i}}{\partial x} + \nu \frac{\partial^2 u^{i}}{\partial x^2} \right) \tag{4.13}
\]
where \(u_0\) is a constant related to the initial condition, in our case \(u_0 = 0.25\).

To analyze the stability we apply a Fourier mode to this equation of the form (note that i here denotes the imaginary unit and not the time state)
\[
u^{n} = \hat{u}^{n} e^{ikx} \tag{4.14}
\]
which gives
\[
\hat{u}^{n+1} e^{ikx} = \hat{u}^{n} e^{ikx} + \Delta t\left( -u_0 (ik)\hat{u}^{n} e^{ikx} + \nu (-k^2)\hat{u}^{n} e^{ikx} \right). \tag{4.15}
\]
The stability criterion (4.12) becomes
\[
\left| \frac{\hat{u}^{n+1}}{\hat{u}^{n}} \right| = \left| 1 + \Delta t\left( -u_0 i k - \nu k^2 \right) \right| \equiv g \leq 1 \tag{4.16}
\]
where g is called the amplification factor.

Since g is a complex expression, we take its magnitude in the complex plane. The restriction on ∆t yields
\[
g g^{*} = (1 - \nu k^2 \Delta t)^2 + (u_0 k \Delta t)^2 \leq 1 \tag{4.17}
\]
or
\[
\Delta t \leq \frac{2\nu}{\nu^2 k^2 + u_0^2}. \tag{4.18}
\]

When using a Fourier series expansion, the value of k would be 2πn, where \(n = N_x\) is the number of modes. Because of the interchangeability between Chebyshev and Fourier series (a Chebyshev series is a cosine Fourier series via a change of variable), we choose k in the same way.

Applying the values used in our example, \(N_x = 6\) and ν = 0.05, the maximum value of ∆t is
\[
\Delta t \lesssim 0.0276 \tag{4.19}
\]
which is well above the value of 0.015 used for ∆t in the calculation.
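The bound (4.18)-(4.19) is a one-line computation:

```python
import math

# dt <= 2*nu / (nu^2 * k^2 + u0^2) with k = 2*pi*N_x, per (4.18).
nu, u0, Nx = 0.05, 0.25, 6
k = 2.0 * math.pi * Nx
dt_max = 2.0 * nu / (nu**2 * k**2 + u0**2)
print(dt_max)  # ≈ 0.0277, consistent with the ∆t ≲ 0.0276 quoted in (4.19)
```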

4.3 Approximate solution with Chebyshev expansion for one variable and one parameter

As the title indicates, we here use only one variable, x, and one parameter, ∆t. There is no restriction against working with multiple parameters, but because the expression for the imprint becomes enormous with both ν and ∆t as parameters¹, the calculation becomes too time consuming, and we choose to consider only one parameter.

Expanded as a Chebyshev series with \(\{N_x, N_t\}\) the maximum degrees for x and ∆t respectively, \(u^i\) will be
\[
u^{i} = \sum_{j=0}^{N_x} \sum_{k=0}^{N_t} a^{i}_{j,k}\, T_j(x)\, T_k(\Delta t) \tag{4.20}
\]
where the \(a^{i}_{j,k}\) are the coefficients. As before, we insert (4.20) into (2.2) for the Burgers equation:

\[
u^{i+1} = \sum_{j,k} a^{i}_{j,k} T_j(x) T_k(\Delta t)
- \sum_{j,k} a^{i}_{j,k} T_j(x)\, \Delta t\, T_k(\Delta t) \sum_{J,K} \frac{\partial}{\partial x}\, a^{i}_{J,K} T_J(x) T_K(\Delta t)
+ \nu \sum_{j,k} \frac{\partial^2}{\partial x^2}\, a^{i}_{j,k} T_j(x)\, \Delta t\, T_k(\Delta t). \tag{4.21}
\]

This is then simplified, in accordance with (3.4) and (3.5), to
\[
u^{i+1} = \sum_{j,k} a^{i}_{j,k} T_j(x) T_k(\Delta t)
- \sum_{j,k} a^{i}_{j,k} T_j(x)\, \Delta t\, T_k(\Delta t) \sum_{J,K} a'^{i}_{J,K} T_J(x) T_K(\Delta t)
+ \nu \sum_{j,k} a''^{i}_{j,k} T_j(x)\, \Delta t\, T_k(\Delta t) \tag{4.22}
\]
where the prime here denotes the coefficients of the Chebyshev series differentiated with respect to x. Now we expand using the relations (3.2):

¹The relations (3.2) have a tendency to enlarge the imprint when used too diligently.

\[
\begin{aligned}
u^{i+1} = &\sum_{j,k} a^{i}_{j,k} T_j(x) T_k(\Delta t) \\
&- \sum_{j,k} \sum_{J,K} a^{i}_{j,k}\, a'^{i}_{J,K}\, \frac{1}{8}\left( T_{j+J}(x) + T_{|j-J|}(x) \right)
\left( T_{k+1+K}(\Delta t) + T_{|k+1-K|}(\Delta t) + T_{|k-1|+K}(\Delta t) + T_{||k-1|-K|}(\Delta t) \right) \\
&+ \nu \sum_{j,k} a''^{i}_{j,k}\, T_j(x)\, \frac{1}{2}\left( T_{k+1}(\Delta t) + T_{|k-1|}(\Delta t) \right). \tag{4.23}
\end{aligned}
\]
The imprint becomes, remembering that {J, K} are indices to be summed over,
\[
\begin{aligned}
a^{i+1}_{j,k} = a^{i}_{j,k}
&- \tfrac{1}{8} a^{i}_{j-J,\,k-1-K}\, a'^{i}_{J,K}
 - \tfrac{1}{8} a^{i}_{j-J,\,\pm k-1+K}\, a'^{i}_{J,K}
 - \tfrac{1}{8} a^{i}_{j-J,\,\pm(k-K)+1}\, a'^{i}_{J,K}
 - \tfrac{1}{8} a^{i}_{j-J,\,\pm(k+K)+1}\, a'^{i}_{J,K}
 - \tfrac{1}{8} a^{i}_{j-J,\,\pm(-k+K)+1}\, a'^{i}_{J,K} \\
&- \tfrac{1}{8} a^{i}_{\pm j+J,\,k-1-K}\, a'^{i}_{J,K}
 - \tfrac{1}{8} a^{i}_{\pm j+J,\,\pm k-1+K}\, a'^{i}_{J,K}
 - \tfrac{1}{8} a^{i}_{\pm j+J,\,\pm(k-K)+1}\, a'^{i}_{J,K}
 - \tfrac{1}{8} a^{i}_{\pm j+J,\,\pm(k+K)+1}\, a'^{i}_{J,K}
 - \tfrac{1}{8} a^{i}_{\pm j+J,\,\pm(-k+K)+1}\, a'^{i}_{J,K} \\
&- \begin{cases} -\tfrac{1}{8} a^{i}_{j-J,\,1}\, a^{i}_{J,0} & k = 0 \\ 0 & \text{otherwise} \end{cases}
 \;-\; \begin{cases} -\tfrac{1}{8} a^{i}_{\pm j+J,\,1}\, a^{i}_{J,0} & k = 0 \\ 0 & \text{otherwise} \end{cases}
 \;+\; \nu\, \tfrac{1}{2}\, a''^{i}_{j,\,k-1} + \nu\, \tfrac{1}{2}\, a''^{i}_{j,\,\pm k+1}. \tag{4.24}
\end{aligned}
\]
This imprint can also be expressed in explicit form, like (4.11) for the polynomial basis, but it would require countless pages and is therefore left out here.

Using the same values for ∆t, ν and x as in the polynomial case we get after 100 time-steps the function shown in figure 4.3 and the error in figure 4.4.

4.4 Conclusion - Polynomial or Chebyshev basis

In the previous section we found an approximate solution to the Burgers equation based on either a polynomial or a Chebyshev series expansion. Comparing the accuracy of the two methods, we found that the polynomial basis provides an approximation that more closely resembles the exact solution. In addition, the calculation is faster with the polynomial basis. The problem when working with the Chebyshev basis could lie within the approximation: the orthogonality makes it hard to approximate higher-order Chebyshev polynomials with lower ones. The advantage is, however, that the conversion from a Chebyshev series to a polynomial, which is strongly discouraged [6], is avoided. Considering the amount of additional work needed to create an imprint for Chebyshev series, it proves not to be worth it. Henceforth we will thus concern ourselves only with the polynomial basis.


Figure 4.3: Approximate solution for Burgers equation with Chebyshev basis and ν = 0.05.


Figure 4.4: Error for Burgers equation with Chebyshev basis and ν = 0.05.


Chapter 5

Solution of a system of differential equations

Solving a system of PDEs in parallel is just as easy as solving one at a time. The difference is that each imprint now depends on the coefficients of the other equations. An explicit scheme for a system works as in the explicit case for one equation: for Euler time advance, only coefficients from the ith state are needed when calculating the (i + 1)st state.

5.1 Example - The wave equation

To demonstrate how the imprint method is applied to systems of differential equations, we consider the following problem, describing wave motion in one dimension:
\[
\begin{cases}
\dfrac{\partial^2 u}{\partial t^2} - c^2 \dfrac{\partial^2 u}{\partial x^2} = 0 & 0 \leq x \leq L,\; t \geq 0 \\
u(0, t) = u(L, t) = 0 & t \geq 0 \\
u(x, 0) = g(x), \quad u_t(x, 0) = h(x) & 0 \leq x \leq L.
\end{cases} \tag{5.1}
\]

With the simple change of variables v = ∂u/∂t we split this differential equation into a system of two coupled linear differential equations:
\[
\begin{cases}
\dfrac{\partial v}{\partial t} - c^2 \dfrac{\partial^2 u}{\partial x^2} = 0 & 0 \leq x \leq L,\; t \geq 0 \\[4pt]
\dfrac{\partial u}{\partial t} = v & 0 \leq x \leq L,\; t \geq 0 \\[4pt]
u(0, t) = u(L, t) = 0 & t \geq 0 \\
u(x, 0) = g(x), \quad v(x, 0) = h(x) & 0 \leq x \leq L.
\end{cases} \tag{5.2}
\]

Physically this could describe the movement of a string fixed between the positions 0 and L. The function g gives the initial shape of the string and h the initial velocity. There exists a unique analytical solution to this problem, given by [10]
\[
u(x, t) = \sum_{k=1}^{\infty} \left( a_k \cos\frac{k\pi c t}{L} + b_k \sin\frac{k\pi c t}{L} \right) \sin\frac{k\pi x}{L} \tag{5.3}
\]
where
\[
a_k = \frac{2}{L} \int_0^L g(x) \sin\frac{k\pi x}{L}\, dx, \tag{5.4}
\]
\[
b_k = \frac{L}{k\pi c} \cdot \frac{2}{L} \int_0^L h(x) \sin\frac{k\pi x}{L}\, dx. \tag{5.5}
\]

For the numerical solution we choose both constants c and L equal to 1. As initial condition we let \(g(x) = \frac{4}{100}\, x(1-x)\) and h(x) = 0, resulting in
\[
a_k = \frac{4}{25} \cdot \frac{1 + (-1)^{1+k}}{k^3 \pi^3}, \qquad b_k = 0. \tag{5.6}
\]
The semi-analytical solution is derived with x as a variable and ∆t as an imprint parameter between the values 0 and 0.005. The maximum degree for both x and ∆t in the polynomial expansion is set to 10. The solution is displayed in figure 5.1 after 500 time-steps, and in figure 5.2 the approximation is compared to the exact analytical solution (5.3) evaluated with 30 summation terms.
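Evaluating the reference solution (5.3) with the coefficients (5.6) (c = L = 1, h = 0) is a short computation; at t = 0 the truncated series should reproduce the initial shape g.

```python
import math

def a_k(k):
    """Coefficients (5.6) for g(x) = (4/100)*x*(1-x)."""
    return 4.0 / 25.0 * (1 + (-1) ** (1 + k)) / (k ** 3 * math.pi ** 3)

def u(x, t, nterms=30):
    """Series solution (5.3) with c = L = 1 and b_k = 0, truncated at nterms."""
    return sum(a_k(k) * math.cos(k * math.pi * t) * math.sin(k * math.pi * x)
               for k in range(1, nterms + 1))

g = lambda x: 0.04 * x * (1 - x)
print(abs(u(0.3, 0.0) - g(0.3)))  # small: the truncated series reproduces g at t = 0
```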

5.2 Example - The two-dimensional Burgers equation

The one-dimensional Burgers equation can be extended to multiple dimensions. For two dimensions we have
\[
\begin{aligned}
\frac{\partial u}{\partial t} &= -u \frac{\partial u}{\partial x} - v \frac{\partial u}{\partial y} + \nu \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) \\
\frac{\partial v}{\partial t} &= -u \frac{\partial v}{\partial x} - v \frac{\partial v}{\partial y} + \nu \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right)
\end{aligned} \tag{5.7}
\]

which are the two-dimensional momentum equations for incompressible flow if the pressure terms are neglected.

A Cole-Hopf transformation can be applied to derive an exact solution, as in the one-dimensional case. We have
\[
u = \frac{-2\nu\, \partial \Phi / \partial x}{\Phi}, \qquad v = \frac{-2\nu\, \partial \Phi / \partial y}{\Phi} \tag{5.8}
\]
where Φ = Φ(x, y, t; ν). Inserting (5.8) into (5.7) yields the linear, two-dimensional diffusion equation
\[
\frac{\partial \Phi}{\partial t} - \nu \left( \frac{\partial^2 \Phi}{\partial x^2} + \frac{\partial^2 \Phi}{\partial y^2} \right) = 0 \tag{5.9}
\]
which can be solved analytically, as before, with the Fourier method.


Figure 5.1: Approximate solution for the wave equation with polynomial basis after 500 timesteps.

Figure 5.2: Error for the wave equation with polynomial basis.


5.2.1 Approximate solution for two variables

We apply Burgers equation over the square 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1. The polynomial expansions become, with N_x and N_y as the maximum powers in u and v for the variables x and y respectively,

u^i = Σ_{j=0}^{N_x} Σ_{k=0}^{N_y} a^i_{j,k} x^j y^k,   v^i = Σ_{j=0}^{N_x} Σ_{k=0}^{N_y} b^i_{j,k} x^j y^k.   (5.10)

Using the explicit Euler scheme, (2.2), we can derive the imprint with {J, K} as summation indices:

a^{i+1}_{j,k} = a^i_{j,k} + ∆t( −J a^i_{j−J+1,k−K} a^i_{J,K} − K b^i_{j−J,k−K+1} a^i_{J,K}
        + ν((j+2)(j+1) a^i_{j+2,k} + (k+2)(k+1) a^i_{j,k+2}) )   (5.11)

b^{i+1}_{j,k} = b^i_{j,k} + ∆t( −J a^i_{j−J+1,k−K} b^i_{J,K} − K b^i_{j−J,k−K+1} b^i_{J,K}
        + ν((j+2)(j+1) b^i_{j+2,k} + (k+2)(k+1) b^i_{j,k+2}) )   (5.12)

To start the simulation we require an initial state and boundary conditions for u and v. A class of initial conditions that satisfy div u⃗ = 0 and u⃗_normal = u⃗_tangent = 0, where u and v are the components of u⃗, is described by [11]:

= 0, where u and v are the components of ~u, are described by [11]

u|_{t=0} = x^{K_1} (1 − x)^{K_2} y^{K_3−1} (1 − y)^{K_4−1} (K_3(1 − y) − K_4 y)
v|_{t=0} = −x^{K_1−1} (1 − x)^{K_2−1} y^{K_3} (1 − y)^{K_4} (K_1(1 − x) − K_2 x)   (5.13)

where K_i ≥ 2, i = 1, 2, 3, 4 are constants, preferably chosen equal, i.e. K_1 = K_2 = K_3 = K_4, to create a symmetric initial condition. We choose homogeneous boundary conditions for simplicity. Let u(0, y) = u(1, y) = u(x, 0) = u(x, 1) = 0 and v(0, y) = v(1, y) = v(x, 0) = v(x, 1) = 0. The approximation evaluates the solution in the two variables x and y. ∆t is chosen as 1/100, ν as 1/100 and K_i = 3, i = 1, 2, 3, 4.
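The coefficient recursions (5.11)-(5.12) are straightforward to implement once the implicit summation over {J, K} is written out. Below is a minimal Python sketch of a single explicit-Euler step on truncated coefficient arrays (the thesis code in appendix C does this symbolically in Maple); the truncation N_x = N_y = 6 is an illustrative choice, not the thesis value of 10.

```python
Nx = Ny = 6  # illustrative truncation

def get(c, j, k):
    """Coefficient lookup; out-of-range indices mean a truncated (zero) term."""
    return c[j][k] if 0 <= j <= Nx and 0 <= k <= Ny else 0.0

def imprint_step(a, b, dt, nu):
    """One explicit-Euler step of (5.11)-(5.12) on the coefficient arrays
    a (for u) and b (for v), summing explicitly over J and K."""
    na = [[0.0]*(Ny+1) for _ in range(Nx+1)]
    nb = [[0.0]*(Ny+1) for _ in range(Nx+1)]
    for j in range(Nx+1):
        for k in range(Ny+1):
            conv_a = conv_b = 0.0
            for J in range(Nx+1):
                for K in range(Ny+1):
                    # -u du/dx and -v du/dy contributions, eq. (5.11)
                    conv_a += -J*get(a, J, K)*get(a, j-J+1, k-K) \
                              - K*get(a, J, K)*get(b, j-J, k-K+1)
                    # -u dv/dx and -v dv/dy contributions, eq. (5.12)
                    conv_b += -J*get(b, J, K)*get(a, j-J+1, k-K) \
                              - K*get(b, J, K)*get(b, j-J, k-K+1)
            # diffusion terms nu (u_xx + u_yy) and nu (v_xx + v_yy)
            diff_a = nu*((j+2)*(j+1)*get(a, j+2, k) + (k+2)*(k+1)*get(a, j, k+2))
            diff_b = nu*((j+2)*(j+1)*get(b, j+2, k) + (k+2)*(k+1)*get(b, j, k+2))
            na[j][k] = a[j][k] + dt*(conv_a + diff_a)
            nb[j][k] = b[j][k] + dt*(conv_b + diff_b)
    return na, nb
```

As a sanity check, for u = x, v = 0 and ν = 0 the exact right-hand side is ∂u/∂t = −u ∂u/∂x = −x, so one Euler step should change only the coefficient of x, from 1 to 1 − ∆t.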

The initial condition and the approximation after 150 timesteps are shown for u in figure 5.3 and for v in figure 5.4.

To verify the validity of this solution, we would ultimately prefer to compare it against a unique analytical solution, but with the chosen initial and boundary conditions none is available. Another means of verification is to check that the energy is preserved, which should hold exactly for ν = 0. Hence, for low values of ν, we expect the energy to remain almost constant. The energy is calculated as the kinetic energy given by

E = ∫∫ (u² + v²)/2 dx dy.   (5.14)

In figure 5.5 we see how the energy changes in time for three different values of ν. As expected, the energy remains closer to constant at lower values of ν.
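The energy integral (5.14) is easy to reproduce; the sketch below evaluates it for the initial state (5.13) with all K_i = 3 by a midpoint rule (an illustrative Python reimplementation, not the thesis code). Since the symmetric choice of K_i gives v(x, y) = −u(y, x), the u- and v-parts contribute equally to E, which serves as a consistency check, and the field can also be checked to be divergence-free.

```python
K = 3  # K1 = K2 = K3 = K4 = 3, as in the simulation above

def u0(x, y):
    """Initial u from (5.13) with all K_i = K."""
    return x**K * (1-x)**K * y**(K-1) * (1-y)**(K-1) * (K*(1-y) - K*y)

def v0(x, y):
    """Initial v from (5.13) with all K_i = K."""
    return -x**(K-1) * (1-x)**(K-1) * y**K * (1-y)**K * (K*(1-x) - K*x)

def energy(u, v, n=200):
    """Midpoint-rule approximation of E = (1/2) int int (u^2 + v^2) dx dy
    over the unit square, eq. (5.14)."""
    h = 1.0 / n
    return sum(0.5 * (u((i+0.5)*h, (j+0.5)*h)**2 + v((i+0.5)*h, (j+0.5)*h)**2) * h * h
               for i in range(n) for j in range(n))
```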


Figure 5.3: Initial condition and approximate solution for u for the two-dimensional Burgers equation with polynomial basis after 150 timesteps. Note the change in time-scales.

Figure 5.4: Initial condition and approximate solution for v for the two-dimensional Burgers equation with polynomial basis after 150 timesteps. Note the change in time-scales.


Figure 5.5: The energy for three different values of ν.


Chapter 6

Discussion

We have seen that the imprint method is a good complement to the pseudospectral and finite difference methods. The imprint method preserves the accuracy of the spectral methods while being simple to implement. Another useful feature is that any number of physical parameters can be included in the solution. The idea of using Chebyshev approximation to collect information bound in higher-order polynomial terms is also interesting, and as a direct consequence it makes the boundary conditions easy to include.

6.1 Limitations

An advantage of the imprint method is its ability to handle nonlinearities, as we have seen in the example of the Burgers equation. A problem arises, though, when considering a system of differential equations like the incompressible Navier-Stokes equations [8]. The incompressibility results in the equation

∇ · v⃗ = 0

which has no time derivative and no obvious implementation within this method.

Further, we have only considered the case of Dirichlet boundary conditions. The more complicated Neumann or Robin boundary conditions would require a different implementation not discussed here. We have also restricted ourselves to explicit time advancement schemes, implying that an implementation for implicit schemes would be a natural place to continue this work.

6.2 Implementation

The simulation program for this report is written in the computer mathematics system Maple. Working with a program like Maple offers many possibilities, and although the future lies with this class of languages, capable of manipulating symbolic expressions, they are not yet very fast. Therefore one might want to implement the imprint algorithm in a more "low-level" language like ANSI C, FORTRAN or C++.

The optimal implementation might, however, be a hybrid of these two kinds of languages. Since calculating the imprint is a demanding task performed only once, it is preferably done with Maple, which stores vectors with information about the imprint; the fast "low-level" language then uses these to advance in time. How the approximation is done without Maple is described in section 2.2.4.

6.3 Code

The code written for this project can be found in appendix C and is divided into two parts. The first part, found in C.1, consists of passive code that defines the necessary procedures. The second part, found in C.2, is the active code where we choose which variables and parameters to use, along with the time advancement scheme and the differential equation. From here we also initiate the simulation. In C.2.1 the one-dimensional Burgers equation, (4.1), is solved, C.2.2 contains the code for solving the wave equation, (5.2), and in C.2.3 we solve the two-dimensional equation, (5.7).

In appendix A we use schematic block diagrams to visualize the code structure. Figure A.1 shows how the simulation proceeds and figure A.2 takes a closer look at the Chebyshev approximation. A code reference with brief overviews of the initially defined variables and procedures can be found in appendix B.


Bibliography

[1] Scheffel, J. Analytical Simulation - an Analytical Approach to Solving Initial-value Problems, Royal Institute of Technology, 2003 (TRITA-ALF- 2003-7).

[2] Boyd, J.P. ed. Chebyshev & Fourier Spectral Methods, Springer-Verlag, 1989.

[3] Potter, D. Computational Physics, John Wiley & Sons, 1973.

[4] Ottosen, N. and Petersson, H. Introduction to the Finite Element Method, Prentice Hall, 1992.

[5] Finlayson, B.A. The Method of Weighted Residuals and Variational Principles, Academic Press, 1972.

[6] Press, W.H. et al. Numerical Recipes in Fortran - The Art of Scientific Computing, 2nd edition, Cambridge University Press, 1992.

[7] Mason, J.C. and Handscomb, D.C. Chebyshev Polynomials, Chapman & Hall/CRC, 2003.

[8] Fletcher, C.A.J. Computational Techniques for Fluid Dynamics, Vol. 1, 2nd edition, Springer, 2000.

[9] Logan, J.D. Applied Mathematics, 2nd edition, John Wiley & Sons, 1997.

[10] Sparr, G. and Sparr, A. Kontinuerliga System, 2nd edition, Studentlitter- atur, 2000.

[11] Scheffel, J. private communication.


Appendix A

Schematic diagram of the procedure structure


Figure A.1: Simulation structure.


Figure A.2: A closer look at the expandcheby() approximation.


Appendix B

Maple code reference - Polynomial expansion

B.1 Defined global variables

vars:
Names of variables/parameters.

sumvars:
Names of summation variables. The number of summation variables needed depends on the nonlinearity of the PDE.

idxvars:
Names of variables to use in the imprint.

max_pow:
Maximum powers of the variables in the polynomial a (chosen carefully; may not be lower than the maximum power the imprint gives, which depends strongly on the value of Nstart).

bound:
Boundaries for the variables.

boundval:
Boundary values for the variables. Empty vector = no boundary value.

epsilon:
Tolerance for the Chebyshev approximation.

Nstart:
Maximum number of coefficients for the variables/parameters after approximation.

Digits:
Number of digits.

digitsave:
Number of digits to use when saving coefficients.

u:
The expansion of the solution to the PDE. Expressed without the summation symbols.

du:
The expression for advancing one time step. Written as a function of u, also without summation symbols.

coes:
Vector containing the names of the coefficients used for the expansions.

B.2 Defined procedures

calc_ca2():
Global variables used: bound, boundval, vars, Nstart, coes.
Global variables defined: bma, bpa.
Description: Stores the value (bound[variable,1] − bound[variable,0])/2 in bma[variable] and (bound[variable,1] + bound[variable,0])/2 in bpa[variable] for each variable. If boundval is defined, then the boundary is transformed to make the interpolation points include the boundary points.
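In other words, calc_ca2() stores the affine map between each variable's interval [a, b] and the Chebyshev interval [−1, 1]. A minimal Python sketch of that map and of the resulting interpolation nodes (illustrative, not the Maple code):

```python
import math

def interval_map(a, b):
    """Affine map between [a, b] and [-1, 1]; bma and bpa play the same
    roles as the globals set by calc_ca2()."""
    bma, bpa = (b - a) / 2.0, (b + a) / 2.0
    to_cheb = lambda x: (x - bpa) / bma      # [a, b] -> [-1, 1]
    from_cheb = lambda s: bma * s + bpa      # [-1, 1] -> [a, b]
    return to_cheb, from_cheb

def cheb_nodes(a, b, n):
    """Chebyshev interpolation points mapped onto [a, b]."""
    _, from_cheb = interval_map(a, b)
    return [from_cheb(math.cos(math.pi * (k + 0.5) / n)) for k in range(n)]
```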

calc_precos():
Global variables used: vars, Nstart, max_pow, bma, bpa, coes.
Global variables defined: precos, powcos.
Description: Precalculates cosine terms used for the approximation.

chebyev2(pos,cfac,idx):
Global variables used: Nstop, vars, bma, bpa.
Description: Transforms the Chebyshev coefficients cfac to a polynomial.
Returns: Polynomial.
Must be called with its third argument idx as an empty vector.

expandcheby(pos,asum,aout):
Global variables used: vars, Nstart, max_pow, idxvars.
Description: Chebyshev approximates the polynomial that asum represents. The Chebyshev coefficients are stored in aout. Expands asum as a polynomial before approximating.
Internal procedures:

doapprox(pos,apol,aout,idx):
Global variables used: vars, boundval, precos, Nstart.
Global variables defined: Nstop.
Description: Recursive procedure that does the actual approximation of apol. Stores the approximation coefficients in aout. Saves information in Nstop about the truncation in all directions.
Internal procedures:

chkpol(facpol,numvar,idx):
Global variables used: vars, Nstart, epsilon, bound.
Description: Checks whether the last three coefficients, stored in facpol, have reached the given tolerance.

getCoeffsFromPoly(a, i, poly, idx):
Global variables used: vars, digitsave.
Description: Converts poly to coefficients and stores them in a at time i.
Must be called with its fourth argument idx as an empty vector.

getNextCoeffs(a, j, jend):
Global variables used: du, sumvars, vars, coes.
Description: Calculates the new coefficients from time j to jend.
Internal procedures:

calc_coe(ops):
Global variables used: vars, sumvars, idxvars.
Description: Transforms ops to an imprint term, which can be evaluated by substituting idxvars with specific values.
Returns: The transformed term.

doimp(idx):
Global variables used: vars, max_pow, idxvars.
Global variables defined: opstot.
Description: Substitutes every value of idxvars and stores the explicit expressions in opstot.

sumup(pos,asum,idx):
Global variables used: max_pow, vars, opstot.
Description: Uses the coefficients and opstot to evaluate the new coefficients. These are stored in asum.

sea_sumv(ops,var):
Global variables used: sumvars, vars.
Description: Searches through ops for summation variables for the variable var.
Returns: Number of summation variables in ops.

transcheby(pos,asum,aout):


Description: Chebyshev approximates the polynomial that asum represents. The Chebyshev coefficients are stored in aout. Does not expand asum as a polynomial before approximating.

Internal procedures:

doapprox(pos,asum,aout,idx):
Global variables used: vars, Nstart, epsilon.
Global variables defined: Nstop.
Description: Recursive procedure that handles the approximation of asum. Stores the approximation coefficients in aout. Saves information in Nstop about the truncation in all directions.
Internal procedures:

chkser(cfac,numvar,idx):
Global variables used: vars, max_pow.
Description: Checks whether the last three coefficients, stored in cfac, have reached the given tolerance.
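Both chkpol() and chkser() implement the same stopping rule: the expansion in a given direction is truncated once the trailing coefficients fall below the tolerance. A condensed Python sketch of that rule (illustrative; the exact Maple test may differ in detail):

```python
def converged(coeffs, epsilon):
    """True when the last three coefficients are all below the tolerance,
    so the series can be truncated at that point."""
    tail = coeffs[-3:]
    return len(tail) == 3 and all(abs(c) < epsilon for c in tail)
```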

sumfac(pos,asum,cfac,numcoe,numvar,idx):
Global variables used: Nstart, max_pow, precos, powcos, bound, boundval, idxvars.
Description: Recursive procedure that does the actual calculation of the coefficients. asum is the Chebyshev series, and the numcoe-th Chebyshev series or coefficient is stored in cfac.
Must be called with its last argument as an empty vector.

zeroize(a, i, idx):
Global variables used: Nstart, vars.
Description: Fills matrix a with zeros up to the value defined in Nstart.
Must be called with its third argument idx as an empty vector.


Appendix C

Maple code - Polynomial expansion

C.1 Passive code - Procedures


> restart:
> st:=time();

Convert between polynomial and coefficients

> zeroize:=proc(a,i,idx) # Zeroize the coefficient matrix 'a' at time 'i'.
    global Nstart,vars;
    local j,eidx,var;
    eidx:=idx[j]$j=1..nops(idx);
    var:=nops(idx)+1;
    for j from 0 to Nstart[var] do
      if var < nops(vars) then
        zeroize(a,i,[eidx,j]);
      else
        a[i,eidx,j]:=0;
      fi:
    od:
  end:

> getCoeffsFromPoly:=proc(a,i,poly,idx)
    global vars,digitsave;
    local j,k,eidx,var;
    Digits:=digitsave;
    eidx:=idx[j]$j=1..nops(idx);
    var:=nops(idx)+1;
    for k from 0 to degree(poly,vars[var]) do
      if var < nops(vars) then
        getCoeffsFromPoly(a,i,coeff(poly,vars[var],k),[eidx,k]);
      else
        a[i,eidx,k]:=convert(coeff(poly,vars[var],k),float);
        #print(cat("a[",i,",",convert(eidx,string),",",k,"]=",convert(a[i,eidx,k],string)));
      fi:
    od:
  end:

> getCoeffsFromPolywotime:=proc(a,poly,idx)
    global vars;
    local j,k,eidx,var;
    eidx:=idx[j]$j=1..nops(idx);
    var:=nops(idx)+1;
    for k from 0 to degree(poly,vars[var]) do
      if var < nops(vars) then
        getCoeffsFromPolywotime(a,coeff(poly,vars[var],k),[eidx,k]);
      else
        a[eidx,k]:=convert(coeff(poly,vars[var],k),float);
        #print(cat("a[",convert(eidx,string),",",k,"]=",convert(a[eidx,k],string)));
      fi:
    od:
  end:

Convert Chebyshev coefficients to polynomial

> chebyev2:=proc(pos,cfac,idx)
    global Nstop,vars,bma,bpa;
    local j,k,eidx,Nidx,d,dd,sv,depvar,depvar2,poly,var;
    eidx:=idx[j]$j=1..nops(idx);
    var:=nops(idx)+1;
    Nidx:=var,eidx;
    d:=0;
    dd:=0;
    depvar:=(vars[var]-bpa[vars[var]][pos])/bma[vars[var]][pos];
    depvar2:=2*depvar;
    #print("Nstop:",Nstop[pos][Nidx]," Nidx:",Nidx," eidx:",eidx," var:",var," idx:",idx," pos:",pos);
    if var < nops(vars) then
      for j from Nstop[pos][Nidx] by -1 to 1 do
        sv:=d;
        d:=depvar2*d-dd+chebyev2(pos,cfac,[eidx,j]);
        dd:=sv;
      od:
      poly:=depvar*d-dd+chebyev2(pos,cfac,[eidx,0]);
    else
      for j from Nstop[pos][Nidx] by -1 to 1 do
        sv:=d;
        d:=depvar2*d-dd+cfac[eidx,j];
        dd:=sv;
      od:
      poly:=depvar*d-dd+cfac[eidx,0];
    fi:
    RETURN(simplify(poly));
  end:

Calculate the new coefficients

> getNextCoeffs:=proc(a,j,jend)
    global du,sumvars,vars,imprint,coes,graphs,graphnum,W;
    local ops,asum,approx,k,m,n,sea_sumv,sums,calc_coe,sumup,doimp,opstot,g,Gamma,xa,ya,pa,qa;

    sea_sumv:=proc(ops,var) # Finds out how many summation variables 'ops' contains.
      global sumvars,vars;
      local temp,i;
      for i from nops(sumvars[vars[1]][var]) by -1 to 1 do
        # Hopefully 'temp' is not already included in ops.
        # Otherwise this will fail catastrophically.
        if degree( subs( vars[1]**sumvars[vars[1]][var][i]=temp, expand(ops,vars[1]) ), temp ) <> 0 then
          RETURN(i);
        fi:
      od:
      RETURN(0);
    end: # End of procedure sea_sumv().

    calc_coe:=proc(ops) # Manipulates ops to an imprint function.
      global vars,sumvars,idxvars;
      local j,k,n,temp,toka,svar,svartot,newidx,modops,hi,lo;
      svartot:=[[]$s=1..nops(vars)];
      for j from 1 to nops(sumvars[vars[1]]) do
        svar:=sea_sumv(ops,j);
        for k from 1 to nops(vars) do
          svartot[k]:=[svartot[k][s]$s=1..nops(svartot[k]),sumvars[vars[k]][j][s]$s=1..svar];
        od:
      od:
      modops:=expand(ops);
      #print("svartot: ",svartot[s]$s=1..nops(svartot));
      if nops(svartot[1]) = 1 then # No supplementary summation indices. Just a regular term.
        for j from 1 to nops(vars) do
          unassign('toka');
          temp:=idxvars[j] = degree( subs( vars[j]=toka,subs(vars[j]**svartot[j][1]=1,expand(ops)) ),toka )
                + degree( subs( vars[j]**svartot[j][1]=toka,ops ),toka )*svartot[j][1];
          #print("temp: ",temp," j: ",j," exp-ops: ",expand(ops));
          newidx:=solve(temp,svartot[j][1]);
          #print("newidx: ",newidx);
          modops:=piecewise( newidx <= Nstart[j] and newidx >= 0,subs( vars[j]=1,svartot[j][1]=newidx,modops ),0 );
          #print("modops: ",modops);
        od:
      elif nops(svartot[1]) > 1 then # Nonlinear term; create correct formal summation.
        for j from 1 to nops(vars) do
          temp:=0;
          for k from 1 to nops(svartot[1]) do
            unassign('toka');
            temp:=temp + degree( subs( vars[j]**svartot[j][k]=toka,expand(ops) ),toka )*svartot[j][k];
            modops:=subs(vars[j]**svartot[j][k]=1,modops);
          od:
          temp:=idxvars[j] = temp + degree(modops,vars[j]);
          newidx[j]:=solve(temp,svartot[j][1]);
        od:
        for j from nops(vars) by -1 to 1 do
          temp:=solve( newidx[j] = 0,svartot[j][2] );
          if coeff(newidx[j],svartot[j][2]) > 0 then
            hi:=temp + Nstart[j];
            lo:=temp;
          else
            hi:=temp;
            lo:=temp - Nstart[j];
          fi:
          modops:=Sum( subs(vars[j]=1,svartot[j][1]=newidx[j],modops),svartot[j][2]=max( 0,lo )..min( Nstart[j],hi ) );
        od:
        for j from 3 to nops(svartot[1]) do
          for k from nops(vars) by -1 to 1 do
            modops:=Sum( modops,svartot[k][j]=0..Nstart[k] );
          od:
        od:
      else
        for k from 1 to nops(vars) do
          modops:=piecewise(idxvars[k]=degree(ops,vars[k]),modops,0);
        od:
      fi:
      RETURN(modops);
    end: # End of procedure calc_coe().

    doimp:=proc(idx)
      global vars,max_pow,idxvars,imprint,opstot;
      local j,k,s,eidx,var;
      eidx:=idx[j]$j=1..nops(idx);
      var:=nops(idx)+1;
      for j from 0 to max_pow[var] do
        if var < nops(vars) then
          doimp([eidx,j]);
        else
          for k from 1 to nops(sumvars[vars[1]]) do
            opstot[eidx,j][k]:=subs( i=0,(idxvars[s]=[eidx,j][s])$s=1..nops(vars),imprint[k] );
          od:
          #print("opstot[",eidx,j,"]: ",opstot[eidx,j]);
        fi:
      od:
    end: # End of procedure doimp().

    sumup:=proc(pos,asum,idx) # Calculates the new coefficients.
      global max_pow,vars,opstot;
      local j,k,eidx,var;
      #print("opstot: ",opstot," idx: ",idx);
      var:=nops(idx)+1;
