
DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2020

Simulating Low Frequency Reverberation in Rooms

MATTIAS SVENSSON

Simulating Low Frequency Reverberation in Rooms

Mattias Svensson

2020-06-07

TRITA-AVE ISSN

KTH Royal Institute of Technology
School of Engineering Sciences
Department of Aeronautical and Vehicle Engineering
SE-100 44 Stockholm, Sweden

Abstract

The aim of this thesis was to make a practical tool for low frequency analysis in room acoustics. The need arises from Acad's experience that their results from simulations using ray-tracing software deviate in the lower frequencies when compared to field measurements in rooms. The tool was programmed in Matlab and utilizes the Finite Difference Time Domain (FDTD) method, which is a form of rapid finite element analysis in the time domain.

A number of tests have been made to investigate the practical limitations of the FDTD method, such as numerical errors caused by sound sources, discretization and simulation time. Boundary conditions, with and without frequency dependence, have been analysed by comparing results from simulations of a virtual impedance tube and reverberation room to analytical solutions. These tests show that the use of the FDTD method appears well suited for the purpose of the tool.

A field test was made to verify that the tool enables easy and relatively quick simulations of real rooms, with results well in line with measured acoustic parameters. Comparisons of the results from using the FDTD method, ray-tracing and finite elements (FEM) showed good correlation. This indicates that the deviations Acad experience between simulated results and field measurements are most likely caused by uncertainties in the sound absorption data used for low frequencies rather than by limitations in the ray-tracing software. The FDTD tool might still come in handy for more complex models, where edge diffraction is a more important factor, or simply as a means of obtaining a "second opinion" on the ray-tracing results, since FEM is in general too time-consuming a method to be used on a daily basis.

Auxiliary tools made for importing models and providing output data in the form of room acoustic parameters, graphs and audio files are not covered in detail here, as these lie outside the scope of this thesis.

Keywords:


Sammanfattning (Summary)

The goal of this degree project was to investigate the possibility of programming a practically useful tool for low frequency analysis in room acoustics. The need arises from Acad's experience that results from simulations using ray-tracing software deviate in the low frequency range compared to field measurements in finished rooms. The tool is programmed in Matlab and uses the Finite Difference Time Domain (FDTD) method, a form of rapid finite element analysis in the time domain.

A series of tests has been carried out to examine the practical limitations of the method caused by numerical errors related to the choice of sound source, discretization and simulation time. Boundary conditions, with and without frequency dependence, have been analysed by comparing simulated results in virtual impedance tubes and reverberation rooms against analytical calculations. The tests show that the FDTD method appears to work well for the intended use of the tool.

A field test was carried out to verify that the tool makes it possible to easily and relatively quickly simulate results that match measured room acoustic parameters well. Comparisons between the FDTD method and results calculated with ray-tracing and the finite element method (FEM) also showed good correlation. This indicates that the deviations Acad experiences between simulated results and field measurements are most likely caused by uncertainties in the input sound absorption data used for low frequencies, rather than by limitations in the ray-tracing software. The tool may still be useful for more complex models, where edge diffraction is a more important factor, or simply as a way to obtain a "second opinion" on the results from the ray-tracing software, since FEM analysis is in general too time-consuming a method to be used on a daily basis.

Auxiliary tools created for, e.g., importing models and providing output data in the form of room acoustic parameters, graphs and audio files are not covered in detail in this report, as these lie outside the scope of the degree project.

Keywords:


Contents

Notations
1. Introduction
2. Introduction to the Finite Difference Time Domain method
2.1. Governing equations
2.2. Discretization
2.2.1. Non-staggered example in 1-D
2.2.2. Staggered time scheme example in 1-D
2.2.3. Staggered spatial grid and time scheme example in 1-D
2.2.4. The FDTD method in 2-D and 3-D
2.3. Stability and the choice of temporal step and element size
2.3.1. Spatial step/element size
2.3.2. Temporal step size
2.4. Schemes of higher order
2.4.1. Sound source(s)
2.4.2. Numerical errors
3. Boundary conditions
3.1. 1-DOF mass, spring and damper boundary condition
4. Validations
4.1. 1-DOF BC at normal incidence
4.2. 1-DOF BC at random incidence
4.3. Directivity of the Botteldooren boundary condition
4.4. PML and Botteldooren's boundary condition
5. Modelling
6. Simulation of a classroom
6.1. Field measurements
6.2. Modelling and importing to Matlab
6.3. Material parameters
6.4. Spatial step size and convergence
6.5. Comparison to field measurement results
7. Concluding remarks
8. References
Appendix A


Notations

p  Sound pressure [Pa]
u  Particle velocity [m/s]
ρ  Density fluctuation [kg/m³]
ρ0  Density of an undisturbed medium [kg/m³]
β  Adiabatic compression bulk modulus [Pa]
c  Speed of sound in the medium [m/s]
t  Time [s]
x, y, z  Cartesian coordinates [m]
i, j, k  Discrete Cartesian coordinates [-]
λ  Wavelength [m]
f  Frequency [Hz]
nelem  Elements per wavelength [-]
k  Wavenumber [1/m]
Zn  Specific acoustic surface normal impedance [Pa·s/m]
ZR  Resistive impedance [Pa·s/m]
ZI  Reactive impedance [Pa·s/m]
ZK  Stiffness [Pa·s²/m]
ZM  Area mass [kg/m²]
α0  Normal incidence sound absorption coefficient [-]
αs  Random/diffuse field absorption coefficient [-]
T20  Reverberation time [s]
A  Equivalent absorption area [m² Sabin]


1. Introduction

The ability to correctly predict the acoustics of a room and plan the acoustic treatment right from the start can save a lot of money, time and environmental impact. In Acad's experience, the reverberation times predicted by ray-tracing simulations usually match field measurements in the finished room at high frequencies, from the 250 Hz octave band and higher. However, at lower frequencies the measured results usually deviate from the predicted ones. As the reverberation time in the 125 Hz octave band is important for speech intelligibility, a means to better simulate low frequency behaviour is desired.

Ray-tracing simulations work with sound energy, not taking into account the phase information of the sound "rays". While this provides good approximations at higher frequencies, especially where the sound field can be considered diffuse, the approach falls short in predicting low frequency phenomena such as standing waves, phase cancellation and diffraction. Commercial FEM software, such as Ansys, is well suited for these calculations, but becomes impractically complex in the role of a simple complement to ray-tracing software, as new FEM models must be made and meshed.

The aim of this thesis was to investigate the possibility of making a simple tool to extend the low frequency limit of room acoustics analysis made in ray-tracing software, such as Odeon, utilizing the same 3D-model and material selections.


2. Introduction to the Finite Difference Time Domain method

The Finite Difference Time Domain (FDTD) method is a well-known numerical analysis technique, originally developed for simulating electromagnetic propagation. Formulated by Kane Yee (hence sometimes referred to as the Yee method) in 1966 to solve Maxwell's curl equations [1], it has since been adapted for use in optics and acoustics. In its basic form, the method is a finite-difference approximation of both space and time derivatives of the governing equations, using a leapfrog scheme in time on staggered Cartesian grids in space for each discrete field component. The groundwork for acoustical applications of the FDTD method was laid by Botteldooren in the paper "Acoustical finite-difference time-domain simulation in a quasi-Cartesian grid" in 1993 [2].

The strengths of the FDTD method are, for example:

• Numerically simple and easy to understand.
• Uses the time domain, thus suitable for auralization and study of transients.
• A single prediction can cover a wide range of frequencies.
• Capable of simulating low frequency phenomena, like diffraction and phase cancellation.
• Capable of simulating time variant systems.
• Simple ideally rigid or purely absorbing boundaries are easily implemented.

The challenges of the FDTD method are, for example:

• Requires large numbers of elements/meshes for simulating high frequencies.
• Complex boundary conditions are hard to implement, especially if they are frequency varying (as real boundaries often are).

2.1. Governing equations

The FDTD method, for acoustics, is governed by two equations describing the relationship in space and time between two discrete field components: sound pressure p and particle velocity $\vec{u}$. These governing equations originate from Euler's linearized equations of inviscid motion (2.1) and continuity (2.2):

$$\rho_0 \frac{\partial \vec{u}}{\partial t} + \nabla p = 0 \qquad (2.1)$$

$$\frac{\partial \rho}{\partial t} + \rho_0 \nabla \cdot \vec{u} = 0 \qquad (2.2)$$

where p [Pa] is the sound pressure scalar, $\vec{u}$ [m/s] is the particle velocity vector, ρ [kg/m³] is the density fluctuation and ρ0 [kg/m³] is the density of the undisturbed medium (ρ + ρ0 = total density). If we assume adiabatic conditions (no heat transfer) and small disturbances (sound pressure level under 140 dB), the relationship between the sound pressure and density can be described using the following linear approximation [3]:

$$p = \beta \rho / \rho_0 \qquad (2.3)$$

$$\left(\beta = c^2 \rho_0\right) \qquad (2.4)$$

where c [m/s] is the speed of sound in the medium. Inserting (2.3) into (2.2) yields:

$$\frac{\partial p}{\partial t} + \beta \nabla \cdot \vec{u} = 0 \qquad (2.5)$$

This leaves us with two governing equations for the FDTD method, describing the relationship between the first order time derivative of the particle velocity and the first order spatial derivative of the sound pressure (2.6), and vice versa (2.7):

$$\frac{\partial \vec{u}}{\partial t} = -\frac{1}{\rho_0}\nabla p \qquad (2.6)$$

$$\frac{\partial p}{\partial t} = -\beta \nabla \cdot \vec{u} \qquad (2.7)$$

2.2. Discretization

The discretization in space (meshing) can be performed in the same manner as in any finite element method, by dividing the domain into elements of appropriate shapes and sizes. In the FDTD method as formulated by Yee, the domain is discretized using a structured grid, with quadrilateral elements in 2-D and hexahedral (cubic) elements in 3-D.

In addition to the spatial discretization, the FDTD method also requires discretization in time to simulate transient systems by discrete time stepping.

Yee's FDTD method uses a staggered grid for the spatial and temporal discretization. The first order derivatives of the governing equations are discretized by central difference approximations of second order accuracy. This choice (and implementation) of a staggered discretization can be a bit hard to follow at a first glance, so a good approach for explaining it is to start with a simple discretization of a 1-D model using a non-staggered grid (readers familiar with FD operators and their order of accuracy might want to jump ahead to 2.2.4 The FDTD method in 2-D and 3-D):

2.2.1. Non-staggered example in 1-D

The 1-D domain is spatially discretized into evenly spaced nodes, separated with the distance ∆x. Each node contains the discrete values of sound pressure p and particle velocity in the x-direction, ux, as seen in Figure 1. Each gap between two nodes can be seen as one linear finite element, linked to its neighbours by the two nodes.

Figure 1. Spatial discretization in 1-D. (Note that position in time is constant along the x-axis).

Using Cartesian notation, x = i∙∆x, with discrete spatial steps i = 1, 2, 3… the system looks like in Figure 2:

Figure 2. Spatial discretization in 1-D, using Cartesian notation with discrete spatial steps. (Note that position in time is constant along the x-axis).


Figure 3. Temporal discretization. (Note that the spatial position is constant along the t-axis).

We then choose an appropriate finite difference operator (FD operator for short) to step through time. The simplest method is the Forward Euler, which is a two-point forward difference FD operator, D+:

$$\frac{\partial}{\partial t} f(t) \approx D_{+t} f(t)$$

$$D_{+t} f(t) = \frac{f(t + \Delta t) - f(t)}{\Delta t} \;\Longrightarrow\; f(t + \Delta t) = f(t) + \Delta t \cdot D_{+t} f(t) \qquad (2.12)$$

To determine the accuracy of the FD operator we can use Taylor expansion around f(t+Δt):

$$f(t + \Delta t) = \sum_{n=0}^{\infty} \frac{\Delta t^{n}}{n!} \frac{\partial^{n}}{\partial t^{n}} f(t) \;\Longrightarrow\; f(t + \Delta t) = f(t) + \Delta t \frac{\partial}{\partial t} f(t) + \frac{1}{2}\Delta t^{2} \frac{\partial^{2}}{\partial t^{2}} f(t) + \frac{1}{6}\Delta t^{3} \frac{\partial^{3}}{\partial t^{3}} f(t) + O(\Delta t^{4}) \qquad (2.13)$$

Inserting (2.13) into (2.12) yields:

$$D_{+t} f(t) = \frac{f(t + \Delta t) - f(t)}{\Delta t} = \frac{\partial}{\partial t} f(t) + \underbrace{\frac{1}{2}\Delta t \frac{\partial^{2}}{\partial t^{2}} f(t) + \frac{1}{6}\Delta t^{2} \frac{\partial^{3}}{\partial t^{3}} f(t) + O(\Delta t^{3})}_{\text{Error terms}} \qquad (2.14)$$

We see that D+ is not equal to ∂f(t)/∂t but contains an infinite number of error terms, proportional to increasing powers of Δt. If Δt is kept smaller than unity (Δt < 1) the error will be dominated by the first term (the one with the lowest power of Δt), and this term defines the order of accuracy of the FD operator. In this case the lowest power of Δt is 1 (Δt¹ = Δt), so the error will be proportional to Δt (E(Δt) ∝ Δt for small Δt) and the FD operator is said to be first order accurate. The dominating error term involves the second derivative in time, ∂²f(t)/∂t², namely the acceleration term.

To reduce the error, one can either reduce the time step Δt or use an FD operator with a higher order of accuracy. First order accuracy is a bit blunt, so to increase the order of accuracy the second order midpoint (leapfrog) method, which is a three-point central difference FD operator, D0, can be used:

$$\frac{\partial}{\partial t} f(t) \approx D_{0t} f(t), \qquad D_{0t} f(t) = \frac{f(t + \Delta t) - f(t - \Delta t)}{2\Delta t} \;\Longrightarrow\; f(t + \Delta t) = f(t - \Delta t) + 2\Delta t \cdot D_{0t} f(t) \qquad (2.15)$$

The order of accuracy is checked the same way as previously, by Taylor expansion around f(t+Δt), as in (2.13), and around f(t−Δt):

$$f(t - \Delta t) = f(t) - \Delta t \frac{\partial}{\partial t} f(t) + \frac{1}{2}\Delta t^{2} \frac{\partial^{2}}{\partial t^{2}} f(t) - \frac{1}{6}\Delta t^{3} \frac{\partial^{3}}{\partial t^{3}} f(t) + \frac{1}{24}\Delta t^{4} \frac{\partial^{4}}{\partial t^{4}} f(t) + O(\Delta t^{5}) \qquad (2.16)$$

Inserting (2.13) and (2.16) into (2.15) yields:

$$D_{0t} f(t) = \frac{f(t + \Delta t) - f(t - \Delta t)}{2\Delta t} = \frac{\partial}{\partial t} f(t) + \underbrace{\frac{1}{6}\Delta t^{2} \frac{\partial^{3}}{\partial t^{3}} f(t) + O(\Delta t^{4})}_{\text{Error terms}} \qquad (2.17)$$

We see that the dominating error term in this FD operator is of second order (E(Δt) ∝ Δt²). Its amplitude is proportional to the third derivative in time (the rate of change of acceleration), and for the equivalent FD operator for spatial discretization it is the third derivative in space, called the dispersive term. Its impact on an FDTD simulation will be covered under 2.4.2 Numerical errors. If we settle for second order accuracy, we can use this FD operator for time stepping of the acoustic pressure and velocity components (using discrete notation with 1-D Cartesian coordinates):

$$u_{x(i)}^{n+1} = u_{x(i)}^{n-1} + 2\Delta t \cdot D_{0t} u_{x(i)}^{n} \qquad (2.18)$$

$$p_{(i)}^{n+1} = p_{(i)}^{n-1} + 2\Delta t \cdot D_{0t} p_{(i)}^{n} \qquad (2.19)$$
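As a quick numerical illustration of these orders of accuracy (not part of the original derivation, and with f(t) = sin(t) chosen here purely as an example), the two operators can be compared in a few lines of Matlab:

% Illustrative check of the order of accuracy of D+ and D0 on f(t) = sin(t),
% whose exact derivative at t = 1 is cos(1).
t  = 1;
dt = 10.^(-(1:5));                                        % decreasing time steps
errFwd  = abs((sin(t+dt) - sin(t))./dt        - cos(t));  % D+, error ~ dt
errCent = abs((sin(t+dt) - sin(t-dt))./(2*dt) - cos(t));  % D0, error ~ dt^2
disp([dt(:) errFwd(:) errCent(:)])

Halving Δt roughly halves the forward-difference error but reduces the central-difference error by a factor of four, which is the behaviour predicted by (2.14) and (2.17).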

Looking back at equations (2.8) and (2.11) in 1-D, the derivatives are replaced with three-point central difference FD operators, D0:

$$\frac{\partial}{\partial t} u_{x(i)}^{n} = -\frac{1}{\rho_0} \frac{\partial}{\partial x} p_{(i)}^{n} \;\Longrightarrow\; D_{0t} u_{x(i)}^{n} = -\frac{1}{\rho_0} D_{0x} p_{(i)}^{n} \qquad (2.20)$$

$$\frac{\partial}{\partial t} p_{(i)}^{n} = -\beta \left( \frac{\partial}{\partial x} u_{x(i)}^{n} \right) \;\Longrightarrow\; D_{0t} p_{(i)}^{n} = -\beta \left( D_{0x} u_{x(i)}^{n} \right) \qquad (2.21)$$

Replacing the FD operator terms in (2.18) & (2.19) with (2.20) & (2.21) yields:

$$u_{x(i)}^{n+1} = u_{x(i)}^{n-1} - \frac{2\Delta t}{\rho_0} D_{0x} p_{(i)}^{n} \qquad (2.22)$$

$$p_{(i)}^{n+1} = p_{(i)}^{n-1} - 2\Delta t \beta \left( D_{0x} u_{x(i)}^{n} \right) \qquad (2.23)$$

The corresponding three-point central difference FD operator in space is:

$$\frac{\partial}{\partial x} f(x) \approx D_{0x} f(x), \qquad D_{0x} f(x) = \frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x} \qquad (2.24)$$

which gives:

$$D_{0x} u_{x(i)}^{n} = \frac{u_{x(i+1)}^{n} - u_{x(i-1)}^{n}}{2\Delta x} \qquad (2.25)$$

$$D_{0x} p_{(i)}^{n} = \frac{p_{(i+1)}^{n} - p_{(i-1)}^{n}}{2\Delta x} \qquad (2.26)$$

Inserting (2.25) & (2.26) into (2.22) & (2.23) gives the two time-step equations needed for the FDTD method:

$$u_{x(i)}^{n+1} = u_{x(i)}^{n-1} - \frac{\Delta t}{\rho_0 \Delta x}\left(p_{(i+1)}^{n} - p_{(i-1)}^{n}\right) \qquad (2.27)$$

$$p_{(i)}^{n+1} = p_{(i)}^{n-1} - \frac{\Delta t \beta}{\Delta x}\left(u_{x(i+1)}^{n} - u_{x(i-1)}^{n}\right) \qquad (2.28)$$

These two equations can then be used in succession inside a loop to step through time, one discrete time step Δt at a time. When used in a computational simulation, quite a lot of memory is required, as the vectors (for 1-D, matrices for 2-D and 3-D) for ux and p at time n-1 need to be stored in between each loop. In addition, ux and p at time n cannot be directly overwritten by ux and p at time n+1. The result of this is that 6 vectors are needed for the 1-D case (9 for 2-D and 12 for 3-D). Using generic code this 1-D case can be programmed as shown in Figure 4 (without any boundary conditions, as these will be covered later in this report).

for n=1 to TimeLength
    for i=1 to SpatialLength
        ux_next(i) = ux_old(i) - dt/roh0/dx*(p(i+1) - p(i-1))
        p_next(i)  = p_old(i)  - dt*bulk/dx*(ux(i+1) - ux(i-1))
    end
    ux_old = ux
    ux = ux_next
    clear ux_next
    p_old = p
    p = p_next
    clear p_next
end

Figure 4. Generic code for a second order non-staggered 1-D FDTD scheme.

2.2.2. Staggered time scheme example in 1-D

By staggering the scheme in time, the number of vectors that need to be kept in memory can be reduced. The velocity variable is shifted ½Δt as seen in Figure 5:

Figure 5. Staggered temporal discretization.

The three-point central difference FD operator for the velocity component is then re-defined for half time steps, ½∆t:

$$D_{0t} f(t) = \frac{f(t + \tfrac{1}{2}\Delta t) - f(t - \tfrac{1}{2}\Delta t)}{2 \cdot \tfrac{1}{2}\Delta t} \;\Longrightarrow\; f(t + \tfrac{1}{2}\Delta t) = f(t - \tfrac{1}{2}\Delta t) + \Delta t \cdot D_{0t} f(t) \qquad (2.29)$$

We use the same FD operator and half time steps for the pressure variable, but shift it ½Δt forward in time so that its time discretization fits the definition in Figure 5:

$$D_{0t} f(t + \tfrac{1}{2}\Delta t) = \frac{f(t + \Delta t) - f(t)}{2 \cdot \tfrac{1}{2}\Delta t} \;\Longrightarrow\; f(t + \Delta t) = f(t) + \Delta t \cdot D_{0t} f(t + \tfrac{1}{2}\Delta t) \qquad (2.30)$$

Giving the following equations for the staggered/shifted velocity and pressure components:

$$u_{x(i)}^{n+\frac{1}{2}} = u_{x(i)}^{n-\frac{1}{2}} + \Delta t \cdot D_{0t} u_{x(i)}^{n} \qquad (2.31)$$

$$p_{(i)}^{n+1} = p_{(i)}^{n} + \Delta t \cdot D_{0t} p_{(i)}^{n+\frac{1}{2}} \qquad (2.32)$$

Replacing the FD operator terms in (2.31) & (2.32) with (2.20) & (2.21) yields:

$$u_{x(i)}^{n+\frac{1}{2}} = u_{x(i)}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0} D_{0x} p_{(i)}^{n} \qquad (2.33)$$

$$p_{(i)}^{n+1} = p_{(i)}^{n} - \Delta t \beta \left( D_{0x} u_{x(i)}^{n+\frac{1}{2}} \right) \qquad (2.34)$$

Finally, inserting (2.25) & (2.26) into (2.33) & (2.34) gives the two time-step equations for the staggered time scheme (but still non-staggered spatial grid):

$$u_{x(i)}^{n+\frac{1}{2}} = u_{x(i)}^{n-\frac{1}{2}} - \frac{\Delta t}{2\rho_0 \Delta x}\left(p_{(i+1)}^{n} - p_{(i-1)}^{n}\right) \qquad (2.35)$$

$$p_{(i)}^{n+1} = p_{(i)}^{n} - \frac{\Delta t \beta}{2\Delta x}\left(u_{x(i+1)}^{n+\frac{1}{2}} - u_{x(i-1)}^{n+\frac{1}{2}}\right) \qquad (2.36)$$

Equations (2.35) and (2.36) are used in a leapfrog scheme (also called a Yee scheme) to calculate the sound pressure and particle velocity at all spatial points, one discrete time step at a time. During each time step, nΔt:

• A new particle velocity distribution (at time n+½) is calculated by taking the previous one (at time n−½) subtracted by a function of the current sound pressure distribution (at time n).

• A new sound pressure distribution (at time n+1) is calculated by taking the previous one (at time n) subtracted by a function of the current particle velocity distribution (at time n+½).

The scheme is looped until the desired time duration of the simulation is reached. In computational simulations, the sound pressure and/or particle velocity distribution (or any other quantity based on the knowledge of these two) can be visualized in each time step. It should be noted that it is not necessary to keep the previous distributions in memory, so only 2 vectors are needed for the 1-D case (3 for 2-D and 4 for 3-D). Using generic code this 1-D case can be programmed as in Figure 6 (again without any boundary conditions).

for n=1 to TimeLength
    for i=1 to SpatialLength
        ux(i) = ux(i) - dt/roh0/2/dx*(p(i+1) - p(i-1))
        p(i)  = p(i)  - dt*bulk/2/dx*(ux(i+1) - ux(i-1))
    end
end

Figure 6. Generic code for a second order 1-D FDTD scheme, staggered in time but not in space.

2.2.3. Staggered spatial grid and time scheme example in 1-D

The benefit of also staggering the spatial discretization is not to further reduce the number of vectors (or matrices), but, as will be shown here, it enables a halving of the number of discrete spatial variables in the 1-D vectors while still maintaining the same accuracy. In 2-D the number of variables in the matrices can be cut to ¼, and in 3-D to 1/8, greatly reducing memory cost. The velocity variable is shifted ½Δx as shown in Figure 7:

Figure 7. Staggered spatial discretization in 1-D.

Shifting the velocity half a spatial step ½Δx, equations (2.31) & (2.32) can be written as:

$$u_{x(i+\frac{1}{2})}^{n+\frac{1}{2}} = u_{x(i+\frac{1}{2})}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0} D_{0x} p_{(i+\frac{1}{2})}^{n} \qquad (2.37)$$

$$p_{(i)}^{n+1} = p_{(i)}^{n} - \Delta t \beta \left( D_{0x} u_{x(i)}^{n+\frac{1}{2}} \right) \qquad (2.38)$$

The three-point central difference FD operator for spatial discretization of the velocity component is re-defined for half spatial steps, ½Δx:

$$D_{0x} f(x) = \frac{f(x + \tfrac{1}{2}\Delta x) - f(x - \tfrac{1}{2}\Delta x)}{2 \cdot \tfrac{1}{2}\Delta x} \qquad (2.39)$$

The same applies for the FD operator for the pressure, but shifted ½Δx forward so that its spatial discretization fits the definition in Figure 7:

$$D_{0x} f(x + \tfrac{1}{2}\Delta x) = \frac{f(x + \Delta x) - f(x)}{2 \cdot \tfrac{1}{2}\Delta x} \qquad (2.40)$$

Which gives:

$$D_{0x} u_{x(i)}^{n+\frac{1}{2}} = \frac{u_{x(i+\frac{1}{2})}^{n+\frac{1}{2}} - u_{x(i-\frac{1}{2})}^{n+\frac{1}{2}}}{\Delta x} \qquad (2.41)$$

$$D_{0x} p_{(i+\frac{1}{2})}^{n} = \frac{p_{(i+1)}^{n} - p_{(i)}^{n}}{\Delta x} \qquad (2.42)$$

Inserting (2.41) & (2.42) into (2.37) & (2.38) gives the two time-step equations for the staggered (in space and time) 1-D FDTD scheme:

$$u_{x(i+\frac{1}{2})}^{n+\frac{1}{2}} = u_{x(i+\frac{1}{2})}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0 \Delta x}\left(p_{(i+1)}^{n} - p_{(i)}^{n}\right) \qquad (2.43)$$

$$p_{(i)}^{n+1} = p_{(i)}^{n} - \frac{\Delta t \beta}{\Delta x}\left(u_{x(i+\frac{1}{2})}^{n+\frac{1}{2}} - u_{x(i-\frac{1}{2})}^{n+\frac{1}{2}}\right) \qquad (2.44)$$

When translating the above equations to computational code it might be a little confusing how to cope with the half spatial steps, as integer values are used for indexing vectors and matrices. For indexing purposes ½ is subtracted from the velocity spatial position (but it should be noted that physically the velocity nodes are defined in between the pressure nodes). Using generic code this 1-D case can be programmed as in Figure 8 (without any boundary conditions).

for n=1 to TimeLength
    for i=1 to SpatialLength
        ux(i) = ux(i) - dt/roh0/dx*(p(i+1) - p(i))
        p(i)  = p(i)  - dt*bulk/dx*(ux(i) - ux(i-1))
    end
end

Figure 8. Generic code for a second order 1-D FDTD scheme, staggered in time and space.
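As a complement to the generic scheme in Figure 8, a minimal self-contained Matlab sketch of the same 1-D staggered scheme is given below. It is illustrative only: the rigid ends (zero velocity outside the domain), the soft Gaussian source and all numerical values are assumptions made here, not details taken from the report.

% Minimal 1-D FDTD sketch, staggered in time and space, rigid ends.
c = 343; roh0 = 1.21; bulk = roh0*c^2;   % air at room temperature
fmax = 1000; nelem = 10;
dx = c/fmax/nelem;                       % element size, eq (2.59)
dt = dx/c;                               % 1-D Courant limit
N  = 500;                                % number of pressure nodes
p  = zeros(N,1); ux = zeros(N-1,1);      % velocity nodes sit between pressure nodes
tw = 2/(pi*fmax); t0 = 4*tw;             % Gaussian pulse parameters, eq (2.70)
for n = 1:2000
    ux = ux - dt/roh0/dx*(p(2:end) - p(1:end-1));     % velocity update, eq (2.43)
    p  = p  - dt*bulk/dx*([ux; 0] - [0; ux]);         % pressure update, eq (2.44); ux = 0 outside
    p(100) = p(100) + exp(-((n*dt - t0)/tw)^2);       % soft Gaussian source at node 100
end
plot((0:N-1)*dx, p); xlabel('x [m]'); ylabel('p [Pa]')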

The benefits of using a staggered spatial discretization mentioned previously are perhaps not immediately clear when comparing (2.35) & (2.36) against (2.43) & (2.44), or comparing the code in Figure 6 against the one in Figure 8 for that matter. Both methods are indeed second order accurate, but since the staggered method uses half spatial steps in the FD operator, the numerical error is proportional to (½Δx)², while the error using the un-staggered method is proportional to Δx².

2.2.4. The FDTD method in 2-D and 3-D


Figure 9. 2-D spatial staggered grid.

Figure 10. The “Yee cell”.


Figure 11. Temporal discretization in 1-D, 2-D and 3-D.

In the 3-D case, the equations governing the discrete time stepping (2.37) & (2.38) become:

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0} D_{0x} p_{(i+\frac{1}{2},j,k)}^{n} \qquad (2.45)$$

$$u_{y(i,j+\frac{1}{2},k)}^{n+\frac{1}{2}} = u_{y(i,j+\frac{1}{2},k)}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0} D_{0y} p_{(i,j+\frac{1}{2},k)}^{n} \qquad (2.46)$$

$$u_{z(i,j,k+\frac{1}{2})}^{n+\frac{1}{2}} = u_{z(i,j,k+\frac{1}{2})}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0} D_{0z} p_{(i,j,k+\frac{1}{2})}^{n} \qquad (2.47)$$

$$p_{(i,j,k)}^{n+1} = p_{(i,j,k)}^{n} - \Delta t \beta \left( D_{0x} u_{x(i,j,k)}^{n+\frac{1}{2}} + D_{0y} u_{y(i,j,k)}^{n+\frac{1}{2}} + D_{0z} u_{z(i,j,k)}^{n+\frac{1}{2}} \right) \qquad (2.48)$$

The spatial FD operators in (2.41) & (2.42) in 3-D are:

$$D_{0x} u_{x(i,j,k)}^{n+\frac{1}{2}} = \frac{u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} - u_{x(i-\frac{1}{2},j,k)}^{n+\frac{1}{2}}}{\Delta x} \qquad (2.49)$$

$$D_{0y} u_{y(i,j,k)}^{n+\frac{1}{2}} = \frac{u_{y(i,j+\frac{1}{2},k)}^{n+\frac{1}{2}} - u_{y(i,j-\frac{1}{2},k)}^{n+\frac{1}{2}}}{\Delta y} \qquad (2.50)$$

$$D_{0z} u_{z(i,j,k)}^{n+\frac{1}{2}} = \frac{u_{z(i,j,k+\frac{1}{2})}^{n+\frac{1}{2}} - u_{z(i,j,k-\frac{1}{2})}^{n+\frac{1}{2}}}{\Delta z} \qquad (2.51)$$

$$D_{0x} p_{(i+\frac{1}{2},j,k)}^{n} = \frac{p_{(i+1,j,k)}^{n} - p_{(i,j,k)}^{n}}{\Delta x} \qquad (2.52)$$

$$D_{0y} p_{(i,j+\frac{1}{2},k)}^{n} = \frac{p_{(i,j+1,k)}^{n} - p_{(i,j,k)}^{n}}{\Delta y} \qquad (2.53)$$

$$D_{0z} p_{(i,j,k+\frac{1}{2})}^{n} = \frac{p_{(i,j,k+1)}^{n} - p_{(i,j,k)}^{n}}{\Delta z} \qquad (2.54)$$

Finally, inserting the FD operators in (2.49) to (2.54) into (2.45) to (2.48) yields the final time-stepping equations, the Yee scheme, for the 3-D FDTD method of second order accuracy in space and time:

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0 \Delta x}\left(p_{(i+1,j,k)}^{n} - p_{(i,j,k)}^{n}\right) \qquad (2.55)$$

$$u_{y(i,j+\frac{1}{2},k)}^{n+\frac{1}{2}} = u_{y(i,j+\frac{1}{2},k)}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0 \Delta y}\left(p_{(i,j+1,k)}^{n} - p_{(i,j,k)}^{n}\right) \qquad (2.56)$$

$$u_{z(i,j,k+\frac{1}{2})}^{n+\frac{1}{2}} = u_{z(i,j,k+\frac{1}{2})}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0 \Delta z}\left(p_{(i,j,k+1)}^{n} - p_{(i,j,k)}^{n}\right) \qquad (2.57)$$

$$p_{(i,j,k)}^{n+1} = p_{(i,j,k)}^{n} - \Delta t \beta \left( \frac{u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} - u_{x(i-\frac{1}{2},j,k)}^{n+\frac{1}{2}}}{\Delta x} + \frac{u_{y(i,j+\frac{1}{2},k)}^{n+\frac{1}{2}} - u_{y(i,j-\frac{1}{2},k)}^{n+\frac{1}{2}}}{\Delta y} + \frac{u_{z(i,j,k+\frac{1}{2})}^{n+\frac{1}{2}} - u_{z(i,j,k-\frac{1}{2})}^{n+\frac{1}{2}}}{\Delta z} \right) \qquad (2.58)$$

Using generic code the 3-D case (using a uniform cubic Yee cell with edge length dx (dx=∆x=∆y=∆z)) can be programmed as seen in Figure 12.

for n=1 to TimeLength
    for i=1 to XSpatialLength
        for j=1 to YSpatialLength
            for k=1 to ZSpatialLength
                ux(i,j,k) = ux(i,j,k) - dt/roh0/dx*(p(i+1,j,k) - p(i,j,k))
                uy(i,j,k) = uy(i,j,k) - dt/roh0/dx*(p(i,j+1,k) - p(i,j,k))
                uz(i,j,k) = uz(i,j,k) - dt/roh0/dx*(p(i,j,k+1) - p(i,j,k))
                p(i,j,k)  = p(i,j,k)  - dt*bulk/dx*(ux(i,j,k) - ux(i-1,j,k)
                                                  + uy(i,j,k) - uy(i,j-1,k)
                                                  + uz(i,j,k) - uz(i,j,k-1))
            end
        end
    end
end

Figure 12. Generic code for a second order 3-D FDTD scheme, staggered in time and space.

The scheme is repeated until the desired time duration of the simulation is reached. In computational simulations, the sound pressure and/or particle velocity distribution (or any other quantity based on the knowledge of these two) can be visualized in each time step. It should be noted that it is not necessary to keep the previous distributions in memory; only four matrices holding the most recent distribution are needed (one for the sound pressure and three for the velocity vector components, in the case of a 3-D simulation).

When programming the FDTD method in Matlab, the code can be optimized by taking advantage of Matlab's matrix operations, significantly simplifying the code and reducing computational time. The code in Figure 12 written for Matlab can read as in Figure 13 (note that the velocity component and pressure matrices differ slightly in size, hence the vectors are shortened or extended with zeros to give equal sizes in the matrix operations).

for n = 1:TimeLength
    ux = ux - dt/roh0/dx*(p(2:end,:,:) - p(1:(end-1),:,:));
    uy = uy - dt/roh0/dx*(p(:,2:end,:) - p(:,1:(end-1),:));
    uz = uz - dt/roh0/dx*(p(:,:,2:end) - p(:,:,1:(end-1)));
    p  = p - dt*bulk/dx*([ux; zeros(1,yDim,zDim)] - [zeros(1,yDim,zDim); ux] ...
           + [uy zeros(xDim,1,zDim)] - [zeros(xDim,1,zDim) uy] ...
           + cat(3,uz,zeros(xDim,yDim)) - cat(3,zeros(xDim,yDim),uz));
end

Figure 13. Matlab code for a second order 3-D FDTD scheme, staggered in time and space.
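The loop in Figure 13 assumes that the matrices and constants have been allocated beforehand. A possible initialization is sketched below; the room dimensions and all numerical values are example assumptions made here, not taken from the report.

% Illustrative setup for the vectorized 3-D loop in Figure 13.
c = 343; roh0 = 1.21; bulk = roh0*c^2;
fmax = 1000; nelem = 10;
dx = c/fmax/nelem;                           % eq (2.59)
dt = dx/(c*sqrt(3));                         % 3-D Courant condition, eq (2.61)
Lx = 6; Ly = 5; Lz = 3;                      % room dimensions [m], example values
xDim = round(Lx/dx); yDim = round(Ly/dx); zDim = round(Lz/dx);
TimeLength = round(1.0/dt);                  % simulate one second
p  = zeros(xDim,   yDim,   zDim);
ux = zeros(xDim-1, yDim,   zDim);            % each velocity matrix is one element
uy = zeros(xDim,   yDim-1, zDim);            % shorter along its own direction,
uz = zeros(xDim,   yDim,   zDim-1);          % matching the p(2:end)-p(1:end-1) differences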

2.3. Stability and the choice of temporal step and element size

2.3.1. Spatial step/element size

As a "rule of thumb" for discretization, a minimum of 6 elements per wavelength is usually referred to, while some prefer to count the number of nodes instead, stating a minimum of 10 nodes per wavelength. The latter is perhaps a more appropriate approach, as an element can be composed of virtually any number of nodes (the quadratic elements used in the classic FDTD method include a "middle node", so 4.5 elements are composed of 10 nodes). The absolute minimum number of nodes per wavelength is two (corresponding to Shannon's sampling theorem and the Nyquist frequency), but it should be noted that the numerical errors using this few nodes will be unacceptably large for many applications [4]. The element size, Δx, in an FDTD simulation using a uniform cubic Yee cell (Δx=Δy=Δz) is thus given by:

$$\Delta x \leq \frac{\lambda_{min}}{n_{elem}} \qquad (2.59)$$

where λmin [m] is the shortest wavelength of interest and nelem [-] is the number of elements per wavelength.

The final decision on how many elements or nodes to use over one wavelength comes down to the desired precision of the study. Great gains in computational time and memory cost can be achieved if the number of elements is kept at a minimum while still providing acceptable precision. One approach is to simply test the system by making a simulation using a low number of elements per wavelength, then gradually step up the amount and study where the results start to converge. Ideally, the result of a numerical simulation should be compared to an exact analytical solution in order to determine the necessary number of elements per wavelength (within a desired error margin). As most models subject to numerical analysis are too complex for an analytical solution (otherwise there would be no need for a numerical analysis in the first place), a physically simple part of the model can be extracted and analysed separately, assuming that the same numerical error will apply for the full model. The spatial step size will be further studied in 2.4.2 Numerical errors.
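As a brief worked example (the numbers are chosen here for illustration only): with c = 343 m/s, an upper frequency of interest fmax = 1000 Hz and nelem = 10 elements per wavelength, (2.59) gives

$$\Delta x \leq \frac{\lambda_{min}}{n_{elem}} = \frac{c/f_{max}}{n_{elem}} = \frac{0.343\ \text{m}}{10} \approx 34\ \text{mm}.$$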

2.3.2. Temporal step size

The maximum temporal step size, Δt, of any discrete time signal should at the very least comply with Shannon's sampling theorem, but the FDTD method requires an even smaller time step to maintain numerical stability. If the time step is too large, the discrete field components will be able to propagate more than one element at a time, producing large numerical errors that will result in numerical instability severe enough for the simulation to collapse [5]. In other words, the temporal step, Δt, in any FDTD simulation must be decided by the element size, Δx, and the speed of sound in the medium, c. In a 1-D simulation, this limit is simply Δt = Δx/c. Expanded into 3-D and a second order FDTD scheme staggered in time and space, it is decided by the Courant condition [6] (also called the Courant–Friedrichs–Lewy condition):

$$\Delta t \leq \frac{1}{c\sqrt{\dfrac{1}{\Delta x^{2}} + \dfrac{1}{\Delta y^{2}} + \dfrac{1}{\Delta z^{2}}}} \qquad (2.60)$$

For a uniform cubic discretization with element size ∆x=∆y=∆z the Courant condition can be simplified to:

$$\Delta t \leq \frac{\Delta x}{c\sqrt{n_{dim}}} \qquad (2.61)$$

where ndim [-] is the number of dimensions.

Assuming that the maximum allowed element size from (2.59) is used, the maximum temporal step size can be calculated from:

$$\Delta t \leq \frac{1}{f_{max} \cdot n_{elem} \cdot \sqrt{n_{dim}}} \qquad (2.62)$$

where fmax [Hz] is the highest frequency component of interest.

It should be noted that the Courant condition applies for free propagation and will not necessarily work after adding boundary conditions to the simulation.
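Continuing the illustrative numbers used above (Δx ≈ 34 mm, c = 343 m/s, nelem = 10, three dimensions), (2.61) and (2.62) both give

$$\Delta t \leq \frac{\Delta x}{c\sqrt{n_{dim}}} = \frac{1}{f_{max}\, n_{elem} \sqrt{n_{dim}}} = \frac{1}{1000 \cdot 10 \cdot \sqrt{3}} \approx 58\ \mu\text{s},$$

i.e. a sampling rate of roughly 17 kHz for the simulated time signals.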

2.4. Schemes of higher order

To increase the accuracy further without reducing the spatial step size, Δx, an FD operator of higher order can be used. This will increase computational time but not memory cost. For example, using a five-point central difference FD operator in space, the time-stepping equations for a scheme that is fourth order accurate in space and second order accurate in time will be:

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0 \Delta x}\left(-\frac{1}{24} p_{(i+2,j,k)}^{n} + \frac{9}{8} p_{(i+1,j,k)}^{n} - \frac{9}{8} p_{(i,j,k)}^{n} + \frac{1}{24} p_{(i-1,j,k)}^{n}\right) \qquad (2.63)$$

$$u_{y(i,j+\frac{1}{2},k)}^{n+\frac{1}{2}} = u_{y(i,j+\frac{1}{2},k)}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0 \Delta x}\left(-\frac{1}{24} p_{(i,j+2,k)}^{n} + \frac{9}{8} p_{(i,j+1,k)}^{n} - \frac{9}{8} p_{(i,j,k)}^{n} + \frac{1}{24} p_{(i,j-1,k)}^{n}\right) \qquad (2.64)$$

$$u_{z(i,j,k+\frac{1}{2})}^{n+\frac{1}{2}} = u_{z(i,j,k+\frac{1}{2})}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0 \Delta x}\left(-\frac{1}{24} p_{(i,j,k+2)}^{n} + \frac{9}{8} p_{(i,j,k+1)}^{n} - \frac{9}{8} p_{(i,j,k)}^{n} + \frac{1}{24} p_{(i,j,k-1)}^{n}\right) \qquad (2.65)$$

$$p_{(i,j,k)}^{n+1} = p_{(i,j,k)}^{n} - \Delta t \beta \left( \frac{-\frac{1}{24} u_{x(i+1\frac{1}{2},j,k)}^{n+\frac{1}{2}} + \frac{9}{8} u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} - \frac{9}{8} u_{x(i-\frac{1}{2},j,k)}^{n+\frac{1}{2}} + \frac{1}{24} u_{x(i-1\frac{1}{2},j,k)}^{n+\frac{1}{2}}}{\Delta x} + \frac{-\frac{1}{24} u_{y(i,j+1\frac{1}{2},k)}^{n+\frac{1}{2}} + \frac{9}{8} u_{y(i,j+\frac{1}{2},k)}^{n+\frac{1}{2}} - \frac{9}{8} u_{y(i,j-\frac{1}{2},k)}^{n+\frac{1}{2}} + \frac{1}{24} u_{y(i,j-1\frac{1}{2},k)}^{n+\frac{1}{2}}}{\Delta x} + \frac{-\frac{1}{24} u_{z(i,j,k+1\frac{1}{2})}^{n+\frac{1}{2}} + \frac{9}{8} u_{z(i,j,k+\frac{1}{2})}^{n+\frac{1}{2}} - \frac{9}{8} u_{z(i,j,k-\frac{1}{2})}^{n+\frac{1}{2}} + \frac{1}{24} u_{z(i,j,k-1\frac{1}{2})}^{n+\frac{1}{2}}}{\Delta x} \right) \qquad (2.66)$$

The order of the spatial FD operator must be reduced as any boundary condition is approached, which complicates implementation. In the above example using the fourth order FD operator, it is easily realized that variables from behind the boundary would otherwise be used. This will increase the memory cost, as the order of the FD operator to be used for the calculations in each node needs to be stored. Another consideration is that the temporal step size, Δt, will need to be reduced further to satisfy the Courant condition (by a factor of 4/7 in the above example). Thus, the reduction of temporal step size, in combination with the increased spatial order of accuracy, will allow a higher frequency component to be studied without adding extra elements to the system. In a study of FD operators up to the 16th order, Mourik and Murphy showed the

For the purpose of this thesis, the benefits of using FD operators of higher than second order accuracy are small, as the purpose is to investigate low frequencies with large wavelengths. Thus, increasing the accuracy of the FDTD model beyond second order accuracy would provide minimal gains in accuracy at the cost of increased computational time and memory.

2.4.1. Sound source(s)

To simulate a point sound source, a pressure or velocity source can be created by exciting a node with a discrete time signal with a sampling rate of 1/Δt [Hz]. More complex, or multiple, sources can be simulated by the excitation of multiple nodes (unlike ray tracing, any number of nodes can be excited at virtually no additional computational cost). There are two basic types of sources in FDTD: hard and soft sources [8]. A hard source (2.67) forces the node to adopt the discrete time signal without any interference from adjacent nodes and can thus be viewed as a sound source of infinite impedance. Though it gives total control of the injected signal, its drawback is that it will reflect incident sound pressure. This scattering caused by the node of a point source might be mitigated by the fact that it is a small obstacle in relation to the wavelengths of interest and the incident sound pressure will diffract around it.

$$p_{(\text{hard source node})}^{n} = F(n) \qquad (2.67)$$

A soft source (2.68) is made by simply adding the input signal to the existing discrete value in the node. The benefit is that it will not cause any reflections, but the control of the injected signal is somewhat lost. If the purpose for the source is to simply provide an impulse containing all frequency components of interest, this lack of exact control is not a problem.

$$p_{(\text{soft source node})}^{n} = p_{(\text{soft source node})}^{n} + F(n) \qquad (2.68)$$

A third option is the so-called transparent source, which combines the signal control of a hard source while not causing reflections like a soft one. It is essentially a soft source which superimposes the signal with a pre-calculated grid response function. Calculating the grid response in a room with boundaries is computationally heavy, and for linear room acoustic systems there is no real gain in using a transparent source over a soft source [9].
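In code, the difference between a hard and a soft source is a single line inside the time loop. The sketch below uses the 1-D notation from section 2.2; the node index isrc and the signal vector F are names assumed here for illustration:

% Hard source: the node is forced to adopt the source signal, eq (2.67).
p(isrc) = F(n);

% Soft source: the source signal is added to the existing value, eq (2.68).
p(isrc) = p(isrc) + F(n);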

The time signal from the source can be virtually any continuous discrete time signal, for example stochastic/random noise, harmonic sinusoidal signals or even a speech or music sample. For studying room acoustic parameters, or auralization using convolution, all that is required is a short transient signal containing all frequency components of interest. A Dirac pulse implemented as a hard source in (2.69) and Figure 14 contains every frequency component but, as will be shown in 2.4.2 Numerical errors (and seen in the figure, as small "waves" following the initial one), it produces noticeable numerical errors in the form of a long tail of fluctuations.

$$p_{(\text{hard source node})}^{n} = \delta(n), \qquad \delta(n) = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases} \qquad (2.69)$$

Figure 14. Dirac pulse, hard source. Top: 2D visualization, left bottom: time signal, right bottom: frequency spectrum.

To avoid the numerical errors produced by a Dirac pulse, but still produce all the frequency components of interest, a Gaussian pulse is the impulse of choice for FDTD simulations. Implemented as a hard source, the equation reads as (2.70) and the result can be seen in Figure 15 (for fmax = 1000 Hz in this example). It produces a strong DC component, which can be filtered out using a high pass filter during post processing. The downside of the strong DC component is that it might cause artefacts and, if nothing else, makes visualizations of the sound pressure distribution in the system a bit hard to read.

$$p_{(\text{hard source node})}^{n} = e^{-\left(\frac{n\Delta t - t_0}{t_w}\right)^{2}}, \qquad t_w = \frac{2}{\pi f_{max}}, \qquad t_0 = 4 t_w \qquad (2.70)$$

Figure 15. Hard source Gaussian pulse with fmax = 1000 Hz. Top: 2D visualization, left bottom: time signal, right bottom: frequency spectrum.

Figure 16. Soft source Gaussian pulse with fmax = 1000 Hz. Top: 2D visualization, left bottom: time signal, right bottom: frequency spectrum.

Finally, if a hard source without a strong DC component is desired, the second derivative of the Gaussian pulse, the so-called Ricker wavelet [7], can be used (2.71). The result is seen in Figure 17.

$$p_{(\text{hard source node})}^{n} = \left(1 - 2\left(\frac{n\Delta t - t_0}{t_w}\right)^{2}\right) e^{-\left(\frac{n\Delta t - t_0}{t_w}\right)^{2}} \qquad (2.71)$$

Figure 17. Hard source Ricker wavelet pulse with fmax = 1000 Hz. Top: 2D visualization, left bottom: time signal, right bottom: frequency spectrum.
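For reference, the three transient source signals discussed above can be generated before the time loop starts. The following Matlab sketch is illustrative (the sampling settings are example assumptions) and follows (2.69) to (2.71):

% Source signals sampled at 1/dt.
fmax = 1000; nelem = 10; dt = 1/(fmax*nelem*sqrt(3));   % example values, eq (2.62)
n  = 0:999;                                             % discrete time step indices
tw = 2/(pi*fmax); t0 = 4*tw;                            % pulse width and delay, eq (2.70)
dirac  = double(n == 0);                                % Dirac pulse, eq (2.69)
gauss  = exp(-((n*dt - t0)/tw).^2);                     % Gaussian pulse, eq (2.70)
ricker = (1 - 2*((n*dt - t0)/tw).^2).*gauss;            % Ricker wavelet, eq (2.71)
plot(n*dt, gauss, n*dt, ricker); xlabel('t [s]')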


2.4.2. Numerical errors

As previously mentioned, using second order accuracy in space and time will cause numerical errors dominated by the third derivative in space and time respectively (proportional to Δx² and Δt²). In a 1-D simulation these two errors will cancel out if Δt is set to the maximum value allowed by the Courant condition, but in 2-D or 3-D simulations they will not cancel and will cause dispersive errors. These numerical errors cause the phase and group velocities of the frequency components to differ, leading to frequency components propagating at different speeds [6]. In other words, the speed of sound is no longer the constant c = ω/k but dependent on frequency. The error appears to be strongest for waves propagating along the Cartesian axes and weakest when propagating along the diagonals. For a uniform staggered Cartesian grid the numerical error will become severe and cause large dissipation for high enough frequency components, where the spatial discretization gets too coarse (or when reaching the Nyquist frequency). The phase error for a wave propagating along one of the Cartesian axes can be calculated using [10]:

$$\frac{\varphi_{FDTD}}{\varphi_{Analytical}} = \left(\frac{2\arcsin\left(c\Delta t\sqrt{\sin^{2}(k\Delta x/2)/\Delta x^{2}}\right)}{c\Delta t\sqrt{k^{2}}}\right)^{n} \qquad (2.72)$$

Along the diagonal, the phase will change according to:

$$\frac{\varphi_{FDTD}}{\varphi_{Analytical}} = \left(\frac{2\arcsin\left(c\Delta t\sqrt{3\sin^{2}\left(k\Delta x/(2\sqrt{3})\right)/\Delta x^{2}}\right)}{c\Delta t\sqrt{k^{2}}}\right)^{n} \qquad (2.73)$$

where k [1/m] is the wavenumber of the frequency in question and n [-] is the number of time steps taken.

For auralization purposes, these phase errors will appear as a high frequency "ringing" which should naturally be avoided if a natural sound is desired. For this reason, an upper frequency range "headroom" is required to be able to filter out these artefacts using a low pass filter during post processing.

These numerical errors can be tested by using a 3-D FDTD system of second order accuracy and sampling the result after 10 and 20 meters of propagation (about 58 ms at a speed of sound of 343 m/s), using a transient pulse with a frequency content up to 1000 Hz. The four different sound sources described previously are all tested to investigate whether they show equivalent stability. The test is performed in the direction causing the highest dispersion, namely along one of the Cartesian axes.


Figure 18. Hard source Gaussian pulse with 2 elements/wavelength at 1000 Hz. Left: time signal, right: frequency spectrum.

A soft source Gaussian pulse at a mesh density of 2 elements per wavelength at 1000 Hz seems to suffer even more numerical errors than its hard source equivalent. As can be seen in Figure 19, the source is not stable, resulting in fluctuations already at the source itself. At a distance this results in even larger tails of slower frequency components. The frequency spectrum shows a similar behaviour to the hard pulse, with a cut-off at around 650 Hz.

Figure 19. Soft source Gaussian pulse with 2 elements/wavelength at 1000 Hz. Left: time signal, right: frequency spectrum.


Figure 20. Hard source Gaussian pulse with 4 elements/wavelength at 1000 Hz. Left: time signal, right: frequency spectrum.

Using 4 elements per wavelength for a soft source Gaussian pulse resolved the problem of the immediate and large fluctuations at the source, as can be seen in Figure 21. All in all, the soft and hard source Gaussian pulses show equivalent behaviour at this grid resolution. 4 elements per wavelength seems to ensure that dissipative numerical errors occur well above the frequency of interest (1000 Hz in this example).

Figure 21. Soft source Gaussian pulse with 4 elements/wavelength at 1000 Hz. Left: time signal, right: frequency spectrum.

Figure 22. Hard source Ricker wavelet pulse with 4 elements/wavelength at 1000 Hz. Left: time signal, right: frequency spectrum.

Finally, the result for a hard source Dirac pulse can be seen in Figure 23. The cut-off is at 1350 Hz, the same as for the other pulses tested at nelem = 4, but the time signal shows the largest dissipative errors of the pulses tested here.

Figure 23. Hard source Dirac pulse with 4 elements/wavelength at 1000 Hz. Left: time signal, right: frequency spectrum.

From the above tests it appears that about 4 elements per wavelength is the minimum spatial resolution needed to avoid dissipation of the frequencies within this wavelength. The dispersion error, causing higher frequency components to move at lower speed, grows quite large even at short distances from the source.

The tests conducted above show that a Gaussian pulse shows the lowest dispersion error, so a new simulation is made where the pulse is recorded after travelling 172 meters. In Figure 24 it can be seen that dispersion stretches a hard Gaussian source at nelem = 10 in time, making it last about 20 ms longer than the 5 ms long original signal within a range of 60 dB (sound pressure level).

Figure 24. Hard source Gaussian pulse with 10 elements/wavelength at 1000 Hz, after 172 meters of propagation (0.5 seconds). Left: time signal, right: Sound Pressure Level.

Moving up to 20 elements per wavelength, as seen in Figure 25, the stretching of the original signal is reduced to just 4 ms beyond the original 5 ms signal. In other words, after propagating for 500 ms, the last frequency components of interest (within a range of 60 dB) arrive on average 6-7 ms later than they would if there were no dispersion. This corresponds to a deviation in propagation speed of just slightly over one percent (1.3 %, with the speed of sound being c = 343 m/s). It should be noted again that these examples are made along the direction producing the maximum dispersion and should be seen as "worst case scenarios".

Figure 25. Hard source Gaussian pulse with 20 elements/wavelength at 1000 Hz, after 172 meters of propagation (0.5 seconds). Left: time signal, right: Sound Pressure Level.


3. Boundary conditions

When a sound wave travels from one medium and reaches the surface of another, i.e. reaches a boundary, the relationship between the sound pressure and particle velocity is typically described by the specific acoustic surface normal impedance Zn [Pa·s/m] [3]:

$$Z_n = \frac{p}{\vec{u} \cdot \vec{n}} \qquad (3.1)$$

where $\vec{n}$ [-] is the normal of the surface.

The most basic type of boundary condition, a perfectly rigid wall, has infinite acoustic surface normal impedance, resulting in zero particle velocity at the surface. Modelling a perfectly rigid boundary condition is thus as easy as permanently assigning zero velocity to the velocity nodes at the boundary. In an FDTD model this boundary condition is "automatically" assigned to the end nodes of the simulated fluid domain, as the particle velocity is always zero outside the matrix. Implementing any real-numbered impedance is quite straightforward, but a frequency dependent boundary requires a complex-valued specific acoustic surface normal impedance (from now on called just "impedance" for short):

$$Z_n(f) = Z_R + iZ_I(f) \qquad (3.2)$$

where ZR [Pa·s/m] is the resistive part and ZI [Pa·s/m] the frequency dependent reactive part. The direct use of a complex value for the impedance in an FDTD model is not possible for two reasons: 1) all variables in the FDTD scheme need to be real-numbered, and 2) the discrete sound wave can potentially contain any number of frequency components, eliminating the possibility of using a frequency dependent reactive part of the impedance. The incident sound wave will hit the boundary one sample per time step, making it impossible to program a frequency dependent reactive function.

The boundary can instead be modelled in the time domain as a 1-DOF mass, spring and damper system. Such a boundary is assumed to be locally reacting, meaning there is no link between adjacent boundary nodes, as illustrated in Figure 26.

Figure 26. Locally reacting 1-DOF mass, spring and damper boundary condition.

3.1. 1-DOF mass, spring and damper boundary condition

Botteldooren based his boundary condition [10] on two typical absorbing constructions:

• A thin absorbing layer on a hard background, having the specific acoustic impedance:

$$Z = Z_R + \frac{Z_K}{i\omega} \qquad (3.3)$$

• A light wall or a curtain, having the specific acoustic impedance:

$$Z = Z_R + i\omega Z_M \qquad (3.4)$$

where ZR [Pa·s/m] is the resistance, ZK [Pa·s²/m] is the stiffness and ZM [kg/m²] is the area mass of the construction.

Combining (3.3) and (3.4) provides a general boundary surface impedance:

$$Z = Z_R + i\omega Z_M + \frac{Z_K}{i\omega} \qquad (3.5)$$

To implement this surface impedance in the time domain of an FDTD system, the relationship between pressure, velocity and impedance at one discrete point is:

$$p(t) = \vec{u}(t) \cdot Z = Z_R \vec{u}(t) + i\omega Z_M \vec{u}(t) + \frac{Z_K \vec{u}(t)}{i\omega} \qquad (3.6)$$

Assuming harmonic propagation gives:

$$\vec{u}(t) = \hat{u} e^{i\omega t} \cdot \vec{n} \qquad (3.7)$$

$$\frac{d\vec{u}(t)}{dt} = i\omega \hat{u} e^{i\omega t} \cdot \vec{n} = i\omega \vec{u}(t) \qquad (3.8)$$

$$\int_{-\infty}^{t} \vec{u}(\tau)\, d\tau = \frac{\hat{u} e^{i\omega t} \cdot \vec{n}}{i\omega} = \frac{\vec{u}(t)}{i\omega} \qquad (3.9)$$

Inserting (3.8) and (3.9) into (3.6) gives:

$$p(t) = Z_R \vec{u}(t) + Z_M \frac{d\vec{u}(t)}{dt} + Z_K \int_{-\infty}^{t} \vec{u}(\tau)\, d\tau \qquad (3.10)$$

For a plane wave propagating along the positive x-axis, the boundary can be placed at a velocity node located at (i+½, j, k) with its normal pointing in the negative x-direction, as illustrated in Figure 27.

Figure 27. Sound wave propagating along the positive x-axis towards a boundary condition located at the velocity node at (i+1/2, j, k).

Without a boundary the FDTD time-stepping equation in x-direction for freely propagating waves is (from (2.55)):

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} - \underbrace{\frac{\Delta t}{\rho_0 \Delta x}\left(p_{(i+1,j,k)}^{n} - p_{(i,j,k)}^{n}\right)}_{\text{spatial 3-point midpoint Euler}} \qquad (3.11)$$

Since the pressure behind the boundary at (i+1, j, k) is unknown, the three-point midpoint Euler FD operator used in (3.11) must be replaced by a two-point backward one:

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} - \underbrace{\frac{2\Delta t}{\rho_0 \Delta x}\left(p_{(i+\frac{1}{2},j,k)}^{n} - p_{(i,j,k)}^{n}\right)}_{\text{spatial 2-point backward Euler}} \qquad (3.12)$$

Now (3.10) is discretized at the node of the boundary and the time derivative of the velocity vector is substituted by a three-point midpoint Euler:

$$p_{(i+\frac{1}{2},j,k)}^{n} = Z_R u_{x(i+\frac{1}{2},j,k)}^{n} + \frac{Z_M}{\Delta t}\left(u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} - u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}}\right) + Z_K \sum_{t=-\infty}^{n} u_{x(i+\frac{1}{2},j,k)}^{t-\frac{1}{2}}\, \Delta t \qquad (3.13)$$

The first term in (3.13), $u_{x(i+\frac{1}{2},j,k)}^{n}$, is linearly interpolated in time, giving:

$$p_{(i+\frac{1}{2},j,k)}^{n} = \frac{Z_R}{2}\left(u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} + u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}}\right) + \frac{Z_M}{\Delta t}\left(u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} - u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}}\right) + Z_K \sum_{t=-\infty}^{n} u_{x(i+\frac{1}{2},j,k)}^{t-\frac{1}{2}}\, \Delta t \qquad (3.14)$$

Which can then be simplified to:

$$p_{(i+\frac{1}{2},j,k)}^{n} = u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}}\left(\frac{Z_R}{2} - \frac{Z_M}{\Delta t}\right) + u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}}\left(\frac{Z_R}{2} + \frac{Z_M}{\Delta t}\right) + Z_K \sum_{t=-\infty}^{n} u_{x(i+\frac{1}{2},j,k)}^{t-\frac{1}{2}}\, \Delta t \qquad (3.15)$$

Inserting (3.15) into (3.12) gives:

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} - \frac{2\Delta t}{\rho_0 \Delta x}\left[-p_{(i,j,k)}^{n} + u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}}\left(\frac{Z_R}{2} - \frac{Z_M}{\Delta t}\right) + u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}}\left(\frac{Z_R}{2} + \frac{Z_M}{\Delta t}\right) + Z_K \sum_{t=-\infty}^{n} u_{x(i+\frac{1}{2},j,k)}^{t-\frac{1}{2}}\, \Delta t\right] \qquad (3.16)$$

Which can be rewritten as:

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = \frac{u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}}\left(1 - \dfrac{Z_R \Delta t}{\rho_0 \Delta x} + \dfrac{2 Z_M}{\rho_0 \Delta x}\right) - \dfrac{2\Delta t}{\rho_0 \Delta x}\left(-p_{(i,j,k)}^{n} + Z_K \displaystyle\sum_{t=-\infty}^{n} u_{x(i+\frac{1}{2},j,k)}^{t-\frac{1}{2}}\, \Delta t\right)}{1 + \dfrac{Z_R \Delta t}{\rho_0 \Delta x} + \dfrac{2 Z_M}{\rho_0 \Delta x}} \qquad (3.17)$$

Equation (3.17) can then be simplified by defining:

$$Z_{FDTD} = \frac{\rho_0 \Delta x}{\Delta t} \qquad (3.18)$$

$$\alpha = \frac{1 - \dfrac{Z_R}{Z_{FDTD}} + \dfrac{2 Z_M}{Z_{FDTD}\,\Delta t}}{1 + \dfrac{Z_R}{Z_{FDTD}} + \dfrac{2 Z_M}{Z_{FDTD}\,\Delta t}} \qquad (3.19)$$

$$\beta = \frac{2}{1 + \dfrac{Z_R}{Z_{FDTD}} + \dfrac{2 Z_M}{Z_{FDTD}\,\Delta t}} \qquad (3.20)$$

Finally, inserting (3.18) to (3.20) into (3.17) gives the boundary condition as formulated by Botteldooren:

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = \alpha\, u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} - \beta\,\frac{\Delta t}{\rho_0 \Delta x}\left(-p_{(i,j,k)}^{n} + Z_K \sum_{t=1}^{n} u_{x(i+\frac{1}{2},j,k)}^{t-\frac{1}{2}}\, \Delta t\right) \qquad (3.21)$$

Written in a general form, allowing for free sound propagation or boundaries with a normal in any direction, (3.21) can be written as:

$$u_{x(i+\frac{1}{2},j,k)}^{n+\frac{1}{2}} = \alpha_x\, u_{x(i+\frac{1}{2},j,k)}^{n-\frac{1}{2}} - \frac{\Delta t}{\rho_0 \Delta x}\left(\beta_{x+}\, p_{(i+1,j,k)}^{n} - \beta_{x-}\, p_{(i,j,k)}^{n} + \gamma_x\right) \qquad (3.22)$$

where:

$$\alpha_x = \begin{cases} 1, & \text{for no BC, or BC with } \vec{n}\cdot(1,0,0) = 0 \\ \alpha, & \text{for BC with } \vec{n}\cdot(1,0,0) \neq 0 \end{cases} \qquad (3.23)$$

$$\beta_{x+} = \begin{cases} \beta, & \text{for BC with } \vec{n}\cdot(1,0,0) > 0 \\ 0, & \text{for BC with } \vec{n}\cdot(1,0,0) < 0 \\ 1, & \text{for no BC, or BC with } \vec{n}\cdot(1,0,0) = 0 \end{cases} \qquad (3.24)$$

$$\beta_{x-} = \begin{cases} 0, & \text{for BC with } \vec{n}\cdot(1,0,0) > 0 \\ \beta, & \text{for BC with } \vec{n}\cdot(1,0,0) < 0 \\ 1, & \text{for no BC, or BC with } \vec{n}\cdot(1,0,0) = 0 \end{cases} \qquad (3.25)$$

$$\gamma_x = \begin{cases} Z_K \beta \Delta t \displaystyle\sum_{t=1}^{n} u_{x(i+\frac{1}{2},j,k)}^{t-\frac{1}{2}}, & \text{for BC with } \vec{n}\cdot(1,0,0) \neq 0 \\ 0, & \text{for no BC, or BC with } \vec{n}\cdot(1,0,0) = 0 \end{cases} \qquad (3.26)$$

It should be noted that (3.22) is second order accurate only in the absence of a boundary. At the boundaries it is reduced to first order accuracy due to the use of the spatial 2-point backward Euler operator in (3.12). The implementation of this BC might also require the Courant condition given in (2.61) to be altered to a slightly shorter time step, by a factor of $\sqrt{2/3}$, in order to ensure stability (though when ZM and ZK are small, or zero, the original condition is stable):

$$\Delta t \leq \sqrt{2/3}\,\frac{\Delta x}{c\sqrt{n_{dim}}} \qquad (3.27)$$

$$\Delta t \leq \frac{\Delta x}{c}\sqrt{\frac{1 + 2 Z_M/(\rho_0 \Delta x)}{1 + Z_K \Delta x/(\rho_0 c^{2})}} \qquad (3.28)$$

Programmed to utilize matrices, the boundary conditions from (3.22) can be added to the Matlab code from Figure 13 as shown in Figure 28.

for n = 1:TimeLength
    gamma_x = gamma_x + Zk.*beta_x.*dt.*ux;
    gamma_y = gamma_y + Zk.*beta_y.*dt.*uy;
    gamma_z = gamma_z + Zk.*beta_z.*dt.*uz;
    ux = alpha_x.*ux - dt/roh0/dx*(beta_pos_x.*p(2:end,:,:) ...
         - beta_neg_x.*p(1:(end-1),:,:) + gamma_x);
    uy = alpha_y.*uy - dt/roh0/dx*(beta_pos_y.*p(:,2:end,:) ...
         - beta_neg_y.*p(:,1:(end-1),:) + gamma_y);
    uz = alpha_z.*uz - dt/roh0/dx*(beta_pos_z.*p(:,:,2:end) ...
         - beta_neg_z.*p(:,:,1:(end-1)) + gamma_z);
    p  = p - dt*bulk/dx*([ux; zeros(1,yDim,zDim)] - [zeros(1,yDim,zDim); ux] ...
           + [uy zeros(xDim,1,zDim)] - [zeros(xDim,1,zDim) uy] ...
           + cat(3,uz,zeros(xDim,yDim)) - cat(3,zeros(xDim,yDim),uz));
end

Figure 28. Matlab code for a second order 3-D FDTD scheme, staggered in time and space, with Botteldooren boundary condition.
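The coefficient matrices used by the loop in Figure 28 can be precomputed from (3.18) to (3.26) before time stepping. The sketch below is an illustration only: it assumes that the matrices and constants from the setup of Figure 13 are already in the workspace, that the two x-walls of a box-shaped room carry the 1-DOF BC while all other walls are left rigid, and the impedance values are example numbers.

% Illustrative coefficient setup for the loop in Figure 28.
ZR = 2*roh0*c; ZM = 2; Zk = 0;                  % example boundary data (Zk used as a scalar here)
Zfdtd = roh0*dx/dt;                             % eq (3.18)
den   = 1 + ZR/Zfdtd + 2*ZM/(Zfdtd*dt);
alpha = (1 - ZR/Zfdtd + 2*ZM/(Zfdtd*dt))/den;   % eq (3.19)
beta  = 2/den;                                  % eq (3.20)

% Default values: free propagation (alpha = 1, beta+ = beta- = 1, no stiffness term).
alpha_x = ones(size(ux)); beta_pos_x = ones(size(ux)); beta_neg_x = ones(size(ux));
beta_x  = zeros(size(ux)); gamma_x = zeros(size(ux));
alpha_y = ones(size(uy)); beta_pos_y = ones(size(uy)); beta_neg_y = ones(size(uy));
beta_y  = zeros(size(uy)); gamma_y = zeros(size(uy));
alpha_z = ones(size(uz)); beta_pos_z = ones(size(uz)); beta_neg_z = ones(size(uz));
beta_z  = zeros(size(uz)); gamma_z = zeros(size(uz));

% -x wall (normal pointing in +x): boundary at the first velocity layer, eqs (3.23)-(3.26).
alpha_x(1,:,:) = alpha; beta_pos_x(1,:,:) = beta; beta_neg_x(1,:,:) = 0; beta_x(1,:,:) = beta;
% +x wall (normal pointing in -x): boundary at the last velocity layer.
alpha_x(end,:,:) = alpha; beta_pos_x(end,:,:) = 0; beta_neg_x(end,:,:) = beta; beta_x(end,:,:) = beta;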

4. Validations

Before jumping ahead and discretising a model of a complex room, it is good practice to first simulate a simple case to check how many elements per wavelength are necessary to keep the numerical errors within acceptable limits (with regard to the purpose of the simulation). Though the underlying physics of the 1-DOF mass, spring and damper BC (from now on called "1-DOF BC" for short) seems to be in order, a number of tests will be made to check how well the results match analytical solutions at normal and random incidence.

4.1. 1-DOF BC at normal incidence

The 1-DOF BC is formulated for normal incidence perpendicular to the x, y or z plane, so simulating plane waves in a virtual impedance tube is a natural first step. Using the transfer function method, as described in standard ISO 10534-2:1998 [16], the normal incidence sound absorption coefficient, α0, of a virtual sample can be studied and deviations from the analytical solution caused by numerical errors quite easily identified.

The normal incidence sound absorption coefficient, α0 [-], is:

$$\alpha_0(f) = 1 - |r(f)|^{2} \qquad (4.1)$$

where r(f) is the reflection factor of the sample.
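As an illustration of how (4.1) can be evaluated in practice, a possible Matlab post-processing sketch of the transfer function method is given below. It assumes two virtual microphone pressure signals p1 (farther from the sample) and p2 (closer to the sample), a microphone spacing s [m], a distance x1 [m] from microphone 1 to the sample, and the simulation values dt and c; these names and the FFT length are assumptions made for this example, not details taken from the report.

% Transfer function method (ISO 10534-2) applied to two recorded node signals.
p1 = p1(:); p2 = p2(:);                    % force column vectors
Nfft = 2^14;
P1 = fft(p1, Nfft); P2 = fft(p2, Nfft);
f  = (0:Nfft-1).'/(Nfft*dt);               % frequency vector [Hz]
k  = 2*pi*f/c;                             % wavenumber [1/m]
H12 = P2./P1;                              % transfer function between the two microphones
r   = (H12 - exp(-1i*k*s))./(exp(1i*k*s) - H12).*exp(2i*k*x1);   % reflection factor
alpha0 = 1 - abs(r).^2;                    % normal incidence absorption, eq (4.1)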
