
Simulated Annealing

For Large Scale Optimization in Wireless Communications

Master Thesis Report in Electrical Engineering

Authors: Tamim Sakhavat, Haithem Grissa, Ziyad Abdalrahman

Organisation: Faculty of Technology

Date: 2013-02-20

Subject: Simulated Annealing

Level: Master's


Abstract

In this thesis a simulated annealing algorithm is employed as an optimization tool for a large-scale optimization problem in wireless communication. In this application we have 100 places for transmitting antennas and 100 places for receivers, and a channel between each pair of positions in the two areas. Our aim is to find, say, the best 3 positions on each side, such that the channel capacity is maximized.

The number of possible combinations is huge, so finding the best channel with an exhaustive search would take a very long time. To solve this problem, we use a simulated annealing algorithm to estimate the best answer. The simulated annealing algorithm chooses a random element and then, following the local search algorithm, compares the selected element with its neighbourhood.

If the selected element is the maximum among its neighbours, it is a local maximum. The strength of the simulated annealing algorithm is its ability to escape from local maxima by using a random mechanism that mimics Boltzmann statistics.


Contents

1 Simulated Annealing
  1.1 Combinatorial Optimization Problem and Local Search
  1.2 The Annealing Algorithm
  1.3 Cooling Schedules
    1.3.1 Initial Value for the Control Parameter
    1.3.2 Decrement of the Control Parameter
    1.3.3 Final Value for the Control Parameter
    1.3.4 Length of Repetition
2 Channel Capacity
  2.1 Channel
  2.2 Channel Properties
  2.3 Channel Capacity Formula
3 MATLAB Codes
  3.1 Basic Program for the Fixed Temperature
    3.1.1 Algorithm
  3.2 Main Program
    3.2.1 Temperature Algorithm
    3.2.2 Main Program and Codes Explanation
  3.3 Functions
    3.3.1 Randomize
    3.3.2 Skip Equality
  3.4 GUI
    3.4.1 GUI Appearance
    3.4.2 Simulated Annealing in GUI
4 Plots and Results
  4.1 Fixed Temperature Program
  4.2 Decreasing Temperature
5 References
Appendix A MATLAB Programs
  A.1 Program for Fixed Temp
  A.2 Program for Flexible Temp
Appendix B Functions
  B.1 Randomize Function
  B.2 Skip Equality Function
Appendix C GUI Codes


Introduction

In some MIMO (multiple-input multiple-output) systems, when there are several choices to select for the inputs or outputs, finding the best selection can be difficult.

For instance, in this project we have 100 places for transmitting antennas and 100 places for receivers. Between them there is an area with objects and obstacles that cause scattering and fading of the signal, so we have a channel between each pair of positions in the transmitting and receiving areas. Our aim is to find the best 3 positions on each side, such that the fading and scattering of the signal become minimal, or in other words the SNR becomes maximal. Already for one area, the number of ordered selections is

100! / (100 − 3)! = 100 × 99 × 98 = 970200,

and counting both the transmitting and the receiving area gives 970200² = 941288040000 possibilities.

This is a very large number, so finding the best channels by exhaustive search takes a lot of time. One way to solve this problem is to use simulated annealing and estimate the best answer.

Annealing is the physical process of heating up a solid until it melts, followed by cooling it down until it crystallizes into a state with a perfect lattice [1]. The goal is to reach the ground state, where the energy of the solid is minimal.

The annealing simulation chooses a random element and then, following the local search algorithm, compares the selected element with its neighbourhood. If the selected element is the maximum among its neighbours, it is chosen as the current best solution, and the same procedure is repeated from the new element. The Boltzmann factor given below helps us test other possibilities when we are stuck in one place:

exp((Ej − Ei) / (kB T)),

where Ei is the energy of the current state, Ej that of the tested solution, kB is the Boltzmann constant, and T is the temperature. The probability of accepting j as the new solution is then:

Pc(accept j) = { 1,                         if Ej ≥ Ei
              { exp((Ej − Ei) / (kB T)),   if Ej < Ei.

We have four chapters in our report. In the first chapter we explain simulated annealing as far as we need it for this report. The second chapter is a very basic explanation of the channel capacity and the related formulas. Finally, in the last two chapters, we use MATLAB to simulate the annealing method for our project and examine the results: in the third chapter we first assume that the temperature is fixed, and then, by extending the result to a flexible temperature, we obtain a reasonable output. At the end we use the MATLAB GUI to make the program user friendly and easy to use. A sample output is shown in the figure below: in this example the algorithm first finds a solution around 15 dB, but after a while, by cooling the system down, the result improves and reaches about 22 dB.


1 Simulated Annealing

To understand annealing optimization, we start with the travelling salesman problem, which is probably the best known of all combinatorial optimization problems. We have to find the shortest route for a salesman who travels from his home town, has to visit some finite number of cities, and returns back home. The question is which route is the shortest one that visits every city once and returns home.

If n is the number of cities he has to visit on his tour, then the number of possible tours is n!. Obviously, when the number of cities grows, the number of possible solutions grows as well, and so does the cost of checking them all. So instead of checking every possible solution to find the best one, we can use an algorithm that finds a near-optimal solution, i.e. one very close to the best answer.

Figure 1: Two Random Solutions for the Travelling Salesman Problem

In figure (1) we can see two different choices the salesman has to go from his home town and come back home again.

1.1 Combinatorial Optimization Problem and Local Search

A combinatorial optimization problem can be defined as finding the solution that maximizes or minimizes a cost function. In this project we are looking for the maximizing solution.

If i_opt is the optimal solution for a combinatorial optimization problem, then we can formalize the problem as:

Maximization: f(i_opt) ≥ f(i) for all i ∈ S,
Minimization: f(i_opt) ≤ f(i) for all i ∈ S,

where f_opt = f(i_opt) denotes the optimal cost, and S_opt the set of optimal solutions.

The next step is local search. When we search around the neighbourhood of the previous candidate solution, the effort of approximating the best solution decreases considerably.

The algorithm starts from a completely random solution, then searches its neighbourhood for a better one. If a better solution is found, it replaces the old one, and this continues until no neighbour of the current solution is better. The last solution is then the locally optimal solution of the combinatorial optimization problem. The following definition makes local search precise.

Definition 1.1: Let (S, f) be an instance of a combinatorial optimization problem and let N be a neighbourhood structure. Then î ∈ S is called a locally optimal solution, or simply a local optimum, with respect to N if î is better than, or equal to, all its neighbouring solutions with regard to their cost. More specifically, in the case of minimization, î is called a locally minimal solution, or simply a local minimum, if

f(î) ≤ f(j), for all j ∈ S_î,

and in the case of maximization, î is called a locally maximal solution, or simply a local maximum, if

f(î) ≥ f(j), for all j ∈ S_î.

See [1, Chapter 1, page 8]. ∎

The MATLAB® code below sketches the local search algorithm.

i = 1;          % initial solution (any starting index)
N = 100;        % last member of the neighbourhood
for j = 1:N
    if f(i) < f(j)
        i = j;  % move to the better neighbour
    end
end


The most important things in a local search for a combinatorial optimization problem are the initial value to start from and the algorithm used to choose randomly from the neighbourhood.

The best feature of the simulated annealing algorithm is that the optimal solution depends strongly neither on the initial value nor on the random algorithm for choosing the neighbourhood.

In figure (2) we can see how the local search algorithm is able to find the maximum number in the matrix after three steps.

Figure 2: Finding the Maximum Using Local Search

The major problem with local search is that the algorithm may get stuck in a neighbourhood whose best solution is neither the maximum nor close enough to it. In the next section we introduce a control parameter that helps us avoid this sort of situation.
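The thesis's programs are written in MATLAB; purely as an illustrative sketch (the cost function and neighbourhood below are our own toy example, not the thesis's channel problem), the greedy local search and its trap can be demonstrated in a few lines of Python:

```python
def local_search(f, start, neighbours, steps=1000):
    """Greedy local search: move to the best neighbour until none is better."""
    i = start
    for _ in range(steps):
        j = max(neighbours(i), key=f)   # best neighbouring solution
        if f(j) <= f(i):                # no improvement: i is a local maximum
            return i
        i = j
    return i

# Toy cost function with a local maximum at x = 20 (f = 10)
# and the global maximum at x = 80 (f = 30).
def f(x):
    return 10 - abs(x - 20) if x < 50 else 30 - abs(x - 80)

def neighbours(x):
    return [max(0, x - 1), min(100, x + 1)]

print(local_search(f, 15, neighbours))   # climbs to the nearby peak: 20
print(local_search(f, 60, neighbours))   # only a luckier start reaches 80
```

This is exactly the trap described above: the start near x = 20 never sees the higher peak, which is what the control parameter of the next section is meant to fix.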

1.2 The Annealing Algorithm

Annealing, known as a thermal process for obtaining low-energy states of a solid in a heat bath, consists of two steps:

- Heat up the solid until it melts.
- Carefully decrease the temperature until the particles arrange themselves into a crystal.

The goal is for the whole process to reach the ground state¹ in the end. The ground state is reached only when the cooling down goes very slowly.

¹ The ground state is the state in which the energy of the solid is minimal.


The main difference between the annealing algorithm and local search is that in local search, as mentioned before, we have no control over the system during the search; the process proceeds randomly and simply stops after a while. In the annealing algorithm, in addition to choosing randomly from the neighbourhood of the current solution, we have a control parameter, which helps the algorithm escape from a trap. In plain local search, if the current solution is optimal within its neighbourhood the process stops, even though much better solutions may still exist elsewhere; this case is usually called a trap. The way out of the trap is to choose a control parameter and test the current solution against more, different candidates.

In the Metropolis algorithm for simulated annealing, the control parameter is the temperature. If we denote the energy of state i by Ei and the energy of state j by Ej, then Ej − Ei is the energy difference between states j and i. If the energy difference is less than or equal to zero, state j is accepted as the new state; otherwise state j still has a chance to be chosen, with a certain probability of

exp((Ei − Ej) / (kB T)),    (1)

where T is the temperature of the heat bath and kB is the Boltzmann constant.

Let X be a stochastic variable denoting the current state of the solid, and let Z(T) be the partition function. Then the probability for the solid to be in state i is calculated as

PT(X = i) = (1 / Z(T)) · exp(−Ei / (kB T)),    (2)

where Z(T ) is

Z(T) = ∑_{i∈S} exp(−Ei / (kB T)).    (3)

Summarizing, we can say the annealing algorithm follows the Metropolis algorithm, generating a sequence of solutions of a combinatorial problem. There are two equivalences here: first, the states of the physical system are the solutions of the problem; secondly, the energy of a state is the cost of a solution.

Definition 2.1 shows how the annealing algorithm works.

Definition 2.1: Let (S, f) denote an instance of a combinatorial optimization problem, and let i and j be two solutions with costs f(i) and f(j), respectively. Then the acceptance criterion determines whether j is accepted from i by applying the following acceptance probability:

Pc(accept j) = { 1,                        if f(j) ≤ f(i)
              { exp((f(i) − f(j)) / c),   if f(j) > f(i),    (4)

where c ∈ R+ denotes the control parameter.

Clearly, the generation mechanism corresponds to the perturbation mechanism in the Metropolis algorithm, and the acceptance criterion corresponds to the Metropolis criterion.

See [1, Chapter 2, page 15]. ∎

Based on this definition, if L denotes the number of transitions, c the control parameter, and i, j two possible solutions of the annealing problem, then a simple MATLAB® code generating an optimum solution (written in the maximization form used later for the channel capacity) could be:

for k = 1:L
    j = "new candidate solution from the neighbourhood";
    if f(i) <= f(j)
        i = j;                            % better solution: always accept
    elseif exp((f(j) - f(i))/c) > rand    % worse: accept with probability < 1
        i = j;
    end
end

As we can see in equation (4), for large c the exponent is close to zero, so the exponential is close to 1 and almost every proposed solution is accepted. As c decreases, the exponent (f(i) − f(j))/c goes to minus infinity for cost-increasing transitions, so exp((f(i) − f(j))/c) decreases towards 0, and by the second condition in definition 2.1 the probability of accepting such solutions goes to zero. Finally, when c is zero, only solutions satisfying the first condition are accepted.
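A quick numeric check of this behaviour, as a hedged Python sketch (the cost values 10 and 12 are made up for illustration):

```python
import math

def p_accept(f_i, f_j, c):
    """Acceptance probability of equation (4), minimization form."""
    if f_j <= f_i:
        return 1.0
    return math.exp((f_i - f_j) / c)

# A cost-increasing transition f(i) = 10 -> f(j) = 12
# at decreasing values of the control parameter c:
for c in (100.0, 10.0, 1.0, 0.1):
    print(c, round(p_accept(10, 12, c), 4))
# c = 100 -> 0.9802 (almost always accepted)
# c = 0.1 -> 0.0    (practically never accepted)
```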

From basic probability theory we have definition (2.2).

Definition 2.2:

1. The expected cost Ec(f) is:

   Ec(f) = ⟨f⟩c = ∑_{i∈S} f(i) P(X = i) = ∑_{i∈S} f(i) qi(c).    (5)

2. The expected squared cost Ec(f²) is:

   Ec(f²) = ⟨f²⟩c = ∑_{i∈S} f²(i) P(X = i) = ∑_{i∈S} f²(i) qi(c).    (6)

3. The variance Varc(f) of the cost function, σc², is:

   Varc(f) = σc² = ∑_{i∈S} (f(i) − Ec(f))² P(X = i)
           = ∑_{i∈S} (f(i) − ⟨f⟩c)² qi(c) = ⟨f²⟩c − ⟨f⟩c².    (7)

4. And finally the entropy² is calculated from:

   Sc = − ∑_{i∈S} qi(c) ln qi(c).    (8)

∎

The following results are also useful for understanding the concept of annealing optimization better.

Definition 2.3:

1. lim_{c→∞} ⟨f⟩c = ⟨f⟩∞ = (1/|S|) ∑_{i∈S} f(i).    (9)

2. lim_{c→0} ⟨f⟩c = f_opt.    (10)

3. lim_{c→∞} σc² = σ∞² = (1/|S|) ∑_{i∈S} (f(i) − ⟨f⟩∞)².    (11)

4. lim_{c→0} σc² = 0.    (12)

5. lim_{c→∞} Sc = S∞ = ln|S|.    (13)

6. lim_{c→0} Sc = S0 = ln|S_opt|.    (14)

∎

For the proofs and more information about Definition 2.3, see [1, Chapter 2].

² Entropy is a natural measure of the amount of disorder or information in a system.
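The limits in Definition 2.3 can be checked numerically on a toy solution space. The following Python sketch (the cost values are our own example, and qi(c) is taken as the Boltzmann distribution exp(−f(i)/c) normalized over S) illustrates items 1, 2 and 5:

```python
import math

def boltzmann(costs, c):
    """Stationary distribution q_i(c) proportional to exp(-f(i)/c)."""
    w = [math.exp(-f / c) for f in costs]
    z = sum(w)                      # the normalization constant
    return [x / z for x in w]

def expected_cost(costs, c):        # <f>_c of equation (5)
    return sum(f * q for f, q in zip(costs, boltzmann(costs, c)))

def entropy(costs, c):              # S_c of equation (8)
    return -sum(q * math.log(q) for q in boltzmann(costs, c) if q > 0)

costs = [1.0, 2.0, 3.0, 4.0]        # a toy solution space S, f_opt = 1
print(expected_cost(costs, 1e6))    # c -> inf: tends to mean(f) = 2.5
print(expected_cost(costs, 1e-2))   # c -> 0:   tends to f_opt = 1
print(entropy(costs, 1e6))          # c -> inf: tends to ln|S| = ln 4
```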


1.3 Cooling Schedules

In simulated annealing, choosing a cooling schedule is the most important decision. By choosing the best control parameter, the best decrement value for the temperature, and so on, we get a fast and efficient way to reach our goal, which is a locally optimal structure with crystal imperfections. In this section we briefly explain these important components.

Definition 3.1: A cooling schedule specifies:

- an initial value of the control parameter;
- a decrement function for decreasing the value of the control parameter;
- a final value of the control parameter, specified by a stop criterion;
- a finite length of each homogeneous repetition.

[1, Chapter 4, page 57] ∎

1.3.1 Initial Value for the Control Parameter

A very important element in cooling schedules is the acceptance ratio, defined as follows.

Definition 3.2: The acceptance ratio χ(c) in the simulated annealing algorithm is defined as:

χ(c) = (number of accepted transitions) / (number of proposed transitions).    (15)

∎

If c0 denotes the initial value of the control parameter, then c0 should be large enough that the system accepts most proposed transitions; but if c0 is too large, the cooling-down time increases. For this reason the acceptance ratio χ0 = χ(c0) should be close to 1.

In practice, c0 can be found by choosing a very small value and repeatedly multiplying it by a constant greater than 1 until χ0 = χ(c0) is close enough to 1. But to be more specific and practical, we can also use the following formula:

χ ≈ (m1 + m2 · exp(−Δf⁺ / c)) / (m1 + m2),    (16)

and c will then be calculated from:

c = Δf⁺ / ln( m2 / (m2 · χ − m1 · (1 − χ)) ),    (17)


where m1 and m2 are the numbers of transitions from i to j with

m1: f(i) ≥ f(j)  (cost-decreasing or equal transitions),
m2: f(i) < f(j)  (cost-increasing transitions),

and Δf⁺ is the average cost difference over the m2 cost-increasing transitions.

To start, we can put zero as the initial value for c0 and start the loop that calculates the sequence; the values of m1, m2 and χ are then calculated from the current c0. For more details, see the book Simulated Annealing and Boltzmann Machines by Emile Aarts and Jan Korst [1].

1.3.2 Decrement of the Control Parameter

In general, the decrement of the control parameter is calculated by multiplying c by a constant:

ck+1 = α · ck,   k = 1, 2, ...,

where α is a constant close to 1, usually between 0.8 and 0.99. More precise decrement rules can be derived, but this simple geometric rule is what we use in the following.

1.3.3 Final Value for the Control Parameter

The final value of the control parameter depends on an extrapolation of the expected cost ⟨f⟩ck as it approaches its limit. We estimate the final value using Δ⟨f⟩c, where

Δ⟨f⟩c = ⟨f⟩c − f_opt ≈ c · (∂⟨f⟩c / ∂c),    (18)

and the algorithm is terminated when

(ck / ⟨f⟩∞) · (∂⟨f⟩c / ∂c)|_{c=ck} < s.    (19)

Here s is a very small, positive number.

1.3.4 Length of Repetition

By choosing the right control parameters and the start and stop temperatures, we can cool the system down in the shortest time. From the equations below we can calculate a coefficient that controls the speed of the cooling process.

Tstop = Tstart × a^q,    (20)

then:

log(Tstop) = log(Tstart) + q · log(a),    (21)

so:

a = 10^( log10(Tstop / Tstart) / q ).    (22)

Equation (22) gives the temperature coefficient that we will use later to cool down the system.
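The same calculation can be sketched in Python (the start/stop values 100 and 10⁻⁴ follow the thesis's later example; q = 10000 is an assumed step count):

```python
import math

def cooling_coefficient(t_start, t_stop, q):
    """The coefficient a of equation (22): t_stop = t_start * a**q."""
    return 10 ** (math.log10(t_stop / t_start) / q)

t_start, t_stop, q = 100.0, 1e-4, 10000
a = cooling_coefficient(t_start, t_stop, q)
print(a)                      # slightly below 1 (about 0.99862)

# Applying the decrement q times recovers the stop temperature:
t = t_start
for _ in range(q):
    t *= a
print(t)                      # ~ 1e-4
```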


2 Channel Capacity

A channel, in antenna signal processing, is a path between transmitting and receiving elements. Each path has its own characteristics, such as gain, fading and scattering, and these characteristics determine what is called the channel capacity. In this chapter we explain this concept as far as we need it. For more information, see Fundamentals of Wireless Communication by David Tse and Pramod Viswanath.

2.1 Channel

A system in signal processing is a collection of components that together exhibit one physical behaviour, and any function that carries data or information about a specific system is called a signal.

A MIMO (multiple-input, multiple-output) system is a system with more than one input and output signal. In the antenna field, MIMO systems are also called MEAs, which stands for Multiple Element Antenna systems. In figure (3) we can see a sample MIMO system.

Figure 3: MIMO Sample Channel

As we can see in the figure, the numbers of input and output signals need not be equal, and it is possible for two input signals to affect one or more outputs. A system can also be the area between two antennas, which is the path the signal takes to reach the receiving antenna. This system is called a channel.


2.2 Channel Properties

There are many effects that influence the capacity of a channel, such as scattering, diffraction and refraction, which are predictable; in addition there are unpredictable effects, which we call noise. Noise is undesired and unwanted energy (electrical or electromagnetic) that reduces the quality of the signal at the output. In real life, noise is a random signal added to the desired signal.

Assume that Tr is the period of time in which the channel receives new information, and Ts the period of time the channel spends sending the data to the output. If H(s) is the entropy of the source, then the bit rate Br of the channel is given by:

Br = H(s) · Ts / Tr.    (23)

From the bit rate we can give another definition of the channel capacity: the capacity of the channel is the maximum amount of data we can see at the output without error, or with a negligible error. If the bandwidth of the channel is ω and Tr is as before, then:

Tr = 1 / (2ω).    (24)

With equation (24), the capacity of the channel is measured in data bits per second. In addition, if X and Y are two random variables and H(X|Y) is a conditional entropy, then the capacity of the channel is defined as:

C = max I(X; Y),  where I(X; Y) = H(Y) − H(Y|X),    (25)

and X and Y are the input and output of the channel respectively.

2.3 Channel Capacity Formula

When modelling the channel, to obtain better results and more control over it, the system should be LTI³. For this reason we assume the signal is narrowband, meaning it does not change much over time. In general, the input/output relation for a narrowband signal in a MIMO system is written with complex numbers as:

y = Hx + n.    (26)

So if y is the received signal, denoted by an (N × 1) vector, and x is the transmitted signal, denoted by an (M × 1) vector, then the channel H is an (N × M) matrix, with M input signals and N output signals. The n in equation (26) is white Gaussian noise, with the same dimension as the output, (N × 1).

³ Linear, time-invariant

To model a receiver with perfect channel knowledge, we assume the channel matrix is random and memoryless⁴.

Now we can write the matrix for the MIMO channel. The matrix H that denotes the channel has N × M elements, and each element is a complex number, with both a real and an imaginary part. If each element is denoted hnm, the channel matrix H is:

        ⎡ h11  ⋯  h1M ⎤
    H = ⎢  ⋮   ⋱   ⋮  ⎥    (27)
        ⎣ hN1  ⋯  hNM ⎦

and each complex element is given by:

hnm = α + iβ,   or   hnm = √(α² + β²) · e^(i·arctan(β/α)).    (28)

SISO⁵ is a channel where M = N = 1. The capacity of a SISO channel is the maximum of the mutual information between the input signal and the output over all admissible input distributions. If the signal-to-noise ratio is denoted SNR, then the capacity of the SISO channel is given by:

C = EH{ log2( 1 + SNR · |h11|² ) },    (29)

where EH is the expectation over all channel realizations. Because the channel has one input and one output, the matrix has just one element, h11.

But in a MIMO channel the matrix H has more than one element, so the channel capacity in this case is defined as:

C = EH{ log2[ det( I_nR + (PT / (σ² · nT)) · H·H^H ) ] }.    (30)

Here PT is the transmitted power and nT is the number of transmitting antennas. We can rewrite equation (30) as:

C = EH{ log2[ det( I_nR + (ρ / nT) · H·H^H ) ] },    (31)

where ρ = PT / σ².

⁴ The capacity can then be computed as the maximum of the mutual information.

⁵ Single-input, single-output channel


The covariance⁶ matrix of the input signal is denoted:

σx² = (PT / nT) · I_nT.    (32)

We usually assume the noise and the desired signal are uncorrelated. In general, after some calculation and estimation, we reach the main formula for the channel capacity. Assume an (m × n) channel matrix H with white Gaussian noise, and let n be the number of antennas in the transmitting area. Then the capacity of the channel is given by:

C = log2 det( I + (SNR / n) · H·H^H ),    (33)

where I is the (m × m) identity matrix and SNR is the signal-to-noise ratio.

⁶ σx², or Cov(x), of the matrix is E(X·X^H), where the superscript H denotes the Hermitian (conjugate) transpose of the matrix.
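To make equation (33) concrete, here is a small Python sketch for a 2×2 channel (the helper name and the identity-channel example are ours; a real program would use a linear-algebra library, but the 2×2 determinant is easy by hand):

```python
import math

def capacity_2x2(H, snr):
    """C = log2 det(I + (SNR/n) H H^H) of equation (33), for n = 2."""
    n = 2
    # G = H * H^H, where H^H is the Hermitian (conjugate) transpose.
    G = [[sum(H[r][k] * H[c][k].conjugate() for k in range(n))
          for c in range(n)] for r in range(n)]
    # A = I + (snr/n) * G, then the 2x2 determinant.
    A = [[(1 if r == c else 0) + (snr / n) * G[r][c]
          for c in range(n)] for r in range(n)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return math.log2(det.real)    # det of I + c*G is real and positive

# Identity channel: two parallel sub-channels with gain 1 each.
H = [[1 + 0j, 0 + 0j], [0 + 0j, 1 + 0j]]
print(capacity_2x2(H, 10.0))      # 2 * log2(1 + 5) ~ 5.17 bits/s/Hz
```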


3 MATLAB Codes

In this chapter we first use MATLAB® to simulate the annealing method for (3×3) random placements among (100×100) antenna positions. Then, by extending the result to any desired number of antennas, in any area, with any number of possible solutions, we give the reader a good perspective of how simulated annealing works. We also split the whole program into smaller parts and functions. In the end we use the MATLAB GUI® to make the program more user friendly and easier to use.

3.1 Basic Program for the Fixed Temperature

We have two sorts of algorithms for the annealing estimation: the first uses the algorithm at one specific temperature, which we call the fixed-temperature algorithm; the second cools the system down, with the temperature starting at a very high value and ending at its lowest point. In this section we discuss the fixed-temperature algorithm.

3.1.1 Algorithm

As explained before, we first start with a very basic program, with a fixed given temperature, a fixed number of input/output signals, and a fixed number of positions for the channel.

Before we go through the program, notice that to create a random channel we use “randn” instead of “rand”. The main difference is that rand in MATLAB® creates a random number between 0 and 1, while randn is a normally (Gaussian) distributed random number generator. The reason to use randn is that any telecommunication channel contains white noise, whose elements are normally (Gaussian) distributed and added to the channel coefficients. By using randn we can therefore simulate the channel more realistically.

The program written for this purpose is given in appendix (A.1).

In this program, the algorithm calculates the capacity of the given channel (here a [100×100] random channel) and in the end plots the capacity for the best (3×3) positions. As we can see in appendix (A.1), the noise power is assumed to be 1 and the signal power 10. K, the number of test repetitions, is 100000. Below we show the outputs for different temperatures.


3.2 Main Program

By “Main” we mean the algorithm that works with a decreasing temperature and performs the estimation K different times.

3.2.1 Temperature Algorithm

Before we write the program for simulated annealing, let us briefly discuss how the temperature has to change during the repetitions.

The main program should decrease the temperature after each repetition. The temperature starts at a very high value and, after repeated multiplication by a coefficient less than one, decreases to a very low value close to zero. A sample graph is shown in figure (4), where the start temperature is 100 degrees and the stop temperature is 0.0001 degrees.

Figure 4: Temperature Decrement Graph

The algorithm is very simple: we have a start temperature and a stop temperature. From equation (34) we can calculate a coefficient that decreases the temperature after each loop, cooling the system down.

The coefficient is calculated from:

Tstop = Tstart × a^q,    (34)

where Tstart and Tstop are the start and stop temperatures and q is the number of decrement steps, i.e. how fast the temperature curve should reach zero. Finally, a is the coefficient we are looking for. So we have:

log(Tstop) = log(Tstart) + q · log(a)
log(a) = ( log(Tstop) − log(Tstart) ) / q
a = 10^( log10(Tstop / Tstart) / q ),    (35)

and a in equation (35) is the coefficient used in the program.

3.2.2 Main Program and Codes Explanation

We can see the MATLAB®program for annealing algorithm in appendix (A.2).

We explain each part separately in the follow:

In the code of the Main Program in appendix (A.2), the first lines are initial values for the loops and the algorithm used later: the signal power and noise power are used for the SNR; the start and stop temperatures and q, the number of temperature steps, are used to calculate the temperature-decrement coefficient a; and finally k is the number of repetitions at each temperature. After them we can see the code:

a = 10^(log10(Tstop/Tstart)/q); %Temp coeff

Here, from Tstart, Tstop and q, MATLAB® calculates the coefficient a of equation (35). Later, multiplying the previous temperature by this coefficient gives the current temperature for each of the k repetitions.

Now we have the codes below:

Cvec = zeros(k,1);

CMean = zeros(q,1);

Tvec = zeros(q,1);

SamNum = k/10;

Sampoint = fix(linspace(1,k,SamNum));

Cvecdoc = zeros(SamNum,1);


The first three lines are:

- The Capacity vector for the channel after each repetition.

- Mean value for the capacity of the channel.

- Time vector for each repetition.

Then, at the fourth line, we sample the capacity vector at 1/10 of its points to make it a smaller vector that can be plotted later. To be clear about why this is necessary, imagine that k, the repetition count, is 1000 and q, the number of temperature steps, is 10000; then the output capacity vector would have 1000 × 10000 = 10000000 elements. After sampling at 1/100 we would have 10000000/100 = 100000 elements, which is considerably smaller than the original. We can of course change the sampling rate when the input numbers k and q are not that high; here the program uses 1/10 sampling instead of 1/100.

The next lines choose the initial random values for the chosen columns and rows, which are the positions of the antennas in the transmitting and receiving areas. The codes are:

RN = 3;
RC = 3;
hn = 100;
Rr = fix(hn*rand(1,RN)) + 1;   % initial value for random rows
Rc = fix(hn*rand(1,RC)) + 1;   % initial value for random columns

If RC is 3 and hn is 100 then the code below:

Rc = fix(hn*rand(1,RC)) + 1;

will choose three random numbers between 1 and 100. The +1 avoids getting a 0. The same code finds three random numbers for the rows. Again, we can choose any numbers for RN and RC; we then get RN and RC random positions for the transmitting and receiving antennas.

Now we use the two functions that are declared in sections (3.3.1) and (3.3.2). In the for-loop we draw new random numbers for each repetition, and these numbers must not be equal. So we have:

[Rr,Rc] = Randomize(Rr,Rc,hn);

Rr = NonEqualElements(Rr,hn);

Rc = NonEqualElements(Rc,hn);


Then it is time to calculate the channel capacity for the chosen positions Rr and Rc.

Hm= H(Rr,Rc);

Cm = log2(det((eye(RN)+(Signalvar/noise)*(Hm*Hm’))));

The first line selects the chosen random antenna positions from the channel. The next line calculates the capacity of the channel for those selected positions, using the formula mentioned before in (33):

C = log2( det( I + [SNR] · (H·H^H) ) ),    (36)

or:

C = log2( det( I + [var(Signal) / var(noise)] · (H·H^H) ) ),    (37)

and after that we use the annealing algorithm to accept or reject the solution.

if Cm > Cmax
    BR = Rr;
    BC = Rc;
    Cmax = Cm;
elseif exp((Cm-Cmax)/T) > rand
    BR = Rr;
    BC = Rc;
    Cmax = Cm;
end

In each repetition the algorithm computes the new capacity value and compares it with the previous one: if it is greater, it replaces the previous value; otherwise it may still be accepted, with the Metropolis probability exp((Cm − Cmax)/T).
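Putting the pieces together, here is a hedged end-to-end sketch in Python of what the main program does (the thesis's actual code is the MATLAB in appendix (A.2); the function names, the values of q and k, and the seed below are our own choices, and random.sample plays the role of the Randomize/NonEqualElements pair by drawing distinct positions):

```python
import math, random

def capacity(H, rows, cols, snr):
    """log2 det(I + snr * Hm Hm^H) for the 3x3 submatrix Hm = H[rows, cols]."""
    n = len(rows)
    Hm = [[H[r][c] for c in cols] for r in rows]
    G = [[(1 if i == j else 0) + snr * sum(Hm[i][k] * Hm[j][k].conjugate()
          for k in range(n)) for j in range(n)] for i in range(n)]
    # 3x3 determinant by cofactor expansion.
    det = (G[0][0] * (G[1][1] * G[2][2] - G[1][2] * G[2][1])
         - G[0][1] * (G[1][0] * G[2][2] - G[1][2] * G[2][0])
         + G[0][2] * (G[1][0] * G[2][1] - G[1][1] * G[2][0]))
    return math.log2(abs(det))

def anneal(H, snr, t_start=100.0, t_stop=1e-4, q=1000, k=10):
    size = len(H)
    a = 10 ** (math.log10(t_stop / t_start) / q)   # temperature coefficient
    rows = random.sample(range(size), 3)           # distinct random positions
    cols = random.sample(range(size), 3)
    cmax = capacity(H, rows, cols, snr)
    t = t_start
    for _ in range(q):                             # cooling loop
        for _ in range(k):                         # k trials per temperature
            nr = random.sample(range(size), 3)
            nc = random.sample(range(size), 3)
            c = capacity(H, nr, nc, snr)
            # Metropolis criterion, maximization form:
            if c > cmax or math.exp((c - cmax) / t) > random.random():
                rows, cols, cmax = nr, nc, c
        t *= a                                     # cool down
    return rows, cols, cmax

random.seed(1)
# Rayleigh-fading channel: i.i.d. complex Gaussian entries.
H = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
     for _ in range(100)]
rows, cols, cmax = anneal(H, snr=10.0)
print(rows, cols, round(cmax, 2))
```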

The rest is simple: we have the time vector, the capacity vector and the mean value for each repetition, so we can now plot the result and find the answer.

The best solutions also appear in the MATLAB® workspace, via the code below:

disp('The Best Positions for The Antennas:');

disp(BR);

disp(BC);


The plots and results are explained in the next chapter.

3.3 Functions

MATLAB® functions are .m files that are used frequently from other code and programs. Here we create two functions to make the program simpler to use: one randomizes the input/output elements, and the other skips choosing the same element twice for the inputs/outputs when more than one element is selected.

3.3.1 Randomize

This function randomizes the input/output signal positions for each period. The Randomize function is given below, and its code is in appendix (B.1):

[Rr , Rc] = Randomize(R,C,M)

We have three inputs for this function:

R: the row index vector.

C: the column index vector.

M: the maximum desired value (here the size of the channel).

For example, we start with two vectors of three elements each:

Rr = [1 2 3], and

Rc = [1 2 3].

After calling the function once with a maximum of 100, it returned Rr and Rc as below:

Rr = [1 3 3], Rc = [100 2 3].

Then we called the function in a loop with 1000 repetitions; the result was:

Rr = [9 38 91],


Rc = [94 84 36].

And we obtained completely different random numbers between 1 and 100.
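The behaviour of Randomize, moving one randomly chosen index of each vector by ±1 and wrapping around at the boundaries, can be sketched in Python as follows (the function name randomize and the rng argument are ours, for illustration; the MATLAB version in appendix (B.1) is the one actually used):

```python
import random

def randomize(Rr, Rc, hn, rng=random):
    """Pick one element of each 1-based index vector and move it by ±1,
    wrapping around at the boundaries 1 and hn."""
    Rr, Rc = list(Rr), list(Rc)
    for vec in (Rr, Rc):
        i = rng.randrange(len(vec))   # which element to perturb
        step = rng.choice((-1, 1))    # direction of the move
        v = vec[i] + step
        if v == hn + 1:               # wrap past the upper boundary
            v = 1
        elif v == 0:                  # wrap past the lower boundary
            v = hn
        vec[i] = v
    return Rr, Rc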

3.3.2 Skip Equality

Because our algorithm cannot use the same column or row element twice in the output, we wrote the following function to prevent that. The related code is in appendix (B.2).

RM = NonEqualElements(M,hn)

We have two inputs: one is our vector and the other is the maximum possible value for each element. For example, imagine a vector with 10 elements, all of them equal to 1, as below:

R = [ 1 1 1 1 1 1 1 1 1 1 ].

After applying the function to R above, with the maximum possible value of each element set to 100, the result will be:

R = [ 1 44 82 51 70 37 49 63 55 66 ].
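A simplified Python sketch of this duplicate-removal idea (the function name non_equal_elements and the rng argument are ours, for illustration; the MATLAB version in appendix (B.2) is the one actually used):

```python
import random

def non_equal_elements(M, hn, rng=random):
    """Replace duplicated entries with fresh random values in 1..hn
    until all entries of M are distinct."""
    M = list(M)
    for k in range(1, len(M)):
        # Redraw M[k] as long as it collides with an earlier entry.
        while M[k] in M[:k]:
            M[k] = rng.randint(1, hn)
    return M
```

This guarantees that the selected antenna positions are all distinct, so the submatrix H(Rr, Rc) never contains a repeated row or column.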


3.4 GUI

A GUI7 is a MATLAB® feature for presentation that makes the code more user friendly. In this section we explain how it works, and then we describe the GUI for this project.

3.4.1 GUI Appearance

After starting MATLAB®, at the top of the main window there is an icon with the shape of a pen in it, right beside the Simulink icon.

We can use the GUI feature by pushing this button or by writing GUIDE in the command window. After doing that, a new window will appear.

7 Graphical User Interface


In this window you can choose what type of GUI you need to work with, or choose from a list of GUIs you have already made.

To start, we choose the Blank GUI from the first menu. A new window is created, with the name (untitled.fig).

In this window, using the buttons on the left side for adding figures, panels, push buttons, and so on, we can build our GUI figure. For instance, the GUI in this report looks like the figure below.


By double-clicking on a selected box in this window, the Inspector window for that box will be shown, where you can edit everything about the selected box in the GUI, such as its colour, size, and position on the plate.

After putting everything in the right place, you have to save the figure.

When the figure is saved, MATLAB® creates a .m file in which every button and box has a definition and code. Now we go back to our GUI .fig file, right-click on the target button or any box, and from the pop-up menu choose “View Callback” → “Callback”. MATLAB® will open the .m file that was already created and show the place where you have to add the code for that button.

The next time you want to run the GUI, you just run the program like any other .m file; MATLAB® will show the GUI window by itself. To edit it again, use the GUI button at the top of the main window or simply write GUIDE in the command window.

3.4.2 Simulated Annealing in GUI

We have just explained how to make a GUI window, buttons, and so on. In this section we explain what each button in our GUI window does and how to plot the desired figure.

Our GUI window in this project is like:


We have tried to make the program as flexible as we can. In the first box at the top we have the inputs. From this box we can choose:

1. Number of columns
2. Number of rows
3. Channel
4. Signal power
5. Noise power

So almost everything is flexible: we are not stuck with a (3 × 3) random matrix or with (H = randn(100) + randn(100)*1i;). For the signal-to-noise ratio, the signal and noise powers are selectable too.

In the next panel, which we call Fix Temperature, there are two boxes: one for selecting the desired temperature and one for the number of repetitions. At the bottom of the panel there is a plot button to plot the desired figure.

In the Flexible Temperature panel we have more settings to change:

1. Start temperature
2. Stop temperature
3. Temperature steps
4. K, the number of repetitions for each step


And again at the bottom of the panel there is a push button; by pressing it, MATLAB® will plot the figure using the given input data.

There is another button in the plot panel, labelled Z, which zooms into the current plot.

We show an example in the figure below.

The GUI codes are given in the appendix (C).

By pressing the Z button in the GUI window, the figure pops up in a normal MATLAB figure window. There is a small difference between this GUI figure and the main figure in the previous section: as we can see here, when the algorithm finds the best solution, it keeps showing it after each repetition.

To explain more about the GUI code: we have two programs (Main and Basic) together. When we press the PLOT button in the Fix Temperature panel, the GUI runs the part of the program corresponding to the Basic code. When we press the PLOT button in the Flexible panel, the GUI runs the Flexible part of the program, which is the same as the Main code explained in the previous section.


4 Plots and Results

In this chapter we present the output of the programs and then explain each plot separately.

4.1 Fix Temperature program

In this section we test different temperatures for the fixed-temperature algorithm. First, let the temperature be 0.3 degrees and K equal to 100000 repetitions; the result is shown in figure (5).

Figure 5: Temperature is 0.3 degrees for K = 100000

As we can see in figure (5), at the beginning the program detects a best capacity value of around 16.5 [dB]. But at the end, after testing 100000 different positions (different solutions), the best channel capacity is around 21 [dB], which is obviously a great improvement. A temperature of 0.3 degrees is quite cold for a test, so we will try a higher temperature for comparison. In figure (6) we can see how the algorithm works at 0.8 degrees.

As we can see in figure (6), after heating the system up just a little, the result changes a lot and becomes very noisy; finding the best result here is quite difficult. So we cool the system down again, to a temperature of 0.01 degrees, and show the output in figure (7). The result is outstanding.


Figure 6: Temperature is 0.8 degrees for K = 100000

Figure 7: Temperature is 0.01 degrees for K = 100000

There the program finds the best solution very fast. In the following we show how the decreasing-temperature algorithm works.

4.2 Decreasing Temperature

After we run the program, the output appears in three different windows.

Because the results are based on the channel H, and the channel is given by a random matrix as below, the solutions can be completely different each time we run the program.

H = randn(hn)+randn(hn)*1i;


So the output for the channel capacity, its mean value, and the temperature in one sub-plot looks like figure (8):

Figure 8: Channel capacity, its mean value, and the temperature

To explain the results in figure (8), we first draw each plot separately.
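The temperature trace in figure (8) follows the geometric cooling schedule used by the program, where each of the q steps multiplies T by the constant factor a = 10^(log10(Tstop/Tstart)/q). A small Python sketch (the function name geometric_schedule is ours, for illustration) shows that this schedule ends exactly at Tstop:

```python
def geometric_schedule(Tstart, Tstop, q):
    """Temperatures after each of q cooling steps, each step multiplying
    by a = (Tstop/Tstart)^(1/q), so the last value equals Tstop."""
    a = (Tstop / Tstart) ** (1.0 / q)
    T, temps = Tstart, []
    for _ in range(q):
        T *= a
        temps.append(T)
    return temps
```

With Tstart = 100, Tstop = 0.01, and q = 100 (the values used in the program), the schedule spends many early steps at high temperature and many late steps near the final low temperature, which matches the temperature subplot.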

Figure 9: Channel Capacity for K = 1000 and q = 100


From figure (9) we can see that the algorithm starts by choosing random numbers as the column and row indices. Then, using the Boltzmann criterion, we try to estimate the best answer:

exp((Ci − Cmax)/T) > [0, 1), (38)

where Ci is the channel capacity of the current solution and Cmax is the best solution found so far. Here, because 100 is quite a big number compared to the difference between the maximum and the current value of the channel capacity, the exponential is almost one at the beginning, so nearly all solutions are accepted by the algorithm. We can see this even if the current solution is 10 and the maximum is 20:

exp((10 − 20)/100) = 0.9048. (39)

Because the temperature is so high, the result is very close to 1, so almost every random number drawn from [0, 1) will be less than the exponential, and the move will be accepted.
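These acceptance probabilities can be checked directly, at the hot starting temperature of 100 and, for comparison, at a lower temperature of 15 (a small illustrative Python calculation):

```python
import math

# Acceptance probability of a move from capacity 20 down to 10.
p_hot = math.exp((10 - 20) / 100)   # T = 100, as in equation (39)
p_warm = math.exp((10 - 20) / 15)   # T = 15, as in equation (40)
```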

But after a while, when the temperature reaches 15 degrees, the acceptance probability is around 50 percent and the result improves:

exp((10 − 20)/15) = 0.5134, (40)

and when the system is totally cooled down, we can see that the result does not change at all. In figure (10) we show the mean value for each repetition.

Figure 10: Mean Value of the Channel Capacity for flexible Temp


The mean value of the channel capacity in figure (10) shows that the result improves as the system cools down. It starts from around 15 [dB] and, at the end of the process, when the system is cool enough, it reaches its maximum value of around 21 [dB].

And the best antenna positions are:

Rc = [53 6 74], Rr = [91 62 13]

The only thing to add in this section is that the maximum can sometimes be found in the middle of the process: occasionally, while the system temperature is still high and the system is very unstable, the algorithm already finds the best solution.

Figure 11: Highest Capacity in the Middle of the plot

As we can see from figure (11), the maximum of the channel capacity actually occurs somewhere between repetitions 4000 and 5000, and because the temperature is not yet low enough, the algorithm loses it again. We save the result at that moment, but to be sure that no better result can be found, we have to continue until the system reaches its lowest temperature.

But the interesting point here is that the mean value of the capacity still grows as the system cools down.


Figure 12: Mean value for the Channel Capacity

And the best antenna positions in the transmitting and receiving areas are:

Rc = [81 8 20], Rr = [98 46 74]


Conclusion

In general, a specific problem has many possible solutions. If solving the problem exhaustively takes a long time, there are different ways to reach the answer faster and more easily. Here we chose simulated annealing to find one of the best solutions. The solution found by this method is not always the best possible one, but it is almost always close enough that we can count it as the best solution.

For this report, as mentioned before, we had to find 3 positions out of 100 where the channel capacity is the highest. So we first wrote MATLAB code for a very basic situation, where the temperature and the input/output antenna counts were fixed and we did not have much freedom to select different functionality.

Then, by extending the basic program to the next level, we added some functionality that helps us reach a better solution in less time. Finally, we put everything into a GUI window.




A MATLAB Programs

A.1 Program for Fix Temp

clc, clear, clf
noise = 1;                        % Noise power
Signalvar = 10;                   % Signal power
k = 100000;                       % times we run the program
Cmax = 0;                         % initial value for capacity
T = 0.3;                          % Temperature
Cvec = zeros(k,1);                % initial value for output
RN = 3;
CN = 3;
H = randn(100) + randn(100)*1i;   % 100 by 100 channel
Rr = fix(100*rand(1,RN)) + 1;     % initial value for rows
Rc = fix(100*rand(1,CN)) + 1;     % initial value for columns
for m = 1:k
    Rrowindex = fix(3*rand) + 1;  % choose a row index randomly
    Rcolindex = fix(3*rand) + 1;  % choose a column index randomly
    x = fix(2*rand);
    if x == 0                     % choose x as 1 or -1 to change
        x = -1;                   % the old matrix to a new one
    end
    NewRowNumber = Rr(Rrowindex) + x;    % new row number
    if NewRowNumber == 101
        Rr(Rrowindex) = 1;
    elseif NewRowNumber == 0
        Rr(Rrowindex) = 100;
    else
        Rr(Rrowindex) = NewRowNumber;    % new selected 1 by 3 row vector
    end
    y = fix(2*rand);
    if y == 0                     % choose y as 1 or -1 to change
        y = -1;                   % the old matrix to a new one
    end
    NewColNumber = Rc(Rcolindex) + y;    % new column number
    if NewColNumber == 101
        Rc(Rcolindex) = 1;
    elseif NewColNumber == 0
        Rc(Rcolindex) = 100;
    else
        Rc(Rcolindex) = NewColNumber;    % new selected 1 by 3 column vector
    end
    Hm = H(Rr,Rc);                % randomly selected submatrix of H
    Cm = log2(det(eye(RN) + (Signalvar/noise)*(Hm*Hm')));
                                  % 3 by 3 channel capacity
    if Cm >= Cmax
        % test the new capacity value against the previous one
        % and replace it if it is greater
        BR = Rr;
        BC = Rc;
        Cmax = Cm;
    elseif exp((Cm-Cmax)/T) > rand
        BR = Rr;
        BC = Rc;
        Cmax = Cm;
    end
    Cvec(m) = Cmax;
end
disp('The Best Positions for The Antennas:'), disp(BR); disp(BC);
plot(real(Cvec))


A.2 Program for Flexible Temp

tic;
noise = 1;                        % Noise power
Signalvar = 10;                   % Signal power
k = 1000;                         % times we run the program per step
q = 100;                          % divided time length
Cmax = 0;                         % initial capacity value
Tstart = 100; Tstop = 0.01;       % Temperatures
T = Tstart;                       % initial value for T
a = 10^(log10(Tstop/Tstart)/q);   % Temperature coefficient
Cvec = zeros(k,1);                % initial value for output
CMean = zeros(q,1);
Tvec = zeros(q,1);
SamNum = k/10;
Sampoint = fix(linspace(1,k,SamNum));  % Sampling interval
Cvecdoc = zeros(SamNum,1);
RN = 3;
RC = 3;
hn = 100;
H = randn(hn) + randn(hn)*1i;     % hn by hn channel
Rr = fix(hn*rand(1,RN)) + 1;      % initial random rows
Rc = fix(hn*rand(1,RC)) + 1;      % initial random columns
for t = 1:q
    T = T*a;
    for m = 1:k
        [Rr,Rc] = Randomize(Rr,Rc,hn);
        Rr = NonEqualElements(Rr,hn);
        Rc = NonEqualElements(Rc,hn);
        Hm = H(Rr,Rc);            % randomly selected submatrix of H
        Cm = log2(det(eye(RN) + (Signalvar/noise)*(Hm*Hm')));
                                  % channel capacity
        if Cm > Cmax
            % test the new capacity value against the previous one
            % and replace it if it is greater
            BR = Rr;
            BC = Rc;
            Cmax = Cm;
        elseif exp((Cm-Cmax)/T) > rand
            BR = Rr;
            BC = Rc;
            Cmax = Cm;
        end
        Cvec(m) = real(Cmax);
    end
    Cvecdoc((t*SamNum)-(SamNum-1):t*SamNum) = Cvec(Sampoint);
    Tvec(t) = T;
    CMean(t) = mean(Cvec);
end
t = toc;
disp('time'), disp(t)
disp('The Best Positions for The Antennas:'), disp(BR); disp(BC)
subplot(2,2,4), plot(CMean)
subplot(2,2,3), plot(Tvec)
subplot(2,2,1:2), plot(Cvecdoc)


B Functions

B.1 Randomize function

function [Rr,Rc] = Randomize(Rr,Rc,hn)
RR = length(Rr);
RC = length(Rc);
Rrowindex = fix(RR*rand) + 1;
Rcolindex = fix(RC*rand) + 1;
x = fix(2*rand);
if x == 0
    x = -1;
end
NewRowNumber = Rr(Rrowindex) + x;
if NewRowNumber == hn+1
    Rr(Rrowindex) = 1;
elseif NewRowNumber == 0
    Rr(Rrowindex) = hn;
else
    Rr(Rrowindex) = NewRowNumber;
end
y = fix(2*rand);
if y == 0
    y = -1;
end
NewColNumber = Rc(Rcolindex) + y;
if NewColNumber == hn+1
    Rc(Rcolindex) = 1;
elseif NewColNumber == 0
    Rc(Rcolindex) = hn;
else
    Rc(Rcolindex) = NewColNumber;
end


B.2 Skip Equality function

function RM = NonEqualElements(M,hn)
L = length(M);
for k = 2:L
    for n = 1:k-1
        while M(k) == M(n)
            a = fix(hn*rand) + 1;
            for i = 1:n
                if M(i) == a
                    break
                end
                if i == n
                    M(k) = a;
                end
            end
        end
    end
end
RM = M;


C GUI codes

function varargout = GUIAnnealing(varargin)
% GUIANNEALING M-file for GUIAnnealing.fig
%      GUIANNEALING, by itself, creates a new GUIANNEALING or raises the
%      existing singleton*.
%
%      H = GUIANNEALING returns the handle to a new GUIANNEALING or the
%      handle to the existing singleton*.
%
%      GUIANNEALING('Property','Value',...) creates a new GUIANNEALING using
%      the given property value pairs. Unrecognized properties are passed via
%      varargin to GUIAnnealing_OpeningFcn. This calling syntax produces a
%      warning when there is an existing singleton*.
%
%      GUIANNEALING('CALLBACK') and GUIANNEALING('CALLBACK',hObject,...) call
%      the local function named CALLBACK in GUIANNEALING.M with the given
%      input arguments.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
%      See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help GUIAnnealing

% Last Modified by GUIDE v2.5 22-Nov-2012 16:01:15

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @GUIAnnealing_OpeningFcn, ...
                   'gui_OutputFcn',  @GUIAnnealing_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before GUIAnnealing is made visible.
function GUIAnnealing_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   unrecognized PropertyName/PropertyValue pairs from the
%            command line (see VARARGIN)

% Choose default command line output for GUIAnnealing
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes GUIAnnealing wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = GUIAnnealing_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

% --- Executes on button press in PLOT.
function PLOT_Callback(hObject, eventdata, handles)
% hObject    handle to PLOT (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

function SP_Callback(hObject, eventdata, handles)
% hObject    handle to SP (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of SP as text
%        str2double(get(hObject,'String')) returns contents of SP as a double

% --- Executes during object creation, after setting all properties.
function SP_CreateFcn(hObject, eventdata, handles)
% hObject    handle to SP (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function NP_Callback(hObject, eventdata, handles)
% hObject    handle to NP (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of NP as text
%        str2double(get(hObject,'String')) returns contents of NP as a double

% --- Executes during object creation, after setting all properties.
function NP_CreateFcn(hObject, eventdata, handles)
% hObject    handle to NP (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function FK_Callback(hObject, eventdata, handles)
% hObject    handle to FK (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of FK as text
%        str2double(get(hObject,'String')) returns contents of FK as a double

% --- Executes during object creation, after setting all properties.
function FK_CreateFcn(hObject, eventdata, handles)
% hObject    handle to FK (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function FT_Callback(hObject, eventdata, handles)
% hObject    handle to FT (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of FT as text
%        str2double(get(hObject,'String')) returns contents of FT as a double

% --- Executes during object creation, after setting all properties.
function FT_CreateFcn(hObject, eventdata, handles)
% hObject    handle to FT (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function Tstart_Callback(hObject, eventdata, handles)
% hObject    handle to Tstart (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Hints: get(hObject,'String') returns contents of Tstart as text
%        str2double(get(hObject,'String')) returns contents of Tstart as a double

% --- Executes during object creation, after setting all properties.
function Tstart_CreateFcn(hObject, eventdata, handles)
% hObject    handle to Tstart (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    empty - handles not created until after all CreateFcns called

% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function Tstop_Callback(hObject, eventdata, handles)
