Even Small Optimisations Pay Off

The major part of the execution time of Uppaal is spent during verification on operations on symbolic states, either during state-space traversal or during the computation of successor states.

These operations are called at least once for every symbolic state generated. Even if each operation consumes very little time on its own, they are called so many times that a small speed-up of such an operation increases overall performance quite noticeably. Therefore even small optimisations pay off: five or six optimisations that each reduce the verification time by about 10% will together cut it down by a factor of two.
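Such reductions compound multiplicatively: six successive reductions of 10% each leave 0.9^6 ≈ 0.53 of the original verification time, i.e. roughly half.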

Imposing Restrictions

Improvements in performance may be achieved if dynamic memory allocation is kept to a minimum. It is better to impose restrictions on how many components can be verified, how many clocks a DBM may contain, how large the domains of data variables may be, and so on.

In chapter 4 we studied the result of compressing the discrete part of a symbolic state. By restricting the number of reachable states the tool can handle, we were able to use the built-in data types and operators of the implementation language more efficiently. We may lose the ability to verify all possible timed automata, which we cannot do anyway, but we obtain good performance for many examples. One way of achieving an efficient mapping to static data types while keeping generality would be to use partial evaluation: given a model and a set of properties, produce a verifier optimised for exactly that model and those properties.
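To illustrate the idea, the following hypothetical C++ sketch specialises the state representation at compile time for fixed model bounds. The names and layout are assumptions for the example, not Uppaal's actual design, but a partial evaluator could emit such an instantiation for each concrete model.

    #include <array>
    #include <cstdint>

    // Hypothetical sketch: a state type specialised at compile time for
    // fixed model bounds, so the discrete part maps onto built-in types
    // instead of dynamically allocated containers.
    template <int Processes, int Clocks, int IntVars>
    struct State {
        std::array<std::uint8_t, Processes> location;  // one control location per process
        std::array<std::int16_t, IntVars> vars;        // bounded data variables
        // DBM over Clocks clocks plus the reference clock, stored flat:
        std::array<std::int32_t, (Clocks + 1) * (Clocks + 1)> dbm;
    };

    // A partial evaluator could emit a concrete instantiation per model,
    // e.g. State<4, 4, 2>, letting the compiler lay out each symbolic
    // state as one flat, fixed-size block.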

Symbolic States

In chapter 3 we examined the use of static analysis to build data structures that helped us speed up the verification. Similarly, the technique of finding entry states in chapter 5 led to better performance with respect to both space and time. Perhaps it is possible to find an even more efficient representation of the transition relation or the state-space by utilising such static pre-processing.

A transformation that preserves the relationship between matrix elements would be useful to reduce the size of the DBMs and to increase the efficiency of handling them.

Symbolic State-Spaces

The hash function used in Uppaal today assumes that all states will be visited nearly the same number of times. This is obviously not the case, so it is worth asking whether better hash functions can be found that utilise the network structure to determine which states will be reached more often, and hence spread the states better, reducing the chain lengths in the hash table. The use of heuristics and approximations for probabilistic verification might also be worth studying further.
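By way of illustration, the following is a minimal sketch of a state hash of the kind discussed; the state layout and the multiplier are assumptions for the example, not Uppaal's actual hash function.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Minimal sketch: combine the discrete part of a symbolic state
    // (control locations and data variables) into one hash value.
    std::size_t hash_state(const std::vector<std::uint8_t>& locations,
                           const std::vector<std::int32_t>& variables) {
        std::size_t h = 0;
        for (std::uint8_t l : locations)
            h = h * 31 + l;                            // mix in control locations
        for (std::int32_t v : variables)
            h = h * 31 + static_cast<std::size_t>(v);  // mix in data variables
        return h;  // the caller reduces modulo the hash table size
    }

A structure-aware variant could weight or reorder the components above according to a static analysis of the network, so that states reached often land in short chains.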

Memory Management

One interesting approach to examine is the use of free-lists to avoid the "reallocation" of memory blocks that occurs. However, the studies in chapter 6 will still be useful, because we still need a strategy for deciding which part of the free-list to re-use at a given time in order to achieve good locality.

Garbage collection can be used to reduce fragmentation. We can allocate larger chunks of memory, each able to hold many symbolic states, at one time. Deallocation of a symbolic state is then postponed until a sufficiently large block of consecutive states is ready for deallocation.

It may be the case that we end up writing our own special-purpose memory manager. This has a number of advantages: we get total control over allocation order and memory layout, and we reduce memory consumption and fragmentation because we can allocate memory blocks no larger than what is needed to store a symbolic state. We saw in chapter 4 that the reduction technique for DBMs seems to rely on such a memory manager in order to achieve lower memory consumption.
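As an illustration, below is a minimal sketch of such a special-purpose manager, assuming every symbolic state occupies a block of one fixed size (at least sizeof(void*) bytes and a multiple of the machine alignment). Blocks are carved out of large chunks and recycled through a LIFO free-list, so the most recently freed block, which is likely still in the cache, is reused first; this is one possible answer to the locality question raised above.

    #include <cstddef>
    #include <vector>

    // Sketch of a fixed-size block pool for symbolic states.
    class StatePool {
    public:
        StatePool(std::size_t block_size, std::size_t blocks_per_chunk)
            : block_(block_size), per_chunk_(blocks_per_chunk),
              used_(blocks_per_chunk) {}

        ~StatePool() {
            for (char* c : chunks_) delete[] c;
        }

        void* allocate() {
            if (free_) {                 // reuse the most recently freed block
                Node* n = free_;
                free_ = n->next;
                return n;
            }
            if (used_ == per_chunk_) {   // current chunk exhausted: grab a new one
                chunks_.push_back(new char[block_ * per_chunk_]);
                used_ = 0;
            }
            return chunks_.back() + block_ * used_++;
        }

        void deallocate(void* p) {       // push onto the free-list, LIFO order
            Node* n = static_cast<Node*>(p);
            n->next = free_;
            free_ = n;
        }

    private:
        struct Node { Node* next; };
        std::size_t block_, per_chunk_, used_;
        Node* free_ = nullptr;
        std::vector<char*> chunks_;
    };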

Architectures of Verifiers

We have recently started to look into a client-server design of the verifier. Verification tools require large computing resources when performing the computations, and users often need to buy expensive machines. However, the tools used for drawing and developing the model do not require that much computing power. It therefore makes sense to run the verification engine on one machine and the user interface on another. The two parts do not even need to be implemented in the same programming language. Developing a model is done interactively, in an iterative process of verifying, simulating and refining the model.

To make this development more efficient, it makes sense to ask which parts of the state-space may be re-used when the model or property changes slightly.

Measurements and Examples

This appendix describes the examples used throughout the thesis in the experiments performed. There is a short overview of each example together with references to more extensive treatments. The appendix also contains information on how the measurements were performed, such as details about the equipment used.

A.1 The Measurement Scheme

Memory consumption is measured by counting the maximum number of pages allocated by the process during execution and multiplying it by the page size of the operating system. This does not give the exact number of bytes required to represent the data in Uppaal; rather, it gives the number of bytes that the operating system has returned in response to memory allocation requests from the process running Uppaal. Since the counting of pages is done by polling, there is a small probability that the process requests and gets a few additional pages during the remaining execution time after the last poll. By choosing short polling intervals this error can be reduced to a few pages.
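As an illustration, a sketch of such a polling probe, assuming the Linux /proc interface: the first field of /proc/<pid>/statm holds the total number of pages mapped by the process, and a monitor would sample this value at short intervals and keep the maximum seen.

    #include <fstream>
    #include <string>
    #include <unistd.h>

    // Read the number of pages currently mapped by a process and
    // convert it to bytes using the system page size.
    long mapped_bytes(pid_t pid) {
        std::ifstream statm("/proc/" + std::to_string(pid) + "/statm");
        long pages = 0;
        statm >> pages;                        // total program size, in pages
        return pages * sysconf(_SC_PAGESIZE);  // convert pages to bytes
    }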

When measuring process execution time in a multi-tasking environment one has to be careful. The elapsed time from when Uppaal starts until it exits, as reported by the operating system, may be a bad measure of the actual execution time, because other processes get scheduled, interrupting Uppaal while they execute. To avoid this, we use the sum of the user time and the system time reported for the Uppaal process by the operating system. The user time is the CPU time devoted to the process, and the system time is the time the process is blocked waiting for the operating system to perform requests from the application, not including time spent by other processes. To make sure that the times are accurate, the process is executed multiple times and a mean value is used. Due to the complex cache management strategies used by operating systems, a single execution sometimes produces a result that deviates strongly from the others. Such statistically insignificant values have been removed before computing the mean. It makes sense to estimate the errors as a fraction of a percent.
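For illustration, a minimal sketch of this timing scheme: the verifier is run as a child process and the user and system times charged to it by the kernel are read via wait4. The binary name and arguments are placeholders, not the actual command line used in the experiments.

    #include <cstdio>
    #include <sys/resource.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
        pid_t pid = fork();
        if (pid == 0) {
            // Placeholder invocation of the verifier under measurement.
            execlp("verifyta", "verifyta", "model.xml", (char*)NULL);
            _exit(127);                   // exec failed
        }
        int status;
        struct rusage ru;
        wait4(pid, &status, 0, &ru);      // reap child, collect its resource usage
        double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
        double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
        std::printf("user + system time: %.3f s\n", user + sys);
        return 0;
    }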

The measurements in chapters 3 to 5 all follow the described method and are performed on a 200 MHz Pentium Pro with 256 MB of physical RAM running Linux with 2.2.x kernels. The measurements in chapter 6 on memory management are performed in a slightly different way.

The hardware used is a 75 MHz Pentium with 8 MB of physical RAM running Linux with 2.0.x kernels. More importantly, the time spent by the operating system in deallocating memory used by a process is not accounted for correctly in the system time returned by the operating system. Therefore we are forced to measure the elapsed time instead, and we have attempted to reduce unnecessary errors by removing as many other executing processes as possible. One can therefore expect somewhat lower confidence, but the error should still be small, no more than a few percent.