

7.8 Real-time issues

It would be interesting to study the difference between schedulers implemented in Java and in C. Java schedulers benefit from a Java interface to the scheduler that enables simple scheduling modifications. Schedulers in C should be faster, but must be compiled together with the machine. The performance, the context-switch time, and the code size should be compared.
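
As an illustration, a Java-level scheduler interface might look roughly like the sketch below. The interface and the round-robin policy are hypothetical and only indicate the kind of scheduling modifications such an interface would enable; they are not the IVM's actual API.

    // A hypothetical Java-level scheduler interface; not the IVM's actual API.
    public interface Scheduler {
        void threadReady(Thread t);      // called when a thread becomes ready to run
        void threadBlocked(Thread t);    // called when the running thread blocks or terminates
        Thread selectNextThread();       // called at each context-switch point
    }

    // A trivial round-robin policy written against the sketched interface.
    class RoundRobinScheduler implements Scheduler {
        private final java.util.ArrayDeque<Thread> readyQueue = new java.util.ArrayDeque<>();

        public void threadReady(Thread t)   { readyQueue.addLast(t); }
        public void threadBlocked(Thread t) { readyQueue.remove(t); }

        public Thread selectNextThread() {
            Thread next = readyQueue.pollFirst();
            if (next != null) {
                readyQueue.addLast(next);    // rotate to the tail for the next round
            }
            return next;                     // null means nothing is ready to run
        }
    }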

It would also be interesting to implement threads both in Java and in C.

A scheduler is responsible for the execution in its environment. Other schedulers may be instantiated inside the environment of another scheduler. The combination of many schedulers in one application should be studied in more detail. Communication between real-time environments within one application has not been studied in detail, to my knowledge.
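
The nesting could be sketched roughly as below, where a parent scheduler treats a whole child environment as one schedulable unit and the child environment schedules its own tasks. The classes are purely illustrative, not the IVM's design.

    // Purely illustrative sketch of nested scheduling environments.
    import java.util.ArrayDeque;

    class NestedSchedulingSketch {
        interface Schedulable { void runSlice(); }

        // A plain task executes a bounded slice of work when scheduled.
        static class Task implements Schedulable {
            public void runSlice() { /* bounded amount of work */ }
        }

        // A child environment is itself schedulable: when the parent gives it the
        // processor, it picks among its own tasks, here in round-robin order.
        static class ChildEnvironment implements Schedulable {
            private final ArrayDeque<Schedulable> ready = new ArrayDeque<>();

            void add(Schedulable s) { ready.addLast(s); }

            public void runSlice() {
                Schedulable next = ready.pollFirst();
                if (next != null) {
                    next.runSlice();
                    ready.addLast(next);
                }
            }
        }
    }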

Native threads co-existing with Java threads

It would be interesting to study threads in Java that are mapped to the thread handling routines of the underlying operating system. Benefits from, for example, a real-time kernel could then be integrated into the VM. A combination of native threads and threads handled by Java would be interesting to study in detail.

Native threads require a stack for every thread. Determining the WCLM of these stacks is relevant. A study of the native thread memory utilisation could be integrated into the memory analyser, if the code is annotated.

Predictable C-stack sizes

The size of the C-stack may vary during execution, especially if native methods are utilised. A predictable VM must be able to keep track of the WCLM of its C-stack. If native methods do not allocate dynamic memory, or call other methods outside the machine, the memory consumption of the C-stack can be determined by profiling. To enable recursive native method calls, a more thorough analysis has to be performed to determine the WCLM for the C-stack. With some annotations about the memory behaviour in the native C-methods, a memory profiler can provide the WCLM.
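
As a sketch of such annotations, a native method could carry a declared upper bound on its C-stack usage, which a memory profiler could sum along the worst-case call chain. The annotation and the method below are hypothetical, not an existing API.

    // Hypothetical annotation declaring the worst-case C-stack usage of native code.
    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface CStackUsage {
        int bytes();                        // upper bound on C-stack consumption
        int maxRecursionDepth() default 1;  // bound required if the native code recurses
    }

    class SensorDriver {
        // The analyser would account for 256 bytes of C-stack whenever this
        // native call appears on the worst-case path.
        @CStackUsage(bytes = 256)
        native int readSensor(int channel);
    }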

Different context switching points

Currently the context switch may be performed after the execution of a bytecode. Other context switching points would be interesting to study, for example, after each line of source code ([SIM89]), after the execution of a method invocation and backward jump, or in specific context switching bytecodes that are inserted by the class loader. Executing a given number of bytecodes before a context switch would enable the benefits of RISC architectures; the processor registers then only need to be written back before a context switch.
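
A dispatch loop that checks for a pending context switch only after a fixed number of bytecodes might look like the following sketch; the names and the interval are illustrative, not the IVM's implementation.

    // Illustrative dispatch loop: context switches are only taken every
    // SWITCH_INTERVAL bytecodes, so processor registers need to be written
    // back only at those points.
    class InterpreterLoopSketch {
        static final int SWITCH_INTERVAL = 32;  // bytecodes between switch checks

        byte[] code;   // bytecodes of the current method
        int pc;        // index of the next bytecode

        InterpreterLoopSketch(byte[] methodCode) { code = methodCode; }

        void run() {
            int executed = 0;
            while (true) {
                executeBytecode(code[pc]);           // interpret one bytecode, advancing pc
                if (++executed == SWITCH_INTERVAL) {
                    executed = 0;
                    if (contextSwitchPending()) {
                        reschedule();                // write back registers, pick next thread
                    }
                }
            }
        }

        void executeBytecode(byte opcode) { pc++; /* dispatch on the opcode */ }
        boolean contextSwitchPending()    { return false; }
        void reschedule()                 { }
    }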

Allowing preemption anywhere during execution, even in the middle of a bytecode, is another interesting approach. The IVM could serve as a test bench for preemption, and it could be compared with the other context switching alternatives.

Real-time application debugging

Real-time applications often tend to be more complex to debug than ordinary applications. One common problem is to reproduce the error. With the IVM, a debugging context switch could be performed after every bytecode to increase the predictability of the multi-threaded program. This extreme thread switching could also put pressure on the functionality of the application. Some real-time errors could be forced to appear, and repeated, with this kind of extreme context switching.

Memory efficient synchronisation

The Java Language Specification states that every object should have a lock. However, in embedded systems, the locks take considerable memory space, and not all the locks are utilised during runtime. An idea to circumvent the unnecessary memory consumption is to give the impression that every object has a lock, while only the necessary objects are actually equipped with locks. This can be achieved in many ways:

• Lazy evaluation – create locks as they are needed. This approach is time-consuming and burdens real-time applications.

• Lock pool – create a limited number of locks that are reused. This is time efficient since all locks are created during the start of an application. However, the number of locks may be difficult to determine.

• Static analysis – the application is analysed before runtime and the necessary locks are created. The ability to download new classes is prohibited with static analysis. Code outside the analysis may utilise objects as locks that are not determined as locks by the analyser.

• Dynamic analysis – give every thread its own lock, which the thread utilises to lock objects (see [Blo00]).

It would be interesting to study the efficiency and the memory consumption of the different approaches.
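
As one example, the lock-pool alternative could be sketched as below, assuming a fixed-size pool of lock objects shared by all Java objects; the class and the sizing are hypothetical.

    // Hypothetical lock pool: all locks are created at start-up and objects are
    // mapped onto them, so not every object needs its own lock.
    import java.util.concurrent.locks.ReentrantLock;

    class LockPool {
        private final ReentrantLock[] locks;

        LockPool(int size) {
            locks = new ReentrantLock[size];
            for (int i = 0; i < size; i++) {
                locks[i] = new ReentrantLock();
            }
        }

        // Map an object to one of the pooled locks via its identity hash code.
        ReentrantLock lockFor(Object o) {
            int index = (System.identityHashCode(o) & 0x7fffffff) % locks.length;
            return locks[index];
        }
    }

Entering and leaving a monitor would then translate into lockFor(obj).lock() and lockFor(obj).unlock(); two unrelated objects may share a lock, which trades memory for possible false contention.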

WCET analysis

The determination of the worst-case execution time for a Java program should be performed by adding the WCETs of all the bytecodes in the most time-consuming execution path of the program. The bytecode execution times are calculated, for a deterministic processor, by adding up the execution times of the binary code that implements each bytecode. The WCET of a bytecode must also incorporate the time to read and locate the next bytecode.

After the execution of every bytecode, the interpreter checks if there is a pending context switch, and if so, the active thread is rescheduled. The WCET of the scheduler must also be added in the scheduling analysis.

The scheduling analysis determines if the application is schedulable.
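
The summation could be illustrated roughly as follows, assuming a table of per-bytecode WCETs that already include the read-and-locate overhead, and a fixed WCET for one scheduler invocation; all names and units are illustrative.

    // Illustrative WCET summation along the worst-case execution path.
    class WcetSketch {
        static long programWcet(int[] worstCasePath,    // opcodes on the worst-case path
                                long[] bytecodeWcet,    // cycles per opcode, incl. dispatch
                                long schedulerWcet,     // cycles for one scheduling decision
                                int contextSwitches) {  // bound on switches along the path
            long sum = 0;
            for (int opcode : worstCasePath) {
                sum += bytecodeWcet[opcode];
            }
            return sum + (long) contextSwitches * schedulerWcet;
        }
    }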

Real-time garbage collector

The exact and incremental RTGC is scheduled as a thread. Higher-priority threads can interrupt its execution. Lower-priority threads are not considered time critical; they are executed after the high-priority threads have had their memory allocations managed by the GC thread. The memory management of low-priority threads is performed incrementally as allocations occur in the code, while high-priority threads only perform minimal memory management work when they are running.
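
A rough sketch of how allocation could be coupled to the incremental GC work is given below: low-priority threads pay for collection incrementally at every allocation, while high-priority threads only take memory that the GC thread has already prepared. The class and constants are illustrative, not the IVM's collector.

    // Illustrative coupling of allocation and incremental GC work.
    class IncrementalGcSketch {
        static final int GC_INCREMENTS_PER_ALLOCATION = 4;

        Object allocate(int size, boolean highPriority) {
            if (!highPriority) {
                // Low-priority threads perform bounded GC increments as allocations occur.
                for (int i = 0; i < GC_INCREMENTS_PER_ALLOCATION; i++) {
                    performGcIncrement();
                }
            }
            // High-priority threads allocate from memory prepared in advance by the
            // GC thread, keeping their own memory management work minimal.
            return takeFromPreparedMemory(size);
        }

        void performGcIncrement()               { /* scan or evacuate a small, bounded unit */ }
        Object takeFromPreparedMemory(int size) { return new byte[size]; }
    }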

Real-time analysis feedback to the programmer

The real-time analysis could be included in a tool that provides the programmer with feedback from the analysis. The execution time of a code sequence could be shown and utilised in a scheduling analysis. A real-time extension of an existing incremental development tool would be preferable, e.g. Eclipse (see [Eclipse]) or applab (see [Bja97]).

The tool should also show the worst-case live memory. The program has to be annotated with memory comments to support the WCLM analysis. As the annotations are changed, the memory analysis is performed again.
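
The memory comments could, for instance, take the form of a hypothetical annotation that states the claimed worst-case live memory of a method; when such a value changes, the tool would re-run the WCLM analysis and report any mismatch. @WorstCaseLiveMemory is not an existing API.

    // Hypothetical memory annotation supporting the WCLM analysis.
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    @Retention(RetentionPolicy.CLASS)
    @interface WorstCaseLiveMemory {
        int bytes();   // claimed upper bound on live memory while the method runs
    }

    class Controller {
        @WorstCaseLiveMemory(bytes = 512)     // checked against the analyser's result
        void controlStep() {
            double[] state = new double[32];  // 32 * 8 = 256 bytes live during the step
            // ... compute the next control signal ...
        }
    }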

Periodic jitter

The preemptive context switches in the IVM are performed only after the execution of a bytecode. The time to finish the bytecode execution imposes an extra time overhead that has to be taken into account during scheduling analysis and in the control loop. The occurrences of the periodic jitter for two threads are depicted in Figure 7.2. The figure also shows a presumed distribution of jitter times, where the worst-case jitter time corresponds to the execution of the longest bytecode.

Figure 7.2 Threads have to finish their currently executing bytecode before a context switch can occur.

[Figure content: a timeline of threads A and B showing the jitter time before each context switch, and a runtime jitter time distribution (frequency over jitter time) with the most common, average-case, and worst-case jitter times marked.]