Improving the Schedulability of Real Time Systems under Fixed Preemption Point Scheduling

Mälardalen University Licentiate Thesis 270

Filip Marković

IMPROVING THE SCHEDULABILITY OF REAL TIME SYSTEMS UNDER FIXED PREEMPTION POINT SCHEDULING

ISBN 978-91-7485-390-2
ISSN 1651-9256
2018

Mälardalen University
Address: P.O. Box 883, SE-721 23 Västerås, Sweden
Address: P.O. Box 325, SE-631 05 Eskilstuna, Sweden
E-mail: info@mdh.se
Web: www.mdh.se

Abstract

During the past decades of research in real-time systems, non-preemptive scheduling and fully preemptive scheduling have been extensively investigated, as well as compared with each other. However, it has been shown that neither of the two scheduling paradigms dominates the other in terms of schedulability. In this context, Limited Preemptive Scheduling (LPS) has emerged as an attractive alternative with respect to, e.g., increasing the overall system schedulability, efficiently reducing the blocking by lower priority tasks (compared to non-preemptive scheduling), and efficiently controlling the number of preemptions, and thereby the overall preemption-related delay (compared to fully-preemptive scheduling). Several approaches within LPS enable the above-mentioned advantages. In our work, we consider Fixed Preemption Point Scheduling (LP-FPP), as it has been shown to effectively reduce the preemption-related delay compared to other LPS approaches. In particular, LP-FPP facilitates a more precise estimation of the preemption-related delays, since the preemption points of a task in LP-FPP are explicitly selected during the design phase, unlike the other LPS approaches where the preemption points are determined at runtime. The main goal of the proposed work is to improve the schedulability of real-time systems under the LP-FPP approach. We investigate its use in different domains: single core hard real-time systems, partitioned multi-core systems, and real-time systems which can occasionally tolerate deadline misses. We enrich the state of the art for single core hard real-time systems by proposing a novel cache-related preemption delay analysis that reduces the pessimism of previously proposed methods.
In the context of partitioned multi-core scheduling, we propose a novel partitioning criterion for Worst-Fit Decreasing based partitioning, and we also contribute a comparison of existing partitioning strategies for LP-FPP scheduling. Finally, in the context of real-time systems which can occasionally tolerate deadline misses, we contribute a probabilistic response time analysis for LP-FPP scheduling and a preemption point selection method for reducing the deadline misses of the tasks.

Sammanfattning

During the past decades of research in real-time systems, preemptive and non-preemptive scheduling have been thoroughly explored, and the two paradigms have been compared in many different respects. It has been shown, however, that neither of them dominates the other in terms of schedulability. For example, non-preemptive scheduling introduces blocking costs from lower-priority tasks, while preemptive scheduling incurs extra costs for preemption-related delays. In this context, limited preemptive scheduling has been presented as an attractive alternative with respect to, e.g., increasing the overall schedulability, reducing blocking from lower-priority tasks, and controlling the number of preemptions, and thereby the total preemption-related delay. There are several variants of limited preemptive scheduling that enable the above advantages. In our work we focus on Fixed Preemption Point Scheduling (LP-FPP), since it has been shown to effectively reduce the preemption-related delay compared to the other alternatives. In particular, LP-FPP enables a more precise estimation of preemption-related delays, since the possible preemption points of a task are explicitly defined during the design phase, unlike the other variants of limited preemptive scheduling where the preemption points are determined at runtime. The main goal of this work is to improve the schedulability of real-time systems that use LP-FPP. We further investigate its use in different domains, such as probabilistic real-time systems and partitioned multicore systems, and propose an integrated analysis of cache-related preemption delay and schedulability, with less pessimism than existing methods.


Acknowledgment

Since my childhood I have had a desire to explore, discover and pour out my curiosity, which eventually crystallised into a more concise term that we often use in recent years of my life – research. I strongly believe that science and art are among the most important entities of civilisation, and moreover that research and creation are its most important forces. This belief did not appear in my mind from nowhere. I owe this premise of scientific curiosity to many people, of whom I will mention a few that I believe were the most influential. First is my father Milan Marković, who revealed many doors and locks of science to my mind and gave me amazing support throughout the years. Second is my grandfather Rajko Bracanović, who showed me the importance of history and of the written word, and who gave me the key to questioning everything without dogmatically believing in anything. Third is my mother Tatjana Marković, who gave me unconditional, infinite love, without which I would resemble a machine more than a human being.

Education is a very important gardener of any young mind. Some of the most important persons who cultivated my mind with amazing ideas, facts and thoughts are: Vera Radović (my lovely elementary school teacher), Miomir Anđić (my math professor), Vladan Devedžić (my programming professor), Žana Knežević (my English language professor), Slobodan Backović (former dean of the Mediterranean University), Adnan Čaušević (my Master thesis mentor), and all the other persons who taught me anything during my education. The institutions which I admire for their contribution to my life are: the elementary school "Marko Miljanov", the high school "Slobodan Škerović", and the Mediterranean University of Podgorica.

My greatest gratitude goes to Mälardalen University, which enabled me to step onto the grounds of scientific research, and also to all the people involved in the Euroweb+ project, which funded my Master and PhD studies.

The persons to whom I owe my utmost gratitude are professors Radu Dobrin and Jan Carlson. Professor Radu offered me a PhD position at Mälardalen University a few years ago, after one football game, because that is when you offer them. Professor Jan Carlson is my main PhD supervisor and, not only that, he is the most important force in the transformation of my PhD progress into a growth function. He is also the most patient person I have met in my life and an amazing mentor. I also owe huge gratitude to professor Björn Lisper, my third PhD supervisor, who opened the most important doors of my research, Cache-Related Preemption Delay analysis, one day when he stopped by my office with a few papers and a book from his own collection, which helped me a lot during my studies.

I also want to thank my brother Lazar Marković for his rare calls and messages, filled with love and support. He is one of the wittiest persons I know and the number of smiles he is able to produce is almost infinite. I want to thank a few more family members: my grandmother Mirjana Marković, who was a teacher and who is able to provide the most amazing stories and fiction to a young mind; my grandfather Momir Marković, who believes in persons more than those persons believe in themselves (here I try to avoid his belief in me, since I almost always become shy because of it); and my grandmother Sava Bracanović, who is amazingly witty and energetic and shows all of us that age is just an abstract term to some people. I also want to thank my aunt Ljiljana Bracanović, who is one of the most amazing and kindest persons I know. She gave me some books throughout my childhood which considerably shaped me into the person I am today. Finally, I want to thank my brother Davor Marković for his support and love throughout the years, and also my brother Nikola Bracanović, who is a true glue of our families.

I also thank the families Bracanović and Nikolić, and Tatjana Rikalović and Nikola Pavković, who are my close relatives and a huge force of love in my life. A life without friends is barely a fog in existence. I owe huge gratitude to my best friend Nemanja Jeremić, who is the most supportive and caring friend, and also the one who judges and criticises my actions more than anyone. Also, I owe my gratitude to Goran Rajta, a very good friend whom I do not see much, but every conversation and meeting with him is a treasure in itself. A few more persons who made a huge impact on my PhD studies are my colleagues. Especially, I want to thank Irfan Šljivo, who is an amazing colleague and person, and my true friend. His advice throughout my PhD studies was vital in many ways, and our sport sessions are also an important part which should be mentioned. Next to him, a few more persons deserve huge gratitude: Branko Miloradović, Omar Jaradat, Gabriel Campeanu, Husni Khanfar, Afshin Ameri, Julieth Patricia Castellanos Ardila, Saad Mubeen, Abhilash Thekkilakattil, and all the other colleagues from Mälardalen University.

Finally, the last year (mid 2017 – mid 2018) has been the happiest year of my life and the most productive year of my PhD studies, which for now has resulted in this thesis. This did not happen without a reason; I most certainly owe it to the main force of my happiness, embodied in the most loving and beautiful woman I have met – Tijana Vujičić. One thing I am certain of is that her heart and soul are bigger than the universe, which shakes up my scientific beliefs a bit. Also, Tijana introduced me to two more persons who showed amazing empathy and kindness towards me, and support for our love and my days abroad: Dragica Vujičić, who sends me an infinite amount of gifts from my home country, and Marinko Aleksić, whose gift of motivation, his own thesis, lies on my working desk to this date.

Filip Marković
June, 2018
Västerås, Sweden


List of publications

Publications included in the Licentiate thesis¹

Paper A: Tightening the Bounds on Cache-Related Preemption Delay in Fixed Preemption Point Scheduling – Filip Marković, Jan Carlson, Radu Dobrin. In the Proceedings of the 17th International Workshop on Worst-Case Execution Time Analysis, WCET 2017.

Paper B: Improved Cache-Related Preemption Delay Estimation for Fixed Preemption Point Scheduling – Filip Marković, Jan Carlson, Radu Dobrin. In the Proceedings of the 23rd International Conference on Reliable Software Technologies, Ada-Europe 2018.

Paper C: A Comparison of Partitioning Scheduling Strategies for Fixed Points-based Limited Preemptive Scheduling – Filip Marković, Jan Carlson, Radu Dobrin. Accepted in IEEE Transactions on Industrial Informatics (June 7th).

Paper D: Probabilistic Response Time Analysis for Fixed Preemption Point Selection – Filip Marković, Jan Carlson, Radu Dobrin, Abhilash Thekkilakattil, Björn Lisper. In the Proceedings of the 13th International Symposium on Industrial Embedded Systems, SIES 2018.

¹ The included articles were reformatted to comply with the licentiate page settings.

Additional publications, not included in the thesis

Preemption Point Selection in Limited Preemptive Scheduling using Probabilistic Preemption Costs – Filip Marković, Jan Carlson, Radu Dobrin. In the Proceedings of the 28th Euromicro Conference on Real-Time Systems, ECRTS 2016, Work in Progress section.

Contents

I Thesis

1 Introduction

2 Background
  2.1 Real-time systems
    2.1.1 Real-time tasks
    2.1.2 Classification of real-time systems and scheduling
    2.1.3 Feasibility and Schedulability
  2.2 Preemptive and non-preemptive scheduling
    2.2.1 Preemption
    2.2.2 Fully-preemptive vs Non-preemptive Scheduling
  2.3 Limited-Preemptive Scheduling
  2.4 Fixed Preemption Points Scheduling
    2.4.1 Preemption Point Selection
  2.5 Cache-related Preemption Delay Analysis
    2.5.1 CRPD-aware task model under LP-FPPS
  2.6 Multi-core real-time systems
    2.6.1 Global scheduling
    2.6.2 Partitioned scheduling

3 Research description
  3.1 Problem statement and research goals
    3.1.1 Research Goal 1
    3.1.2 Research Goal 2
    3.1.3 Research Goal 3
  3.2 Research process
  3.3 Thesis contributions
    3.3.1 Research contribution C1
    3.3.2 Research contribution C2
    3.3.3 Research contribution C3
    3.3.4 Research contribution C4
    3.3.5 Research contribution C5
    3.3.6 Included Papers
    3.3.7 Other Papers

4 Related work
  4.1 Cache-related Preemption Delay Analysis
  4.2 Schedulability analysis
    4.2.1 Preemption point selection
    4.2.2 Analysis of real-time systems which can occasionally tolerate deadline misses
  4.3 Multicore scheduling

5 Conclusions and Future Work
  5.1 Future Work

Bibliography

II Included Papers

6 Paper A: Tightening the Bounds on Cache-Related Preemption Delay in Fixed Preemption Point Scheduling
  6.1 Introduction
  6.2 System Model
  6.3 Related Work
  6.4 Computing CRPD bounds
    6.4.1 Variable declaration
    6.4.2 Constraint formulation
    6.4.3 Goal function formulation
  6.5 Evaluation
  6.6 Conclusion
  Bibliography

7 Paper B: Improved Cache-Related Preemption Delay Estimation for Fixed Preemption Point Scheduling
  7.1 Introduction
  7.2 System Model
  7.3 Sources of CRPD over-approximation
    7.3.1 Infeasible preemptions
    7.3.2 Infeasible useful cache block reloads
  7.4 Computing tighter CRPD bounds
    7.4.1 Variables
    7.4.2 Constraints
    7.4.3 Goal function
  7.5 Evaluation
  7.6 Related Work
  7.7 Conclusions
  Bibliography

8 Paper C: A Comparison of Partitioning Strategies for Fixed Points-based Limited Preemptive Scheduling
  8.1 Introduction
  8.2 System Model
    8.2.1 Preemption point selection
    8.2.2 Feasibility analysis overview
  8.3 Related Work
  8.4 Partitioned scheduling under FPPS
    8.4.1 Partitioning test
    8.4.2 Task partitioning
  8.5 Evaluation
    8.5.1 A comparison of LP-FPPS partitioning strategies
    8.5.2 A comparison with fully-preemptive and non-preemptive partitioning strategies
    8.5.3 Effect of the maximum CRPD on partitioning strategies
  8.6 Conclusions and Future Work
  Bibliography

9 Paper D: Probabilistic Response Time Analysis for Fixed Preemption Point Selection
  9.1 Introduction
  9.2 Preliminaries
  9.3 System Model
    9.3.1 Task Model
    9.3.2 Probabilistic Model
  9.4 Schedulability Analysis
    9.4.1 Probabilistic Pending workload computation
    9.4.2 Probabilistic execution time of preemptable part of a job
    9.4.3 Higher priority interference computation
  9.5 Preemption Point Selection Algorithm
  9.6 Experimental Results
  9.7 Conclusions
  Bibliography

I Thesis


Chapter 1

Introduction

Computing systems are widely used in modern society to solve many of its problems and to improve the speed and precision of information processing. In such systems, it is important that the computations provide correct results, i.e. that they comply with the intended goal of the computation, but there are also computing systems where it is important that the computation finishes within a specified time interval. Such systems are called real-time systems, and they play an integral part in many industrial domains where the timing requirements are of critical importance, e.g., nuclear power plants, railway, automotive, and the aviation industry.

In order to fulfil the timing requirements, real-time systems are designed with a specified scheduling policy in mind, which is the method of controlling and arranging the tasks (work or workloads) in the computation process. Scheduling algorithms may be classified according to many criteria. In this thesis, we consider online fixed-priority scheduling. This means that the scheduling decisions are made during runtime, using pre-assigned task priorities. In this case, a task with a higher priority has preference over a task with a lower priority when both are ready to be executed. Another classification is according to the interruption policy being used, where the two most widely used and researched scheduling alternatives are:

- Non-preemptive scheduling – tasks execute without preemption, i.e. they cannot be interrupted by other tasks.
- Fully-preemptive scheduling – tasks may be preempted by other tasks during their execution.

Non-preemptive scheduling introduces the problem of blocking from lower priority tasks. Blocking occurs whenever a lower priority task starts to execute and a higher priority task is released during its execution. Since the higher priority task needs to wait for the lower priority task to finish, it might miss its deadline. Under fully-preemptive scheduling, a preemption takes time to perform, and this time interval is called the preemption-related delay. Tasks might miss their deadlines because of high preemption-related delays, since preemptions may occur at any given point in a task. Preemptions can introduce a significant runtime overhead which may lead to high variations in the execution times of the tasks. The maximum time needed for the task execution, called the worst-case execution time, may increase by as much as 33% [1] when preemption overheads are taken into account.

Limited Preemptive Scheduling (LPS) has emerged as a scheduling paradigm which generalises the two main real-time scheduling approaches by allowing preemption but controlling the number of preemptions, in order to balance the affordable blocking from lower priority tasks and the affordable preemption-related delay for each task. Several LPS approaches have been proposed, and in this thesis we consider Fixed Preemption Points Scheduling (LP-FPP), proposed by Burns [2]. This approach enables a more precise estimation of the preemption-related delay compared to the other LPS approaches, because the preemption points of a task in LP-FPP are explicitly selected (defined) during the design phase, unlike the other LPS approaches where the preemption points are determined at runtime. Since LP-FPP is able to increase the schedulability of a taskset most efficiently, as shown by Buttazzo et al. [3], it is the main topic of the thesis.
In general, the LP-FPP approach consists of the following design steps:

- Selection of preemption points – a subset of the potential preemption points is selected, at which the task may be preempted at runtime.
- Estimation of the preemption-related delay, considering the selected preemption points.
- Evaluation of the taskset schedulability, considering the estimated preemption-related delay.

The overall goal of the thesis is: to improve the schedulability of real-time systems under LP-FPP scheduling. Since we address different types of real-time systems, we primarily distinguish between real-time systems with hard and soft timing requirements. Also, considering the hardware architecture of a system, we distinguish between single core and multicore systems.
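The design steps above can be illustrated with a minimal task representation. This is only a sketch under illustrative, assumed names (not the thesis's notation): a task under LP-FPP is a sequence of non-preemptive regions separated by the selected fixed preemption points, and each taken preemption point contributes its preemption-related delay to the worst case.

```python
from dataclasses import dataclass

@dataclass
class LpFppTask:
    """Illustrative LP-FPP task: a sequence of non-preemptive regions.

    Preemption is only possible at the boundaries between regions,
    i.e. at the fixed preemption points selected at design time.
    """
    name: str
    regions: list  # WCET of each non-preemptive region
    crpd: list     # preemption-related delay charged at each point

    def wcet_with_preemption_cost(self):
        # Worst case: the task is preempted at every selected point,
        # so each point contributes its preemption-related delay.
        return sum(self.regions) + sum(self.crpd)

# Three regions imply two fixed preemption points:
task = LpFppTask("tau_1", regions=[3, 4, 2], crpd=[1, 1])
```

Selecting fewer preemption points shrinks the `crpd` term but lengthens the non-preemptive regions, which increases the blocking imposed on higher priority tasks; this is exactly the trade-off the design steps balance.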

In the domain of single core real-time systems with hard real-time guarantees under LP-FPP, we contribute with:

- A cache-related preemption delay (CRPD) analysis: We enrich the current state of the art for real-time systems under LP-FPP by proposing a novel CRPD estimation analysis, which reduces the pessimism compared to the existing approaches by considering two potential sources of over-approximation:
  - Infeasible preemption combinations: A preempting task is not always able to preempt an instance of the preempted task at all of its preemption points. In order to estimate the CRPD more precisely, it is important to remove the infeasible preemption combinations, which may otherwise increase the over-approximation in the result of the analysis.
  - Infeasible cache block reloads: A memory cache block may be accessed several times during the execution of a preempted task. However, at most one reload of this memory block should be accounted for by the CRPD analysis between two consecutive accesses of the memory block. This is because only one preemption between the two accesses may result in an eviction of the memory block. Accounting for this fact may significantly reduce the upper bound on the CRPD estimation.

Considering single core real-time systems which can occasionally tolerate deadline misses under LP-FPP, we contribute with:

- A novel probabilistic response time analysis: In order to facilitate the use of LP-FPP in such systems, we propose a probabilistic response time analysis under the LP-FPP approach. This analysis estimates a safe maximum deadline miss probability for each task of a taskset.
- A new preemption point selection algorithm: We propose a selection algorithm which takes into account the variability of the task parameters and reduces the deadline miss probabilities compared to existing methods.
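The observation about infeasible cache block reloads can be sketched in a few lines. The function and its inputs below are illustrative assumptions, not the thesis's actual analysis: between two consecutive accesses of a block, at most one preemption can evict it, so at most one reload is charged per gap that contains at least one selected preemption point.

```python
def bound_block_reloads(access_points, preemption_points):
    """Upper-bound the reloads of one cache block (illustrative sketch).

    access_points: positions (in execution order) where the task
                   accesses the block.
    preemption_points: positions of the selected preemption points.
    """
    reloads = 0
    for a, b in zip(access_points, access_points[1:]):
        # Only one eviction (hence one reload) matters per gap between
        # consecutive accesses, no matter how many points fall inside.
        if any(a < p < b for p in preemption_points):
            reloads += 1
    return reloads

# Block accessed at positions 1, 5 and 9, preemption points at 3 and 7:
# both gaps contain a point, so the bound is 2 reloads.
```

A naive analysis that charged one reload per preemption point per access could count the same block several times inside one gap; the per-gap bound removes exactly that over-approximation.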
Considering multi-core real-time systems with hard real-time guarantees under LP-FPP, we contribute with:

- A partitioning criterion for allocation based on the Worst-Fit Decreasing (WFD) heuristic: We propose a partitioning criterion based on the maximum blocking tolerance, which is beneficial to the WFD allocation heuristic.

- A comparison of partitioning strategies under the LP-FPP approach: We contribute to partitioned scheduling (where the tasks are allocated to specified cores prior to runtime) by comparing different partitioning strategies with LP-FPP scheduling.

The remainder of the thesis is organised as follows: in Chapter 2 we describe the background of the thesis. The research method, goals and contributions are defined in Chapter 3. The related work is described in Chapter 4, and the conclusions and future work are presented in Chapter 5. The thesis is concluded with the collection of the included papers.
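The Worst-Fit Decreasing heuristic mentioned above can be sketched in its classic utilization-based form. Note that this is only the underlying WFD mechanism under assumed inputs; the thesis's contribution replaces plain utilization with a criterion based on the maximum blocking tolerance.

```python
def worst_fit_decreasing(utilizations, n_cores):
    """Classic Worst-Fit Decreasing partitioning (illustrative sketch).

    Tasks are sorted by decreasing utilization, and each task is placed
    on the currently least-loaded core, spreading the load evenly.
    """
    cores = [[] for _ in range(n_cores)]
    load = [0.0] * n_cores
    for u in sorted(utilizations, reverse=True):
        k = load.index(min(load))  # least-loaded core
        cores[k].append(u)
        load[k] += u
    return cores, load

cores, load = worst_fit_decreasing([0.5, 0.4, 0.3, 0.2, 0.1], 2)
```

A real partitioning test would additionally run a per-core schedulability analysis (including blocking and preemption-related delays) before accepting each placement; this sketch only shows the placement order that WFD induces.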

Chapter 2

Background

2.1 Real-time systems

Computing systems play an important role in modern society. Their ability to increase the speed of manufacturing, decision-making and development, or even to reduce the cost of those processes, has made them widely used in many industrial domains. Some of those domains require precision, predictability, and conformance to certain safety-critical requirements, i.e. it is important that the computation is valid, computed on time, and that it cannot cause harm or an error beyond the threshold defined by the safety authorities. Some of the industrial domains that follow those principles are: avionic systems, telecommunication systems, robotics, automotive systems, and chemical and nuclear plant controls. In order to provide precision, predictability, and functionality in such industrial domains, computing systems are often designed as entities with a dedicated function within a mechanical or electrical system, called embedded systems. Such systems are often designed considering real-time computing constraints which must not be violated. This means that it is not always enough to produce a correct result of the computation; it is also important to deliver the result on time. Such a system is called a real-time system, and according to a definition by Stankovic et al. [4]:

"A real-time system is a system that reacts upon outside events and performs a function based on these and gives a response within a certain time. Correctness of the function does not only depend on correctness of the result, but also on the time when these are produced."

Metaphors for real-time systems can be found in many sports, e.g., basketball or Formula One racing. In basketball, it is not only important that a player scores while the ball is in the possession of their team; the points must also be scored within the 24-second time interval from the start of the possession, which is a time constraint given by the basketball rules. Many industrial domains have similar timing constraints. For example, in a nuclear power plant, the system consists of many sensors which constantly measure information about the radiation levels, electricity generation, steam line pressure, etc. However, if some of the measured values, e.g. the amount of radiation (see Figure 2.1), exceeds the predefined safety threshold, the system needs to alert the plant personnel within a specified time interval so that adequate measures can be taken. If the system does not compute and inform about the leakage within the predefined time interval, the consequences can be catastrophic, regardless of the precision of a computation which is delivered late.

Figure 2.1. Real-time computation example.

2.1.1 Real-time tasks

A real-time task is the main building unit of a real-time system. It represents a program which performs certain computations. The term task is often used as a synonym for thread. However, with task we describe a program at the design and analysis level of real-time systems, while with thread we describe the implementation of a task in an operating system. Also, the term task refers to the general design description (system model description) of a program and is denoted by τi, while a specific instance of a task (called a task instance in the remainder of the thesis) is assumed to be executed in the system at a certain time point and is denoted by τi,j.

Task types

The most common task types used in real-time systems are:

- Periodic task: a task whose consecutive instances are activated with exact time periods, i.e. the activations of any two consecutive instances are separated by the same time interval.
- Sporadic task: a task which handles events that arrive at arbitrary time points, but with a predefined maximum frequency, i.e. the activations of two consecutive instances are separated by at least a predefined minimum time interval.
- Aperiodic task: a task for which we know nothing about the time between the activations of its instances.

Task parameters

Each task τi is described by specified parameters, which can vary between different types of real-time systems. Depending on whether the parameters change during the run-time of the system, they can be static or dynamic. The general task parameters which we use in this thesis to describe a task are:

- Ti: period or minimum inter-arrival time – the minimum time interval between consecutive invocations of a sporadic task, or the exact time interval (period) between two consecutive invocations of a periodic task.
- Di: deadline – a timing constraint representing the latest time point, relative to the arrival time of the task, at which the task must complete its execution and deliver a result.
- Ci: worst-case execution time (WCET) – the longest possible time interval needed for the completion of the task execution from its start time, without any interruptions from other tasks.
- Pi: priority – the priority of a task. A task with higher priority is executed before a task with lower priority in the ready queue.
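As an illustration, the task parameters above can be collected into a simple record, and the fixed-priority rule ("higher priority runs first") becomes a one-line selection over the ready queue. This is a sketch with illustrative names, not notation prescribed by the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """Sporadic task with the general parameters used in the thesis.

    T: minimum inter-arrival time, D: relative deadline,
    C: worst-case execution time, P: fixed priority (larger = higher).
    """
    name: str
    T: int
    D: int
    C: int
    P: int

def pick_next(ready):
    # Fixed-priority scheduling: the highest-priority ready task runs.
    return max(ready, key=lambda t: t.P)

ready = [Task("tau_1", T=20, D=20, C=5, P=1),
         Task("tau_2", T=10, D=10, C=3, P=3)]
```

Here `pick_next(ready)` selects `tau_2`, since its priority value 3 exceeds `tau_1`'s value 1.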

- Ri: maximum response time – the maximum time interval between the arrival time and the finishing time among all possible instances of the task.

Task instance parameters

Each task instance τi,j has the same parameters as the task, but is also described by the following additional parameters:

- ri,j: arrival time – the time when the task instance is activated, i.e. when the instance becomes ready to execute.
- si,j: start time – the time when the task instance enters the executing state, i.e. when the instance starts to run.
- fi,j: finishing time – the time when the task instance has completed its execution.
- Ri,j: response time – the time interval between the arrival time and the finishing time of a task instance.

In Figure 2.2 we show an instance of a sporadic task, which means that the time interval between consecutive task instances is at least equal to Ti. The task execution is depicted as a grey rectangle on a time line, whose length represents the WCET of τi. The arrival time and the period start are the same time instant, and the finishing time and the response time coincide. In the remainder of the thesis we denote the arrival time of a task instance with an arrow pointing upwards and the absolute deadline with an arrow pointing downwards. Also, in the remainder of the thesis we consider sporadic tasks.

Figure 2.2. Task parameters.

Task parameters with probabilistic distributions

In some cases we have information about the probabilistic distribution of task parameters, for which we define the following notation:

• Ci: probabilistic execution time – A probabilistic distribution of execution time values such that each execution time value is assigned the probability of its occurrence.

For task instances of such systems, we define the following parameters:

• Si,j: probabilistic start time – The probabilistic distribution of the possible start times of a task instance. Each start time is assigned the probability of its occurrence.
• Fi,j: probabilistic finish time – The probabilistic distribution of the possible finish times of a task instance. Each finish time is assigned the probability of its occurrence.
• Ri,j: probabilistic response time – A probabilistic distribution of the response times of a task instance.
• DMPi: deadline miss probability – The maximum probability that a task instance will miss its deadline.

For example, the probabilistic execution time of a task can be described with the following notation:

     ( 10    11    12  )
Ci = ( 0.2   0.3   0.5 )

This means that the probability that the task will execute for 10 time units is 0.2, for 11 time units the probability is 0.3, and the probability that it will execute for 12 time units is 0.5.
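As an illustration of how such discrete distributions can be manipulated, the sketch below convolves two independent execution-time distributions and computes the probability mass beyond a deadline. It is a minimal illustration of the notation above, not the thesis' analysis; the two-job scenario, the independence assumption and the deadline value are assumed for the example.

```python
from itertools import product

# Discrete execution-time distribution: value -> probability.
# The example distribution from the text: P(C=10)=0.2, P(C=11)=0.3, P(C=12)=0.5.
C = {10: 0.2, 11: 0.3, 12: 0.5}

def convolve(d1, d2):
    """Distribution of the sum of two independent discrete random variables."""
    out = {}
    for (v1, p1), (v2, p2) in product(d1.items(), d2.items()):
        out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

def deadline_miss_probability(dist, deadline):
    """Probability mass of the values exceeding the deadline."""
    return sum(p for v, p in dist.items() if v > deadline)

# Total demand of two back-to-back jobs with distribution C (illustrative only).
R = convolve(C, C)
print(deadline_miss_probability(R, 23))  # P(R > 23) = P(R = 24) = 0.25
```

The convolution operation sketched here is the basic building block of probabilistic response time analyses, where the distributions of all interfering executions are combined before the deadline miss probability is read off.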

2.1.2 Classification of real-time systems and scheduling

Real-time systems can be classified according to many criteria. Depending on factors outside the computer system, more precisely on the potential consequences of a deadline miss, we distinguish between:

• Hard real-time systems: Systems where a deadline miss may lead to catastrophic consequences, and therefore all deadlines must be met.
• Soft real-time systems: Systems where an occasional deadline miss is acceptable.

Depending on the design and the implementation of the system, we differentiate between:

• Event-triggered systems: Task activations depend on relevant events which are external to the system, e.g., sensor readings.
• Time-triggered systems: Task activations are handled at predefined time points.

In order to fulfil the timing requirements, real-time systems are designed with a specified scheduling in mind, i.e. the method of controlling and arranging the tasks in the computation process. Depending on when the scheduling decision is made, real-time scheduling algorithms are classified as:

• Online scheduling – scheduling decisions are made at runtime, using specified criteria (e.g., priorities).
• Offline scheduling – scheduling decisions are made offline and the resulting schedule is stored.

In this thesis we consider online scheduling, more precisely fixed-priority scheduling, which means that the task priorities are assigned before run-time. The alternative is to have priorities that change dynamically, e.g., based on the remaining time to the deadline.

2.1.3 Feasibility and Schedulability

Since real-time tasks can interact in many different ways by interrupting or blocking each other, in order to guarantee that all of them satisfy the given temporal constraints we use feasibility and schedulability tests. These tests are defined for a set of tasks Γ, which is called a taskset.

• Feasibility test – A test which determines whether there exists a schedule such that all the tasks of a taskset satisfy their temporal and any other constraints.
• Schedulability test – A test which determines whether all the tasks of a taskset satisfy their temporal and any other constraints when scheduled with a given scheduling algorithm.

Response Time Analysis

In order to guarantee that all of the tasks can meet their deadlines, in this thesis we mostly use the response time analysis. For each task of a taskset, this analysis calculates the maximum response time. If the maximum response time of each task in the taskset is less than or equal to its relative deadline (Ri ≤ Di), the taskset is schedulable.

2.2 Preemptive and non-preemptive scheduling

Scheduling algorithms can further be classified according to the interruption policy being used (whether the currently executing task can be suspended or not). In this context, the most used and researched scheduling paradigms are:

• Non-preemptive scheduling – tasks execute without preemption once they are allocated to the processing unit.
• Fully-preemptive scheduling – tasks may be preempted by other tasks during their execution. The preemption takes time to perform, and this time interval is called a preemption-related delay.
• Limited-preemptive scheduling – tasks may be preempted by other tasks, but only a limited number of times during their execution.

In this section we further describe preemptive and non-preemptive scheduling and the differences between the two. We start by describing the act of preemption.

2.2.1 Preemption

Preemption is the act of temporarily interrupting a task execution with the intention of resuming the task execution at some later time point. This interruption

may be performed for various reasons, but in this thesis we consider only interruptions due to the arrival of a higher priority task which takes over the processing unit of the system. In Figure 2.3 we show two tasks: τi, and the higher priority task τh. In this example, τi starts to execute immediately upon its arrival. However, during its execution τh arrives as well, and since it has a higher priority, it preempts τi, which resumes its execution after the complete execution of τh.

Figure 2.3. Example of a task τi being preempted by a task τh with higher priority than τi.

Preemption related delay

When a preemption occurs in a real-time system, it can introduce a significant runtime overhead which is called a preemption related delay. This is the case because during a preemption many processes and hardware components need to perform adequate procedures in order to achieve a valid act of preemption, and this takes time. Therefore, when we account for a preemption in real-time systems, we account for the following delays, as described by Buttazzo [5]:

• cache-related delay – the time needed to reload all the memory cache blocks which are evicted by the preemption, when they are reused in the remaining execution time of a task.
• pipeline-related delay – the time needed to flush the pipeline of the processor when the task is interrupted, and the time needed to refill the pipeline upon its resumption.
• scheduling-related delay – the time needed by the scheduling algorithm to suspend the running task, insert it into the ready queue, switch the context, and dispatch the incoming task.

• bus-related delay – the time which accounts for the extra bus interference when the RAM memory is accessed due to the additional cache misses caused by the preemption.

In Figure 2.4 we present the same tasks as in Figure 2.3, but now we account for the preemption related delay caused by the preemption of τi by τh. Each preemption related delay needs to account for all of the delays mentioned above, and it can be the reason why a certain task cannot satisfy its timing constraints (meet its deadline), as depicted in the example below.

Figure 2.4. Example of a preemption related delay.

2.2.2 Fully-preemptive vs Non-preemptive Scheduling

Fully-preemptive and non-preemptive scheduling are widely used approaches in real-time systems. However, where one approach has drawbacks, the other provides advantages, and vice versa. Let us illustrate this with two examples. In the first example, shown in Figure 2.5, we illustrate two tasks: τi and τh, where τh has a higher priority than τi. During one period of τi, τh is released two times, and since we use a preemptive scheduler, τh preempts τi twice and causes two preemption related delays. These preemption related delays are long enough to cause a deadline miss of τi. In real-time systems where preemptions can lead to long or frequent preemption related delays, there is a greater probability that the schedulability of some task is jeopardised by the introduced delays. In those cases it might be better to use non-preemptive scheduling, since the drawback of fully-preemptive scheduling is emphasised.

Figure 2.5. Example of the fully-preemptive scheduling drawback.

Figure 2.6. Example of the non-preemptive scheduling drawback.

In the second example, shown in Figure 2.6, we illustrate the same tasks as in the previous example, but now we use a non-preemptive scheduler. Here we show the drawback of non-preemptive scheduling, which is blocking from the lower priority tasks. In this example, τi arrives before the higher priority task τh. Therefore, τh has to wait for τi before it is able to start executing, and this event is called blocking. Since the blocking from the lower priority task τi is long, τh misses its deadline. In order to overcome the drawbacks of fully-preemptive scheduling (high preemption related delays) and non-preemptive scheduling (long lower priority blocking), a new scheduling approach emerged, called Limited Preemptive Scheduling. This paradigm resolves the drawbacks of the two above mentioned scheduling algorithms and is described in the following section.

2.3 Limited-Preemptive Scheduling

Instead of always enabling preemption (fully-preemptive scheduling) or never enabling preemption (non-preemptive scheduling), in some cases we may improve taskset schedulability by combining both approaches. Limited-Preemptive Scheduling (LPS) is based on the observation that, in order to improve the schedulability of a taskset, we can choose when to enable or disable a preemption. E.g., given the tasks from Figures 2.5 and 2.6, LPS can guarantee that all the tasks meet their deadlines, as shown in Figure 2.7. In this example, the lower priority task τi starts to execute first, and during its execution a higher priority task τh arrives in the ready queue. At this point a preemption is enabled, which introduces a preemption related delay, after which τi continues its execution. At the second arrival of τh, a preemption related delay would jeopardise the schedulability of τi, but the remaining execution of τi does not produce blocking which would jeopardise the schedulability of τh. Therefore, the preemption is disabled at this point and both of the tasks are able to meet their deadlines.

Figure 2.7. Example of the limited-preemptive scheduling benefit.

Buttazzo et al. [3] have shown that LPS may significantly improve taskset schedulability compared to fully-preemptive and non-preemptive scheduling. Also, LPS may be seen as a superset of those approaches, since any taskset that is schedulable with fully-preemptive or non-preemptive scheduling is also schedulable with LPS, while some tasksets are schedulable only with LPS. Many different approaches have been introduced in order to enable LPS, such as:

• Preemption Thresholds Scheduling (LP-PTS) – Approach proposed by Wang and Saksena [6], where each task τi is assigned a preemption threshold such that τi can be preempted only by tasks whose priority is higher than the predefined threshold.
• Deferred Preemption Scheduling (LP-DPS) – Approach proposed by Baruah [7], where each task τi is assigned a maximum non-preemptive interval. When a higher priority task arrives during the execution of τi, it is able to preempt τi only after the end of this interval.
• Fixed Preemption Points Scheduling (LP-FPPS) – Approach proposed by Burns [2], where each task is divided into non-preemptive regions, obtained by enabling preemptions at specified points in the task.

It has been shown by Buttazzo et al. [3] that LP-FPPS provides better schedulability results compared to LP-PTS and LP-DPS. In this thesis we select LP-FPPS as the approach of interest, and in the following section we describe it in more detail.

2.4 Fixed Preemption Points Scheduling

Fixed Preemption Points Scheduling (LP-FPPS) is a Limited Preemptive Scheduling approach where the preemption points of a task are selected and known prior to the runtime. According to the LP-FPPS model, each task τi is first divided into d non-preemptive regions by d − 1 potential preemption points. For example, as shown in Figure 2.8, τi consists of three non-preemptive regions (δi,1, δi,2 and δi,3) which are separated by two potential preemption points PP i,1 and PP i,2. The WCETs of the non-preemptive regions are denoted by qi,1, qi,2 and qi,3 respectively, while the worst case preemption overheads, associated with the two preemption points, are denoted by ξi,1 and ξi,2.

Figure 2.8. Task τi with selected preemption points and estimated preemption related delays.

2.4.1 Preemption Point Selection

Once we know the potential preemption points of each task, we may select a subset of those points in order to improve the schedulability of the system. Several preemption point selection algorithms have been proposed in the state of the art for hard real-time systems, e.g., [8, 9, 10]. For each task τi of a taskset, these algorithms first compute the maximum blocking tolerance of the tasks with higher or equal priority to τi, and then try to select preemption points such that the maximum length of any created non-preemptive region is not greater than the calculated tolerance. The maximum blocking tolerance represents the longest time interval for which a task can be blocked without missing a deadline. For example, given the tasks shown in Figure 2.7, we show in Figure 2.9 the maximum blocking tolerance of τh, which is also the maximum possible length of a non-preemptive region of any task with lower priority than τh. We notice that the maximum blocking tolerance of τh is smaller than the WCET of τi; therefore, if we want to guarantee that τh meets its deadline, we need to split τi into at least two actual non-preemptive regions. We also need to consider the worst case preemption delay that the preemption may cause.
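The interplay between the response time and the blocking tolerance can be sketched as a small fixed-point search. This is a simplified illustration, assuming implicit deadlines (D = T), integer timing parameters and no preemption overheads; it is not the exact analysis used in the thesis, and all task parameters below are assumed.

```python
import math

# Classic fixed-priority response-time recurrence with a blocking term B:
# R = B + C + sum over higher-priority tasks of ceil(R / T_h) * C_h.
def response_time(C, D, hp, B):
    """Response time of a task with WCET C, deadline D, higher-priority
    tasks hp = [(C_h, T_h), ...] and lower-priority blocking B, or None."""
    R = B + C
    while R <= D:
        R_next = B + C + sum(math.ceil(R / T_h) * C_h for C_h, T_h in hp)
        if R_next == R:
            return R          # fixed point reached within the deadline
        R = R_next
    return None               # the recurrence exceeds the deadline

def blocking_tolerance(C, D, hp):
    """Largest blocking B for which the task still meets its deadline."""
    return max((B for B in range(D + 1)
                if response_time(C, D, hp, B) is not None), default=None)

# A task with C=2, D=T=10 and one higher-priority task with C=1, T=4:
print(blocking_tolerance(2, 10, [(1, 4)]))  # -> 5
```

A non-preemptive region of any lower priority task may then be at most as long as this tolerance (plus, in the full model, the associated preemption overheads), which is exactly the constraint the preemption point selection algorithms enforce.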

Figure 2.9. Correlation between the maximum blocking tolerance and the maximum length of the non-preemptive region.

Assuming that τi has two potential preemption points, shown in Figure 2.8, we can select none of them, only one, or both. Selecting none of the points would result in a deadline miss of τh, because the largest non-preemptive region of τi would be larger than the maximum blocking tolerance of τh, as shown in the examples in Figure 2.6 and Figure 2.9. Selecting both points would result in a deadline miss of τi, since the introduced preemption delay is large enough to jeopardise its schedulability. We show this case in Figure 2.10.

Figure 2.10. Example when all the potential preemption points are selected.

However, by selecting only one point, e.g., PP i,1, we reduce the maximum length of the non-preemptive regions of τi, which is then smaller than the maximum blocking tolerance of τh, as shown in Figure 2.11. This means that τh meets its deadline, because τi may only be preempted at PP i,1. Considering the introduced preemption delay, the schedulability of τi is not jeopardised, since we account for the worst case preemption delay only at PP i,1.
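The selection reasoning above can be sketched as a small search over subsets of potential preemption points. This is a hypothetical brute-force illustration, not one of the cited selection algorithms [8, 9, 10]; the block WCETs and the tolerance value are assumed, and preemption overheads are ignored for brevity.

```python
from itertools import combinations

# Given the WCETs q of a task's basic blocks (separated by the potential
# preemption points) and the blocking tolerance beta of the higher-priority
# tasks, find a smallest set of preemption points to enable so that no
# resulting non-preemptive region is longer than beta.
def select_points(q, beta):
    n_points = len(q) - 1            # potential points between basic blocks
    for k in range(n_points + 1):    # prefer enabling as few points as possible
        for kept in combinations(range(1, len(q)), k):
            cuts = [0, *kept, len(q)]
            regions = [sum(q[a:b]) for a, b in zip(cuts, cuts[1:])]
            if max(regions) <= beta:
                return kept          # indices of the enabled preemption points
    return None                      # infeasible even with all points enabled

# Three basic blocks of 4, 5 and 3 time units; a tolerance of 8 time units:
print(select_points([4, 5, 3], 8))   # -> (1,): enable only the first point
```

Enabling fewer points keeps the number of preemptions, and hence the accounted preemption related delay, as low as possible, mirroring the trade-off illustrated in Figures 2.10 and 2.11.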

Figure 2.11. Example of the LP-FPPS benefit.

To estimate the worst case preemption delay at a preemption point, or for a whole task, under the LP-FPP approach we need to consider a detailed task model which accounts for the memory cache blocks, presented in the following subsection.

2.5 Cache-related Preemption Delay Analysis

Commonly, in computing systems with caches, the largest part of the preemption related delay consists of the cache-related preemption delay (CRPD). CRPD is defined as the time needed to reload all the memory cache blocks which are evicted after the preemption.

Figure 2.12. Example of a cache-related preemption delay.

For example, in Figure 2.12 we show the preemption of τi by τh considering only CRPD. A circle denotes that a certain cache block is being accessed by the task. We notice that during its execution, τi loads content into

the memory cache block m. Upon preemption, τh evicts the previous content of m, which then no longer belongs to τi. Since the memory block m will be accessed again in the remainder of the execution of τi, we need to account for an extra delay associated with the preemption, as illustrated by the black rectangle in Figure 2.12. In the general case, each task accesses and evicts some memory blocks throughout its execution, and if it is preempted, the preempting task may evict the content of the memory blocks previously used by the preempted task. If the preempted task uses some of those evicted blocks in its remaining execution, they need to be reloaded from the higher memory units in order to continue the correct execution of the task. Therefore, we denote the CRPD of a task τi with γi, defined as:

γi = g × BRT

where g represents the upper bound on the number of necessary cache block reloads caused by preemptions, and BRT represents the block reload time, i.e. the time needed for reloading a single memory block from the higher memory units. From the previous observations we may notice that the three most important questions about memory cache blocks considering CRPD are:

• Which memory cache blocks are accessed by a preempted task before the preemption?
• Which memory cache blocks are evicted by the preempting task?
• Which memory cache blocks are used in the remaining execution of the preempted task, after the preemption?

For this reason, in the state of the art for CRPD analysis, we distinguish between two types of cache memory blocks:

• Useful Cache Block (UCB): As proposed by Lee et al. [11] and superseded by Altmeyer et al. [12], a UCB at a preemption point is a memory block m such that: a) m must be cached at the preemption point, and b) m may be reused at a program point that is reachable from the preemption point without the self-eviction of m on this path.
• Evicting Cache Block (ECB): A memory block m that may be accessed during the execution of a task.

The main goal of the CRPD analysis is to derive an upper bound on the number of memory blocks which can be evicted by the preempting tasks and might be used after the preemption, thus resulting in cache block reloads.

2.5.1 CRPD-aware task model under LP-FPPS

To address CRPD under LP-FPPS, we extend the task model with parameters that account for the memory cache block accesses. For each preemption point PP i,k of a task τi we define:

• UCB i,k – The set of useful cache blocks at PP i,k.

For each non-preemptive region δi,k of a task τi we define:

• ECB i,k – The set of evicting cache blocks during the execution of δi,k.

For a task τi we define:

• ECB i – The set of evicting cache blocks during the execution of τi, defined as the union over all its non-preemptive regions: ECB i = ⋃_{k=1}^{d+1} ECB i,k

In Figure 2.13 we show an example of the memory cache block access representation used in the remainder of the thesis. Memory cache blocks are represented with integer values, and we notice that during the first non-preemptive region δi,1 of τi, memory blocks 1 and 2 are accessed, therefore ECB i,1 = {1, 2}. Regarding the useful cache block set UCB i,1 of the first preemption point PP i,1, we notice that both memory blocks 1 and 2 are used after PP i,1, therefore UCB i,1 = {1, 2}. However, UCB i,2 = {5}, since only memory block 5 is used after PP i,2. The set of evicting cache blocks of τi is ECB i = {1, 2, 3, 4, 5}.

Figure 2.13. Example of cache block access representation in the LP-FPP task model.
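Using the sets from the Figure 2.13 example, the per-preemption-point CRPD bound implied by the UCB and ECB definitions can be sketched as a set intersection: a preemption can only force the reload of useful blocks that the preempting task may evict. The preempting task's ECB set and the BRT value below are assumptions made for the example, and the direct-mapped cache model is a simplification.

```python
BRT = 10  # block reload time in time units (assumed value)

def crpd_at_point(ucb_at_point, ecb_preempting, brt=BRT):
    """Upper bound on the CRPD of a single preemption at one point:
    the number of useful blocks the preempter may evict, times BRT."""
    return len(ucb_at_point & ecb_preempting) * brt

# Sets from the Figure 2.13 example in the text:
UCB_i1 = {1, 2}
UCB_i2 = {5}
ECB_h = {1, 5, 9}  # evicting cache blocks of a preempting task (assumed)

print(crpd_at_point(UCB_i1, ECB_h))  # block 1 may be evicted -> 1 * BRT = 10
print(crpd_at_point(UCB_i2, ECB_h))  # block 5 may be evicted -> 1 * BRT = 10
```

Summing such per-point terms over all preemptions a task may suffer yields a simple, but potentially pessimistic, task-level CRPD bound; tightening exactly this kind of bound is the subject of contribution C1.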

2.6 Multi-core real-time systems

A multi-core processor is a computing component which consists of a number of independent processing units (cores). Typically, at some level the cores share a single memory unit, but they may be directly connected via a bus with lower memory units. Multi-core architectures are widely used in embedded and real-time systems, and may significantly improve system performance. We distinguish between homogeneous multi-core architectures, where all the processors on a multi-core have identical characteristics, and heterogeneous ones, where the processors have different characteristics (speed, performance, etc.). In the state of the art for multi-core real-time systems there are three main approaches for task scheduling: global scheduling, partitioned scheduling, and hybrid scheduling, which is a combination of the two.

2.6.1 Global scheduling

Global scheduling is an approach where all the tasks from a taskset are controlled by a single scheduler and each instance of any task may be executed on any processor. All the task jobs are stored in a single queue before they start to execute. At any moment, the number of executing jobs is at most equal to the number m of processors in the system. If a job is preempted on one processor, it may continue its execution on another, which means that the resources and the execution of the job are transferred to that processor. This act is called a migration. A migration takes time to perform and may result in significant delays which may not be easy to calculate, since the tasks share the same memory buses and resources. Bastoni et al. [4] show that the upper bound on migration and preemption delays in global scheduling may be very high.

2.6.2 Partitioned scheduling

Partitioned scheduling is an approach where a task from a taskset may execute only on one dedicated processor. Before the runtime, a taskset is partitioned into m subsets and each subset is dedicated to a single processor. Migration is not possible in partitioned scheduling, which reduces the scheduling problem to a set of uni-processor cases. However, one of the problems with partitioned scheduling is that the allocation of tasks to the appropriate processors is a variation of the bin-packing problem, proved to be NP-hard.

In Figure 2.14, we show the general process of task partitioning. First the taskset Γ is ordered according to a selected criterion (decreasing priority, increasing deadline, etc.). Then the tasks are allocated one by one, according to the partitioning criterion. Before any task is allocated to a processor, the partitioning criterion has to guarantee that the allocation will result in a schedulable subset of tasks on that specific processor. Partitioning criteria are typically based on bin-packing heuristics, which are proved to be near-optimal for the bin-packing problem, as shown by Johnson [13]. In this thesis, we consider the First Fit Decreasing (FFD) and Worst Fit Decreasing (WFD) bin-packing heuristics. FFD assigns a task to the lowest indexed processor that satisfies some condition, e.g., are the tasks of the processor schedulable upon the allocation of the task to the processor? WFD assigns a task to the processor such that some preselected parameter is minimised, e.g., utilisation.

Figure 2.14. Example of task partitioning.
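The two heuristics can be sketched as follows, using a simple utilisation-based admission test in place of a full schedulability test. The function names and the capacity bound are ours, not the thesis'; real partitioning criteria would replace the admission test with a schedulability analysis of the processor's taskset.

```python
# FFD vs WFD partitioning of task utilisations onto m cores (sketch).
def partition(utils, m, heuristic="FFD", cap=1.0):
    procs = [[] for _ in range(m)]           # tasks assigned to each core
    loads = [0.0] * m
    for u in sorted(utils, reverse=True):    # the "Decreasing" ordering step
        if heuristic == "FFD":
            # first (lowest indexed) core where the task still fits
            candidates = [i for i in range(m) if loads[i] + u <= cap]
        else:  # WFD: among the cores where it fits, pick the least loaded one
            candidates = sorted((i for i in range(m) if loads[i] + u <= cap),
                                key=lambda i: loads[i])
        if not candidates:
            return None                      # allocation failed
        i = candidates[0]
        procs[i].append(u)
        loads[i] += u
    return procs

print(partition([0.6, 0.5, 0.4, 0.3], 2, "FFD"))  # packs cores tightly
print(partition([0.6, 0.5, 0.4, 0.3], 2, "WFD"))  # balances the load
```

Note the characteristic difference: FFD fills up the lowest indexed core first, while WFD spreads the load, leaving comparable slack on every core.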

Chapter 3. Research description

3.1 Problem statement and research goals

The overall goal of the thesis is: To improve the schedulability of real-time systems under LP-FPP scheduling. We formulate three research goals by identifying several improvement possibilities within the current state of the art, for different types of real-time systems (hard and soft real-time systems, considering the timing requirements, and single core and multi-core real-time systems, considering the hardware architecture).

3.1.1 Research Goal 1

RG1: To improve the timing analysis for LP-FPP scheduling in single core real-time systems, accounting for a more precise estimation of cache-related preemption delays.

The existing approaches for computing the cache-related preemption delay under LP-FPP scheduling do not take into account two important sources of over-approximation. The goal is to tighten the upper bound on the CRPD estimation by considering that some preemption combinations are infeasible, and also that some cache block reloads should not be accounted for by the CRPD analysis. In this thesis we propose a methodology for a more precise CRPD analysis under LP-FPP scheduling, considering both sources of over-approximation.

3.1.2 Research Goal 2

RG2: To improve the schedulability of tasks under LP-FPP scheduling in partitioned multi-core systems.

Partitioned scheduling can be seen as a set of several single core cases, and it is widely used in the industrial domain. Since it has been shown by Buttazzo et al. [3] that in the single core case LP-FPP scheduling dominates over the other LPS approaches, and also over non-preemptive and fully-preemptive scheduling, our goal is to investigate possible schedulability improvements when using LP-FPP and partitioning in multi-core real-time systems.

3.1.3 Research Goal 3

RG3: To use LP-FPP scheduling in order to reduce the risk of experiencing a deadline miss in systems which can occasionally tolerate deadline misses.

In real-time systems which can occasionally tolerate a deadline miss, it is important to decrease the deadline miss probability of a task, if possible. Our goal is to use LP-FPP scheduling to decrease the deadline miss probability of a task by defining non-preemptive regions, i.e. by selecting preemption points.

3.2 Research process

In this thesis, we define and evaluate new methods, techniques and theoretical foundations in the context of Fixed Preemption Point Scheduling, considering the research goals stated above. Research that combines the design and development of solutions in this way is typically referred to as constructive research. A constructive research process consists of the following four steps:

1. Problem formulation, which answers the following questions:
   • What is the problem that we want to solve?
   • Is the problem relevant from a theoretical and practical perspective?
   • What is the current state of the art and state of the practice with respect to the problem?

2. Solution proposal:
   • What is the existing knowledge that can be used in order to solve the problem?
   • What is the void in the current knowledge?
   • How can we fill the void?
   • What is the theoretical solution?

3. Implementation of the solution:
   • How can we construct a practical solution from the theoretical construct?

4. Evaluation:
   • Does the proposed solution solve the problem?
   • Is the proposed solution better than the other solutions (if they exist)?

Figure 3.1. Research process steps.

In addition to the described four steps, in our work the research process for each goal (see Figure 3.1) starts with a literature survey in which we explore the current state of the art. We then identify the problems, such as the sources of over-approximation for CRPD estimation in the LP-FPP context. We also identified areas in the state of the art of real-time systems where schedulability can be improved when using LP-FPP, such as real-time systems which can occasionally tolerate deadline misses, as well as partitioned multi-core systems. For each of the identified problems we proposed a solution, which was later implemented and evaluated. For the evaluation we used methods for empirical evaluation which

are well accepted in real-time research, e.g., the UUniFast algorithm [14] for the generation of task utilisations. The research process flow may differ depending on the stated problems, i.e. research goals. The first research goal was achieved in two iterations of the research flow graph, since the problem formulation about infeasible useful cache block reloads was discovered after the evaluation of the first contribution to RG1, which concerns infeasible preemption combinations. Research goals RG2 and RG3 are currently addressed with only one iteration of the research flow graph, but from their evaluations we may further identify new research problems.

3.3 Thesis contributions

Here we define the thesis contributions, mapping them to the research goals, defined in Section 3.1, that they contribute to. We first list the contributions, which are described in detail in the following subsections.

To achieve the research goal RG1, we define:

• Contribution C1: A method for tightening the bounds on CRPD in LP-FPP scheduling, taking into account infeasible preemption combinations and infeasible useful cache block reloads.

To achieve the research goal RG2, we define:

• Contribution C2: A partitioning criterion for the Worst-Fit Decreasing based partitioning.
• Contribution C3: A comparison of existing partitioning strategies for LP-FPP scheduling.

To achieve the research goal RG3, we define:

• Contribution C4: A probabilistic response time analysis for LP-FPP scheduling.
• Contribution C5: A preemption point selection method for real-time systems which can occasionally tolerate a deadline miss.

In Table 3.1 we show the mapping between the research goals and the contributions defined in the thesis.

Table 3.1. Mapping between the research goals and the contributions.

      C1   C2   C3   C4   C5
RG1   x
RG2        x    x
RG3                  x    x

3.3.1 Research contribution C1

C1: A method for tightening the bounds on CRPD in LP-FPP scheduling, taking into account infeasible preemption combinations and infeasible useful cache block reloads.

Infeasible preemption combinations

By accounting only for the feasible preemption combinations, an upper bound on the CRPD can be reduced in many cases. The problem consists of two main subproblems:

• How to identify infeasible preemption combinations?
• How to determine which feasible combination results in the worst-case CRPD?

In Figure 3.2 we show a task τi with three preemption points and three useful cache block sets, one at each point. Let us assume that τi can be preempted by two higher priority tasks with short WCETs and long periods. The proposed method would first capture the fact that, e.g., the instances of the higher priority tasks cannot preempt at all three preemption points, but at most at two of them.

Figure 3.2. Motivating example for the C1.

Furthermore, the method would detect which preemption combination results in the maximum CRPD, e.g., preempting at PP i,2 and PP i,3, based on the evicting cache block sets of the preempting tasks and the useful cache block sets of the non-preemptive regions of the preempted task τi. In order to address the first subproblem (identifying feasible combinations), we defined a condition for identifying infeasible preemption combinations, and for the second subproblem we defined a constraint satisfaction problem which derives the maximum CRPD based on the preemption combinations not deemed infeasible.

Infeasible useful cache block reloads

The existing approaches for CRPD estimation in the context of LP-FPP scheduling do not consider the fact that, once we account for the eviction of a useful cache block at some preemption point, the reload of the same memory block should be accounted for at most once until the succeeding non-preemptive region where the memory block is re-accessed by the same task. Let us consider the example shown in Figure 3.3. Let us assume that the memory block m is accessed by task τi during its execution, and that m is evicted by a preempting task τh during the preemption at PP i,k. Also, memory block m is in the useful cache block set at each preemption point from PP i,k until PP i,l−1, since m must be cached and m may be reused during the remaining execution of τi starting from PP i,k. However, accounting for the eviction and the reload of m at each of those preemption points would be an over-approximation, since m is re-accessed only after PP i,l−1, and therefore can be reloaded at most once during the interval from the point where it is evicted (PP i,k) until the point where it is re-accessed (δi,l).

Figure 3.3. Example for the infeasible memory cache block reloads.
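The at-most-once reload accounting behind the Figure 3.3 scenario can be sketched as follows. This is an illustrative simplification: the function assumes the preempting tasks always evict the block (the test against their evicting cache block sets is omitted), and the encoding of re-access points is hypothetical.

```python
def reloads_for_block(block, preempted_points, ucb_at, reuse_points):
    """Count how often `block` must be reloaded: however many preemptions
    evict it inside one eviction-to-reuse interval, one reload suffices.

    preempted_points -- set of preemption-point indices assumed taken
    ucb_at           -- dict: point index -> useful cache block set there
    reuse_points     -- set of point indices after which `block` is
                       re-accessed (end of its eviction-to-reuse interval)
    """
    reloads = 0
    evicted = False
    for p in sorted(ucb_at):                    # walk points in program order
        if p in preempted_points and block in ucb_at[p]:
            evicted = True                      # evicted here (or still evicted)
        if p in reuse_points:
            if evicted:
                reloads += 1                    # one reload covers the interval
            evicted = False                     # block is cached again
    return reloads
```

With m useful at three consecutive points, preempted at the first two, and re-accessed only after the last one, the sketch charges a single reload where a per-point accounting would charge two.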

Therefore, we formulate the following sub-problems:

- How to identify infeasible useful cache block reloads?
- How to determine which feasible preemption combinations and useful cache block reloads result in the worst-case CRPD?

In order to address the first subproblem, we defined a condition for identifying infeasible useful cache block reloads, and for the second subproblem we defined a constraint satisfaction model which derives the maximum CRPD from the feasible useful cache block reloads together with the feasible preemption combinations. Since the feasible useful cache block reloads depend on the feasible preemption combinations, we use a constraint satisfaction problem to compute an upper bound on the CRPD.

3.3.2 Research contribution C2

C2: A partitioning criterion for Worst-Fit Decreasing based partitioning.

The existing approaches for LP-FPP multicore scheduling do not consider partitioned multicore systems. Such systems are well accepted in industry, and they can be seen as a set of single-core scheduling cases, since a subset of the taskset is allocated to a specific processor. The exact allocation of tasks to processors affects the schedulability of a real-time system, and this allocation problem is proven to be NP-hard, as it is a variation of the bin-packing problem.

Figure 3.4. Multicore partitioning of the tasks, using LP-FPP.

In Figure 3.4 we show the overall process: the taskset is first ordered according to a specified task parameter, and the tasks are then allocated one by one, considering the LP-FPP approach and the specified partitioning criterion. The partitioning criteria are based on the well-known bin-packing heuristics First Fit Decreasing (FFD) and Worst Fit Decreasing (WFD).

We contribute to RG3 by integrating the LP-FPP approach with partitioned scheduling, and we propose a novel partitioning criterion to be used during taskset allocation with the WFD bin-packing heuristic. Unlike FFD-based partitioning, which needs only a binary schedulability result in order to allocate a task to a processor, WFD-based partitioning needs a quantitative value providing a schedulability metric for the different processors, and it allocates a task to the processor where this value is lowest. The proposed criterion is based on the maximum blocking tolerance of the tasks on a processor: the higher the maximum blocking tolerance, the greater the possibility that the next task assignment to that processor still results in a schedulable taskset. This is because the maximum blocking tolerance quantifies the peak processing demand that could jeopardise the schedulability of the taskset on a processor.

3.3.3 Research contribution C3

C3: A comparison of existing partitioning strategies for LP-FPP scheduling.

We also contribute to RG3 by comparing different combinations of bin-packing heuristics and taskset orderings, to investigate which one improves schedulability the most under different system parameters. The evaluation, performed on randomly generated tasksets, shows that in the general case no single partitioning strategy fully dominates the others.
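A minimal sketch of the WFD allocation loop with a blocking-tolerance criterion is given below. The functions is_schedulable and blocking_tolerance are hypothetical placeholders for the LP-FPP schedulability test and the blocking-tolerance computation; for illustration the sketch simply prefers the core whose extended taskset retains the greatest tolerance, in the worst-fit spirit of keeping the most headroom on every core (the thesis's exact quantitative metric may be defined with the opposite sign).

```python
def wfd_partition(tasks, n_cores, is_schedulable, blocking_tolerance):
    """Worst-Fit-Decreasing allocation with a blocking-tolerance criterion.

    tasks              -- tasks already sorted by the chosen ordering
                          (e.g. decreasing density)
    is_schedulable     -- placeholder for the LP-FPP schedulability test,
                          applied to one core's taskset
    blocking_tolerance -- placeholder metric for a core's taskset; here a
                          higher value is taken to mean more headroom
    Returns a list of per-core tasksets, or None if some task fits nowhere.
    """
    cores = [[] for _ in range(n_cores)]
    for task in tasks:
        # keep only the cores on which the extended taskset stays schedulable
        fits = [core for core in cores if is_schedulable(core + [task])]
        if not fits:
            return None                       # allocation failed
        # worst-fit criterion: preserve the most headroom, i.e. pick the
        # core whose resulting taskset has the highest blocking tolerance
        best = max(fits, key=lambda core: blocking_tolerance(core + [task]))
        best.append(task)
    return cores
```

With crude utilisation-based stand-ins for the two placeholder functions, four tasks with utilisations 0.6, 0.6, 0.3, 0.3 end up evenly spread over two cores, which is exactly the even distribution the evaluation credits WFD with at lower average utilisations.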
However, the evaluation results reveal that certain partitioning strategies perform significantly better with respect to overall schedulability for specific taskset characteristics. While density-based partitioning increases the schedulability of tasksets consisting of high-density tasks, priority-based partitioning increases the schedulability of tasksets with a higher number of tasks and lower average utilisations. On average, the FFD-based heuristics dominate the WFD-based ones; however, WFD increases the schedulability of tasksets with lower average utilisations, since the tasks may be evenly distributed among the cores. The results also reveal that the proposed partitioning strategies dominate Fully Preemptive and Non-Preemptive partitioned scheduling. Moreover, the proposed approach is affected significantly less by an increase of the CRPD than Fully Preemptive scheduling.
