
System Analysis of Energy-Constrained

Quality of Service and Power Management Techniques

SIMON ERIKSSON

Master's Thesis at ENEA and the Department of Machine Design, KTH
Supervisors: Magnus Persson, Barbro Claesson, Detlef Scholle

Examiner: Martin Törngren

MMK 2010:68 MDA 377


Master’s thesis MMK 2010:68 MDA 377

System Analysis of Energy-Constrained Quality of Service and Power Management Techniques

Simon Eriksson

Approved: 2010-06-21
Examiner: Martin Törngren
Supervisors: Magnus Persson, Barbro Claesson, Detlef Scholle
Commissioner: ENEA
Contact persons: Barbro Claesson, Detlef Scholle

Abstract

This master's thesis is part of the GEODES project, Global Energy Optimization for Distributed Embedded Systems, and deals with issues regarding energy-constrained Quality of Service, QoS, in real-time embedded systems.

Due to developments in the area of embedded systems, new demands have arisen. Especially for mobile devices, the growth of functionality has outpaced the development of batteries. To deal with these kinds of problems, a new aspect of QoS has become of interest: bringing power awareness into the system. Power awareness means enabling enhancements of, or guarantees for, the lifetime of a device through intelligent software. For online decision making a software component, an energy-constrained QoS manager, is needed. The manager module may be distributed in a system or dispatched locally. Its mission is to decrease power consumption at the cost of reduced system performance, a procedure referred to as introducing system degradation into the system.

A lot of the research in the power management field targets single components. This thesis aims to investigate the possibilities of system-level management. An analysis of existing energy-constrained QoS and power management frameworks and techniques has been made, both on system level and on component level. As a result of the analysis, a specification for the interface and policies of an energy-constrained QoS power manager is presented.

A fundamental choice in the specification and design was that the system at hand consists of power manageable components, PMCs, which can be seen as atomic black boxes. This modeling choice turned out to be complicated to apply to real-life systems, and a solution proposal for handling interdependencies between PMCs is discussed.

A design and a prototype, the Power Manager E, PME, module, have been implemented to fulfill the specifications created from the analysis. The design handles both hardware and software as PMCs and optimizes system performance under an energy constraint, expressed through a desired system runtime set by the user of the system. The prototype is implemented on an i.MX31 platform from Freescale and runs as a module on top of the OSE 5.4 Delta operating system provided by ENEA.


Master's thesis MMK 2010:68 MDA 377

System Analysis of Energy-Constrained QoS and Power-Aware Energy Consumption for Embedded Systems

Simon Eriksson

Approved: 2010-06-21
Examiner: Martin Törngren
Supervisors: Magnus Persson, Barbro Claesson, Detlef Scholle
Commissioner: ENEA
Contact persons: Barbro Claesson, Detlef Scholle

Summary (Sammanfattning)

This master's thesis is part of the GEODES project, Global Energy Optimization for Distributed Embedded Systems. The work addresses issues concerning energy-constrained Quality of Service, QoS, in embedded real-time systems.

Recent developments in the field of embedded systems have introduced new requirements, especially for mobile devices, where the arrival of new functionality has outpaced the development of battery capacity. To handle this, a new aspect of QoS has gained increased interest: designing systems with power-aware energy consumption. Power awareness here means software that enables improvements of, or guarantees for, the runtime of the system, online or offline. For online decisions a software component is required, a software module for energy-constrained QoS. The mission of this module is to reduce power consumption at the cost of lower system performance, a procedure referred to as introducing controlled system degradation into the system.

Research in the field mostly concentrates on reducing the energy consumption of individual components. This thesis instead aims at finding methods that work at the system level. An analysis of existing frameworks and techniques for software-controlled energy consumption and energy-constrained QoS has been carried out, both at system level and at component level. The result of the analysis is presented as a specification for an energy-constrained QoS manager module adapted for system-level control. A fundamental assumption in the specification and implementation was that the system in question consists of components with adjustable energy consumption. These components can be seen as independent, self-contained pieces of functionality. The modeling assumption turned out to be complicated to apply to real systems, and a solution proposal for handling dependencies between components is discussed.

A design and a prototype, Power Manager E, PME, have been created according to the specification developed in the report. It handles both hardware and software as adjustable components and tries to optimize system performance under a constraint on energy consumption. The constraint is set by the user through a desired system runtime. The prototype is built on an i.MX31 platform from Freescale and is an add-on module to OSE 5.4 Delta from ENEA.


Contents

1 Introduction
1.1 Problem Statement
1.1.1 Energy-Constrained Quality of Service
1.1.2 Demonstration
1.2 Method
1.3 Delimitations
1.4 Use Case: Rescue worker aid system
1.4.1 Features
1.4.2 Workload Description
1.4.3 General Power Management Policy

2 Introduction to Energy-Constrained QoS
2.1 Models and Definitions
2.2 System Degradation - To Scale System Performance
2.2.1 Application-defined Performance Model
2.2.2 System-defined Performance Model
2.3 Policies and Policy Optimization

3 Advanced Configuration and Power Interface - ACPI
3.1 Description
3.1.1 Global States - Gx
3.1.2 Sleep States - Sx
3.1.3 Device States - Dx
3.1.4 Processor States - Cx
3.1.5 Device and Processor Performance States - Px
3.1.6 Device Control
3.2 Summary

4 Energy-Constrained QoS and Power Management - Techniques and Methods
4.1 Proposal of System Architecture
4.2 Fundamental Models and Definitions
4.3 Management Policies - How to Use Energy
4.3.1 Multiple-choice Knapsack Problem
4.3.2 Dynamic Power Management Methods - How to Make Use of Idle Time
4.4 Summary

5 Specification of an Energy-Constrained QoS Power Manager Module
5.1 Interface Specification
5.1.1 Predictive Techniques
5.1.2 Stochastic Control
5.1.3 Adaptive task set, EQoS framework
5.1.4 ECOSystem
5.1.5 Interface specification conclusions
5.2 Data Structures
5.3 Policy Specification

6 Design of the Power Manager E, PME, Module
6.1 PME Policy Design
6.1.1 Fundamental Inputs to the PME module
6.1.2 System Benefit Definition
6.1.3 PME Dynamic Power Management Policy
6.2 Interface Design
6.2.1 Application Programmers Interface
6.2.2 Power Manager Portability Interface
6.2.3 OSE Signal Interface
6.3 Description of PME Policy Algorithm
6.3.1 DPM Algorithm
6.3.2 Power Redistribution and Time Slicing

7 Implementation of the PME module
7.1 Software Architecture
7.2 PME Policy Implementation
7.3 Interface Implementation
7.3.1 PME API
7.3.2 PME Portability Interface
7.4 Results
7.4.1 Verification Interface Specification
7.4.2 Verification Policy Specification

8 Conclusions
8.1 Discussion
8.2 Future Work

Bibliography

A Signals Description - PME Module
A.1 OSE Signals

B Work flow - PME Module


List of Tables

2.1 Performance model
3.1 Global system states specified in the ACPI
3.2 Sleep states specified in the ACPI
3.3 Device states specified in the ACPI
3.4 Processor states specified in the ACPI
3.5 Device and processor performance states specified in the ACPI
5.1 Interface specifications for an energy-constrained QoS power manager
5.2 Policy specifications for an energy-constrained QoS power manager
6.1 Component classification suggestion
6.2 PME API description
6.3 PME portability interface description
6.4 Table showing the characteristic matrices of three PMCs
7.1 Data structure used to store the system state
7.2 Verification of PME API
7.3 Verification of PME portability interface
7.4 Verification of interface specifications, PME module
7.5 Verification of policy specifications, PME module
8.1 Examples of components that can be modeled as PMCs
A.1 PME OSE Signal Interface Description


List of Figures

2.1 Conceptual figure of relationship between an energy-constrained power manager and power manageable components
3.1 Architecture of an ACPI-compliant system. Image source: ACPI Specifications 4.0, www.acpi.info/spec.htm
3.2 State relationship of an ACPI-compliant system. Image source: ACPI Specifications 4.0, www.acpi.info/spec.htm
4.1 Conceptual software architecture of the energy-constrained QoS Manager Module
6.1 Model of an idling PMC with three power consumption levels
6.2 How the estimated power consumption relates to actual power consumption
6.3 Overview of the PME design and its interfaces
6.4 System overview illustrating the time slicing concept
6.5 Calculation of the threshold value P_save
8.1 The PME design has room for implementing the use of a mode dependency graph when selecting which upgrades to apply to the system
B.1 Overview of PME work flow diagram


List of Abbreviations

Abbreviation Description

ACPI Advanced Configuration and Power Interface
API Application Programmer Interface
DPM Dynamic Power Management
DVFS Dynamic Voltage and Frequency Scaling
EQoS Energy-aware Quality of Service
IPC Interprocess Communication
IRIS Increasing Rewards for Increasing Service
MCKP Multiple-Choice Knapsack Problem
OSE Operating System E
PA Power-aware
PMC Power Manageable Component
PME Power Manager E
PI Performance Index
WCET Worst Case Execution Time


Chapter 1

Introduction

This thesis is a part of the GEODES (Global Energy Optimization for Distributed Embedded Systems) project and deals with issues regarding energy-constrained QoS (Quality of Service) in real-time embedded systems. The work is a cooperation between ENEA and the Department of Machine Design at the Royal Institute of Technology, Stockholm, Sweden.

Due to developments in the area of embedded systems, new demands have arisen. Especially for mobile devices, the growth of functionality has outpaced the development of batteries. To deal with these kinds of problems, a new aspect of QoS has become of interest: bringing Power Awareness, PA, into the system. Power awareness means being able to make intelligent software design decisions that enable enhancements of, or guarantees for, the lifetime of a device, online or offline. For online decisions, a software component, an energy-constrained QoS power manager, is needed, and new functionality must be implemented in the system software architecture to support its decisions. The goal is to decrease power consumption at the cost of reduced system performance. This procedure will be referred to as introducing a form of system degradation into the system, meaning that the power manager has the ability to degrade the system's functionality with an increased system runtime as a result.

There are standards concerning Power Management, PM. A consortium of international companies has constructed a standard called ACPI, the Advanced Configuration and Power Interface [1], which provides specifications for both hardware and software development. In an attempt to comply with these standards, adaptations of the design of the software component for energy-constrained QoS management are considered.

1.1 Problem Statement

1.1.1 Energy-Constrained Quality of Service

The functionality to decide which actions to take to lower the power consumption is in this thesis called an energy-constrained QoS manager module. Its responsibility is, given information from both lower and higher levels of the system, to take proper decisions for attaining the desired power consumption and to apply them to the system.

Given the nature of GEODES, a distributed approach is kept in mind for the specification of the power manager module and the algorithms it implements. Most of the solutions presented in this research field [15, 10, 25] suggest that the power manager module should be OS-directed, that is, it should be implemented as a module in the OS layer or in a middleware layer. Hence, the module specified in this thesis will probably be part of a middleware layer that interfaces downwards against the operating system as well as upwards against the application layer.

The main task of the thesis is to specify and design a software component, an energy-constrained QoS manager module, to be used in an aid system for rescue workers, see section 1.4. This means that a thorough investigation on the system level needs to be conducted and that the behavior of the software component needs to be specified. A concept will be created and shown to be feasible in theory, and a design will be implemented that is feasible considering the constraints of this thesis. Questions that need answers for solving this task are:

• How shall the concept of power consumption be defined, given the use case?

• How shall the concept of performance of the embedded system be defined, given the use case?

• What kind of effects do different decision policies have on a real-time system and on its real-time performance?

• Is there an optimal solution, given the determined definitions for the use case?

• If such a solution exists, is it feasible given the limitations of this thesis? See section 1.3.

• What kind of models are going to be used for the system subcomponents and their power consumption, if online measuring/estimation is not available?

Since the energy-constrained QoS manager component will be situated in a middleware layer that interfaces both the applications and the underlying operating system, one subtask of the thesis is to specify and implement an Application Programmer's Interface, API, and a portability API. To comply with related standards in the area of power management, the implemented design will, if possible, follow the ACPI standard. A relevant aspect of this task is to analyze the limitations and possibilities of the ACPI.

1.1.2 Demonstration

To be able to test and verify the consequences of the manager's actions, a secondary subtask is to implement a platform that enables the software to be demonstrated and verified. This is done to show a management policy strategy that is feasible, and successful, in terms of decreasing power consumption as a tradeoff against system performance. Given that the GEODES project is approaching its end, a demo is an important part, as it shows the gain of a power-aware system in a way that is easy to comprehend.

1.2 Method

The work process of the thesis is threefold. It starts with an in-depth literature study covering the design and real-time implications of power management policies and mechanisms on embedded systems. This is done to gain knowledge of the dynamics and limitations that lie ahead when constructing an energy-constrained QoS manager for a power-aware embedded system. Key aspects of interfacing with common hardware devices such as the CPU, storage systems, the GPU and display systems are also researched. The phase results in a theoretical report as well as a specification for the energy-constrained QoS manager component and the interfaces connected to it.

The second phase is devoted to designing an implementation that fulfills the specifications given in phase one, in order to implement the energy-constrained QoS manager module that will control the behavior of a power-aware real-time embedded system based on the use case.

The third phase consists of implementing the design from the second phase, resulting in the demonstration discussed in the problem statement. The work from all three phases is documented continuously in the thesis.

1.3 Delimitations

The time assigned for writing this thesis is 20 weeks, which means that the scope of the thesis has to comply with the given time limitation. The decision policy implemented may not be the optimal one, if the remaining time is not sufficient to implement it or if it is infeasible due to calculation time. A use case has been provided by the GEODES project and is specified more precisely in section 1.4. The use case serves as a setting that defines the outer perimeter of the thesis. The scenario used describes a system where the user desires a system runtime that is not attainable when utilizing the system at full performance. The task of the power manager is then to reduce system functionality to enable the runtime requested by the user. Under the user's restriction on desired runtime, the mission is to optimize the system performance. Given the aim of presenting a viable demonstration, the implementation serves as a proof of concept. Therefore, different kinds of simplifications may be made when modeling inputs to the system, such as measuring the energy consumption of the system, measuring the remaining energy available to the system, or some characteristics of system components. Every simplification is acknowledged in the report.

After ten weeks, a report of the literature study must be complete. The second and third phases must be completed within the remaining ten weeks.

The hardware for the test platform is limited to a Freescale board with an i.MX31 processor. The operating system used is OSE (Operating System E) 5.4 Delta, provided by ENEA.

In the implemented manager module, Power Manager E, PME, a fundamental choice has been made: it is built around the concept of power manageable components, PMCs. The implemented design does not handle explicit interdependencies between PMCs, although there is room for implementing such procedures in the design.

1.4 Use Case: Rescue worker aid system

The use case consists of a battery-driven system that helps the rescue worker by providing information, both remote information and information generated locally (by other equipment at hand to the rescue worker). Local information such as position and other measurements of the rescue worker's environment is monitored and sent back to the operating central. The system also monitors the rescue worker in terms of different health conditions, and it provides a link for speech and visual communication with other rescue workers and the operating central.

The system is power aware, i.e. it has the ability to account for the available energy resources and deploy its functionality accordingly, in order to guarantee a certain lifetime of the system. Therefore the system has to support some form of ability to degrade its functionality, which means that the system follows a power management policy during runtime. The policy limits the power consumption of the submodules of the system as a tradeoff for performance. The management policy is carried out on the system by the power manager.

1.4.1 Features

The main idea is to be able to have a system with a lot of functionality that can be degraded to improve the battery lifetime.

• Voice communication
  – With other rescue workers
  – With the operating central
• Video communication
  – With other rescue workers
  – With the operating central
• Display
  – Local information from other equipment
  – Remote information
• Monitoring
  – Health conditions
    ∗ Heart rate
    ∗ Breathing
  – Local environment
    ∗ Temperature
    ∗ Acceleration of the rescue worker's body
  – Positioning

1.4.2 Workload Description

It is assumed that the system needs to handle tasks with both hard and soft deadlines. Since the user can be a person as well as another system, the tasks may be periodic as well as aperiodic. Therefore, the system needs to be able to handle both periodic and aperiodic workloads.

1.4.3 General Power Management Policy

The general policy is to be able to guarantee a certain desired lifetime of the system (at the cost of performance). The policy mission is to optimize performance under an energy constraint, accepting minimum system performance if necessary.


Chapter 2

Introduction to Energy-Constrained QoS

The notion of Energy-Aware Quality of Service, EQoS, was created by Pillai et al. EQoS, implemented in [10], is a framework for maximizing the benefit of a real-time system for a certain amount of energy available to it. The concept is that a system should be able to provide multiple levels of quality that are coupled with different levels of energy consumption. The idea comes from the Quality of Service notion, QoS, which originated in the field of computer network performance. In that sense, QoS means providing dynamic allocation of network resources given some criteria connected to the tasks carried out on the network. Today, the term is commonly used in other areas such as multimedia applications and real-time systems. The reason for bringing EQoS into mobile embedded systems is that they often have limited power resources. If it is not possible to change batteries or provide some other source of power when the system's energy resources are running out, the situation can become quite problematic. Therefore it is very beneficial to be able to adapt the system's functionality to hit the desired runtime. The mechanism needed to make these decisions is implemented by an energy-constrained QoS power manager module.

Work done in this field [10] addresses the problem by assigning the tasks running on the system different Quality of Service, QoS, levels to decrease power consumption. Pillai et al. use a notion of task utility assigned to each QoS level a task provides. This is an abstract notion, without real metric units, that relates the importance or benefit to the system of tasks relative to each other. One drawback of the method proposed by Pillai et al. [10] is that it does not support the ability to set constraints on utility given the presence of another task, e.g. task A has utility x only if task B is running. How the assignment of utility to a task is done is related to how the QoS degradation of the task is implemented [10]; see section 2.2 for a closer description.

2.1 Models and Definitions

The foundation for bringing power awareness into a system lies in the existence of system components that have the ability to adapt their power consumption; they are thus power manageable. A power manageable component is, according to Benini et al. [4], a hardware device in the system. In line with the definition in [4], the granularity of a system will in this thesis be a power manageable component, a PMC, which is a component classified as an atomic black box. That is, from the system level, the PMCs are just different forms of functionality that can provide various levels of performance as a tradeoff for power consumption.


Including software tasks in the concept of PMCs expands the definition given in [4] by Benini et al. This should not conflict with the original definition, and it provides a beneficial addition to the concept in terms of the EQoS framework. As described earlier, the work done by Pillai et al. [10] investigates methods for adapting the tasks running on a system in order to control its power consumption.

Figure 2.1 illustrates the relationship between the power manageable components and a manager module. In the registration process at boot-up of a system, the PMCs supply the energy-constrained QoS manager module with information about their power consumption levels. The registered PMCs are governed by a "master/slave" relationship in the sense that the energy-constrained QoS manager is the master and issues orders. Orders contain directions on which power consumption level the PMC shall operate at.

Figure 2.1. Conceptual figure of relationship between an energy-constrained power manager and power manageable components.

The energy-constrained QoS manager module's assignment is to choose the operating levels of the power manageable components of the system so that the total amount of energy consumed does not become larger than the amount supplied by the energy source. These choices are governed by Management Policies, MP. The management policies determine how and when the available energy consumption shall be divided between the power manageable components, see chapter 4. As a result, the policies provide the solution to a performance optimization under energy constraints, or vice versa. In the scope of this thesis the former, i.e. performance optimization under energy constraints, will be chosen as the optimization criterion.
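To make the PMC abstraction concrete, the following C sketch shows one way a PMC could publish its discrete power/performance levels and receive orders from the manager; the type and function names are hypothetical and are not taken from the thesis' PME implementation.

/* Minimal sketch, assuming hypothetical names: how a power manageable
 * component (PMC) could describe its discrete power/performance levels to an
 * energy-constrained QoS manager at registration, and how the manager (the
 * "master") orders a level change. */
#include <stddef.h>

typedef struct {
    double power_mw;     /* maximum power drawn at this level [mW]  */
    double performance;  /* relative performance/utility (unitless) */
} pmc_level;

typedef struct pmc {
    const char      *name;       /* e.g. "display" or "video_task"      */
    const pmc_level *levels;     /* sorted from lowest to highest power */
    size_t           num_levels;
    size_t           current;    /* level currently ordered by the manager */
    int (*apply_level)(struct pmc *self, size_t index); /* slave-side hook */
} pmc;

/* The manager issues an order: run at most at the consumption of `index`. */
static int pm_order_level(pmc *c, size_t index)
{
    if (index >= c->num_levels)
        return -1;
    c->current = index;
    return c->apply_level ? c->apply_level(c, index) : 0;
}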

2.2 System Degradation - To Scale System Performance

The key idea in a power manageable system is to trade performance of the system against its power consumption. The idea of system degradation is borrowed from the field of fail-safe systems, which uses the term graceful degradation as a way to express that the system can handle some failures and still be functional; it degrades gracefully in case of an unforeseen failure. System degradation is used here in the sense that the system can scale, i.e. trade, its performance in the presence of some constraint on its energy consumption.

Performance is a degree of freedom strongly dependent on the tasks running on the system. To model performance as a tradeoff for power consumption, a performance model is needed. There are two model proposals, the application-defined performance model and the system-defined performance model, explained below. These models can be used when designing the tradeoff policies for the power manager.

2.2.1 Application-defined Performance Model

For real-time control applications, a simple way to gracefully degrade performance is to poll values of the controlled environment less frequently. To do that, the period of the task responsible for polling is changed to a larger value, so that it is invoked less often. This must of course be done in a controlled fashion, since a sampling frequency set too low can lead to instability. The utility can be derived by applying a performance index (PI) [21] to the operation of elongating the polling period. When the task of the system is to calculate a value based on an iterative algorithm, the tradeoff instead translates into the number of iterations calculated. The utility can then be set as a monotonically increasing function of the service time of the task. This is known as imprecise computation, which provides Increasing Rewards for Increasing Service, IRIS. To model graceful degradation of IRIS tasks, the WCET is shortened, thus allowing less time for computation, and the utility of the task is decreased [10].
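As a concrete illustration of the application-defined model, the sketch below (C, with purely illustrative numbers and names) represents a polling task with a few QoS levels, where a longer period lowers the average power contribution of the task.

/* Minimal sketch (hypothetical names, not from the thesis): application-defined
 * degradation of a polling task. Each QoS level elongates the polling period,
 * which lowers the average power per unit time but also the utility assigned
 * to the task. */
typedef struct {
    unsigned period_ms;   /* task period; longer period => invoked less often */
    double   utility;     /* abstract benefit of running at this level        */
    double   energy_mj;   /* average energy per invocation [mJ]               */
} task_qos_level;

/* Example: three QoS levels for a sensor-polling task. Utility is chosen
 * monotonically decreasing as the period grows. Numbers are illustrative. */
static const task_qos_level poll_levels[] = {
    { .period_ms =  100, .utility = 1.00, .energy_mj = 2.0 },
    { .period_ms =  250, .utility = 0.70, .energy_mj = 2.0 },
    { .period_ms = 1000, .utility = 0.30, .energy_mj = 2.0 },
};

/* Average power contributed by the task at a given level: energy per
 * invocation divided by the period (the same ratio E_i,j / t_i,j reappears in
 * chapter 4). mJ per second equals mW. */
static double task_avg_power_mw(const task_qos_level *l)
{
    return l->energy_mj / ((double)l->period_ms / 1000.0);
}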

2.2.2 System-defined Performance Model

Another aspect of system performance degradation is to look at more abstract system properties. A model given in [5] provides system metrics that can be used to compare the effects of a power management policy against the same system without any power management. The performance model describes stochastic system variables and consists of four characteristics, given in table 2.1.

One assumption is made when using this model: the system has to be capable of handling the system load without the power management policy being implemented. Top performance is given by the system with all PMCs in their highest state. Hence, the metrics are an incremental comparison against the default system.

These are interesting metrics that can be helpful in evaluating a management policy in the design phase. To calculate the metrics, some kind of workload has to be provided. The metrics reveal information to the designer on how the system will degrade in an abstract sense, but they do not give explicit control over the impact on system functionality. One approach to investigating the impact of a management policy is to calculate the performance metrics using event-driven simulation techniques [5].


Table 2.1. Performance model

Metric Description

Collision probability Given that the PMC has to transition itself to a higher state to serve a work request, the incremental collision probability is a measure of how high the probability is that another request is issued during the transition time, keeping two or more requests pending while the transition is completed.

Average latency penalty States how long the average wait is for a request before it is served, due to state transitions.

Throughput penalty How the number of requests served per time unit is reduced when the power management policy is applied.

Average transition time The average time spent in a given state transition for a given PMC.

2.3 Policies and Policy Optimization

A policy is a set of rules that defines the proper actions to take, given state information about the system and its workload, to achieve some mission objectives. Policies can for example be implemented as a lookup table, where each state combination of interest is stated together with the desired action, or through other algorithms. The main point is that these policy algorithms take decisions based on the system state as input.

Policy optimization is to find the optimal actions to take given the mission objectives. To be able to do this, a measurable metric has to be defined to compare the different valid actions against each other.
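As an illustration of the lookup-table form mentioned above, the following C sketch maps a hypothetical encoding of system state to a desired action; the state variables and actions are invented for the example.

/* Illustrative sketch of a lookup-table policy. The state encoding and the
 * actions are hypothetical, not taken from the thesis. */
typedef enum { BATTERY_HIGH, BATTERY_LOW } battery_state;
typedef enum { LOAD_LIGHT, LOAD_HEAVY }    load_state;
typedef enum { ACTION_KEEP, ACTION_DOWNGRADE_DISPLAY, ACTION_DOWNGRADE_ALL } action;

/* Each combination of system state of interest maps to a desired action. */
static const action policy_table[2][2] = {
    /*                  LOAD_LIGHT                 LOAD_HEAVY           */
    /* BATTERY_HIGH */ { ACTION_KEEP,              ACTION_KEEP },
    /* BATTERY_LOW  */ { ACTION_DOWNGRADE_DISPLAY, ACTION_DOWNGRADE_ALL },
};

static action policy_decide(battery_state b, load_state l)
{
    return policy_table[b][l]; /* decision is a pure function of system state */
}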


Chapter 3

Advanced Configuration and Power Interface - ACPI

A consortium of companies (Hewlett-Packard Corporation, Microsoft, Intel Corporation, Phoenix Technologies Ltd., Toshiba Corporation) has tried to pave the way for easier implementation of energy-efficient computer systems. The result is the Advanced Configuration and Power Interface, ACPI [1], a standard that specifies a framework for constructing an energy-aware system. The rest of this chapter serves as a summary of the extensive specifications that can be downloaded at www.acpi.info.

3.1 Description

The approach to power management taken by ACPI is that the software component controlling the system, regarding energy consumption and configuration, shall be implemented in the operating system layer, thus providing operating system-directed configuration and power management. ACPI describes interfaces towards hardware and software as well as the data structures needed to implement an energy-efficient system. The architecture for an ACPI-compliant system can be seen in Figure 3.1. It shows how the different interfaces are defined by the ACPI and how the parts of a system relate to each other.

The fundamental idea in ACPI is that it defines a set of states that a system can transition between. There are five different classes of states. Most of the states represent some form of inactive state. The states are enumerated, and the lowest number indicates the most active state in a class. A description of each state class is given below, and the relationship between them is depicted in Figure 3.2.

3.1.1 Global States - Gx

In an ACPI-compliant system, there are five states that are visible to the user. These are called global states and consist of one working state, G0, and four non-working states.

The different levels of sleep are characterized by how much context is saved and how the context is saved. The G2 state, soft off, is equal to the sleep state S5; it does not save system context and requires a reboot to transition to a higher state. In state G1, sleeping, system context is saved and no reboot is needed to transition back to the working state.

Inside the sleep state G1, several forms of sleep states are defined, which are described more closely in section 3.1.2. G0 is the working state, meaning this is the state where user-mode applications are dispatched. The performance states, Px, reside in the G0 state. The Px states are the ones of greatest interest in this thesis, since they describe a performance tradeoff against lower energy consumption. The global states are given in table 3.1.

Figure 3.1. Architecture of an ACPI-compliant system. Image source: ACPI Specifications 4.0, www.acpi.info/spec.htm

Table 3.1. Global system states specified in the ACPI.

Class Description

G0 Working

G1 Sleeping

G2/(S5) Soft off

G3 Mechanical off

S4 Non-Volatile Sleep


3.1.2 Sleep States - Sx

There are five sleeping states, four of which reside in the global state G1 (sleeping). The main characteristics differentiating the sleeping states are the amount of context that is lost upon entering the state and the latency when exiting the state to a higher state (waking up). The sleep states are given in table 3.2.

Table 3.2. Sleep states specified in the ACPI.

Class Description

S1 No system context is lost and provides low latency at wake-up.

S2 Low latency in waking up, but the context for CPU and system cache is lost. The OS is responsible for saving the context before entering this state.

S3 Still low latency for waking up, but all system context is lost except system memory; the hardware maintains the memory context.

S4 Non-Volatile Sleep. The deepest sleep state. It is presumed that all hardware is powered off but platform context is maintained. The state saves the system context to non-volatile memory and powers off the system almost completely.

S5 Soft off. Similar to the S4 state but it does not save any context. Requires a full reboot to exit from the state.

3.1.3 Device States - Dx

For all hardware resources connected to the system, ACPI defines a class of device states. These five states reside inside the G0 state and are not visible to the user of the system.

Table 3.3. Device states specified in the ACPI.

Class Description

D0 The device is fully active.

D1 State definition is left to the developer with the constraint that the state consumes more power and preserves more context than D2.

D2 State definition is left to the developer with the constraint that the state consumes less power and preserves less context than D1.

D3(hot) The state consumes more energy than D3 and has a lower latency for returning to the D0 state. Most importantly, the power supply is kept on, thus the device is reachable through software calls.

D3 Device context and power are lost. The OS has to re-initialize the device when exiting the state.

3.1.4 Processor States - Cx

The CPU gets special treatment in the ACPI specifications. Since it is the core of a system, it cannot be considered just another hardware resource. Therefore, the processor states, Cx, are kept separate from the device states, Dx.


Table 3.4. Processor states specified in the ACPI.

Class Description

C0 The processor is fully operational and is executing instructions.

C1 No instructions are executed, and the latency for waking up and entering C0 shall be short enough to neglect.

C2 No instructions are executed, and the state consumes less power than C1. The worst-case latency for returning to the C0 state is provided to the OS.

C3 Even lower consumption and larger latency for returning to the active state than C2. The worst-case latency has to be provided to the OS. CPU caches are maintained but ignore snoop activity. The OS is responsible for ensuring processor cache coherency.

3.1.5 Device and Processor Performance States - Px

The device and processor performance states lie within the states D0 and C0. Their characteristic is that they trade performance/capabilities for energy consumption while still residing within the active states D0 or C0.

Table 3.5. Device and processor performance states specified in the ACPI.

Class Description

P0 The state provides full activity and may have the highest power consumption.

P1 The performance is limited below the level provided in P0. It consumes less power than maximum power.

Pn The scale n = [2, ..., 16] for device and processor performance levels consists of up to a maximum of 16 states. The states are sorted by power consumption, where Pn represents minimal power consumption and the lowest performance while still residing in the active states D0 or C0.
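For illustration, the state classes of tables 3.1-3.5 could be carried in a power manager's bookkeeping roughly as in the C sketch below; the enumerations mirror the ACPI naming, while the struct and helper are hypothetical and not part of the ACPI specification.

/* Sketch: representing the ACPI state classes from tables 3.1-3.5 in a power
 * manager's bookkeeping. The enums mirror the ACPI naming; the device_power
 * struct and its helper are illustrative only. */
typedef enum { G0_WORKING, G1_SLEEPING, G2_SOFT_OFF, G3_MECH_OFF } acpi_global_state;
typedef enum { S1 = 1, S2, S3, S4, S5 }                            acpi_sleep_state;
typedef enum { D0_ACTIVE, D1, D2, D3_HOT, D3_OFF }                 acpi_device_state;
typedef enum { C0_EXECUTING, C1_HALT, C2, C3 }                     acpi_cpu_state;

typedef struct {
    acpi_device_state dstate;   /* D-state of the device                     */
    unsigned          pstate;   /* performance state Px, valid only in D0/C0 */
} device_power;

/* Lower enum value = more active state, matching the ACPI numbering. */
static int is_active(const device_power *d) { return d->dstate == D0_ACTIVE; }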

3.1.6 Device Control

The interface against the hardware described in Figure 3.1 consists of three conceptual blocks: the ACPI Tables, the ACPI BIOS and the ACPI Registers. The ACPI Tables contain descriptions of the functionality of the hardware that has registered with the power manager. The descriptions of the hardware interfaces in the ACPI tables are encoded in ACPI Machine Language, AML, which is a pseudo-code machine language. The instructions describe how to interact with the device. The higher-level commands are connected with the AML and can therefore be interpreted by the AML interpreter module to execute the necessary commands. When the AML code is executed, the ACPI Registers are written to or read from. The ACPI Registers block holds the registers that an ACPI-compliant hardware device must supply. The ACPI driver uses these registers for controlling and monitoring a device through the AML interpreter module.


Figure 3.2. State relationship of an ACPI-compliant system. Image source: ACPI Specifications 4.0, www.acpi.info/spec.htm

3.2 Summary

The ACPI specification is a very detailed and extensive specification for constructing a power-manageable system. For the scope of this thesis, the specifications imply too much overhead for implementation, but inspiration and useful ideas have been gathered from the framework. The ACPI specifications do not restrict the designer in the choice of management policies, but merely create a common framework for developers and manufacturers to work against, speeding up the development process and keeping costs down when creating power manageable systems and components.


Chapter 4

Energy-Constrained QoS and Power Management - Techniques and Methods

The chapter describes methods and techniques for managing energy-constrained quality of service. It gives a more explicit view of the problems at hand when designing a power manager. It lays the ground for the fundamental concepts of the power manageable system to be designed in this thesis, such as how to model the energy dissipated from the system, $E_{Diss}$, and general management policies. The chapter also reviews techniques and paradigms presented in the research field.

4.1 Proposal of System Architecture

The architecture for the energy-constrained QoS manager module can consist of four basic features that are set up in a feedback loop with the system. Inspiration is taken from [3], where the structure is used for monitoring an embedded system. The architecture is depicted in Figure 4.1.

The controller block gets state information from the registered PMCs and processes it into a format that is understandable for the policy and decision block. In the policy and decision block, the manager's decisions are taken. Actions are sent to the actuator block, which communicates with the power manageable components, i.e. power manageable hardware components or software tasks.
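A minimal C sketch of this feedback loop, with hypothetical types, names and trivial stub bodies, could look as follows.

/* Sketch of the feedback loop described above (controller -> policy/decision
 * -> actuator). Types, names and the stub bodies are hypothetical. */
typedef struct { double battery_mj; double measured_power_mw; } system_state;
typedef struct { int pmc_id; unsigned level; }                  pm_action;

/* Controller block: collect and normalize state from PMCs and the energy source. */
static system_state controller_read_state(void)
{
    system_state s = { .battery_mj = 3600000.0, .measured_power_mw = 1500.0 };
    return s; /* a real system would query drivers and energy monitors here */
}

/* Policy/decision block: choose per-PMC levels from the current state. */
static int policy_decide(const system_state *s, pm_action *out, int max)
{
    if (max < 1) return 0;
    /* toy rule: downgrade PMC 0 when measured power exceeds a threshold */
    out[0].pmc_id = 0;
    out[0].level  = (s->measured_power_mw > 1000.0) ? 1 : 0;
    return 1;
}

/* Actuator block: forward the order to the addressed PMC (driver or task). */
static void actuator_apply(const pm_action *a) { (void)a; /* send IPC/driver call */ }

static void manager_step(void)
{
    pm_action actions[16];
    system_state s = controller_read_state();
    int n = policy_decide(&s, actions, 16);
    for (int i = 0; i < n; i++)
        actuator_apply(&actions[i]);   /* close the loop with the system */
}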

4.2 Fundamental Models and Definitions

Given that the assignment of the energy-constrained QoS manager module is to set the levels of the power manageable components such that performance is maximized under the energy constraints restricting the system, there is a fairly straightforward approach to applying this to the system. The energy-constrained QoS manager module calculates an energy distribution, given information on the available system energy, and divides it between the components. The difficulty lies in how the energy shall be divided between components. This task is far from straightforward and is the main issue for the policy designer to solve.

The distribution of energy can either be fair, i.e. equal for all components, or weighted between the components according to some policy. This procedure sets the maximum allowed consumption of each component given an estimate of the remaining energy. The integral of the power consumed by the system over time equals the dissipated energy, as expressed in equation 4.1.


Figure 4.1. Conceptual software architecture of the energy-constrained QoS Manager Mod- ule.

$E_{DissEst} = \int_{T_0}^{T} P(t)\,dt$   (4.1)

From the energy-constrained QoS manager module's point of view, the sum over the n power manageable components of each component's consumption level P_i multiplied by the time t_i spent at that level, plus the static power consumption of the non-PMCs, P_fixed, multiplied by the runtime of the system, equals an estimate E_DissEst of the energy dissipated from the system:

$E_{DissEst} = P_{fixed} \cdot t_{run} + \sum_{i}^{n} P_i\, t_i$   (4.2)

For the energy-constrained QoS manager module to know the actual consumption of the system, the system has to provide feedback from the energy source. The reason for monitoring the energy resource is to verify how much time was spent at the maximum allowed level of consumption set by the manager module. Depending on the workload, it is possible that a component does not spend all its time at the highest allowed level of consumption. The difference between these two estimates of the energy dissipated from the system (maximum allowed level and actual working level) is a potential surplus of energy resources. In case of an excess of energy resources, there are various possible actions for how to handle it. Possible usages of excess energy are discussed in section 4.3.
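The bookkeeping of equation 4.2 and of the surplus described above can be sketched as follows (illustrative C, hypothetical names and units).

/* Sketch of the energy bookkeeping in equation 4.2. Names and units are
 * illustrative, not the thesis' actual data structures. */
#include <stddef.h>

typedef struct {
    double level_power_mw;   /* P_i: allowed consumption level of PMC i [mW] */
    double time_at_level_s;  /* t_i: time spent at that level [s]            */
} pmc_usage;

/* E_DissEst = P_fixed * t_run + sum_i P_i * t_i   (result in mJ) */
static double estimated_dissipated_energy_mj(double p_fixed_mw, double t_run_s,
                                             const pmc_usage *u, size_t n)
{
    double e = p_fixed_mw * t_run_s;
    for (size_t i = 0; i < n; i++)
        e += u[i].level_power_mw * u[i].time_at_level_s;
    return e;
}

/* Surplus = estimate based on maximum allowed levels minus the energy actually
 * drawn from the source (feedback). A positive value is unused budget. */
static double energy_surplus_mj(double e_estimate_mj, double e_measured_mj)
{
    return e_estimate_mj - e_measured_mj;
}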


4.3 Management Policies - How to Use Energy

The task of a management policy in an energy-constrained QoS power manager module is to decide how the system’s available energy shall be distributed among the components in the system. A distinction between the two main decisions to be handled by a power management policy is given below.

• General Policy - How to treat excessive energy, i.e. when to use it.

• Specific Policy - How to distribute the available energy within the system, i.e. what amount to which components.

On the subject of how to deal with excess energy, there are two different approaches. One possibility is to allow some components to work at higher performance levels for a certain amount of time by introducing the concept of time slicing. Time slicing means that the runtime of the system is divided into time slices. The time spent at the higher performance levels is restricted so that the energy consumed matches the surplus of unused energy resources from earlier time slices still available to the system. The notion of an energy budget, which constitutes the excess unused energy, is discussed in this manner in [25]. This general policy of optimizing performance under energy constraints will in this thesis be referred to as a pro-performance policy. On the other hand, there is the possibility to use the excess of unused energy resources to prolong the lifetime of the system. By remaining at the calculated level settings throughout the original desired runtime, the energy-constrained QoS manager module can recalculate a new set of levels for the system based on the energy that is still available at the end of the desired runtime. This general policy will in this thesis be referred to as pro-lifetime.

The other aspect of a management policy is to provide specific rules for how the available energy shall be distributed between the components that are included in the system. If each PMC provides discrete levels of energy-constrained QoS, and thus is able to guarantee a discrete number of maximal energy consumption levels with different performance at each level, where higher energy consumption enables higher performance, then the problem consists in utilizing the energy available to the system as far as possible by selecting an allowed set of energy-constrained QoS levels for the PMCs. The assumption that higher performance of a PMC is connected with higher energy consumption is made here. As described earlier in chapter 2, one can get around this assumption by assigning a utility value to each energy-constrained QoS level; the core problem to solve does not change in theory. The combinatorial task to solve is nondeterministic polynomial-time hard, NP-hard, and is called a Multiple-Choice Knapsack Problem, MCKP. The MCKP is a version of the 0-1 knapsack problem [10]. Pillai et al. explore the possibility of adapting only the task set of a real-time system to achieve a desired runtime [10], assigning each task T_1, ..., T_n in the task set different QoS levels j. Every QoS version of a task, T_{i,j}, has its own real-time characteristics: period t_{i,j} and WCET C_{i,j}. In addition to these characteristics, they also assign a utility value U_{i,j} to each task's QoS level along with an average energy consumption E_{i,j} per invocation of the task. By implementing a power manager that can control these power manageable components, i.e. the task set, they can adapt the power consumption of the system to hit the desired runtime. They do not explore the possibility of implementing and using hardware devices as PMCs.

4.3.1 Multiple-choice Knapsack Problem

The optimization problem at hand consists of choosing a consumption level for each PMC registered at the energy-constrained QoS manager module. The choice shall optimize the utility of the system while remaining within the allowed power consumption P_sys. The allowed power consumption is determined by the general policy adopted on how to make use of the available energy resources, e.g. a pro-lifetime or a pro-performance policy. Both general policies calculate a consumption that enables the system to run for a desired amount of time t_run. The desired system runtime is given by equation 4.3:

$t_{run} \leq \frac{E_{sys}}{P_{sys}}$   (4.3)

where E_sys equals the total amount of energy available to the system. It should be noted that equation 4.3 gives a limit on the average power consumption P_sys. That opens up the possibility to tweak the power consumption within the desired system runtime by using the concept of time slicing.

Applied to the problem where the PMCs consist only of the tasks running on the system, and not hardware, the optimization problem is modeled as follows [10].

Task adaption in a "Known Time-to-Charge" problem

In the "Known Time-to-Charge" problem, the scenario is that an external power source will either become available, or will no longer be needed (and the system will go down as a result), after a known time. To keep the system running during this time, the power manageable task set executing on the system is adapted to meet this goal. The power manageable task set contains n tasks, each with m different QoS levels, and thus with different utility U_{i,j} and average energy consumption E_{i,j} coupled to them. The number of times a task is run during the desired time interval t_run lies in the range $[\lfloor t_{run}/t_i \rfloor, \lceil t_{run}/t_i \rceil]$. By adding the variable energy consumed to the fixed power consumption of the system over the time interval t_run, the average power is bounded as in equation 4.4:

$\frac{1}{t_{run}}\left(t_{run} P_{fixed} + \sum_{i=1}^{n} \left\lfloor \frac{t_{run}}{t_i} \right\rfloor E_{i,j}\right) \leq P_{sys} \leq \frac{1}{t_{run}}\left(t_{run} P_{fixed} + \sum_{i=1}^{n} \left\lceil \frac{t_{run}}{t_i} \right\rceil E_{i,j}\right)$   (4.4)

It is assumed that the desired runtime t_run is much larger than the periods t_{i,j}, which makes the average power P_sys converge to

$P_{sys} = P_{fixed} + \sum_{i=1}^{n} \frac{E_{i,j_i}}{t_{i,j_i}}$   (4.5)

The utility U_sys of the system is defined as the sum of each task T_{i,j}'s utility rate U_{i,j}/t_{i,j}, with the task set at QoS level j, multiplied by the system runtime t_run, see equation 4.6:

$U_{sys} = t_{run} \sum_{i=1}^{n} \frac{U_{i,j_i}}{t_{i,j_i}} = \frac{E_{sys}}{P_{sys}} \sum_{i=1}^{n} \frac{U_{i,j_i}}{t_{i,j_i}} = \frac{E_{sys} \sum_{i=1}^{n} U_{i,j_i}/t_{i,j_i}}{P_{fixed} + \sum_{i=1}^{n} E_{i,j_i}/t_{i,j_i}}$   (4.6)

The mission is to optimize U_sys under the constraint that the power does not exceed the available power budget. The power budget is defined according to equation 4.7:

$P_{budget} = \frac{E_{sys}}{t_{run}} - P_{fixed} \geq \sum_{i=1}^{n} \frac{E_{i,j_i}}{t_{i,j_i}}$   (4.7)

and the MCKP is stated as

$\max\; U_{sys} = t_{run} \sum_{i=1}^{n} \frac{U_{i,j_i}}{t_{i,j_i}}$   (4.8)

given

$P_{budget} \geq \sum_{i=1}^{n} \frac{E_{i,j_i}}{t_{i,j_i}}$   (4.9)

where $U_{i,j_i}/t_{i,j_i}$ is the value v_{i,j} of each choice of QoS level for a task and $E_{i,j_i}/t_{i,j_i}$ equals the weight w_{i,j} of each level chosen. The power budget, i.e. the knapsack size, is denoted K. The adaption is performed once at start-up of the system to calculate the power distribution among the tasks in the task set. However, the power distribution can also be recalculated during system runtime, to handle modeling uncertainties and unused power saved by dynamic power management techniques, e.g. DVFS [10] or time-out policies. The amount of unused power depends on the workload. Therefore, a periodic adaption calculation during system runtime can be beneficial.
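Under the stated assumptions, evaluating a candidate level assignment against equations 4.5-4.7 amounts to a few sums; a C sketch with illustrative data structures (not the thesis' own) is given below.

/* Sketch evaluating a candidate QoS-level assignment against equations
 * 4.5-4.7. The structs are illustrative. Units: mW for power, mJ for energy,
 * s for time, so mJ/s = mW. */
#include <stddef.h>

typedef struct { double period_s; double energy_mj; double utility; } qos_level;
typedef struct { const qos_level *levels; size_t num_levels; } task;

/* Eq. 4.5: P_sys = P_fixed + sum_i E_{i,j_i} / t_{i,j_i} */
static double average_power_mw(double p_fixed_mw, const task *tasks,
                               const size_t *chosen, size_t n)
{
    double p = p_fixed_mw;
    for (size_t i = 0; i < n; i++) {
        const qos_level *l = &tasks[i].levels[chosen[i]];
        p += l->energy_mj / l->period_s;
    }
    return p;
}

/* Eq. 4.6: U_sys = t_run * sum_i U_{i,j_i} / t_{i,j_i} */
static double system_utility(double t_run_s, const task *tasks,
                             const size_t *chosen, size_t n)
{
    double rate = 0.0;
    for (size_t i = 0; i < n; i++) {
        const qos_level *l = &tasks[i].levels[chosen[i]];
        rate += l->utility / l->period_s;
    }
    return t_run_s * rate;
}

/* Eq. 4.7: the assignment is feasible if sum_i E/t stays within the budget
 * P_budget = E_sys / t_run - P_fixed. */
static int within_budget(double e_sys_mj, double t_run_s, double p_fixed_mw,
                         const task *tasks, const size_t *chosen, size_t n)
{
    double p_budget = e_sys_mj / t_run_s - p_fixed_mw;
    return average_power_mw(0.0, tasks, chosen, n) <= p_budget;
}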

Solutions to the MCKP

There are several approaches for solving the MCKP problem. A review of the solutions presented in [10] is given below.

State-space Search One solution is to search the entire state-space to find the optimal solution. The state-space is represented by a search tree where each level of the tree represents one of the n tasks and each node at tree level i corresponds to one of the m QoS levels of that task. This requires a complete traversal of the state-space, which has a time complexity of O(m^n), provided that all tasks have the same number of QoS levels. This approach is not efficient as n grows.

Dynamic Programming Solution Using dynamic programming techniques, i.e. using an optimal solution to a subproblem of the original problem, the MCKP can be solved with a pseudo-polynomial time complexity of O(nmk), where k equals the number of possible knapsack sizes, n is the number of power manageable tasks in the task set and m is the number of QoS levels supported by the power manageable tasks. The solution is feasible under the constraint that the weights, i.e. the power levels of each QoS level, are integer values. The problem with this solution is that it can be hard to implement in practice for embedded systems. The space complexity is O(nk), and since k may be large, it requires a lot of memory, which may not be available in embedded controller solutions.

Branch and Bound This method goes through the decision tree, where every level is represented by a task and each node at level i has m branches representing the m QoS levels available for that task. The way to speed up the procedure of finding the solution is to introduce a technique called linear MCKP, LMCKP [19]. The technique utilizes a linear programming relaxation in the tree search to decide whether a branch is promising or not. This is done by dropping the integer limitation on the weights (power) of each choice of a setting for a task. Linear relaxation allows solutions containing fractional terms of the type 0.5·T_{2,4}, where T_{2,4} is task number two at QoS level number four. The next step is to sort the different levels for all tasks in descending order of the ratio of utility change to power change, compared to the lowest active level. The list contains all possible upgrades that can be made from the lowest active level for each task. The highest ratios are selected from the list until every task has a level setting. If the combination of levels chosen requires more power than allowed by the desired runtime, the level for the last task is scaled to fit the power constraints; the other remaining tasks are not upgraded. This is an optimal solution to the relaxed problem that fully uses the power available while maximizing the utility of the system, and it is used as an upper bound in the branch and bound algorithm. The search through the search tree is guided so that only branches where the LMCKP gives a higher value than any earlier branch are explored. The branch and bound algorithm gives an optimal solution but can in the worst case lead to the same exponential search time complexity as a complete state-space search, because the selection of promising branches (pruning the search tree) does not guarantee the fastest way towards the optimal solution.

Heuristics There are some algorithms based on heuristics that do not guarantee an optimal solution, and can even be arbitrarily poor, but that have been shown to provide competitive solutions [10], especially a greedy algorithm. One simple heuristic, the linear algorithm, uses the LMCKP but, instead of scaling down the setting of the last task to fill out the budget, it chooses the closest lower step that fits within the allowed power budget. This is not optimal and can be far from the optimal solution. Another heuristic is a greedy algorithm, which also uses the LMCKP but, instead of just taking the closest lower feasible level, goes through the sorted utility-per-energy ratio list of all task levels and applies all possible upgrades. Both heuristics are linear in complexity and thus suitable for resource-scarce systems.

The overhead implied by both the branch and bound algorithm and the heuristics lies in sorting the QoS levels for all the tasks. This has to be done once for each task set, so for an infrequently changing task set the overhead cost is low.
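A C sketch of the greedy upgrade heuristic is given below. It assumes, for simplicity, that every task starts at its lowest level and that candidate upgrades are ranked by utility gain per power gain relative to that lowest level; all names and types are illustrative, not the thesis' implementation.

/* Sketch of a greedy MCKP heuristic: rank upgrades by utility gain per power
 * gain and apply them as long as they fit in the power budget. */
#include <stddef.h>
#include <stdlib.h>

typedef struct { double power_mw; double utility; } qos_level; /* per level   */
typedef struct { const qos_level *levels; size_t n; } task;    /* sorted asc. */

typedef struct { size_t task_idx; size_t level_idx; double ratio; } upgrade;

static int by_ratio_desc(const void *a, const void *b)
{
    double ra = ((const upgrade *)a)->ratio, rb = ((const upgrade *)b)->ratio;
    return (ra < rb) - (ra > rb);
}

/* chosen[i] receives the selected level of task i; returns the used power [mW].
 * scratch must have room for the total number of levels across all tasks. */
static double greedy_mckp(const task *tasks, size_t n_tasks,
                          double p_budget_mw, size_t *chosen, upgrade *scratch)
{
    double used = 0.0;
    size_t n_up = 0;

    /* Start with every task at its lowest (cheapest) level. */
    for (size_t i = 0; i < n_tasks; i++) {
        chosen[i] = 0;
        used += tasks[i].levels[0].power_mw;
        for (size_t j = 1; j < tasks[i].n; j++) {
            double dp = tasks[i].levels[j].power_mw - tasks[i].levels[0].power_mw;
            double du = tasks[i].levels[j].utility  - tasks[i].levels[0].utility;
            scratch[n_up++] = (upgrade){ i, j, dp > 0.0 ? du / dp : du };
        }
    }
    qsort(scratch, n_up, sizeof *scratch, by_ratio_desc);

    /* Apply upgrades in order of best utility-per-power as long as they fit. */
    for (size_t k = 0; k < n_up; k++) {
        const upgrade *u = &scratch[k];
        const task *t = &tasks[u->task_idx];
        double cur_p  = t->levels[chosen[u->task_idx]].power_mw;
        double cand_p = t->levels[u->level_idx].power_mw;
        if (cand_p > cur_p && used - cur_p + cand_p <= p_budget_mw) {
            used = used - cur_p + cand_p;
            chosen[u->task_idx] = u->level_idx;
        }
    }
    return used;
}

The linear heuristic mentioned above would differ only in how the last, partially fitting upgrade is handled.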

4.3.2 Dynamic Power Management Methods - How to Make Use of Idle Time

There has been a lot of research done in the field of Dynamic Power Management, DPM [4, 22, 23]. Most of the research focuses on a specific component in a system and how to manage its power consumption. Usually these techniques transition the component into a state of lower consumption when the governing policy decides so. The main idea is that the component can be put into different levels of inactive sleep modes to make it consume less power, at the cost of wake-up latency when a service request is received [23]. Many different approaches exist, but the common idea is to model the system and its components by their different states and corresponding characteristics, e.g. state transition times, state power consumption and system workload, in order to decide when, and in which state, the system should be at every time instance. Since the transitions between states are not instantaneous, they cost both power and time, making these methods quite complex.

There are two main categories into which DPM techniques can be divided: predictive methods and stochastic control. The main focus of DPM techniques, especially of predictive methods, is on how to transition a component into an inactive, less power-consuming state when the system is idle. Stochastic control techniques also support multiple power states, but handle active and/or inactive states to a further extent than predictive techniques [4]. Both paradigms, however, rely on the same form of data to implement their policies, namely the time and power characteristics of the power states and of the transitions between them, provided by the PMCs available to the system.


Predictive Techniques

Due to the cost associated with putting a device into a certain sleep mode, the predictive techniques use knowledge about the workload to predict whether it is worth transitioning to a lower level. The decision is based on the probability that the next idle period is long enough to gain from a transition to a lower consumption level s_j instead of staying in the active state during the near future. The time that must be spent in the lower, less consuming state s_j, instead of remaining in the higher state s_i, for the transition to pay off is called T_{Breakeven,s=j}. In [23], data from a StrongARM processor is provided and T_{Breakeven,s=j} is defined as

$T_{Breakeven,s=j} = \frac{1}{2}\left(\tau_{Tr,down} + \left(\frac{P_i + P_j}{P_i - P_j}\right)\tau_{Tr,up}\right)$   (4.10)

where $\tau_{Tr,down}$ is the transition time from the higher state s_i to the lower state s_j, $\tau_{Tr,up}$ is the transition time from the lower state s_j back to the higher state s_i, P_i is the power consumption in state s = i and P_j is the power consumption in state s = j.

It should be noted that this definition assumes that the device (i.e. the processor) has a power consumption during the transition that is equal to the mean of the two states being transitioned between. This is not always the case; in [4] it is stated that the power consumption during the transition time can be greater than in the origin state, e.g. when spinning down mechanical hard drives. They give a more general definition of T_{Breakeven,s=j}:

$T_{Tr} = T_{i \to j} + T_{j \to i}$   (4.11)

$P_{Tr} = \frac{T_{i \to j} P_{i \to j} + T_{j \to i} P_{j \to i}}{T_{Tr}}$   (4.12)

$T_{Breakeven,s=j} = \begin{cases} T_{Tr} + \left(\frac{P_{Tr} - P_i}{P_i - P_j}\right) T_{Tr} & \text{if } P_{Tr} > P_i \\ T_{Tr} & \text{if } P_{Tr} \leq P_i \end{cases}$   (4.13)

where T_{Tr} is the total time to enter and exit the state and P_{Tr} is the average power consumption during these transitions.

There are adaptive techniques that adjust the timeout threshold according to past behavior of the system [24, 11]. The main idea is, given a certain workload, to predict when to go down to a lower sleep mode, i.e. to apply predictive shutdowns. If the predictive shutdown technique is not complemented with a predictive wake-up, there will be a performance loss in the system due to the latency of waking up from the lower state, i.e. of transitioning from the lower state to the active state when a request is received. A static timeout technique proposed by Karlin [12] is to set the timeout timer T_Timeout equal to T_Breakeven. The timeout approach is quite common, e.g. in hard disk devices, and by using the design specified by Karlin (T_Timeout = T_Breakeven) it has been proven that the worst-case gain achievable is half that of a perfect estimator with complete knowledge of the future use of the system.

For PMCs with several power modes, i.e. multiple active and/or inactive states, it is not trivial to select which power mode the device should be transitioned to in case of an idle period. Research on this matter is provided in [14], which presents algorithms for optimizing a whole transition sequence for a given PMC, as well as algorithms dealing with correlations between the idle intervals of system components.

The other category of DPM is to implement stochastic control using the framework of controlled Markov processes. An extensive survey of DPM is given by Benini et al. in [4].
