
2006:173 CIV

M A S T E R ' S T H E S I S

Log Query Server

A design proposal

Mattias Frännfors

Luleå University of Technology
MSc Programmes in Engineering
Computer Science and Engineering

Department of Computer Science and Electrical Engineering Division of Media Technology

2006:173 CIV - ISSN: 1402-1617 - ISRN: LTU-EX--06/173--SE


Log Query Server - A Design Proposal

Mattias Frännfors

Luleå University of Technology

April 11, 2006


Abstract

The firewalls in Clavister AB's present product line all include management software for administering a wide range of network and security related settings such as rules, routes, permissions and logging.

Due to new demands and requirements for security, functionality and scalability, a next-generation administration software system is currently being developed.

Part of this initiative is the desire for a Log Query Server: a standalone component that carries out filtering (searching) of firewall log data according to log data queries received from clients.

The Log Query Server must provide good log search performance, data security and platform independence. To meet these requirements, this thesis evaluates the performance and feasibility of a number of implementation techniques. The results are analysed and finally serve as the basis for a design proposal for an implementation of the Log Query Server.

The proposed design involves a combination of .NET/C# and C/C++ technology, where .NET provides the higher-level structure and logic, while C libraries encapsulate the most intensive and critical functions.


Contents

1 Introduction
1.1 Firewalls & the need for security
1.2 Firewall event logging
1.3 Problem statement
1.4 Objectives
1.5 Thesis solution
1.6 Chapter outline

2 Background
2.1 Clavister Security Management System
2.2 Log Query Server at a glance
2.3 Technology
2.3.1 Programming Languages
2.3.2 Cross-platform development
2.3.3 Cryptography
2.4 Tools
2.4.1 Compilers
2.4.2 Code profilers
2.4.3 Miscellaneous

3 Log Query Server
3.1 Log Query Server
3.1.1 Log Query Language
3.1.2 Log query process
3.1.3 Real-time log query process
3.2 External components
3.2.1 Log receivers & data formats
3.2.2 Firewall log generation

4 Prototype Implementation
4.1 Prototype overview
4.2 Prototype design
4.2.1 Database design
4.3 Prototype implementation
4.3.1 C++ and C#
4.3.2 Netcon C#
4.3.3 Hardware setup
4.3.4 Software setup
4.4 Alternatives

5 Evaluation
5.1 Introduction
5.2 Test procedure
5.3 Experimental errors
5.4 Search performance
5.5 Data transfer performance
5.6 Total performance
5.7 Cryptography performance
5.8 Summary

6 Design Proposal
6.1 Overview
6.2 Design decisions
6.2.1 Cross-platform compatibility
6.2.2 Authentication & cryptography
6.2.3 Filtering
6.2.4 External interfaces
6.2.5 Configuration
6.3 Design summary
6.3.1 Overall design
6.3.2 Internal Components
6.4 Data flow
6.4.1 Incoming firewall data & data buffers
6.4.2 Incoming realtime query
6.4.3 Incoming non-realtime query
6.5 Performance considerations
6.6 Alternative design choices
6.6.1 Managed C++ and the upcoming C++/CLI
6.6.2 Portable C/C++
6.7 Design revision

7 Discussion and conclusion
7.1 Summary
7.2 Discussion
7.3 Personal experiences
7.4 Future work

A Acronyms

B Design Overview


List of Figures

2.1 How .NET fits into the big picture.
2.2 The .NET compilation and execution process.
3.1 The log query operation at a glance.
3.2 General flow of generated log messages.
5.1 Timing values for the search operation.
5.2 Timing values for named pipe data transfers (send).
5.3 Timing values for named pipe data transfers (receive).
5.4 Timing values for TCP socket data transfers (send).
5.5 Timing values for TCP socket data transfers (receive).
5.6 Total named pipe operational time.
5.7 Total TCP socket operational time.
5.8 Total time for cryptography and transfers.
5.9 Total time of encryption/decryption.
B-1 A schematic overview of the LQS design.
B-2 An overview of the revised LQS design.

List of Tables

5.1 Profiler results for the C# search function.


Preface

This thesis project was carried out in the spring and summer of 2005 at the development offices of Clavister AB in Örnsköldsvik. It is part of the Master of Science education in Computer Science and Engineering at Luleå University of Technology, Sweden.

Purpose

The main objective of the work behind this thesis is to investigate how to best implement a Log Query Server (LQS) - a stand-alone system component to serve clients requesting firewall log data.

In the current generation of the Clavister Firewall Manager (CFWM), unnecessarily large amounts of log data are sent over the network for each log query, because the filtering of the data is performed client-side.

In the new management system currently being developed, filtering will be performed server-side, before any log data is passed on to clients over the network.

This calls for a separate component to independently handle the potentially resource-heavy log filtering process.

The goal is to present a Log Query Server design proposal that accommodates the needs for performance, security and platform independence.

Acknowledgements

I would first and foremost like to thank Magnus Norberg, my supervisor, for his support and guidance in this project. He was always ready to assist and explain whenever confusion struck. I would also like to thank my examiner Johan Kristiansson for valuable hints on the examination and report writing processes.

I am also thankful to Clavister Vice President John Vestberg for giving me the opportunity to do my thesis project at the office in Örnsköldsvik. It has been a most rewarding experience.

Thanks also to my other co-workers at Clavister, especially the members of the Operations & Maintenance team, for helpful advice, technical support and many joyful moments.


Chapter 1

Introduction

Not long ago, computer security was generally a neglected matter. Today, it is of vital importance to companies and individuals all over the world. Assets such as documents, software and other digital information must be protected from unauthorized access. This calls for different means of protection, and firewalls play a big part in this.

The beginning of this thesis will establish the basic terminology and introduce the problem domain as well as the thesis objectives.

1.1 Firewalls & the need for security

Today, the Internet is used on a daily basis in every part of the world. Individuals and companies depend on the connectivity and ease of communication that it provides. At any given time there is a flow of information travelling from one network to another. Some of this information may be of a more sensitive nature than the rest, and may for example only be intended for the users on a single network.

Some companies may have policies dictating that only certain types of information may freely flow back and forth between the company network and the rest of the Internet. This is where the idea of a firewall comes in.

A firewall is a piece of software or hardware that is able to block and/or permit different types of network traffic. For example, a network administrator may set up a firewall rule that only permits email traffic to pass through the network. All other traffic, such as regular web surfing (HTTP traffic), is blocked. This would prevent and protect against malicious attacks on the network other than attacks on the email service.

In short, the firewall serves as a gate, guarded by a gate-keeper that checks the ID badge of anyone trying to pass through the gate.



1.2 Firewall event logging

Most, if not all, firewalls provide some kind of logging functionality. This gives the administrator a means to investigate in detail the network traffic passing through the firewall. In the Clavister Firewall Manager, logging functionality may be turned on for any existing firewall access rule. When one of these rules is triggered by an incoming event (such as a user sending an email), all the relevant information pertaining to this event is sent to one or several log receivers. The log receivers may then take appropriate action, usually writing the event data to a local database for long- or short-term storage.

At a later point in time, the administrator may decide to review the logs. Depending on the amount of data logged, the log database may be very large in terms of disk or memory usage, which has an important implication: searching the data takes time.

The administrator may be looking for very specific information in the logs, for example all requests matching a certain source IP address requesting HTTP data in some specified time period. This is an example of a log query made by the administrator, and the system component handling the query and performing the database search is called a Log Query Server in the upcoming Clavister Security Management System (CSMS) firewall administration software.

1.3 Problem statement

The upcoming next-generation firewall management software currently in development at Clavister needs to support efficient log searching functionality. It will supersede the client-side log filtering of the current-generation firewall management software. The responsibility of the LQS will be to receive log data queries and perform data searches accordingly.

Requirements of the LQS are:

• Provide fast log data search performance.

• Support data queries made in the LQL query language.

• Support authentication and secure data transfers.

• Support for multiple platforms. Minimum is: Win2K/XP/2003, Linux and Solaris.

With these requirements in mind, how should the Log Query Server be designed to meet them?


1.4 Objectives

The objective of this thesis is to establish a design proposal for the future LQS component. The intention is not to produce a full implementation, or to compare full implementations against each other. The component is simply too extensive to fully design and implement several complete candidate solutions within the time frame of the thesis project.

Instead, the focus will be on investigating different programming languages/solutions (as well as network transfer methods and cryptographic algorithms), comparing these against each other and evaluating which are the most suitable choices for a future LQS implementation. This is tackled by constructing a number of smaller prototype applications, designed to mimic a log query operation as closely as possible. The prototype functionality is basically a reduced subset of a possible LQS implementation, and thus the prototypes are not complete implementations. Their main function is to serve as test applications, providing the information necessary to decide on the best technology solutions for a final LQS design proposal. This proposal will in turn serve as the basis for a future full implementation of the LQS. The choices made are presented and motivated in Chapter 6, where they are also discussed in the context of the final high-level design proposal.

1.5 Thesis solution

The work presented in this thesis is a proposal for the design of an implementation of the LQS component. A number of different implementation languages, network transfer methods and cryptography algorithms are evaluated and tested for performance. The evaluation of the tests, along with extensive research regarding multi-platform software architecture, forms the basis for the design proposal. It is designed to provide adequate performance, security and support for multiple platforms/computer architectures, while at the same time achieving a clean and extensible system structure as well as a high level of source code maintainability.

Alternative design strategies will also be discussed.

1.6 Chapter outline

This thesis report is organized into chapters as follows:

• Chapter 1 - Introduction. The first chapter gives an introduction to the thesis project and the problem domain.

• Chapter 2 - Background. Presents project background and overview of the relevant software technology, tools and theories.


• Chapter 3 - Log Query Server. Gives a more detailed description of an LQS and other supporting systems. It also explains the general process that will help establish the final LQS design proposal in Chapter 6.

• Chapter 4 - Prototype Implementation. Describes the different test prototype implementations as well as the test procedures.

• Chapter 5 - Evaluation. Describes the test procedure, analyzes and presents an evaluation of the prototype test results.

• Chapter 6 - Design Proposal. This chapter presents the final design proposal, motivates choices made, and discusses alternative design choices and their implications.

• Chapter 7 - Discussion and conclusion. This chapter concludes the thesis with a description of personal experiences and concluding thoughts.


Chapter 2

Background

This chapter will give an overview of the project domain, and also present the software methods and technologies of interest and relevance to the task at hand.

2.1 Clavister Security Management System

The CSMS is a software suite for managing, configuring and monitoring firewalls over a network. It is currently in development as a next-generation solution designed to supersede CFWM, the firewall management product in use today. Features include: multiple language support, multi-site management, 100 simultaneously active clients, multiple interfaces (GUI, web as well as command line), multi-platform support (Win2K/XP/2003, Linux, Sun Solaris), auditing, alerting, monitoring, logging, user authentication and access control, reports, and third-party application integration.

2.2 Log Query Server at a glance

The LQS is a future component of the CSMS, intended to handle log data interactivity. This component, or rather a design proposal for this component, is the focus of this thesis. The LQS should be a stand-alone component handling requests for log data from clients. Its main task is to perform computationally intensive searches in extensive binary log data files. The LQS is presented in more detail in the next chapter.

2.3 Technology

Below follows a general high-level overview of the software and technologies of relevance to the thesis. These will reappear in later chapters.


2.3.1 Programming Languages

Today there exists a multitude of programming languages, some more suitable for certain kinds of tasks than others. For example, recent years have seen a strong increase and development in the area of scripting languages and other high-level languages particularly suited for non-speed-critical tasks, often found in Graphical User Interface (GUI) projects. This effort is partly due to a widespread desire to write program code faster and more safely, but processor technology has also played a part: as computers become faster, the negative speed impact of a feature-heavy programming language is lessened. Still, the low-level C/C++ is today in heavy use as a general-purpose programming tool, and will probably continue to be so for many years to come.

In this thesis, efforts are concentrated on C/C++ and C#.

C/C++

The C programming language is an imperative language originally developed for the Unix operating system in the early 1970s by Dennis Ritchie. Usage of the language quickly became very popular in the Unix community, and during the 1980s it spread widely; compilers became available for every machine architecture, and it became a particularly popular programming tool for personal computers [5]. The language specification was standardized starting in 1983 by the ANSI X3J11 committee. Generally it is a portable, simple and small language, which certainly contributed to its success, but the most important factor is the success of Unix itself, which made the language available to a lot of people. Still, some critique has accumulated over the years. Ritchie comments on some of the biggest complaints:

"Two ideas are most characteristic of C among languages of its class: the relationship between arrays and pointers, and the way in which declaration syntax mimics expression syntax. They are also among its most frequently criticized features, and often serve as stumbling blocks to the beginner. In both cases, historical accidents or mistakes have exacerbated their difficulty. The most important of these has been the tolerance of C compilers to errors in type." [5]

When it comes to C++, C is essentially a subset, and the entire C standard function library is inherited by C++. Thus, all C functions are also available to programs written in C++.

C++ was initially designed and implemented by Bjarne Stroustrup at AT&T Bell Labs, with a first release in 1985. The number of C++ users grew explosively in the late 1980s and throughout the 1990s. In 1998 it was formally standardized by the International Organization for Standardization (ISO).

In short, C++ has a firm base in C with the addition of object-oriented programming facilities. It is characterized by efficiency and abstraction facilities, making it a potent language both in terms of run-time performance and ease of expressing application design.

C#

C# is a new language from Microsoft, designed specifically for the .NET platform (see 2.3.2). It can be seen as a simple, object-oriented and type-safe language with bits and pieces derived from C, C++ and also Java. The syntax generally resembles C++ and Java a great deal; in terms of complexity, however, it is best compared to Java. Various mechanisms exist to help the programmer minimize errors, such as automatic memory handling (garbage collection) and no dealing with pointers (unless you want to). All in all, C# is a powerful language with full access to the .NET base class library, while being very easy to learn and use.

However, it might not be the most suitable choice for very high performance applications:

"The one area the language is not designed for is time-critical or extremely high performance code - the kind where you really are worried about whether a loop takes 1000 or 1050 machine cycles to run through ... C++ is likely to continue to reign supreme amongst low-level languages in this area. C# lacks certain key facilities needed for extremely high performance apps, including inline functions and destructors that are guaranteed to run at particular points in the code." [6]

Managed C++

Managed C++ is basically C++ with the possibility of using the .NET class library. In this way, the flexibility and power of C++ can be combined with all the features of the .NET framework. Managed C++ is also currently the only language in which it is possible to mix both managed code (memory-managed code via .NET) and unmanaged code in the same solution. However, the use of MC++ as a cross-platform language is limited by lacking support in the Mono runtime engine (see 2.3.2), as will be mentioned in later chapters.

2.3.2 Cross-platform development

In contrast to software development in the past, today there is a widespread consensus not to bind applications to any particular platform. It is desirable that an application is portable between different architectures, to provide multi-platform support and to reach as big an audience as possible for the product in question.

.NET

.NET, a development platform by Microsoft, is two things. First, it is a function library in the same way as the Windows API. It differs by being fully object-oriented, containing numerous classes implementing useful programming constructs, such as data structures, file handling, threading etc. Second, .NET provides a runtime engine, an environment in which the application is run. This environment, or .NET runtime, is often called the Common Language Runtime (CLR). This piece of software handles the execution of any application written for .NET. It will start up the code, manage threads, perform background tasks and more. Essentially, it provides an abstraction layer between the application and the operating system. See figure 2.1.

Figure 2.1: How .NET fits into the big picture.

.NET's biggest competitor is Java, and the two share a lot of similarities and concepts. Both produce their own intermediate byte-code, which upon execution is translated by the runtime engine into native machine code, typically via a process called just-in-time compilation. The .NET bytecode is called Common Intermediate Language (CIL). See figure 2.2 for an overview of the compilation process.

Strictly speaking, Microsoft .NET is not really a cross-platform solution; a better description would be a cross-language platform. Basically, it provides cross-platform capabilities only where there exists a platform-specific program (runtime) to enable compatibility. However, the core parts of .NET (the C# language and the Common Language Infrastructure (CLI)) have been released as specifications submitted to the standards organization ECMA, making them open standards available for re-implementation on other platforms by anyone. The two most prominent efforts doing just that are the Mono Project and DotGNU Portable.NET.


Figure 2.2: The .NET compilation and execution process.

Mono

Mono is an open source implementation of the .NET development framework. It is an effort led by Ximian (part of Novell, Inc.) to develop a set of .NET-compatible tools compliant with ECMA Standards 334 and 335, which specify the C# programming language and the CLI respectively. At the time of writing, Mono runs on Microsoft Windows, Sun Solaris, BSD, Linux and Mac OS X. It contains JIT (just-in-time compilation) engines for a range of processor architectures: s390, SPARC (32- and 64-bit), PowerPC, x86, x86-64, IA64 and ARM.

Portable.NET

DotGNU Portable.NET shares many of the same goals as Mono. It is focused on compatibility with the ECMA specifications for C# and the CLI. As with Mono, implementations of the runtime engine exist for a wide range of platforms and architectures. However, there is not yet any tried-and-tested JIT functionality in the runtime engine, except for a very early add-on component still in heavy development. This means that CIL bytecode is interpreted directly instead of first being compiled to native machine code. Of course, this has a big performance impact, which can be seen in later chapters.

Others

There are other solutions to the cross-platform issue; some common choices are mentioned below. However, these are only mentioned briefly and are not subject to evaluation in the rest of the thesis.

Java

Java is probably the most well-known cross-platform technology in existence today; it will not be discussed or addressed to any great extent in this thesis.

Java is both a programming language and a platform in itself. The language shares many similarities with C#, and like .NET it depends on a runtime engine, the Java VM (Virtual Machine), to run compiled bytecode.

Java is particularly popular in web environments, with a lot of supporting technology for making web-based applications, called servlets or applets.

However, the decision was made early on to focus evaluation on .NET/C# rather than Java. Time constraints and also long-term strategic views played a part in this decision.

Several other less well-known solutions exist. One example is Qt, a cross-platform C++ application framework. It comes with a comprehensive C++ class library containing all the common functionality needed for just about any type of application. However, it requires licensing for commercial product development, with fees typically in the range of 1500-4000 euro depending on requirements.

2.3.3 Cryptography

Secure transfers of log data will be important. This calls for cryptographic functionality, whereby all data that travels across an insecure network is first encrypted by the sender, and later decrypted when the data arrives at the intended destination.

Netcon

Netcon is an encryption protocol developed internally at Clavister. It provides encrypted and authenticated communication on top of the TCP protocol between a server and a client. A session is established through an initial handshaking procedure and an exchange of so-called proposals. Once the session has been set up, data can be passed through the channel. Any data sent is broken up into chunks of variable length, where each chunk is encrypted and authenticated.


When it comes to cryptographic functionality, the interest in this thesis is focused on a recently finished C# implementation of Netcon by co-workers at Clavister.

2.4 Tools

An assortment of tools has been used throughout the thesis project. Most importantly, a number of source code compilers or IDEs (Integrated Development Environments) have been used on an almost daily basis. Also, source code profilers and a number of miscellaneous desktop applications have been put to good use.

2.4.1 Compilers

The primary development tool used was Microsoft Visual Studio 2003, an IDE where editing, compilation and project configuration are all integrated into a single environment. Microsoft Visual C++ 6.0 has also been used at times, to test the functionality of older existing projects relevant to log handling.

2.4.2 Code profilers

Code profilers allow the programmer to investigate the inner workings of an application at runtime. Many things can be tracked, and a lot of useful performance information can be gathered in a short time, such as memory usage, function execution times, average timings, and sometimes processor usage and more.

The two primary profilers used during the project are: ANTS Profiler by Red Gate Software, and nProf, an open source profiler by Matthew Mastracci.

2.4.3 Miscellaneous

Various other desktop applications have been used in the thesis. For presentations, diagrams and spreadsheet figures I have used Microsoft PowerPoint 2000, Visio 2002 and Excel 2000.

During the prototype implementation process I found TcpView (www.sysinternals.com) to be a pretty useful tool for monitoring network port activity.

The thesis report itself has been written in LaTeX, using WinEdt (www.winedt.com) as the text editor, and Ghostscript (www.cs.wisc.edu/~ghost/) for preparing images.


Chapter 3

Log Query Server

This chapter will present a general overview of the intended functionality of the LQS, and also an overview of the general thesis methodology: technology prototyping and evaluation to determine the most suitable programming languages and techniques to employ for a final LQS design. Below follows a more detailed overview of the LQS and supporting technologies.

3.1 Log Query Server

The LQS is the component of the CSMS intended to handle log data interactivity. Its most important task is to carry out searches for log data in binary databases. The other main task is to support queries against continuous real-time log data. This is explained in more detail later.

The LQS should be able to run as a stand-alone process or service, which effectively means that it does not have to run on the same computer as the rest of the CSMS. In this way, the logging process may function in a distributed manner, and the processor and resource load that inevitably comes with search operations may be lifted from the CSMS server to another server dedicated solely to handling log queries.

3.1.1 Log Query Language

Log Query Language (LQL) is the language format for all data queries. Today it is used in the (soon-to-be-replaced) CFWM system, but any future log managing component will have to include LQL support as well, to maintain backwards compatibility.

LQL is in many ways the Clavister equivalent of the well-known Structured Query Language (SQL), a standard computer language widely used for accessing and manipulating database systems. Like SQL, LQL is used to retrieve data from a database. However, LQL does not provide any functionality for updating or inserting data into a database. It is strictly a query language used to fetch specific data from pre-built databases in a Clavister proprietary binary format called Enternet Firewall Log (EFWLog). Also, LQL has a large number of Clavister-specific keywords and statements.

The syntax of an LQL query is:

SELECT <outputtype> [, <outputtype>]

FROM <firewall_and_time_statement>

[WHERE <logical_statement>]

All LQL queries are expected to begin with a SELECT statement, followed by one or more output types. This is the type of data that will be returned by the query, e.g. source computer IP address, destination computer port, length of data, protocol type and many others.

The FROM statement allows the user to specify which firewall(s) to query, as well as the desired time period for the log data in question.

The WHERE statement is optional and allows logical statements to be used in the query for matching against the data.
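As a concrete illustration, a query following this grammar might look like the example below. This is a hypothetical query: the exact output type names, firewall identifiers and time syntax are not given in this chapter, so the tokens used here are invented for illustration only.

SELECT srcip, destport
FROM firewall1 2005-06-01 TO 2005-06-30
WHERE proto = TCP AND destport = 80

Read as intended, such a query would return the source IP address and destination port of all TCP log entries with destination port 80 recorded by firewall1 during June 2005.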

3.1.2 Log query process

Figure 3.1: The log query operation at a glance.

The process of making a log query in the new CSMS system is intended to work in the following manner: the end user enters an LQL query string into the client application (Log Analyzer) and submits the query. The string is passed on to the Log Manager component of the CSMS for further processing, and then passed on via the network to the LQS. The query string is received and the search parameters are parsed by the LQS, whereupon a database connection is initiated to retrieve the relevant log data. A filtering operation is performed where the query is matched against the available log data to find the data of interest.

This log data is then passed back to the CSMS and the client application for display. See figure 3.1 for a general overview.


3.1.3 Real-time log query process

Real-time queries are somewhat more intricate than regular database queries. The idea is to be able to submit an LQL query from the client application to the LQS and match this query against continuous log data arriving at the LQS from firewalls in real time. This scenario is very similar to the non-realtime search process, except that the searching is not done in the binary database. Instead, single log data entries, or chunks of them, must be searched as they arrive in real time directly from one or several firewalls.

3.2 External components

Besides the LQS and the CSMS, the logging system encompasses a few other external components. These must be kept in mind since they impact the design.

3.2.1 Log receivers & data formats

Log Receivers are software components designed to receive log data from firewalls.

Log data is sent from a firewall to a log receiver whenever a firewall rule with an attached logging option is triggered. There are two available types of log receivers in this system: FWLogger, a Clavister log receiver that accepts log data in the binary EFWLog format, and Syslog log receivers (e.g. syslogd on Unix systems) accepting log data in the plain-text Syslog format. See [2] for more information about this general-purpose log format.

When log data has been received by a log receiver, the data is usually written to a file database or simply displayed in some way. In the case of FWLogger, the data is written to a binary file database. In the Syslog case, it is up to the specific Syslog receiver what action to take.

When it comes to a regular database search for log data, the search is carried out in a binary database where data is stored in the EFWLog format. This implies that the LQS must support a way of searching or filtering this type of data.

Real-time queries, however, must be made against data in Syslog format, due to the way real-time logging is implemented in the Clavister firewall kernel.

To summarize: in order to support database searches as well as real-time log searches, we need log filtering functionality that is able to parse and interpret logs in both the EFWLog format and the Syslog format. More about this in Chapter 6. A sketch of what such an abstraction could look like follows below.
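The thesis does not prescribe an interface for this functionality; the following C# sketch is merely one hypothetical way of hiding the two log formats behind a common abstraction, with all type and member names invented for illustration:

// Hypothetical common record type that the LQL filtering engine could
// match queries against, regardless of the underlying log format.
public struct LogRecord
{
    public long Timestamp;       // time of the logged event
    public string SourceIp;      // example fields only; the real EFWLog
    public int DestinationPort;  // format contains many more
}

// One implementation would parse binary EFWLog databases; another would
// parse plain-text Syslog lines arriving in real time.
public interface ILogSource
{
    // Advances to the next entry; returns false when no more data exists.
    bool NextRecord(out LogRecord record);
}

With such an abstraction, the filtering code itself would not need to know whether it is serving a database query or a real-time query.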

3.2.2 Firewall log generation

The actual software or hardware firewall is where all log data is constructed and then sent to the pre-defined log receivers. The scheme for real-time queries, however, is slightly different: whenever a client or some other system component wishes to receive real-time log data, it must send a subscription message to the firewall, and later send another message in the same way to terminate the subscription.

All transactions and communication with the Clavister firewall are carried out using the proprietary Netcon protocol. Therefore, the LQS must be able to interface with a hardware firewall via the Netcon protocol in order to send subscription and termination messages, and to receive the actual real-time log data (in Syslog format).


Figure 3.2: General flow of generated log messages.

Figure 3.2 displays an overview of an incoming network event that generates a log entry sent to all available types of log receivers.


Chapter 4

Prototype Implementation

This chapter will present the different prototypes used for testing, the experimental setup and procedure, and also the general research methodology. Some alternative setups will be discussed at the end of the chapter.

As mentioned in section 1.4, the primary goal of the investigative part of the thesis is to find out which technologies will best meet the requirements for the LQS in terms of performance. The biggest concerns are the speed of log searching and how to best establish the log data transfers between different components.

4.1 Prototype overview

The investigation is conducted by developing a number of functional application prototypes and performing a number of log searching tests, where a query is sent from client to server and data is subsequently sent back to the client. Timer functionality built into the prototypes measures the total time it takes for the search operation and the network transfer operation to complete, which makes it possible to enter these values into a spreadsheet program for convenient presentation. Finally, the resulting performance graphs are compared and evaluated. A sketch of the timing approach is given below.
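The thesis does not reproduce the timer code; the fragment below is a minimal C# sketch of how such a measurement could be done with the .NET 1.1 class library (which predates the Stopwatch class of .NET 2.0). The SearchDatabase call is a hypothetical placeholder for the prototype's search routine.

using System;

class TimingSketch
{
    static void Main()
    {
        long start = DateTime.Now.Ticks;   // 1 tick = 100 ns

        SearchDatabase("SELECT ...");      // operation under test

        long elapsed = DateTime.Now.Ticks - start;
        double ms = elapsed / (double)TimeSpan.TicksPerMillisecond;
        Console.WriteLine("search took {0:F1} ms", ms);
    }

    // Placeholder for the prototype's actual search routine.
    static void SearchDatabase(string lqlQuery) { }
}

Note that DateTime.Now has limited resolution on Windows (roughly 10-15 ms), which is acceptable here, since the operations measured run for hundreds or thousands of milliseconds.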

4.2 Prototype design

The primary implementation languages to be tested for performance are C# and C/C++ (Java is left out for reasons stated in 2.3.2). The .NET framework will be used in the case of C#, while Windows library functionality, e.g. Winsock, will be used for the C/C++ prototype.

Tests will also be conducted for different methods of inter-application data transfer. TCP sockets and named pipes will be tested, as well as the Netcon protocol, to test the added performance impact of a secure encryption layer. Due to time constraints, the decision was made to focus on Netcon for C#, even though an implementation in C/C++ existed. If time allows, an adaptation of this implementation will be looked into at a later point. A sketch of the TCP variant is given below.
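The prototypes' transfer code is not shown in the thesis; as a rough sketch, a .NET 1.1-era server loop for the TCP socket variant could look as follows. The port number and the length-prefix framing are assumptions made for illustration.

using System;
using System.Net;
using System.Net.Sockets;

class TcpTransferSketch
{
    static void Main()
    {
        // Listen locally, matching the test setup where server and
        // client run on the same machine.
        TcpListener listener = new TcpListener(IPAddress.Loopback, 5000);
        listener.Start();

        TcpClient client = listener.AcceptTcpClient();
        NetworkStream stream = client.GetStream();

        byte[] payload = new byte[65536];  // serialized matching packets

        // Length prefix so the receiver knows how many bytes to expect.
        byte[] prefix = BitConverter.GetBytes(payload.Length);
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(payload, 0, payload.Length);

        stream.Close();
        client.Close();
        listener.Stop();
    }
}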

The prototype implementations will be designed to mimic a (non-real-time) log query operation as closely as possible, even though they will work in a scaled-down fashion compared to a full log search process in the final LQS solution.

The prototypes will consist of a server and client component for each implementation language. The choice of network protocol should be interchangeable using simple command line arguments at startup.

In a similar vein to how the LQS is going to work, the client should be able to make queries for log data. The server performs the search in a log database and sends the matching data back to the client. The database will consist of binary structures of randomized data, as described in the next section.

4.2.1 Database design

The database is the repository for all log data. It is a binary data structure containing varying numbers of log data entries to simulate packets. In turn, each entry contains a number of data fields to give it some individual characteristics:

• Creation time (8-byte Timestamp)

• Size (4-byte DWORD)

• ID Number (4-byte DWORD)

• Data Number (4-byte DWORD)

• Category (String)

• Description (String)

In the prototypes it will be possible to search and match against any of the above data fields supplied in the LQL search string.

In the search tests, the database will consist of a number of data files of varying sizes, from 10,000 up to 1 million packets, which corresponds to file sizes of up to 65 MB. All files will be generated randomly by supplying a command line parameter to the prototype application; however, a fixed random seed will be used for all generations to keep the data in every file uniform and identical up to the maximum size of each file.

4.3 Prototype implementation

The client and server prototypes were developed and continuously maintained using the Microsoft Visual Studio 2003 integrated development environment. Microsoft Visual C++ 6.0 was briefly used alongside it as well, to compare and study functionality in previous Clavister software projects.


4.3.1 C++ and C#

The C++ and C# implementations are very similar in their general design and functionality. Both are configured using a number of command line parameters parsed at run-time. For the clients, this includes the choice of network transfer method (sockets/pipes) and which end-point to connect to, i.e. an IP address and port, or a pipe name. The actual LQL query string is also supplied as a parameter.

The point where the implementations display the biggest differences is the operation of reading and parsing binary packet data in the search subsystem of the server implementation. The reason for this is the memory-managed nature of C# and the difficulty of finding a way to parse the log data that would allow a fair comparison between the respective languages.

In the C++ implementation, the quickest way of parsing the data into a usable format is to read a portion of the binary stream (the log data) into memory and then cast a pointer to this memory location to a defined packet structure.

Doing this in C# is slightly more cumbersome, because in order to cast memory to structures in a similar vein, the marshalling functionality of .NET must be invoked. This provides functions to deal with unmanaged memory, such as allocating, copying and converting between unmanaged memory and managed data types. The marshalling classes introduce a small overhead in the packet parsing process; still, this can be considered acceptable and good enough for purposes of comparison, since the differences between the memory and structure handling in C++ and C# make it hard to achieve completely identical internal implementations. A sketch of the C# approach is shown below.
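The prototype source is not included in the thesis; the following C# fragment is a minimal sketch of the Marshal.PtrToStructure approach described above, matching the operations that later appear as profiler entries in Table 5.1 (ReadBytes, GCHandle allocation, PtrToStructure, GCHandle free). The exact Packet layout is a guess based on the field list in section 4.2.1, and the variable-length string fields are deliberately left out.

using System.IO;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct Packet
{
    public long CreationTime;  // 8-byte timestamp
    public uint Size;          // 4-byte DWORD
    public uint IdNumber;      // 4-byte DWORD
    public uint DataNumber;    // 4-byte DWORD
    // Category and Description omitted; variable-length fields
    // need separate handling.
}

class PacketReader
{
    // Reads one fixed-size packet header from the stream and
    // reinterprets the raw bytes as a managed Packet structure.
    public static Packet ReadPacket(BinaryReader reader)
    {
        byte[] buffer = reader.ReadBytes(Marshal.SizeOf(typeof(Packet)));

        // Pin the managed buffer so its address stays stable...
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            // ...then convert the raw memory to a Packet.
            return (Packet)Marshal.PtrToStructure(
                handle.AddrOfPinnedObject(), typeof(Packet));
        }
        finally
        {
            handle.Free(); // unpinning shows up in the profiler too
        }
    }
}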

The C# server and client implementations were also compiled and tested under the cross-platform virtual machines Mono and dotGNU Portable.NET.

4.3.2 Netcon C#

The Netcon prototype implementation is based on a pre-existing C# implementation of a Netcon client written by a co-worker at Clavister. The client was used as a base and modified with extra handshaking logic to provide compatible server functionality. Various parts of the Netcon parsing mechanism were modified to support transfers of bigger data chunks, and facilities were added to pick and choose the encryption algorithm to be used for data transfers.

The actual database search functionality of this prototype is the same as in the above-mentioned C# prototype. The difference, and the interest here, lies in the Netcon transfer protocol, which provides authentication and encryption services for data transfers.

4.3.3 Hardware setup

Software development and test runs were conducted on a Pentium/Windows 2000 SP4 machine. All machine-specific details of interest are listed below:


• Intel Pentium 4 3.0 GHz Processor

• 512MB RAM

• Western Digital 800JD Hard Disk Drive

4.3.4 Software setup

Software-specific details:

• Microsoft Windows 2000 v5.00.2195 SP4

• Microsoft Visual Studio 2003 v7.1.3088

• .NET Framework v1.1.4322 SP1

• Mono v1.1.9.1

• DotGNU Portable.NET v0.7.2

The tests were conducted using Visual Studio release builds with compiler optimizations turned on, in order to attain the best performance for each implementation. In addition to the above, Microsoft Excel was used to collect and graph the actual timing values.

4.4 Alternatives

Choices made at the start dictated the direction and scope of the project. However, some project aspects might have been approached in other ways. Implications of these choices are discussed below.

Full functionality

In their current shape and form, the prototype and database implementations are just that - prototypes with incomplete functionality. The log databases used in a real-world situation are more extensive, with more data types demanding more intricate search behaviour. Still, the basic principle and general functionality are the same, whereby packet structures are searched for in binary files by the server and returned to the client. The implementation and method are the same for all prototypes, and a comparison would therefore still indicate relative strength and performance.

That being said, the ideal solution would naturally be a complete implementation of the actual real-world search engine using real-world data in the prototypes.

However, implementing complete functionality would have meant a very time-consuming process before any testing could start. It was therefore decided to use a scaled-down version and to mimic the query process as closely as possible.


Other platforms

Due to time constraints, tests have so far been conducted only on the Windows platform. It would have been desirable to test the functionality and perform the same tests on other platforms as well, to spot any major differences in search run-time or source code incompatibilities at an early stage.

Testing on other platforms will naturally be a very important part of later stages of the project and throughout the final development process.


Chapter 5

Evaluation

This chapter will discuss the results of the prototype tests described in the previous chapter. Performance values are presented for the operations of searching, transferring, and encrypting/decrypting data. The chapter ends with a discussion of the results and whether they meet our expectations.

5.1 Introduction

As mentioned in the previous chapter, the prototype results provide a guiding comparison of relative performance. Since the prototypes essentially use scaled-down test versions of the full-blown log format and search functionality, they are not absolute performance indicators for the final implementation.

Further testing and evaluation should therefore be a continuous process during development, to be able to spot bottlenecks as early as possible.

5.2 Test procedure

For each pair of server/client implementations, a log query was sent from client to server to match and fetch all the data available in the log data file. This means all packets match the search string, and thus all packets are parsed and deserialized from their binary representation into comprehensible data structures. In this way, any performance differences between implementation languages and/or data transfer methods become more accentuated than if we had only matched smaller portions of the log data.

Each test search was performed at least 15 consecutive times to compensate for small (or big) deviations. (The number was arbitrarily chosen; additional searches for some operations on the largest log sizes simply proved time-consuming without returning any significantly differing time data.) Below is a list of other measures taken to reduce interference from external factors:

• Data is sent locally on the same computer from server to client process

• Other network activity is decreased to a minimum

• The number of programs and services running in the background is minimized

• Computer usage during the test is kept to a minimum

Still, despite the measures taken, a few of the test runs displayed quite a lot of timing fluctuation. The reasons for this are unclear, but may have to do with internal process scheduling, memory fragmentation or other hard-to-isolate issues.

Small design variations in the log handling and searching process in the server component were also tried and tested for the C# implementation, and are included in the results. These constitute different ways of accessing the log data: either from file or, as an alternative, from memory (data in the log file is pre-loaded into memory before any search is commenced). The rationale behind this is of course that reading from memory is theoretically much faster than reading from disk. The question of interest is then how much overhead is added when searching is done in files, compared to searching in memory.

5.3 Experimental errors

Each data point measurement in the graphs below has a degree of uncertainty or error attached to it. The amount of error was determined by calculating the standard deviation and the standard error (i.e. the standard deviation adjusted for sample size) alongside the average value. These give a best estimate of how far the experimental quantities might be from the actual "true values".

In the graphs, the range of the standard error (minimum and maximum error values) is included for each curve in the graph legend. In general, the minimum error figure belongs to the smallest log size and, conversely, the maximum error figure was almost always obtained for the biggest log size.

The bigger the log size, the more time the operation in question takes, and thus the greater the chance of faults, memory issues, momentary slowdowns or other external factors creeping in and causing measurement errors. For reference, the standard definitions are recalled below.
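The thesis does not spell the formulas out; for n repeated measurements x_1, ..., x_n of one data point, the standard textbook definitions consistent with the description above are:

\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}, \qquad
\mathrm{SE} = \frac{s}{\sqrt{n}}

With at least n = 15 runs per data point, the standard error is thus at most about a quarter of the standard deviation, since 1/sqrt(15) ≈ 0.26.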

5.4 Search performance

The search performance describes the execution time of a log search operation where the query is made to match all packets in the database. Only the time for searching is measured, that is, the total time for deserializing binary log data, matching this data against the query and, if it matches, placing it in an array buffer.

[Plot: Search time - all packets. X axis: log size (k packets); Y axis: seek time (ms). Curves: c++ (err: 1-5), c# (err: 2-16), c# (mono) (err: 3-19), c# (mem) (err: 2-14), c# (mono-mem) (err: 2-18), c# dotGNU (err: 16-5847).]

Figure 5.1: Timing values for the search operation.

In Figure 5.1 there are two separate curves each for Mono and C#. One describes the performance when continuously fetching the data from file, while the other describes the performance when fetching data from memory (all log data has been pre-loaded into memory and no file operations are performed while searching).

It is easy to see that C++ has a definite edge in the heavy-duty operation of searching and parsing. This was the expected result and no big surprise.

It is also easy to see that, due to its interpreted (rather than compiled) nature, DotGNU Portable.NET cannot compete performance-wise with any of the compiled implementations; its curve increases rapidly from the start.

Time Slice / Method Breakdown:

ClientSearch(query...)                   100%
  ReadPacket(Packet&)                     65%
    Marshal.PtrToStructure(ptr, Type)     53%
    ReadBytes(int)                        16%
    Allocate GCHandle                      9%
    Free GCHandle                          9%
    other operations                      13%
  WeWantThisPacket(Packet&)               19%
  other operations, size checks etc.      16%

Table 5.1: Profiler results for the C# search function.


The equivalent C# implementation, on the other hand, is about 3-4 times slower than the C++ implementation. By using a number of .NET profiler applications, it was possible to break the search function down into parts and quite easily see where the most time was being spent during execution. See Table 5.1.

The table shows that the biggest time slice (65% of the time) is spent parsing packets. In turn, the biggest share of this time slice is spent in the Marshal.PtrToStructure function, that is, the .NET function for converting an unmanaged memory area to a managed data type. In addition, the necessary allocation and destruction of garbage collection handles also takes a good chunk of execution time. The actual file access only amounts to 16% of the packet parsing function.

This is the biggest culprit of the C# implementation, and of C# in general in my opinion. The memory management functionality of C# adds execution overhead, especially in performance-hungry operations such as the above packet filtering/searching test scenarios.

5.5 Data transfer performance

In the graphs below, the data transmission time is measured. On the server end, the send time is measured, and on the client end, the receive time. Any time for preparing the data is included as well; this means the time for serializing the data (and encrypting/decrypting it in the case of Netcon) is included.

[Plot: Send times - Pipes comm - all packets. X axis: log size (k packets); Y axis: time (ms). Curves: c++ send (err: 2-22), c# send (err: 0-10).]

Figure 5.2: Timing values for named pipe data transfers (send).

[Plot: Receive times - Pipes comm - all packets. X axis: log size (k packets); Y axis: time (ms). Curves: c# receive (err: 0-12), c++ receive (err: 1-20).]

Figure 5.3: Timing values for named pipe data transfers (receive).

The named pipe transmission time graphs show similar performance for the C# and C++ implementations. However, the curves start diverging at log sizes of approx. 300-400k elements, where the polynomial C++ transmission time curve actually overtakes the linear C# time curve. This result is somewhat unexpected, and it is unclear why the C++ measurements follow a polynomial curve approximation.

However, this proved to be the test with the most fluctuating time measurements, so it would probably have been wise to do even more test runs for this particular case to get more reliable averaged data. Theoretically there should be minimal to no difference in transmission times between the two implementations, because both versions ultimately make use of the same pipe functionality found in the Windows dynamic-link library kernel32.

The corresponding graphs for TCP socket transmissions are dominated by the curves belonging to the Netcon implementations. The darker shade signifies the initial version, and the lighter shade a revised and optimized version of the same implementation. It should be kept in mind, however, that the Netcon curves include encryption/decryption of the data before the actual transmission.

We can deduce from this graph that the majority of implementations hover around 500-1500 ms for the biggest log sizes. If we look at the pipes graphs in the previous figures (5.2 & 5.3), the performance is more or less comparable for the C# and C++ implementations. Pipes do seem to have a slight edge, but the difference is hardly dramatic (only around half a second for the biggest log sizes).

It is also apparent that the Netcon implementation adds a lot of overhead in the transmission process. Part of this is due to encryption/decryption, but most of all, the reason is actually the packet parsing procedure. More on this later.


[Plot: Send times - TCP comm - all packets. X axis: log size (k packets); Y axis: time (ms). Curves: c++ send (err: 1-2), c# send (err: 1-2), c# (netcon) send (err: 10-82), c# (netcon2) send (err: 8-35), c# (mono) send (err: 1-3).]

Figure 5.4: Timing values for TCP socket data transfers (send).

[Plot: Receive times - TCP comm - all packets. X axis: log size (k packets); Y axis: time (ms). Curves: c# receive (err: 1-32), c++ receive (err: 1-9), c# (netcon) receive (err: 28-2925), c# (mono) receive (err: 2-37), c# (netcon2) receive (err: 15-1624).]

Figure 5.5: Timing values for TCP socket data transfers (receive).


5.6 Total performance

[Plot: Total (Search+Send+Receive) time - Pipes comm - all packets. X axis: log size (k packets); Y axis: total time (ms). Curves: c++ (err: 3-15), c# (err: 2-37).]

Figure 5.6: Total named pipe operational time.

[Plot: Total (Search+Send+Receive) time - TCP comm - all packets. X axis: log size (k packets); Y axis: total time (ms). Curves: c++ (err: 3-15), c# (err: 4-40), c# (netcon) (err: 40-2960), c# (mono) (err: 5-59), c# (netcon2) (err: 35-2127), c# (dotGNU) (err: 19-7493).]

Figure 5.7: Total TCP socket operational time.

The figures above describe the total time for the whole query operation. This includes the time for sending the query to the server, performing a search in the database, collecting the matching data and sending the data back to the client (including encryption in the Netcon implementation), who finally receives the data (including decryption in the Netcon implementation). Here we can see that the total duration of the query for the C# implementation is about twice that of the C++ implementation. This is not bad, considering that the garbage collection mechanism, type safety and other helpful error-minimizing C# features are in use.

If we look at the corresponding graph for TCP socket data transfers (Figure 5.7), the same ratio holds between C++ and C#, about 1:2. The conclusion here is that the type of data transfer makes little difference, at least in the case of local inter-process transfers.

When it comes to the alternative cross-platform .NET runtime engines Mono and DotGNU Portable.NET, we can see that Mono finishes in about 3.2x the C++ time. However, it is not lagging very far behind the Microsoft runtime (C#), being about 36% slower. The Mono runtime engine is under continuous development, and it is likely this difference will shrink as the project grows more mature.

Portable.NET, on the other hand, is obviously not very suitable in its present state. The interpreter engine is simply not up to this kind of task.

The Netcon implementations display somewhat disappointing figures, but it must be kept in mind that these implementations encrypt the data, which of course impacts the speed of data transfers adversely. However, profiling showed that the vast majority of the transfer time was not spent encrypting/decrypting, but was actually spent copying arrays. This is done on the receiving end in order to receive big chunks of data split over multiple packets: the smaller parts must be assembled back together in the right order. Still, the number of copy operations does seem far too great, and it is likely that the code can be modified to eliminate the excess. More about this in Chapter 6. The sketch below illustrates the kind of reassembly cost involved.
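The Netcon source is not shown in the thesis, but the following C# sketch illustrates, by assumption, why naive chunk reassembly is expensive: growing the result array for every received chunk re-copies all previously received bytes, giving quadratic copying overall, whereas buffering chunks in a MemoryStream (or a preallocated array) copies each byte only a constant number of times.

using System.IO;

class ReassemblySketch
{
    // Naive approach: reallocate and copy the whole result for each
    // chunk. Over n chunks this copies O(n^2) bytes in total.
    static byte[] AppendCopy(byte[] assembled, byte[] chunk)
    {
        byte[] grown = new byte[assembled.Length + chunk.Length];
        System.Array.Copy(assembled, 0, grown, 0, assembled.Length);
        System.Array.Copy(chunk, 0, grown, assembled.Length, chunk.Length);
        return grown;
    }

    // Cheaper alternative: let a MemoryStream buffer the chunks and
    // produce the final array once at the end.
    static byte[] AssembleWithStream(byte[][] chunks)
    {
        MemoryStream buffer = new MemoryStream();
        foreach (byte[] chunk in chunks)
            buffer.Write(chunk, 0, chunk.Length);
        return buffer.ToArray();
    }
}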


5.7 Cryptography performance

[Plot: Total Time (encrypt+send+receive+decrypt) of Encryption Algorithm. X axis: log size (k packets); Y axis: total time (ms). Curves: RC2 (64key,64blk) (err: 9-1588), CAST128 (128key,64blk) (err: 12-1613), DES (64key,64blk) (err: 9-1498), 3DES (128key,64blk) (err: 12-1539), Rijndael (128key,128blk) (err: 14-1712).]

Figure 5.8: Total time for cryptography and transfers.

Figure 5.8 shows the total time of a query operation, as in the previous Figure 5.7, but instead of comparing different language implementations or runtime engines, a comparison is made between different encryption algorithms for Netcon data transfers. In the figure, it is apparent that the choice of encryption algorithm does not have a very big impact on overall performance. Of course, the ciphers with 64-bit keys are unsurprisingly faster than the 128-bit ones, but any significant difference does not really show up until we use bigger log sizes (150k+ elements).

It was mentioned earlier that the actual time for encryption/decryption in the Netcon tests is dwarfed by the total time consumed just by parsing incoming packet chunks. The process involves excessive calls to array copy functions, which, added together, take a lot of time. This will be more apparent after looking at the next graph, Figure 5.9.


[Plot: Total Encryption+Decryption Time. X axis: log size (k packets); Y axis: total time (ms). Curves: RC2 (64key,64blk) (err: 1-12), CAST128 (128key,64blk) (err: 7-21), DES (64key,64blk) (err: 4-19), 3DES (128key,64blk) (err: 9-21), Rijndael (128key,128blk) (err: 3-26).]

Figure 5.9: Total time of encryption/decryption.

What this graph shows is the total actual encryption/decryption time for each crypto algorithm. The time for sending/receiving/parsing the packets is not included here. By comparing the timings to the previous graph, it is apparent that the cryptography operations only play a small part in the overall duration of a Netcon data transfer. For example, the DES algorithm clocks in at 1.5 s for 250k log entries, while the total time for a DES-encrypted transfer is about 50 s, as seen in Figure 5.8. That means the share of time actually spent encrypting/decrypting is, surprisingly, only about 3% of the total transmission time (1.5/50 = 0.03); this varies somewhat between algorithms, up to about 10%. A sketch of how such a measurement can be made with the .NET class library follows.
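The benchmark code is not reproduced in the thesis; the sketch below shows one plausible way to time bulk encryption with the .NET 1.1 cryptography classes, which provide DES, TripleDES, RC2 and Rijndael out of the box (CAST128 is not part of the standard library and came from the Netcon code). The buffer size and key/IV handling are assumptions made for illustration.

using System;
using System.Security.Cryptography;

class CryptoTimingSketch
{
    static void Main()
    {
        // Rijndael with a 128-bit key, as in Figure 5.8.
        SymmetricAlgorithm alg = Rijndael.Create();
        alg.KeySize = 128;
        alg.GenerateKey();
        alg.GenerateIV();

        byte[] plaintext = new byte[1 << 20];  // 1 MB of dummy log data

        ICryptoTransform encryptor = alg.CreateEncryptor();
        long start = DateTime.Now.Ticks;
        byte[] ciphertext = encryptor.TransformFinalBlock(
            plaintext, 0, plaintext.Length);
        long elapsed = DateTime.Now.Ticks - start;

        Console.WriteLine("encrypted {0} bytes in {1} ms",
            plaintext.Length, elapsed / TimeSpan.TicksPerMillisecond);
    }
}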

5.8 Summary

The performance figures presented in this chapter have shown a number of characteristics of the different implementation languages and/or runtime engine solutions. The main points can be summarized:

• The C++ implementation is unsurprisingly the fastest one for doing database searches. C# is about three to four times slower. C++ would definitely be the better choice for intensive, low-level operations.

• TCP Socket transfers and Named Pipe transfers are roughly equal in terms of speed. Pipes may have a small edge in the above inter-process transfers.


• At the present time, the choice of cryptography algorithm does not have a very big impact, due to other dominating factors in the C# Netcon implementation.

• The packet parsing functionality is a bottleneck of the Netcon implementation. It would be highly desirable if this could be optimized or worked around.

• The Mono engine executes code somewhat slower than the Microsoft runtime engine, roughly 30% slower in the above test cases. Still, the difference is not enormous, and Mono might very well be a viable alternative to the Microsoft runtime.

• The DotGNU Portable.NET runtime engine is not practically usable at the present time. The interpreter engine is simply too slow to provide good performance.


Chapter 6

Design Proposal

This chapter will present the final design proposal for the LQS system component, based on the results of the evaluation and the research. First, a general overview of the design and peripheral components is presented. Then follows a rundown of the most vital system parts, their intended functionality and the design rationale behind them.

6.1 Overview

The proposal outlined in this chapter is the result of findings from the prototype tests as well as from an investigation of general software design and multi-platform software design. Efforts have been made to identify the problem areas and, accordingly, to find appropriate design solutions. In software design, a given problem may often, if not always, have more than one solution. Therefore, at the end of the chapter, the pros and cons of this as well as various other design approaches are discussed.

This chapter is also accompanied by two high-level design diagrams (see appendices B-1 and B-2), which might prove helpful for grasping the whole picture of the proposal.

6.2 Design decisions

Below follow more detailed descriptions and discussions of each major area that needs to be taken into account in the design of the LQS component.

6.2.1 Cross-platform compatibility

Several options exist for dealing with the problem of cross-platform compatibility. An overview of the most common solutions is given in section 2.3.2.

What we would like to achieve is to avoid maintaining separate source code sets for each platform. This would mean a lot of duplicate code and effort, and it would
