Malmö University
Faculty of Technology and Society

Testing of a smart transducer network, based upon open-source technology

Bachelor Thesis, 180 credits

Bachelor of Science in Computer Engineering

Author: Mathias Beckius
Supervisor: Magnus Krampell
Examiner: Dr. Ulrik Eklund

March 15, 2016


Abstract

Arduino Verkstad AB, the Swedish branch of Arduino, has developed a prototype that demonstrates a smart transducer network with a self-configurable communication protocol. The protocol is called I2C+, since it is based upon I2C. One of the possible areas of application is the EU-funded PELARS project, where Arduino Verkstad is responsible for creating educational tools. Within the PELARS project, a maximum of 20 transducer modules is expected to be connected at the same time.

The aim of this thesis was to create a testing tool and a test suite for the prototype system, which could later be used and further developed by engineers at Arduino Verkstad during the development of the final product. Though the testing tool was primarily needed for this particular system, it was considered desirable if it could be reused for similar projects as well.

It was relevant to create the testing tool and the test suite in order to analyse the validity of the I2C+ protocol, which might become Arduino’s future standard in connectivity between smart transducers. The performance of the I2C+ protocol also affects the implementation of the smart transducer system within the PELARS project.

This work has been guided by a specification of requirements and by a systematic top-down approach to solving several subproblems. A testing tool and a test suite were created, which serve as a proof of concept. The testing tool has a modular design, which makes the solution reusable for other purposes. The testing tool and the test suite were validated by using Arduino’s prototype as a test object. Although only 10 transducer modules were used during these tests, the results showed that the prototype does not perform well as the number of modules grows. Therefore, improvement of the system and further testing are advised.


Acknowledgements

I would like to thank Arduino Verkstad for giving me the opportunity to work with their prototype. Though this involved some challenges, it also gave me valuable experience.

I would like to thank friends, family and co-workers for supporting me and encouraging me to finish this thesis.

I would also like to thank Magnus, who always gave me a push in the right direction whenever I got lost. He also gave me a lot of great feedback on my thesis.

Finally, I would like to express my deepest gratitude to Karin, who has always supported me, and who had a lot of patience with me during my years as a student.


List of Contents

1 Introduction
  1.1 Background
  1.2 Prototype
  1.3 Problem
  1.4 Relevance
  1.5 Thesis outline

2 System description
  2.1 UI
  2.2 Hub
  2.3 Modules
  2.4 Communication
  2.5 Hardware specifications

3 Theoretical background
  3.1 The purpose of testing
  3.2 Test case design
  3.3 Black-box testing
  3.4 Phases of software testing
  3.5 Manual and automated testing
  3.6 Verification and validation
  3.7 Usability and response time
  3.8 Measuring execution time
  3.9 Analysing and visualising data

4 Related work
  4.1 ToLERo: TorX-Tested Lego Robots
  4.2 A System for Automatic Testing of Embedded Software in Undergraduate Study Exercises
  4.3 Using Arduino microcontroller boards to measure response latencies

5 Methodology
  5.1 Systems approach

6 Results
  6.1 Problem breakdown - an overview
  6.2 Documentation of the prototype
  6.3 Creation of test suite
  6.4 Creation of testing tool
  6.5 Improvement of the prototype
  6.6 Test results

7 Discussion
  7.1 General discussion
  7.2 Comparison with related work

8 Conclusion
  8.1 Answering research questions
  8.2 Contributions

9 Future work
  9.1 Further improvements
  9.2 Test suite

Appendix A Requirements

Appendix B Test suite


Chapter 1

Introduction

1.1 Background

Arduino SA [1] is a company that develops open-source hardware and software, made for rapid prototyping of electronic systems with sensors and actuators. The products are designed to be easily used by both professionals and hobbyists, within applications such as home automation and robotics. Some of the most famous products are the Arduino Uno, Mega, Yún and Leonardo.

Arduino is a global organisation with offices in several places around the world. One of them is Arduino Verkstad AB [2], Arduino’s Swedish office, located in Malmö. Arduino Verkstad runs projects mainly related to education and research and development (R&D). One of the current projects is the PELARS [3] project, funded by the European Union. In this project, Arduino Verkstad is responsible for creating educational tools with the purpose of aiding teachers and students within science, technology, engineering and math. One of the ideas within the PELARS project is to include sensors and actuators (transducers) in laboratory exercises. To make such components easy to use and appealing to different levels of education, individual transducers will be mounted on circuit boards. On each board, the transducer will be connected to a microcontroller, which makes it a so-called smart transducer module. The goal is to produce modules that are easy to connect, in order to form a network of transducers [4].

There are many ways of creating networks of smart transducer modules. Configuring these kinds of networks can be time-consuming. Connecting and disconnecting modules would be a lot easier if manual configuration were reduced to a minimum, or preferably, not needed at all. Key requirements for such networks could be as follows:

• Modules initiate communication without manual configuration, i.e. they have ”plug and play” capabilities (RG 1).

• The communication protocol is effective, in terms of performance (RG 2).

In order to verify requirements, the system must be tested. By executing a number of test cases, the developers at Arduino Verkstad can verify that a set of specified requirements are satisfied.

Research question (generic problem):

RQ 1: How can a system for self-configurable transducer networks be tested during development, in order to verify requirements such as RG 1 and RG 2?

1.2 Prototype

Arduino Verkstad has developed a prototype (see Chapter 2 for details) which demonstrates a smart transducer network with a self-configurable communication protocol. The prototype is capable of connecting and disconnecting modules, which enables the user to create transducer networks of varying sizes. Interaction with the modules is limited to setting the output of actuator modules and reading the current input/output of actuator or sensor modules. This system is supposed to be used within the PELARS project. For this purpose, a maximum of 20 modules is expected to be connected at the same time [4].

The protocol, also created by Arduino Verkstad, is called I2C+ because the communication is based upon the I2C protocol [5]. This protocol was chosen since it is suitable for short-distance communication, requires a minimal amount of wiring and is also commonly used for connecting external modules to Arduino boards in general [4].

An application-level protocol, JSON [6], is used to encode and decode transducer-specific data, such as identification number, type and value (input and output). JSON was mainly chosen since the data format is human-readable, which makes it easy to develop with, as well as to analyse and debug [4].

The prototype was developed within a limited time frame. It had bugs, and it was limited in how many modules could be connected at the same time (4-5 modules) [4].

1.3 Problem

The aim of this thesis was to create a testing tool, to be used for testing a system as described in Chapter 2. In order to perform tests, a set of test cases (a test suite) also had to be defined. Key requirements for this thesis were:

• A test suite is created, to be used for verification of the prototype (R 1).

• A testing tool is created, to be used for executing test cases (R 2).

• The testing tool and the test suite are validated using the prototype (R 3).

Derived from these key requirements, additional requirements were specified for the outcome of this thesis.

The testing tool and the test suite were intended to be a proof of concept that could be used and further developed by engineers at Arduino Verkstad during the development of the final product. Though the testing tool was primarily needed for this particular system, it was considered desirable if it could be reused for similar projects as well. Therefore, an additional requirement was formulated as follows:

• The testing tool can be used for verification of similar systems (R 2.1).

To reflect the core values which the concept of Arduino is built upon, it was required that the testing tool be based upon open-source licensed products. Related to this matter, it is common practice for Arduino to use their own products within R&D. Hence, another requirement for the testing tool was specified:

• The testing tool is based upon open-source products, primarily Arduino software and hardware (R 2.2).

More requirements are presented in Chapter 6. The full list of requirements is available in Appendix A.

Research questions (specific problem):

• RQ 2: How can the test cases be written, and the testing tool be created, in order to bring up a suitable solution for verification of this particular system?

• RQ 3: How can the testing tool be created, in order to be a suitable solution for similar needs in other projects?

• RQ 4: How can the test cases and the testing tool be validated?


1.3.1 Prerequisites

When the work of this thesis began, a prototype (both software and hardware) could be accessed. Unfortunately, the prototype had almost no documentation. Besides reading existing documentation, knowledge could also be gained from the prototype’s source code and by talking to developers. This resulted in the system description in Chapter 2.

Moreover, there were no documented requirements for the system. Due to the lack of specific requirements, the functionality of the prototype, as described in Chapter 2, had to be used as a reference for specifying test cases. The test suite is described in Appendix B.

1.3.2 Limitations

• The test suite is limited to testing the hub device, the transducer modules, and the communication between these subsystems (see Chapter 2 for details).

• The test suite is also limited to black-box testing. Therefore, the inner workings of the prototype are neither analysed nor discussed. The test cases only test the communication to and from the modules (via the hub). This also implies that there is no electrical verification of whether e.g. a sensor module converts a voltage to the correct corresponding digital value.

• Testing was only done with 10 modules, due to a limited number of accessible modules.

• Only two of the test cases were executed, due to limited access to the prototype system.

1.4 Relevance

The results of this thesis are relevant for analysing the validity of the I2C+ protocol, which might become Arduino’s future standard in connectivity between smart transducers. The results might also be relevant for the PELARS project, since the results are considered to be interesting for the on-going development of the system.

1.5 Thesis outline

The remaining chapters of this thesis have the following contents:

• Chapter 2 System description consists of a system overview that describes the functionality of the prototype system (as mentioned in Section 1.2). The contents of this chapter are actually one of the results of the thesis. The reason for not placing this among the other results in Chapter 6 is that the current outline builds up the context better, guiding the reader to a better understanding.

• Chapter 3 Theoretical background is mainly dedicated to the topic of testing.

• Chapter 4 Related work summarizes work that is similar or related to the work of this thesis.

• Chapter 5 Methodology describes how the results were produced.

• Chapter 6 Results presents the results, such as the testing tool and the test suite.

• Chapter 7 Discussion provides a general discussion of the results and a comparison with related work.

• Chapter 8 Conclusion contains the conclusion of this thesis, based upon the research questions (RQ 1 - RQ 4).

• Chapter 9 Future work recommends e.g. further development of the testing tool.

Source code for the testing tool and the test cases has been omitted from this thesis report, but it can be acquired on request, by sending an email to mathiasbeckius@hotmail.com.


Chapter 2

System description

This chapter presents an overview of the system’s architecture. The system consists of a hub device, a number of transducer modules and a user interface (UI). As mentioned in Section 1.3.2, the main focus will be on the hub device, the transducer modules, and the communication between them, as they are the core of the I2C+ concept (see Figure 2.1).

Figure 2.1: System overview.

2.1 UI

The basic purpose of the UI is to show a visual representation of the transducer network. Values from modules, collected and sent from the hub, can then be displayed on the screen. The hub can also receive commands for setting the output of actuators to specific values. This feature gives a user the possibility of controlling actuators via the UI. By combining both features, reading sensors and setting actuators, another possibility emerges: a sensor module can control one or more actuator modules.

Example: Let’s say that two modules are connected to the hub: a module with a push-button and a module with a light-emitting diode (LED). By virtually connecting the output of the push-button to the input of the LED, the LED will be controlled by the push-button. When pressing the push-button, the LED will light up.

The future version of this application will allow the user to define more advanced interaction between the modules. In other words, it will be a visual programming environment, running on e.g. a personal computer (PC).


2.2 Hub

The hub is the center of the network. It is responsible for keeping track of transducer modules and for conveying data between the UI and the modules. In order to communicate with the modules, the hub must identify each module with an individual address. The hub assigns an address to a module when the module connects to the network.

The hub requests data from the modules regularly. When the requested data is received, the hub redirects the data to the UI. The hub can also set the output of actuators, based upon commands sent from the UI.

2.3 Modules

A module has one transducer: a sensor or an actuator. Typical sensors are push-buttons, potentiometers, light-dependent resistors (LDR) and temperature sensors. Typical actuators are LEDs, transistor switches and motor controllers. All modules have the following properties:

• Identification number (ID): This is also the same as the module’s I2C address.

• Type: Specifies the type of transducer (e.g. LED, LDR).

• Value: This is an 8-bit integer value (0-255) for an actuator’s output value, and a 10-bit value (0-1023) for a sensor’s input value.
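These two value ranges map directly onto the standard Arduino analog I/O functions, where analogRead() returns the 10-bit range (0-1023) and analogWrite() accepts the 8-bit range (0-255). A minimal illustrative sketch (the pin numbers are placeholder assumptions, not taken from the prototype):

const int SENSOR_PIN = A0;    // analog input pin (placeholder)
const int ACTUATOR_PIN = 9;   // PWM-capable output pin (placeholder)

void setup() {
  pinMode(ACTUATOR_PIN, OUTPUT);
}

void loop() {
  int sensorValue = analogRead(SENSOR_PIN);   // 10-bit value, 0-1023
  analogWrite(ACTUATOR_PIN, sensorValue / 4); // scaled to 8 bits, 0-255
}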

2.4 Communication

2.4.1 UI - Hub

The UI and the hub utilize serial communication (UART) via the USB interface, at a baud rate of 115200 bps, with 8 data bits, no parity and one stop bit.

The hub sends information about the network, i.e. the connected modules, to the UI. The UI can send commands for controlling the output of actuators.

2.4.2 Hub - modules

The hub and the modules communicate over the I2C bus, at a rate of 100 kbps. The address width is 7 bits. Furthermore, a fixed buffer width of 32 bytes is used for sending data. This means that 32 bytes are always sent, even if less than 32 bytes are actually needed for the data itself. The rest of the buffer’s positions are filled with NUL characters.

I2C is based on a master-slave relationship. By default the hub is master (with address 1) and the modules are slaves. Most data transactions are initiated by the hub: it requests data from the modules and sends data to the modules. An exception is made when a module connects to the network and requests an ID; during this moment the module has the master role.

After a module has powered up, it joins the I2C bus (with address 0) and requests an ID from the hub. The hub replies by sending an ID to the module, which is the same as the hub assigning a free address (between 2 and 127) to the module. After the module receives the ID, it rejoins the bus, as a slave, with this ID as its address.

When the modules have been assigned individual addresses, the hub can request data from them. The data is then redirected to the UI.

The hub can also receive commands, from the UI, for controlling actuators. When a command is received, it is redirected to the correct module.
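As a concrete illustration of this exchange, the following is a minimal sketch of how the hub’s polling could look with the standard Arduino Wire library. It is an assumption for illustration only; the actual I2C+ implementation is not reproduced in this report, and the address handling is simplified to a single module.

#include <Wire.h>

const uint8_t MODULE_ADDR = 2;    // first assignable address (hub is 1)
const uint8_t PACKET_SIZE = 32;   // fixed buffer width, NUL-padded

void setup() {
  Serial.begin(115200);           // link to the UI (see Section 2.4.1)
  Wire.begin();                   // join the I2C bus as master
  Wire.setClock(100000);          // 100 kbps, as specified above
}

void loop() {
  // Request one fixed-size packet and forward the payload to the UI.
  Wire.requestFrom(MODULE_ADDR, PACKET_SIZE);
  while (Wire.available()) {
    char c = Wire.read();
    if (c != '\0') Serial.write(c);  // skip the NUL padding
  }
  delay(100);  // a real hub would iterate over all assigned addresses
}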


2.4.3 Data format

Data packets sent between all subsystems, i.e. the UI, the hub and the modules, are JSON encoded and carry transducer-specific information (ID, type of transducer and data value). Data from e.g. an LED module with the ID 2 and the output value 0, sent to the UI via the hub, would look like:

{"ID":2,"Type":"LED","Val":0}

If an actuator module, e.g. with the ID 3, should be set to a specific output value, e.g. 255, then the data string would look like:

{"ID":3,"Val":255}

2.5 Hardware specifications

The hub uses an Atmel ATmega32U4 microcontroller (MCU), with a 16 MHz system clock and a supply voltage of 5 V. The ATmega32U4 has on-chip support for USB communication. The hub is powered by the UI, via a USB cable.

All modules have the same basic hardware: an Atmel ATmega328P microcontroller, with an 8 MHz system clock and a supply voltage of 3.3 V. The hub and the modules are interconnected via a 5-line bus interface. The hub has only one bus connector, but the modules have connectors on both the left and right side (see Figure 2.2). The modules are powered by the hub. Naturally this makes the hub the first node on the bus, while the modules may be connected in any order. The bus lines consist of:

• Power supply: +3.3V and ground (GND).

• I2C: data (SDA) and bus clock (SCL).

• One spare line.


Chapter 3

Theoretical background

3.1 The purpose of testing

There are several opinions about the reasons for testing. Whittaker [7] emphasizes the importance of software quality. He argues that there is no excuse for software failures, since they can be minimized by various strategies for preventing and detecting bugs.

Black [8] declares that software testing is not about proving that the software is free from bugs. Testing is not even about discovering all bugs. These goals are very hard to achieve, since most software systems have an almost infinite number of possible execution sequences. Black [8] leans more to the notion that testing is performed to reduce the risk of non-working software to an acceptable level.

According to Berger [9], testing is generally performed to:

• Find bugs. Besides discovering and removing errors, this also reduces costs: the earlier a bug is found, the less expensive it is to fix.

• Demonstrate that a system fulfils its specification.

• Improve performance. Finding and eliminating inefficient code during testing leads to better performance.

Eriksson [10], as well as Berger [9], points out that there is no practical way of proving that software is entirely correct. Only a subset of tests can be chosen, preferably tests that have the highest probability of detecting most errors.

3.2 Test case design

3.2.1 Requirements

According to Ryber [11], software should be developed based upon requirements, which are used as a foundation for creating test cases. In order to create test cases, the requirements must be:

• Explicit and non-ambiguous.

• Consistent.

• Complete.

• Measurable.

Based upon the results from running a set of tests, it is possible to determine if the requirements are fulfilled.

Eriksson [10] describes different kinds of requirements, such as functional and non-functional requirements. Functional requirements describe the behaviour of a system. These requirements are usually described by specifying input to the system and the expected output. Non-functional requirements cover aspects such as performance and usability (see Section 3.7).


3.2.2 Test cases

As mentioned in Section 3.1, only a subset of all possible tests can be chosen. This is due to the fact that even a small system can have an almost infinite number of tests. A system, or a function within a system, can generally be used in multiple ways, with a combination of different inputs and with both sequential and concurrent actions [10]. It is not possible to cover all ways of applying input. The key is to build the foundation of testing on variation. All the things that can be varied during a test must be identified. Strategic variations must be selected wisely, and some variations may be excluded [7]. Then a set of tests is chosen: the tests that will probably expose most of the critical errors. It is also important to write tests for both correct and incorrect behaviour [10].

Eriksson [10] points out some pros and cons of test cases:

• Pros:

– Test cases are structured and it is easy to repeat tests over and over.

– When errors are exposed, the results from a good test case basically constitute an error report.

– A test case is rather easy to create an automated test from, since the test steps work as instructions.

• Cons:

– The number of test cases tends to grow to an amount that is hard to overview and maintain.

– Test cases cannot cover everything, even if there are thousands of them.

3.2.3 Different kinds of data input

Whittaker [7] declares that a common way of testing software is by executing the software in environments that are similar to the ”real environment”, and also by using realistic data. By realistic data, Whittaker [7] refers to feeding the system input that mimics expected usage.

According to Hunt and Thomas [12], there are two kinds of data: real-world data and synthetic data. By using both kinds of data, their different natures can expose different kinds of bugs. As ”real-world” implies, real-world data represents typical user data and can be collected from an existing system. Synthetic data is artificially generated and might be needed for any of the following reasons (see the sketch after this list):

• A lot of data is needed, possibly more than can be provided by any collection of real-world data.

• Data is needed to stress boundary conditions (see Section 3.3).

• Data with certain statistical properties is needed, e.g. data in random or sorted order.
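To make the distinction concrete, the following is a minimal sketch of generating synthetic 10-bit sensor values: uniformly random samples seeded with explicit boundary cases. The function is hypothetical and not part of the prototype or the testing tool.

#include <cstdlib>
#include <vector>

// Generate n synthetic sensor readings (assumes n >= 4), seeding the
// set with the boundary conditions of the 10-bit range.
std::vector<int> syntheticSensorData(std::size_t n) {
  std::vector<int> data = {0, 1, 1022, 1023};   // boundary cases
  while (data.size() < n)
    data.push_back(std::rand() % 1024);         // random value, 0-1023
  return data;
}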

3.3 Black-box testing

As explained by Eriksson [10], ”black-box” implies that the system is viewed as a black box, i.e. without knowledge of the inner system details. Black-box testing is based upon requirements and specifications of the system. It can be performed on every level of testing, from unit tests to acceptance tests (see Section 3.4).


Ryber [11] refers to black-box testing as behavioural testing. He also mentions that black-box testing is applicable for functional as well as non-functional requirements, i.e. functional and non-functional testing.

According to Berger [9], it is suitable to apply the black-box perspective to an embedded system, with its different peripherals (inputs and outputs). Black-box tests are based upon which inputs are acceptable and their relation to the outputs. Some of the usual kinds of tests are boundary value tests, performance tests and stress tests [9]. Boundary value tests are performed with inputs that represent boundaries within a particular range. For example, if the numbers 1-12 are valid, then the boundary values would be 0, 1, 2, 11, 12 and 13. Performance tests are important when requirements for e.g. data rates and response times are specified [10]. Stress tests are intended to e.g. overload input channels to see that the software can handle and recover from overload [9].
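Applied to the range example above, a boundary value test could look like the following sketch, where isValidMonth() is a hypothetical function under test:

// Hypothetical function under test: accepts the numbers 1-12.
bool isValidMonth(int m) { return m >= 1 && m <= 12; }

// Returns the number of failing boundary cases (0 means all passed).
int testMonthBoundaries() {
  const int inputs[]    = {0,     1,    2,    11,   12,   13};
  const bool expected[] = {false, true, true, true, true, false};
  int failures = 0;
  for (int i = 0; i < 6; i++)
    if (isValidMonth(inputs[i]) != expected[i]) failures++;
  return failures;
}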

3.4 Phases of software testing

According to Black [8], testing activities within a project can be divided into a sequence of phases, which can also be called test levels [10, 11]. These levels are often organized in a sequence that corresponds to the order in which parts of the system become ready for testing [8]:

1. Unit testing, which basically means testing of subsystems (units). The subsystems are tested as they are created.

2. Integration testing, which is testing of integrated units, i.e. a collection of connected subsystems.

3. System testing, which refers to testing the entire system.

4. Acceptance testing: At this level, the objective is to demonstrate that the system conforms to its specification and is ready for shipment.

3.5 Manual and automated testing

Systems can, at any level, be tested automatically or manually. Automated testing is achieved by writing and running test code [7]. It is appropriate to choose automated tests when a system should be tested under longer test procedures with lots of data input [11]. Although automation makes testing easier, it is not necessarily reliable, since the tests are still based upon software (which might have bugs). If a test succeeded, how do we know if the test is correct? If a test failed, how do we know that the failure is not in the test code [7]?

3.6 Verification and validation

According to the IEEE Standard Glossary of Software Engineering Terminology, verification is defined as:

The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase [13].

Validation is defined as:

The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements [13].


Verification demonstrates whether the output of a development phase conforms to the input, as opposed to showing that the output is actually correct. Errors resulting from incorrect input specification may not be detected and may propagate through later stages of the development cycle. Therefore, it is not sufficient to only depend on verification. Validation is also necessary to check for problems with the specification and to demonstrate that the system is operational [14]. Boehm [15] provides a short and simple description:

• Verification: Are we building the product right?

• Validation: Are we building the right product?

Verification can be done with functional and non-functional tests. Validation can e.g. be done with formal methods such as mathematical and logical techniques, to analyse a system’s behaviour [14].

3.7 Usability and response time

A system with a high level of usability will make it easier for its users to make the most of the system. To achieve this, it is important that people with knowledge about human-computer interaction (HMI) are involved in specifying requirements [10].

One of many HMI aspects when using a system is response time. According to Dix et al. [16], a person can react to a visual signal in 200 milliseconds (ms), but factors such as skill or practice (i.e. experience) might affect reaction time. Similar, but more comprehensive opinions are presented by Nielsen [17]:

• 0.1 second is about the limit for the user to experience that the system is reacting instantaneously; no special feedback is necessary except to e.g. display the result.

• 1.0 second is about the limit for the user to stay focused, even though the user will notice the delay. Normally no special feedback is necessary during delays between 0.1 and 1.0 second, but the user will not have the experience of an instant response from the system.

Normally, response times should be as fast as possible, but a response that is too fast might mean that the user cannot keep up, especially with visual feedback.

3.8 Measuring execution time

According to Stewart [18], there are several methods to measure execution time. The choice of method depends on the system’s hardware features and also on available instrumentation tools (software and hardware). But, in order to select a measurement method, it is also necessary to consider the reason for measuring execution time. Some of the common reasons are analysis of real-time performance and debugging timing errors.

When debugging timing errors, White [19] suggests that operations that heavily affect the timing of the code, such as output to the serial port, should be avoided. One method could be to use spare I/O pins to show the system’s status and visualize the timing of cycles.

The same method can be used for measuring the real-time performance of an algorithm. White [19] and Stewart [18], as well as Ganssle [9], suggest this method. The approach is to toggle an output pin for the duration of the algorithm in question:

1. Set pin level high.

2. Execute algorithm.

3. Set pin level low.

The execution time, i.e. while the pin level is high, can then be measured with an oscilloscope or a logic analyser.
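On an Arduino board, the method amounts to a few lines; the pulse width seen on the measuring instrument equals the execution time. The pin number and algorithmUnderTest() are placeholders:

const int TIMING_PIN = 7;   // spare output pin (placeholder)

void algorithmUnderTest() {
  // the code whose execution time is being measured
}

void setup() {
  pinMode(TIMING_PIN, OUTPUT);
}

void loop() {
  digitalWrite(TIMING_PIN, HIGH);   // 1. set pin level high
  algorithmUnderTest();             // 2. execute algorithm
  digitalWrite(TIMING_PIN, LOW);    // 3. set pin level low
  delay(10);                        // idle so pulses are distinguishable
}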

The method of using a single output pin was used by Zhang [20] and Eriksson [21] for measuring execution time for CAN bus communication. But, as Eriksson [21] states, this method has the disadvantage of affecting the timing, although the impact is not very big.

3.9 Analysing and visualising data

When there is a need for collecting and processing test results, it might be practical to use an external tool for these purposes. MATLAB is an application intended for tasks such as numerical calculations and visualisation of data [22]. MATLAB can also be used for communication between a PC and a microcontroller. There is a support package for Arduino boards, which enables a programmer to control the peripherals of an Arduino board and to collect data via serial communication over a USB port [23].

MATLAB is suitable for scientific and engineering applications. With a microcontroller, data can be sampled and transferred to a PC, where the data can be processed. One of many possible applications is to detect perturbations in the Earth’s magnetic field, as carried out by Yerrabothu [24]. In this implementation, an Arduino board samples data from magnetometers and other sensors. The data is then transferred to MATLAB, where it is presented in the form of plots.

A non-proprietary alternative to MATLAB is GNU Octave, although it does not have all the features that are implemented in MATLAB [25]. Octave can, like MATLAB, be used for a wide range of applications, such as controlling oscilloscopes [26] or analysing and visualising data in robotics education, to name a few. The latter example refers to Balogh [27], who uses Atmel microcontrollers, Arduino software and GNU Octave in robotics education at the University of Technology in Bratislava.


Chapter 4

Related work

4.1 ToLERo: TorX-Tested Lego Robots

The work of Snippe [28] is focused on model-based testing of Lego Mindstorms [29] robots. Model-based testing is done by generating test cases from a model that represents the system under test. The testing tool then runs the tests and compares the results with the expected results, based upon the model.

In this case, Snippe used a PC-based testing tool called TorX [30], which is suitable for industrial as well as academic applications. TorX is designed to be modular. The link between TorX and the system under test is called the adaptor, which translates the communication between the systems.

The adaptor has to be specific for each system to be tested. The aim of Snippe’s work was to 1) develop a specific adaptor for a Lego Mindstorms robot, and 2) develop a generic adaptor out of the specific adaptor. With a generic adaptor, more Lego Mindstorms robots could be supported.

4.2 A System for Automatic Testing of Embedded Software in Undergraduate Study Exercises

Legourski et al. [31] have developed a test system to be used in embedded systems lab courses at Vienna University of Technology. From their experience, it is time-consuming to manually verify the correctness of the students’ programs. To solve this problem, Legourski et al. decided to simplify the process of verifying the functionality of embedded software.

The test system consists of a custom test board, with an Atmel microcontroller, and a host computer (PC). The test board is programmed, controlled and monitored from the host computer. This means that the tests are selected and started remotely from the host. After testing is done, the test results are presented to the user at the host computer. The test board can be connected to the target system via digital and analog I/O pins, RS-232 (UART) and I2C. The test board is connected to the host computer via a UART interface. The system is capable of performing black-box tests on the I/O pins of the target system, i.e. the student’s microcontroller system. The test system provides the target system with input signals which simulate e.g. sensors and switches. The output responses from the target system are then compared to a pre-defined specification, which describes both functionality and timing of signals. The specification of how the target system should behave is based upon a special meta-language, which is compiled and downloaded to the test board. The output of the test system, i.e. the test results, can be presented as a binary grade, such as ”passed” or ”failed”, but also in the form of partial credit for partial functionality.

4.3 Using Arduino microcontroller boards to measure response latencies

Schubert et al. [32] have researched how Arduino boards can be used within cognitive science research. The aim of their work was to replace standard equipment for measuring latencies of button presses, which is claimed to be both expensive and inflexible.

An Arduino board is considerably cheaper than standard equipment, such as response boxes. Another advantage is that the multiple peripherals and I/O pins on an Arduino board give researchers the possibility of connecting a wide range of sensors and actuators when conducting experiments.

Arduino boards, such as the Arduino Uno and the Arduino Leonardo, can easily be connected via the USB interface to a PC (with Windows or Linux) or a Mac. The Arduino Leonardo has the advantage of a built-in USB controller, which enables the Leonardo to act as a keyboard or a mouse. This opens up the possibility of not being dependent on standard software for handling measurements.

Their studies showed that an Arduino board can measure with good enough accuracy and excellent precision. An Arduino based solution can perform as well as standard equipment, while being less expensive and more flexible.


Chapter 5

Methodology

The systems approach, as described by Pahl et al. [33], was chosen as the main method for producing the results of this thesis.

The work of this thesis is based upon a problem formulated by the needs of a company, and can be regarded as an engineering-oriented problem. Since the systems approach has been applied many times to solve problems in industry, according to Pahl et al. [33], it was appropriate to choose this method. Moreover, the systems approach can easily be applied to multiple levels of subproblems. Another reason for relying on the guidelines from Pahl et al. [33] is their recommendation of a flexible adaptation of the method. The systems approach has, during the work of this thesis, been used in a flexible way, but still in accordance with the method in general. See Section 5.1 for a description of the method and how it was applied.

While the systems approach was used as the main method for producing results, a selection of methods recommended by Hevner et al. [34] was applied to evaluate the results. These methods were chosen since they are suitable for an academic context. Working with these methods is described in Section 5.1.4.

5.1 Systems approach

Figure 5.1: Steps of the systems approach.

The following sections describe the steps of the systems approach (see Figure 5.1) and how they were used during the work of this thesis. Some steps were merged, and some were renamed to shorter equivalents. Some of the new names were even more appropriate, considering that the method was used for different types of problems.

5.1.1 Problem breakdown (System studies)

During the first step of the systems approach, existing information about the system was gathered: existing documentation, the problem definition (see Section 1.3) and initial requirements (see Sections 1.1 and 1.3). The aim was to formulate detailed information about the main problem and its subproblems. By using a top-down approach, a breakdown structure was outlined in Section 6.1. This structure was extended for almost all subproblems; see Sections 6.2.1, 6.3.1 and 6.6.1.

5.1.2 Specification of requirements (Goal programme)

The purpose of this step was to specify requirements, to be used as a basis for evaluating the outcome of solving each problem (such as solution variants). Requirements were derived from the key requirements of this thesis (see R 1 - R 3 in Section 1.3). Some of these requirements, R 2.1 and R 2.2, were already introduced in Section 1.3. Additional requirements were only specified where it was considered necessary. Producing the test results (Section 6.6) did not need any formal requirements, mostly because the main purpose of running the tests was to demonstrate the possibilities of the testing tool and to analyse the behaviour of the system.

The requirements can be found in Sections 6.2.2, 6.3.2, 6.4.1 and 6.5.1. The full list of requirements is available in Appendix A.

5.1.3 Solution generation (System synthesis)

The purpose of System synthesis is to synthesise solution variants, based on the information that was acquired during the previous steps (such as the problem formulation and requirements).

Solution variants were only generated for the testing tool (see Section 6.4.2). These viable solutions were generated by considering the requirements, by reasoning and by comparing with related work (from Chapter 4). Only one of these solutions was implemented in the end.

It was not necessary, and in some cases not applicable, to come up with more than one solution for the other subproblems (see Sections 6.2.3, 6.3.3 and 6.6.2). All of these solutions were implemented. A special case was the improvement of the prototype (see Section 6.5.2), where several solutions were generated to solve different issues. All solutions were considered necessary, and consequently they were implemented.

5.1.4 Evaluation (System analysis and System evaluation)

The purpose of System analysis is to analyse the solution variants for their properties. During System evaluation, the properties of the solutions are then compared with the requirements. These steps were merged into one activity.

To support the evaluation, some methods suggested by Hevner et al. [34] were applied. Analytical evaluation was used for the documentation (see Section 6.2.4). A combination of analytical and descriptive evaluation was used for the test suite (see Section 6.3.4) and the testing tool (see Section 6.4.3). The testing tool and the test suite were also evaluated by running tests, which consisted of functional and non-functional black-box testing (see Section 6.6.2). Analytical evaluation was applied to the test results (see Section 6.6.3). The improvements, as described in Section 6.5.2, were generated from experimental evaluation of the prototype. These improvements were then evaluated in an analytical way (see Section 6.5.3).

5.1.5 Implementation (System decision and System implementation plan)

The evaluation leads to a decision on whether the solutions should be implemented. The implementation of almost all solutions is described in Sections 6.2.5, 6.3.5, 6.4.4 and 6.5.4. The implementation step was not applicable for the test results.


Chapter 6

Results

6.1 Problem breakdown - an overview

By gathering existing information about the system, i.e. existing documentation, the problem definition (see Section 1.3) and requirements (see Sections 1.1 and 1.3), an overview of the main problem and its subproblems could be outlined.

As mentioned in Section 1.3, the aim of this thesis was to provide a testing tool, together with a test suite, to the developers at Arduino Verkstad. This is regarded as the main problem, which was broken down into subproblems (see Figure 6.1). The first subproblem (P 1) is quite obvious, since both the testing tool and the test suite need to be created. These results must then be validated; hence the second subproblem (P 2). The following sections will present the results from working with the next level of subproblems (P 1.1 - P 1.3, P 2.1 - P 2.2).

Figure 6.1: Overview of problems.

6.2 Documentation of the prototype

As mentioned in Section 1.3.1, the system had very little documentation. In order to define test cases and to create a testing tool, it was necessary to gain a deeper knowledge of the system. Therefore, documentation was created. Documentation was also needed to explain the rationale behind the test suite and the testing tool.

6.2.1 Problem breakdown

The following actions were identified in order to gather information for the documentation (see Figure 6.2):


6.2.2 Specification of requirements

To fulfil the needs of documenting the prototype, the following requirements were specified:

• The documentation will clarify the purpose of each test case (R 1.1).

• The documentation will clarify the rationale of the testing tool’s design (R 2.3).

6.2.3 Solution generation

The documentation was created in iterations, as a result of reading existing documentation, talking to developers, examining the prototype and analysing the source code.

6.2.4 Evaluation

The documentation describes the main functions of the smart transducer system: it is possible to connect and disconnect modules, data from the modules is requested by the hub and redirected to the UI, and it is possible to set the output of an actuator module. These features are covered by the test suite.

By understanding the basics of the system, it is also easy to understand the simplicity of the testing tool, especially since the Arduino board (see Section 6.4.2 for details) is emulating the UI. Moreover, the data rates, which are also specified in the documentation, have been used for validation of the testing tool (and the test programs) and the results produced from running the tests.

In conclusion, the documentation fulfils the requirements (R 1.1 and R 2.3).

6.2.5 Implementation

The final result is an overview of the system, which can be found in Chapter 2.

6.3 Creation of test suite

6.3.1 Problem breakdown

The subproblem P 1.2 was broken down to additional subproblems (see Figure 6.3). First, test cases had to be written (P 1.2.1), in order to execute tests with the help of test programs (P 1.2.2).

Figure 6.3: Overview of P 1.2 and its subproblems.

6.3.2 Specification of requirements

As a minimum, the test suite should at least cover functional aspects such as connecting and disconnecting an arbitrary number of modules and setting the output of transducers. It was also necessary to cover HMI aspects of the system, in order to evaluate the performance of the system and to ensure a user-friendly experience. To be able to determine the results from performing tests, requirements had to be specified (see Table 6.1).


R 1.2: The test suite covers functional aspects of the system.
R 1.2.1: The hub can detect the connection of an arbitrary number of modules, no matter how many modules are already connected, and also notify the UI (or equivalent).
R 1.2.2: The hub can detect the disconnection of an arbitrary number of modules, no matter how many modules are already connected, and also notify the UI (or equivalent).
R 1.2.3: The hub can read data from an arbitrary number of transducer modules, no matter how many modules are connected, and also redirect this to the UI (or equivalent).
R 1.2.4: The electrical signal output of a sensor is translated correctly to a corresponding digital value on a sensor module.
R 1.2.5: The UI (or equivalent) can set the output value of an arbitrary number of actuator modules, no matter how many modules are connected.
R 1.2.6: The digital output value of an actuator module is translated correctly to a corresponding electrical signal.
R 1.3: The test suite covers non-functional aspects, e.g. performance requirements related to HMI.
R 1.3.1: The response time for connecting an arbitrary number of transducer modules is maximum 100 ms, no matter how many modules are connected.
R 1.3.2: The response time for disconnecting an arbitrary number of transducer modules is maximum 100 ms, no matter how many modules are connected.
R 1.3.3: The response time for setting the output of an arbitrary number of actuator modules is maximum 100 ms, no matter how many modules are connected.
R 1.4: The test suite covers scenarios where the hub is first connected to the UI (or equivalent), before an arbitrary number of modules is connected to the hub.
R 1.5: The test suite covers scenarios where an arbitrary number of modules are first connected to the hub, before the hub is connected to the UI (or equivalent).
R 1.6: The test suite covers scenarios where an arbitrary number of modules are first connected to the hub, before the hub is connected to the UI (or equivalent). Then another set of modules is connected to the network.
R 1.7: At least 10 modules can be operating on the network at the same time.
R 1.8: Up to 20 modules can be operating on the network at the same time.
R 1.9: More than 20 modules can be operating on the network at the same time.

Table 6.1: Requirements necessary for developing a test suite.

These requirements mainly apply to the system, but since the test suite is necessary for testing the system, these requirements also apply to the test suite.

6.3.3 Solution generation

A basic set of test cases was developed:

T 1: Connect 1 module (n modules connected)
T 2: Connect n modules (m modules connected)
T 3: Connect n modules (0 modules connected)
T 4: Disconnect 1 module (n modules remain connected)
T 5: Disconnect n modules (m modules remain connected)
T 6: Disconnect n modules (no modules remain connected)
T 7: Set the output of 1 actuator (n modules connected)
T 8: Set the output of 1 actuator (n modules connected)
T 9: Set the output of n actuators (m modules connected)
T 10: Set the output of n actuators (n modules connected)

Table 6.2: Overview of test cases.

The complete test suite can be found in Appendix B. The ID number of each test case does not reflect the chronological order in which the test cases were developed. After the testing tool had been created, two test programs (based upon test cases T 7 and T 8) were developed. The results from running these tests are described in Section 6.6. The reason for not writing more test programs is discussed in the following section (Section 6.3.4).


6.3.4 Evaluation

A basic set of tests has been written, thus satisfying one of the key requirements (R 1). In other words, this is one of the most important results of this thesis. The tests are primarily focused on automated testing, with the purpose of demonstrating the testing tool.

The test cases test the basic functionality of the prototype system (as described in Chapter 2). Functional requirements can be verified with the test cases, although there is an emphasis on performance and the HMI aspect. The expected result for all test cases is focused on a certain event occurring within a time limit. As an example, the expected result of test case 7 (T 7) dictates that the maximum response time for setting the output of an actuator module is 100 ms. Even though the functional aspect seems diminished, it is actually tested implicitly. If a module can be set, then a response will be detected and the response time will be recorded. If a module does not respond, i.e. cannot be set, then a time-out will occur.

Testing the performance is relevant when it comes to validating the I2C+ protocol. This is also relevant since the smart transducer network is supposed to be used in education within the PELARS project, which might require a decent level of user-friendliness of the system. To achieve a user-friendly system, a maximum response time of 100 ms might be a good measure. This gives some time margin for the UI to perform e.g. computation and visual representation (see Section 3.7 for HMI related facts). It is also relevant to test the system with several modules, to evaluate the behaviour and performance for different network sizes.

A limitation worth mentioning is that none of the specified tests explicitly test whether a sensor’s value can be read by the UI, or the time from when a sensor’s new value is registered (by the module itself) until it is received by the UI. The reason for not covering these aspects is that they are already covered by other tests. The software for a transducer module works practically the same, no matter if it is a sensor or an actuator. It is sufficient to test if the value of a generic transducer module can be read. Also, if the response time for setting an actuator’s output and reading the updated value is good enough, then it is also good enough for a sensor, since only half of the procedure is required.

Only two of the test cases, T 7 and T 8, were converted into test programs. They were also the first test cases written, and they were directly developed into corresponding test programs. The reason for doing so was that a prototype system was available at the time, to be used for executing tests. During further development of the test suite, it was no longer possible to occupy the prototype for testing, which led to the decision not to write any more test cases or test programs. This is also the reason why the test suite does not fulfil all of the specified requirements (see Table 6.3):

R 1.2: T 1 - T 10
R 1.2.1: T 1 - T 3
R 1.2.2: T 4 - T 6
R 1.2.3: T 1 - T 3, T 7 - T 10
R 1.2.4: (not covered)
R 1.2.5: T 7 - T 10
R 1.2.6: (not covered)
R 1.3: T 1 - T 10
R 1.3.1: T 1 - T 3
R 1.3.2: T 4 - T 6
R 1.3.3: T 7 - T 10
R 1.4: T 1 - T 10
R 1.5: (not covered)
R 1.6: (not covered)
R 1.7: T 1 - T 10
R 1.8: T 1 - T 10
R 1.9: T 1 - T 10

Table 6.3: Relation between test cases and requirements.

6.3.5 Implementation

The realisation of test case T 7 and T 8 is regarded as the implementation of the test suite.

6.4 Creation of testing tool

6.4.1 Specification of requirements

These were the initial requirements for the testing tool (specified in Section 1.3):

• The testing tool can be used for verification of similar systems (R 2.1).

• The testing tool is based upon open-source products, primarily Arduino software and hardware (R 2.2).

To simplify the development of solution variants, additional requirements were specified:

• The testing tool is capable of running automated tests, i.e. executing test programs (R 2.4).

• The testing tool is capable of measuring electrical signals (R 2.5).

• The testing tool is capable of calculating response times (R 2.6).

• The testing tool is capable of capturing and storing measurement data (R 2.7).

6.4.2 Solution generation

To fulfil requirement R 2.2, it is necessary to use an Arduino board. Since an Arduino board is capable of measuring electrical signals, via digital and analog I/O, this would fulfil requirement R 2.5. The measurement capability gives extended possibilities of measuring e.g. connection and disconnection of modules (read about the ”alive signal” in Section 6.5.2).

Using only a microcontroller is not enough, especially not if the test results should be presented in a decent way. The microcontroller must also be programmed. Using an Arduino board, it is quite common to use the Arduino IDE to upload the software, which is done from a PC. This narrows down to a solution based around a PC host, from which the Arduino is programmed, and an Arduino board, which probes the test object. This solution is similar to the test system developed by Legourski et al. [31], but different from the work by Snippe [28].

According to Schubert et al. [32], an Arduino board such as the Arduino Uno or Arduino Leonardo, is capable of measuring with accuracy and precision. Even though this applies to another field of application, it is not much different from this situation. In this case, millisecond precision is good enough.

Inspired by the work of Legourski et al. [31] and Schubert et al. [32], the Arduino board will not only detect e.g. if an actuator can be set, but also calculate response times. This fulfils requirement R 2.6.

In order to fulfil the rest of the requirements (R 2.1, R 2.4 and R 2.7), the following solution variants were synthesised.


Solution 1

To fulfil requirement R 2.1, it is a good choice to make the solution easy to adapt for other purposes. For instance, it would be an advantage if the software running on the PC host were easy to edit. This makes a good case for programming the PC host software in an interpreted programming language. Development is simplified by removing the need to compile the software.

For this particular solution, GNU Octave was chosen, which is also open source, thus fulfilling R 2.2. The advantage of Octave is that this environment is intended for tasks such as calculations, processing data and creating plots. The programming language in Octave is also useful, even for tasks such as file handling and serial communication [25].

By choosing Octave, the PC host software can consist of a complete solution for initiating tests and capturing and storing measurements. This would make the testing procedure automated, or at least semi-automated. This fulfils both requirements R 2.4 and R 2.7.

Considering requirement R 2.1, the design would benefit from being modular. This can be achieved by making the PC host ”light-weight”, with a few basic functions as described above: initiate tests, capture and store measurements. The PC host should be generic so it can be re-used for any test program, independent of hardware. The logic for executing the testing procedure, i.e. verification such as measuring response time, would then be performed on the Arduino board.

After a connection has been established between the host and the Arduino board, the test program on the Arduino board will have a master role. This means that the test monitor will be led through the testing procedure.

The system would look like this (see Figure 6.4):

Figure 6.4: Block diagram illustration of testing tool.

The Arduino board will emulate the UI (see Section 2.1 for details) as it is communicating with the hub (see Section 2.2) by sending and receiving transducer data.

The actual test program is uploaded to an Arduino board that fits the needs of the particular test. In this case, a board with more than one UART is needed, such as an Arduino Mega or an Arduino Leonardo. A great feature of the Mega is that it has many I/O pins, which can be used for extensive probing of the test object.
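A hedged sketch of the master role described above: the test program leads the test monitor through the procedure over the USB serial line, while a second UART (Serial1, available on the Mega and Leonardo) talks to the hub in place of the UI. The message strings are assumptions for illustration, not the actual protocol of the testing tool.

void setup() {
  Serial.begin(115200);    // USB link to the PC host (test monitor)
  Serial1.begin(115200);   // second UART towards the hub, emulating the UI
  Serial.println("INSTRUCT: connect the modules, then send any character");
  while (Serial.available() == 0) {}   // wait for the operator's go-ahead
  Serial.read();
}

void loop() {
  // ...run one test step against the hub via Serial1, then report a
  // measurement to the host, e.g.: Serial.println("DATA: 42");
}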


The procedure for running a test will be performed as follows:

1. Upload a test program to the Arduino board, e.g. by using the Arduino IDE.

2. Start the PC host software, follow the instructions (connect modules, etc) from the test program. During the test procedure, test data (e.g. measurements) is sent from the Arduino to the host.

3. When the test is finished, test data is stored in a file and the host software exits. Stored data can later be processed, e.g. to determine minimum, maximum and mean values, and to present the test results in the form of plots.

There is no need to design a graphical user interface (GUI) for the PC host software. Since starting the test procedure is not complicated, this application can be operated from the command line.

Solution 2

The second solution variant is exactly the same as solution 1, with the exception that Python is used instead of GNU Octave.

Python is also an interpreted programming language, which can be used for application programming in general. Functionality for mathematical purposes and for creating plots is also available via many community supported libraries [35].

Solution 3

This solution is based upon solution 2. In this case, the PC host software is only dedicated to initiating tests, and collecting and storing test data, but not to processing the data. Instead, data is stored in a CSV (comma-separated values) formatted file. Test data can then be imported into spreadsheet software such as Microsoft Excel or OpenOffice Calc. These applications have powerful features for processing data and creating plots [36, 37].

6.4.3 Evaluation

Solution variants have been developed which can be used for performing functional and non-functional testing of the smart transducer system. All solution variants would fulfil all requirements (R 2.1-2.2, R 2.4-2.7).

Solution 3 would be the simplest to implement, since there would be no need to develop software for processing data. Importing CSV formatted data into a spreadsheet application, and then applying built-in functions for data processing and plotting, is quite easy. The disadvantage of this solution is that large amounts of data would be hard to handle. The process is simpler to automate with dedicated software, which makes a case for solution 1 or 2.

Solutions 1 and 2 are both valid as ”proof-of-concept” and they are also similar to the system developed by Legourski et al. [31]. In the end, the choice comes down to personal preference, which is why solution 1 will be implemented.

6.4.4 Implementation

Based upon the evaluation, solution 1 was selected as the optimum solution. In this case an Arduino Mega was used, since this was the board that was available at the time. The implementation of the selected solution fulfils one of the key requirements (R 2), making this one of the most important results of this thesis.


Validation and verification

Requirements R 2.1 and R 2.2 were already fulfilled by generating this particular solution. Requirements R 2.4, R 2.6 and R 2.7 were fulfilled by running tests T 7 and T 8. Requirement R 2.5 is practically fulfilled by the use of an Arduino board, but was put into practice by the development of the ”alive signal” feature (see Section 6.5.2 for details).

Unit testing was applied during the development of the testing tool, to verify functionality for:

• Generating and sending data packets.

• Receiving and interpreting data packets.

• Measuring and calculating response time.

The test programs (T 7 and T 8) start by detecting the connected modules, in order to keep track of them during the testing procedure. If a module is disconnected during the test, the response time will be too high, which results in a time-out. Since this changes the conditions under which the test is executed, the test is automatically aborted. This is a simple way of validating the testing procedure.

Validation of the test results was done by comparing the response times with the theoretical transmission time, which showed that the measured values are probably correct (see Appendix C for details).
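As an illustration of such a sanity check (a sketch under assumed parameters; the actual comparison is found in Appendix C): with a standard-mode I2C clock of 100 kHz, each byte requires roughly 9 clock cycles (8 data bits plus one acknowledge bit), so transferring a full 32-character data string takes at least

\[ t_{\mathrm{transfer}} \approx \frac{32 \cdot 9~\mathrm{bits}}{100~\mathrm{kbit/s}} \approx 2.9~\mathrm{ms} \]

on the bus alone, excluding addressing and software overhead. Measured response times far below this order of magnitude would indicate a measurement error.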

6.5 Improvement of the prototype

As mentioned in Section 1.2, the prototype had some issues that would probably complicate the testing procedure.

6.5.1 Specification of requirements

The prototype had to be adjusted in order to run tests without any major issues. General improvements that could simplify the testing procedure, or aid the future UI, were considered as ”good-to-have” features:

• The hub does not fail (i.e. freeze, get stuck) during testing (R 3.1).

• The hub informs the UI (or equivalent) when a module has been disconnected (R 3.2).

6.5.2 Solution generation

Problem with length of data string

The JSON encoded data string seemed to be incomplete under certain conditions. This was discovered when the hub’s output was monitored through a serial terminal program. One module, a potentiometer module, was connected to the hub. This type of module performs A/D conversions with a 10-bit resolution, which results in an output value between 0-1023. To see if new values could be registered, the potentiometer was turned. When the output value turned into a 4-digit number, the last ”curly brace” was missing, i.e. the JSON string became incomplete (see Table 6.4).


JSON data string                   Number of characters
{”ID”:2,”Type”:”POT”,”Val”:1}      29
{”ID”:2,”Type”:”POT”,”Val”:10}     30
{”ID”:2,”Type”:”POT”,”Val”:102}    31
{”ID”:2,”Type”:”POT”,”Val”:1023    31

Table 6.4: Formatting problem when the JSON data string becomes too long.

As mentioned in Section 2.4.2, the I2C driver has a fixed buffer width of 32 characters. This means that the JSON string must fit within this constraint when data is sent from a module. When the JSON data string was encoded, only 32 characters were allocated, including a terminating NUL-character. This explained the contents of the string when the sensor’s input value was 1023. The problem was solved by allocating an extra character, i.e. a total of 33 characters for encoding JSON strings.

Still, there was another problem to solve: how can the data string always fit within the constraint of 32 characters? This was solved by shortening the parameter keys: ”Type” was shortened to ”T”, and ”Val” was shortened to ”V”. This results in a minimum length of 19 characters, without any values. A maximum of 3 characters is added for the I2C address (2-127), and a maximum of 4 characters for the module’s value (0-1023). That is a total of 26 characters, leaving 6 characters to specify the module’s type.
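A minimal sketch of how such an encoding could look on the module side (the function name and the exact formatting call are assumptions; snprintf guards against overrunning the buffer):

#include <stdio.h>

// 32 characters of payload plus a terminating NUL character.
#define JSON_BUF_SIZE 33

// Encode a module's state as a JSON string with shortened keys.
// The type string must be at most 6 characters for the result to fit.
int encodeModuleData(char *buf, int id, const char *type, int value) {
  return snprintf(buf, JSON_BUF_SIZE,
                  "{\"ID\":%d,\"T\":\"%s\",\"V\":%d}", id, type, value);
}

For example, encodeModuleData(buf, 2, "POT", 1023) yields {"ID":2,"T":"POT","V":1023}, which is 27 characters and fits well within the 32-character limit.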

Notification of disconnected module

The hub’s software already had capabilities to detect when a module connects or disconnects. Though these features were implemented, the hub never informed the UI application (or equivalent) when these events occurred. The only information that was sent from the hub was information about the current module network, i.e. data strings from transducers. It was easy for the UI application to detect a new module, simply by ”noticing” information about a module it had not seen before. Detecting that a module was no longer accessible required a more indirect approach, since a disconnected module resulted in an absence of data strings. Determining whether a module was disconnected or not was left to the UI. A more reliable solution would be if the hub informed the UI when a module has disconnected.

A simple way to implement this feature was to use the already existing data format. When a module is considered to be disconnected, the following JSON string is sent from the hub to the UI (here for a module with ID=2):

{"ID":2,"T":"DM","V":0}

The type parameter informs that the module has disconnected, where ”DM” stands for ”Disconnected Module”. The ID parameter tells which module has disconnected. The value parameter does not contain any valid value and should be ignored.

By using the same format and the same parameters, the interpretation of the incoming module data can be kept simple, compared to introducing a new set of parameters.
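On the receiving side, telling the notification apart from ordinary transducer data then only requires a check of the type field. The struct and helper below are hypothetical illustrations of this idea:

#include <string.h>

// Hypothetical parsed representation of an incoming JSON string.
struct ModuleMessage {
  int  id;        // I2C address of the module (2-127)
  char type[7];   // module type, at most 6 characters plus NUL
  int  value;     // transducer value; ignored for "DM" messages
};

// Returns true if the message is a "Disconnected Module" notification.
bool isDisconnectNotification(const struct ModuleMessage *msg) {
  return strcmp(msg->type, "DM") == 0;
}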

Removal of serious performance issues

During an early test of the testing tool, by executing a test program, serious issues with the hub and modules were discovered. The test program was supposed to send 1000 commands to each connected module, via the hub. The output from the hub was monitored in the Serial Monitor of the Arduino IDE.

Starting with one module, the hub seemed to freeze after a couple of hundred commands. This was solved by removing intentional delays, i.e. the use of ”delay()” calls, and unintentional delays, such as debug messages via the serial port and redundant code with no apparent purpose. These adjustments were made to both the hub’s and the modules’ software. Without this improvement, it would not be possible to have up to 10 modules connected at the same time.
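One common pattern for avoiding such blocking delays, while keeping any periodic behaviour, is millis()-based scheduling. The sketch below illustrates this general pattern; the helper names are hypothetical and this is not the prototype’s actual code:

// Non-blocking scheduling instead of delay() in a main loop.
unsigned long lastPoll = 0;
const unsigned long POLL_INTERVAL = 10;  // ms, assumed value

void setup() { }

void loop() {
  // Previously: delay(10);  (blocks everything for 10 ms)
  if (millis() - lastPoll >= POLL_INTERVAL) {
    lastPoll = millis();
    // pollModules();    // hypothetical: query the module network
  }
  // forwardToUI();      // hypothetical: relay data strings on each pass
}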

Implementation of ”alive signal”

The purpose of some tests is to determine whether a module (or several modules) can be recognized by the hub and the test board (the Arduino board), and also to measure the response time after a module has been connected. This implies that it must be possible to measure the time from when a module is connected until the test board receives information about the new module.

Figure 6.5: External circuit for measuring ”alive signal”.

The problem was solved by raising the level of a digital output pin (on each module) from low to high, after the module has been powered and started correctly. This signal represents that the module is ”alive”, and it is fed to a digital input pin on the test board. This makes it possible to measure the time from the first indication of this signal, regardless of the size of the network, until the last module has sent its first data string to the testing tool. This solution requires an external circuit (see Figure 6.5), besides the modules, the hub and the test board.

This solution can also be used to determine when a module (or several modules) becomes disconnected: what is the response time for detecting a given number of disconnecting modules? Since a module is powered by the network bus, it loses its power when it is disconnected, and the ”alive signal” disappears. Therefore, the response time for disconnecting modules is measured from when the ”alive signal” disappears until the hub has notified the testing tool about the last disconnected module.

Figure 6.6 describes the method in principle. Time measurement starts directly after the ”alive signal” has been raised (t1). Time is measured once more when the first data string for the last module is received by the test board (t2). The difference between these measurements is the response time for connecting a given number of modules (t_connection = t2 - t1). Almost the same procedure is used for detecting disconnecting modules: the time from when the signal is lowered (t3) until the final ”disconnected module” message arrives at the test board (t4) is the response time for detecting disconnecting modules (t_disconnection = t4 - t3).

Figure 6.6: Timing diagram of ”alive signal”.

This testing procedure assumes that only the modules that are about to be measured are connected to the circuit (as described by Figure 6.5). It is also necessary that the testing tool is configured to test the same number of modules. For instance, if the ”alive pins” of 6 modules are connected, then the testing tool must be configured accordingly.
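A sketch of how the timing method could be realised on the test board is given below. The pin number, baud rates and the parsing helper are assumptions for illustration:

// Measuring t_connection on the test board (Arduino).
const int ALIVE_PIN   = 2;   // input fed by the external circuit
const int NUM_MODULES = 6;   // must match the number of "alive pins"

// Hypothetical: parse data strings from the hub on Serial1 and return
// the number of distinct module IDs seen so far.
int modulesReported() {
  return NUM_MODULES;  // placeholder so the sketch terminates
}

void setup() {
  Serial.begin(115200);
  Serial1.begin(115200);
  pinMode(ALIVE_PIN, INPUT);

  while (digitalRead(ALIVE_PIN) == LOW) { }   // wait for "alive signal"
  unsigned long t1 = millis();                // first module powered up

  while (modulesReported() < NUM_MODULES) { } // wait for last data string
  unsigned long t2 = millis();

  Serial.print("t_connection [ms] = ");
  Serial.println(t2 - t1);   // response time for connecting the modules
}

void loop() { }

Measuring t_disconnection follows the same pattern, but waits for the alive signal to go LOW and then for the last ”DM” message from the hub.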

6.5.3 Evaluation

It may be questioned whether the observed issues had to be fixed. In fact, running tests would still be possible without the adjustments, although the tests would not be very successful. If these issues had not been fixed, it would also have been harder to discuss the validity of the I2C+ protocol and the smart transducer network.

The implementation of the ”Disconnected Module” message is not crucial, since the original UI used a time-out for each module: if a module did not send an update of its values within a certain time frame, the UI regarded that module as disconnected. However, the ”Disconnected Module” message saves computation time for the UI. It also increases the precision when measuring the response time for disconnecting modules, since the hub can detect this event earlier and inform the UI. The implementation of the ”alive signal” likewise increases the precision when measuring the response time for connecting modules, since time measurement can start directly after a module has been powered up.

Implementing the improvements fulfils requirements R 3.1 and R 3.2.

6.5.4 Implementation

Based upon the evaluation, all solutions were considered to be useful. Therefore, all solutions were implemented.

6.6 Test results

6.6.1 Problem breakdown

During the problem analysis, subproblem P 2.2 was broken down into additional subproblems (see Figure 6.7). First, test programs had to be executed (P 2.2.1), and then the results from these tests had to be processed (P 2.2.2) in order to create e.g. plots.

Figure 6.7: Overview of P 2.2 and its subproblems.

6.6.2 Solution generation

After the test results had been collected, they were processed and presented in the form of plots:


Figure 6.8: T 7 - Overview of mean and max values, up to 10 modules.

Figure 6.9: T 8 - Overview of mean and max values, up to 10 modules.

These plots summarize the results from running the tests. More plots and data can be found in Appendix C.

6.6.3 Evaluation

Running the tests from the test suite, by using the testing tool, proves the validity of the testing tool and the test suite. This also means that key requirement R 3 has been fulfilled, and that yet another of the most important results has been produced.

As mentioned in Section 6.3.4, the prototype system was only available for a brief period of time. Moreover, there were only 10 modules available, which is why the tests were limited to 10 modules.


The purpose of measuring each module multiple times is that this gives a better statistical foundation when analysing the test results. For instance, if the response time for a module is measured only once, how can we be sure that the value is not a random deviation? With more measurements, it is easier to draw correct conclusions about the system’s behaviour. That is why 1000 measurements per module was considered to be a good number.
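For reference, the mean and maximum values shown in the plots are defined as follows, where t_1, ..., t_x are the measured response times for one module:

\[ \bar{t} = \frac{1}{x}\sum_{i=1}^{x} t_i, \qquad t_{\max} = \max_{1 \le i \le x} t_i \]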

The results from running the tests, presented in Figure 6.8 and Figure 6.9, show that test cases T 7 and T 8 are successful for up to 10 modules:

• It is possible to set the output of all actuator modules.

• Maximum response time for setting the output of an actuator module is less than 100 ms.

By looking at the plots (especially Figure 6.8), it is also obvious that the system will probably suffer from performance issues if more modules are connected. Although the maximum response time differs between the test cases, the mean value is basically the same.

The total response time of each measurement includes approximately 2.5 ms for parsing the data string, which is performed by the testing tool. All data strings had almost the same length (±1 character).

The tests have been executed with a maximum of 10 modules (n_max = 10), and each module has been exposed to 1000 measurements (x = 1000). Details about the test cases can be found in Appendix B.

Though the test cases differ in how the measurements are made, it is possible to draw the following conclusions from the test results (see Appendix C for details):

1. It is possible to set the output of a varying number of actuator modules (up to 10 modules).

2. The response time for setting the output of each actuator module is less than 100 ms (up to 10 modules).

3. The measurements are reasonable, considering factors such as the size of the data packets and the transmission time between the systems. Therefore, the measurements are considered to be accurate.

4. The system will probably perform much worse if more modules are connected, due to the increasing data traffic caused by the transmission of excessive data packets.

References
