
Integration between Optima and Farkle and verification with a use case about file storage stack integration in a quality of service manager in OSE





(1) LiU-ITN-TEK-A--11/023--SE. Integration between Optima and Farkle and verification with a use case about file storage stack integration in a quality of service manager in OSE. Daniel Digerås, 2011-05-06. Department of Science and Technology, Linköping University, SE-601 74 Norrköping, Sweden. Institutionen för teknik och naturvetenskap, Linköpings universitet, 601 74 Norrköping.

(2) LiU-ITN-TEK-A--11/023--SE. Integration between Optima and Farkle and verification with a use case about file storage stack integration in a quality of service manager in OSE. Master's thesis carried out in Electrical Engineering at the Institute of Technology, Linköping University. Daniel Digerås. Supervisors: Barbro Claesson, Detlef Scholle. Examiner: Ole Pedersen. Norrköping, 2011-05-06.

(3) Copyright. The publishers will keep this document online on the Internet - or its possible replacement - for a considerable time from the date of publication barring exceptional circumstances. The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/. © Daniel Digerås.

(4) Integration between Optima and Farkle and verification with a use case about file storage stack integration in a quality of service manager in OSE. Master's thesis carried out in Electronics Design at the Institute of Technology, Linköping University, by Daniel Digerås. LiTH-ITN-EX--YY/XXXX--SE. Supervisors: Ole Pedersen, ITN, Linköpings universitet; Barbro Claesson, Enea AB; Detlef Scholle, Enea AB. Examiner: Ole Pedersen, ITN, Linköpings universitet. Norrköping, 19 April, 2011.

(5)

(6) Abstract. iFEST is an EU project aimed at streamlining product development by creating a standardized tool-chain. This work looks at how debug and test tools can be integrated with each other. The goal is to provide input to the iFEST project on how such an integration should be done. The two products Optima and Farkle, both developed by Enea, are used for the integration. Similar integration projects are evaluated to find a possible solution and a good design. A basic design for the integration is made based on Eclipse, previous work and an analysis of the tools Optima and Farkle. The design is implemented and verified with a use case. The use case is about integrating the layers in a file system stack.

Sammanfattning. iFEST is an EU project that aims to make product development more efficient by creating a standardized tool chain. This work takes a closer look at how a debug tool and a test tool can be integrated with each other. The goal is to provide input to the iFEST project on how such an integration should be done. Enea's products Optima and Farkle are used as a concrete case. Similar works are evaluated to find a possible solution to the integration problem. Based on Eclipse, previous work and the analysis of the tools, an integration design is produced. The design is implemented and the work is verified with a use case. The use case is about integrating the layers in a file system stack.

(7)

(8) Acknowledgments. Thanks to the master's thesis workers at Enea, who provided valuable help and good company.

(9)

(10) Contents

1 Introduction 3
1.1 Thesis background 3
1.2 Problem statement & Power Aware Flash File System use case 3
1.3 Method 4
1.4 Delimitations 5

2 Background 7
2.1 Introduction of meta models 7
2.2 Short introduction to the V project model 8
2.3 Short introduction to OSE developed by Enea 9
2.4 Overview of the Eclipse Development Platform 9
2.4.1 The internal structure of Eclipse 9
2.4.2 Eclipse Plugin architectural changes 12
2.5 Optima product overview 12
2.5.1 Optima system model 12
2.5.2 Process tracing and profiling 14
2.6 Farkle OSE test support framework 14

3 Theory 17
3.1 Tool integration 17
3.1.1 Levels of tool integration 18
3.2 Tool integration definitions 19
3.2.1 Two different angles of tool integration 19
3.2.2 Tool integration levels and their properties 19
3.2.3 Tool integration definitions conclusions 21
3.3 Inter-tool communication technologies 22
3.3.1 Software communication background 22
3.3.2 Tool integration experiences from the BOOST project 22
3.3.3 Experiences with Automated Glue/Wrapper code generation 23
3.4 Eclipse and tool integration 23
3.5 Farkle integration into Eclipse 24
3.6 Artifacts and data shared by test and debug tools 24
3.6.1 Artifacts shared by test and debug tools 24
3.7 Data integration for test and debug tools 25
3.8 Presentation, Control and Process Integration between test and debug tools 25
3.8.1 Analysis of debug tools and Optima 25
3.8.2 Analysis of test tools and Farkle 25
3.8.3 Presentation Integration between test and debug tools 26
3.8.4 Control Integration between test and debug tools 26
3.8.5 Process Integration between test and debug tools 27
3.9 Integration tightness between test and debug tools 27
3.10 Conclusions from theoretical studies 27

4 Design of Optima's and Farkle's tool communication 29
4.1 Overview of the design 29
4.2 Artifact, action and data collection in the Connect layer 30
4.2.1 Artifacts stored inside Connect layer 30
4.3 Graphical user interface 31
4.3.1 Extendability of the graphical interface 32
4.4 Test Provider design 32
4.5 Actions design 32
4.6 Data access and loose coupling 32
4.6.1 Debug tool awareness in Farkle 33
4.7 Test tool awareness in Optima 33

5 Implementation of Optima's and Farkle's tool communication 35
5.1 Overview of the implementation 35
5.2 Testbench plugin 35
5.2.1 Testbench plugin overview 35
5.2.2 Testbench Interfaces Classes 35
5.2.3 Testbench Extension Points 38
5.2.4 Testbench Properties Class 38
5.3 TestProjectPlugin plugin 38
5.4 OptimaTestbench plugin 39
5.5 Farkle plugin 39
5.5.1 Farkle plugin configurations 39
5.5.2 Test parsing for Testbench 39
5.5.3 Limits in the design of the Farkle Plugin 40
5.5.4 Farkle Test Report parse Class 40
5.6 PythonTestProvider plugin 40
5.7 OptimaMetricsProvider Plugin 40
5.8 Farkle Metrics Proxy Python implementation 41

6 Use case: Power Aware Flash File System 43
6.1 Use case introduction 43
6.1.1 Quality of Service and graceful degradation 44
6.1.2 Flash fundamentals 44
6.1.3 Memory Device Driver 44
6.1.4 Flash Translation Layer 44
6.1.5 Journaling Extensible File system Format 45
6.2 Use case divided into smaller parts 45
6.3 Case 1: File system stack integration with combined function test and code debug 45
6.3.1 Results from use case 1 46
6.4 Case 2: QoS tests with Signal traces 46
6.4.1 Results from use case 2 47
6.5 Case 3: File system stack memory test with debugger metrics 47
6.5.1 Results from use case 3 48
6.6 Power measurements made as use case completion 48

7 Conclusions 49
7.1 Conclusions from the use cases 49
7.1.1 Conclusions from the use case 1 49
7.1.2 Conclusions from the use case 2 49
7.1.3 Conclusions from the use case 3 50
7.2 Requirements met 50
7.3 Overall Conclusions made 52

8 Future work 55

A Conclusions and Future work in the Use case 57

Bibliography 58

(14) Acronyms

API  Application Programming Interface
BOOST  Broadband Object-Oriented Service Technology
CDT  C/C++ Development Tooling
CLI  Command Line Interface
EMF  Eclipse Modeling Framework
FTL  Flash Translation Layer
GUI  Graphical User Interface
iFEST  industrial Framework for Embedded Systems Tools
JEFF  Journaling Extensible File system Format
MOF2  Meta-Object Facility version 2
MDD  Memory Device Driver
MTD  Memory Technology Device
OSE  Operating System E
OSGi  Open Service Gateway initiative
QoS  Quality of Service
UI  User Interface
UML  Unified Modeling Language
XML  eXtensible Markup Language

(15)

(16) Chapter 1. Introduction

1.1 Thesis background. In 2009/2010 the European Union approved funding for a research project named industrial Framework for Embedded Systems Tools (iFEST). The goal is to simplify the exchange of tools and ultimately reduce development time by 20%. One of the key components lies in the integration of the different tools in the tool chain. This includes tools sharing artifacts and supporting traceability. Enea has, amongst others, two tools named Optima and Farkle. The two Enea tools have in previous studies been evaluated separately for future use in iFEST, but the tools have not been tested together and have not interacted with each other in the iFEST context. This work is a part of the iFEST research.

1.2 Problem statement & Power Aware Flash File System use case. This thesis will focus on the possibility of tool integration between Optima and Farkle. What types of data (artifacts) can be shared between the two tools needs to be identified. How such an integration should be made also needs to be addressed. Further integration into iFEST should also be considered. The questions can be summarized as:

Q1 What tool control facilities and artifacts can be shared between the two tools?
Q2 Is there a need to further integrate debug and test tools?
Q2-1 Which artifacts between the two tools should be coupled together?
Q2-2 How should the integration be done?

(17) 4. Introduction

Q3 Which possibilities are there to, e.g., exchange one of the tools for another and keep the integration?
Q4 What further development of the tools is needed to make them iFEST compliant?

This thesis will further validate the integration between Optima and Farkle with a use case with a Power Aware Flash File System in OSE running on the development platform Freescale i.MX31 1. The use case is a software development scenario. The four layers Journaling Extensible File system Format (JEFF), Flash Translation Layer (FTL), Memory Technology Device (MTD) and physical flash (see fig. 1.1) will be implemented into Enea's embedded operating system Operating System E (OSE) and into a Quality of Service (QoS) controller.

Figure 1.1. Block description of the file stack (a QoS manager on top of JEFF, FTL, MTD and physical flash)

1.3 Method. The thesis work will consist of two parts. The first part consists of a literature study focusing on tool integration, together with preparations for the use case. The second part will focus on design and implementation based on the requirements from the literature study. The implementation and design will be validated with a use case, which in turn supports the academic study.

1 http://www.freescale.com/webapp/sps/site/prod_summary.jsp?code=i.MX31
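The stack in figure 1.1 can be pictured as layers that each delegate to the layer below. A minimal sketch, in which the Layer class and the write operation are invented for illustration only; the real layers are OSE components:

```python
# Illustrative model of the four-layer file storage stack in figure 1.1.
# A write request enters at the top (JEFF) and travels down to the
# physical flash; each layer records its name to show the traversal.

class Layer:
    def __init__(self, name, below=None):
        self.name = name
        self.below = below          # the next layer down, or None for flash

    def write(self, data, trace):
        trace.append(self.name)     # note that the request passed this layer
        if self.below is not None:
            return self.below.write(data, trace)
        return data                 # reached the physical flash

stack = Layer("JEFF", Layer("FTL", Layer("MTD", Layer("flash"))))
trace = []
result = stack.write("block0", trace)
print(trace)    # ['JEFF', 'FTL', 'MTD', 'flash']
```

The point of the layering is that each layer only knows the interface of the layer directly below it, which is also what makes the stack a natural integration target for the use case.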

(18) 1.4 Delimitations. 5. The length of the thesis work is 20 weeks. The first ten weeks will contain an academic literature study as well as preparations for the use case. In the remaining weeks the implementation and use case will be performed. The study and implementation will be limited to the integration between Optima and Farkle. Since iFEST is a framework in early development, no effort will be made to make the tools iFEST compliant.

(19)

(20) Chapter 2. Background

2.1 Introduction of meta models. The notion "meta model" is somewhat abstract but can be described as a model describing a model. The model can be almost anything, but if one thinks of, for example, a code function as a model, the meta model describes what the artifacts and actions of the function model look like. Example 2.1 shows Unified Modeling Language (UML) meta models.

M3 Meta-meta model (MOF2): Class, Attribute
M2 Metamodel (UML): Class (<<instanceOf>> the M3 concepts)
M1 Function description: Function +action: String (<<instanceOf>> the M2 Class)
M0 Actual function: String action(){ ... }

Figure 2.1. Meta model illustration
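The four levels can be mirrored in an ordinary programming language, where an object is an instance of a class and a class is in turn an instance of a metaclass. A small Python sketch of the analogy; the Function class and its action attribute are only illustrative, not taken from the thesis:

```python
# Rough analogy to the M0-M3 levels of figure 2.1, using Python's own
# instance / class / metaclass hierarchy.

class Function:                   # M1: the model -- a class with an attribute
    def __init__(self, action):
        self.action = action      # corresponds to "+action: String"

f = Function("save")              # M0: an actual function instance

print(type(f))                    # f is an instanceOf Function (M1)
print(type(Function))             # Function is an instanceOf type (M2)
print(type(type))                 # the meta-meta level describes itself (M3)
```

The self-describing top level mirrors how MOF2 is defined in terms of itself.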

(21) 8. Background

Example 2.1: UML meta model. UML is a meta model language, whilst an actual instance of UML is a model. Figure 2.1 shows the basic concept of a model and its relation to meta models. At level M0 the actual implementation of the function exists. M1 is the level where the function is described as a model, in our case as a class with an attribute. M2 is the language for describing the model. At the highest level, M3, is the language in which the M2 modeling language is described. One such M3 language is Meta-Object Facility version 2 (MOF2), which could be used to describe UML. [1]

With meta modeling it is possible to transform functions between environments or to integrate different models. Heiko Kern and Stefan Kühne have used meta models to create an interface that converts Microsoft Visio models into Eclipse Modeling Framework (EMF) models. An M2-level transformation based upon an M3-level mapping transforms the data model and the Microsoft Visio stencil into an Eclipse EMF metamodel. An M1-level transformation based upon the M2-level transformation then transforms the Visio model into an Eclipse EMF based model. The work shows the possibility of using model transformations to move models from one modeling environment to another. The approach is useful in building tool-chains and in reusing models and model operations. [2]

2.2 Short introduction to the V project model. The V-model is a project model used for developing products. It is a common project model and is also used as a template for the iFEST project. The model is designed to reduce complexity and show the steps in product development. The V-model starts with a Project Definition phase, in which the outline and design of the product are created. At the bottom of the model the implementation takes place; in a software development project this is where code is written, and the debugger is used during this stage. When the implementation is finished, test and verification take place. If the testing fails at any point, the project reverts to the corresponding state in the Project Definition phase. Figure 2.2 displays the stages of the V-model. The work in this thesis focuses on the integration between debug tools, used in the implementation phase, and test tools, used in the integration, test and verification phase.

(22) 2.3 Short introduction to OSE developed by Enea. 9.

OSE is a signal-based real-time operating system developed by Enea. OSE is based on a microkernel, and its most fundamental building blocks are Processes. A Process executes the user code. To be able to communicate with other processes, OSE introduces Signals, which are sent between processes. Every Process has its own Signal queue, and signal passing is handled by the microkernel. [3]

2.4 Overview of the Eclipse Development Platform. The Eclipse development platform may be thought of both as an extensible plugin platform with large integration capabilities and as a development platform. At its core, Eclipse is a small platform (runtime) with a set of plugins on top. The plugin architecture is the key to Eclipse, because it makes the implementation of new plugins easy. The result is that many developer and other productivity applications are based upon the Eclipse platform. There are two ways of distributing functionality in Eclipse. The first is to distribute the plugins separately; Eclipse has a mechanism called Software Suites that makes plugin management easy. The second is to ship the plugins with a branded version of Eclipse.

2.4.1 The internal structure of Eclipse. Eclipse introduces a few concepts shared between all plugins. Figure 2.3 shows the main concepts: Workbench, Workspace, Team Support, Help and Plugins. Inside the Workbench reside JFace and SWT, which are libraries for creating the graphical components used by Eclipse. The JFace and SWT libraries are outside the scope of this thesis and will not be discussed any further. [4]
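The signal-based communication of section 2.3 can be mimicked with plain queues: each process owns a signal queue, and sending a signal means placing it in the receiver's queue. A toy sketch; the names Process, send and receive are invented here and are not the OSE API:

```python
# Toy model of OSE-style signal passing: every Process owns a signal
# queue, and delivery means appending to the receiver's queue (in OSE
# this is done by the microkernel).
from collections import deque

class Process:
    def __init__(self, name):
        self.name = name
        self.queue = deque()            # each Process has its own Signal queue

def send(signal, sender, receiver):
    receiver.queue.append((sender.name, signal))

def receive(process):
    return process.queue.popleft()      # signals are consumed in FIFO order

producer = Process("producer")
consumer = Process("consumer")
send("DATA_READY", producer, consumer)
msg = receive(consumer)
print(msg)    # ('producer', 'DATA_READY')
```

The asynchronous, queue-based style is what later makes it possible for external tools (such as Farkle tests) to exchange signals with OSE processes.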

(23) 10. Background

Figure 2.3. Eclipse Platform architectural overview

Workbench. The Workbench is the component inside which all graphical user interfaces in Eclipse are placed. The Workbench handles the general layout and component placement for Eclipse. Eclipse has a couple of important graphical interface concepts. The smallest placeable graphical element is called a view. A view can hold any piece of information; the Project Navigator, Outline and Console are three examples of views. The views can be moved around the sides of the window and stacked. The views surround the editor, which is where the user's main attention is. Normal use for the editor is to display and edit code, but plugins can provide their own editors to display custom layouts. An editor does not need to be writable. At the top, an action bar and the menu bar exist. They are context sensitive and can change depending on the type of edit window showing at the moment. An example of a normal window with its components is illustrated in figure 2.4. A default set of views and editors is part of a Perspective. A Perspective is used to set a specific set of views and actions depending on the task at hand. Debugging and code editing need different views, and with two Perspectives the layout can quickly be switched.

Workspace. The Workspace is a storage area for all files that need persistent storage. The files are grouped together in Projects. A Project often conforms to physical boundaries such as a software product. In plugin development, each project represents a single plugin. Each Project can have a Nature, which brings a definition of what the Project contains. The Workspace often uses, but is not limited to, the local file system for storing files. Through software plugins the physical file storage can be on virtually any medium that can contain information, for example a database. The Workspace also keeps track of changes in files, which plugins can track, and stores configuration. Example 2.2 gives a scenario of what the Workspace can look like.

(24) 2.4 Overview of the Eclipse Development Platform. 11.

Figure 2.4. Eclipse Platform GUI

Example 2.2. Let's say a developer is creating an enterprise software application written in Java, whose functionality is documented with LaTeX 1. The software application would then be a Project. The Project in turn would be marked with a Java Nature. This tells Eclipse to handle that Project in a Java-specific way: library paths and Java source code compilers are added to the project. The documentation is another Project with a LaTeX Nature. Eclipse does not have any LaTeX support built in, but it can be added with plugins.

Team Support. Eclipse is able to place projects under version and configuration management with an associated team repository. The Team Support defines extension points and a provider API that allow software plugins to provide new types of repositories as well as means of utilizing a repository and its functionality.

Example 2.3. In example 2.2 a developer is developing an enterprise software application. The developer is however not alone; there is a group of developers working on the same development project. All developers are using a versioning system. With a plugin for that

1 LaTeX is a typesetting system; this report is written with LaTeX.

(25) 12. Background. versioning system, the developers can work on the same Project in Eclipse at the same time. Changes are submitted to a software repository, and tasks to work on are propagated to all developers. Conflicts that arise when two developers edit the same line are handled when the last developer tries to commit his work.

Help. Eclipse has a built-in help mechanism that combines help from different sources into one central documentation. All the content is in turn retrievable from a built-in web server.

2.4.2 Eclipse Plugin architectural changes. From version 3, Eclipse uses the Open Service Gateway initiative (OSGi) framework as well as an older plugin architecture where plugins are probed at launch time. The advantage of OSGi is that modules can be probed and loaded at runtime, whilst Eclipse native plugins have to be probed at startup. Eclipse native plugins have better maturity and are better supported in the Eclipse plugin development toolkit. [5]

2.5 Optima product overview. Optima is a debug tool developed by Enea targeting the OSE platform. Currently Optima is built as a plugin package for Eclipse and is distributed both as a complete package with Eclipse included and as separate plugins for integration into an existing Eclipse environment. Optima is divided into two major parts, a Debugger and a Log Analyzer. The Debugger can handle live debugging and analysis of the target platform as well as post mortem debugging 2. The Log Analyzer analyzes logs to give metrics about the event process. Depending on the input, the Log Analyzer will give detailed information about which processes were running, which signals were sent, and when the signals were sent. The data can be displayed as timelines, Gantt charts and text logs, to name a few. [6]

2.5.1 Optima system model. Optima is built upon the Eclipse platform but extends outside the platform as well. Figure 2.5 shows the system model for Optima. Optima uses Eclipse C/C++ Development Tooling (CDT) 3, which is a set of plugins that gives Eclipse capabilities

2 Post mortem debugging is offline debugging where the debugger uses memory and stack dumps as the debugging source.
3 http://www.eclipse.org/cdt/

(26) 2.5 Optima product overview. 13. to develop C and C++ software. Optima provides plugins to Eclipse alongside a command line code debugger used by CDT. Optima communicates with the OSE target through a TCP/IP connection via a program manager and a run-mode manager. The managers are not explicitly a part of Optima but a part of OSE, and can thus be used by other software as well.

Figure 2.5. Optima system model overview

Optima also provides public classes that can be used to control Optima. The functionality is divided into six packages.

• com.ose.event.format contains readers and writers for the eXtensible Markup Language (XML) action and event files. The package also includes readers and writers for event dump files and a converter between event XML files and event dump files.

• com.ose.gateway is a package that contains an OSE Gateway API client. The OSE Gateway client can be used by any Java software to communicate with an OSE Gateway server. The communication is asynchronous and does not depend on Eclipse. The package contains the Java classes needed to connect, attach, and send and receive signals.

• com.ose.gateway.server contains the OSE Gateway server Application Programming Interface (API). The server can be used to supply own services or to encapsulate the interface in some other physical transport than TCP/IP, for example a serial connection.

• com.ose.pmd.editor contains the Command Line Interface (CLI) dump editor and utility classes used by dump editors.

• com.ose.prof.format contains readers and writers for XML process settings and profiling report files. The package also includes converters between profiling report XML files and dump files.
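The kind of reduction the Log Analyzer performs, from an event log to per-process metrics, can be sketched in a few lines. The log format and event names below are invented for the example and are not Optima's actual trace format:

```python
# Illustrative sketch of a log-analyzer-style metric pass: a trace log
# of (timestamp, process, event) records is reduced to per-process
# event counts, the kind of data a timeline or chart could be built from.
log = [
    ("0.10", "procA", "SIG_SENT"),
    ("0.12", "procB", "SIG_RECEIVED"),
    ("0.15", "procA", "SIG_SENT"),
]

metrics = {}
for _timestamp, process, event in log:
    per_process = metrics.setdefault(process, {})
    per_process[event] = per_process.get(event, 0) + 1

print(metrics)  # {'procA': {'SIG_SENT': 2}, 'procB': {'SIG_RECEIVED': 1}}
```

The interesting design point for the thesis is that such metrics, once extractable through an API, can be consumed by a test tool as well as by the debugger's own views.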

(52) 14. Background

• com.ose.system is the largest package and contains the OSE system model. It models OSE concepts like gates, targets, segments, pools, heaps, blocks and processes in a hierarchical tree structure. All data Optima receives from the target is extractable through the API. With com.ose.system, Optima gains some scripting capabilities in the sense that a third-party software could control Optima.

2.5.2 Process tracing and profiling. Alongside a pure code debugger, Optima provides facilities to view the system state of OSE and its processes. The process state and the signals in a process queue can be observed. Optima can also trace events in OSE such as messages sent and received, process swaps and memory allocation. To set up an event trace, a special action file in XML is parsed and sent to the OSE target. Traces can be performed at system, block or process level. Optima also has the ability to profile CPU usage for processes, blocks or the whole system, to profile heap usage, and to perform other customized profiling.

2.6 Farkle OSE test support framework. Farkle is a test support framework developed by Enea. Farkle targets the OSE platform. Figure 2.6 describes the tool and the surrounding artifacts needed and generated.

Figure 2.6. Schematic over the Farkle tool chain (the Farkle class library and Farkle Python classes, the offline tool sigpa with fout.sig and fsigs.py, and testfile.py connecting to a process on the device under test)

Farkle is written in Ruby and Python, whilst the test executions take place in Python. The Farkle Python classes include means of communication through

either OSE Gateway or Linx4 and classes converting C signals into Python classes. Figure 2.6 illustrates the parts of Farkle. To be able to use the signals in Python, the tool sigpa is used on an OSE signal file to generate Python classes into a Python class file. The generated classes extend the Farkle Python class signal.py, which includes logic for converting C-structured data into Python variables.

4 Linx is an open source inter-process communication service [7].
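The kind of C-to-Python conversion that signal.py performs can be illustrated with Python's standard struct module. This is only a conceptual sketch: the signal layout, struct contents and function name below are invented for illustration and are not part of Farkle.

```python
import struct

# A C signal such as:
#   struct ExampleSig { uint32_t sig_no; int32_t value; char name[8]; };
# can be unpacked from its raw byte representation into Python variables.
C_LAYOUT = "<Ii8s"  # little-endian: uint32, int32, 8-byte char array

def decode_example_sig(raw: bytes) -> dict:
    sig_no, value, name = struct.unpack(C_LAYOUT, raw)
    return {
        "sig_no": sig_no,
        "value": value,
        # strip the C string's NUL padding
        "name": name.split(b"\0", 1)[0].decode("ascii"),
    }

raw = struct.pack(C_LAYOUT, 100, -5, b"ping\0\0\0\0")
print(decode_example_sig(raw))
```

A generated signal class would wrap this kind of unpacking behind named attributes, so a test script can read reply.sig_no instead of indexing into a tuple.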


Chapter 3

Theory

3.1 Tool integration

Before reasoning about tool integration, it is necessary to define what a tool is. General examples of tools used in hardware development are everything from tools for drawing mechanical parts to source code compilers. Tools can share artifacts, and mostly (but not exclusively) artifacts generated by one tool are used as input for another. It is not uncommon that tools are built upon other tools. A good example of tools using tools is Optima: Optima is built upon Eclipse, Eclipse is built on Java, and Java needs a computer. Example 3.1 gives a scenario where tools are used in relation to each other. The example also shows how artifacts can be passed between tools.

Example 3.1: A software development tool chain

When writing a software program, the project is often split among several files. The developer has two source code files: mainfile.cpp and enterprizeclass.cpp. When the files are compiled, two new files are produced: mainfile.o and enterprizeclass.o. These files, called object files, contain executable code that the computer processor can understand. The files hold the parts of the program, but do not individually form a program. To create a single complete executable, the developer links the two files together with a linker. The result is the executable application my-application.exe. To automate the process a third tool can be used. One such tool is GNU Make1. [8, Ch. 9]

1 Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files.

3.1.1 Levels of tool integration

Tools can be integrated to different extents. According to Anthony I. Wasserman [9] there are five types of integration.

(PlI) Platform Integration For tools to be able to operate together they need to be on the same platform. The easiest way of looking at a platform is as a computer running an operating system where the tools run. However, in distributed environments the tools don't necessarily need to be on the same computer or even on the same operating system. To solve the platform integration one can see the distributed system as a virtual platform where the tools operate.

(PrI) Presentation integration If platform integration is how the tools conform to a platform, presentation integration is how they conform to the user. To be able to orientate in different tools it helps if they are built to look and function in the same way. That doesn't imply that they should supply the same functions, just that the look and feel should be the same. A tool with an alien interface is much harder to learn and understand than one that uses known User Interface (UI) concepts.

(DI) Data integration Tools work on artifacts, either to manipulate them or to create new artifacts. In the simplest case this sharing of artifacts can be done through pipes and file stores. For greater flexibility one wants to let the tools share a data store such as a database. With a database, tools have easy access to each other's artifacts and the tool chain will be more flexible.

(CI) Control integration When two tools share the same artifacts it is desirable for the tools to be able to notify each other when one tool changes an artifact. Without any form of control integration it's up to the user to identify and manually run each tool when some content changes. With an improved level of control integration some of the processes can be automated when an artifact is updated.
(PI) Process integration If the other types of integration look at certain aspects from the tool perspective, process integration looks from the perspective of the organization. It is important that the process used conforms with the workflow of the tools. Certain tools in turn can be used to help with the process and organize artifacts and requirements. The process management tools need to regard the other types of integration.

The tools can in turn be divided into vertical and horizontal tools. Horizontal tools are tools that are used during the whole development process. Such tools can handle requirement traceability, documentation or project management. The vertical tools are the tools used in separate stages. Such tools can be design or coding tools. The source code compiler and object linker used in example 3.1 are vertical tools. Make in the same example is a horizontal tool.

Since Anthony I. Wasserman's article was published there has been much development in the field of tool integration, and new standards have emerged. With the introduction of the internet, applications are no longer bound to one machine. Development approaches client-server applications, and the use of web services is becoming more widespread. One such system is Jazz2, where life cycle management is handled on a centralized web server. The general trend in software development is towards advancements in middleware, development of intelligent devices and multiple development platforms. [10]

3.2 Tool integration definitions

Anthony I. Wasserman defined the earlier mentioned five levels of tool integration. Ian Thomas and Brian A. Nejmeh further specify the concepts of tool integration. They believe that tool integration isn't a property of a single tool, but its relationship with other elements in the environment. [11]

3.2.1 Two different angles of tool integration

The discussion of tool integration can be viewed from two different angles. The first angle is that of the environment's users. Tool integration from a user's point of view is a seamless tool collection. The second angle is that of the environment's builders. Tool integration from the builder's point of view is the amount of work it takes to give the user a seamless tool collection.

3.2.2 Tool integration levels and their properties

With the exception of Platform Integration (PlI), Ian Thomas and Brian A. Nejmeh have added level properties to all levels of integration in section 3.1.1.

(PrI) Presentation Integration properties The goal of PrI is to reduce the user's cognitive load when using tools. The load is reduced by reducing the number of presentation and interaction paradigms in the environment. Two properties are identified.

Appearance and behavior Appearance and behavior refer to how similar in looks and behavior the tools are.
If the user's experience of one software tool can be applied to another, the tools have fulfilled the requirements of appearance and behavior.

2 http://jazz.net/about/about-jazz-vision.jsp

Interaction paradigm The interaction paradigm refers to how well metaphors and mental models are integrated. Metaphors and mental models can refer to how different tools represent how information is stored. A tool that uses a database concept of tables and rows is not well integrated with another tool which uses catalogs and files.

(DI) Data Integration properties DI refers to both persistent and non-persistent data and how it is shared between software tools. The overall goal with DI is to keep the information consistent. Data integration between two software tools is however not relevant if the data is disjoint. Five properties are defined for DI.

Interoperability Interoperability refers to how two software tools can share common data. Software tool A might need the data produced by software tool B. If the tools use the same semantics and syntax, the tools are Interoperable. Interoperability can however be seen from both the builder's and the user's view. To the user, two tools can be Interoperable despite different data structures; an automatic conversion tool could create Interoperability from the user's point of view.

Non-redundancy The Non-redundancy property refers to the amount of redundant data two software tools have. For the two tools to be well integrated they need to have little redundant data.

Data consistency The Data consistency property refers to the semantic constraints between data. Two software tools are well integrated if they adhere to the semantic constraints of the data they are operating on. If the first tool changes the data in file A, semantics might limit the range for data in file B. The second tool should not generate data that is outside the data range in file B.

Data exchange The Data exchange property refers to the exchange of data between tools. The tools need to agree over structure and semantics of the data. In contrast to Interoperability, Data exchange also refers to non-persistent data such as copy-and-paste.
For two tools to be integrated the tools should be able to share non-persistent data.

Synchronization The Synchronization property refers to the ability for tools to synchronize non-persistent data. The data can, amongst other things, be menu choices and operating modes. A tool is said to be well integrated if all non-persistent data can be shared.

(CI) Control Integration properties Control integration is needed to support flexible function combinations. In the ideal scenario all tools expose their functions to the environment. Two properties can be identified for CI.

Provision The Provision property refers to how well a software tool's functions are used by other software tools. One such scenario is a text edit tool used by another software tool. The tools are said to be well integrated if they provide functionality other tools require and use.

Use The Use property refers to the extent to which a tool uses other tools' functionalities. Use is the opposite of Provision. A tool that appropriately uses the functionality is considered well integrated with respect to Use.

(PI) Process Integration properties The process is divided into three entities: process step, process event and process constraint. A process step is where work is done. A process event is something that is triggered during a step. A process constraint is something that restricts the process. PI is divided into these three properties.

Process step The process step property refers to how well the tools conform to the process step and how well they help other tools in the same step. If a tool helps with the achievement of the process step and does not make it harder for other tools to reach their goals, it is well integrated.

Event The Event property refers to how well the tools handle events and their preconditions. Tools that handle event notification consistently are considered well integrated with respect to Event.

Constraint The Constraint property refers to how tools handle constraints. A tool can be constrained by other functions. Tools can also constrain other functions. Constraints should not be confused with Data consistency: whilst Data consistency refers to the data, Constraint refers to the process.
Tools that make similar assumptions about the constraints are considered to be well integrated with respect to the Constraint property.

3.2.3 Tool integration definitions conclusions

The properties defined are independent of the platform technology and conform to object oriented environments. Instead of adapting the environment to the tools, the tools should be modified so they can use the environment in the best way.

3.3 Inter-tool communication technologies

3.3.1 Software communication background

The concept of communicating between processes isn't new. Posix started its work in 1980 under the name /usr/group, which in 1985 aligned its work with IEEE. As of 1988 Posix is an IEEE standard. Posix specifies inter-process communication on far too low a level to be feasible for tool communication in a heterogeneous environment. [12, 13]

3.3.2 Tool integration experiences from the BOOST project

The Broadband Object-Oriented Service Technology (BOOST) project is a part of the RACE project 2076, which is a part of the European Community's Intelligence in Services and Networks. The aim of the BOOST project is to support the development of telecommunication services. The BOOST project evaluates different approaches to tool integration. The approaches evaluated are tool integration by encapsulation in an existing environment and direct tool-to-tool integration. The use of Tool Control Language (Tcl), a language to increase tool programmability, is discussed by the BOOST project. [14]

Integration by encapsulation The encapsulation is made into an environment called EAST. EAST provides a data repository, a graphical interface and functions to encapsulate Unix tools. The encapsulation results in a uniform way of accessing the different tools. The drawback is that EAST and Unix tools access underlying data differently. For a Unix tool to access the data, its source code has to be modified. The interaction paradigm also differs between EAST and the tools encapsulated.

Direct tool-to-tool integration The first direct tool-to-tool integration is to build a Pay Per View TV service. The integration is achieved by exploiting the two tools' openness. The result was in general terms an excellent integration. The interaction is done with C code. The integration is however tool specific and in some cases even version specific.
Other problems are poor integration abilities with 3rd party tools for which the source code is not available.

Programmable tools with Tcl Tcl was invented in an attempt to provide a standard way of programming tools. Tcl is able to use extensions; one such extension is Tk, an extension for X3 and

3 X is a graphical server in Unix which software connect to and draw graphics; almost all modern Linux desktops use X for input and output.

the Graphical User Interface (GUI) library Motif. With Tcl/Tk the GUI is fully customizable and the tool is programmable. Two tools called DEMON and IE in a chain are tested. The two tools work in sequence. The databases for each tool do not need to contain synchronized data. A data export is performed after the first tool is finished and transformed into Tcl commands to be executed for the other tool. The use of Tcl is also tried in the direct tool-to-tool scenario, replacing the C code with a Tcl extension.

The resulting solution is to use EAST with Tcl encapsulation. Although Tcl encapsulation provides benefits in terms of easier configuration than direct tool-to-tool integration, integration concerns with closed source tools still persist. The BOOST project concludes with an adaptation of a framework called RACE Open Service Architecture. Tcl is a required part of tools that are to be integrated into the BOOST project.

3.3.3 Experiences with Automated Glue/Wrapper code generation

In the paper Automated Glue/Wrapper Code Generation in Integration of Distributed and Heterogeneous Software Components [15] the authors use meta modeling to create wrapper code for integrating tools. Four interoperation technologies are discussed: source-to-source transformation, which has many drawbacks such as the increased complexity with an increasing number of components and the need for source code; transforming components to communicate with a common technology, such as XML; meta-interoperation, a variant of the former where meta-data is also transformed; and, as the last alternative, transforming the communications instead of transforming the communicators. The framework for communication used is called UniFrame. UniFrame creates a connection through proxy objects in different environments. UniFrame uses SOAP4 for interoperation.
The biggest challenge is to generate glue/wrapper code on demand. The actual implementation is written in Java. The result is an integration between two different environments using UniFrame.

3.4 Eclipse and tool integration

With the pure plugin architecture of Eclipse and the concepts discussed in section 2.4, Eclipse provides very good tool integration capabilities. The workspace creates a central file storage where different tools can operate on the same files, and thus Eclipse also provides good data integration. Because it

4 SOAP is a messaging framework.

is simple to extend plugins and add buttons to panels, Eclipse also qualifies for control integration. The process integration depends more on the implementation of a specific set of plugins inside Eclipse, which makes it hard to tell with certainty that Eclipse supports a good process integration. Eclipse however has team support that can help with the flow of todo lists and problems to be corrected. Lots of processes are simplified, and for certain tasks Eclipse handles lots of work. It can thus be said that Eclipse stimulates a deep process integration.

3.5 Farkle integration into Eclipse

Early work with the integration of Farkle into Eclipse has been done as a part of the iFEST project at Enea. The implementation enables Farkle to be run within Eclipse through the use of PyDev5. Because Farkle tests are Python files, PyDev basically runs them from within Eclipse. This makes it easy to debug and run the test scripts, but the support is generic to Python scripts rather than specific to Farkle. The implementation uses scripts to generate C and signal files from header files. The implementation integrates in presentation and in some sense also in control.

3.6 Artifacts and data shared by test and debug tools

3.6.1 Artifacts shared by test and debug tools

The test and debug tools work toward the same target from different perspectives in the development. This has an impact on the number of artifacts the tools share. Three artifacts are identified.

Source code artifacts The source code defines the entry points the test tool uses. The source code also defines the function of the product. The debug tool troubleshoots the function described by the source code. The testing tool only uses the source code file as a reference. The debug tool uses the source code artifact as a pointer to where in the execution the product software is. The debugger cannot use the source code directly but needs a binary file derived from the source code.
In the specific case of Farkle, the relation to the source code artifacts is loosely coupled. Farkle sends signals rather than calling functions directly. The signal helper classes are generated from special source code artifacts containing the signal definitions.

5 http://pydev.org/

Requirement artifacts Tests and their results, as source code artifacts, can be derived from requirement artifacts. Requirement artifacts also control what source code artifacts are necessary. Neither Optima nor Farkle uses requirement artifacts, but as requirements are a central part they need to be discussed.

Log artifacts Both debugging and testing tools use logs. Debug tools have use of reading log artifacts as well as producing them. Test tools produce log artifacts. Optima has a product specifically for reading and analyzing logs. Farkle creates logs during the execution. If the log times are in sync with the logs produced in the device under test, the result can be used for extended debugging.

In terms of integrating the artifacts the following requirement is given:

Req1: The log output from the test and debug tool should be presented together.

3.7 Data integration for test and debug tools

Test tools need to share which tests exist and the hierarchy of the tests. For Farkle there needs to be a way to extract the tests from inside a script file. Debug tools have access to detailed information about the state of execution as well as memory usage and other system details. This information is the basis for metrics valuable to the test tool. Hardware testing tools could provide valuable metrics but fall outside the scope of this thesis and are ignored. From the Data Integration standpoint the following requirements exist:

Req2: The test tool should provide information about tests to external tools

Req3: The debug tool should provide its metrics to external tools

3.8 Presentation, Control and Process Integration between test and debug tools

3.8.1 Analysis of debug tools and Optima

Debug tools often integrate into some type of development environment. Optima integrates into Eclipse and CDT. Test tools range from being simple scripts to full software suites.

3.8.2 Analysis of test tools and Farkle

Nothing can be said about the control and presentation of tests in general. Some test tools have a graphical user interface and read in requirements from files. Others,

such as Python unit tests, use scripting to perform tests. In that case the script language interpreter becomes the executable program. One way of reasoning is that the interpreter is a tool executing tests. That is as correct as reasoning that the computer is a tool for executing tests. To take the argument further: if both a software test and a software debug tool are executed in the same environment, they are Platform Integrated (PlI).

Two ways to view Farkle As stated in section 2.6, Farkle is a helper framework for Python test scripts. This poses a problem in how to view tests written with the help of Farkle in the tool chain. There are two views to be considered.

The first way of looking at Farkle is as a pure toolset or library that is utilized by the testing tool, where the Python test script is regarded as a tool. In that case there are unknown artifacts, because it is up to the tester to choose where the test conditions are fetched from. It would be too hard to specify input and output artifacts, but it is a flexible viewpoint. A test developer can write his or her test structure to pure personal specifications provided that the resulting tests do as intended.

The second way is to approximate Farkle to a tool. The test script is an artifact that Farkle runs with the help of Python. If the Python test file is to be seen as an artifact, it needs to conform to strict rules on what a test looks like. One way is to say that the test file needs to use Python unit tests. Both views are correct. To stay inside the delimitations of this thesis, the second viewpoint is considered and used.

3.8.3 Presentation Integration between test and debug tools

As noted earlier, test tools are far too unspecified in their design for any conclusions to be made. In the case of Farkle and Optima an analysis is possible. The tools have no Appearance and behavior integration.
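Under the second viewpoint above — the test file as a strict artifact — a test file could be required to follow Python's standard unittest conventions so that a tool can discover and run it. The sketch below only illustrates that convention; the test case name and stubbed reply are invented, and Farkle's signal and communication classes are omitted.

```python
import unittest

class PingTest(unittest.TestCase):
    """A minimal test artifact following strict unittest conventions."""

    def test_roundtrip(self):
        # In a real Farkle test this would send a signal to the device
        # under test and assert on the reply; a stub stands in here.
        reply = {"sig_no": 100, "payload": b"pong"}
        self.assertEqual(reply["payload"], b"pong")
```

A file structured like this can be discovered and executed with `python -m unittest`, which is exactly the kind of strict, tool-readable structure the second viewpoint demands.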
Optima is a graphical tool based upon Eclipse, and Farkle is a command line tool based on Python. Because of the different platforms, there is no Interaction paradigm integration either. Integration between test and debug tools gives the following requirements:

Req4: Test and debug tools should share the same graphical characteristics

Req5: Test and debug tools should share the same grammatical denominations

3.8.4 Control Integration between test and debug tools

Looking at the two properties Provision and Use, debug and test tools are not connected. For Provision, Optima should provide the means to start and stop debugging. Optima should provide means of delivering dynamic data such as information

about the debug target. Test tools should provide the means of controlling execution of tests. This also holds true for Farkle. This gives the following requirements:

Req6: Debug tools should provide the means to start and stop debugging

Req7: Debug tools should provide means to deliver dynamic data

Req8: Test tools should provide the means of controlling execution of tests

Looking at the Use property, Farkle should mainly use the facility to retrieve metrics from Optima if needed. Optima should be able to start, stop and get status updates from running tests. The requirement identified is:

Req9: The debug tool should use the possibility to start, stop and get status updates from the test tool

3.8.5 Process Integration between test and debug tools

In general, test tools give information on parts of the design that do not work. In this sense test tools help debug tools in the job of finding bugs. The tools do not hinder each other. For the debug tool, getting events from the test tool should be considered good Event integration. Events are covered by Control Integration (CI) and do not need a separate requirement.

3.9 Integration tightness between test and debug tools

Taking the experience from the BOOST project, a direct approach to tool integration gives an inflexible environment. Further taking the goals of iFEST into consideration, a direct approach is not desirable. Converting the tools to conform to a scripting language is a complex task and falls outside the boundaries of this thesis. The middle road is to use the existing environment Eclipse, which both Optima and Farkle integrate into. This gives requirements on the nature of the integration:

Req10: The integration should be performed through Eclipse

Req11: The test and debug tool should be loosely coupled

3.10 Conclusions from theoretical studies

What tool control facilities and artifacts can be shared between the two tools?
As discussed in section 3.6, not many artifacts are shared between test and debug tools. What can be shared or merged is log data. Provided that the logs from the test and debug output have synchronized timestamps, such log data could be very informative when looking at faults. In terms of control facilities shared between debug and test tools, both tools can give each other control over start and stop of execution.
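The merging of timestamped test and debug logs can be sketched in a few lines of Python. The (timestamp, line) log format is an assumption made here for illustration; in practice both tools would need to emit comparable timestamps, as noted above.

```python
import heapq

def merge_logs(test_log, debug_log):
    """Interleave two time-sorted logs of (timestamp, line) tuples."""
    return list(heapq.merge(test_log, debug_log))

test_log = [(1.0, "TEST  starting PingTest"), (3.0, "TEST  PingTest passed")]
debug_log = [(2.0, "DEBUG signal 100 sent"), (2.5, "DEBUG signal 101 received")]

for ts, line in merge_logs(test_log, debug_log):
    print(f"{ts:4.1f}  {line}")
```

Because both input logs are already sorted, heapq.merge interleaves them in a single linear pass, which matters when post mortem logs grow large.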

Is there a need to further integrate debug and test tools?

Debug and test tools are largely used in different stages of product development. The debug tool can provide the test tool with useful metrics. In turn, the test tool provides useful stimuli when debugging. Because test and debug tools are seen as separate entities and are as separated as they are, an integration is desirable.

What artifacts shared between the two tools should be coupled together?

In terms of artifacts, the only artifact both tools should share is logs. When looking at Presentation and Control Integration the tools need to be integrated. In sections 3.8.3 and 3.8.4 these integrations are discussed.

How should the integration be done?

Section 3.9 discusses what boundaries the integration has. The tools are integrated into a common platform. The tools are abstracted before the integration. Apart from the integration layer nearest to Optima and Farkle, the final solution should consider the tools as generic test and debug tools.

What possibilities are there to e.g. change one of the tools to another and keep the integration?

If the tools connect to each other in a generic fashion, there will be no problem switching the tools independently of each other. This demands that the new tool has the same interfaces as the old one.

What further development on the tools is needed to make them iFEST compliant?

iFEST is looking into representing the tools with meta models and creating tool type specific interfaces. Both Farkle and Optima need to be represented by models based on meta models for test and debug tools. To date no final meta models have been established. Exactly what the models should look like cannot be determined at this point. Determining such meta models is outside the scope of this thesis.

Chapter 4

Design of Optima's and Farkle's tool communication

4.1 Overview of the design

From the requirements made in the previous chapter, a design of an integration has been made.

[Figure 4.1. Design for test and debug tool integration: a GUI sits on top of a Connect layer, which exposes Data/Metrics access, Providers and Actions. A Test provider and Test actions connect the Test tool to the layer; Debug actions and Debug metrics connect the Debug tool.]

The design overview is illustrated in figure 4.1. The design consists of five major parts:

• Connect layer The connect layer is used for the overall integration and is the place where artifacts, actions and metrics are stored and exchanged.

• Gui The user interface presents the test artifacts and the actions that can be taken.

• Test Providers Providers are something that give test artifacts or provide metrics. The providers connect to a general interface on the interconnect layer.

• Action Providers Actions define what can be done with the test artifacts. Examples of actions are run test and trace whilst running test.

• Data access Data access is used to retrieve information about artifacts or metrics.

Also shown in the figure is how metrics can be fetched directly from the connect layer. The overall goal with the design is to be able to exchange tools. Tool specific provider and action classes must therefore be as small as possible to minimize the overhead in adding a new tool. The parts will be further discussed separately in this chapter.

4.2 Artifact, action and data collection in the Connect layer

All communication is made through the Connect layer. The Connect layer also keeps track of everything connected to it. The collected data can later be used by the graphical user interface. When data is requested, the Connect layer checks which Providers provide the information. If metrics are requested, the Connect layer probes the Metrics provider for the latest metrics.

The design is based upon the Eclipse platform and is required to provide loose coupling. Data exchange is made by Java Interfaces and primitives are Java types. All data should be described in a general way to prevent a product specific dependency.

4.2.1 Artifacts stored inside the Connect layer

The Connect layer needs to have knowledge of the artifacts stored inside of it. To keep the design simple and understandable, there are no generic artifact types. The Connect layer keeps track of five artifact types:

• Signal The Signal artifact gives information about a signal in the system. The Signals are related to OSE Signals. Signals can be associated with Tests.

• Test The Test artifact represents a single test. Tests can be associated with all the other artifacts.
• Suite The Suite artifact represents a collection of Test artifacts. Suites can be associated with Tests, Requirements and Results.

• Requirement The Requirement artifact represents the requirement the tests are verifying. Requirements can be associated with Tests, Suites and Results.

• Result The Result artifact holds more detailed information about an executed Test or Suite. Results are more dynamic than the other artifacts and are generated during test executions. Results can be associated with Tests, Suites and Requirements.

The artifacts are represented by Java Objects specified by Java Interfaces. Metrics and other dynamic data are not stored in the Connect layer; they are provided by the respective Provider upon request.

4.3 Graphical user interface

The graphical user interface is the front end towards the user, giving an overview of the available tests and actions. By creating a central graphical user interface, fewer design changes have to be made to the test and debug products. In some cases the test product doesn't have any interface of its own, further promoting the use of a central graphical user interface.

Figure 4.2 displays the intended characteristics of a GUI. The main area consists of a tree view which is used to present test artifacts. Above the main area, a tool bar with action buttons to start tests and to control log output is present.

[Figure 4.2. Concept design of the GUI: a Testbench view with a tool bar and a tree of tests (+Tests, -TestA, -TestB).]

Basic functionality for the user interface is to start a test and to provide means to select whether a log file is to be created. Icons should be easy to understand and properly named.
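The contracts of section 4.2 — artifacts as Java Objects specified by Java Interfaces, collected by a Connect layer from registered providers — could be sketched as below. All names (TestArtifact, TestProvider, ConnectLayer) are hypothetical illustrations, not the actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

// A generic, tool-independent test artifact.
interface TestArtifact {
    String name();
}

// A provider translates tool-specific data into generic artifacts.
interface TestProvider {
    List<TestArtifact> provideTests();
}

// The Connect layer keeps track of everything registered with it and
// aggregates artifacts from all providers upon request.
class ConnectLayer {
    private final List<TestProvider> providers = new ArrayList<>();

    void register(TestProvider p) {
        providers.add(p);
    }

    List<TestArtifact> allTests() {
        List<TestArtifact> all = new ArrayList<>();
        for (TestProvider p : providers) {
            all.addAll(p.provideTests());
        }
        return all;
    }
}
```

Because the layer only ever sees the interfaces, a Farkle-specific provider could be swapped for another tool's provider without touching the layer or the GUI, which is the loose coupling the design requires.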

4.3.1 Extendability of the graphical interface

The design and implementation scope of this report does not extend beyond the integration between Optima and Farkle. It is, however, possible for tests to be performed and manipulated by other software as well. This is achieved by giving each Action in figure 4.1 a separate action button.

4.4 Test Provider design

Test providers supply the Connect layer with different artifacts. A provider translates tool-specific information into generic information that can be used by any other test or debug tool. The provider is also responsible for supplying its available artifacts upon request. The Provider should at all times have its artifacts stored inside of it.

4.5 Actions design

An Action is used to perform a test in some fashion. Because Test artifacts can come from several different sources, they provide only basic test execution. Actions invoke the execution of tests and are given the ability to pass information to the test and to perform other tasks related to the tests. One such task is starting signal tracing.

The Action provider also needs to provide information about the action. The information is for the developer, to understand what the action does. Important parts are the action name and a relevant icon. The Actions are Java Objects based on a predefined Java Interface to eliminate tool-specific functions.

4.6 Data access and loose coupling

For test and debug tools to be able to use metrics as well as get information about artifacts, the Connect layer needs to incorporate an interface that provides the information. Although Farkle has been integrated into Eclipse to a point, the test scripts are still executed by Python. To provide the scripts with the ability to retrieve information, the data interface needs to extend beyond Java. Such examples have been discussed in section 3.3.
The data interface should be built upon Java Objects, together with a way of delivering information outside of Optima. Data is delivered to Farkle through a SOAP server. The SOAP data exchange is platform independent and will, as a side effect, enable tests to be performed from other computers. The use of the Interconnect layer also ensures an indirect coupling between test and debug tools.
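The thesis does not give the wire format of this SOAP exchange. As a rough illustration only, a Python sketch using the standard library is shown below; the operation name `getElementById` and the service namespace are assumptions, not part of the original design:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of a SOAP request/response exchange between a
# Python test script and the Connect layer. The operation name and the
# `urn:testbench` namespace are hypothetical.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
TB_NS = "urn:testbench"  # hypothetical service namespace

def build_request(artifact_id: str) -> bytes:
    """Build a SOAP envelope asking the Connect layer for one artifact."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{TB_NS}}}getElementById")
    ET.SubElement(op, f"{{{TB_NS}}}id").text = artifact_id
    return ET.tostring(env)

def parse_request(data: bytes) -> str:
    """Server side: extract the requested artifact id from the envelope."""
    env = ET.fromstring(data)
    node = env.find(f"{{{SOAP_NS}}}Body/{{{TB_NS}}}getElementById/{{{TB_NS}}}id")
    return node.text
```

Because the payload is plain XML over a network connection, nothing in the exchange depends on Java on the client side, which is what makes the SOAP route suitable for the Python test scripts.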

Standard names for information and predefined artifacts remove the need for the test tool to know the debug tool explicitly. Thus loose coupling is achieved.

4.6.1 Debug tool awareness in Farkle

For test scripts to be able to control the debug tool, the Data access also needs to be able to call Actions. Farkle needs additional Python libraries to be able to utilize the information provided by the Connect layer. An access library must be created to communicate with the Connect layer. The library should include the same function calls as the native Data access.

4.7 Test tool awareness in Optima

Optima gets its test tool awareness by providing actions to be performed by tests. The Log Analyzer needs to be able to read log files from the tests; the output from Farkle should therefore be readable by the Log Analyzer. There is not much more information the debug tool can use from tests.
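The access library of section 4.6.1 could be sketched as a thin wrapper whose method names mirror the native Data access calls. In the sketch below the `transport` callable stands in for the real SOAP client, and all operation names are illustrative assumptions:

```python
# Illustrative sketch of the Python access library for Farkle test
# scripts. Method and operation names are assumptions; the `transport`
# callable stands in for the actual SOAP client.
class ConnectAccess:
    def __init__(self, transport):
        self._call = transport  # e.g. a SOAP client method

    def get_elements(self):
        """Mirror of the native Data access call listing all artifacts."""
        return self._call("getElements")

    def get_element_by_id(self, artifact_id):
        """Mirror of the native Data access lookup of a single artifact."""
        return self._call("getElementById", artifact_id)

    def run_action(self, action_id, test_id):
        """Invoke a debug-tool Action, as required for debug tool
        awareness in Farkle (e.g. starting signal tracing)."""
        return self._call("runAction", action_id, test_id)

# A stub transport for demonstration: records the call, returns it back.
def stub_transport(op, *args):
    return {"op": op, "args": args}
```

With a real transport plugged in, a Farkle test script could both query the Connect layer for artifacts and trigger debug-tool Actions without knowing anything about Optima itself.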


Chapter 5

Implementation of Optima's and Farkle's tool communication

5.1 Overview of the implementation

From the design for test and debug tool integration, the implementation illustrated in figure 5.1 was produced. The graphical user interface is named Testbench. To speed up implementation, the GUI was merged with the Connect layer. Metrics have been separated from Testbench into a separate plugin. In the coming sections all the individual plugins are discussed in detail.

5.2 Testbench plugin

5.2.1 Testbench plugin overview

Testbench is the central GUI for executing tests. Figure 5.2 illustrates the main concepts of the Testbench view. The predefined actions are, from left: Run, Toggle log, Run, Forget results and Refresh providers. The default actions are the bare minimum needed to run tests and analyze the results. The aim of the log toggle action is to turn off log generation and pop-ups when tests are used as stimuli for debugging.

5.2.2 Testbench Interface Classes

To be able to exchange information, Testbench defines a number of Java Interfaces. The interfaces can be divided into Data and Control. The respective groups of interfaces

are divided into separate packages.

Figure 5.1. Overview of the implementation (the PC environment hosts the Eclipse platform containing Optima with its action provider, Testbench with the Farkle test provider and a Metrics proxy; Farkle executes Python tests against the device under test through sigpa).

Testbench Data Interfaces

Testbench defines a number of standard Interfaces to encapsulate artifacts. An Interface indicating that an artifact is linked to something executable is also present. The relations between the classes are shown in figure 5.3.

ITestArtifact is the general artifact Interface. ITestArtifact requires implementers to supply an identification, a name, a list of Sources and a list of Relations to other artifacts. All other specific Artifacts inherit from ITestArtifact and do not define any new methods. The objective of the children of ITestArtifact is for Testbench to be able to categorize its Artifacts.

An implementation of an Artifact can also choose to implement ITestExecutable. ITestExecutable defines a way of executing an Artifact, mostly tests. Functions implemented are to receive data streams, set callbacks for reports and get a Job (a Job is a central concept in Eclipse). TestArtifact implementations that do not implement ITestExecutable can not be executed. IReport is used to report back the status of an execution.

Testbench Control Interfaces

The Control Interfaces are used to help extend functionality in Testbench and to help transport data. Figure 5.4 illustrates the Interfaces.

Figure 5.2. The final Testbench user interface design.

Figure 5.3. Illustration of Testbench Data Interfaces:

  ITestArtifact
    +getRelations(): List<String>
    +getResources(): List<String>
    +getId(): String
    +getName(): String
    +equals(Object): boolean
  IRequirement, ISignal, ISuite, ITest (extend ITestArtifact)
  IReport
    +enum TestResult
    +getArtifactId(): String
    +getResultMessage(): String
    +getResult(): TestResult
  ITestExecutable
    +getExecutable(boolean): Job
    +setReportAcceptor(IReportAcceptor)
    +getInputStream(): InputStream
    +getErrorStream(): InputStream
    +getOutputStream(): OutputStream

• ATestActionProvider is an abstract Class rather than an Interface, because Interfaces can not extend Classes. A Plugin aimed at providing actions to Testbench needs to submit a class that extends ATestActionProvider.
• ITestProvider is an Interface describing Test Providers. Testbench probes Test Providers for the known Artifacts discussed in the previous section.

ITestEvent, IReportAcceptor and ITestRunInjector are interfaces for callback functions:

• ITestEvent is used by Test Providers to report back changes about artifacts.
• IReportAcceptor is used by Test Providers to report test results back to Testbench.

Figure 5.4. Illustration of Testbench Control Interfaces:

  ITestProvider
    +updateTree()
    +registerUpdater(ITestEvent): boolean
    +getElements(): List<ITestArtifact>
    +getElementById(String): ITestArtifact
    +getRelationsTo(String): List<ITestArtifact>
    +getArtifactsFrom(String): List<ITestArtifact>
  ATestActionProvider
    +setTestRunInjector(ITestRunInjector)
  ITestRunInjector
    +getInjectCode(): Job[]
  ITestEvent
    +update(ITestProvider)
  IReportAcceptor
    +addReport(IReport)

• ITestRunInjector is used by Testbench to provide actions with runnable tests. After the action is done with its set-up, it executes runInjectCode().

5.2.3 Testbench Extension Points

Testbench provides two extension points:

• TestContentProvider
  TestContentProvider provides Testbench with tests and requirements. A TestContentProvider needs to implement ITestProvider.
• TestActionProvider
  TestActionProvider provides Testbench with custom actions to run the tests with.

5.2.4 Testbench Properties Class

TestbenchProperties is used by other plugins to retrieve properties. The class is a Singleton and holds two properties:

• SaveLog, which is used by executable tests to determine whether they should save log output.
• TestProject, which is used to get a test project for data storage.

5.3 TestProjectPlugin plugin

TestProjectPlugin is used to mark a Project as a Test Project. A Test Project can be used in a number of ways by test services. The plugin provides a toggle Action for setting the Test Nature on a project. The Action is added to the Project context pop-up menu. When the Test Nature is added to a Project, the existence of the directory /logs is checked. If the directory does not exist, it is created.
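The Singleton pattern used for TestbenchProperties can be sketched as follows. The thesis implements the class in Java; this Python version with the two stated properties is only an illustration:

```python
# Python sketch of the TestbenchProperties Singleton described above.
# The real class is a Java Singleton; the two properties follow the text.
class TestbenchProperties:
    _instance = None

    def __new__(cls):
        # Always hand out the same shared instance.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.save_log = True     # SaveLog property
            cls._instance.test_project = None  # TestProject property
        return cls._instance
```

Because every plugin that constructs the class receives the same instance, a change to SaveLog made by the log toggle action is immediately visible to every executable test that reads it.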

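Pulling the pieces together, the probing relationship between Testbench and its Test Providers could be sketched as below. The real providers are Java classes contributed through the TestContentProvider extension point; the Python names and artifact data here are illustrative:

```python
# Sketch of Testbench probing Test Providers, following the
# ITestProvider contract (getElements / getElementById) above.
# Illustrative only; real providers are Eclipse plugins in Java.
class SimpleTestProvider:
    def __init__(self, artifacts):
        # artifacts: mapping of artifact id -> artifact description
        self._artifacts = dict(artifacts)

    def get_elements(self):
        """Return all artifacts this provider knows about."""
        return list(self._artifacts.values())

    def get_element_by_id(self, artifact_id):
        """Return one artifact, or None if the id is unknown."""
        return self._artifacts.get(artifact_id)

def refresh_providers(providers):
    """Testbench-style refresh: collect artifacts from every provider
    to (re)build the tree view."""
    tree = []
    for provider in providers:
        tree.extend(provider.get_elements())
    return tree
```

This is what the Refresh providers toolbar action corresponds to: each registered provider is probed, and the tree view is rebuilt from the combined result.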