Wienke, Johannes: Framework-level resource awareness in robotics and intelligent systems. Improving dependability by exploiting knowledge about system resources. 2018
Contents
- Abstract
- Acknowledgments
- Contents
- List of figures
- List of tables
- List of code listings
- Research topic
- 1 Introduction
- 2 Fundamental concepts and terminology
- 2.1 Resources and related concepts
- 2.1.1 Resource categorization schemes
- 2.1.2 Metrics, KPIs, and performance counters
- 2.1.3 A conceptual model of system resources
- 2.2 Dependable computing and FD*
- 3 A survey on bugs in robotics systems
- 3.1 Tool usage
- 3.2 Bugs and their origins
- 3.3 Performance bugs
- 3.4 Bug examples
- 3.5 Summary
- 3.6 Threats to validity
- 4 A concept of resource awareness
- 4.1 Resource awareness in computing systems
- 4.1.1 Server infrastructure operation
- 4.1.2 Cloud computing
- 4.1.3 Model-based performance prediction
- 4.2 Resource awareness in robotics
- 4.2.1 Space robotics
- 4.2.2 Cloud robotics
- 4.2.3 Resource-aware algorithms
- 4.2.4 Resource-aware planning and execution
- 4.2.5 Infrastructure monitoring of robotics systems
- 4.2.6 Model-driven approaches
- 4.3 Summary
- Technological foundation
- 5 Component-based robotics systems
- 5.1 Component-based software engineering
- 5.2 CBSE and distributed systems
- 5.3 CBSE in robotics
- 5.4 Patterns in component-based robotics systems
- 5.5 Summary
- 6 Middleware foundation: RSB
- 6.1 Architecture
- 6.1.1 Event model
- 6.1.2 Naming model
- 6.1.3 Notification model
- 6.1.4 Time model
- 6.1.5 Observation model
- 6.1.6 Extension points
- 6.2 Introspection
- 6.3 Domain data types: RST
- 6.4 Tool support
- 6.5 Interoperability with other middlewares
- 6.6 Applications
- 6.7 Summary
- 7 A holistic dataset creation process
- 7.1 Challenges in creating datasets
- 7.2 Description of the holistic process
- 7.3 Realization based on RSB
- 7.4 Summary
- 8 System metric collection
- Developer perspective
- 9 Runtime resource introspection
- 9.1 Available tools
- 9.2 Resource utilization dashboard implementation
- 9.3 Dashboard design
- 9.4 Evaluation
- 9.5 Summary
- 10 Systematic resource utilization testing
- 10.1 Related work
- 10.2 Performance testing framework concept
- 10.3 Realization
- 10.3.1 Load generation
- 10.3.2 Environment setup
- 10.3.3 Test execution
- 10.3.4 Test analysis
- 10.3.5 Automation
- 10.4 Evaluation
- 10.5 Summary
- 11 Model-based performance testing
- Autonomy perspective
- 12 A dataset for performance bug research
- 12.1 Recording method
- 12.2 Included performance bugs
- 12.2.1 Algorithms & logic
- 12.2.2 Resource leaks
- 12.2.3 Skippable computation
- 12.2.4 Configuration
- 12.2.5 Threading
- 12.2.6 Inter-process communication
- 12.3 Automatic fault scheduling
- 12.4 Summary
- 13 Runtime resource utilization prediction
- 13.1 Feature generation
- 13.1.1 Accumulated event window features
- 13.1.2 Adding previous system metrics
- 13.1.3 Baseline: system metrics
- 13.1.4 Preprocessing
- 13.2 Model learning
- 13.3 Evaluation
- 13.4 Learning from performance tests
- 13.5 Related work
- 13.6 Summary
- 14 Runtime performance degradation detection
- Perspectives
- Appendix
- A Survey: failures in robotics systems
- A.1 Introduction
- A.2 Monitoring Tools
- A.2.1 How often do you use the following kinds of tools to monitor the operation of running systems?
- A.2.2 Please name the concrete tools that you use for monitoring running systems.
- A.3 Debugging Tools
- A.3.1 How often do you use the following tools for debugging?
- A.3.2 Please name the concrete tools that you use for debugging.
- A.4 General Failure Assessment
- A.4.1 Averaging over the systems you have been working with, what do you think is the mean time between failures for these systems?
- A.4.2 Please indicate how often the following items were the root cause for system failures that you know about.
- A.4.3 Which other classes of root causes for failures did you observe?
- A.5 Resource-Related Bugs
- A.6 Impact on Computational Resources
- A.6.1 Please indicate how often the following computational resources were affected by bugs you have observed.
- A.6.2 If there are other computational resources that have been affected by bugs, please name these.
- A.7 Performance Bugs
- A.8 Case Studies
- A.9 Case Study: Representative Bug
- A.9.1 How was the representative bug noticed?
- A.9.2 What was the root cause for the bug?
- A.9.3 Which steps were necessary to analyze and debug the problem?
- A.9.4 Which computational resources were affected by the bug?
- A.10 Case Studies
- A.11 Case Study: Interesting Bug
- A.11.1 How was the interesting bug noticed?
- A.11.2 What was the root cause for the bug?
- A.11.3 Which steps were necessary to analyze and debug the problem?
- A.11.4 Which computational resources were affected by the bug?
- A.12 Personal Information
- A.12.1 In which context do you develop robotics or intelligent systems?
- A.12.2 How many years of experience in robotics and intelligent systems development do you have?
- A.12.3 How much of your time do you spend on developing in the following domains?
- A.13 Final remarks
- B Failure survey results
- B.1 Used monitoring tools
- B.2 Used debugging tools
- B.3 Summarization of free-form bug origins
- B.4 Summarization of other resources affected by bugs
- B.5 Representative bugs
- B.5.1 Representative bug 8
- B.5.2 Representative bug 10
- B.5.3 Representative bug 14
- B.5.4 Representative bug 21
- B.5.5 Representative bug 26
- B.5.6 Representative bug 30
- B.5.7 Representative bug 41
- B.5.8 Representative bug 42
- B.5.9 Representative bug 46
- B.5.10 Representative bug 60
- B.5.11 Representative bug 69
- B.5.12 Representative bug 70
- B.5.13 Representative bug 76
- B.5.14 Representative bug 81
- B.5.15 Representative bug 96
- B.5.16 Representative bug 128
- B.5.17 Representative bug 135
- B.5.18 Representative bug 136
- B.5.19 Representative bug 156
- B.5.20 Representative bug 190
- B.5.21 Representative bug 191
- B.6 Interesting bugs
- B.6.1 Interesting bug 5
- B.6.2 Interesting bug 21
- B.6.3 Interesting bug 32
- B.6.4 Interesting bug 46
- B.6.5 Interesting bug 60
- B.6.6 Interesting bug 69
- B.6.7 Interesting bug 76
- B.6.8 Interesting bug 83
- B.6.9 Interesting bug 133
- B.6.10 Interesting bug 149
- B.6.11 Interesting bug 150
- B.6.12 Interesting bug 153
- B.6.13 Interesting bug 156
- B.6.14 Interesting bug 162
- B.7 Collected system metrics
- C Survey: dashboard evaluation
- C.1 Introduction
- C.2 General
- C.2.1 Please rate how often you consult the monitoring dashboard in different situations.
- C.2.2 How much insight do you gain into the consumption and availability of computational resources (like CPU, I/O or memory) when using the dashboard?
- C.2.3 Do you think you have a better understanding of the use of computational resources in the system as a result of the dashboard?
- C.2.4 For the different kinds of computational resources, how much did the dashboard improve your understanding of the consumption of these resources?
- C.2.5 Please describe briefly in which situation you find the dashboard most valuable.
- C.3 Debugging
- C.3.1 How often are issues that you observe in the system visible in the dashboard?
- C.3.2 Does the dashboard help to isolate the origin of bugs?
- C.3.3 Did you find bugs through the dashboard that you wouldn't have noticed at all or much later otherwise?
- C.3.4 Please briefly describe the bugs that you have found.
- C.4 Tools
- C.4.1 Which tools do/did you use apart from the dashboard to understand resource utilization?
- C.4.2 Did the dashboard reduce the use of other tools for the purpose of understanding resource utilization?
- C.5 End
- C.6 Final remarks
- D Dashboard survey results
- E ToBi dataset details
- Acronyms
- Glossary
- Bibliography
- Declaration
- Colophon
