Symposium Program

Note: Session 2b will be held at the Marine Science Campus.
The symposium program is also available as a PDF.

Monday, November 3 (Kiel University)

14:00 Kieker Developer Meeting

Tuesday, November 4 (adesso Kiel)

9:00 – 9:30 Registration and Get Together
9:30 – 9:40 Opening and Welcome Note
9:40 – 9:55 Short Reports on Palladio, Kieker, and Descartes
9:55 – 10:40 Keynote 1: Wirth’s Law in the Age of Cloud and Green IT: Addressing Software Complexity for Sustainable Performance. Christian Dähn, adesso.
10:40 – 11:10 Coffee Break
11:10 – 12:50 Session 1: Cloud & Containers
Validating Alerts in Cloud-Native Observability. Maria C. Borges, Julian Legler and Lucca Di Benedetto.
RAdaptSQ: Real-Time AI-Planning and Environment-Aware Self-Adaptation to Optimize Security and QoS. Lin Cui and Raffaela Mirandola.
Towards Bringing Vitruvius into the Cloud II: Against Attacks from the Internet. Martin Armbruster, Fatma Chebbi, Thomas Weber and Anne Koziolek.
A Case Study on the Value of Simulations and Synthetic Microservice Applications for Performance Model Training. Yannik Lubas, Martin Straesser, Ivo Rohwer, André Bauer and Samuel Kounev.
12:50 – 14:20 Lunch Break
14:20 – 16:00 Session 2a: Modeling
(adesso Kiel)
Towards Scalability Analysis of State-based Model Comparison. Martin Armbruster, Manar Mazkatli, Alp Toraç Genç and Anne Koziolek.
Towards Intelligent Performance Data Analytics With Graph Databases. Ivo Rohwer, Martin Straesser, Yannik Lubas, Samuel Kounev and André Bauer.
Generation of Checkpoints for Hardware Architecture Simulators. Sebastian Weber, Lars Weber, Thomas Weber, Jörg Henß and Robert Heinrich.
Extracting Reusable Service Demands for TeaStore. Elijah Seyfarth, Sebastian Frank and Jóakim von Kistowski.
Session 2b: Benchmarking and Monitoring
(Marine Science Campus)
Detection of Performance Changes in MooBench Results Using Nyrkiö on GitHub Actions. Shinhyung Yang, David Georg Reichelt and Wilhelm Hasselbring.
Dynamic and Static Analysis of Python Software with Kieker. Daphné Larrivain, Shinhyung Yang and Wilhelm Hasselbring.
Benchmarking Pattern Matching Strategies for Scalable Log Analytics. Luka Leko, Sören Henning, Adriano Vogel, Otmar Ertl and Rick Rabiser.
First Steps for Performance Monitoring of Petri Net Simulator Renew with Kieker. Marcel Hansson and Daniel Moldt.
16:00 – 16:30 Coffee Break
16:30 – 17:45 Session 3: AI-Driven Performance Modeling
LLM-Assisted Microservice Performance Modeling. Maximilian Hummel, Nathan Hagel, Minakshi Kaushik, Jan Keim, Erik Burger and Heiko Koziolek.
LLMs on Affordable GPUs: A Benchmarking Study. David Georg Reichelt, Daniel Abitz, Jonathan Groß and Stefan Kühne.
Machine Learning Surrogate Models for Performance Prediction with Architectural Models. Sebastian Weber, Vincenzo Pace, Thomas Weber, Jörg Henß and Robert Heinrich.
17:45 Closing Remarks and End of the First Day
19:30 Conference Dinner

Wednesday, November 5 (adesso Kiel)

9:00 – 9:30 Registration and Get Together
9:30 – 9:40 Opening and Welcome Note
9:40 – 10:25 Keynote 2: Should I Run My Cloud Benchmark on Black Friday? Sören Henning, Dynatrace.
10:25 – 10:55 Coffee Break
10:55 – 12:35 Session 4: Monitoring & Visualization
Simplifying Kotlin Compile-Time Code Instrumentation. Lorenz Bader and Markus Weninger.
OpenTelemetry Instrumentation using Kotlin Multiplatform Compiler Plugins. Fabian Schoenberger and Markus Weninger.
Analysis and Visualization of Unit Test Traces With Kieker and ExplorViz. Malte Hansen, David Georg Reichelt and Wilhelm Hasselbring.
Interoperability From OpenTelemetry to Kieker: Demonstrated as Export from the Astronomy Shop. David Georg Reichelt, Shinhyung Yang and Wilhelm Hasselbring.
12:35 – 12:55 Conference Closing and Final Remarks
12:55 Lunch

Keynote 1

Wirth’s Law in the Age of Cloud and Green IT: Addressing Software Complexity for Sustainable Performance

Christian Dähn, adesso.

Abstract:
Despite exponential hardware advancements, software performance often lags behind due to increasing complexity. This keynote explores Wirth’s Law – “software gets slower more quickly than hardware gets faster” – and its implications for modern application development. We will examine how the proliferation of middleware, diverse programming languages, and complex cloud environments contributes to performance degradation, resulting in slower response times and escalating resource demands. This drives up operational costs and contributes to a larger carbon footprint. The talk will highlight the importance of resource-efficient application design, from architectural decisions and optimized development tools to runtime performance analysis – particularly in the context of cloud computing and the growing need for Green IT initiatives.

Christian is an experienced software architect and developer specializing in high-performance image analysis and secure cloud solutions. He has proven expertise in C++ development, machine learning, neural networks, and optimizing resource-intensive systems for maximum efficiency, backed by more than 20 years of experience building scalable, concurrent solutions for high-speed industrial applications.

  • 2023 – Present: Chief Architect for Government Clouds @ adesso SE – Focus on multi-cloud strategies, Confidential Computing & AI
  • 2015 – 2023: Architect and R&D Manager @ DVZ M-V GmbH – Modernization of enterprise applications, from monolith to highly scalable DDD architectures
  • 2006 – 2015: Team Lead and Chief Architect @ ASinteg GmbH – Development of image analysis frameworks and machine learning applications
  • 2002 – 2005: R&D Team Lead and Architect @ PLANET – Building high-speed image analysis solutions, e.g. for speed enforcement cameras
  • 1999 – 2002: Developer @ PLANET intelligent systems GmbH – Building OCR and object detection systems for OTTO, FedEx and Swedish Post
  • Education: Diplom-Informatiker (FH) – Wilhelm Büchner Hochschule Darmstadt.

Keynote 2

Should I Run My Cloud Benchmark on Black Friday?

Sören Henning, Dynatrace.

Abstract:
Benchmarks and performance experiments are frequently conducted in cloud environments, in both research and practice. Yet, their results are often met with skepticism, as the presumed high variability of performance in the cloud raises concerns about reproducibility and credibility. In a comprehensive longitudinal study, we recently conducted an empirical analysis to quantify the extent and nature of this variability and its impact on benchmarking results. This keynote sheds light on some uncertainties surrounding cloud benchmarking and addresses relevant questions such as: Does the time of day influence benchmark results? Are there measurable effects based on the day of the week? What about long-term trends that emerge over several weeks? And do major global events, such as Black Friday, affect the outcomes of performance benchmarks?

Should I Run My Cloud Benchmark on Black Friday? Sören Henning, Adriano Vogel, Esteban Pérez-Wohlfeil, Otmar Ertl and Rick Rabiser.

Sören is a researcher on real-time analytics at Dynatrace, working at the intersection of software engineering, distributed systems, and data management. His research focuses on designing scalable, distributed software architectures, with particular interests in event and stream processing, as well as empirically grounded performance engineering.
He holds a PhD from Kiel University, where he worked on scalability benchmarking of cloud-native applications with a special focus on event-driven microservices. He continued this work in the JKU/Dynatrace Co-Innovation Lab at Johannes Kepler University Linz before joining Dynatrace in 2025.