Monitoring and diagnosis of performance problems in enterprise applications with mobile front-end

More and more companies are monitoring their applications with Application Performance Management (APM) tools to detect performance problems in production and avoid the revenue losses that can result from them. As the number of mobile application users grows, mobile applications should also be monitored with APM tools. However, there are currently no open source APM tools that monitor mobile applications, and commercial tools provide only limited support for them. In this talk, we present extensions to the open source APM tool inspectIT [1] for monitoring iOS applications.

In the first part of our talk, we present the results of our evaluation of monitoring strategies for native iOS applications. Depending on the application category, some mobile agent implementation approaches are more appropriate than others. To bypass the manual instrumentation process, which normally has to be performed by the application developer, we also implemented a pre-compilation step that statically analyzes the source code and automatically inserts instrumentation points. This automated insertion of instrumentation code improves the monitoring workflow by reducing the developer's workload.
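As a minimal illustration of what such a pre-compilation pass might do (this is a sketch, not the actual inspectIT implementation; the `Agent.enterMethod` probe and the line-based matching heuristic are assumptions), consider a tool that scans Swift source text for function declarations and inserts an instrumentation call at every method entry:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of a pre-compilation instrumentation pass:
 * scans Swift source lines for function declarations and inserts a
 * hypothetical agent call as the first statement of each body.
 */
public class PreCompileInstrumenter {

    // Hypothetical instrumentation probe inserted at every method entry.
    private static final String PROBE = "Agent.enterMethod(#function)";

    public static String instrument(String swiftSource) {
        List<String> out = new ArrayList<>();
        for (String line : swiftSource.split("\n", -1)) {
            out.add(line);
            // Naive heuristic: a line that declares a function and opens its body.
            String trimmed = line.trim();
            if (trimmed.startsWith("func ") && trimmed.endsWith("{")) {
                out.add("    " + PROBE);
            }
        }
        return String.join("\n", out);
    }

    public static void main(String[] args) {
        String source = "func loadData() {\n    fetch()\n}";
        System.out.println(instrument(source));
    }
}
```

A production implementation would of course work on a real parse tree of the source rather than on raw text lines, but the principle is the same: the developer writes no instrumentation code by hand.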

In the second part of our talk, we present an approach for collecting and analyzing data from an application with a mobile (iOS) front-end and a Java back-end. The main challenge is how data from two different platforms can be combined for analysis. To tackle this problem, both agents use the OpenTracing [2] standard, which defines an execution-trace-like structure built from spans. A span is marked with a specific name and carries a unique identifier that distinguishes it from other collected spans. Since methods may be invoked from one another or consecutively, each span also contains the identifier of its parent span. Spans can be created arbitrarily within the application source code, e.g., to encompass a single method invocation, multiple method invocations, or a complete use case. In this context, a use case represents the processing of one user request and may contain a sequence of method invocations that are not instrumented directly. In our work, we map spans to use cases.
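The span structure described above can be sketched as follows (a simplified illustration with assumed field names; the real OpenTracing API carries additional context such as timestamps and tags):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

/**
 * Minimal sketch of a use-case trace built from spans: one root span
 * per user request, with child spans referencing the root as parent.
 */
public class SpanExample {

    static class Span {
        final String name;
        final String spanId;
        final String parentId; // null for the root (use-case) span

        Span(String name, String parentId) {
            this.name = name;
            this.spanId = UUID.randomUUID().toString();
            this.parentId = parentId;
        }
    }

    /** Builds a use-case trace: a root span plus one child span per method. */
    public static List<Span> buildUseCase(String useCase, String... methods) {
        List<Span> trace = new ArrayList<>();
        Span root = new Span(useCase, null);
        trace.add(root);
        for (String m : methods) {
            trace.add(new Span(m, root.spanId)); // each child references its parent
        }
        return trace;
    }

    public static void main(String[] args) {
        for (Span s : buildUseCase("showBalance", "login()", "loadBalance()")) {
            System.out.println(s.name + " parent=" + s.parentId);
        }
    }
}
```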

Besides collecting monitoring data through spans, another challenge is determining the root cause of a performance problem. The research project diagnoseIT [3] addresses this problem by automatically determining root causes and common performance antipatterns in enterprise applications. The analysis is performed using predefined rules [4]. However, as the original rules were designed for enterprise applications, they do not yet cover mobile-specific performance antipatterns. We therefore investigated typical performance antipatterns in mobile applications and extended the diagnoseIT rule set with rules that detect them.
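To give an idea of what such a rule looks like, here is a sketch of a rule in the spirit of the "chatty communication" antipattern, flagging use cases that issue many remote calls. The rule shape, the `Span` fields, and the threshold are our own illustrative assumptions, not the actual diagnoseIT rule API:

```java
import java.util.List;

/**
 * Illustrative antipattern rule: flags a use case that issues more
 * remote calls than a fixed threshold ("chatty" front-end/back-end
 * communication, which is especially costly over mobile networks).
 */
public class ChattyUseCaseRule {

    static final int MAX_REMOTE_CALLS = 5; // hypothetical threshold

    /** One monitored span, reduced to the fields this rule needs. */
    record Span(String name, boolean remoteCall) {}

    /** Returns true if the antipattern is present in the use case. */
    public static boolean detect(List<Span> useCaseSpans) {
        long remoteCalls = useCaseSpans.stream().filter(Span::remoteCall).count();
        return remoteCalls > MAX_REMOTE_CALLS;
    }

    public static void main(String[] args) {
        var chatty = java.util.Collections.nCopies(6, new Span("GET /item", true));
        System.out.println("antipattern detected: " + detect(chatty));
    }
}
```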

In summary, the result of this work is a pipeline constructed as follows. Measurements are collected on the mobile device by the developed inspectIT iOS agent, which generates a use case for each client request to the application. Within a use case, a request can be sent from the front-end to the back-end of the application. On the back-end side, data is collected by the inspectIT Java agent. Both agents send the collected data to the inspectIT back-end, where the data from both agents is first combined into a single trace. The resulting trace is converted into OPEN.xtrace [5] and passed to diagnoseIT for further analysis and antipattern detection.
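The trace-combination step can be sketched as follows, assuming both agents attach the same trace identifier to spans belonging to one request (the names and fields here are illustrative, not the actual inspectIT API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Sketch of combining mobile and back-end spans that share a trace id
 * into one chronologically ordered trace.
 */
public class TraceMerger {

    record Span(String traceId, String name, long startMillis) {}

    public static List<Span> merge(String traceId, List<Span> mobile, List<Span> backend) {
        List<Span> trace = new ArrayList<>();
        // Keep only spans belonging to the requested trace, from both agents.
        for (Span s : mobile) if (s.traceId().equals(traceId)) trace.add(s);
        for (Span s : backend) if (s.traceId().equals(traceId)) trace.add(s);
        trace.sort(Comparator.comparingLong(Span::startMillis)); // chronological order
        return trace;
    }

    public static void main(String[] args) {
        var mobile = List.of(new Span("t1", "tapButton", 0), new Span("t1", "httpRequest", 10));
        var backend = List.of(new Span("t1", "handleRequest", 12), new Span("t2", "other", 5));
        merge("t1", mobile, backend).forEach(s -> System.out.println(s.name()));
    }
}
```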

We evaluated whether diagnoseIT detects mobile performance antipatterns correctly, using data from a real mobile application monitored by the iOS agent. The evaluation showed that the antipattern detection works as intended.

In addition to introducing our approach and the monitoring pipeline with its components, we will also give a short demo.

References:
[1] inspectIT, www.inspectit.rocks
[2] OpenTracing, opentracing.io
[3] Heger, Christoph, et al. "Expert-guided automatic diagnosis of performance problems in enterprise applications." Dependable Computing Conference (EDCC), 2016 12th European. IEEE, 2016.
[4] diagnoseIT Rules, github.com/alperhi/diagnoseIT/tree/bachelorthesis
[5] Okanović, Dušan, et al. "Towards performance tooling interoperability: An open format for representing execution traces." European Workshop on Performance Engineering. Springer International Publishing, 2016.