Identifying Web Performance Degradations through Synthetic and Real-User Monitoring

  • Jürgen Cito, Devan Gotowka, Philipp Leitner, Ryan Pelette, Dritan Suljoti, Schahram Dustdar
  • Journal of Web Engineering
The large scale of the Internet offers unique economic opportunities that, in turn, introduce overwhelming challenges for development and operations: providing services reliable and fast enough to meet the high performance demands placed on online services. Key Method: We develop a simulation model based on a taxonomy of root causes in server performance degradation. Within an experimental setup, we obtain results through synthetic monitoring of a target Web service, and observe changes in Web…
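Synthetic monitoring of the kind described above reduces to issuing periodic probes against a target service and recording wall-clock response times. A minimal sketch, not the paper's experimental setup; the `fetch` hook is a hypothetical parameter added only so the probe can be exercised without a live network:

```python
import time
from urllib.request import urlopen

def probe(url, timeout=5.0, fetch=urlopen):
    """One synthetic-monitoring measurement: wall-clock response time in seconds."""
    start = time.perf_counter()
    with fetch(url, timeout=timeout) as resp:
        resp.read()  # include transfer time, not just time-to-first-byte
    return time.perf_counter() - start

def collect(url, n=10, interval=0.0, fetch=urlopen):
    """Take n probes, `interval` seconds apart; return the response-time series."""
    samples = []
    for _ in range(n):
        samples.append(probe(url, fetch=fetch))
        time.sleep(interval)
    return samples
```

The resulting series is what downstream degradation analysis (e.g. changepoint detection) operates on.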
Beaconnect: Continuous Web Performance A/B Testing at Scale
This work introduces Beaconnect, a system built for a custom browser-based acceleration approach that does not rely on traditional CDN technology, and presents a continuous aggregation pipeline that achieves sub-minute end-to-end latency.
Benchmarking Web API Quality - Revisited
This paper revisits a 3-month, geo-distributed benchmark of popular web APIs, originally performed in 2015, and compares results from these two benchmarks regarding availability and latency, and introduces new results from assessing provider security preferences.
Browser Extension-based Crowdsourcing Model for Website Monitoring
A crowdsourcing-based approach that makes use of browser extensions as checkpoints to monitor websites and a batch processing technique for handling monitoring requests is presented.
Use of Self-Healing Techniques to Improve the Reliability of a Dynamic and Geo-Distributed Ad Delivery Service
  • Nicolas Brousse, O. Mykhailov
  • Computer Science
    2018 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)
  • 2018
It is found that a distributed infrastructure that leverages public cloud providers and a private cloud with open infrastructure technologies can deliver dynamic advertising content with low latency while preserving its high availability.
Runtime metric meets developer: building better cloud applications using feedback
This paper explores what the authors consider the logical next step in this succession: integrating runtime monitoring data from production deployments of the software into the tools developers use in their daily workflows, enabling tighter feedback loops.
The making of cloud applications: an empirical study on software development for the cloud
The first systematic study on how software developers build applications for the cloud is reported, finding that developers need better means to anticipate runtime problems and to rigorously define metrics for improved fault localization, and that the cloud offers an abundance of operational data.
SMART: a service-oriented architecture for monitoring and assessing Brazil’s Telehealth outcomes
The specification, implementation, and validation of SMART, an architecture that integrates the various telehealth platforms developed by the centers and standardizes information so that the Ministry of Health can monitor and evaluate the results of Telehealth actions, is described.


Identifying Root Causes of Web Performance Degradation Using Changepoint Analysis
This paper investigates how performance engineers can identify three different classes of externally-visible performance problems (global delays, partial delays, periodic delays) from concrete traces, and develops a simulation model based on a taxonomy of root causes in server performance degradation.
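Changepoint analysis of this kind can be approximated with a single mean-shift search over a response-time series: choose the split point that most reduces the within-segment squared error. A minimal sketch of the generic technique, not the paper's exact statistical method:

```python
def best_changepoint(xs, min_seg=5):
    """Return the index k that best splits xs into two mean-shifted segments,
    i.e. minimizes sse(xs[:k]) + sse(xs[k:]); None if no split beats no-split."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best_k, best_cost = None, sse(xs)  # baseline: no changepoint
    for k in range(min_seg, len(xs) - min_seg + 1):
        cost = sse(xs[:k]) + sse(xs[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

Applied recursively to each segment (binary segmentation), this extends to multiple changepoints; a global delay would show up as a single sustained mean shift.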
A provider-side view of web search response time
An analysis framework is developed that separates systemic variations due to periodic changes in service usage from anomalies due to unanticipated events such as failures and denial-of-service attacks, together with a technique that robustly detects and diagnoses anomalies in search response time (SRT).
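Separating systemic periodic variation from anomalies can be sketched by subtracting a per-phase (e.g. hour-of-day) median from the series and flagging residual outliers via the median absolute deviation. This is a simplified stand-in for the paper's framework; `period` and the MAD multiplier `k` are assumed parameters:

```python
from statistics import median

def deseasonalize(series, period):
    """Subtract each phase's median (e.g. the typical value for that hour of day),
    leaving residuals that expose non-systemic deviations."""
    phase_median = [median(series[i::period]) for i in range(period)]
    return [x - phase_median[i % period] for i, x in enumerate(series)]

def anomalies(series, period, k=3.0):
    """Indices whose residual deviates from the residual median
    by more than k times the median absolute deviation (MAD)."""
    resid = deseasonalize(series, period)
    center = median(resid)
    mad = median(abs(r - center) for r in resid) or 1e-9  # avoid zero MAD
    return [i for i, r in enumerate(resid) if abs(r - center) > k * mad]
```

The MAD is used instead of the standard deviation so that the anomalies themselves do not inflate the detection threshold.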
Traffic model and performance evaluation of Web servers
Pinpoint: problem determination in large, dynamic Internet services
This work presents a dynamic analysis methodology that automates problem determination in these environments by coarse-grained tagging of numerous real client requests as they travel through the system and by using data-mining techniques to correlate the believed failures and successes of these requests, determining which components are most likely to be at fault.
Automated anomaly detection and performance modeling of enterprise applications
The thesis is that online performance modeling should be part of routine application monitoring, and that early, informative warnings on significant changes in application performance should help service providers identify and prevent performance problems, and their negative impact on the service, in a timely manner.
Web page performance analysis
Response time is studied as an attribute of Web pages, rather than being considered purely a result of network and server conditions, and a framework consisting of measurement, modelling, and monitoring that revolves around response time is adopted to support the performance analysis activity.
The Effect of Network and Infrastructural Variables on SPDY's Performance
The impact of network characteristics and website infrastructure on SPDY's potential page loading benefits is identified, finding that these factors are decisive for SPDY and its optimal deployment strategy.
Anomaly Detection Techniques for Web-Based Applications: An Experimental Study
  • J. Magalhães, L. Silva
  • Computer Science
    2012 IEEE 11th International Symposium on Network Computing and Applications
  • 2012
An experimental study is presented of the detection abilities of the monitoring tools currently used in web-based applications, including an application-level monitoring technique that detects performance anomalies through correlation analysis among application parameters collected by an aspect-oriented program.
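The correlation-based idea can be illustrated with the Pearson correlation between two application parameters, say workload and response time: when the usual coupling between them breaks down over a recent window, something other than load is driving latency. A hedged sketch; the parameter pair, window size, and threshold are assumptions, not the paper's configuration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def correlation_degraded(load, resp, window=30, threshold=0.5):
    """Flag an anomaly when the load/response-time correlation over the
    latest window drops below `threshold` (parameters are illustrative)."""
    return pearson(load[-window:], resp[-window:]) < threshold
```

In a real deployment this comparison would run over many parameter pairs, not just one, so that the pair whose correlation collapses hints at the faulty component.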
Capturing, indexing, clustering, and retrieving system history
We present a method for automatically extracting, from a running system, an indexable signature that distills the essential characteristic from a system state and that can be subjected to automated clustering and similarity-based retrieval.
Performance debugging for distributed systems of black boxes
The goal is to design tools that enable modestly skilled programmers to isolate performance bottlenecks in distributed systems composed of black-box nodes; to this end, two very different algorithms are developed for inferring the dominant causal paths through a distributed system from message traces.