SYSTEMS AND METHODS FOR AUTOMATED WEB PERFORMANCE TESTING FOR CLOUD APPS IN USE-CASE SCENARIOS
Abstract
Systems and methods for measuring performance metrics of apps, in which a controller schedules performance testing of a plurality of apps to generate a set of performance metrics from a client, server, and device relating to the performance of each app, wherein the generated set of performance metrics comprises processing times and requests of the app. The scheduled performance testing is executed by a combination of the client, server, and device that includes different networks, operating systems, and browsers. A performance engine captures the set of performance metrics of each app from the different client, server, and device, and organizes the metrics into categories based on an instrumentation and profile of each app. The categories include clusters comprising performance metrics of the client, server, and device. A user interface renders the set of performance metrics to facilitate comparisons between each cluster and category of the set of performance metrics.
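The scheduled testing across "different networks, operating systems, and browsers" can be read as enumerating a test matrix over those dimensions. The sketch below is an illustration only, not the patented implementation; the dimension values and function name are invented.

```python
from itertools import product

# Hypothetical scenario dimensions (all values invented for illustration).
OPERATING_SYSTEMS = ["Windows", "macOS", "Linux"]
BROWSERS = ["Chrome", "Firefox", "Safari"]
NETWORKS = ["3G", "4G", "WiFi"]

def build_test_matrix(oses, browsers, networks):
    """Enumerate every (OS, browser, network) scenario to schedule."""
    return [
        {"os": o, "browser": b, "network": n}
        for o, b, n in product(oses, browsers, networks)
    ]

matrix = build_test_matrix(OPERATING_SYSTEMS, BROWSERS, NETWORKS)
print(len(matrix))  # 27 scenarios
```

Each entry in the matrix would correspond to one scheduled test run against an app.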
21 Claims
1. An automated testing system for measuring performance metrics of apps, comprising:
at least one processor deployed on a server, the processor being programmed to implement the measuring of metrics for the performance testing by the server of a plurality of apps coupled to the server;

the at least one processor scheduling performance testing of the plurality of apps to generate a set of performance metrics from a client, server, and device relating to the performance of each app, wherein the performance metrics comprise processing times and requests associated with the app, and wherein the scheduled performance testing of each app is executed by a combination of the client, server, and device comprising different networks, operating systems, and browsers;

the at least one processor having a performance engine to capture the set of performance metrics of each app from the client, server, and device, and further to organize the set of performance metrics into categories based on an instrumentation and profile of each app, wherein the categories comprise clusters of the performance metrics;

the at least one processor having a graphical user interface for rendering the set of performance metrics in a manner that facilitates comparisons between each cluster and category of the set of performance metrics; and

the at least one processor having an input to receive data from processors of the client, server, and network, to present in a first instance a waterfall graph of event logging and processing time for events across the client, server, and network; and

in a second instance to present a flame graph of discrete events of creation time for each component on a page, whereby a bottleneck in event and component processing is identified visually at an event level across the client, server, and network, and at a component level across components of the page related to the client, server, and network, by a particular display of the waterfall and flame graphs. - View Dependent Claims (2, 3, 4, 5, 6, 7)
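The waterfall graph of claim 1 orders events by start time and duration across the client, server, and network, and the bottleneck is the event dominating processing time. A minimal sketch of that data shape, with invented event names and timings:

```python
from dataclasses import dataclass

# Illustrative waterfall rows (names, tiers, and timings are invented).
@dataclass
class Event:
    name: str
    tier: str       # "client", "server", or "network"
    start_ms: float
    duration_ms: float

events = [
    Event("DNS lookup", "network", 0, 12),
    Event("TCP connect", "network", 12, 30),
    Event("API handler", "server", 42, 180),
    Event("render page", "client", 222, 95),
]

def find_bottleneck(events):
    """A waterfall view visually exposes the event with the
    longest processing time as the bottleneck."""
    return max(events, key=lambda e: e.duration_ms)

bn = find_bottleneck(events)
print(bn.name, bn.tier)  # API handler server
```

This is only a data-model illustration; the claim does not specify how events are stored or ranked.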
8. A method for measuring performance metrics of apps during an app development cycle, the method comprising:
implementing performance testing by a server of a plurality of apps coupled to the server by programming a processor of the server to schedule performance testing of the plurality of apps to generate a set of performance metrics from a client, server, and network relating to the performance of each app, wherein the set of performance metrics comprises processing and request times associated with the app, and wherein the scheduled performance testing of each app is executed from a processor of the server for a combination of clients, servers, and networks comprising different devices, operating systems, and browsers;

capturing, by a performance engine, the set of performance metrics of each app from the clients, servers, and networks, and further organizing the set of performance metrics into categories based on an instrumentation and profile of each app, wherein the categories comprise clusters of the performance metrics; and

rendering, by a user interface, the set of performance metrics in a manner that facilitates comparisons between each cluster and category of the set of performance metrics for automated analysis and trending by the performance engine to determine an improvement or regression of a feature of an app through the app development cycle. - View Dependent Claims (9, 10, 11, 12, 13, 14)
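Claim 8 ends with automated trending to determine "an improvement or regression of a feature" across the development cycle. One plausible reading, sketched below with an invented threshold and function name, is comparing a feature's median metric against a prior baseline:

```python
from statistics import median

def classify_trend(baseline_ms, current_ms, threshold=0.05):
    """Return 'improvement', 'regression', or 'stable' based on the
    relative change in median processing time. The 5% threshold is an
    assumption, not taken from the patent."""
    base, cur = median(baseline_ms), median(current_ms)
    change = (cur - base) / base
    if change <= -threshold:
        return "improvement"
    if change >= threshold:
        return "regression"
    return "stable"

print(classify_trend([100, 110, 105], [130, 128, 135]))  # regression
```

A build pipeline could run this per feature after each test matrix completes and flag regressions before release.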
15. A system comprising:
at least one processor; and

at least one computer-readable storage device comprising instructions that, when executed, cause execution of a method of cloud automated testing for measuring performance metrics of apps during an app development cycle, the method comprising:

configuring a first module to connect a plurality of clients to an app server running the app;

configuring a second module to implement a plurality of operating systems;

configuring a third module to implement a plurality of web browsers;

configuring a fourth module to implement a plurality of network connections;

downloading the app, and loading pages and capturing performance metrics for each page of the downloaded app across a plurality of scenarios from the app server to each client using different combinations of the plurality of operating systems, web browsers, and network connections; and

configuring a fifth module to implement a user interface for rendering the captured performance metrics during the app development cycle, the rendering comprising: receiving data from processors of the client, server, and network to cluster together events across the client, server, and network, and, for particular discrete events of a page flow, creation times for each component on a page during the page flow, to identify a bottleneck in event and component processing at an event level across the client, server, and network and at a component level across components of the page, by a display of a waterfall graph and a flame graph for each event and component. - View Dependent Claims (16, 17, 18, 19, 20, 21)
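The flame graph of claim 15 stacks component creation times captured during a page flow, so each node's total is its own time plus its descendants'. A minimal sketch of that aggregation, with invented component names and timings:

```python
# Illustrative nested component-creation record for one page
# (component names and millisecond values are invented).
def total_time(node):
    """In a flame-graph view, a node's width reflects its self time
    plus the creation time of every descendant component."""
    return node["self_ms"] + sum(total_time(c) for c in node.get("children", []))

page = {
    "name": "CheckoutPage", "self_ms": 5,
    "children": [
        {"name": "CartList", "self_ms": 40,
         "children": [{"name": "CartRow", "self_ms": 12}]},
        {"name": "PaymentForm", "self_ms": 75},
    ],
}

print(total_time(page))  # 132
```

Ranking nodes by this total is one way the component-level bottleneck (here, PaymentForm's 75 ms) becomes visually apparent.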