Predictive rollup and caching for application performance data
First Claim
1. A system for pre-fetching performance data in a monitored environment, including:
- a processor;
- a memory; and
- one or more modules stored in the memory and executable by the processor to perform operations including:
  - record queries that request application performance data with latencies longer than a threshold;
  - learn, via a machine learning process, access patterns in the recorded queries with latencies longer than the threshold, wherein the access pattern includes when and how often a same query is requested;
  - based on the learned access patterns, pre-fetch and cache the application performance data requested by the recorded queries before the same recorded queries are requested next time, wherein pre-fetching includes:
    - identifying a rollup process required by the recorded queries, wherein the rollup process includes processing data into a next granular format; and
  - provide the pre-fetched application performance data from the cache when the same recorded queries are requested next time.
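The record-and-learn steps of the claim can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and method names are hypothetical, and the 2-second latency threshold is an arbitrary example value.

```python
import time
from collections import defaultdict

class SlowQueryRecorder:
    """Records only queries whose latency exceeds a threshold and tracks
    when and how often the same query text recurs."""

    def __init__(self, threshold_seconds=2.0):
        self.threshold = threshold_seconds
        self.history = defaultdict(list)  # query text -> request timestamps

    def observe(self, query, latency_seconds, now=None):
        # Record the query only when its latency exceeds the threshold.
        if latency_seconds > self.threshold:
            self.history[query].append(time.time() if now is None else now)

    def access_pattern(self, query):
        # Summarize "when and how often": request count and mean gap
        # between requests (None until at least two requests are seen).
        stamps = self.history.get(query, [])
        if len(stamps) < 2:
            return len(stamps), None
        gaps = [b - a for a, b in zip(stamps, stamps[1:])]
        return len(stamps), sum(gaps) / len(gaps)
```

A query observed three times a minute apart yields a pattern of count 3 with a mean gap of 60 seconds, while fast queries below the threshold are never recorded.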
Abstract
In one aspect, a system for pre-fetching performance data in a monitored environment is disclosed. The system can include a processor; a memory; and one or more modules stored in the memory. The one or more modules are executable by the processor to perform operations including: record queries that request application performance data with latencies longer than a threshold; learn access patterns in the recorded queries with latencies longer than the threshold; pre-fetch and cache the application performance data requested by the recorded queries before the same recorded queries are requested next time; and provide the pre-fetched application performance data from the cache when the same recorded queries are requested next time.
22 Claims
1. A system for pre-fetching performance data in a monitored environment, including:
- a processor;
- a memory; and
- one or more modules stored in the memory and executable by the processor to perform operations including:
  - record queries that request application performance data with latencies longer than a threshold;
  - learn, via a machine learning process, access patterns in the recorded queries with latencies longer than the threshold, wherein the access pattern includes when and how often a same query is requested;
  - based on the learned access patterns, pre-fetch and cache the application performance data requested by the recorded queries before the same recorded queries are requested next time, wherein pre-fetching includes: identifying a rollup process required by the recorded queries, wherein the rollup process includes processing data into a next granular format; and
  - provide the pre-fetched application performance data from the cache when the same recorded queries are requested next time.

Dependent claims: 2, 3, 4, 5, 6, 7, 8
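The rollup process the claims identify — processing data into a "next granular format" — can be illustrated with a small aggregation sketch. The function name and the choice of averaging minute-level samples into hourly buckets are assumptions for illustration; the patent does not specify the aggregation.

```python
from collections import defaultdict

def roll_up(points, bucket_seconds=3600):
    """Aggregate fine-grained (timestamp, value) samples into the next
    granular format: here, hourly averages from minute-level data."""
    buckets = defaultdict(list)
    for ts, value in points:
        # Align each sample to the start of its coarser bucket.
        buckets[ts - ts % bucket_seconds].append(value)
    # One averaged point per bucket, keyed by bucket start time.
    return {start: sum(vals) / len(vals)
            for start, vals in sorted(buckets.items())}
```

Running the rollup once at pre-fetch time means the coarser series is already materialized in the cache when the recurring query arrives, instead of being recomputed on demand.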
9. A method for pre-fetching performance data in a monitored environment, including:
- receiving a query for application performance data associated with the monitored environment;
- determining whether latency of the received query exceeds a threshold;
- recording the query based on determining that the latency of the received query exceeds the threshold;
- learning, via a machine learning process, an access pattern in the recorded query with the latency longer than the threshold, wherein the access pattern includes when and how often a same query is requested;
- based on the learned access pattern, pre-fetching and caching the application performance data requested by the recorded query before the same recorded query is requested next time, wherein pre-fetching includes: identifying a rollup process required by the recorded query, wherein the rollup process includes processing data into a next granular format;
- providing the pre-fetched application performance data from the cache when the same recorded query is requested next time; and
- updating the learned access pattern based on data obtained the next time the same recorded query is requested.

Dependent claims: 10, 11, 12, 13, 14, 15, 16
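The method's pre-fetch, cache-serve, and pattern-update loop can be sketched as a small scheduler. This is a hypothetical sketch: the class, the 30-second pre-fetch lead, and the exponential-smoothing update of the learned interval are all illustrative choices, not details from the patent.

```python
class PrefetchScheduler:
    """Pre-fetches a slow query shortly before its predicted next request,
    serves the result from cache, and updates the learned access pattern."""

    def __init__(self, fetch_fn, lead_seconds=30):
        self.fetch_fn = fetch_fn          # runs the expensive query
        self.lead_seconds = lead_seconds  # how early to pre-fetch
        self.cache = {}
        self.patterns = {}  # query -> {"last": timestamp, "interval": seconds}

    def learn(self, query, last_seen, interval):
        # Seed the pattern, e.g. from recorded slow-query history.
        self.patterns[query] = {"last": last_seen, "interval": interval}

    def tick(self, now):
        # Pre-fetch any query whose predicted next request is near.
        for query, p in self.patterns.items():
            due = p["last"] + p["interval"] - self.lead_seconds
            if now >= due and query not in self.cache:
                self.cache[query] = self.fetch_fn(query)

    def on_request(self, query, now):
        # Serve from cache when pre-fetched; fall back to a direct fetch.
        result = self.cache.pop(query, None)
        if result is None:
            result = self.fetch_fn(query)
        # Update the learned pattern with the newly observed request gap.
        p = self.patterns[query]
        p["interval"] = 0.5 * p["interval"] + 0.5 * (now - p["last"])
        p["last"] = now
        return result
```

With a learned interval of 100 seconds and a 30-second lead, a query last seen at t=0 is pre-fetched at any tick from t=70 onward, so the request at t=100 is answered from the cache without re-running the expensive fetch.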
17. A non-transitory computer-readable medium embodying instructions that, when executed by a processor, cause operations to be performed, the operations including:
- recording a query requesting performance data that has a latency exceeding a threshold latency time;
- learning, by a machine learning process, an access pattern in the recorded query with the latency longer than the threshold, wherein the access pattern includes when and how often a same query is requested;
- based on the learned access pattern, pre-fetching and caching the application performance data requested by the recorded query before the same recorded query is requested next time, wherein pre-fetching includes: identifying a rollup process required by the recorded query, wherein the rollup process includes processing data into a next granular format; and
- providing the pre-fetched application performance data from the cache when the same recorded query is requested next time.

Dependent claims: 18, 19, 20, 21, 22
Specification