
Tier aware caching solution to increase application performance

  • US 10,146,437 B2
  • Filed: 03/17/2015
  • Issued: 12/04/2018
  • Est. Priority Date: 03/17/2014
  • Status: Active Grant
First Claim

1. A method, comprising:

  • determining, by an application, a list of logical blocks to be processed or a list of changed logical blocks, wherein the changed logical blocks are logical blocks that have been changed after a last schedule in a system;

    passing, by the application to a caching API (application program interface), the list of logical blocks to be processed or the list of changed logical blocks;

    performing, by the caching API, a logical-to-physical mapping based on the list of logical blocks to be processed or the list of changed logical blocks, including determining, by the caching API, logical offsets of blocks and physical offsets of blocks involved in the logical-to-physical mapping wherein the logical-to-physical mapping identifies placements of physical blocks that are distributed among a faster tier storage and a slower tier storage;

    interacting, by the caching API, with a caching module in order for the caching API to obtain a list of physical blocks that are present in a cache;

    returning, by the caching module to the caching API, a list of physical blocks that are in the cache;

    performing, by the caching API, a physical-to-logical mapping based on the list of physical blocks that are distributed among the faster tier storage and the slower tier storage;

    based on the physical-to-logical mapping, providing, by the caching API (application program interface) to the application, a distribution of the list of logical blocks to be processed or the list of changed logical blocks that will be processed by the application, wherein the list of logical blocks to be processed or the list of changed logical blocks permits the application to be aware of a distribution of the list of logical blocks to be processed or the list of changed logical blocks across the cache which comprises the faster tier storage and across a permanent storage device which comprises the slower tier storage;

    wherein the logical blocks to be processed or the changed logical blocks are associated with at least one file;

    wherein the caching API includes a shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage;

    based on the shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage, issuing, by the application to the caching module, a synchronous input/output (IO) request on the logical blocks to be processed in the cache or the changed logical blocks in the cache and returning, by the caching module to the application, the logical blocks to be processed in the cache or the changed logical blocks in the cache to the application, and immediately processing, by the application, the logical blocks to be processed from the cache or the changed logical blocks from the cache so as to reduce an overall processing time of the logical blocks to be processed or the changed logical blocks associated with the at least one file and to reduce IO operation stream bottleneck problems;

    based on the shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage, subsequently issuing, by the application to the caching module, an asynchronous IO request on the logical blocks to be processed in the permanent storage device or the changed logical blocks in the permanent storage device and returning, by the caching module to the application, the logical blocks to be processed in the permanent storage device or the changed logical blocks in the permanent storage device to the application, and processing, by the application, the logical blocks to be processed from the permanent storage device or the changed logical blocks from the permanent storage device;

    wherein the application subsequently processes the logical blocks to be processed from the cache or the changed logical blocks from the cache and then processes the logical blocks to be processed from the permanent storage device or the changed logical blocks from the permanent storage device, irrespective of an ordering of the logical blocks to be processed or the changed logical blocks in the at least one file, and wherein the logical blocks to be processed or the changed logical blocks can span across multiple files or wherein the logical blocks to be processed or the changed logical blocks can span across the at least one file that comprises a fragmented single file.
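The flow recited in the claim (map logical blocks to physical offsets, ask the caching module which physical blocks are cached, map back to logical blocks, then issue a synchronous IO on the cached blocks followed by an asynchronous IO on the slower-tier blocks) can be sketched roughly as follows. This is an illustrative sketch only; every name here (`CachingAPI`, `CachingModule`, `process_changed_blocks`, the block numbers) is hypothetical and does not come from the patent.

```python
# Hypothetical sketch of the claimed tier-aware flow; names are invented.

class CachingModule:
    """Serves IO requests and knows which physical blocks are in the fast-tier cache."""

    def __init__(self, cached_physical_blocks):
        self._cached = set(cached_physical_blocks)

    def blocks_in_cache(self, physical_blocks):
        """Return the subset of physical blocks present in the cache."""
        return [b for b in physical_blocks if b in self._cached]

    def read(self, physical_blocks):
        """Stand-in for both the synchronous and asynchronous IO requests."""
        return {b: "data-%d" % b for b in physical_blocks}


class CachingAPI:
    """Shared-library stand-in that performs the logical<->physical mappings."""

    def __init__(self, logical_to_physical, caching_module):
        self._l2p = dict(logical_to_physical)             # logical -> physical offset
        self._p2l = {p: l for l, p in self._l2p.items()}  # physical -> logical offset
        self._module = caching_module

    def to_physical(self, logical_blocks):
        return [self._l2p[l] for l in logical_blocks]

    def to_logical(self, physical_blocks):
        return [self._p2l[p] for p in physical_blocks]

    def distribution(self, logical_blocks):
        """Split logical blocks into (cached, permanent-storage) lists."""
        cached_physical = self._module.blocks_in_cache(self.to_physical(logical_blocks))
        cached_logical = set(self.to_logical(cached_physical))
        in_cache = [l for l in logical_blocks if l in cached_logical]
        on_disk = [l for l in logical_blocks if l not in cached_logical]
        return in_cache, on_disk


def process_changed_blocks(api, module, changed_logical_blocks):
    """Process cached blocks first, then slower-tier blocks, irrespective of
    the blocks' ordering in the file."""
    in_cache, on_disk = api.distribution(changed_logical_blocks)
    order = []
    # Synchronous request: cached blocks return immediately and are processed first.
    for physical in module.read(api.to_physical(in_cache)):
        order.append(api.to_logical([physical])[0])
    # Asynchronous request issued afterwards for the permanent-storage blocks.
    for physical in module.read(api.to_physical(on_disk)):
        order.append(api.to_logical([physical])[0])
    return order
```

For example, with logical blocks 1, 2, 3 mapped to physical offsets 100, 101, 102 and offsets 100 and 102 cached, the application processes blocks 1 and 3 before block 2, regardless of their order in the file.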

  • 2 Assignments