Tier aware caching solution to increase application performance
2 Assignments
0 Petitions
Abstract
An embodiment of the invention provides a method comprising: permitting an application to be aware of a distribution of data of the application across a cache and a permanent storage device. The cache comprises a solid state device, and the permanent storage device comprises a disk or a memory. In another embodiment of the invention, an apparatus comprises: a caching application program interface configured to permit an application to be aware of a distribution of data of the application across a cache and a permanent storage device. The caching application program interface is configured to determine an input/output strategy to consume the data based on the distribution of the data.
66 Citations
17 Claims
1. A method, comprising:
determining, by an application, a list of logical blocks to be processed or a list of changed logical blocks, wherein the changed logical blocks are logical blocks that have been changed after a last schedule in a system;
passing, by the application to a caching API (application program interface), the list of logical blocks to be processed or the list of changed logical blocks;
performing, by the caching API, a logical-to-physical mapping based on the list of logical blocks to be processed or the list of changed logical blocks, including determining, by the caching API, logical offsets of blocks and physical offsets of blocks involved in the logical-to-physical mapping, wherein the logical-to-physical mapping identifies placements of physical blocks that are distributed among a faster tier storage and a slower tier storage;
interacting, by the caching API, with a caching module in order for the caching API to obtain a list of physical blocks that are present in a cache;
returning, by the caching module to the caching API, a list of physical blocks that are in the cache;
performing, by the caching API, a physical-to-logical mapping based on the list of physical blocks that are distributed among the faster tier storage and the slower tier storage;
based on the physical-to-logical mapping, providing, by the caching API to the application, a distribution of the list of logical blocks to be processed or the list of changed logical blocks that will be processed by the application, wherein the list of logical blocks to be processed or the list of changed logical blocks permits the application to be aware of a distribution of the list of logical blocks to be processed or the list of changed logical blocks across the cache, which comprises the faster tier storage, and across a permanent storage device, which comprises the slower tier storage;
wherein the logical blocks to be processed or the changed logical blocks are associated with at least one file;
wherein the caching API includes a shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage;
based on the shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage, issuing, by the application to the caching module, a synchronous input/output (IO) request on the logical blocks to be processed in the cache or the changed logical blocks in the cache, and returning, by the caching module to the application, the logical blocks to be processed in the cache or the changed logical blocks in the cache, and immediately processing, by the application, the logical blocks to be processed from the cache or the changed logical blocks from the cache so as to reduce an overall processing time of the logical blocks to be processed or the changed logical blocks associated with the at least one file and to reduce IO operation stream bottleneck problems;
based on the shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage, subsequently issuing, by the application to the caching module, an asynchronous IO request on the logical blocks to be processed in the permanent storage device or the changed logical blocks in the permanent storage device, and returning, by the caching module to the application, the logical blocks to be processed in the permanent storage device or the changed logical blocks in the permanent storage device, and processing, by the application, the logical blocks to be processed from the permanent storage device or the changed logical blocks from the permanent storage device;
wherein the application subsequently processes the logical blocks to be processed from the cache or the changed logical blocks from the cache and then processes the logical blocks to be processed from the permanent storage device or the changed logical blocks from the permanent storage device, irrespective of an ordering of the logical blocks to be processed or the changed logical blocks in the at least one file, and wherein the logical blocks to be processed or the changed logical blocks can span across multiple files or wherein the logical blocks to be processed or the changed logical blocks can span across the at least one file that comprises a fragmented single file.
- View Dependent Claims (2, 3, 4, 5, 6)
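The mapping flow recited in claim 1 can be sketched in a few lines of Python. The class names (`CachingAPI`, `CachingModule`), the mapping tables, and the example block numbers below are all invented for illustration; the patent does not prescribe any particular implementation, and a real system would derive the logical-to-physical table from the file system rather than from a hard-coded dictionary.

```python
# Hypothetical sketch of the claimed block-distribution query (claim 1).
# All names and data are illustrative assumptions, not the patented design.

class CachingModule:
    """Owns the cache (faster tier) and knows which physical blocks it holds."""
    def __init__(self, cached_physical_blocks):
        self.cached = set(cached_physical_blocks)

    def physical_blocks_in_cache(self, physical_blocks):
        # Return the subset of the requested physical blocks present in cache.
        return [p for p in physical_blocks if p in self.cached]

class CachingAPI:
    """Shared library performing logical<->physical mapping for the app."""
    def __init__(self, logical_to_physical, caching_module):
        self.l2p = logical_to_physical  # logical offset -> physical offset
        self.p2l = {p: l for l, p in logical_to_physical.items()}
        self.module = caching_module

    def distribution(self, logical_blocks):
        # Logical-to-physical mapping of the blocks the app wants to process.
        physical = [self.l2p[l] for l in logical_blocks]
        # Ask the caching module which of those physical blocks are cached.
        in_cache = self.module.physical_blocks_in_cache(physical)
        # Physical-to-logical mapping back, so the app sees its logical blocks
        # split between the faster tier (cache) and the slower tier (disk).
        cached_logical = [self.p2l[p] for p in in_cache]
        disk_logical = [l for l in logical_blocks if l not in cached_logical]
        return cached_logical, disk_logical

# Example: logical blocks 0-3, where physical blocks 100 and 102 are cached.
api = CachingAPI({0: 100, 1: 101, 2: 102, 3: 103}, CachingModule({100, 102}))
cached, on_disk = api.distribution([0, 1, 2, 3])
# cached == [0, 2]; on_disk == [1, 3]
```

The returned pair is the "distribution" the claim refers to: it is what lets the application choose a different IO strategy for cached blocks than for blocks on the permanent storage device.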
7. An apparatus, comprising:
a caching API (application program interface) configured to provide, to an application, a distribution of a list of logical blocks to be processed or a list of changed logical blocks that will be processed by the application, wherein the list of logical blocks to be processed or the list of changed logical blocks permits the application to be aware of a distribution of the list of logical blocks to be processed or the changed logical blocks across a cache and a permanent storage device;
a caching module coupled to the cache;
wherein the application determines the list of logical blocks to be processed or the list of changed logical blocks, wherein the changed logical blocks are logical blocks that have been changed after a last schedule in a system;
wherein the application passes the list of logical blocks to be processed or the list of changed logical blocks to the caching API;
wherein the caching API performs a logical-to-physical mapping based on the list of logical blocks to be processed or the list of changed logical blocks, and wherein the caching API determines logical offsets of blocks and physical offsets of blocks involved in the logical-to-physical mapping, wherein the logical-to-physical mapping identifies placements of physical blocks that are distributed among a faster tier storage and a slower tier storage;
wherein the caching API interacts with the caching module in order for the caching API to obtain a list of physical blocks that are present in the cache;
wherein the caching module returns, to the caching API, a list of physical blocks that are in the cache;
wherein the caching API performs a physical-to-logical mapping based on the list of physical blocks that are distributed among the faster tier storage and the slower tier storage;
wherein, based on the physical-to-logical mapping, the caching API provides, to the application, the distribution of the list of logical blocks to be processed or the list of changed logical blocks that will be processed by the application;
wherein the logical blocks to be processed or the changed logical blocks are associated with at least one file;
wherein the caching API includes a shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage;
wherein the application issues, to the caching module, a synchronous input/output (IO) request on the logical blocks to be processed in the cache or the changed logical blocks in the cache based on the shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage, and the logical blocks to be processed in the cache or the changed logical blocks in the cache are returned by the caching module to the application, and the application immediately processes the logical blocks to be processed from the cache or the changed logical blocks from the cache so as to reduce an overall processing time of the logical blocks to be processed or the changed logical blocks associated with the at least one file and to reduce IO operation stream bottleneck problems;
wherein the application subsequently issues, to the caching module, an asynchronous IO request on the logical blocks to be processed in the permanent storage device or the changed logical blocks in the permanent storage device based on the shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage, and the logical blocks to be processed in the permanent storage device or the changed logical blocks in the permanent storage device are returned by the caching module to the application, and the application processes the logical blocks to be processed from the permanent storage device or the changed logical blocks from the permanent storage device;
wherein the application subsequently processes the logical blocks to be processed from the cache or the changed logical blocks from the cache and then processes the logical blocks to be processed from the permanent storage device or the changed logical blocks from the permanent storage device, irrespective of an ordering of the logical blocks to be processed or the changed logical blocks in the at least one file, and wherein the logical blocks to be processed or the changed logical blocks can span across multiple files or wherein the logical blocks to be processed or the changed logical blocks can span across the at least one file that comprises a fragmented single file.
- View Dependent Claims (8, 9, 10, 11, 12)
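The two-phase consumption strategy recited in the apparatus claim (synchronous IO on cached blocks first, asynchronous IO on slower-tier blocks afterwards, processed out of file order) can be sketched as follows. The helper names and the thread-pool stand-in for asynchronous IO are assumptions made for illustration only; the claim does not mandate threads or any specific async mechanism.

```python
# Hypothetical sketch of the claimed sync-then-async IO strategy.
# `read_block` stands in for a real request to the caching module;
# a ThreadPoolExecutor stands in for asynchronous IO. Both are
# illustrative assumptions, not the patented implementation.

from concurrent.futures import ThreadPoolExecutor

def read_block(tier, block):
    # Placeholder for an IO request against the named storage tier.
    return f"{tier}:{block}"

def process(app_log, data):
    # Placeholder for the application's per-block processing.
    app_log.append(data)

def consume(cached_logical, disk_logical):
    app_log = []
    with ThreadPoolExecutor() as pool:
        # Phase 1: synchronous IO on blocks already in the cache,
        # processed immediately so the application is not stalled
        # behind reads from the slower tier.
        for b in cached_logical:
            process(app_log, read_block("cache", b))
        # Phase 2: subsequently issue asynchronous IO requests for the
        # blocks on the permanent storage device, then process each
        # result as it completes, irrespective of file order.
        futures = [pool.submit(read_block, "disk", b) for b in disk_logical]
        for f in futures:
            process(app_log, f.result())
    return app_log

log = consume([0, 2], [1, 3])
# log == ["cache:0", "cache:2", "disk:1", "disk:3"]
```

The point of the ordering is the one the claim states: cached (faster-tier) blocks are consumed without waiting on slower-tier IO, which shortens overall processing time and relieves the IO bottleneck.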
13. An article of manufacture, comprising:
a non-transitory computer-readable medium having stored thereon instructions operable to permit an apparatus to perform a method comprising:
determining, by an application, a list of logical blocks to be processed or a list of changed logical blocks, wherein the changed logical blocks are logical blocks that have been changed after a last schedule in a system;
passing, by the application to a caching API (application program interface), the list of logical blocks to be processed or the list of changed logical blocks;
performing, by the caching API, a logical-to-physical mapping based on the list of logical blocks to be processed or the list of changed logical blocks, including determining, by the caching API, logical offsets of blocks and physical offsets of blocks involved in the logical-to-physical mapping, wherein the logical-to-physical mapping identifies placements of physical blocks that are distributed among a faster tier storage and a slower tier storage;
interacting, by the caching API, with a caching module in order for the caching API to obtain a list of physical blocks that are present in a cache;
returning, by the caching module to the caching API, a list of physical blocks that are in the cache;
performing, by the caching API, a physical-to-logical mapping based on the list of physical blocks that are distributed among the faster tier storage and the slower tier storage;
based on the physical-to-logical mapping, providing, by the caching API to the application, a distribution of the list of logical blocks to be processed or the list of changed logical blocks that will be processed by the application, wherein the list of logical blocks to be processed or the list of changed logical blocks permits the application to be aware of a distribution of the list of logical blocks to be processed or the list of changed logical blocks across the cache, which comprises the faster tier storage, and across a permanent storage device, which comprises the slower tier storage;
wherein the logical blocks to be processed or the changed logical blocks are associated with at least one file;
wherein the caching API includes a shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage;
based on the shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage, issuing, by the application to the caching module, a synchronous input/output (IO) request on the logical blocks to be processed in the cache or the changed logical blocks in the cache, and returning, by the caching module to the application, the logical blocks to be processed in the cache or the changed logical blocks in the cache, and immediately processing, by the application, the logical blocks to be processed from the cache or the changed logical blocks from the cache so as to reduce an overall processing time of the logical blocks to be processed or the changed logical blocks associated with the at least one file and to reduce IO operation stream bottleneck problems;
based on the shared library that uses the logical-to-physical mapping and the physical-to-logical mapping for permitting the application to become aware of placements of the logical blocks to be processed or the changed logical blocks in the cache that comprises the faster tier storage and in the permanent storage device that comprises the slower tier storage, subsequently issuing, by the application to the caching module, an asynchronous IO request on the logical blocks to be processed in the permanent storage device or the changed logical blocks in the permanent storage device, and returning, by the caching module to the application, the logical blocks to be processed in the permanent storage device or the changed logical blocks in the permanent storage device, and processing, by the application, the logical blocks to be processed from the permanent storage device or the changed logical blocks from the permanent storage device;
wherein the application subsequently processes the logical blocks to be processed from the cache or the changed logical blocks from the cache and then processes the logical blocks to be processed from the permanent storage device or the changed logical blocks from the permanent storage device, irrespective of an ordering of the logical blocks to be processed or the changed logical blocks in the at least one file, and wherein the logical blocks to be processed or the changed logical blocks can span across multiple files or wherein the logical blocks to be processed or the changed logical blocks can span across the at least one file that comprises a fragmented single file.
- View Dependent Claims (14, 15, 16, 17)
Specification