Increased parallelization efficiency in tiering environments
First Claim
1. A computer-implemented method, comprising:
receiving an operation request which corresponds to a given object;
identifying multiple block addresses which are associated with the given object;
determining whether any one or more of the identified block addresses have a token currently issued thereon;
combining the multiple block addresses to a first set in response to determining that at least one token is currently issued on one or more of the identified block addresses;
transitioning a first portion of the block addresses to a second set, wherein the first portion includes ones of the block addresses determined as having a token currently issued thereon;
dividing a second portion of the block addresses into equal chunks, wherein the second portion includes the block addresses remaining in the first set;
allocating the chunks in the first set across two or more parallelization units;
dividing the block addresses in the second set into equal chunks; and
allocating the chunks in the second set to at least one dedicated parallelization unit.
Abstract
A computer-implemented method, according to one embodiment, includes: receiving an operation request which corresponds to a given object; identifying multiple block addresses which are associated with the given object; determining whether any one or more of the identified block addresses have a token currently issued thereon; and combining the multiple block addresses into a first set in response to determining that at least one token is currently issued on one or more of the identified block addresses. A first portion of the block addresses, namely those determined as having a token currently issued thereon, is transitioned to a second set. The remaining portion of the block addresses in the first set is divided into equal chunks, and those chunks are allocated across parallelization units. The block addresses in the second set are likewise divided into equal chunks, which are allocated to a dedicated parallelization unit.
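The allocation scheme summarized above can be sketched in a few lines of Python. This is a minimal illustration, not the claimed implementation: the helper names (`split_into_chunks`, `allocate`), the `has_token` predicate, the chunk size, and the round-robin assignment across units are all assumptions made for the example.

```python
def split_into_chunks(addresses, chunk_size):
    """Divide a list of block addresses into equal-sized chunks."""
    return [addresses[i:i + chunk_size]
            for i in range(0, len(addresses), chunk_size)]

def allocate(block_addresses, has_token, num_units, chunk_size=4):
    """Partition addresses by token status, chunk each set, assign chunks.

    Chunks built from token-free addresses (the first set) are spread
    round-robin across general parallelization units 0..num_units-1;
    chunks built from token-holding addresses (the second set) all go
    to one dedicated unit, so token handling cannot stall the others.
    """
    first_set = [a for a in block_addresses if not has_token(a)]   # no token issued
    second_set = [a for a in block_addresses if has_token(a)]      # token issued

    assignments = {}  # unit id -> list of chunks
    for i, chunk in enumerate(split_into_chunks(first_set, chunk_size)):
        assignments.setdefault(i % num_units, []).append(chunk)

    dedicated_unit = num_units  # one extra unit reserved for the second set
    for chunk in split_into_chunks(second_set, chunk_size):
        assignments.setdefault(dedicated_unit, []).append(chunk)
    return assignments
```

Keeping token-holding addresses on a dedicated unit means a chunk that must wait for a token revoke never blocks a parallelization unit that could be processing token-free chunks, which is the efficiency gain the title refers to.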
20 Claims
1. A computer-implemented method, comprising:
receiving an operation request which corresponds to a given object;
identifying multiple block addresses which are associated with the given object;
determining whether any one or more of the identified block addresses have a token currently issued thereon;
combining the multiple block addresses to a first set in response to determining that at least one token is currently issued on one or more of the identified block addresses;
transitioning a first portion of the block addresses to a second set, wherein the first portion includes ones of the block addresses determined as having a token currently issued thereon;
dividing a second portion of the block addresses into equal chunks, wherein the second portion includes the block addresses remaining in the first set;
allocating the chunks in the first set across two or more parallelization units;
dividing the block addresses in the second set into equal chunks; and
allocating the chunks in the second set to at least one dedicated parallelization unit.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
8. A computer program product comprising a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions readable and/or executable by a processor to cause the processor to perform a method comprising:
receiving, by the processor, an operation request which corresponds to a given object;
identifying, by the processor, multiple block addresses which are associated with the given object;
determining, by the processor, whether any one or more of the identified block addresses have a token currently issued thereon;
combining, by the processor, the multiple block addresses to a first set in response to determining that at least one token is currently issued on one or more of the identified block addresses;
transitioning, by the processor, a first portion of the block addresses to a second set, wherein the first portion includes ones of the block addresses determined as having a token currently issued thereon;
dividing, by the processor, a second portion of the block addresses into equal chunks, wherein the second portion includes the block addresses remaining in the first set;
allocating, by the processor, the chunks in the first set across two or more parallelization units;
dividing, by the processor, the block addresses in the second set into equal chunks; and
allocating, by the processor, the chunks in the second set to at least one dedicated parallelization unit.
- View Dependent Claims (9, 10, 11, 12, 13, 14)
15. A system, comprising:
a processor; and
logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, the logic being configured to:
receive, by the processor, an operation request which corresponds to a given file in a filesystem;
identify, by the processor, multiple block addresses which are associated with the given file;
determine, by the processor, whether any one or more of the identified block addresses have a token currently issued thereon;
combine, by the processor, the multiple block addresses to a first set in response to determining that at least one token is currently issued on one or more of the identified block addresses;
transition, by the processor, a first portion of the block addresses to a second set, wherein the first portion includes ones of the block addresses determined as having a token currently issued thereon;
divide, by the processor, a second portion of the block addresses into equal chunks, wherein the second portion includes the block addresses remaining in the first set;
allocate, by the processor, the chunks in the first set across two or more parallelization units;
divide, by the processor, the block addresses in the second set into equal chunks; and
allocate, by the processor, the chunks in the second set to at least one dedicated parallelization unit.
- View Dependent Claims (16, 17, 18, 19, 20)
Specification