Optimized pre-fetch ordering using de-duplication information to enhance network performance
First Claim
1. A method of optimizing an order of a pre-fetch list, the method comprising:
a computer determining a degree of information duplication between at least two files included in an original pre-fetch list;
the computer generating a re-ordered pre-fetch list by re-ordering the files included in the original pre-fetch list based, at least in part, on the degree of information duplication between the two files included in the original pre-fetch list, wherein the files included in the original pre-fetch list are re-ordered by grouping files containing higher degrees of duplicate information closer together in the re-ordered pre-fetch list;
the computer generating a weighted graph, wherein a node of the weighted graph is associated with a file included in the original pre-fetch list;
the computer connecting two or more nodes with a weighted edge wherein the weighted edge represents the degree of information duplication between two files respectively associated with the nodes;
the computer determining a weight for each sub-tree included in the weighted graph, wherein the weight for each sub-tree is based, at least in part, on a sum of weighted edges included in the sub-tree;
the computer generating an ordered sub-list of nodes included in the sub-tree, wherein the ordered sub-list of nodes is based, at least in part, on the weight of at least one edge associated with a given node; and
the computer generating a new pre-fetch list based, at least in part, on one or more sub-lists, wherein the sub-lists are added to the new pre-fetch list based on their associated sub-tree weights.
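Read as an algorithm, the claim's steps (similarity graph, sub-tree weighting, ordered sub-lists, concatenation by sub-tree weight) can be sketched as below. This is an illustrative reading only: the chunk-overlap `dedup_degree` measure and the Kruskal-style maximum-spanning-forest grouping are assumptions, not the implementation disclosed in the patent.

```python
# Illustrative sketch of the claimed ordering method. The chunking
# similarity and the spanning-forest grouping are assumed details.
from itertools import combinations

def dedup_degree(file_a: bytes, file_b: bytes, chunk: int = 8) -> float:
    """Assumed similarity measure: Jaccard overlap of fixed-size chunks."""
    chunks_a = {file_a[i:i + chunk] for i in range(0, len(file_a), chunk)}
    chunks_b = {file_b[i:i + chunk] for i in range(0, len(file_b), chunk)}
    if not chunks_a or not chunks_b:
        return 0.0
    return len(chunks_a & chunks_b) / len(chunks_a | chunks_b)

def reorder_prefetch(files: dict[str, bytes]) -> list[str]:
    names = list(files)
    # Weighted graph: one node per file, edge weight = degree of
    # information duplication between the two files it connects.
    edges = sorted(
        ((dedup_degree(files[a], files[b]), a, b)
         for a, b in combinations(names, 2)),
        reverse=True,
    )
    # Kruskal-style maximum spanning forest: greedily merge the most
    # similar groups, so highly duplicated files share a sub-tree.
    parent = {n: n for n in names}
    def find(n: str) -> str:
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path halving
            n = parent[n]
        return n
    groups = {n: [n] for n in names}        # ordered sub-list per sub-tree
    weight = {n: 0.0 for n in names}        # sum of edge weights per sub-tree
    for w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb and w > 0:
            parent[rb] = ra
            groups[ra] += groups.pop(rb)    # heavier edges joined first
            weight[ra] += weight.pop(rb) + w
    # New pre-fetch list: sub-lists concatenated by descending sub-tree weight.
    order = sorted(groups, key=lambda r: weight[r], reverse=True)
    return [f for r in order for f in groups[r]]
```

Files whose chunks overlap most are merged into the same sub-tree first, so they end up adjacent in the returned list; files with no duplication remain singleton sub-lists appended last by weight.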
1 Assignment
0 Petitions
Abstract
A computer determines a degree of information duplication between at least two files included in an original pre-fetch list. The computer generates a re-ordered pre-fetch list by re-ordering the files included in the original pre-fetch list. The re-ordering is based, at least in part, on the degree of information duplication between the two files included in the original pre-fetch list. The files included in the original pre-fetch list are re-ordered by grouping files containing higher degrees of duplicate information closer together in the re-ordered pre-fetch list.
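Why adjacency helps a deduplicating transfer can be seen with a bounded chunk cache: chunks the receiver already holds are replaced by short references, so sending similar files back-to-back maximizes reference hits before eviction. The following is a minimal sketch of that effect; the LRU cache model, 16-byte chunks, and 8-byte reference tokens are illustrative assumptions, not details from the patent.

```python
# Minimal model of a dedup-aware transfer with a bounded chunk cache.
# Chunk size, cache size, and the 8-byte reference token are assumptions.
import hashlib
from collections import OrderedDict

def bytes_sent(files_in_order: list[bytes], chunk: int = 16,
               cache_slots: int = 4) -> int:
    cache: OrderedDict[bytes, None] = OrderedDict()  # LRU of chunk digests
    total = 0
    for data in files_in_order:
        for i in range(0, len(data), chunk):
            piece = data[i:i + chunk]
            digest = hashlib.sha256(piece).digest()
            if digest in cache:
                cache.move_to_end(digest)
                total += 8                   # short back-reference only
            else:
                if len(cache) >= cache_slots:
                    cache.popitem(last=False)  # evict least recently used
                cache[digest] = None
                total += len(piece)          # raw chunk must be sent
    return total
```

With two near-duplicate files and one unrelated file, ordering the duplicates back-to-back keeps their shared chunks cached when the second copy arrives, while interleaving the unrelated file evicts those chunks and forces a full resend.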
12 Citations
14 Claims
1. A method of optimizing an order of a pre-fetch list, the method comprising:

a computer determining a degree of information duplication between at least two files included in an original pre-fetch list;
the computer generating a re-ordered pre-fetch list by re-ordering the files included in the original pre-fetch list based, at least in part, on the degree of information duplication between the two files included in the original pre-fetch list, wherein the files included in the original pre-fetch list are re-ordered by grouping files containing higher degrees of duplicate information closer together in the re-ordered pre-fetch list;
the computer generating a weighted graph, wherein a node of the weighted graph is associated with a file included in the original pre-fetch list;
the computer connecting two or more nodes with a weighted edge, wherein the weighted edge represents the degree of information duplication between two files respectively associated with the nodes;
the computer determining a weight for each sub-tree included in the weighted graph, wherein the weight for each sub-tree is based, at least in part, on a sum of weighted edges included in the sub-tree;
the computer generating an ordered sub-list of nodes included in the sub-tree, wherein the ordered sub-list of nodes is based, at least in part, on the weight of at least one edge associated with a given node; and
the computer generating a new pre-fetch list based, at least in part, on one or more sub-lists, wherein the sub-lists are added to the new pre-fetch list based on their associated sub-tree weights.

Dependent claims: 2, 3, 4, 5.
6. A computer program product for optimizing an order of a pre-fetch list, the computer program product comprising:
one or more computer-readable storage media and program instructions stored on the one or more computer-readable storage media, the program instructions comprising:

program instructions to determine a degree of information duplication between at least two files included in an original pre-fetch list;
program instructions to generate a re-ordered pre-fetch list by re-ordering the files included in the original pre-fetch list based, at least in part, on the degree of information duplication between the two files included in the original pre-fetch list, wherein the files included in the original pre-fetch list are re-ordered by grouping files containing higher degrees of duplicate information closer together in the re-ordered pre-fetch list;
program instructions to generate a weighted graph, wherein a node of the weighted graph is associated with a file included in the original pre-fetch list;
program instructions to connect two or more nodes with a weighted edge, wherein the weighted edge represents the degree of information duplication between two files respectively associated with the nodes;
program instructions to determine a weight for each sub-tree included in the weighted graph, wherein the weight for each sub-tree is based, at least in part, on a sum of weighted edges included in the sub-tree;
program instructions to generate an ordered sub-list of nodes included in the sub-tree, wherein the ordered sub-list of nodes is based, at least in part, on the weight of at least one edge associated with a given node; and
program instructions to generate a new pre-fetch list based, at least in part, on one or more sub-lists, wherein the sub-lists are added to the new pre-fetch list based on their associated sub-tree weights.

Dependent claims: 7, 8, 9, 10.
11. A computer system for optimizing an order of a pre-fetch list, the computer system comprising:
one or more computer processors;
one or more computer-readable storage media;
program instructions stored on the computer-readable storage media for execution by at least one of the one or more processors, the program instructions comprising:

program instructions to determine a degree of information duplication between at least two files included in an original pre-fetch list;
program instructions to generate a re-ordered pre-fetch list by re-ordering the files included in the original pre-fetch list based, at least in part, on the degree of information duplication between the two files included in the original pre-fetch list, wherein the files included in the original pre-fetch list are re-ordered by grouping files containing higher degrees of duplicate information closer together in the re-ordered pre-fetch list;
program instructions to generate a weighted graph, wherein a node of the weighted graph is associated with a file included in the original pre-fetch list;
program instructions to connect two or more nodes with a weighted edge, wherein the weighted edge represents the degree of information duplication between two files respectively associated with the nodes;
program instructions to determine a weight for each sub-tree included in the weighted graph, wherein the weight for each sub-tree is based, at least in part, on a sum of weighted edges included in the sub-tree;
program instructions to generate an ordered sub-list of nodes included in the sub-tree, wherein the ordered sub-list of nodes is based, at least in part, on the weight of at least one edge associated with a given node; and
program instructions to generate a new pre-fetch list based, at least in part, on one or more sub-lists, wherein the sub-lists are added to the new pre-fetch list based on their associated sub-tree weights.

Dependent claims: 12, 13, 14.
Specification