Systems and methods for management of memory in information delivery environments
First Claim
1. A method of managing memory units, comprising assigning a memory unit of an oversize data object to one of two or more memory positions based on a status of at least one first memory parameter that reflects the number of anticipated future requests for access to said memory unit, the elapsed time until receipt of a future request for access to said memory unit, or a combination thereof.
Abstract
Memory management systems and methods that may be employed, for example, to provide efficient management of memory for network systems. The disclosed systems and methods may consider the cost-benefit trade-off between the cache value of a particular memory unit and the cost of caching that memory unit, and may utilize a multi-layer queue management structure to manage buffer/cache memory in an integrated fashion. The disclosed systems and methods may be implemented as part of an information management system, such as a network processing system that is operable to process over-size data objects communicated via a network environment, and that may include a network processor operable to process network-communicated information and a memory management system operable to manage disposition of individual memory units of over-size data objects based upon one or more parameters, such as one or more parameters reflecting the cost and value associated with maintaining the information in integrated buffer/cache memory.
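The cost-benefit trade-off described in the abstract can be illustrated with a minimal sketch. The scoring formula, function name, and units below are assumptions for illustration only, not the patented implementation: value is taken as the number of external storage I/O requests a cached unit may eliminate, and cost as the byte-seconds of cache the unit would occupy.

```python
# Illustrative sketch only; the ratio and its inputs are assumed, not
# taken from the patent. Higher score -> more worth keeping in cache.

def retention_score(anticipated_requests: int,
                    unit_size_bytes: int,
                    expected_residency_s: float) -> float:
    """Weigh cache value (I/O requests avoided) against cache cost
    (size of the unit times how long it would be held)."""
    value = anticipated_requests                      # future I/Os eliminated
    cost = unit_size_bytes * expected_residency_s     # byte-seconds held
    return value / cost if cost > 0 else float("inf")
```

Under this sketch, a frequently requested unit of a given size outranks a rarely requested unit of the same size and residency, which is the qualitative behavior the abstract describes.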
95 Claims
1. A method of managing memory units, comprising assigning a memory unit of an oversize data object to one of two or more memory positions based on a status of at least one first memory parameter that reflects the number of anticipated future requests for access to said memory unit, the elapsed time until receipt of a future request for access to said memory unit, or a combination thereof.
19. A method of managing memory units within an information delivery environment, comprising assigning a memory unit of an over-size data object to one of a plurality of memory positions based on a status of at least one first memory parameter and a status of at least one second memory parameter;
said first memory parameter reflecting the number of anticipated future requests for access to said memory unit, the elapsed time until receipt of a future request for access to said memory unit, or a combination thereof; and
said second memory parameter reflecting the number of memory units existing in the data interval between an existing viewer of said memory unit and a succeeding viewer of said memory unit, the difference in data consumption rate between said existing viewer and said succeeding viewer of said memory unit, or a combination thereof. - View Dependent Claims (20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38)
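The two-parameter assignment recited in claim 19 can be sketched as a simple placement decision. The position names, thresholds, and parameter semantics below are illustrative assumptions: the first parameter is read as "how soon or how often the unit will be requested," and the second as "how far a succeeding viewer lags the existing viewer."

```python
# Hedged sketch of claim 19's two-parameter assignment. Position labels
# ("free_pool", "hot_buffer", "warm_cache") and thresholds are invented
# for illustration; the claim does not name specific positions.

def assign_position(anticipated_requests: int,
                    seconds_until_next_request: float,
                    units_between_viewers: int) -> str:
    if anticipated_requests == 0:
        return "free_pool"        # no future demand: release the unit
    if seconds_until_next_request < 1.0 or units_between_viewers < 8:
        return "hot_buffer"       # a viewer will need it almost immediately
    return "warm_cache"           # retain, but off the immediate-service path
```

The point of the sketch is only that both a demand-side parameter and a viewer-interval parameter jointly select the memory position, as the claim requires.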
39. A method of managing memory units using an integrated memory management structure, comprising:
assigning memory units of an over-size data object to one or more positions within a buffer memory defined by said integrated structure;
subsequently reassigning said memory units from said buffer memory to one or more positions within a cache memory defined by said structure or to a free pool memory defined by said structure; and
subsequently removing said memory units from assignment to a position within said free pool memory;
wherein said reassignment of said memory units from said buffer memory to one or more positions within said cache memory is based on the combination of at least one first memory parameter and at least one second memory parameter, wherein said first memory parameter reflects the value of maintaining said memory units within said cache memory in terms of future external storage I/O requests that may be eliminated by maintaining said memory units in said cache memory, and wherein said second memory parameter reflects cost of maintaining said memory units within said cache memory in terms of the size of said memory units and duration of storage associated with maintaining said memory units within said cache memory. - View Dependent Claims (40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56)
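The three-stage lifecycle of claim 39 (assignment to buffer memory, reassignment to cache or free pool based on a value/cost comparison, and removal from the free pool) can be sketched as follows. The class layout, the threshold value, and the value/cost ratio test are assumptions for illustration, not the patented method.

```python
# Illustrative sketch of claim 39's integrated buffer/cache/free-pool
# lifecycle. Threshold and data layout are assumed, not from the patent.

class MemoryUnit:
    def __init__(self, uid, size_bytes, anticipated_io_savings, residency_s):
        self.uid = uid
        self.size_bytes = size_bytes
        self.anticipated_io_savings = anticipated_io_savings  # value parameter
        self.residency_s = residency_s                        # drives cost

class IntegratedManager:
    CACHE_THRESHOLD = 1e-3  # value/cost ratio cut-off; illustrative only

    def __init__(self):
        self.buffer, self.cache, self.free_pool = {}, {}, {}

    def admit(self, unit):                  # stage 1: assign to buffer memory
        self.buffer[unit.uid] = unit

    def reassign(self, uid):                # stage 2: buffer -> cache or free pool
        unit = self.buffer.pop(uid)
        cost = unit.size_bytes * unit.residency_s
        if cost and unit.anticipated_io_savings / cost >= self.CACHE_THRESHOLD:
            self.cache[uid] = unit          # value (I/O saved) justifies cost
        else:
            self.free_pool[uid] = unit

    def release(self, uid):                 # stage 3: remove from free pool
        self.free_pool.pop(uid, None)
```

A unit whose anticipated I/O savings justify its size-times-duration cost is reassigned to cache; otherwise it falls to the free pool and is eventually released.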
57. A method of managing memory units of an over-size data object using a multi-dimensional logical memory management structure, comprising:
providing two or more spatially-offset organizational sub-structures, said substructures being spatially offset in symmetric or asymmetric spatial relationship to form said multi-dimensional management structure, each of said sub-structures having one or more memory unit positions defined therein; and
assigning and reassigning memory units of an over-size data object between memory unit positions located in different organizational sub-structures, between positions located within the same organizational sub-structure, or a combination thereof;
wherein said assigning and reassigning of memory units of an over-size data object within said structure is based on multiple memory state parameters. - View Dependent Claims (58, 59, 60)
61. A method of managing memory units using an integrated two-dimensional logical memory management structure, comprising:
providing a first horizontal buffer memory layer comprising two or more sequentially ascending buffer memory positions;
providing a first horizontal cache memory layer comprising one or more sequentially ascending cache memory positions and a lowermost memory position that comprises a free pool memory position, said first horizontal cache memory layer being vertically offset from said first horizontal buffer memory layer;
horizontally assigning and reassigning memory units of an over-size data object between said buffer memory positions within said first horizontal buffer memory layer based on at least one first memory parameter;
horizontally assigning and reassigning memory units of an over-size data object between said cache memory positions and said free pool memory position within said first horizontal cache memory layer based on at least one second memory parameter; and
vertically assigning and reassigning memory units of an over-size data object between said first horizontal buffer memory layer and said first horizontal cache memory layer based on at least one third memory parameter. - View Dependent Claims (62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72)
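The two-dimensional structure of claim 61 (horizontal layers of sequentially ascending positions, with horizontal moves within a layer and vertical moves between layers) can be sketched with a minimal data structure. The layer sizes, the use of Python sets for positions, and the treatment of cache position 0 as the free pool are illustrative assumptions.

```python
# Sketch of claim 61's two-dimensional logical structure. The number of
# positions per layer and the set-based representation are assumptions;
# the claim only requires ascending positions and horizontal/vertical moves.

class TwoDimensionalStructure:
    def __init__(self, buffer_positions=4, cache_positions=3):
        # Index 0 of the cache layer stands in for the free pool position,
        # the lowermost position of that layer in the claim.
        self.layers = {
            "buffer": [set() for _ in range(buffer_positions)],
            "cache":  [set() for _ in range(cache_positions + 1)],
        }

    def place(self, layer, position, uid):
        self.layers[layer][position].add(uid)

    def move_horizontal(self, layer, src, dst, uid):
        # reassignment between positions within one horizontal layer
        self.layers[layer][src].discard(uid)
        self.layers[layer][dst].add(uid)

    def move_vertical(self, src_layer, src, dst_layer, dst, uid):
        # reassignment between the buffer layer and the cache layer
        self.layers[src_layer][src].discard(uid)
        self.layers[dst_layer][dst].add(uid)
```

In the claim, each kind of move is driven by its own memory parameter (first, second, and third); the sketch exposes only the mechanics of the moves themselves.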
73. An integrated two-dimensional logical memory management structure for use in managing memory units of over-size data objects, comprising:
at least one horizontal buffer memory layer comprising two or more sequentially ascending continuous media data buffer memory positions; and
at least one horizontal cache memory layer comprising one or more sequentially ascending over-size data object memory unit cache memory positions and a lowermost memory position that comprises an over-size data object memory unit free pool memory position, said first horizontal cache memory layer being vertically offset from said first horizontal buffer memory layer. - View Dependent Claims (74, 75, 76, 78)
77. A method for managing over-size data object content in a network environment comprising:
determining the number of active connections and anticipated future connections associated with said over-size data object content used within the network environment; and
referencing the content location based on the determined connections and anticipated future connections. - View Dependent Claims (79, 80, 81, 82, 83, 84, 85, 86)
87. A network processing system operable to process information communicated via a network in an over-size data object environment comprising:
a network processor operable to process network-communicated information in said over-size data object environment; and
a memory management system operable to reference the information based upon a connection status, number of anticipated future connections, and cache storage cost associated with the information. - View Dependent Claims (88, 89, 90, 91, 92, 93, 94)
95. A method for managing over-size data object content within a network environment comprising:
determining the number of active connections and anticipated future connections associated with said over-size data object content used within the network environment;
referencing the content based on the determined active and anticipated connections;
locating the content in a memory; and
re-referencing the content using an available free memory reference upon detecting closure of all active connections.
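The connection-driven referencing of claim 95 can be sketched as a small tracker: content stays referenced as cached while connections remain active, and is re-referenced via a free-pool reference once the last active connection closes. The class name, the two reference labels, and the counting scheme are assumptions for illustration.

```python
# Hedged sketch of claim 95's connection tracking; names and labels are
# invented. The claim's anticipated-connection input is omitted here and
# would factor into the initial referencing decision.

class ContentTracker:
    def __init__(self):
        self.active = {}     # content id -> count of open connections
        self.reference = {}  # content id -> "cached" or "free_pool"

    def open_connection(self, cid):
        self.active[cid] = self.active.get(cid, 0) + 1
        self.reference[cid] = "cached"

    def close_connection(self, cid):
        self.active[cid] -= 1
        if self.active[cid] == 0:          # all active connections closed
            self.reference[cid] = "free_pool"
```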
Specification