Software architecture which maintains system performance while pipelining data to an MFP and uses shared DLL
Abstract
A throttled data pipeline having a limited data-transfer rate for conserving system resources is disclosed. The throttled data pipeline of the present invention includes a source, a destination and a throttling device. The throttling device of the present invention is interposed between the source and the destination, and is adapted to limit data-transfer rates through the throttled data pipeline in accordance with predetermined criteria. By limiting data-transfer rates through the throttled data pipeline, system resources of the host computer, which would otherwise be wasted, are conserved. The throttled data pipeline of the present invention is configured to allow for fast and efficient transfers of data during low throughput operations when system resources are not significantly taxed. When high-throughput data transfers or other taxing operations which would otherwise detrimentally consume significant system resources are required of the throttled data pipeline, the data transfer rate of the throttled data pipeline is limited.
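The abstract's core behavior is a conditional rate limit: transfers run at full speed during low-throughput operations, and the transfer rate is capped only when the operation would tax the host's resources. A minimal sketch of that idea (Python; the class name, rate cap, and load threshold are illustrative assumptions, not the patent's implementation):

```python
import time

class ThrottledPipeline:
    """Sketch of a pipeline that limits its data-transfer rate only
    when a predetermined load criterion is exceeded (hypothetical)."""

    def __init__(self, max_rate_bytes_per_s, load_threshold):
        self.max_rate = max_rate_bytes_per_s   # rate cap applied under load
        self.load_threshold = load_threshold   # predetermined criterion

    def transfer(self, chunks, system_load):
        sent = 0
        for chunk in chunks:
            if system_load > self.load_threshold:
                # A taxing operation: pause long enough to hold the
                # effective transfer rate at max_rate.
                time.sleep(len(chunk) / self.max_rate)
            sent += len(chunk)   # low load: data passes at full speed
        return sent

pipe = ThrottledPipeline(max_rate_bytes_per_s=1_000_000, load_threshold=0.8)
total = pipe.transfer([b"x" * 4096] * 4, system_load=0.5)  # light load: no delay
```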
14 Citations
12 Claims
1. A throttled pipeline within a host, the host being coupled to at least one general purpose computer workstation having an originator process and to an output device having a recipient process, the throttled pipeline being adapted to throttle a transfer rate of data from the at least one general purpose computer workstation to the output device, to thereby conserve system resources of the host computer, wherein the originator process involves a transfer of data from the general purpose computer workstation to the output device;
- and wherein the recipient process receives the data from the originator process;
the throttled pipeline comprising:
a data storage and retrieval unit operatively coupled between the at least one general purpose computer workstation and the output device, the data storage and retrieval unit being adapted to enable a selective transfer of data from the at least one general purpose computer workstation to the output device using face buffers, wherein each face buffer corresponds to a memory space that is adapted to hold a single data set therein, each data set having a predetermined size according to the type of face buffer, wherein the face buffer is selected from any available memory space that will receive that type of face buffer, and wherein data can only be transferred from the at least one general purpose computer workstation to the output device in data sets using face buffers supplied by the data storage and retrieval unit. (Dependent claims: 2, 3, 4, 5, 6)
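The face-buffer mechanism in claim 1 amounts to a typed pool of fixed-size memory regions: each buffer type has a predetermined data-set size, any available memory of the right size may serve as a buffer, and data can move through the pipeline only in buffer-sized sets supplied by the data storage and retrieval unit. A sketch under those constraints (the type names, sizes, and class name are assumptions for illustration):

```python
class FaceBufferUnit:
    """Sketch of the data storage and retrieval unit: it owns a pool
    of fixed-size face buffers per buffer type, and transfers occur
    only in data sets held in buffers it supplies (hypothetical)."""

    BUFFER_SIZES = {"letter": 4096, "a4": 8192}  # predetermined size per type

    def __init__(self, count_per_type):
        # Any available memory space of the right size can serve as a buffer.
        self.free = {t: [bytearray(size) for _ in range(count_per_type)]
                     for t, size in self.BUFFER_SIZES.items()}

    def request_buffer(self, buffer_type):
        """Return one free buffer of the requested type, or None."""
        pool = self.free[buffer_type]
        return pool.pop() if pool else None

    def release_buffer(self, buffer_type, buf):
        self.free[buffer_type].append(buf)

unit = FaceBufferUnit(count_per_type=2)
buf = unit.request_buffer("letter")   # one data set's worth of memory
```

Because the unit is the only source of buffers, exhausting the pool naturally throttles the pipeline: a requester that receives `None` (or, in a blocking design, waits) cannot move more data until a buffer is released.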
7. A throttled pipeline within a host, the host being coupled to at least one general purpose computer workstation having a plurality of processes and to an output device having a plurality of processes, the throttled pipeline being adapted to throttle a transfer rate of data from the at least one general purpose computer workstation to the output device, to thereby conserve system resources of the host computer, wherein the data storage and retrieval unit is capable of throttling additional pipelines between multiple originator and recipient processes simultaneously;
- wherein a first process involves a transfer of data from a first originator to a first recipient; and
- wherein a second process involves a transfer of data from a second originator to a second recipient;
the throttled pipeline comprising:
a data storage and retrieval unit operatively coupled between the at least one general purpose computer workstation and the output device, the data storage and retrieval unit being adapted to enable a selective transfer of data from the at least one general purpose computer workstation to the output device using face buffers, wherein each face buffer corresponds to a memory space that is adapted to hold a single data set therein, each data set having a predetermined size according to the type of face buffer, wherein the face buffer is selected from any available memory space that will receive that type of face buffer, and wherein data can only be transferred from the at least one general purpose computer workstation to the output device in data sets using face buffers supplied by the data storage and retrieval unit. (Dependent claims: 8, 9, 10, 11)
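Claim 7 adds that one data storage and retrieval unit can throttle several originator/recipient pipelines at once. A shared, bounded buffer supply does this implicitly: every pipeline draws from the same limited pool, so when the pool is exhausted, any further request from any pipeline must wait. A sketch (buffer size, counts, and class name are illustrative assumptions):

```python
import threading

class SharedThrottleUnit:
    """Sketch: one data storage and retrieval unit whose limited buffer
    supply throttles every pipeline drawing from it simultaneously
    (hypothetical illustration)."""

    def __init__(self, total_buffers):
        self._available = threading.Semaphore(total_buffers)

    def acquire_buffer(self):
        self._available.acquire()   # delays the caller when supply is exhausted
        return bytearray(4096)

    def release_buffer(self, buf):
        self._available.release()

    def try_acquire(self):
        """Non-blocking probe: would a buffer request succeed right now?"""
        if self._available.acquire(blocking=False):
            self._available.release()
            return True
        return False

# Multiple pipelines share one unit: whichever pipelines hold the three
# buffers, any further request (from either pipeline) must wait.
unit = SharedThrottleUnit(total_buffers=3)
held = [unit.acquire_buffer() for _ in range(3)]
can_take = unit.try_acquire()       # False: all buffers are in use
for b in held:
    unit.release_buffer(b)
```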
12. A system for apportioning resources of a host to a first image data processing process in the host by a data storage and retrieval unit in the host, the host comprising a processor and a memory, the first process involving a transfer of image data from a first originator service to a first recipient service, the system comprising:
a means for identifying an available number of face buffers in the host's memory by the data storage and retrieval unit, wherein a face buffer comprises a working region of memory for temporarily storing a face of image data;
a means for receiving requests for the face buffer from the services by the data storage and retrieval unit, wherein the first originator service requests empty face buffers and the first recipient service requests filled face buffers;
a means for identifying an available number of face buffers as requested by the data storage and retrieval unit;
a means for returning a location of the one available face buffer to the requesting service by the data storage and retrieval unit, if at least one face buffer of the requested type is available;
a means for filling the face buffer by the first originator service, if the requesting service is the first originator service;
a means for emptying the face buffer by the first recipient service, if the requesting service is the first recipient service;
a means for delaying by the data storage and retrieval unit, until at least one face buffer of the requested type becomes available, and then the data storage and retrieval unit returning a location of the face buffer to the requesting service, if the data storage and retrieval unit is unable to identify any face buffers of the requested type.
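The means recited in claim 12 describe a request/return cycle: the originator service requests empty face buffers and fills them, the recipient service requests filled face buffers and empties them, and the unit delays a requester until a buffer of the requested type becomes available. That cycle can be sketched with two blocking queues (a hypothetical implementation choice, not the patent's):

```python
import queue
import threading

class DataStorageRetrievalUnit:
    """Sketch of claim 12's cycle: the unit hands empty face buffers to
    the originator and filled ones to the recipient, delaying either
    side until a buffer of the requested type exists (hypothetical)."""

    def __init__(self, n_buffers, size=4096):
        self.empty = queue.Queue()    # available empty face buffers
        self.filled = queue.Queue()   # available filled face buffers
        for _ in range(n_buffers):
            self.empty.put(bytearray(size))

    def request(self, kind):
        # Blocks (delays the requesting service) until one is available,
        # then returns the buffer to the requester.
        return (self.empty if kind == "empty" else self.filled).get()

    def submit(self, kind, buf):
        (self.empty if kind == "empty" else self.filled).put(buf)

unit = DataStorageRetrievalUnit(n_buffers=2)
results = []

def originator(faces):
    for face in faces:
        buf = unit.request("empty")    # originator requests empty buffers
        buf[:len(face)] = face         # means for filling the face buffer
        unit.submit("filled", buf)

def recipient(n):
    for _ in range(n):
        buf = unit.request("filled")   # delays until a filled buffer exists
        results.append(bytes(buf[:5])) # means for emptying the face buffer
        unit.submit("empty", buf)

t1 = threading.Thread(target=originator, args=([b"face1", b"face2", b"face3"],))
t2 = threading.Thread(target=recipient, args=(3,))
t1.start(); t2.start(); t1.join(); t2.join()
```

With only two buffers for three faces, the originator is delayed on its third request until the recipient returns an emptied buffer, which is the apportioning behavior claim 12 describes.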
Specification