High performance integrated cached storage device
Abstract
An integrated cached disk array includes host-to-global-memory (front end) and global-memory-to-disk-array (back end) interfaces implemented with dual control processors configured to share substantial resources. Each control processor is responsible for two pipelines, respective Direct Multiple Access (DMA) and Direct Single Access (DSA) pipelines, for Global Memory access, and each processor has its own Memory Data Register (MDR) to support DMA/DSA activity. The dual processors access independent control store RAM but run the same processor-independent control program, using an implementation that makes the hardware appear identical from both the X and Y processor sides.

Pipelines are extended to greater depth by incorporating a prefetch mechanism that permits write data to be put out to transceivers awaiting bus access: two full buffers of assembled memory data are stored in Dual Port RAM (DPR) while memory data words are assembled in pipeline gate arrays for passing to the DPR. Data prefetch mechanisms are likewise included for read operations, whereby data coming from Global Memory is made available to the bus before the bus is available for an actual data transfer: two full buffers of read data are transferred from Global Memory and stored in the DPR while data words are disassembled in the pipeline gate array, independent of host activity.

Timing of system operations is implemented such that backplane requests to transfer data to/from memory, memory selection, and the data transfer itself overlap.
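The read-prefetch idea in the abstract can be sketched as a toy model. The buffer count comes from the abstract's "two full buffers" of read data; everything else here (function names, the deque standing in for the DPR staging area, the list standing in for Global Memory) is an illustrative assumption, not the patent's implementation:

```python
# Hedged sketch of read prefetch: stage up to two buffers of Global Memory
# data in DPR (modeled as a deque) before the host-side bus is granted, so
# data is already waiting the moment a transfer can start.
from collections import deque

PREFETCH_BUFFERS = 2  # the abstract's "two full buffers" of read data

def prefetch_read(global_memory, staged):
    """Fill the DPR staging area from Global Memory, independent of the host."""
    while global_memory and len(staged) < PREFETCH_BUFFERS:
        staged.append(global_memory.pop(0))

def host_bus_transfer(staged):
    """When the bus is finally granted, hand over already-staged data."""
    return staged.popleft() if staged else None
```

The point of the model is the ordering: staging happens before, and independently of, the host bus becoming available.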
29 Claims
1. An integrated cache storage system moving host data in accordance with a host I/O protocol between a host and at least one array of storage devices for storage of said host data in and retrieval of said host data from said at least one array of storage devices, comprising:
- at least one front end adapter having a first front end control processor and a second front end control processor and at least a first front end pipeline and a second front end pipeline for transferring said host data under control of said first front end control processor and said second front end control processor respectively, each of said first and second front end pipelines including an I/O portion receiving a portion of said host data from said host and converting said host data from said host I/O protocol into at least one memory word, said at least one front end adapter further including a front end buffer memory receiving said at least one memory word;
- a cache memory configured to receive said at least one memory word from said front end buffer memory to temporarily store said at least one memory word; and
- at least one back end adapter having a first back end control processor and a second back end control processor and at least a first back end pipeline and a second back end pipeline for transferring said at least one memory word under control of said first back end control processor and said second back end control processor respectively, each of said first and second back end pipelines including a back end buffer memory receiving said at least one memory word from said cache memory and an I/O portion receiving said at least one memory word from said back end buffer memory and converting said at least one memory word into host data for transfer to said host according to said host I/O protocol.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9.
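As an illustration only (nothing below is from the patent), the claim's front-end conversion of host-protocol data into memory words, and the back-end conversion of those words back into host data, can be modeled in a few lines. The 4-byte word width and big-endian packing are assumptions made for the sketch:

```python
# Toy model of the claim-1 data path: the front end adapter's I/O portion
# assembles host I/O protocol bytes into memory words, the cache memory
# holds them, and the back end adapter's I/O portion disassembles them
# back into host data. Word width and packing are assumed.
import struct

WORD_BYTES = 4  # assumed memory-word width

def front_end(host_bytes):
    """I/O portion: assemble host-protocol bytes into memory words."""
    padded = host_bytes + b"\x00" * (-len(host_bytes) % WORD_BYTES)
    return [struct.unpack(">I", padded[i:i + WORD_BYTES])[0]
            for i in range(0, len(padded), WORD_BYTES)]

def back_end(words, length):
    """I/O portion: disassemble memory words back into host data."""
    raw = b"".join(struct.pack(">I", w) for w in words)
    return raw[:length]  # drop the padding added on the way in
```

A round trip through the two adapters should reproduce the original host data exactly, with the cache holding only whole memory words in between.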
10. A controller transferring data from a source to a destination, comprising:
- a first control processor controlling at least a first pipeline selectable to transfer at least a first portion of said data from said source to said destination;
- a second control processor controlling at least a second pipeline selectable to transfer at least a second portion of said data from said source to said destination;
- a control program run by both said first control processor and said second control processor to control said first pipeline and said second pipeline to transfer said at least said first portion and said at least said second portion of data from said source to said destination, said control program specifying at least one memory address for accessing during execution;
- shared resource memory having a first portion accessible to said first control processor and a second portion accessible to said second control processor, said first portion and said second portion having different respective first and second address ranges; and
- a transparency mechanism receiving said at least one memory address and determining which one of said first address ranges said at least one memory address is located in for access by said first control processor, and determining a corresponding memory address in said second address range in which said at least one memory address is not located for access by said second control processor.

Dependent claims: 11, 12, 13, 14, 15, 16, 17, 18, 19.
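The "transparency mechanism" of claim 10 amounts to an address translation: the shared control program names one address, and the mechanism resolves it into the half of the shared resource memory owned by whichever processor is executing, so the same program runs unchanged on both sides. A minimal sketch, in which the X/Y naming echoes the abstract but the base addresses, range size, and function name are invented for illustration:

```python
# Hypothetical sketch of the claim-10 transparency mechanism. The control
# program is written against the X-side address range; for the Y processor
# the mechanism remaps into the Y-side range, making the hardware "appear
# identical from both the X and Y processor sides." Layout is assumed.
X_BASE, Y_BASE, HALF_SIZE = 0x0000, 0x8000, 0x8000

def resolve(addr, processor):
    """Return the physical address the given processor should access."""
    if not (X_BASE <= addr < X_BASE + HALF_SIZE):
        raise ValueError("control program addresses lie in the X range")
    if processor == "X":
        return addr                    # X accesses its own half directly
    if processor == "Y":
        return addr - X_BASE + Y_BASE  # corresponding address in Y's half
    raise ValueError("processor must be 'X' or 'Y'")
```

With this mapping a single control program image can drive both processors, each landing in its own non-overlapping portion of the shared memory.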
20. A method of pipelining data for transfer between a data source and a data destination, said method comprising the steps of:
- receiving data from said data source during a write operation and assembling said data into a plurality of data destination words;
- storing each of said plurality of data destination words individually in at least one register pipeline stage;
- configuring a first and a second buffer memory, said first and said second buffer memory being in communication with said at least one register pipeline stage to receive respective ones of each of said plurality of data destination words therefrom;
- transferring a first write portion of said plurality of data destination words into said first buffer memory until a first selected number of data destination words are transferred to said first buffer memory;
- transferring said first write portion of said plurality of data destination words from said first buffer memory to said data destination until said first selected number of data destination words are transferred from said first buffer memory to said data destination;
- transferring a second write portion of said plurality of data destination words assembled and stored in said at least one register pipeline stage into said second buffer memory until a second selected number of data destination words are transferred to said second buffer memory, while said step of transferring said first write portion of said plurality of data destination words from said first buffer memory to said data destination is taking place; and
- transferring said second write portion of said plurality of data destination words from said second buffer memory to said data destination until said second selected number of data destination words are transferred from said second buffer memory to said data destination, while said first buffer memory is available to receive another write portion of said plurality of data destination words assembled for transfer by said at least one register pipeline stage.

Dependent claims: 21, 22, 23, 24, 25, 26, 27, 28, 29.
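The steps of claim 20 describe classic ping-pong (double) buffering: one buffer fills from the register pipeline stage while the other, already-full buffer drains to the destination. The sketch below models that alternation sequentially; the buffer size stands in for the claim's "selected number" of destination words, and all names are illustrative:

```python
# Minimal sketch of the claim-20 double-buffered write path. In hardware
# the drain of the full buffer overlaps the fill of the other; here the
# overlap is modeled as a hand-off at the moment a buffer fills.
BUF_WORDS = 4  # assumed "selected number" of destination words per buffer

def pipeline_write(source_words, destination):
    """Alternate two buffers between filling and draining roles."""
    buffers = [[], []]
    active = 0  # buffer currently being filled from the pipeline stage
    for word in source_words:
        buffers[active].append(word)
        if len(buffers[active]) == BUF_WORDS:
            # buffer full: hand it to the destination transfer and switch
            # filling to the other buffer, as the claim describes
            destination.extend(buffers[active])
            buffers[active] = []
            active ^= 1
    # flush whatever remains at the end of the write operation
    destination.extend(buffers[active])
    destination.extend(buffers[active ^ 1])
```

Because the roles simply swap, neither buffer is ever both filling and draining, and the destination receives the words in their original order.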
Specification