Data engine with metadata processor

  • US 6,754,773 B2
  • Filed: 01/29/2002
  • Issued: 06/22/2004
  • Est. Priority Date: 01/29/2001
  • Status: Expired (end of term)
First Claim

1. A file server comprising:

  • a network interface for communicating with one or more clients, said network interface comprising a network transaction queue;

  • a storage interface for communicating with one or more disk drives, said storage interface comprising a storage transaction queue;

  • a metadata processor configured to communicate with said network interface across a first memory-mapped bus and configured to communicate with said storage interface across a second memory-mapped bus, said metadata processor configured to queue network transaction requests to said network interface in response to file access requests from said clients, said metadata processor configured to queue storage transaction requests in response to file access requests from said clients, said network transaction requests and said storage transaction requests comprising address information and opcode information; and

  • a data engine configured to communicate with said network interface across said first memory-mapped bus, said data engine configured to communicate with said storage interface across said second memory-mapped bus, said data engine configured to receive first address words and third address words from said network interface and to receive second address words and fourth address words from said storage interface, said first address words comprising first address bits and first opcode bits, said second address words comprising second address bits and second opcode bits, said third address words comprising third address bits and third opcode bits, said fourth address words comprising fourth address bits and fourth opcode bits, said data engine receiving first data from said first memory-mapped bus and storing said first data in a data cache according to said first address bits and said first opcode bits, said data engine receiving second data from said second memory-mapped bus and storing said second data in said data cache according to said second address bits and said second opcode bits, said data engine providing third data to said first memory-mapped bus from said data cache according to said third address bits and said third opcode bits, said data engine providing fourth data to said second memory-mapped bus from said data cache according to said fourth address bits and said fourth opcode bits.
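The claim describes address words that carry both address bits and opcode bits, with the data engine storing data into or providing data from its data cache according to those bits. The patent text quoted here does not specify a bit layout, so the following is a minimal, hypothetical sketch: it assumes a 32-bit address word with the top 4 bits as the opcode and the remaining 28 bits as the cache address, and models only the store/load behavior of the data cache. All names and constants are illustrative, not taken from the patent.

```python
# Hypothetical packing: 4 opcode bits above 28 address bits.
OPCODE_SHIFT = 28
ADDR_MASK = (1 << OPCODE_SHIFT) - 1

OP_STORE = 0x1   # store data into the data cache (e.g. first/second address words)
OP_LOAD = 0x2    # provide data from the data cache (e.g. third/fourth address words)

def make_address_word(opcode: int, address: int) -> int:
    """Pack opcode bits and address bits into a single address word."""
    return (opcode << OPCODE_SHIFT) | (address & ADDR_MASK)

class DataEngine:
    """Minimal model of the claimed data engine: a data cache indexed by
    the address bits, with the opcode bits selecting the operation."""
    def __init__(self):
        self.data_cache = {}

    def handle(self, address_word: int, data=None):
        opcode = address_word >> OPCODE_SHIFT
        address = address_word & ADDR_MASK
        if opcode == OP_STORE:
            # e.g. first data arriving from the first memory-mapped bus
            self.data_cache[address] = data
        elif opcode == OP_LOAD:
            # e.g. third data provided back onto the first memory-mapped bus
            return self.data_cache.get(address)
        else:
            raise ValueError(f"unknown opcode {opcode:#x}")

# Usage: a store from the network side, then a load of the same data.
engine = DataEngine()
engine.handle(make_address_word(OP_STORE, 0x100), data=b"payload")
result = engine.handle(make_address_word(OP_LOAD, 0x100))
assert result == b"payload"
```

The point of the sketch is the separation the claim relies on: the same bus word carries both where the data lives in the cache and what the data engine should do with it, so no side-band control channel is needed between the interfaces and the data engine.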

  • 16 Assignments