Replication and restoration of multiple data storage object types in a data network
Abstract
A data storage server is programmed for management, version control, and scheduling of replication of multiple types of data storage objects including iSCSI LUNs and file systems. The version control determines if two data storage objects are the same or have a common base so that only a difference needs to be transmitted for replication or restoration. A replication job may specify a “one-to-many” replication or a cascaded replication, and any snapshot retention policy is propagated during a cascaded replication. Concurrent replication sessions to the same destination are paced in accordance with respective allocation shares of the reception bandwidth. File handle information is replicated so that a file handle issued by a primary data storage server can be used for accessing a replicated file in a secondary data storage server.
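The abstract's central mechanism, identifying a most recent common base snapshot copy so that only a difference crosses the network, can be illustrated with a short sketch. This is a minimal illustration, not the patented implementation: the names (Server, most_recent_common_base, restore) and the byte-level diff are assumptions made here for clarity.

```python
# Minimal sketch of delta-based restore using a common base snapshot.
# Assumes each server keeps snapshots as an insertion-ordered dict of
# snapshot_id -> object contents, and that versions are equal length.

class Server:
    def __init__(self, name, snapshots):
        self.name = name
        self.snapshots = snapshots  # snapshot_id -> bytes, oldest first

def most_recent_common_base(src, dst):
    """Return the newest snapshot ID present on both servers, or None."""
    common = [sid for sid in src.snapshots if sid in dst.snapshots]
    return common[-1] if common else None

def compute_changes(desired, base):
    """Byte offsets at which the desired version differs from the base."""
    return {i: b for i, (a, b) in enumerate(zip(base, desired)) if a != b}

def restore(dst, src, desired_id):
    base_id = most_recent_common_base(src, dst)        # negotiate common base
    changes = compute_changes(src.snapshots[desired_id],
                              src.snapshots[base_id])  # only the delta travels
    production = bytearray(dst.snapshots[base_id])     # start from local copy
    for offset, value in changes.items():
        production[offset] = value                     # apply received changes
    return bytes(production)

src = Server("secondary", {"snap1": b"ABCD", "snap2": b"ABXD"})
dst = Server("primary",   {"snap1": b"ABCD"})
assert restore(dst, src, "snap2") == b"ABXD"  # one changed byte crossed the wire
```

In a real server the delta would come from snapshot metadata such as block bitmaps rather than a byte-by-byte comparison; the point is only that a shared base copy lets the difference, not the whole object, be transmitted.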
252 Citations
OPTIMIZED COPY OF VIRTUAL MACHINE STORAGE FILES (US 20110035358A1, filed 08/07/2009). Current and original assignee: Quest Software Inc.
Method for replicating explicit locks in a data replication engine (US 7,962,458 B2, filed 02/17/2010). Current assignee: RPX Corporation; original assignee: Gravic Incorporated.
Verification Of Remote Copies Of Data (US 20110099148A1, filed 07/02/2008). Current and original assignee: Hewlett Packard Enterprise Development LP.
Log Structured Content Addressable Deduplicating Storage (US 20100082529A1, filed 03/31/2009). Current and original assignee: Riverbed Technology Incorporated.
DATA INTEGRITY IN A DATABASE ENVIRONMENT THROUGH BACKGROUND SYNCHRONIZATION (US 20100153346A1, filed 12/17/2008). Current and original assignee: iAnywhere Solutions Incorporated.
METHOD FOR REPLICATING LOCKS IN A DATA REPLICATION ENGINE (US 20100191884A1, filed 02/17/2010). Current assignee: RPX Corporation; original assignee: Gravic Incorporated.
Method of Selective Replication in a Storage Area Network (US 20100293145A1, filed 07/02/2009). Current assignee: Hewlett Packard Enterprise Development LP; original assignee: Hewlett-Packard Development Company L.P.
SYSTEM FOR EMULATING A VIRTUAL BOUNDARY OF A FILE SYSTEM FOR DATA MANAGEMENT AT A FILESET GRANULARITY (US 20090006486A1, filed 09/04/2008). Current and original assignee: International Business Machines Corporation.
SYSTEM AND METHOD FOR SYNCHRONIZATION BETWEEN SERVERS (US 20090077262A1, filed 09/14/2007). Current and original assignee: International Business Machines Corporation.
DIRECTORY SERVER REPLICATION (US 20090150398A1, filed 12/05/2007). Current and original assignee: International Business Machines Corporation.
Protocol Independent Server Replacement and Replication in a Storage Area Network (US 20090187668A1, filed 01/23/2008). Current and original assignee: International Business Machines Corporation.
CONTINUOUSLY AVAILABLE PROGRAM REPLICAS (US 20090210455A1, filed 02/19/2008). Current and original assignee: International Business Machines Corporation.
MIXED MODE SYNCHRONOUS AND ASYNCHRONOUS REPLICATION SYSTEM (US 20090313311A1, filed 06/12/2009). Current assignee: RPX Corporation; original assignee: Gravic Incorporated.
CONTROLLING RESOURCE ALLOCATION FOR BACKUP OPERATIONS (US 20090307285A1, filed 06/06/2008). Current assignee: Veritas Technologies LLC; original assignee: Symantec Corporation.
COMMUNICATION NETWORK AND METHOD FOR STORING MESSAGE DATA IN A COMMUNICATION NETWORK (US 20080228948A1, filed 10/11/2007). Current assignee: Avaya Incorporated; original assignee: Avaya GmbH Company KG.
Method and appratus for accessing home storage or internet storage (US 20070156899A1, filed 01/04/2007). Current and original assignee: Samsung Electronics Co. Ltd.
Methods and systems for rapid rollback and rapid retry of a data migration (US 8,112,665 B1, filed 07/23/2010). Current and original assignee: NetApp Inc.
SYSTEMS AND METHODS FOR REPLICATING A GROUP OF DATA OBJECTS WITHIN A STORAGE NETWORK (US 20120136828A1, filed 11/30/2010). Current and original assignee: Red Hat Inc.
INCREMENTAL RESTORE OF DATA BETWEEN STORAGE SYSTEMS HAVING DISSIMILAR STORAGE OPERATING SYSTEMS ASSOCIATED THEREWITH (US 20120136832A1, filed 11/30/2010). Current and original assignee: Network Appliance Incorporated.
SHARED-BANDWIDTH MULTIPLE TARGET REMOTE COPY (US 20120166588A1, filed 05/21/2010). Current and original assignee: International Business Machines Corporation.
Continuously available program replicas (US 8,229,886 B2, filed 02/19/2008). Current and original assignee: International Business Machines Corporation.
DOCUMENT DISTRIBUTION SYSTEM (US 20120191653A1, filed 01/19/2012). Current and original assignee: Toshiba Tec Corporation, Toshiba Corporation.
System for emulating a virtual boundary of a file system for data management at a fileset granularity (US 8,255,364 B2, filed 09/04/2008). Current and original assignee: International Business Machines Corporation.
CONTINUOUSLY AVAILABLE PROGRAM REPLICAS (US 20120221516A1, filed 05/01/2012). Current and original assignee: International Business Machines Corporation.
Mixed mode synchronous and asynchronous replication system (US 8,301,593 B2, filed 06/12/2009). Current assignee: RPX Corporation; original assignee: Gravic Incorporated.
RAPID NOTIFICATION SYSTEM (US 20120303720A1, filed 05/26/2011). Current assignee: Micro Focus LLC; original assignee: Stratify Inc.
Controlling resource allocation for backup operations (US 8,326,804 B2, filed 06/06/2008). Current assignee: Veritas Technologies LLC; original assignee: Symantec Corporation.
Data integrity in a database environment through background synchronization (US 8,380,663 B2, filed 12/17/2008). Current assignee: iAnywhere Solutions Incorporated; original assignee: Sybase Incorporated.
REPLICATION OF DATA OBJECTS FROM A SOURCE SERVER TO A TARGET SERVER (US 20130054523A1, filed 08/30/2011). Current and original assignee: International Business Machines Corporation.
REPLICATION OF DATA OBJECTS FROM A SOURCE SERVER TO A TARGET SERVER (US 20130054524A1, filed 04/25/2012). Current and original assignee: International Business Machines Corporation.
VIRTUAL MACHINE SNAPSHOTTING IN OBJECT STORAGE SYSTEM (US 20130054910A1, filed 08/29/2011). Current and original assignee: VMware Inc.
System and method for targeted consistency improvement in a distributed storage system (US 8,396,840 B1, filed 09/22/2010). Current and original assignee: Amazon Technologies.
Method and apparatus for providing point-in-time backup images (US 8,433,864 B1, filed 06/30/2008). Current assignee: Veritas Technologies LLC; original assignee: Symantec Corporation.
Non-disruptive storage server migration (US 8,452,856 B1, filed 08/04/2010). Current and original assignee: NetApp Inc.
SYNCHRONIZING UPDATES ACROSS CLUSTER FILESYSTEMS (US 20130138616A1, filed 08/20/2012). Current and original assignee: International Business Machines Corporation.
SYNCHRONIZING UPDATES ACROSS CLUSTER FILESYSTEMS (US 20130138615A1, filed 11/29/2011). Current and original assignee: International Business Machines Corporation.
SHARED-BANDWIDTH MULTIPLE TARGET REMOTE COPY (US 20130144977A1, filed 01/31/2013). Current and original assignee: International Business Machines Corporation.
METHOD AND SYSTEM FOR DATA FILING SYSTEMS (US 20130166501A1, filed 01/17/2012). Current assignee: Amadeus; original assignee: Muriel Becker, Julien DOrso, Thierry Bassilana, David-Olivier Saban.
FAULT TOLERANT SYSTEM IN A LOOSELY-COUPLED CLUSTER ENVIRONMENT (US 20130179729A1, filed 01/05/2012). Current assignee: GlobalFoundries Inc.; original assignee: International Business Machines Corporation.
Method and apparatus for supporting compound disposition for data images (US 8,495,315 B1, filed 09/29/2007). Current assignee: Veritas Technologies LLC; original assignee: Symantec Corporation.
SYSTEM AND METHOD FOR MANAGING REPLICATION IN AN OBJECT STORAGE SYSTEM (US 20130232313A1, filed 04/22/2013). Current and original assignee: Dell Products LP.
Configuring object storage system for input/output operations (US 8,595,460 B2, filed 08/26/2011). Current and original assignee: VMware Inc.
Protocol independent server replacement and replication in a storage area network (US 8,626,936 B2, filed 01/23/2008). Current and original assignee: International Business Machines Corporation.
Snapshot based replication (US 8,627,024 B2, filed 11/30/2010). Current and original assignee: International Business Machines Corporation.
SEMANTIC REPLICATION (US 20140019411A1, filed 08/19/2013). Current and original assignee: Infoblox Incorporated.
Snapshot based replication (US 8,635,421 B2, filed 07/16/2012). Current and original assignee: International Business Machines Corporation.
Computer system accessing object storage system (US 8,650,359 B2, filed 08/26/2011). Current and original assignee: VMware Inc.
Control Pool Based Enterprise Policy Enabler for Controlled Cloud Access (US 20140053280A1, filed 12/31/2012). Current and original assignee: Futurewei Technologies Incorporated.
REPLICATING DATA ACROSS DATA CENTERS (US 20140067759A1, filed 08/31/2012). Current assignee: Microsoft Technology Licensing LLC; original assignee: Microsoft Corporation.
Virtual machine snapshotting in object storage system (US 8,677,085 B2, filed 08/29/2011). Current and original assignee: VMware Inc.
Incremental restore of data between storage systems having dissimilar storage operating systems associated therewith (US 8,688,645 B2, filed 11/30/2010). Current and original assignee: NetApp Inc.
Method and system for data filing systems (US 8,700,565 B2, filed 01/17/2012). Current and original assignee: Amadeus.
Method of balancing workloads in object storage system (US 8,769,174 B2, filed 08/29/2011). Current and original assignee: VMware Inc.
Object storage system (US 8,775,773 B2, filed 08/26/2011). Current and original assignee: VMware Inc.
Management system and methods for object storage system (US 8,775,774 B2, filed 08/26/2011). Current and original assignee: VMware Inc.
Rapid notification system (US 8,788,601 B2, filed 05/26/2011). Current assignee: Micro Focus LLC; original assignee: Stratify Inc.
RECOVERY POINT OBJECTIVE ENFORCEMENT (US 20140236891A1, filed 02/15/2013). Current assignee: Microsoft Technology Licensing LLC; original assignee: Microsoft Corporation.
CLIENT OBJECT REPLICATION BETWEEN A FIRST BACKUP SERVER AND A SECOND BACKUP SERVER (US 20140279912A1, filed 03/14/2013). Current and original assignee: International Business Machines Corporation.
CONTEXT TRANSFER FOR DATA STORAGE (US 20140297584A1, filed 03/27/2014). Current and original assignee: Schlumberger Technology Corporation.
Semantic replication (US 8,874,516 B2, filed 08/19/2013). Current and original assignee: Infoblox Incorporated.
Provisional authority in a distributed database (US 8,892,516 B2, filed 06/27/2013). Current and original assignee: Infoblox Incorporated.
AUTOMATED, TIERED DATA RETENTION (US 20140344234A1, filed 08/04/2014). Current and original assignee: CommVault Systems Incorporated.
Multitier deduplication systems and methods (US 8,898,114 B1, filed 08/25/2011). Current assignee: Quest Software Inc.; original assignee: Dell Software Inc.
Write signature command (US 8,898,112 B1, filed 09/07/2011). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Communication network and method for storing message data in a communication network (US 8,914,547 B2, filed 10/11/2007). Current assignee: Avaya Incorporated; original assignee: Avaya GmbH Company KG.
Configuring object storage system for input/output operations (US 8,914,610 B2, filed 11/06/2013). Current and original assignee: VMware Inc.
System and method for managing replication in an object storage system (US 8,930,313 B2, filed 04/22/2013). Current and original assignee: Dell Products LP.
Management system and methods for object storage system (US 8,949,570 B2, filed 05/08/2014). Current and original assignee: VMware Inc.
Object storage system (US 8,959,312 B2, filed 05/08/2014). Current and original assignee: VMware Inc.
Block status mapping system for reducing virtual machine backup storage (US 8,996,468 B1, filed 04/16/2010). Current assignee: Quest Software Inc.; original assignee: Dell Software Inc.
Managing incremental cache backup and restore (US 9,021,222 B1, filed 03/28/2012). Current and original assignee: Lenovoemc Limited.
Method and apparatus for comparing configuration and topology of virtualized datacenter inventories (US 9,063,768 B2, filed 10/10/2011). Current and original assignee: VMware Inc.
System and methods for host enabled management in a storage system (US 9,087,201 B2, filed 01/04/2013). Current and original assignee: Infinidat Ltd.
Performing a non-disruptive software upgrade on physical storage processors having access to virtual storage processors (US 9,092,290 B1, filed 03/15/2013). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Providing a fault tolerant system in a loosely-coupled cluster environment using application checkpoints and logs (US 9,098,439 B2, filed 01/05/2012). Current assignee: GlobalFoundries Inc.; original assignee: International Business Machines Corporation.
Method and apparatus for accessing home storage or internet storage (US 9,110,606 B2, filed 01/04/2007). Current and original assignee: Samsung Electronics Co. Ltd.
System and method for allocating datastores for virtual machines (US 9,134,922 B2, filed 09/29/2012). Current and original assignee: VMware Inc.
Directory server replication (US 9,143,559 B2, filed 12/05/2007). Current and original assignee: International Business Machines Corporation.
Control pool based enterprise policy enabler for controlled cloud access (US 9,167,050 B2, filed 12/31/2012). Current and original assignee: Futurewei Technologies Incorporated.
Log structured content addressable deduplicating storage (US 9,208,031 B2, filed 03/31/2009). Current and original assignee: Riverbed Technology Incorporated.
Shared-bandwidth multiple target remote copy (US 9,218,313 B2, filed 01/31/2013). Current and original assignee: International Business Machines Corporation.
Synchronizing updates across cluster filesystems (US 9,235,594 B2, filed 08/20/2012). Current and original assignee: International Business Machines Corporation.
ENABLING DATA REPLICATION PROCESSES BETWEEN HETEROGENEOUS STORAGE SYSTEMS (US 20160026703A1, filed 07/24/2014). Current and original assignee: NetApp Inc.
HALO BASED FILE SYSTEM REPLICATION (US 20160028806A1, filed 07/25/2014). Current and original assignee: Facebook Inc.
Client object replication between a first backup server and a second backup server (US 9,251,008 B2, filed 03/14/2013). Current and original assignee: International Business Machines Corporation.
PRESERVING THE INTEGRITY OF A SNAPSHOT ON A STORAGE DEVICE VIA EPHEMERAL WRITE OPERATIONS IN AN INFORMATION MANAGEMENT SYSTEM (US 20160042090A1, filed 08/06/2014). Current and original assignee: CommVault Systems Incorporated.
Automated, tiered data retention (US 9,262,449 B2, filed 08/04/2014). Current and original assignee: CommVault Systems Incorporated.
Unified data protection for block and file objects (US 9,280,555 B1, filed 03/29/2013). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Unified datapath architecture (US 9,286,007 B1, filed 03/14/2013). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Virtual storage processor load balancing (US 9,304,999 B1, filed 03/15/2013). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Providing virtual storage processor (VSP) mobility with induced file system format migration (US 9,305,071 B1, filed 09/30/2013). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Systems and methods for compacting a virtual machine file (US 9,311,375 B1, filed 02/07/2012). Current assignee: Quest Software Inc.; original assignee: Dell Software Inc.
Backup systems and methods for a virtual computing environment (US 9,311,318 B1, filed 02/11/2013). Current assignee: Quest Software Inc.; original assignee: Dell Software Inc.
Synchronizing Updates Across Cluster Filesystems (US 20160103850A1, filed 12/15/2015). Current and original assignee: International Business Machines Corporation.
Managing cache backup and restore for continuous data replication and protection (US 9,317,375 B1, filed 03/30/2012). Current and original assignee: Lenovoemc Limited.
Transactional replication (US 9,317,545 B2, filed 07/31/2013). Current and original assignee: Infoblox Incorporated.
METHODS AND SYSTEMS FOR OPTIMAL SNAPSHOT DISTRIBUTION WITHIN A PROTECTION SCHEDULE (US 20160139823A1, filed 04/06/2015). Current assignee: Hewlett Packard Enterprise Development LP; original assignee: Nimble Storage Inc.
Reclaiming space from file system hosting many primary storage objects and their snapshots (US 9,400,741 B1, filed 06/30/2014). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
System and method for synchronization between servers (US 9,401,957 B2, filed 09/14/2007). Current and original assignee: International Business Machines Corporation.
Filtered reference copy of secondary storage data in a data storage system (US 9,405,482 B2, filed 03/08/2013). Current and original assignee: CommVault Systems Incorporated.
System of managing remote resources (US 9,405,484 B2, filed 03/17/2014). Current and original assignee: Infinidat Ltd.
Virtual storage processor failover (US 9,424,117 B1, filed 03/15/2013). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Shared-bandwidth multiple target remote copy (US 9,436,653 B2, filed 05/21/2010). Current and original assignee: International Business Machines Corporation.
FILTERED REFERENCE COPY OF SECONDARY STORAGE DATA IN A DATA STORAGE SYSTEM (US 20160306558A1, filed 06/27/2016). Current and original assignee: CommVault Systems Incorporated.
Providing mobility to virtual storage processors (US 9,507,787 B1, filed 03/15/2013). Current and original assignee: Emc IP Holding Company LLC.
Deduplicating container files (US 9,569,455 B1, filed 06/28/2013). Current and original assignee: Emc IP Holding Company LLC.
Cataloging system for image-based backup (US 9,569,446 B1, filed 06/08/2011). Current assignee: Quest Software Inc.; original assignee: Dell Software Inc.
ADAPTIVE BANDWIDTH MANAGER (US 20170060696A1, filed 08/19/2015). Current and original assignee: ExaGrid Systems Inc.
HYBRID BACKUP AND RECOVERY MANAGEMENT SYSTEM FOR DATABASE VERSIONING AND VIRTUALIZATION WITH DATA TRANSFORMATION (US 20170075765A1, filed 09/14/2015). Current and original assignee: ProphetStor Data Services Inc.
Leading System Determination (US 20170091479A1, filed 09/30/2015). Current and original assignee: SAP SE.
Context transfer for data storage (US 9,626,392 B2, filed 03/27/2014). Current and original assignee: Schlumberger Technology Corporation.
Methods and systems for removing virtual machine snapshots (US 9,665,440 B2, filed 07/25/2016). Current assignee: Hewlett Packard Enterprise Development LP; original assignee: Nimble Storage.
Shared-bandwidth multiple target remote copy (US 9,715,477 B2, filed 11/24/2015). Current and original assignee: International Business Machines Corporation.
Methods and systems for optimal snapshot distribution within a protection schedule (US 9,727,252 B2, filed 04/06/2015). Current and original assignee: Hewlett Packard Enterprise Development LP.
HIERARCHICAL SYSTEM MANAGER ROLLBACK (US 20170228296A1, filed 02/24/2017). Current and original assignee: Silicon Graphics International Corp.
SNAPSHOT MANAGEMENT IN DISTRIBUTED FILE SYSTEMS (US 20170249335A1, filed 07/11/2016). Current and original assignee: Red Hat Inc.
LIFECYCLE SUPPORT FOR STORAGE OBJECTS (US 20170255589A1, filed 05/22/2017). Current and original assignee: Amazon Technologies.
Methods and systems for concurrently taking snapshots of a plurality of virtual machines (US 9,778,990 B2, filed 10/08/2014). Current and original assignee: Hewlett Packard Enterprise Development LP.
Optimized copy of virtual machine storage files (US 9,778,946 B2, filed 08/07/2009). Current assignee: Quest Software Inc.; original assignee: Dell Software Inc.
Recovery point objective enforcement (US 9,805,104 B2, filed 02/15/2013). Current and original assignee: Microsoft Technology Licensing LLC.
Automatically creating multiple replication sessions in response to a single replication command entered by a user (US 9,805,105 B1, filed 03/15/2013). Current and original assignee: Emc IP Holding Company LLC.
Halo based file system replication (US 9,807,164 B2, filed 07/25/2014). Current and original assignee: Facebook Inc.
Migrating data objects together with their snaps (US 9,830,105 B1, filed 12/21/2015). Current and original assignee: Emc IP Holding Company LLC.
Replicating data across data centers (US 9,870,374 B2, filed 08/31/2012). Current and original assignee: Microsoft Technology Licensing LLC.
QUERY PROCESSING USING PRIMARY DATA VERSIONING AND SECONDARY DATA (US 20180018370A1, filed 07/15/2016). Current and original assignee: SAP SE.
Snap and replicate for unified datapath architecture (US 9,881,014 B1, filed 06/30/2014). Current and original assignee: Emc IP Holding Company LLC.
Replication of data objects from a source server to a target server (US 9,904,717 B2, filed 04/25/2012). Current and original assignee: International Business Machines Corporation.
Leading system determination (US 9,904,796 B2, filed 09/30/2015). Current and original assignee: SAP SE.
Replication of data objects from a source server to a target server (US 9,910,904 B2, filed 08/30/2011). Current and original assignee: International Business Machines Corporation.
Redirecting host IO's at destination during replication (US 9,916,202 B1, filed 03/11/2015). Current and original assignee: Emc IP Holding Company LLC.
Creating consistent user snaps at destination during replication (US 9,983,942 B1, filed 03/11/2015). Current and original assignee: Emc IP Holding Company LLC.
SEAMLESS DATA MIGRATION IN A CLUSTERED ENVIRONMENT (US 20180157429A1, filed 12/06/2016). Current and original assignee: Dell Products LP.
Method and system for tiering data (US 10,031,675 B1, filed 03/31/2016). Current and original assignee: EMC Corporation.
APPARATUS AND METHOD FOR REPLICATING CHANGED-DATA IN SOURCE DATABASE MANAGEMENT SYSTEM TO TARGET DATABASE MANAGEMENT SYSTEM IN REAL TIME (US 20180253483A1, filed 03/01/2018). Current and original assignee: DataStreams Corp.
Dynamic and optimized management of grid system resources (US 10,073,855 B2, filed 05/21/2015). Current and original assignee: ExaGrid Systems Inc.
Cluster file system comprising data mover modules having associated quota manager for managing back-end user quotas (US 10,078,639 B1, filed 09/29/2014). Current and original assignee: Emc IP Holding Company LLC.
Method and apparatus for policy-based replication (US 10,089,148 B1, filed 06/30/2011). Current assignee: EMC Corporation; original assignee: Emc IP Holding Company LLC.
Listing data objects using a hierarchical dispersed storage index (US 10,089,344 B2, filed 02/25/2013). Current assignee: Pure Storage Inc.; original assignee: International Business Machines Corporation.
Replicating a group of data objects within a storage network (US 10,108,500 B2, filed 11/30/2010). Current and original assignee: Red Hat Inc.
Continuously available program replicas (US 10,108,510 B2, filed 05/01/2012). Current and original assignee: International Business Machines Corporation.
SYNCHRONIZATION STORAGE SOLUTION AFTER AN OFFLINE EVENT (US 20180307568A1, filed 10/21/2016). Current and original assignee: Softnas LLC.
Handling a virtual data mover (VDM) failover situation by performing a network interface control operation that controls availability of network interfaces provided by a VDM (US 10,146,649 B2, filed 09/26/2017). Current and original assignee: Emc IP Holding Company LLC.
Multi-lock caches (US 10,176,057 B2, filed 01/29/2018). Current and original assignee: Amazon Technologies.
Reading a portion of data to replicate a volume based on sequence numbers (US 10,185,505 B1, filed 10/28/2016). Current and original assignee: Pure Storage Inc.
Replicating data across data centers (US 10,204,114 B2, filed 12/28/2017). Current and original assignee: Microsoft Technology Licensing LLC.
Managing file deletions in storage systems (US 10,261,944 B1, filed 03/29/2016). Current and original assignee: Emc IP Holding Company LLC.
Managing multiple tenants in NAS (network attached storage) clusters (US 10,289,325 B1, filed 07/31/2017). Current and original assignee: Emc IP Holding Company LLC.
Accessing file system replica during ongoing replication operations (US 10,289,690 B1, filed 09/22/2014). Current and original assignee: Emc IP Holding Company LLC.
Parallelizing and deduplicating backup data (US 10,303,656 B2, filed 08/13/2015). Current and original assignee: ExaGrid Systems Inc.
Seamless data migration in a clustered environment (US 10,353,640 B2, filed 12/06/2016). Current and original assignee: Dell Products LP.
Secure data replication (US 10,360,237 B2, filed 11/22/2017). Current and original assignee: NetApp Inc.
Synchronization of snapshots in a distributed consistency group (US 10,365,978 B1, filed 07/28/2017). Current and original assignee: Emc IP Holding Company LLC.
Scalable grid deduplication (US 10,387,374 B2, filed 02/27/2015). Current and original assignee: ExaGrid Systems Inc.
Data conversion method and backup server (US 10,409,694 B2, filed 05/07/2018). Current and original assignee: Huawei Technologies Co. Ltd.
Unified datapath processing with virtualized storage processors (US 10,447,524 B1, filed 03/14/2013). Current and original assignee: Emc IP Holding Company LLC.
Method and system for automatic replication data verification and recovery (US 10,459,632 B1, filed 09/16/2016). Current and original assignee: Emc IP Holding Company LLC.
Cluster file system comprising data mover modules having associated quota manager for managing back-end user quotas (US 10,489,343 B2, filed 05/11/2018). Current and original assignee: Emc IP Holding Company LLC.
Filtered reference copy of secondary storage data in a data storage system (US 10,496,321 B2, filed 06/27/2016). Current and original assignee: CommVault Systems Incorporated.
Enabling data replication processes between heterogeneous storage systems (US 10,628,380 B2, filed 07/24/2014). Current and original assignee: NetApp Inc.
Synchronization storage solution after an offline event (US 10,649,858 B2, filed 10/21/2016). Current and original assignee: Softnas LLC.
Efficient volume replication in a storage system (US 10,656,850 B2, filed 12/21/2018). Current and original assignee: Pure Storage Inc.
Replication of data objects from a source server to a target server (US 10,664,492 B2, filed 12/04/2017). Current and original assignee: International Business Machines Corporation.
Replication of data objects from a source server to a target server (US 10,664,493 B2, filed 12/05/2017). Current and original assignee: International Business Machines Corporation.
Managing snaps at a destination based on policies specified at a source (US 10,678,650 B1, filed 03/31/2015). Current and original assignee: Emc IP Holding Company LLC.
Method and device for dispatching replication tasks in network storage device (US 10,678,749 B2, filed 05/31/2018). Current and original assignee: Emc IP Holding Company LLC.
Synchronizing updates across cluster filesystems (US 10,698,866 B2, filed 12/15/2015). Current and original assignee: International Business Machines Corporation.
Replication of versions of an object from a source storage to a target storage (US 10,725,708 B2, filed 07/31/2015). Current and original assignee: International Business Machines Corporation.
Atomically managing data objects and assigned attributes (US 10,733,161 B1, filed 12/30/2015). Current and original assignee: Emc IP Holding Company LLC.
Snapshot management in distributed file systems (US 10,733,153 B2, filed 07/11/2016). Current and original assignee: Red Hat Inc.
Restoring NAS servers from the cloud (US 10,740,192 B2, filed 01/31/2018). Current and original assignee: Emc IP Holding Company LLC.
Preserving replication to a storage object on a storage node (US 10,747,465 B2, filed 10/30/2018). Current and original assignee: Emc IP Holding Company LLC.
Managing, storing, and caching query results and partial query results for combination with additional query results (US 10,776,355 B1, filed 04/30/2018). Current and original assignee: Splunk Inc.
File system provisioning and management with reduced storage communication (US 10,789,017 B1, filed 07/31/2017). Current and original assignee: Emc IP Holding Company LLC.
Apparatus and method for replicating changed-data in source database management system to target database management system in real time (US 10,795,911 B2, filed 03/01/2018). Current and original assignee: DataStreams Corp.
Method and apparatus for mounting and unmounting a stable snapshot copy of a user file system (US 10,831,618 B1, filed 01/31/2019). Current and original assignee: Emc IP Holding Company LLC.
Managing data using network attached storage (NAS) cluster (US 10,831,718 B1, filed 07/31/2017). Current and original assignee: Emc IP Holding Company LLC.
Managing cloud storage of block-based and file-based data (US 10,848,545 B2, filed 01/31/2018). Current and original assignee: Emc IP Holding Company LLC.
Lifecycle transition validation for storage objects (US 10,853,337 B2, filed 05/22/2017). Current and original assignee: Amazon Technologies.
Multi-partitioning determination for combination operations (US 10,896,182 B2, filed 09/25/2017). Current and original assignee: Splunk Inc.
System and method for managing a plurality of snapshots (US 7,475,098 B2, filed 03/19/2002). Current assignee: NetApp Inc.; original assignee: Network Appliance Incorporated.
System and method for managing memory or session resources used for movement of data being copied in a data storage environment (US 7,509,465 B1, filed 09/29/2004). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
STORAGE ARRAY VIRTUALIZATION USING A STORAGE BLOCK MAPPING PROTOCOL CLIENT AND SERVER (US 20080005468A1, filed 05/08/2006). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Managing snapshots in storage systems (US 20080104139A1, filed 10/26/2006). Current assignee: Snowflake Inc.; original assignee: Hewlett-Packard Development Company L.P.
System and method for creating a point-in-time restoration of a database file (US 7,373,364 B1, filed 03/05/2002). Current assignee: NetApp Inc.; original assignee: Network Appliance Incorporated.
APPARATUS METHOD AND SYSTEM FOR ASYNCHRONOUS REPLICATION OF A HIERARCHICALLY-INDEXED DATA STORE (US 20080250086A1, filed 05/28/2008). Current assignee: ServiceNow Incorporated; original assignee: International Business Machines Corporation.
Method for replicating snapshot volumes between storage systems (US 20080250215A1, filed 06/10/2008). Current and original assignee: Hitachi America Limited.
Switch-based storage services (US 7,185,062 B2, filed 01/18/2002). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Method and apparatus for sequencing transactions globally in a distributed database cluster (US 20070061379A1, filed 09/09/2005). Current and original assignee: Open Invention Network LLC.
System and Method for Providing Virtual Network Attached Storage Using Excess Distributed Storage Capacity (US 20070067263A1, filed 11/08/2006). Current assignee: Google LLC; original assignee: ClearCube Technology Incorporated.
Method for rolling back from snapshot with log (US 20070094467A1, filed 11/14/2005). Current and original assignee: Hitachi America Limited.
APPLICATION OF VIRTUAL SERVERS TO HIGH AVAILABILITY AND DISASTER RECOVERY SOULTIONS (US 20070078982A1, filed 09/22/2006). Current assignee: Leidos Innovations Technology Inc.; original assignee: Lockheed Martin Corporation.
Replication of a consistency group of data storage objects from servers in a data network (US 20070136389A1, filed 11/29/2005). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
File based volumes and file systems (US 20070136548A1, filed 12/13/2005). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Resource freshness and replication (US 20070168516A1, filed 02/28/2006). Current assignee: Microsoft Technology Licensing LLC; original assignee: Microsoft Corporation.
Method and system for storing data (US 20070198659A1, filed 01/24/2007). Current and original assignee: Falconstor Software Incorporated.
Data recovery with internet protocol replication with or without full resync (US 7,275,177 B2, filed 06/25/2003). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
SYSTEMS AND METHODS FOR BACKING UP DATA FILES (US 20070208783A1, filed 05/03/2007). Current assignee: KeepitSafe Inc.; original assignee: Iron Mountain Incorporated.
Apparatus, system, and method for differential backup using snapshot on-write data (US 7,284,019 B2, filed 08/18/2004). Current assignee: Google LLC; original assignee: International Business Machines Corporation.
Method of detecting unauthorized access to a system or an electronic device (US 20070255818A1, filed 04/29/2006). Current and original assignee: Kolnos Systems Incorporated.
CREATING FREQUENT APPLICATION-CONSISTENT BACKUPS EFFICIENTLY (US 20070276885A1, filed 08/02/2006). Current assignee: Microsoft Technology Licensing LLC; original assignee: Microsoft Corporation.
Method and system for improving LUN-based backup reliability (US 6,978,280 B1, filed 10/12/2000). Current and original assignee: Hewlett-Packard Development Company L.P.
Concurrent file across at a target file server during migration of file systems between file servers using a network file system access protocol (US 6,938,039 B1, filed 06/30/2000). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
File server having a file system cache and protocol for truly safe asynchronous writes (US 5,893,140 A, filed 11/13/1996). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Physical incremental backup using snapshots (US 6,665,815 B1, filed 06/22/2000). Current and original assignee: Hewlett-Packard Development Company L.P.
Web server content replication (US 20060031188A1, filed 08/05/2005). Current assignee: Excalibur IP LLC; original assignee: Yahoo Inc.
System and method for redirecting access to a remote mirrored snapshot (US 7,010,553 B2, filed 03/19/2002). Current assignee: NetApp Inc.; original assignee: Network Appliance Incorporated.
Snapshot copy facility maintaining read performance and write performance (US 20060143412A1, filed 12/28/2004). Current and original assignee: EMC Corporation.
System and method for restoring a virtual disk from a snapshot (US 7,076,509 B1, filed 03/21/2003). Current assignee: NetApp Inc.; original assignee: Network Appliance Incorporated.
Systems and methods for dynamic restorating data (US 20060149997A1, filed 03/15/2005). Current and original assignee: EMC Corporation.
Writable read-only snapshots (US 20060179261A1, filed 04/11/2003). Current and original assignee: NetApp Inc.
Method and apparatus for backup and recovery using storage based journaling (US 20060190692A1, filed 04/20/2006). Current and original assignee: Hitachi America Limited.
Data recovery with internet protocol replication with or without full resync (US 20050015663A1, filed 06/25/2003). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Multi-protocol sharable virtual storage objects (US 20050044162A1, filed 08/22/2003). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Maintenance of a file version set including read-only and read-write snapshot copies of a production file (US 20050065986A1, filed 09/23/2003). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Data management systems, data management system storage devices, articles of manufacture, and data management methods (US 20050114408A1, filed 11/26/2003). Current assignee: Hewlett Packard Enterprise Development LP; original assignee: Hewlett-Packard Development Company L.P.
Dual channel restoration of data between primary and backup servers (US 6,941,490 B2, filed 11/29/2001). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Apparatus and method for multiple generation remote backup and fast restore (US 6,948,089 B2, filed 01/10/2002). Current and original assignee: Hitachi America Limited.
Internet protocol based disaster recovery of a server (US 20050193245A1, filed 02/04/2004). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Managing client configuration data (US 20050234931A1, filed 04/06/2004). Current assignee: Microsoft Technology Licensing LLC; original assignee: Microsoft Corporation.
External encapsulation of a volume into a LUN to allow booting and installation on a complex volume (US 20050228950A1, filed 06/20/2005). Current assignee: Symantec Corporation; original assignee: Veritas Operating Corporation.
Generating data set of the first file system by determining a set of changes between data stored in first snapshot of the first file system, and data stored in second snapshot of the first file system (US 6,959,310 B2, filed 02/15/2002). Current and original assignee: International Business Machines Corporation.
Apparatus, and associated method, for updating a location register in a mobile, packet radio communication system (US 6,675,153 B1, filed 10/15/1999). Current and original assignee: Zix Corporation.
Storage virtualization by layering virtual disk objects on a file system (US 20040030822A1, filed 08/09/2002). Current and original assignee: Network Appliance Incorporated.
Multi-protocol storage appliance that provides integrated support for file and block access protocols (US 20040030668A1, filed 08/09/2002). Current and original assignee: Network Appliance Incorporated.
Data storage with host-initiated synchronization and fail-over of remote mirror (US 6,691,245 B1, filed 10/10/2000). Current assignee: NetApp Inc.; original assignee: LSI Logic Corporation.
Apparatus and method for increasing application availability during a disaster fail-back (US 6,694,447 B1, filed 09/29/2000). Current assignee: Oracle America Inc.; original assignee: Sun Microsystems Incorporated.
Network block services for client access of network-attached data storage in an IP network (US 20040059822A1, filed 09/25/2002). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Data processing system with mechanism for restoring file systems based on transaction logs (US 6,732,124 B1, filed 02/09/2000). Current and original assignee: Fujitsu Limited.
System and method for backing up a computer system (US 20040139128A1, filed 07/08/2003). Current assignee: Veritas Technologies LLC; original assignee: Symantec Corporation.
Physical incremental backup using snapshots (US 20040163009A1, filed 11/03/2003). Current and original assignee: Hewlett-Packard Development Company L.P.
File migration device (US 20040210583A1, filed 08/20/2003). Current and original assignee: Hitachi America Limited.
Replication of snapshot using a file system copy differential (US 20040267836A1, filed 06/25/2003). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Method and apparatus for managing replication volumes (US 20040260873A1, filed 06/17/2003). Current assignee: Google LLC; original assignee: Hitachi America Limited.
Information processing apparatus for performing file migration on-line between file servers (US 20040267758A1, filed 06/25/2004). Current and original assignee: NEC Corporation.
Access manager for databases (US 20030093413A1, filed 11/15/2001). Current and original assignee: International Business Machines Corporation.
Data replication system and method (US 20030177194A1, filed 03/17/2003). Current and original assignee: Shinkuro Inc.
Method and system for disaster recovery (US 20030200480A1, filed 03/25/2003). Current assignee: Google LLC; original assignee: Computer Associates Think Inc.
Recovery of file system data in file servers mirrored file system volumes (US 6,654,912 B1, filed 10/04/2000). Current and original assignee: Network Appliance Incorporated.
Replication of remote copy data for internet protocol (IP) transmission (US 20030217119A1, filed 05/16/2002). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Incrementally restoring a mass storage device to a prior state (US 20020112134A1, filed 12/20/2001). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Collision avoidance in database replication systems (US 20020133507A1, filed 03/29/2002). Current assignee: RPX Corporation; original assignee: ITI Limited.
File server system using file system storage, data movers, and an exchange of meta data among data movers for file locking and direct access to shared file systems (US 6,324,581 B1, filed 03/03/1999). Current assignee: Emc IP Holding Company LLC; original assignee: EMC Corporation.
Data backup (US 20010044910A1, filed 01/29/2001). Current assignee: Windstream Intellectual Property Services Inc.; original assignee: CenterBeam Inc.
Enterprise data movement system and method which performs data load and changed data propagation operations (US 6,016,501 A, filed 03/30/1998). Current assignee: Informatica LLC; original assignee: BMC Software Incorporated.
Method and apparatus for independent and simultaneous access to a common data set (US 6,101,497 A, filed 04/25/1997). Current and original assignee: EMC Corporation.
Bundling of write data from channel commands in a command chain for transmission over a data link between data storage systems for remote data mirroring (US 5,901,327 A, filed 03/17/1997). Current and original assignee: EMC Corporation.
Real time backup system (US 5,974,563 A, filed 10/02/1998). Current assignee: Carbonite Incorporated; original assignee: Network Specialists Inc.
Highly reliable online system (US 5,596,706 A, filed 08/10/1994). Current assignee: UFJ Bank Ltd. Hong Kong, Hitachi America Limited, Hitachi Ltd.; original assignee: UFJ Bank Ltd. Hong Kong, Hitachi America Limited.
Distributed multi-version commitment ordering protocols for guaranteeing serializability during transaction processing (US 5,701,480 A, filed 04/14/1993). Current assignee: Hewlett-Packard Development Company L.P.; original assignee: Digital Equipment Corporation.
Commitment ordering for guaranteeing serializability across distributed transactions (US 5,504,900 A, filed 12/05/1994). Current assignee: HTC Corporation; original assignee: Digital Equipment Corporation.
Guaranteeing global serializability by applying commitment ordering selectively to global transactions (US 5,504,899 A, filed 12/02/1994). Current assignee: Hewlett-Packard Development Company L.P.; original assignee: Digital Equipment Corporation.
System and method for maintaining replicated data coherency in a data processing system (US 5,434,994 A, filed 05/23/1994). Current and original assignee: International Business Machines Corporation.
2 Claims
1. A method of restoring, in a first data storage server, a data storage object from a desired version of the data storage object, the desired version of the data storage object residing in data storage of a second data storage server, said method comprising:
the first data storage server communicating with the second data storage server to identify a most recent common base snapshot copy of the data storage object, a first copy of the most recent common base snapshot copy residing in data storage of the first data storage server and a second copy of the most recent common base snapshot copy residing in the data storage of the second data storage server; and
the second data storage server transmitting to the first data storage server changes between the desired version of the data storage object and the second copy of the most recent common base snapshot copy; and
the first data storage server receiving the changes from the second data storage server and using the changes for restoring, from the first copy of the most recent common base snapshot copy, a local production version of the data storage object;
wherein said method is used for restoring, in the first data storage server, an iSCSI LUN data storage object from a desired version of the iSCSI LUN data storage object, the desired version of the iSCSI LUN data storage object residing in the data storage of the second data storage server,
wherein said method is used for restoring, in the first data storage server, a file system data storage object from a desired version of the file system data storage object, the desired version of the file system data storage object residing in the data storage of the second data storage server, and
which further includes the first data storage server initially receiving the first copy of the most recent common base snapshot copy by remote replication from a third data storage server, and the second data storage server initially receiving the second copy of the most recent common base snapshot copy by remote replication from the third data storage server, and
which includes specifying a retention policy for snapshot copies in the third data storage server, and wherein the remote replication from the third data storage server includes the third data storage server replicating the retention policy for snapshot copies to the first data storage server and to the second data storage server.
2. A method of restoring, in a first data storage server, a data storage object from a desired version of the data storage object, the desired version of the data storage object residing in data storage of a second data storage server, said method comprising:
-
the first data storage server communicating with the second data storage server to identify a most recent common base snapshot copy of the data storage object, a first copy of the most recent common base snapshot copy residing in data storage of the first data storage server and a second copy of the most recent common base snapshot copy residing in the data storage of the second data storage server; and
the second data storage server transmitting to the first data storage server changes between the desired version of the data storage object and the second copy of the most recent common base snapshot copy; and
the first data storage server receiving the changes from the second data storage server and using the changes for restoring, from the first copy of the most recent common base snapshot copy, a local production version of the data storage object;
wherein said method is used for restoring, in the first data storage server, an iSCSI LUN data storage object from a desired version of the iSCSI LUN data storage object, the desired version of the iSCSI LUN data storage object residing in the data storage of the second data storage server,
wherein said method is used for restoring, in the first data storage server, a file system data storage object from a desired version of the file system data storage object, the desired version of the file system data storage object residing in the data storage of the second data storage server, and
which further includes the first data storage server initially receiving the first copy of the most recent common base snapshot copy by remote replication from a third data storage server and the second data storage server initially receiving the second copy of the most recent common base snapshot copy by remote replication from the third data storage server, and
wherein the remote replication from the third data storage server to the first data storage server includes replicating file handle information from the third data storage server to the first data storage server, and
wherein said method further includes the third data storage server issuing a file handle to a network client accessing the file system data storage object, and the network client using the file handle in a read-write request sent to the first data storage server, and the first data storage server receiving the file handle and using the file handle information replicated from the third data storage server for accessing a production version of the file system data storage object in said first data storage server.
-
1 Specification
The present invention relates generally to data processing, and more particularly to replication of data storage objects from computer data storage of servers in a data network.
Remote copy systems have been used for automatically providing data backup at a remote site in order to ensure continued data availability after a disaster at a primary site. Such a remote copy facility is described in Ofek, U.S. Pat. No. 5,901,327 issued May 4, 1999, entitled “Bundling of Write Data from Channel Commands in a Command Chain for Transmission over a Data Link Between Data Storage Systems For Remote Data Mirroring.” This remote copy facility uses a dedicated network link and a link-layer protocol for 1:1 replication between a primary storage system and a secondary storage system.
More recently remote copy systems have been used for wide-area distribution of read-only data. Wide-area distribution of the read-only data is useful for preventing remote users from overloading a local server, and for reducing signal transmission delay because the remote users may access remote copies nearer to them. For example, as described in Raman et al., U.S. Patent Application Publication No. US 2003/0217119 A1, published Nov. 20, 2003, incorporated herein by reference, consistent updates are made automatically over a wide-area IP network, concurrently with read-only access to the remote copies. A replication control protocol (RCP) is layered over TCP/IP providing the capability for a remote site to replicate and rebroadcast blocks of the remote copy data to specified groups of destinations, as configured in a routing table.
Currently there is a need for replicating diverse data storage objects in a way that is scalable and efficient and that can use a replication control protocol for one-to-many replication and cascaded replication over a data network.
In accordance with one aspect, the invention provides a method of restoring, in a first data storage server, a data storage object from a desired version (which may or may not be the most recent version) of the data storage object. The desired version of the data storage object resides in data storage of a second data storage server. The method includes the first data storage server communicating with the second data storage server to identify a most recent common base snapshot copy of the data storage object. A first copy of the most recent common base snapshot copy resides in data storage of the first data storage server and a second copy of the most recent common base snapshot copy resides in the data storage of the second data storage server. The second data storage server transmits to the first data storage server changes between the desired version of the data storage object and the second copy of the most recent common base snapshot copy. The first data storage server receives the changes from the second data storage server and uses the changes for restoring, from the first copy of the most recent common base snapshot copy, a local production version of the data storage object. The method is used for restoring, in the first data storage server, an iSCSI LUN data storage object from a desired version of the iSCSI LUN data storage object. The desired version of the iSCSI LUN data storage object resides in the data storage of the second data storage server. The method is also used for restoring, in the first data storage server, a file system data storage object from a desired version of the file system data storage object. The desired version of the file system data storage object resides in the data storage of the second data storage server.
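As a rough illustration of this negotiation, the restoring server can intersect its list of snapshot identifiers with the peer's list and take the newest common entry as the base; only the changes between the desired version and that base then need to cross the network. The following C++ sketch is illustrative only (the SnapshotId type and the oldest-to-newest ordering are assumptions, not the patented implementation):

    #include <cstdint>
    #include <optional>
    #include <set>
    #include <vector>

    using SnapshotId = std::uint64_t;  // stand-in for a world-wide version signature

    // Return the newest snapshot present on both servers, if any.
    std::optional<SnapshotId> mostRecentCommonBase(
        const std::vector<SnapshotId>& localSnaps,   // ordered oldest to newest
        const std::vector<SnapshotId>& remoteSnaps) {
      std::set<SnapshotId> remote(remoteSnaps.begin(), remoteSnaps.end());
      for (auto it = localSnaps.rbegin(); it != localSnaps.rend(); ++it)
        if (remote.count(*it)) return *it;  // newest common snapshot found
      return std::nullopt;  // no common base; a full copy would be needed
    }

If a common base is found, the second server transmits only the delta from the base to the desired version, and the first server applies it on top of its local copy of the base.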
In accordance with another aspect, the invention provides a method of replicating data from a first data storage server to a second data storage server and to a third data storage server. The method includes configuring a replication session from the first data storage server to the second data storage server. The replication session has a specified destination data storage server and a specified policy for retention of snapshot copies. The replication session also has a specified policy for propagation. The second data storage server is specified as the destination data storage server of the replication session, and the third data storage server is specified in the specified policy for propagation. The first data storage server executes a job including the configured replication session. Execution of the job causes data to be replicated from the first data storage server to the second data storage server, snapshot copies of the replicated data to be created and stored in data storage of the second data storage server in accordance with the specified policy for retention of snapshot copies, the second data storage server to forward the replicated data to the third data storage server in accordance with the specified policy for propagation, and snapshot copies of the replicated data to be created and stored in data storage of the third data storage server in accordance with the specified policy for retention of snapshot copies.
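A minimal configuration record for such a session might carry the retention and propagation policies together, so that a destination can apply the retention policy locally and forward the data onward. The field names below are hypothetical, not taken from the patent:

    #include <string>
    #include <vector>

    struct RetentionPolicy {
      int maxSnapshots;  // snapshot copies to retain at a destination
    };

    struct ReplicationSession {
      std::string source;                    // e.g. the first data storage server
      std::string destination;               // e.g. the second data storage server
      RetentionPolicy retention;             // applied at each destination
      std::vector<std::string> propagation;  // cascade targets, e.g. the third server
    };
    // Executing a job with this session replicates to `destination`, which then
    // forwards the data to every entry in `propagation`, each destination taking
    // snapshot copies per `retention`.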
In accordance with still another aspect, the invention provides a method including a first data storage server replicating data to a second data storage server in a data network, and during the replication of data from the first data storage server to the second data storage server, adjusting a bandwidth allocation share for the replication of data from the first data storage server to the second data storage server. The first data storage server paces transmission of the replicated data in response to the adjusted bandwidth allocation share for the replication of data from the first data storage server to the second data storage server.
In accordance with a final aspect, the invention provides a method of data replication and access in a data network. The data network includes a first data storage server having first data storage and a second data storage server having second data storage. The method includes the first data storage server replicating a file and file handle information to the second data storage server. The file handle information indicates where the file is stored in the first data storage, and the second data storage server stores the file in the second data storage. The method also includes a network client obtaining, from the first data storage server, a file handle for the file. The file handle indicates where the file is stored in the first data storage, and the network client sends a file access request including the file handle to the second data storage server. The method further includes the second data storage server receiving the file access request from the network client, decoding the file handle from the file access request, and using the file handle and the file handle information for locating the file in the second data storage for accessing the file. In this fashion, by using the file handle obtained by accessing the file system view or namespace in the primary server, the network client may directly read or write to the file in the secondary server without accessing the file system view or namespace in the secondary server.
Additional features and advantages of the invention will be described below with reference to the drawings.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown in the drawings and will be described in detail. It should be understood, however, that it is not intended to limit the invention to the particular forms shown, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the invention as defined by the appended claims.
With reference to the drawings, a data network couples network clients over an IP network to a primary data storage server 24 and a secondary data storage server 28. The primary data storage server 24 includes a data processor 31 and primary storage volumes 32, and the secondary data storage server 28 includes a data processor 33 and secondary data storage 34.
The data processor 31 is programmed for replication of data storage objects of multiple data storage object types from the primary storage volumes 32 over the IP network to the secondary data storage server 28. For example, the multiple data storage object types include iSCSI Logical Unit Numbers (LUNs), logical volumes or file systems, directories, files, and virtual servers. A virtual server includes storage volumes allocated to data of clients of the virtual server and also configuration information of the virtual server. The virtual server may also include user mapping and session state information useful for resuming client operations upon fail-over of the virtual server from the primary data storage server 24 to the secondary data storage server 28. See, for example, John Hayden et al., “Internet Protocol Based Disaster Recovery of a Server,” U.S. Published Patent Application No. 2005-0193245 published Sep. 1, 2005, incorporated herein by reference.
The data processor 31 is further programmed with multiple software program modules including common software program modules for management of replication of the data storage objects of the multiple data storage object types.
The data processor 31 is programmed with a snapshot copy facility 47 for making a snapshot copy of a data storage object concurrent with network client read-write access to a production copy of the data storage object. For example, the snapshot copy facility 47 can be constructed as described in Armangau et al., U.S. Patent Application Publication No. US 2004/0267836 published Dec. 30, 2004, incorporated herein by reference; Armangau et al., U.S. Patent Application Publication No. US 2005/0015663 A1, published Jan. 20, 2005, incorporated herein by reference; or Bixby et al., U.S. Patent Application Pub. No. 2005/0065986 published Mar. 24, 2005, incorporated herein by reference. The snapshot copy facility 47 can make snapshot copies of various kinds of data storage objects such as LUNs, files, directories, file systems, or volumes, concurrent with client read-write access to the data storage objects. For example, if the data storage object is not a file or logical volume, the data storage object is encapsulated in a file or logical volume, and the snapshot copy facility 47 makes a snapshot copy of the file or logical volume containing the data storage object.
The multiple software program modules further include a version control module 48 executable by the data processor 31 for determining if two data storage objects are the same or have a common base so that only a difference needs to be transmitted for replication or restoration of one of the two data storage objects. A “version” of a data storage object refers generally to a production copy of the data storage object or a snapshot copy of the data storage object. The version control module 48 accesses a version database (DB) 49 containing unique world-wide signatures of the versions of the data storage objects in the primary storage. A unique world-wide signature includes a server ID, a Version Set ID, and a Version Number. The Version Set ID identifies the particular storage object, and the Version Number identifies a particular snapshot of the storage object, or the production version of the storage object. For a file system, the Version Set ID includes the file system ID and the file system name. An iSCSI LUN is uniquely identified by a Target ID, a LUN (logical unit number), and a file system ID. A target is a collection of LUNs. The Target ID is also known as an iSCSI Qualified Name (IQN). The version database 49 stores the signatures of the versions separately from the replication sessions. The version database 49 is a data structure maintained by a Version Set module 121, described below.
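A signature of this kind might be modeled as below; two versions with equal signatures are identical, and two versions that agree on server ID and Version Set ID describe the same object and are therefore candidates for delta transfer. This struct is a sketch, not the on-disk record:

    #include <cstdint>
    #include <string>
    #include <tuple>

    struct VersionSignature {
      std::string serverId;         // originating data storage server
      std::string versionSetId;     // the storage object (e.g. file system ID and name)
      std::uint64_t versionNumber;  // a particular snapshot, or the production version

      bool operator==(const VersionSignature& o) const {
        return std::tie(serverId, versionSetId, versionNumber) ==
               std::tie(o.serverId, o.versionSetId, o.versionNumber);
      }
      // Same object, possibly different versions: a common base may exist.
      bool sameVersionSet(const VersionSignature& o) const {
        return serverId == o.serverId && versionSetId == o.versionSetId;
      }
    };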
The multiple software program modules include a copier module 53 for asynchronous replication of data storage objects from the primary storage volumes 32 to the secondary data storage server 28. The asynchronous replication can be performed as described in Raman et al., U.S. Patent Application Publication No. US 2003/0217119 A1, published Nov. 20, 2003, incorporated herein by reference. The replication manager 42 issues replication jobs that schedule asynchronous replication sessions conducted by the copier module 53. Each of the asynchronous replication sessions has a unique world-wide session signature. The session signature includes the name of the source version of the data storage object being replicated, the name of the source server, the name of the destination version of the data storage object being replicated, and the name of the destination server. The network client or user requesting the replication session may also define an alias name for the replication session. The session signature is constant until the replication session is deleted. Recreating a deleted replication session will result in a new session signature, but stop/start or reverse of a replication session will preserve the session signature.
The data processor 33 of the secondary data storage server 28 is also programmed with a snapshot copy facility 50, a version control module 51, a version database 52, and a copier module 54. A replication session is initially set up over a data storage server interconnect control (DIC) Transmission Control Protocol (TCP) connection 63 between the replication manager 42 of the primary data storage server 24 and the replication manager 45 of the secondary data storage server 28. The DIC-TCP connection between the replication managers 42, 45 facilitates execution of pre-replication and post-replication events. The actual transmission of the replicated data in accordance with a Replication Control Protocol (RCP) occurs over a separate TCP connection 64 between the copier module 53 in the primary data storage server 24 and a copier module 54 in the secondary data storage server 28.
For example, the primary replication manager 42 can use the DIC-TCP connection for sending commands that determine whether or not the secondary data storage server 28 is a valid destination for replication, configure the secondary data storage server 28 for replication, prepare the secondary data storage server for replication, begin replication, and abort the replication. As further described in Milena Bergant et al., “Replication of a Consistency Group of Data Storage Objects from Servers in a Data Network,” U.S. patent application Ser. No. 11/288,578 filed Nov. 29, 2005, incorporated herein by reference, such pre-replication and post-replication events are useful for replicating a consistency group of objects from more than one server. Preparation for replication can be begun at each of the servers, and if any of the servers fails to report that replication can be done without causing a consistency error, then preparation can be aborted so that write access by applications will not be disrupted while the potential problem is diagnosed and eliminated.
Data storage objects of multiple data storage object types are defined by a version stack 55 common to the multiple data storage object types, and software program modules 56, 57, 58 specific to the multiple data storage object types. The virtual server module 56 defines the virtual server data storage object type, the file module 57 defines the file or directory data storage object type, and the iSCSI LUN module 58 defines the iSCSI LUN data storage object type. For example, data storage objects of these different data storage object types are stored in a respective volume version object consisting of logical data blocks of a sparse container file, and the respective software modules 56, 57, 58 defining the different data storage object types are executable by the data processor 31 for addressing of the data storage objects in the respective volume version object. In a similar fashion, the data processor 33 in the secondary data storage server 28 also has a common version stack 59 and software program modules 60, 61, 62 specific to the multiple data storage object types.
A production file itself can be a container for a UNIX-based file system. In this case, the logical extent of the production file serves as a logical volume upon which the file system is built. By using a single production file as a container for a file system, it is possible for the copier 53 to operate upon the production file to replicate the entire file system contained in the production file. Attributes of the container file may indicate when the version of the file system in the container file was created and last accessed. Further details regarding the use of a file as a logical volume for a UNIX-based file system are found in Virendra M. Mane, “File Based Volumes and File Systems,” U.S. patent application Ser. No. 11/301,975 filed Dec. 13, 2005, incorporated herein by reference.
The replication process asynchronously replicates the versions from the primary data storage 32 to the secondary data storage 34, but the number of versions retained on the secondary storage need not be the same as the number of versions retained on the primary storage. For example, when the oldest snapshot is deleted from primary or secondary storage, only the data blocks that are not shared with a younger version are deleted, and the shared blocks are kept together with the youngest version using these blocks.
In a preferred implementation, only changes to a data storage object between two snapshots in the primary data storage are replicated to the secondary data storage.
In general, different numbers of snapshot copies of a data storage object may be retained in the primary data storage 32 and the secondary data storage 34, depending on a snapshot retention policy for the primary storage and a snapshot retention policy for the secondary storage. In the absence of an explicit snapshot retention policy, once a replication session has been configured and started for a data storage object, there will be a production version and two snapshots of the data storage object in the primary data storage, and one production version and one snapshot in the secondary data storage. Changes between the two snapshots in the primary storage are replicated to the production version in the secondary storage, and once all of these changes have been replicated, the snapshot copy in the secondary storage is updated or refreshed with them. This ensures that if the versions of the data storage object in the primary storage become inaccessible due to a failure or disaster, a recent and consistent snapshot copy is available from the secondary storage. In addition, if a remote client is permitted concurrent read-only access to the snapshot copy in the secondary data storage, it is desirable to inhibit updating or refreshing of this snapshot copy while the remote client holds a read-lock upon it. So that the replication process may continue while a remote client retains such a read-lock, it is desirable to make a second, more recent snapshot copy from the replicated changes at the time that all of the changes between the two primary snapshot copies have been replicated to the production copy in the secondary data storage, if the read-lock has not been released by that time.
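The default cycle can be pictured with the following sketch, in which all types and function names are illustrative stand-ins rather than the patented modules: the primary takes a new snapshot, ships the delta from the previous one, and the secondary refreshes its snapshot only if no remote read-lock is held:

    #include <iostream>

    struct Snapshot { int id; };
    struct Delta { int fromId, toId; };

    struct Primary {
      int next = 1;
      Snapshot base{0};  // older of the two primary snapshots
      Snapshot takeSnapshot() { return Snapshot{next++}; }
      Delta computeDelta(Snapshot a, Snapshot b) { return Delta{a.id, b.id}; }
    };

    struct Secondary {
      bool readLocked = false;  // set while a remote client holds a read-lock
      void applyToProduction(const Delta& d) {
        std::cout << "apply delta " << d.fromId << "->" << d.toId << "\n";
      }
      void refreshSnapshot() { std::cout << "refresh secondary snapshot\n"; }
      void takeSnapshot() { std::cout << "take second, newer snapshot\n"; }
    };

    void replicationCycle(Primary& p, Secondary& s) {
      Snapshot snap = p.takeSnapshot();        // newest primary snapshot
      Delta d = p.computeDelta(p.base, snap);  // changes between the two snapshots
      s.applyToProduction(d);                  // replay onto secondary production
      if (s.readLocked) s.takeSnapshot();      // a read-lock blocks the refresh
      else s.refreshSnapshot();                // bring secondary snapshot up to date
      p.base = snap;                           // the new snapshot becomes the base
    }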
In general, the snapshot facility 47 and the copier 53 in the primary data storage server 24 may replicate a snapshot copy of a data storage object from the primary data storage server 24 to the secondary data storage server 28 concurrent with read-write access by applications to the production version of the data storage object. Then the copier 53 may replicate a change or “delta” of the production version of the data storage object since the snapshot copy from the primary data storage server 24 to the secondary data storage server 28. In a background process, the secondary data storage server 28 may replay the changes to the data storage object since the snapshot copy in order to maintain a remote backup copy of the production version of the data storage object for remote read-only access or for restoration in case of corruption or loss of the primary copy of the production version of the data storage object. The retention time of the snapshots in the secondary data storage 34 may be greater or less than the retention time of the snapshots in the primary data storage 32, as set by snapshot retention policies specified for particular data storage objects in the primary and secondary data storage. The secondary data storage server 28 may also consolidate or merge changes between successive snapshots so that snapshots of less frequent intervals are retained in the secondary data storage 34 than the intervals used for snapshot retention in the primary data storage 32.
The delta extent format provides a means of converting between different formats of a data storage object in the primary and secondary data storage servers. For example, the primary data storage server 24 could use a “copy on first write” snapshot copy facility 47 that copies logical blocks of a file system upon a first write at the logical block level of the logical volume containing the file system, and the secondary data storage server could use a “write anywhere” snapshot copy facility 50 that always writes new file data to newly allocated file system blocks and links the newly allocated file system blocks to the inode of the file in the file system.
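A format-neutral delta can be reduced to a list of extent records, each naming a run of changed blocks and their new contents; because a record carries only offsets, lengths, and data, either kind of snapshot copy facility can produce or consume it. The layout below is illustrative, not the on-wire format from the patent:

    #include <cstdint>
    #include <vector>

    struct DeltaExtent {
      std::uint64_t blockOffset;       // first changed block in the storage object
      std::uint32_t blockCount;        // number of contiguous changed blocks
      std::vector<std::uint8_t> data;  // new contents of those blocks
    };

    // A delta between two versions is an ordered list of extents.
    using DeltaExtentList = std::vector<DeltaExtent>;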
The cached disk array 94 may contain different kinds of data storage objects that are accessible by applications using different kinds of storage access protocols. For example, the cached disk array 94 is shown to contain a UNIX-based file system 98 accessible through the NFS protocol, a MS-Windows file system 99 accessible through the CIFS protocol, and a set of storage LUNs 100 accessible through the iSCSI protocol. Because the data movers 91, 92, 93 share access to the cached disk array 94, the data movers can be programmed and configured to permit any network client to use any one of the data movers to provide read-write access to any of the data storage objects 98, 99, and 100. Alternatively, the data movers can be programmed and configured so that a particular data storage object in the cached disk array 94 is accessible to a client only through a particular one of the data movers.
In addition to the replication manager (DpReplica) 42 and the scheduler 43, the data protection manager 41 also includes a DpRequest module 117 for handling requests for data protection services, a DpService module 118 for servicing the data protection requests, a DP task manager 119, and a DP policy module 120.
The common version stack 55 includes a Version Set module 121, a Volume Version Set module 122, a Version module 123, a Block Version module 124, and a Volume Version module 125.
The copier module (DpCopier) 53 accesses a Delta module 129, which accesses a File Delta module 127 for replication of an individual file in a logical volume, or accesses a Volume Version Delta module 128 for replication of a logical volume. For transmission of the delta extents over the RCP-TCP connection 64, the copier module 53 accesses a transport stack including a transport module 130, an RCP transport module 131, a throttle bandwidth module 132, and a TCP module 133. The throttle bandwidth module 132 paces transmission of chunks of data over a TCP connection to use a specified bandwidth allocation, as further described below.
The DpInit module 114 is provided for recovering Data Protection objects such as Version Set, Replication and Task Session from an on-disk database maintained by each object. DpInit is started at reboot by a “dpinit” entry in a boot.cfg file. For each object to be recovered, a DpRequest is created and executed to recover an in-memory database. The DpInit module 114 calls the DicService module 115 to recover the database and to initialize a dispatcher. The DicService module 115 also can be called to record an event in the database.
The DpCopier module 53 functions as an interface between the Delta module 129 and the Transport module 130 to handle transfer of data. The DpCopier module 53 has a state machine called DpCopierSender for opening a transfer session, getting the data from the Delta module 129, and sending it on to the Transport module 130. The DpCopier module 53 also has a state machine called DpCopierReceiver for receiving the data from the Transport module 130 and sending the data to the Delta module 129.
The DpCopier module 53 uses a DpCopierAck module 141 to handle acknowledgement of transferred data. The DpCopierAck module 141 has a mechanism called DpCopierAckSender for receiving an acknowledgement from the Delta module 129 and sending it on to the Transport module 130. The DpCopierAck module 141 also has a receiving mechanism for receiving an acknowledgement from the Transport module 130 and updating the in-memory and on-disk database with a restart address required for re-starting transfer. An RcpTransport module 131 provides data transfer on an IP network using the RCP protocol. The Transport module 130 provides a generic interface for transferring data to various media. The Transport class is inherited by other classes that provide media-specific implementations.
The FileVersion module 57 provides a snapshot of a file. The FileVersion class is derived from the class of the BlkVersion module 124 and implements the virtual functions defined by the Version module 123 and the BlkVersion module 124. The FileVersion module 57 carries out uncached I/O operations and gets or sets file attributes.
The FileDelta module 127 provides a mechanism for processing and consuming a delta between two files. The FileDelta module 127 uses interfaces provided by the Delta module 129. In a processing mode, the Delta module 129 provides interfaces for building an extended table of blocks between two versions, reading data from a version, formatting data in the extended table, and inserting data in a queue. In a consuming mode, the Delta module 129 provides interfaces for formatting data, removing data from the queue, and writing the data.
The BlkVersion module 124 defines interfaces that apply to block devices. Two examples of block devices are a volume and an iSCSI LUN. The interfaces include uncached I/O interfaces, block size, and number of blocks.
The Version class is derived from a VcsNode class defined by a VcsNode module 142. Each Version object (production copy or snapshot) has a name and a reference count. A Version object also contains a list of Branches. The Version module 123 provides interfaces for getting or setting attributes (including opaque data that is used by Windows clients via CBMAPI).
A FileVersionSet module 144 uses a directory as the placeholder for a collection of FileVersion objects. The FileVersionSet module 144 implements virtual functions defined in a VersionSet class. The FileVersionSet module 144 maintains the file handle of the directory and the attributes table as private data.
The VersionSet class represents a collection of Version objects. The VersionSet module 121 provides common interfaces to perform lookup, create, delete, restore, undo restore, commit restore, and list operations on versions. The VersionSet module 121 maintains an in-memory database and reference counter as protected data members and provides insertion and other helper methods as protected methods. The VersionSet module 121 provides a static data member for maintaining a list of all initialized VersionSet objects. It also defines a static method to initialize a VersionSet object given the VersionSetID.
A Branch class 143 is used internally to represent a branch node. It contains a list of Version objects that belong to the branch. The list may include a Version object for a temporary writable snapshot. In a specific implementation, the user may not create a version of a temporary writable snapshot; instead, the version list of a branch node has one node, which is a temporary writable snapshot that the user may access.
The DP manager includes a DpService module 118 providing a general interface between the DpRequest module 117 and corresponding Dp objects for the particular DpRequest type. The Dp objects are stored in an in-memory database accessed by a DpContext module 152 to keep track of requests in progress. Each DpService type is a single instance. The DpRequest module 117 calls the interface executeCmd(DpRequest*) to find and call objects responsible for processing of a request. After the request is processed, the DpRequest module 117 calls the interface completeCmd(DpRequest*) to update the in-memory database and return a reply to the originator of the request.
For example, a ReplicaStart request starts a replication session on the primary data mover and the secondary data mover. The request is initiated by the NBS request handler (37).
For example, a ReplicaMark request is processed to mark a snapshot to be replicated. The request is initiated by the NBS request handler (37).
One instance of the DpReplica class is created on the primary data mover and the secondary data mover for a particular replication session. The DpReplica module provides a mechanism to configure a session, to establish remote connections for a control path and a data path, to manage transfer of snapshots from the primary data mover to the secondary data mover, to provide consistency between the primary data mover and the secondary data mover, and to maintain the on-disk database for persistency on reboot.
One instance of the DpPolicy class is created per Version Set. All replication sessions running on the same Version Set share the same DpPolicy object. The DpPolicy module 120 provides a PolicyScheduler mechanism using the scheduler 43 for driving the transfer of snapshots to be replicated. The PolicyScheduler controls the number of transfers in progress from the primary data mover to the secondary data mover. Transfer is scheduled by the scheduler 43 on a first-come, first-served basis. There is a single instance of PolicyScheduler per data mover. The PolicyScheduler maintains a list of snapshots for each replication session based on mark and unmark operations in the in-memory and on-disk database. The PolicyScheduler also protects from deletion the snapshot delta in transfer and the base snapshot on the primary data mover and the secondary data mover.
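The scheduling behavior described above can be sketched as a FIFO queue of marked snapshots with a cap on concurrent transfers; the class below is a simplified illustration with hypothetical names, not the product's scheduler:

    #include <cstddef>
    #include <queue>
    #include <string>

    class PolicyScheduler {
     public:
      explicit PolicyScheduler(std::size_t maxInProgress)
          : maxInProgress_(maxInProgress) {}

      // A marked snapshot joins the pending queue for its replication session.
      void markSnapshot(const std::string& session) { pending_.push(session); }

      // Start transfers first come, first served, up to the in-progress cap.
      void dispatch() {
        while (inProgress_ < maxInProgress_ && !pending_.empty()) {
          startTransfer(pending_.front());
          pending_.pop();
          ++inProgress_;
        }
      }

      void transferDone() { --inProgress_; dispatch(); }

     private:
      void startTransfer(const std::string& /*session*/) { /* hand off to the copier */ }
      std::queue<std::string> pending_;
      std::size_t inProgress_ = 0;
      std::size_t maxInProgress_;
    };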
A DpVersion object is created per Version Set. A DP Version module 48 provides an interface for accessing Version Set interfaces. The DP Version module 48 maintains the on-disk database to provide persistency at reboot.
The DpTaskManager module 119 provides a mechanism for handling specific commands running on top of the replication session such as failover and copy reverse. Failover aborts a replication session, sets a copy in the secondary data mover to production in read/write mode and sets the source of the copy in the primary data mover to read-only mode. Copy Reverse copies a snapshot from the secondary data mover back to the primary data mover. A DpTaskManager object is created just for the time of the currently running operation. The object is maintained in the on-disk database to provide persistency at reboot.
A DpTunnel module 153 provides a generic interface for sending requests between the primary data mover and the secondary data mover and returning the corresponding response. It supports only synchronous mode and no persistency at reboot. Operations handled by the DpTunnel module 153 are ValidateIp, VersionCreate, VersionCreateBranch, VersionDelete, VersionGetAttribute, VersionGetOpaqueData, VersionSetOpaqueData, VersionPromote, VersionDemote and VersionList. The DpTunnel module 153 provides a DpTunnelDicService interface for processing and replying to simple requests between the primary and secondary data movers in synchronous mode. The DpTunnel module 153 also provides a DpTunnelDicMsgService interface for processing and replying to multiple block requests between the primary and secondary data movers in synchronous mode. The DpTunnel module 153, the DpTask Manager module 119, and the DP Replica module 42 use a DicSession module 154 for communication over a DIC-TCP connection.
In step 162, the system administrator may set up CIFS services on a selected one of the data movers (91, 92, 93).
Finally, in step 176, the system administrator either selects a “manual refresh” option, or specifies a “max out of sync” time in minutes. A “manual refresh” results in a remote copy of the data storage object that is updated to the state of the local production copy at the time that the “manual refresh” is requested. Such a “manual refresh” can be requested any number of times during the replication session. If the system administrator specifies a “max out of sync” time, then the replication process will keep a remote copy of the data storage object that is out of sync with the local production copy by no more than the “max out of sync” time. This accounts not only for the time to transmit the replicated data from the primary data storage server to the secondary data storage server, but also for the time required to read and process the delta at the primary and the time required to write the replicated data into the secondary data storage at the destination. The “max out of sync” time can have preset limits, such as a minimum value of 1 minute and a maximum value of 1440 minutes (i.e., 24 hours).
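In effect, the “max out of sync” time is a budget from which the read/process, transmit, and write times are subtracted to decide how late the next replication cycle may start. A hedged sketch of that arithmetic, with the estimate inputs assumed rather than taken from the patent:

    #include <algorithm>
    #include <chrono>

    using Minutes = std::chrono::minutes;

    // Clamp the configured window to the preset limits (1 to 1440 minutes).
    Minutes clampMaxOutOfSync(Minutes requested) {
      return std::clamp(requested, Minutes(1), Minutes(1440));
    }

    // Latest delay after a refresh at which the next replication cycle must begin.
    Minutes nextCycleDeadline(Minutes maxOutOfSync, Minutes estReadAndProcess,
                              Minutes estTransmit, Minutes estWrite) {
      Minutes budget = estReadAndProcess + estTransmit + estWrite;
      return maxOutOfSync > budget ? maxOutOfSync - budget : Minutes(0);
    }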
In step 188, if there is a storage quota applicable to the data storage object to be snapshot copied, then the free storage space for allocation to the storage object is compared to a specified fraction of the storage quota, and if the free storage space is less than this specified fraction, execution branches to step 189 to execute a predetermined “low quota” policy for the data storage object. Otherwise, execution continues from step 188 to step 190. In step 190, if the object is currently being replicated, then the snapshot copy that would otherwise be deleted at the end of the current replication cycle is retained. If the object is not currently being replicated, then the snapshot copy facility is invoked to make another snapshot copy of the data storage object.
There can be multiple destinations for replication of the same data storage object from the same primary data storage server. For example, if a replication session is requested from the same source to a different destination than an existing session, then a new replication session will be started, but it will share the same source data. A different remote snapshot policy can be enforced at each destination. The replication manager in the primary data storage server will recognize the new replication session and the existing replication session as belonging to a combined one-to-many replication process. In this way, multiple replication sessions from the same source can be performed more efficiently by sharing the same source data.
Replication sessions can be associated with jobs. There can be jobs that are just local on the primary, for example to create snapshots on the primary that are not replicated. There can be jobs that establish multiple replication sessions, for a one-to-many replication. For example:
Job 1 (Rep Session 0)
Job 2 (Rep Session 1, Rep Session 2)
Job 3 (Local Snapshot Creation)
In this example, if Rep Session 1 and Rep Session 2 have the same source data, then Job 2 will be recognized as a one-to-many replication process.
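Recognizing the one-to-many case amounts to grouping a job's sessions by their source data; any group with more than one destination can share a single read of the source. The structures below are illustrative stand-ins, not the patented job format:

    #include <map>
    #include <string>
    #include <vector>

    struct RepSession {
      std::string sourceObject;  // data storage object being replicated
      std::string destination;   // destination data storage server
    };

    // Group a job's sessions by source; a group with more than one destination
    // is a one-to-many replication sharing one read of the source data.
    std::map<std::string, std::vector<std::string>>
    groupBySource(const std::vector<RepSession>& job) {
      std::map<std::string, std::vector<std::string>> bySource;
      for (const auto& s : job) bySource[s.sourceObject].push_back(s.destination);
      return bySource;
    }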
In a fail-over example, transaction processing by an application is recovered from a snapshot of the object held in secondary storage, beginning in step 206 with a check of whether the first secondary holds the most recent snapshot of the object.
In step 206, if the first secondary does not have the most recent snapshot of the object, then execution branches to step 210. In step 210, if the second secondary does not have the most recent snapshot of the object, then the recovery process is terminated because there is no snapshot of the object in secondary storage for recovery of transaction processing by the application. If the second secondary has the most recent snapshot of the object, then execution continues from step 210 to step 211. In step 211, the second secondary refreshes object-2 with the most recent snapshot. Then in step 212 the second secondary performs a full remote copy of object-2 from the second secondary to the first secondary, so that object-1 on the first secondary is replaced with this full remote copy of object-2. Execution then continues from step 212 to step 209 so that transaction processing by the application resumes at the first secondary upon the full remote copy of object-2, and the first secondary replicates change of the object to the second secondary.
Restoration of a data storage object follows a similar delta-based procedure: the primary data storage server communicates with the secondary data storage server to find a most recent common base snapshot copy of the object, so that only the changes between the desired version and the common base need to be transmitted and applied.
In step 252, if a common base snapshot of the object is not found, then execution branches to step 255. In step 255, the secondary data storage server replicates to the primary data storage server the desired version in the secondary storage, and the primary data storage server restores the primary production copy of the object with this replicated data.
The primary data storage server can give network clients read-write access to the production copy on a priority basis during the restoration process, by servicing the network clients with data obtained from the secondary data storage server. See, for example, Armangau, “Instantaneous restoration of a production copy from a snapshot copy in a data storage system,” U.S. Pat. No. 6,957,362 issued Oct. 18, 2005, incorporated herein by reference.
For example, an administrator at the control station of the primary data storage server can configure network properties of a remote secondary data storage server using the following set of commands, which set up authentication and a throttle bandwidth schedule for replication to the remote secondary:
nas_cel <remoteSystem>
- -server {<mover_name>|id=<slot>|ALL}
- {-create <remoteslot> -interfaces <hostname|ipAddr>,... -type <securityType> [-throttle <sched>]
- |-modify <remoteslot> -interfaces <hostname|ipAddr>,... -throttle <sched>
- |-delete <remoteslot>
- |-verify <remoteslot>
- |-list}
- where <sched> means {Su|M|T|W|H|F|Sa}*HH:MM-HH:MM/<Kb/sec>, for example “MTWHF08:00-18:00/10000;Sa08:00-13:00/25000”
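A single <sched> entry such as “MTWHF08:00-18:00/10000” decomposes into day codes, a time window, and a Kb/sec limit. The following parser is a minimal sketch for one entry (a schedule with several entries separated by “;” would be split first); it is illustrative, not the product's parser:

    #include <cctype>
    #include <cstddef>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct ThrottleWindow {
      std::vector<std::string> days;  // e.g. {"M","T","W","H","F"}
      int startMin = 0, endMin = 0;   // minutes from midnight
      long kbPerSec = 0;              // bandwidth limit within the window
    };

    bool parseSched(const std::string& s, ThrottleWindow& w) {
      std::size_t i = 0;
      while (i < s.size() && std::isalpha(static_cast<unsigned char>(s[i]))) {
        // "Su" and "Sa" are two-letter day codes; the rest are one letter.
        if (s[i] == 'S' && i + 1 < s.size() && (s[i + 1] == 'u' || s[i + 1] == 'a')) {
          w.days.push_back(s.substr(i, 2)); i += 2;
        } else {
          w.days.push_back(s.substr(i, 1)); i += 1;
        }
      }
      int h1, m1, h2, m2; long kb;
      if (std::sscanf(s.c_str() + i, "%d:%d-%d:%d/%ld", &h1, &m1, &h2, &m2, &kb) != 5)
        return false;
      w.startMin = h1 * 60 + m1;
      w.endMin = h2 * 60 + m2;
      w.kbPerSec = kb;
      return true;
    }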
For a particular replication job, the system administrator can use the following command to change the maximum out-of-sync time (SLA, in minutes) and specify the state of a flag (True or False) to turn on and off propagation from a remote secondary to cascade destinations previously set for the remote secondary:
repJobID# SLA [propagate_flag]
A problem with replication over IP is that packets from various sources may share a link to a common destination. If changes from various sources are replicated to the common destination as soon as the changes are made to the production versions at those sources, network congestion is likely. Such congestion can be eliminated by determining a maximum bandwidth of the link or network port at the destination, and by throttling the transmission bandwidth of the various sources so that together they share the total maximum bandwidth at which data can be delivered to and received at the common destination.
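Pacing of this kind is commonly implemented as a token-bucket or inter-chunk delay; the sketch below spaces chunk transmissions so a sender stays within its allocated share, and the share can be adjusted mid-session. This is an illustrative mechanism, not the patented throttle bandwidth module:

    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <thread>

    class Throttle {
     public:
      explicit Throttle(double bytesPerSec) : rate_(bytesPerSec) {}

      // Adjust the bandwidth allocation share while replication is in progress.
      void setRate(double bytesPerSec) { rate_ = bytesPerSec; }

      // Block just long enough that sending `chunk` bytes now stays within rate_.
      void pace(std::size_t chunk) {
        using namespace std::chrono;
        auto now = steady_clock::now();
        if (next_ > now) std::this_thread::sleep_until(next_);
        auto cost = duration<double>(static_cast<double>(chunk) / rate_);
        next_ = std::max(next_, now) + duration_cast<steady_clock::duration>(cost);
      }

     private:
      double rate_;  // bytes per second allocated to this session
      std::chrono::steady_clock::time_point next_{};  // earliest next send time
    };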
Client file access can also be switched over from one file server to another during replication. In an example data network, original clients 302, 303 are serviced by a source file server 304, and files are replicated from the source file server 304 to a destination file server 314 that may take over servicing of the clients.
A problem with switching over client file access during replication is that persistent file handles that have been issued by the source file server normally do not identify the versions of the replicated files that reside on the destination file server. Therefore, when the destination file server 314 begins servicing the original clients 302, 303 of the source file server 304, any persistent file handles issued by the source file server 304 to the original clients 302, 303 normally are not usable for proper access to the replicated files in the destination file server 314.
A file handle is used in the NAS NFS protocol for uniquely identifying a file. In order for a client to access a specified file in a file server, the client first sends a lookup request to the file server. The lookup request includes a path name specifying the file. In response to the lookup request, the server returns a file handle to the client. The client then uses the file handle in subsequent requests to access the file. A file handle typically includes a file system identifier (FSID) and a file identifier (fid). The FSID uniquely identifies the file system within the file server for client access at a particular IP address of the file server. The file identifier (fid) typically is the inode number of the inode of the file.
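The information carried by such a handle can be pictured as a small fixed record; the struct below is only illustrative of its contents, since the handle is opaque to clients and its exact layout is server-specific:

    #include <cstdint>

    struct FileHandle {
      std::uint32_t fsid;  // file system identifier at this server IP address
      std::uint64_t fid;   // inode number of the file within that file system
    };

    // Two handles presented at the same IP address name the same file only if
    // both fields match, which is why FSIDs must be unique per address.
    inline bool sameFile(const FileHandle& a, const FileHandle& b) {
      return a.fsid == b.fsid && a.fid == b.fid;
    }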
Typically the FSID is automatically generated by a file server when a file system is created. The client specifies a name for a new file system, and the file server allocates storage for the file system and assigns a unique FSID to the file system. The FSID is part of the metadata of the file system. The automatic assignment of the FSID ensures that any two file systems accessible via the same IP address of the file server do not use the same FSID. If two file systems accessible via the same IP address would use the same FSID, then the file handles would be ambiguous. This would be a very undesirable situation, because an ambiguous file handle may cause a client to read from, write to, or delete the wrong file.
For some purposes, it is desirable to permit a system administrator to change the FSID of a file system. For example, a file server may contain a mirror or backup copy of an original file system, and if the original file system becomes inaccessible or corrupted, the system administrator may remove or unmount the original file system, and then change the FSID of the mirror or backup copy to the FSID of the original file system. In this fashion, the mirror or backup copy assumes the identity of the original file system.
For switch-over of access to replicated file systems to be accessed with NFS, it is desirable for the destination file server to use the same file system metadata to generate file handles as the source file server. This permits clients to use persistent file handles for file access concurrent with the switch-over. Therefore the switch-over can be entirely transparent to the clients, without the destination file server generating any “stale handle” error when the destination server receives a file access request including a file handle returned from the source file server. The NFS mounts and the cached file handles in the clients will not go stale so that the clients will not need to reboot or issue new lookup requests to access files in the destination server.
A transparent switch-over of client access may result when the destination file server assumes the IP address of the source file server upon switch-over in order to intercept file access requests from the original clients of the source file server, and when the destination file server uses the same file handle format, FSID, and fids for the destination file system as used by the source file server for the source file system. For example, a transparent switch-over of client access results when the following three conditions are satisfied: (1) after the switch-over, the destination file server appears to have the same IP address as the primary file server prior to the switch-over; (2) after the switch-over, the destination file server is either using the same media access control (MAC) address as the source file server prior to the switch-over OR the clients are all at least one router hop away from the source file server and the destination file server (so that the clients see the MAC of the router, not of the file servers); and (3) the file handle generation logic for the destination file server is the same as for the source file server.
The first two conditions can be satisfied in various ways, for example by an appropriate network configuration.
If the file handle generation logic in the destination server is identical to the file handle generation logic in the source server, the clients will pick up where they left off (with all of their cached file handles still valid). If the file handle generation is different (e.g., if the FSIDs of the migrated files on the destination server are different from the FSIDs of the original files on the source server), then the clients' mounts will go stale and the clients will have to umount/remount or reboot, depending on the particular circumstances.
To provide file handle generation logic in the destination file server that is identical to the file handle generation logic in the source file server, without changing the conventional format of the file handle, the respective FSIDs of the migrated files in the destination server should be set identical to the FSIDs of the original files in the source server. This is not disruptive so long as the destination server is not already using the FSIDs of the original files to be migrated. Therefore, when a replication session is created and the destination is specified as a volume pool (not an explicit file system), the destination server attempts to assign to the new file system being created on the destination server the same FSID as on the source file server, so long as this FSID is not already being used on the destination server. If this FSID is already being used on the destination server, there are two alternatives. Either a different FSID is chosen and any file handles issued for the original FSID by the source file server are invalidated (e.g., by unmounting the file system on the clients of the source file server so that the original FSID from the source file server is purged from the file handle caches of these clients), or the FSID of the file system already using it on the destination server is changed and any file handles issued for that FSID by the destination file server are invalidated (e.g., by unmounting the file system on the clients of the destination file server so that the FSID is purged from the file handle caches of those clients).
In step 332, a replication session is created for each source file system, and the free volume pool of the destination file server is specified as the destination file system for the replication session. At the start of the replication session, the source file server sends to the destination file server the source file system metadata including the FSID of the source file system, and the destination file server allocates a storage volume from its free volume pool to contain the secondary copy of the source file system.
In step 333, the destination file server checks whether or not the FSID of each source file system is already in use on the destination file server. If the FSID of the source file system is not already in use on the destination file server, then the destination file server assigns the same FSID to the storage volume to contain the secondary copy of the source file system. If the FSID of the source file system is already in use on the destination file server, then the destination file server can automatically resolve the conflict by unmounting the conflicting volume already in use on the destination server, assigning a new FSID to the conflicting volume, and remounting it. This process causes the clients having access to the conflicting volume to purge their caches of any stale file handles for that volume's file system. Once any stale file handles are no longer usable by clients of the destination file server, the old FSID is assigned to the storage volume that was allocated to contain the secondary copy of the source file system. Therefore, by the end of step 333, the destination file server has assigned the FSID of the source file system to the storage volume that the destination file server allocates to contain the secondary copy of the source file system.
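The assignment rule in step 333 reduces to: take the source FSID if it is free; otherwise renumber the conflicting local file system first (unmounting it so stale client handles are purged) and then take over the source FSID. A compact sketch, with an assumed in-use registry rather than the server's actual metadata store:

    #include <cstdint>
    #include <set>

    using Fsid = std::uint32_t;

    Fsid assignReplicaFsid(Fsid sourceFsid, std::set<Fsid>& inUse) {
      if (inUse.insert(sourceFsid).second)
        return sourceFsid;  // no conflict: the replica keeps the source FSID

      // Conflict: move the existing file system to a fresh FSID first
      // (its clients purge stale handles via unmount/remount).
      Fsid fresh = sourceFsid + 1;
      while (inUse.count(fresh)) ++fresh;  // naive search for an unused FSID
      inUse.insert(fresh);                 // conflicting volume remounted as `fresh`
      return sourceFsid;                   // the replica now owns the source FSID
    }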
Once any FSID conflict is resolved and a storage volume of the destination file server is allocated to contain the secondary copy of the source file system, the replication session begins replicating data of the source file system to the allocated destination volume.
In step 334, when it is desired to switch file access of the original clients of the source file server over to the destination file server, the destination file server assumes the IP address of the source file server. So long as the source file server is operational, the replication of data of the source file system to the allocated destination volume may continue so that the destination file system is brought up to date with the source file system. Preferably the replication occurs in background, concurrent with the destination file server giving priority to servicing of file access requests from original clients of the source file server. For example, if the destination file server receives a client request to access data that has not yet been replicated from the source file system to the volume allocated to contain the secondary copy of the source file system, then the destination file server responds by fetching the data from the source file system on the source file server, storing the data in the volume allocated to contain the secondary copy of the source file system, and then accessing the data in the secondary copy of the source file system. Further details of such a process of replication in background concurrent with a destination file server giving priority to servicing file access requests from network clients can be found in Bober et al. U.S. Pat. No. 6,938,039 issued Aug. 30, 2005, incorporated herein by reference.
In view of the above, a data storage server is programmed for management, version control, and scheduling of replication of data storage objects of multiple data storage object types. The multiple data storage object types include iSCSI LUNs, file systems, virtual servers, directories, and files. The version control determines if two data storage objects are the same or have a common base so that only a difference needs to be transmitted for replication or restoration. The scheduler controls the timing of snapshot creation and deletion, and replication transmission to one or more remote destinations. A replication job may specify coincident replication sessions, and if the coincident replication sessions specify the same data storage object and different destinations, then the replication data is read once from the data storage object and transmitted to the different destinations in a “one to many” replication. A cascaded replication can be set up by configuring a replication session with a list of propagation destinations, so that the replication data is received at a destination of the session and automatically forwarded from the destination of the session to each of the propagation destinations. Concurrent replication sessions having the same destination share reception bandwidth of the destination. For fair and efficient usage of the total reception bandwidth, a respective bandwidth allocation share is adjusted for each session, and the data transmission of each session to the common destination is paced in accordance with the respective bandwidth allocation share. The remote replication of a file from a primary data storage server to a secondary data storage server includes remote replication of corresponding file handle information so that a file handle issued by a primary data storage server to a network client can be used by the network client for accessing the replicated file in the secondary data storage server.