Data transformation offloading in an artificial intelligence infrastructure
First Claim
1. A method of data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, the method comprising:
scheduling, by a unified management plane, one or more transformations for one or more of the storage systems to apply to the dataset;
scheduling, by the unified management plane, execution of one or more machine learning algorithms associated with the machine learning model by the one or more GPU servers;
storing, within the storage system, a dataset;
identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and
generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
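The claimed method can be illustrated as a small sketch: a unified management plane identifies a model-specific transformation, schedules it on the storage system (so the transformed dataset is generated storage-side rather than on the GPU servers), and then schedules the machine learning algorithm on a GPU server. All class and function names below are hypothetical illustrations, not part of the patent.

```python
# Hypothetical sketch of storage-side transformation offloading.
# Transformations a storage system might apply, identified in
# dependence upon the machine learning model to be executed.
TRANSFORMS = {
    "image_classifier": lambda rec: {**rec, "pixels": [p / 255.0 for p in rec["pixels"]]},
    "text_model": lambda rec: {**rec, "text": rec["text"].lower()},
}

class StorageSystem:
    """Stores datasets and applies transformations locally (the offload)."""
    def __init__(self):
        self.datasets = {}

    def store(self, name, records):
        # Step: storing, within the storage system, a dataset.
        self.datasets[name] = list(records)

    def transform(self, name, transform):
        # Step: generating, by the storage system, a transformed dataset,
        # so the GPU servers receive data ready for training.
        return [transform(rec) for rec in self.datasets[name]]

class UnifiedManagementPlane:
    """Schedules storage-side transformations and GPU-side execution."""
    def __init__(self, storage, gpu_servers):
        self.storage = storage
        self.gpu_servers = gpu_servers

    def run(self, dataset_name, model_type, train_fn):
        # Step: identifying the transformation from the model to be executed.
        transform = TRANSFORMS[model_type]
        # Step: scheduling the transformation on the storage system.
        transformed = self.storage.transform(dataset_name, transform)
        # Step: scheduling the ML algorithm on one of the GPU servers.
        server = self.gpu_servers[0]
        return train_fn(server, transformed)
```

A caller would store a dataset, then hand the management plane a model type and a training callback to run on the chosen GPU server.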
Abstract
Data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, including: storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset.
187 Citations
17 Claims
- 1. A method of data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, the method comprising: scheduling, by a unified management plane, one or more transformations for one or more of the storage systems to apply to the dataset; scheduling, by the unified management plane, execution of one or more machine learning algorithms associated with the machine learning model by the one or more GPU servers; storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset. - View Dependent Claims (2, 3, 4, 5, 6)
- 7. An artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, the artificial intelligence infrastructure configured to carry out the steps of: scheduling, by a unified management plane, one or more transformations for one or more of the storage systems to apply to the dataset; scheduling, by the unified management plane, execution of one or more machine learning algorithms associated with the machine learning model by the one or more GPU servers; storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset. - View Dependent Claims (8, 9, 10, 11, 12)
- 13. An apparatus for data transformation offloading in an artificial intelligence infrastructure that includes one or more storage systems and one or more graphical processing unit (‘GPU’) servers, the apparatus comprising a computer processor and a computer memory operatively coupled to the computer processor, the computer memory having disposed within it computer program instructions that, when executed by the computer processor, cause the apparatus to carry out the steps of: scheduling, by a unified management plane, one or more transformations for one or more of the storage systems to apply to the dataset; scheduling, by the unified management plane, execution of one or more machine learning algorithms associated with the machine learning model by the one or more GPU servers; storing, within the storage system, a dataset; identifying, in dependence upon one or more machine learning models to be executed on the GPU servers, one or more transformations to apply to the dataset; and generating, by the storage system in dependence upon the one or more transformations, a transformed dataset. - View Dependent Claims (14, 15, 16, 17)
Specification