Conserving Computing Resources during Network Parallel Processing
Abstract
A parallel processing device includes a parallel processing engine implemented by a processor. The parallel processing engine is configured to execute a shell script for each particular processing job in a queue of processing jobs to run. The shell script is configured to dynamically generate a configuration file for each particular processing job. The configuration file instructs a network of computing systems to run the particular processing job using a particular number of parallel partitions corresponding to a parallel partitions parameter associated with the particular job. The configuration file includes randomized scratch directories for computing nodes within the network of computing systems and a calculated container size for the particular processing job. Each processing job is run on the network of computing systems according to the dynamically-generated configuration file of the particular processing job.
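The per-job shell script summarized in the abstract can be sketched as follows. This is a minimal illustration only: the variable names, the sizing formula (512 MB per queued unit of work), and the key=value config-file format are all assumptions, not the patented implementation.

```shell
#!/bin/sh
# Hypothetical per-job shell script: size the container from the queue
# size parameter, gate on the configuration variable, and generate a
# configuration file with randomized scratch directories.

QUEUE_SIZE=8              # queue size parameter for this job (assumed value)
PARTITIONS=4              # parallel partitions parameter (assumed value)
CONFIG_VARIABLE="DYNAMIC" # configuration variable for this job
PREDETERMINED="DYNAMIC"   # predetermined value that enables dynamic generation

# Calculate a container size from the queue size parameter
# (assumed formula: 512 MB per queued unit of work).
CONTAINER_MB=$((QUEUE_SIZE * 512))

CONFIG_FILE=/tmp/job.conf
if [ "$CONFIG_VARIABLE" = "$PREDETERMINED" ]; then
    {
        echo "partitions=$PARTITIONS"
        echo "container_size_mb=$CONTAINER_MB"
        # One randomized scratch directory per computing node.
        for node in $(seq 1 "$PARTITIONS"); do
            echo "scratch_dir_node${node}=$(mktemp -d /tmp/scratch.XXXXXX)"
        done
    } > "$CONFIG_FILE"
fi
# A real engine would now trigger the job against this file, e.g.:
# submit_job --config "$CONFIG_FILE"   (hypothetical launcher)
```

Under these assumptions a queue size parameter of 8 yields a 4096 MB container, and each node line in the file carries a freshly randomized scratch path.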
20 Claims
1. A parallel processing device, comprising:
one or more memory devices operable to store a queue of processing jobs to run; and
a parallel processing engine implemented by a processor communicatively coupled to the one or more memory devices, the parallel processing engine configured to:
access the queue of processing jobs to run; and
execute a shell script for each particular processing job in the queue of processing jobs to run, the shell script configured to:
access a queue size parameter associated with the particular processing job;
calculate a container size for the particular processing job based on the queue size parameter;
access a parallel partitions parameter associated with the particular processing job;
access a configuration variable associated with the particular processing job;
determine whether the configuration variable associated with the particular processing job matches a predetermined value;
in response to determining that the configuration variable associated with the particular processing job matches the predetermined value, dynamically generate a configuration file for the particular processing job, the configuration file configured to instruct a network of computing systems to run the particular processing job using a particular number of parallel partitions corresponding to the parallel partitions parameter, the configuration file comprising:
randomized scratch directories for computing nodes within the network of computing systems; and
the calculated container size for the particular processing job; and
trigger the particular processing job to run on the network of computing systems according to the dynamically-generated configuration file of the particular processing job.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
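The engine elements of claim 1 — accessing the queue and executing the shell script for each particular job — can be sketched as an outer loop. The queue file format (one job id per line) and the stub script are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical outer loop of the parallel processing engine: walk the
# queue of processing jobs and execute the per-job shell script for
# each entry.

run_job_script() {
    # Stand-in for the per-job shell script of the claim; the real one
    # would compute the container size, generate the configuration
    # file, and trigger the job on the network of computing systems.
    echo "running job $1"
}

# Assumed queue format: one job id per line.
printf '%s\n' job-a job-b job-c > /tmp/jobs.queue

while IFS= read -r job_id; do
    [ -n "$job_id" ] || continue   # skip blank lines
    run_job_script "$job_id"
done < /tmp/jobs.queue > /tmp/engine.log
```

Driving each job through a fresh script invocation is what lets the configuration file be regenerated per job rather than shared statically across the queue.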
8. A parallel processing method, comprising:
accessing, by a parallel processing engine, a queue of processing jobs to run;
executing, by the parallel processing engine, a shell script for each particular processing job in the queue of processing jobs to run;
accessing, by the shell script, a queue size parameter associated with the particular processing job;
calculating, by the shell script, a container size for the particular processing job based on the queue size parameter;
accessing, by the shell script, a parallel partitions parameter associated with the particular processing job;
accessing, by the shell script, a configuration variable associated with the particular processing job;
determining, by the shell script, whether the configuration variable associated with each particular processing job matches a predetermined value;
in response to determining that the configuration variable associated with each particular processing job matches the predetermined value, dynamically generating a configuration file by the shell script for the particular processing job, the configuration file configured to instruct a network of computing systems to run the particular processing job using a particular number of parallel partitions corresponding to the parallel partitions parameter, the configuration file comprising:
randomized scratch directories for computing nodes within the network of computing systems; and
the calculated container size for the particular processing job; and
triggering, by the shell script, each particular processing job to run on the network of computing systems according to its associated dynamically-generated configuration file.
- View Dependent Claims (9, 10, 11, 12, 13, 14)
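The "randomized scratch directories for computing nodes" element recited above can be illustrated in isolation: each node gets a fresh, unpredictable scratch path so repeated runs cannot collide on stale node-local state. The node names and the map-file location are assumptions:

```shell
#!/bin/sh
# Hypothetical generation of one randomized scratch directory per
# computing node, recorded as node=path lines in a map file.

NODES="node1 node2 node3"          # assumed node names
MAP=/tmp/scratch_map.conf
: > "$MAP"                         # truncate the map file
for node in $NODES; do
    dir=$(mktemp -d "/tmp/${node}.scratch.XXXXXX")  # randomized path
    echo "${node}=${dir}" >> "$MAP"
done
```

`mktemp -d` both randomizes the suffix and creates the directory atomically, which is why it is a natural fit for this element.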
15. A computer program product comprising executable instructions stored in a non-transitory computer readable medium that, when executed by a processor, cause the processor to implement a parallel processing engine configured to:
access a queue of processing jobs to run; and
execute a shell script for each particular processing job in the queue of processing jobs to run, the shell script configured to:
access a queue size parameter associated with the particular processing job;
calculate a container size for the particular processing job based on the queue size parameter;
access a parallel partitions parameter associated with the particular processing job;
access a configuration variable associated with the particular processing job;
determine whether the configuration variable associated with the particular processing job matches a predetermined value;
in response to determining that the configuration variable associated with the particular processing job matches the predetermined value, dynamically generate a configuration file for the particular processing job, the configuration file configured to instruct a network of computing systems to run the particular processing job using a particular number of parallel partitions corresponding to the parallel partitions parameter, the configuration file comprising:
randomized scratch directories for computing nodes within the network of computing systems; and
the calculated container size for the particular processing job; and
trigger the particular processing job to run on the network of computing systems according to the dynamically-generated configuration file of the particular processing job.
- View Dependent Claims (16, 17, 18, 19, 20)
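The final "trigger" step common to all three independent claims can be sketched as reading the dynamically generated configuration file back and launching the job with the values it specifies. The key names mirror the earlier sketches and, like the echoed stand-in for a real cluster submit command, are assumptions:

```shell
#!/bin/sh
# Hypothetical trigger step: parse the generated configuration file and
# request the partition count and container size it specifies.

CONF=/tmp/demo_job.conf
cat > "$CONF" <<'EOF'
partitions=4
container_size_mb=4096
EOF

partitions=$(sed -n 's/^partitions=//p' "$CONF")
container=$(sed -n 's/^container_size_mb=//p' "$CONF")

# Stand-in for the real launcher; just record what would be requested.
echo "trigger: ${partitions} partitions, ${container} MB containers" \
    > /tmp/trigger.log
```

Because the file is regenerated per job, the partition count and container size follow each job's own parameters instead of a fixed cluster-wide setting, which is the resource-conserving point of the title.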
Specification