Systems and methods for computing applications

  • US 9,152,470 B2
  • Filed: 09/06/2012
  • Issued: 10/06/2015
  • Est. Priority Date: 09/07/2011
  • Status: Active Grant
First Claim
1. A system for dynamic deployment of computing applications comprising:

  • one or more linked repositories storing blueprints, graphs, and components;

    one or more processors configured to receive a command to deploy at least one computing application to process at least one input data stream, and in response, trigger deployment of the at least one computing application over a plurality of disparate host systems, the at least one computing application realized by a blueprint of the blueprints in the one or more linked repositories;

    at least one host system of the disparate host systems comprising:

    a cloud agent and one or more cloud engines instantiated by the cloud agent;

    wherein the cloud agent is configured to receive the command to deploy the at least one computing application to process the at least one input data stream and, in response, instantiate the one or more cloud engines on the respective host system and provide a running environment for the one or more cloud engines;

    wherein the one or more cloud engines are configured to dynamically construct the at least one computing application on the respective host system by realizing requirements of the blueprint of the at least one computing application, the requirements identifying at least one graph from the graphs stored in the one or more linked repositories and a plurality of components from the components stored in the one or more linked repositories, and by sending a request to the one or more linked repositories to load the blueprint, the at least one graph, and the plurality of components on the respective host system; and

    wherein the one or more cloud engines deploy the dynamically constructed at least one computing application on the respective host system by:

    instantiating the at least one graph using the blueprint, the graph representing a workflow of the plurality of components, the workflow defining an arrangement of the plurality of components;

    detecting that the plurality of components comprise a first set of components written for a first architecture and a second set of components written for a second architecture;

    representing at least a portion of the graph as a first subgraph and a second subgraph;

    using the first subgraph to define connections between the first set of components using pins and the workflow of the graph, and using the second subgraph to define connections between the second set of components using additional pins and the workflow of the graph, each component of the first and second sets of components having at least one input pin for receiving at least one input data container and at least one output pin for providing at least one output data container, the respective component transforming the input data container into the output data container using a computing processing mechanism, each component being a distribution plug-in unit to provide a portable and isolated dependency set for the computing processing mechanism of the respective component;

    connecting the second subgraph to the first subgraph using at least one additional input pin and at least one additional output pin to pass at least a portion of the data containers between the first subgraph and the second subgraph, the connected first and second subgraphs maintaining the workflow of the graph;

    instantiating a running process instance of the first architecture on the respective host system for the first set of components of the first subgraph;

    instantiating a running process instance of the second architecture on another host system of the disparate host systems for the second set of components of the second subgraph, the running process instance of the first architecture being linked to the running process instance of the second architecture by the cloud engine at deployment time to link the first subgraph and the second subgraph; and

    processing the input data stream as a plurality of data containers using the plurality of components to generate an output data stream, the plurality of data containers flowing between the plurality of components using the pins;

    using the at least one additional input pin and the at least one additional output pin to pass at least a portion of the data containers between the running process instance of the first architecture on the respective host system and the running process instance of the second architecture on the other host system to generate the output data stream.
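
The deployment sequence recited above (a cloud agent receiving the deploy command, instantiating a cloud engine, and the engine requesting the blueprint, graph, and components from the linked repositories) can be sketched as follows. This is an illustrative reading only; the class names (`Repository`, `CloudEngine`, `CloudAgent`) and the artifact keys are hypothetical and do not appear in the patent:

```python
class Repository:
    """Stands in for the linked repositories of blueprints, graphs, and components."""
    def __init__(self, artifacts):
        self._artifacts = artifacts

    def load(self, key):
        # Respond to a load request for a named artifact.
        return self._artifacts[key]

class CloudEngine:
    """Dynamically constructs an application by realizing its blueprint's requirements."""
    def __init__(self, repo):
        self.repo = repo

    def construct(self, blueprint_name):
        blueprint = self.repo.load(blueprint_name)
        graph = self.repo.load(blueprint["graph"])
        components = [self.repo.load(c) for c in blueprint["components"]]
        return {"graph": graph, "components": components}

class CloudAgent:
    """Receives the deploy command and instantiates cloud engines on its host."""
    def __init__(self, repo):
        self.repo = repo
        self.engines = []

    def deploy(self, blueprint_name):
        engine = CloudEngine(self.repo)
        self.engines.append(engine)
        return engine.construct(blueprint_name)

repo = Repository({
    "app.blueprint": {"graph": "app.graph", "components": ["comp.a", "comp.b"]},
    "app.graph": ["comp.a", "comp.b"],   # workflow order of the components
    "comp.a": "component-a",
    "comp.b": "component-b",
})
app = CloudAgent(repo).deploy("app.blueprint")
print(app["components"])  # ['component-a', 'component-b']
```

The agent only manages engine lifecycles; the engine alone talks to the repositories, mirroring the claim's division of roles.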
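
The pin-and-container dataflow model, in which each component receives a data container on an input pin, transforms it with its processing mechanism, and emits the result on an output pin, can be illustrated with a minimal sketch. All names here (`DataContainer`, `Pin`, `Component`) are assumptions for illustration, not terms of art from the specification:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class DataContainer:
    """Unit of data passed between components over pins."""
    payload: dict

@dataclass
class Pin:
    """Connection point; an output pin may be wired to a downstream component."""
    connected_to: Optional["Component"] = None

@dataclass
class Component:
    """Processing unit: an input pin, an output pin, and a transform
    (the 'computing processing mechanism' of the claim)."""
    name: str
    transform: Callable[[DataContainer], DataContainer]
    output_pin: Pin = field(default_factory=Pin)

    def receive(self, container: DataContainer) -> DataContainer:
        out = self.transform(container)           # input pin -> transform
        downstream = self.output_pin.connected_to
        return downstream.receive(out) if downstream else out

# A two-component workflow: double a value, then add one.
double = Component("double", lambda c: DataContainer({"x": c.payload["x"] * 2}))
plus_one = Component("plus_one", lambda c: DataContainer({"x": c.payload["x"] + 1}))
double.output_pin.connected_to = plus_one         # wire output pin to input pin

result = double.receive(DataContainer({"x": 3}))
print(result.payload["x"])  # 7
```

In a full implementation each component would also carry its own isolated dependency set (the claim's "distribution plug-in unit"); that packaging concern is omitted here.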
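
The subgraph-partitioning steps, detecting that the components target two different architectures, splitting the graph into subgraphs per architecture, and bridging the cut edges with additional pins, might be sketched as below. The architecture labels and the grouping logic are assumed for illustration:

```python
from collections import defaultdict

# Each node is (component_name, architecture); edges follow the workflow order.
workflow = [("ingest", "x86"), ("parse", "x86"), ("score", "gpu"), ("emit", "gpu")]
edges = [("ingest", "parse"), ("parse", "score"), ("score", "emit")]

# Partition the graph into one subgraph per detected architecture.
subgraphs = defaultdict(list)
arch_of = {}
for name, arch in workflow:
    subgraphs[arch].append(name)
    arch_of[name] = arch

# Edges crossing subgraphs become the "additional pins" bridging the two
# running process instances, which may live on different host systems.
bridge_pins = [(a, b) for a, b in edges if arch_of[a] != arch_of[b]]

print(dict(subgraphs))  # {'x86': ['ingest', 'parse'], 'gpu': ['score', 'emit']}
print(bridge_pins)      # [('parse', 'score')]
```

Because only the cut edges are bridged, the combined subgraphs preserve the original workflow, as the claim requires.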

  • 5 Assignments