Multi-Cloud Framework for Microservice-Based Applications
A multi-cloud framework is provided for microservice-based applications. An exemplary method comprises maintaining a structural state of an application comprising a plurality of microservices hosted in a plurality of distinct cloud environments. The structural state of the application is maintained over time and comprises, for each microservice, an indication of the cloud environment that hosts the respective microservice. A source code is maintained for each of the plurality of microservices of the application and deployment instructions are maintained for each of the plurality of distinct cloud environments. The plurality of microservices of the application are deployed using the structural state of the application, the source code for each of the plurality of microservices and the deployment instructions for each of the plurality of distinct cloud environments.
- 1. A method, comprising:
maintaining, using at least one processing device, a structural state of an application comprising a plurality of microservices hosted in a plurality of distinct cloud environments, wherein the structural state of the application is maintained over time and comprises, for each microservice, an indication of the cloud environment that hosts the respective microservice; obtaining, using the at least one processing device, a source code for each of the plurality of microservices of the application and deployment instructions for each of the plurality of distinct cloud environments; and deploying, using the at least one processing device, the plurality of microservices of the application using the structural state of the application, the source code for each of the plurality of microservices and the deployment instructions for each of the plurality of distinct cloud environments.
- Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
- 12. A system, comprising:
a memory; and at least one processing device, coupled to the memory, operative to implement the following steps: maintaining a structural state of an application comprising a plurality of microservices hosted in a plurality of distinct cloud environments, wherein the structural state of the application is maintained over time and comprises, for each microservice, an indication of the cloud environment that hosts the respective microservice; obtaining a source code for each of the plurality of microservices of the application and deployment instructions for each of the plurality of distinct cloud environments; and deploying the plurality of microservices of the application using the structural state of the application, the source code for each of the plurality of microservices and the deployment instructions for each of the plurality of distinct cloud environments.
- Dependent Claims: 13, 14, 15, 16
- 17. A computer program product, comprising a tangible machine-readable storage medium having encoded therein executable code of one or more software programs, wherein the one or more software programs when executed by at least one processing device perform the following steps:
- Dependent Claims: 18, 19, 20
The field relates generally to the deployment of software applications.
Software applications are increasingly deployed as a collection of microservices. In addition, a number of software providers are increasingly using multiple cloud environments to host their applications and/or data. A need remains for improved techniques for deploying microservice-based applications across multiple cloud environments.
In one embodiment, a method comprises maintaining a structural state of an application comprising a plurality of microservices hosted in a plurality of distinct cloud environments, wherein the structural state of the application is maintained over time and comprises, for each microservice, an indication of the cloud environment that hosts the respective microservice; obtaining a source code for each of the plurality of microservices of the application and deployment instructions for each of the plurality of distinct cloud environments; and deploying the plurality of microservices of the application using the structural state of the application, the source code for each of the plurality of microservices and the deployment instructions for each of the plurality of distinct cloud environments.
In some embodiments, a resource usage of one or more of the microservices of the application is monitored based on one or more user-defined metrics. Queries are optionally processed with respect to the resource usage.
In at least one embodiment, one or more of the microservices are moved to a different cloud environment using one or more of (i) a manual intervention of a user, and (ii) an automated optimization process based on resource usage data and collected predefined optimization parameters.
Other illustrative embodiments include, without limitation, apparatus, systems, methods and computer program products comprising processor-readable storage media.
Illustrative embodiments of the present disclosure will be described herein with reference to exemplary communication, storage and processing devices. It is to be appreciated, however, that the disclosure is not restricted to use with the particular illustrative configurations shown. One or more embodiments of the disclosure provide a multi-cloud framework for microservice-based applications.
In one or more embodiments, a multi-cloud framework is provided for stateless microservice-based applications that can be implemented across multiple cloud environments. A user creates an application as a series of code fragments corresponding to individual microservices, and each microservice can be implemented using different technologies, such as Container as a Service (CaaS) and Function as a Service (FaaS). The application microservices can thus reside in different cloud environments (e.g., public clouds and/or private clouds). The disclosed framework is responsible for deploying the application and keeping track of the structural state of the application. Generally, the structural state of an application identifies the clouds that run particular versions of the microservices at any given point in time.
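The structural state described above can be sketched as a simple mapping. The field names below (cloud, service type, version) follow the description, but the concrete representation is an illustrative assumption, not the framework's actual format:

```python
# Hypothetical sketch of a structural-state record: for each microservice,
# the hosting cloud, the microservice type (e.g., CaaS or FaaS) and the
# deployed version at a given point in time.

def structural_state(deployments):
    """Build a structural state: microservice name -> (cloud, type, version)."""
    return {name: {"cloud": cloud, "service_type": stype, "version": version}
            for name, cloud, stype, version in deployments}

state = structural_state([
    ("F1", "CloudA", "CaaS", "1.2.0"),
    ("F2", "CloudA", "FaaS", "2.0.1"),
    ("F3", "CloudB", "CaaS", "1.0.0"),
])
```

Querying such a mapping answers, at any moment, which cloud runs which version of each microservice.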
In one or more embodiments, the disclosed multi-cloud framework performs the following tasks, at an application level:
application billing: report how much each application and each application microservice costs at defined intervals of time;
application monitoring: monitor the resource usage of each application and application microservice according to metrics defined by end users, storing resource usage information in, for example, a monitoring repository;
application migration: moving portions of an application among different clouds; and
application optimization: by providing the microservices monitoring information to an application scheduler/optimizer, the application scheduler/optimizer can decide to move parts of an application among clouds in order to substantially optimize the application according to user defined optimization parameters (e.g., cost).
In at least one embodiment, the disclosed multi-cloud framework allows enterprises to build software with the following characteristics:
a software architecture where an application is built comprising multiple microservices;
application microservices can run on different cloud technologies, such as CaaS or FaaS, and the microservices can be hosted in multiple clouds, both public clouds as well as private, on-premises clouds;
users can query at a given moment the list of microservices that comprise an application, as well as where the microservices are running and which microservice version is deployed at each cloud;
users will be able to define metrics in order to monitor the resource usage of application microservices and query for resource usage; and
users will be able to migrate application microservices among clouds, either manually or automatically by use of an application optimizer/scheduler.
In some embodiments, the disclosed multi-cloud framework performs application management tasks, such as registering or deregistering clouds that will host applications, uploading and removing applications to/from clouds, and starting and stopping microservices.
In one or more embodiments, the multi-cloud framework administration tool 110 keeps the application structural state 120 up-to-date, as new microservices are created or deleted on different cloud environments 130.
The disclosed multi-cloud framework allows for the use of multiple microservice types, such as CaaS and FaaS for the implementation of microservices. In this manner, a user can initially decide to execute one or more microservices in a cloud environment 130 using a CaaS microservice type and then decide to migrate the one or more microservices to another cloud environment 130 using a FaaS microservice type, as discussed further below.
It is noted that the disclosed multi-cloud framework is optionally extensible and allows for the registering of microservice types other than CaaS or FaaS, as would be apparent to a person of ordinary skill in the art.
In various embodiments, the local application repository 140 could be any kind of structured data repository, ranging from a folder structure in the operating system file system to a full-fledged commercial Database Management System, depending on organizational concerns such as Information Technology infrastructure norms or security policies. One important consideration is the separation of the source code 150 from the installation/configuration files. For instance, imagine that Application X comprises three microservices, F1, F2 and F3, and that there are three possible places for the microservices to be installed:
Cloud A as CaaS;
Cloud A as FaaS; and
Cloud B as CaaS.
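The separation described above can be illustrated as follows. The repository paths and the helper function are hypothetical; the point is one source tree per microservice and one set of deployment instructions per (cloud, service type) target:

```python
# Illustrative layout (all paths hypothetical): source code is kept separate
# from per-target installation/configuration files, so the same code can be
# deployed to any registered (cloud, service type) target.

source_code = {  # one entry per microservice of Application X
    "F1": "repo/X/F1/src",
    "F2": "repo/X/F2/src",
    "F3": "repo/X/F3/src",
}

deployment_instructions = {  # one entry per possible deployment target
    ("CloudA", "CaaS"): "repo/X/deploy/cloudA_caas.yaml",
    ("CloudA", "FaaS"): "repo/X/deploy/cloudA_faas.yaml",
    ("CloudB", "CaaS"): "repo/X/deploy/cloudB_caas.yaml",
}

def deployment_plan(service, cloud, service_type):
    """Pair a microservice's source tree with the instructions for a target."""
    return source_code[service], deployment_instructions[(cloud, service_type)]
```

Because the two maps are independent, F1 can be paired with any of the three targets without touching its source tree.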
Consider the difference between deploying an application and starting or stopping an application. Deploying an application merely uploads the application files to a storage area in the cloud, so that the cloud can later instantiate (e.g., run) the service.
Now consider a specific cloud that offers both capabilities, CaaS and FaaS, and that has the source code files for a service already uploaded. If both deployment scripts are also uploaded, this service can be instantiated either as FaaS or as CaaS.
In some embodiments, a cloud 130 can be any private cloud (e.g., Pivotal Cloud Foundry) or public cloud (e.g., Microsoft Azure or Google Cloud Platform (GCP)). In the case of public clouds, for example, different cloud regions (e.g., geographic locations) can be seen as different clouds. For example, different cloud regions may comprise the following:
Before creating an application, the multi-cloud framework administration tool 110 provides methods that allow users to register clouds in the multi-cloud framework. The registering process obtains credentials (e.g., access tokens) and endpoints (e.g., HTTP (Hypertext Transfer Protocol) URLs (uniform resource locators)) so the disclosed multi-cloud framework can invoke cloud services via their native commands. The disclosed multi-cloud framework will then maintain a table of registered clouds with their credentials and endpoints.
Moreover, for each type of Cloud that takes part in the multi-cloud system, a piece of the disclosed multi-cloud framework is developed specifically to interact with the cloud's native application programming interface (API), as discussed below.
Along with the orchestrator object 320, there are cloud-specific objects 330 for each cloud that the system supports. Each cloud object 330 implements a common set of commands defined in a common interface. For example, all cloud objects should in principle expose the same API. As a consequence, the orchestrator object 320 may issue the same command to all clouds 130 supported by the multi-cloud framework, and each cloud-specific component will in turn execute the command by using the native API provided by the cloud provider. For instance, assume that clouds 130-1 and 130-2 are both Microsoft Azure clouds, and cloud 130-3 is a Google cloud. In this case, cloud objects 330-1 and 330-2 could be components responsible for executing commands in clouds 130-1 and 130-2, respectively, using the Azure REST API, while cloud object 330-3 could be the component responsible for executing commands in cloud 130-3, using the Google Cloud Platform REST API.
For example, suppose the user 310 issues the following command:
in order to stop all microservices that are part of Application someApplication from running. When the orchestrator object 320 executes this command, the orchestrator object 320 retrieves a list of all clouds 130 that have services that are part of someApplication, and the orchestrator object 320 will issue a command to each corresponding cloud object 330 to stop the microservices that they host and that are part of the Application, with the following command:
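This fan-out pattern can be sketched as follows. The class and method names are illustrative stand-ins for the common interface and native-API calls described above, not the framework's actual API:

```python
from abc import ABC, abstractmethod

# Sketch of the common cloud interface: every cloud object exposes the same
# operations, and each implementation maps them to its provider's native API.

class CloudObject(ABC):
    @abstractmethod
    def stop_service(self, app_name, service_name):
        ...

class AzureCloud(CloudObject):
    def stop_service(self, app_name, service_name):
        # a real implementation would call the Azure REST API here
        return f"azure:stopped {app_name}/{service_name}"

class GcpCloud(CloudObject):
    def stop_service(self, app_name, service_name):
        # a real implementation would call the Google Cloud Platform REST API here
        return f"gcp:stopped {app_name}/{service_name}"

def stop_application(app_name, structural_state, clouds):
    """Orchestrator fan-out: stop every microservice of `app_name`
    on whichever cloud hosts it, via the common interface."""
    results = []
    for service, cloud_name in structural_state[app_name].items():
        results.append(clouds[cloud_name].stop_service(app_name, service))
    return results

clouds = {"cloud-1": AzureCloud(), "cloud-3": GcpCloud()}
state = {"someApplication": {"F1": "cloud-1", "F2": "cloud-3"}}
results = stop_application("someApplication", state, clouds)
```

Because both cloud objects satisfy the same interface, the orchestrator logic never branches on the provider.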
Monitoring and Optimizing
Each cloud 130 should allow the usage of its resources to be monitored and, in fact, public clouds typically do so. The disclosed multi-cloud framework leverages these capabilities and lets users decide which resources they want to monitor, if any, either for each microservice or for the whole application. The multi-cloud framework then asks the clouds to send the desired monitoring information at specified intervals. With the collected information, the optimizer/scheduler can decide if there is a better way of allocating microservices among the clouds.
In some embodiments, the monitoring and optimization processes rely on a MonitorRepository, as discussed further below.
The disclosed multi-cloud framework optionally allows for the definition of metrics that can be monitored over time. A metric comprises:
a name—a name that will be used throughout the multi-cloud framework, such as ‘CPU’, ‘Memory’, or ‘OverallCost’;
a unit—a measure unit, such as GHz, GB, or US$; and
a timestamp range—a time interval, such as ‘hour’, or ‘day’.
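A minimal sketch of this metric definition, assuming a simple value-object representation (the class shape is an assumption; the field names follow the text):

```python
from dataclasses import dataclass

# One record per user-defined metric, as described above.
@dataclass(frozen=True)
class Metric:
    name: str             # e.g. 'CPU', 'Memory', 'OverallCost'
    unit: str             # e.g. 'GHz', 'GB', 'US$'
    timestamp_range: str  # e.g. 'hour', 'day'

cpu = Metric(name="CPU", unit="GHz", timestamp_range="hour")
```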
This metric definition allows for two important steps optionally performed by the framework administrator when setting up the whole framework monitoring process:
to specify a list of metrics that a cloud 130 can monitor for each service type (e.g., CaaS, FaaS or another custom microservice type); and
to define a list of metrics that needs to be collected for each Application microservice at a given time interval.
Once these pieces of information are configured in the multi-cloud framework, a user 310 can issue a command so the application can start to be monitored. In this case, the user 310 would issue a command to the orchestrator object 320 such as:
and the orchestrator object 320, in turn, issues a command to each cloud 130 that hosts microservices that belong to someApplication:
StartMonitoringServiceAsync(app_name, service_name, List<Metric>).
At this stage, each cloud 130 can start to send metric values to the MonitorRepository, as discussed further below. In one or more embodiments, each metric value (measure) comprises:
a timestamp—timestamp of the instant the measure was taken;
a name—a name that will be used throughout the framework, such as ‘CPU’, ‘Memory’, or ‘OverallCost’;
a unit—a measure unit, such as GHz, GB, or US$; and
a value—the value that was gauged by the cloud 130.
In a nutshell, the orchestrator object 320 instructs the clouds 130 about which metrics they should work with, and the clouds 130 send back the measures related to those metrics.
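This exchange can be sketched as follows. The in-memory list standing in for the MonitorRepository and the helper names are illustrative assumptions:

```python
from dataclasses import dataclass

# One record per measure sent by a cloud, with the four fields listed above.
@dataclass
class Measure:
    timestamp: float  # instant the measure was taken
    name: str         # metric name, e.g. 'CPU'
    unit: str         # e.g. 'GHz'
    value: float      # value gauged by the cloud

monitor_repository = []  # toy stand-in for the MonitorRepository

def record_measure(measure):
    """Clouds push measures; the repository accumulates them."""
    monitor_repository.append(measure)

def average(name):
    """Example query over accumulated measures for one metric."""
    vals = [m.value for m in monitor_repository if m.name == name]
    return sum(vals) / len(vals)

record_measure(Measure(timestamp=1700000000.0, name="CPU", unit="GHz", value=2.4))
record_measure(Measure(timestamp=1700000060.0, name="CPU", unit="GHz", value=2.9))
```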
The aim of the optimization is to come up with a move plan, if needed. A move plan is a map depicting, for an application, where each microservice resides and where each microservice should be moved. To accomplish this, the optimizer needs resource usage data and an optimization metric.
An optimization metric can optionally be explicitly set by the user 310 for each application. Along with the metric to be optimized, the user 310 needs to inform the orchestrator object 320 whether this metric is supposed to be substantially maximized or substantially minimized.
The MonitorRepository stores all measures sent by the different clouds 130, and the MonitorRepository organizes the measures in records using, for example, the timestamp, name, unit and value fields described above.
Based on the specified optimization metric, the optimizer uses the data in the MonitorRepository to analyze the resource usage of each application and to produce a move plan. The orchestrator object 320 can use this move plan to actually move microservices among clouds 130. Each move plan record identifies, for example, a microservice together with its current location and its target location.
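Since a concrete record format is not reproduced here, the following shape is purely illustrative, capturing only what the text states: each record names a microservice together with its current and target locations:

```python
# Hypothetical move-plan record: where a microservice resides and where it
# should be moved, including a possible change of service type.

def make_move_record(app, service, src_cloud, src_type, dst_cloud, dst_type):
    return {
        "application": app,
        "service": service,
        "from": {"cloud": src_cloud, "service_type": src_type},
        "to": {"cloud": dst_cloud, "service_type": dst_type},
    }

# e.g., move F2 from CaaS on Cloud A to FaaS on Cloud B
move_plan = [
    make_move_record("someApplication", "F2", "CloudA", "CaaS", "CloudB", "FaaS"),
]
```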
The optimization algorithm used to create the move plan is outside the scope of the present disclosure. Off-the-shelf algorithms can be used, or new algorithms can be created, as would be apparent to a person of ordinary skill in the art. Generally, an optimization algorithm can be plugged into the disclosed multi-cloud framework, as the needed parameters for such algorithms are readily available.
Current software applications, even microservice-based applications designed to run in a cloud 130, do not have a true multi-cloud implementation, in the sense that an application cannot be designed from the beginning to have its microservices spread among clouds while keeping a logical view of the application at each moment. For example, it is not easy to measure how much an application consumes with respect to defined resources, or how much it will cost to run the application for a given time period on a public cloud. For instance, if an application comprises seven microservices running on public cloud A and nine microservices running on an on-premises cloud B, there are solutions in the market that will indicate how much the virtual machines/containers are spending on each cloud 130, but the available solutions will not indicate how much the application as a whole (all of its microservices) is consuming.
Companies in various business segments are increasingly using multiple clouds to host their applications or data. In parallel, modern techniques for software construction are becoming increasingly more popular:
at the design level, microservices-oriented architectures address the problem of DevOps agility: the ability to quickly evolve complex software by breaking the software into small pieces that can be deployed and versioned separately while the software continues to run with no interruptions; and
at the implementation level, two cloud paradigms are well suited to realize the concept of microservices: CaaS and FaaS.
At the present time, there is no multi-cloud software framework that allows for the use of these two paradigms simultaneously and the exploitation of their advantages. Existing solutions are typically based on virtual machines in order to utilize predefined metrics (e.g., CPU and/or memory).
There are multi-cloud software solutions in the market that will allow for offline migration of virtual machines/containers, but with no distinction of what is inside a virtual machine or container. Virtual machines and containers are treated as infrastructure resources, and they will not be able to respond easily to tasks such as migrating an entire application or parts of an application from one cloud to another, while the logical view of the application is maintained. Moreover, optimization at the application level (e.g., the ability to recommend the best clouds for each application microservice to be hosted) simply does not exist.
Multi-Cloud Framework Using Application Structural State
One or more embodiments of the disclosure provide a multi-cloud framework that allows for the creation of multi-cloud microservice-based applications.
The exemplary multi-cloud framework process 400 initially maintains, during step 410, the structural state 120 of an application X comprising microservices F hosted in distinct cloud environments 130.
Thereafter, during step 420, the exemplary multi-cloud framework process 400 obtains the source code 150 for the microservices F of the application X and the deployment instructions 160 for each distinct cloud environment 130.
Finally, the microservices F of the application X are deployed during step 430 using the structural state 120 of application, the source code 150 for the microservices F, and the deployment instructions 160 for each distinct cloud environment 130.
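The three steps above can be sketched end-to-end as follows; all names and the action-tuple format are illustrative assumptions:

```python
# Toy sketch of the process: the structural state (step 410) drives which
# code (step 420) and which deployment instructions (step 420) are used to
# deploy each microservice (step 430).

def deploy_application(structural_state, source_code, deployment_instructions):
    """Return one deployment action per microservice, driven by the
    structural state of the application."""
    actions = []
    for service, cloud in structural_state.items():           # step 410 output
        code = source_code[service]                           # step 420
        instructions = deployment_instructions[cloud]         # step 420
        actions.append((service, cloud, code, instructions))  # step 430
    return actions

state = {"F1": "CloudA", "F2": "CloudB"}                        # structural state 120
code = {"F1": "repo/F1", "F2": "repo/F2"}                       # source code 150
deploy = {"CloudA": "deployA.yaml", "CloudB": "deployB.yaml"}   # instructions 160
actions = deploy_application(state, code, deploy)
```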
In one or more embodiments, logical components are (loosely) represented according to the Object-Oriented Programming paradigm. The notation used herein is a means of logically grouping components and does not suggest or reinforce any kind of implementation, as would be apparent to a person of ordinary skill in the art. In this paradigm, logical components are called classes—they act as a blueprint for the creation of objects and they describe which operations objects can perform, as well as what information an object stores internally (its variables). An object is what exists in memory and interacts with others to accomplish the intentions of the program. Components are referred to herein as classes to reinforce the conceptual level context, and not the implementation level.
Cloud Type and Service Type
A ServiceType defines how a service is being implemented, such as CaaS or FaaS.
As noted above, the disclosed multi-cloud framework is extensible. Thus, both cloud type and service type classes allow users to add new cloud providers (either public or on-premises) as well as service types that clouds offer.
Service and Version
a Public class 620—accepts requests and sends responses via common networking architectures, such as REST over HTTP/HTTPS;
a Timed class 630—executed on a defined schedule using CRON tables (e.g., a time-based job scheduler); and
a StorageEvent class 640—executed when notified about a defined Condition, such as a new file that is uploaded to a known storage area.
Each service can store multiple versions, using a Version class 610; therefore each version has a local path where the files related to the version are stored. The disclosed multi-cloud framework allows for an application to upgrade or downgrade a service version.
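A minimal sketch of the Service/Version relationship described above, under the assumption that a version is identified by a label and the local path where its files are stored:

```python
# Illustrative Service with multiple stored versions; upgrading or
# downgrading simply selects a different stored version.

class Service:
    def __init__(self, name):
        self.name = name
        self.versions = {}   # version label -> local path of its files
        self.current = None  # currently selected version label

    def add_version(self, label, path):
        self.versions[label] = path

    def set_version(self, label):
        """Upgrade or downgrade to any stored version."""
        if label not in self.versions:
            raise ValueError(f"unknown version {label}")
        self.current = label

svc = Service("F1")
svc.add_version("1.0", "repo/F1/1.0")
svc.add_version("1.1", "repo/F1/1.1")
svc.set_version("1.1")   # upgrade
svc.set_version("1.0")   # downgrade
```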
The orchestrator component 710 is the main coordinator of the multi-cloud application environment 100. In one or more embodiments, the orchestrator component 710 is responsible for:
orchestration—keeping the coherence of the application among clouds 130, allowing for deployment, removal or relocation of microservices;
resource monitoring—the orchestrator component 710 communicates with the monitor object 740, which in turn communicates with monitor agents 750-1 through 750-3 for different clouds, so as to collect user-defined metric values; and
application scheduling—the orchestrator component 710 communicates with the application scheduler 730 so the application scheduler 730 can use data collected by the monitor 740 to calculate and suggest a move plan back to the orchestrator component 710.
One orchestrator component 710 can reside on a local desktop and will allow the cloud administrator to manage the multi-cloud application environment 100.
In one or more embodiments, the orchestrator component 710 stores a dictionary containing the structural state 120 of each application.
Each time a user calls an operation that is supposed to be performed on an application, the orchestrator component 710 uses this dictionary to know which clouds host which microservices of that application, and in turn the orchestrator component 710 calls the cloud-specific objects to carry on operations specific to the services that each cloud hosts.
The orchestrator component 710 also keeps the URLs of the monitor 740 and the application scheduler 730, so the orchestrator component 710 can ask these two objects to execute operations related to Monitoring and Application Scheduling. The monitor 740 and the application scheduler 730 reside in principle in the same device as the orchestrator component 710, but they can also reside on any cloud, as an alternative implementation, as would be apparent to a person of ordinary skill in the art.
Each cloud 130 can be classified according to a CloudType and, for each cloud 130 that takes part in the multi-cloud application environment 100, a corresponding cloud object 720 is provided.
In one or more embodiments, there are different implementations of cloud objects 720, one for each supported CloudType. The various cloud objects 720 implement substantially the same list of operations in some embodiments (e.g., the same API that the orchestrator uses to communicate with them).
In a similar manner to the cloud objects 720, each cloud 130 (e.g., either a public or an on-premises cloud) should have a monitor agent object 750-1 through 750-3 running, either on the respective cloud 130 or in the same device as the monitor object 740 (both implementations are possible). The monitor agent object 750 is responsible for monitoring user-defined metrics related to microservices that are allocated on one specific cloud and for sending the metrics data to a user-defined repository, which can optionally reside on the same cloud 130.
While different cloud objects 720 exist for different CloudTypes, different monitor agents 750 also exist for different CloudTypes, because they use the natively provided APIs to carry out their operations. In one or more embodiments, the different monitor agents 750 implement substantially the same API.
The monitor object 740 communicates with the different monitor agents 750 in order to instruct them to start or stop monitoring microservices. The monitor object 740 receives monitoring reports from each monitor agent 750 responsible for monitoring clouds 130 and aggregates them into reports that are saved to a repository. This repository with aggregated data can be used to send monitoring reports to the orchestrator component 710, or the repository can be used by the application scheduler 730 to create move plans.
The monitor 740 keeps information about monitor agents 750, specifically which microservices are being monitored by which monitor agents 750 in which cloud 130 and which metrics are being monitored for each microservice.
The application scheduler 730 uses the data accumulated in the monitor repository 760 used by the monitor 740 to analyze the accumulated data and create a move plan. The application scheduler 730 also allows users to create clots. A clot is a list of microservices that cannot be moved separately: either they are moved together or they do not take part in the move plan.
In some embodiments, the application scheduler 730 is a single object which optionally lives on the same site as the monitor repository 760.
While the orchestrator component 710, the application scheduler 730 and the monitor 740 are separate components in the exemplary logical architecture 700, they can be combined into fewer components in other embodiments, as would be apparent to a person of ordinary skill in the art.
Flow of Operations
A number of representative examples are provided of how the framework components interact via their APIs.
The following examples assume the following prerequisite steps are already done in one or more exemplary embodiments:
1. a local repository 140 with the application code and installation scripts is readily available, as described above;
2. an orchestrator component 710 is running and accessible to end-users;
3. a cloud object 720 responsible for each cloud 130 is running in the multi-cloud application environment 100;
4. the clouds 130 that are part of the multi-cloud application environment 100 have been registered via operation Orchestrator.RegisterCloud( ), so the orchestrator component 710 can get an access token to send commands to them (the Orchestrator may also be referred to herein as Horizon);
5. the Application to be uploaded was registered via operation Orchestrator.RegisterApplication( ).
From Orchestrator Component 710 to Cloud
In this example, the process of uploading an application is outlined. A user calls the operation Orchestrator.UploadApplication( ):
1. the orchestrator component 710 gets a list of clouds this application will be hosted in, as well as which ServiceType to use for each cloud.
2. for each cloud in this list, the orchestrator component 710 calls Cloud.UploadApplication( ) with the application name and the path where the files to be uploaded reside.
Ultimately, the command Cloud.UploadApplication( ) issued by the cloud object 720 for each cloud will call the native API provided by the respective cloud 130 in order to upload the application files to its local storage.
From Orchestrator Component 710 to Monitor 740 to Monitor Agent 750
In this example, the steps needed to start the monitoring process of an application are outlined.
The following prerequisite steps are assumed to be already done in one or more exemplary embodiments:
1. a monitor repository 760 (e.g., a data store to store data sent by the monitor 740) should be running;
2. a monitor object 740 is running anywhere in the multi-cloud application environment 100 and connected to the monitor repository 760;
3. a monitor agent object 750 is running for each cloud 130 where monitoring should happen, with its private repository; and
4. the monitor 740 should already have registered each monitor agent 750, so the monitor 740 can access their operations.
The process follows:
1. user calls operation Orchestrator.RegisterMonitor( ) to access the operations of monitor 740;
2. user calls Orchestrator.ConfigureApplicationMonitoring( ) to specify, for each microservice, which metrics the user wants to monitor;
3. the orchestrator component 710 retrieves its stored list of clouds 130 where the application resides, as well as the list of microservices per cloud 130 for the application; and
4. For each cloud and its services in the list, the following steps are performed:
- a. the orchestrator component 710 calls operation Monitor.ConfigureCloudServiceMonitoring( ), passing along the cloud name and the microservices to be monitored, along with the metrics to be monitored on them;
- b. the monitor 740 retrieves the monitor agent 750 associated with this cloud; and
- c. the monitor 740 calls MonitorAgent.StartMonitoringServiceAsync( ), providing the microservices to be monitored as well as the specified metrics for each.
Ultimately, it is the monitor agent 750 that will call the native API of its respective cloud 130 to start monitoring the services on its cloud 130. The monitored data is then stored in its local repository.
Going further, if a user wants to get a monitoring report for an application, given a start time and an end time, the user will call the operation Orchestrator.GetApplicationMonitoringReport( ). The orchestrator component 710 will know which clouds to address, so the orchestrator component 710 will be able to issue the operation Monitor.GetCloudMonitoringReport( ) for each one. The monitor object 740 knows which monitor agent 750 is responsible for each cloud 130, and then calls operation MonitorAgent.GetMonitoringReport( ) on each of them. Each monitor agent 750 will look up its local repository and return to the monitor 740 the set of measures for that time period. The monitor 740 will aggregate all results in its monitor repository 760 and return them to the orchestrator component 710.
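The report chain described above can be sketched as follows; the classes and measure tuples are illustrative stand-ins for the monitor agents' local repositories and the monitor-side aggregation:

```python
# Toy sketch of the report chain: each monitor agent filters its local
# repository by time window; the monitor aggregates per-cloud results.

class MonitorAgent:
    def __init__(self, cloud, measures):
        self.cloud = cloud
        self._measures = measures  # local repository: (timestamp, metric, value)

    def get_monitoring_report(self, start, end):
        """Return the measures recorded within [start, end]."""
        return [m for m in self._measures if start <= m[0] <= end]

def get_application_monitoring_report(agents, start, end):
    """Monitor-side aggregation: one entry per cloud (per agent)."""
    return {a.cloud: a.get_monitoring_report(start, end) for a in agents}

agents = [
    MonitorAgent("CloudA", [(10, "CPU", 2.0), (50, "CPU", 3.0)]),
    MonitorAgent("CloudB", [(20, "CPU", 1.5)]),
]
report = get_application_monitoring_report(agents, 0, 30)
```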
From Orchestrator Component 710 to Application Scheduler 730
In this example, the steps needed to start the application scheduler 730 for a specific application are outlined.
The following prerequisite steps are assumed to be already done in one or more exemplary embodiments:
1. an application scheduler object 730 is running anywhere in the multi-cloud application environment 100 and connected to the monitor repository 760, so the former can query stored data on the latter;
2. the application scheduler object 730 should be registered with the orchestrator component 710 so the orchestrator component 710 can call its operations.
The exemplary process follows:
A user calls the following operation:
in order to inform:
the application the user wants to schedule for optimization;
the metric that will be used for optimization;
the optimization direction (e.g., minimization or maximization); and
the time interval to be used by the optimizer (e.g., 10 min, 30 min, etc.) to generate move plans.
Then, the user can call the following operation:
in order to inform the following:
whether the generated move plan will be automatically executed or just returned as a report to the orchestrator component 710.
From then on, the application scheduler 730 will start to query the monitor repository 760 at specific time intervals to retrieve relevant measures and use them to feed the optimization algorithm and generate the move plan.
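Taken together, the scheduling setup and the periodic behavior described above might look like the following sketch. The four configuration parameters and the operation name SetOptimizationMetric come from the text; all other names, including the move-plan policy setter and the optimizer and repository interfaces, are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of the application scheduler 730. Field names and the
# optimizer/repository interfaces are illustrative assumptions.

@dataclass
class OptimizationConfig:
    application: str        # the application to schedule for optimization
    metric: str             # the metric used for optimization
    direction: str          # "minimize" or "maximize"
    interval_minutes: int   # time interval used to generate move plans

class ApplicationScheduler:
    def __init__(self, repository, optimizer):
        self.repository = repository    # monitor repository 760, queried per interval
        self.optimizer = optimizer      # produces a move plan from measures
        self.config = None
        self.auto_execute = False
        self.reports = []

    def set_optimization_metric(self, config):
        self.config = config

    def set_move_plan_policy(self, auto_execute):
        # Whether a generated move plan is executed or just returned as a report.
        self.auto_execute = auto_execute

    def tick(self, executor=None):
        """One scheduling interval: query measures, feed the optimizer,
        and either execute the move plan or record it as a report."""
        measures = self.repository.query(self.config.application, self.config.metric)
        move_plan = self.optimizer(measures, self.config.direction)
        if self.auto_execute and executor is not None:
            executor(move_plan)
        else:
            self.reports.append(move_plan)
        return move_plan
```

In production the `tick( )` method would be driven by a timer at the configured interval; it is exposed directly here only to keep the sketch testable.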
It is important to note that, in the description above, the orchestrator component 710 can directly call ApplicationScheduler operations (e.g., Orchestrator.ApplicationScheduler . . . ). This logical representation is different from the scheme Orchestrator>Monitor>MonitorAgent, in that there is no inherent task needed to be done by the orchestrator component 710 before it calls the ApplicationScheduler operations. However, this relationship could have been represented otherwise, for example:
Orchestrator.SetOptimizationMetric( ) calls
ApplicationScheduler.SetOptimizationMetric( );
and so on.
Alternative Logical Architecture
As noted above, the exemplary logical architecture 700 of
In some embodiments, the disclosed multi-cloud framework for microservice-based applications enables an application comprising multiple microservices to be deployed to a plurality of different clouds. Among other benefits, the disclosed multi-cloud framework for microservice-based applications can provide the structural state of an application at a given time (e.g., for each application, identify the microservices running on each cloud, and/or which version of each microservice is currently running). In addition, one or more microservices of an application that are running in one cloud can optionally be moved to another cloud (e.g., to change the structural state 120 of the application in an automatically orchestrated way) and/or the microservice implementation can be changed from one cloud to another cloud (e.g., CaaS to FaaS and vice-versa, as long as the cloud providers provide such services).
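The structural state described above might be represented by a schema such as the following sketch. Only the concepts (microservice-to-cloud mapping, running version, and implementation type such as CaaS or FaaS) come from the text; the field names are assumptions:

```python
from dataclasses import dataclass, field

# Illustrative sketch of an application's structural state 120: which
# microservices run on which cloud, at which version, and as which
# implementation type.

@dataclass
class MicroserviceState:
    name: str
    cloud: str              # cloud environment hosting this microservice
    version: str            # version currently running
    impl: str = "CaaS"      # implementation type, e.g. "CaaS" or "FaaS"

@dataclass
class StructuralState:
    application: str
    microservices: dict = field(default_factory=dict)  # name -> MicroserviceState

    def move(self, name, target_cloud, impl=None):
        """Record a microservice moving to another cloud, optionally also
        changing its implementation (e.g., CaaS to FaaS)."""
        ms = self.microservices[name]
        ms.cloud = target_cloud
        if impl is not None:
            ms.impl = impl
```

Maintaining such a record over time is what lets the framework answer, for any application, which microservices run on each cloud and which version of each is currently deployed.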
One or more embodiments of the disclosure provide improved methods, apparatus and computer program products for deploying an application comprising multiple microservices to a plurality of different clouds. The foregoing applications and associated embodiments should be considered as illustrative only, and numerous other embodiments can be configured using the techniques disclosed herein, in a wide variety of different applications.
It should also be understood that the disclosed multi-cloud framework for microservice-based applications, as described herein, can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer. As mentioned previously, a memory or other storage device having such program code embodied therein is an example of what is more generally referred to herein as a “computer program product.”
The disclosed techniques for deploying an application comprising multiple microservices to a plurality of different clouds may be implemented using one or more processing platforms. One or more of the processing modules or other components may therefore each run on a computer, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.”
As noted above, illustrative embodiments disclosed herein can provide a number of significant advantages relative to conventional arrangements. It is to be appreciated that the particular advantages described above and elsewhere herein are associated with particular illustrative embodiments and need not be present in other embodiments. Also, the particular types of information processing system features and functionality as illustrated and described herein are exemplary only, and numerous other arrangements may be used in other embodiments.
In these and other embodiments, compute services can be offered to cloud infrastructure tenants or other system users as a Platform as a Service (PaaS) offering, although numerous alternative arrangements are possible.
Some illustrative embodiments of a processing platform that may be used to implement at least a portion of an information processing system comprise cloud infrastructure including virtual machines implemented using a hypervisor that runs on physical infrastructure. The cloud infrastructure further comprises sets of applications running on respective ones of the virtual machines under the control of the hypervisor. It is also possible to use multiple hypervisors each providing a set of virtual machines using at least one underlying physical machine. Different sets of virtual machines provided by one or more hypervisors may be utilized in configuring multiple instances of various components of the system.
These and other types of cloud infrastructure can be used to provide what is also referred to herein as a multi-tenant environment. One or more system components such as a cloud-based orchestrator component 710, or portions thereof, are illustratively implemented for use by tenants of such a multi-tenant environment.
Cloud infrastructure as disclosed herein can include cloud-based systems such as Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. Virtual machines provided in such systems can be used to implement at least portions of a cloud-based orchestrator platform in illustrative embodiments. The cloud-based systems can include object stores such as Amazon S3, GCP Cloud Storage, and Microsoft Azure Blob Storage.
In some embodiments, the cloud infrastructure additionally or alternatively comprises a plurality of containers implemented using container host devices. For example, a given container of cloud infrastructure illustratively comprises a Docker container or other type of Linux Container (LXC). The containers may run on virtual machines in a multi-tenant environment, although other arrangements are possible. The containers may be utilized to implement a variety of different types of functionality within the storage devices. For example, containers can be used to implement respective processing devices providing compute services of a cloud-based system. Again, containers may be used in combination with other virtualization infrastructure such as virtual machines implemented using a hypervisor.
Illustrative embodiments of processing platforms will now be described in greater detail with reference to
The cloud infrastructure 1400 further comprises sets of applications 1410-1, 1410-2, . . . 1410-L running on respective ones of the VMs/container sets 1402-1, 1402-2, . . . 1402-L under the control of the virtualization infrastructure 1404. The VMs/container sets 1402 may comprise respective VMs, respective sets of one or more containers, or respective sets of one or more containers running in VMs.
In some implementations of the
An example of a hypervisor platform that may be used to implement a hypervisor within the virtualization infrastructure 1404 is VMware® vSphere®, which may have an associated virtual infrastructure management system such as VMware® vCenter™. The underlying physical machines may comprise one or more distributed processing platforms that include one or more storage systems.
In other implementations of the
As is apparent from the above, one or more of the processing modules or other components of the exemplary logical architecture 700 may each run on a computer, server, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.” The cloud infrastructure 1400 shown in
The processing platform 1500 in this embodiment comprises at least a portion of the given system and includes a plurality of processing devices, denoted 1502-1, 1502-2, 1502-3, . . . 1502-K, which communicate with one another over a network 1504. The network 1504 may comprise any type of network, such as a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as WiFi or WiMAX, or various portions or combinations of these and other types of networks.
The processing device 1502-1 in the processing platform 1500 comprises a processor 1510 coupled to a memory 1512. The processor 1510 may comprise a microprocessor, a microcontroller, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. The memory 1512 may be viewed as an example of what is more generally referred to herein as a “processor-readable storage medium” storing executable program code of one or more software programs.
Articles of manufacture comprising such processor-readable storage media are considered illustrative embodiments. A given such article of manufacture may comprise, for example, a storage array, a storage disk or an integrated circuit containing RAM, ROM or other electronic memory, or any of a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Numerous other types of computer program products comprising processor-readable storage media can be used.
Also included in the processing device 1502-1 is network interface circuitry 1514, which is used to interface the processing device with the network 1504 and other system components, and may comprise conventional transceivers.
The other processing devices 1502 of the processing platform 1500 are assumed to be configured in a manner similar to that shown for processing device 1502-1 in the figure.
Again, the particular processing platform 1500 shown in the figure is presented by way of example only, and the given system may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, storage devices or other processing devices.
Multiple elements of an information processing system may be collectively implemented on a common processing platform of the type shown in
For example, other processing platforms used to implement illustrative embodiments can comprise different types of virtualization infrastructure, in place of or in addition to virtualization infrastructure comprising virtual machines. Such virtualization infrastructure illustratively includes container-based virtualization infrastructure configured to provide Docker containers or other types of LXCs.
As another example, portions of a given processing platform in some embodiments can comprise converged infrastructure such as VxRail™, VxRack™, VxBlock™, or Vblock® converged infrastructure commercially available from VCE, the Virtual Computing Environment Company, now the Converged Platform and Solutions Division of Dell EMC.
It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform.
Also, numerous other arrangements of computers, servers, storage devices or other components are possible in the information processing system. Such components can communicate with other elements of the information processing system over any type of network or other communication media.
As indicated previously, components of an information processing system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. For example, at least portions of the functionality shown in one or more of the figures are illustratively implemented in the form of software running on one or more processing devices.
It should again be emphasized that the above-described embodiments are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the disclosed techniques are applicable to a wide variety of other types of information processing systems. Also, the particular configurations of system and device elements and associated processing operations illustratively shown in the drawings can be varied in other embodiments. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the disclosure. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art.