METHOD FOR ALLOWING DISTRIBUTED RUNNING OF AN APPLICATION AND RELATED DEVICE AND INFERENCE ENGINE
First Claim
1. A method for allowing distributed running of an application between a device and a server connected via a network, the method comprising the following steps carried out by the device:
- obtaining a device profile including resource capacity characteristics of said device;
- obtaining an application profile including resource consumption characteristics of said application;
- obtaining device metrics relating to real-time resource usage with respect to said device;
- obtaining offload rules defining conditions under which an application is to be run at least in part on a server and/or on a device, the conditions involving device resource capacity, application resource consumption and device real-time resource usage; and
- making a decision by an inference engine to run said application at least in part on said server and/or on said device, by evaluating the offload rules applied to said device profile, application profile and device metrics.
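The decision flow recited in the claim can be sketched in code. The following is a minimal illustration only, not the patented implementation; every name here (DeviceProfile, AppProfile, DeviceMetrics, OffloadRule, infer_placement) and every threshold is a hypothetical assumption for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DeviceProfile:
    # Resource capacity characteristics of the device
    cpu_cores: int
    ram_mb: int

@dataclass
class AppProfile:
    # Resource consumption characteristics of the application
    cpu_cores_needed: int
    ram_mb_needed: int

@dataclass
class DeviceMetrics:
    # Real-time resource usage with respect to the device
    cpu_load: float      # fraction of capacity in use, 0.0-1.0
    ram_mb_free: int

# An offload rule maps (device profile, app profile, metrics) to a verdict:
# True means "offload this application to the server".
OffloadRule = Callable[[DeviceProfile, AppProfile, DeviceMetrics], bool]

def infer_placement(device: DeviceProfile, app: AppProfile,
                    metrics: DeviceMetrics, rules: List[OffloadRule]) -> str:
    """Return 'server' if any offload rule fires, else 'device'."""
    if any(rule(device, app, metrics) for rule in rules):
        return "server"
    return "device"

# Example rules involving device capacity, application consumption and
# real-time usage, the three condition inputs named by the claim.
rules: List[OffloadRule] = [
    lambda d, a, m: a.ram_mb_needed > m.ram_mb_free,   # not enough free RAM
    lambda d, a, m: a.cpu_cores_needed > d.cpu_cores,  # app exceeds capacity
    lambda d, a, m: m.cpu_load > 0.9,                  # device already saturated
]

decision = infer_placement(
    DeviceProfile(cpu_cores=4, ram_mb=2048),
    AppProfile(cpu_cores_needed=2, ram_mb_needed=512),
    DeviceMetrics(cpu_load=0.95, ram_mb_free=1024),
    rules,
)
# cpu_load 0.95 exceeds the 0.9 threshold, so the sketch decides "server".
```

Note the rules are ordinary predicates here; the claim leaves the rule representation open, and a real system could equally load declarative rules from configuration.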
Abstract
Method for allowing distributed running of an application between a device and a server connected via a network. The method includes the following steps carried out by the device: obtaining a device profile including resource capacity characteristics of the device; obtaining an application profile including resource consumption characteristics of the application; obtaining device metrics relating to real-time resource usage with respect to the device; obtaining offload rules defining conditions under which an application is to be run at least in part on a server and/or on a device, the conditions involving device resource capacity, application resource consumption and device real-time resource usage; and making a decision by an inference engine to run the application at least in part on the server and/or on the device, by evaluating the offload rules applied to the device profile, application profile and device metrics.
Claims
1. (Reproduced above as the First Claim; claims 2-9 depend on claim 1.)
10. A device for allowing distributed running of an application between the device and a server connected via a network, the device comprising:
- a unit configured to obtain a device profile including resource capacity characteristics of said device;
- a unit configured to obtain an application profile including resource consumption characteristics of said application;
- a unit configured to obtain device metrics relating to real-time resource usage with respect to said device;
- a unit configured to obtain offload rules defining conditions under which an application is to be run at least in part on a server and/or on a device, the conditions involving device resource capacity, application resource consumption and device real-time resource usage; and
- an inference engine configured to make a decision to run said application at least in part on said server and/or on said device, by evaluating the offload rules applied to said device profile, application profile and device metrics.
11. An inference engine for use in cooperation with a device arranged for allowing distributed running of an application between the device and a server connected via a network and comprising:
- a unit for obtaining a device profile including resource capacity characteristics of said device;
- a unit for obtaining an application profile including resource consumption characteristics of said application;
- a unit for obtaining device metrics relating to real-time resource usage with respect to said device; and
- a unit for obtaining offload rules defining conditions under which an application is to be run at least in part on a server and/or on a device, the conditions involving device resource capacity, application resource consumption and device real-time resource usage;
wherein the inference engine is arranged to make a decision to run said application at least in part on said server and/or on said device, by evaluating the offload rules applied to said device profile, application profile and device metrics.
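Claim 11 casts the inference engine as a component separable from the device's profile-gathering units. A minimal sketch of that separation, under the assumption (not stated in the claims) that the units hand the engine their outputs as a flat dictionary of facts; the class and key names are illustrative only.

```python
from typing import Any, Callable, Dict, List

class InferenceEngine:
    """Stand-alone engine: it only evaluates offload rules. The device's
    units supply the profiles, metrics and rules (a hypothetical interface)."""

    def __init__(self, rules: List[Callable[[Dict[str, Any]], bool]]):
        self.rules = rules

    def decide(self, facts: Dict[str, Any]) -> str:
        # Evaluate every offload condition against the collected facts;
        # any firing rule sends the application (at least in part) to the server.
        fired = [rule for rule in self.rules if rule(facts)]
        return "server" if fired else "device"

# Facts merged from the device profile, application profile and device metrics.
facts = {"ram_free_mb": 256, "app_ram_mb": 512, "battery_pct": 80}

engine = InferenceEngine(rules=[
    lambda f: f["app_ram_mb"] > f["ram_free_mb"],  # consumption exceeds free RAM
    lambda f: f["battery_pct"] < 15,               # low battery: offload
])
# The RAM rule fires (512 > 256), so decide() returns "server" here.
```

Decoupling the engine from the units this way mirrors the claim's structure: the device could swap in a different rule set, or a different engine, without touching how profiles and metrics are obtained.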
Specification