Dynamic intervention with software applications
First Claim
1. A system for teaching a learning objective comprising:
- a software application comprising an application logic component and that provides a user view and a facilitator view; and
an intervention control arranged to interface directly with the application logic component.
1 Assignment
0 Petitions
Abstract
A system for teaching a learning objective is provided. The system includes a software application comprising an application logic component and that provides a user view and a facilitator view. An intervention control is arranged to interface directly with the application logic component. A method is also disclosed for teaching a learning objective and includes the steps of executing a software application that includes an application logic component and that provides a user view and a facilitator view; and using an intervention control to interface directly with the application logic component. The system and method may be implemented as part of programmed instructions in a computer system and/or a computer readable medium.
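The abstract above can be sketched in code. The following is a minimal illustrative sketch, not the patented implementation: all class and method names are assumptions. It shows the three claimed parts — an application logic component, a software application exposing separate user and facilitator views, and an intervention control that interfaces directly with the logic component rather than going through either view.

```python
class ApplicationLogic:
    """Holds the runtime state that the views render and interventions modify."""
    def __init__(self):
        self.state = {"cues": [], "difficulty": 1}

    def apply(self, command, **params):
        # A tiny command interface standing in for the real application logic.
        if command == "add_cue":
            self.state["cues"].append(params)
        elif command == "set_difficulty":
            self.state["difficulty"] = params["level"]

class SoftwareApplication:
    def __init__(self):
        self.logic = ApplicationLogic()

    def user_view(self):
        # The learner sees the cues directed at them, not the full state.
        return {"cues": self.logic.state["cues"]}

    def facilitator_view(self):
        # The facilitator sees the complete application state.
        return dict(self.logic.state)

class InterventionControl:
    """Interfaces directly with the application logic component."""
    def __init__(self, app):
        self.logic = app.logic  # direct interface, bypassing both views

    def intervene(self, command, **params):
        self.logic.apply(command, **params)

app = SoftwareApplication()
control = InterventionControl(app)
control.intervene("add_cue", kind="text", message="Try the blue door")
```

An intervention issued through the control is immediately visible in both views because it acts on the shared logic component directly.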
-
Citations
125 Claims
-
1. A system for teaching a learning objective comprising:
-
a software application comprising an application logic component and that provides a user view and a facilitator view; and intervention control arranged to interface directly with the application logic component. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63)
-
2. The system of claim 1, further comprising a monitoring system for assessing the performance of a learning objective.
-
3. The system of claim 2, wherein the intervention control comprises an intervention configuration interface and/or an intervention logic component that provides for an automated response to an output from the monitoring system.
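Claims 2 and 3 describe a monitoring system whose output drives an automated intervention response. A hedged sketch of that loop, where the scoring rule, thresholds, and response names are all assumptions for illustration:

```python
def monitor(events, objective="find_exit"):
    """Return a 0..1 achievement score for one learning objective.
    Assumes three successful events mean the objective is met."""
    hits = sum(1 for e in events if e == objective)
    return min(1.0, hits / 3)

def intervention_logic(score):
    """Automated response keyed off the monitoring system's output."""
    if score < 0.34:
        return "show_cue"       # learner appears stuck: surface a hint
    if score < 1.0:
        return "encourage"      # partial progress: light feedback
    return "none"               # objective achieved: no intervention

assert intervention_logic(monitor(["wander", "wander"])) == "show_cue"
```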
-
4. The system of claim 1, being distributed over one or more computing nodes, and comprising a data communications system between at least the software application and the intervention control.
-
5. The system of claim 2, being distributed over one or more computing nodes, and comprising a data communications system between two or more of the software application, the intervention control and the monitoring system.
-
6. The system of claim 1, wherein the software application is distributed over one or more computing nodes and comprises a data communications system permitting communication between at least two of a facilitator, a user and the application logic component.
-
7. The system of claim 4, wherein the distributed environment is a client/server environment.
-
8. The system of claim 4, wherein the distributed environment is a peer-to-peer environment.
-
9. The system of claim 4, wherein the software application can be concurrently used by one or more users and zero or more facilitators.
-
10. The system of claim 1, wherein the software application is at least one of:
- a game, a learning application, a games-based learning application or a training simulation.
-
11. The system of claim 1, wherein the intervention control comprises a cues component for the provision of cues to the user of the software application.
-
12. The system of claim 11, wherein the cues component comprises a cue controller interface arranged to receive input from the intervention control to directly configure and dynamically update the application logic.
-
13. The system of claim 12, wherein the cue controller interface allows the intervention control to generate a cue as an object for use in the software application.
-
14. The system of claim 12, wherein the cues are predefined and can be selected from a list by a facilitator and/or the intervention control.
-
15. The system of claim 12, wherein parameter values of the cues can be configured by the intervention control.
-
16. The system of claim 11, wherein the cues comprise one or more of:
- graphic elements, text elements, audio elements, video elements or haptic elements.
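Claims 13 through 16 have the intervention control generate cues as objects, select them from a predefined list, and configure their parameters. A small sketch under those claims — the catalogue contents and the `Cue` shape are illustrative assumptions:

```python
from dataclasses import dataclass, field

# Predefined cue kinds a facilitator or the intervention control may pick from
# (mirroring the element types of claim 16).
CUE_CATALOGUE = ("graphic", "text", "audio", "video", "haptic")

@dataclass
class Cue:
    kind: str                       # one of CUE_CATALOGUE
    params: dict = field(default_factory=dict)

def generate_cue(kind, **params):
    """The intervention control creates a cue object for the application."""
    if kind not in CUE_CATALOGUE:
        raise ValueError(f"unknown cue kind: {kind}")
    return Cue(kind, params)

hint = generate_cue("text", message="Check the map", lifetime_s=10)
hint.params["lifetime_s"] = 5       # parameter values remain configurable
```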
-
17. The system of claim 11, wherein the cues are context-sensitive.
-
18. The system of claim 11, wherein the cues are user-sensitive.
-
19. The system of claim 17, wherein the arrangement of a cue is computed in relation to a point of reference of an environment of the software application.
-
20. The system of claim 19, wherein the relation between a cue and the point of reference is one of proximity, orientation, relative location, absolute location, presence or absence of other objects and/or users.
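Claims 19 and 20 compute a cue's arrangement relative to a point of reference in the environment, for instance by proximity. A sketch of one such placement rule in 2D — the geometry and the offset distance are assumptions, not taken from the patent:

```python
import math

def place_cue(reference, user_pos, offset=1.0):
    """Place the cue on the line from the user toward the reference point,
    `offset` units short of it, so the cue draws attention to the reference."""
    dx, dy = reference[0] - user_pos[0], reference[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= offset:
        return reference             # user is already close: mark the point itself
    scale = (dist - offset) / dist
    return (user_pos[0] + dx * scale, user_pos[1] + dy * scale)

pos = place_cue(reference=(10.0, 0.0), user_pos=(0.0, 0.0))
```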
-
21. The system of claim 17, wherein the particular form of a cue is computed in relation to an environment of the software application.
-
22. The system of claim 21, wherein the relation between a cue and an environment can be one of size, geometry, colour, texture, shading, transparency, material, contrast, or brightness.
-
23. The system of claim 17, wherein the temporal aspects of a cue are computed in relation to an environment of the software application.
-
24. The system of claim 23, wherein the relation between a cue and an environment can be one of appearance time, lifetime, flash rate, number of repetitions, speed, velocity, or dynamic resizing.
-
25. The system of claim 11, wherein the cues are learning objective-sensitive.
-
26. The system of claim 25, wherein a cue can depend on at least one of past achievements, current objectives, targets, schedule for a user or a team of users.
-
27. The system of claim 11, wherein a cue can be used by and/or configured for a single user and/or a group or team of users.
-
28. The system of claim 1, wherein the software application comprises an application view component for the transmission of display signals.
-
29. The system of claim 28, wherein the display signals comprise a view generated for the facilitator from at least one of:
part or whole of any user's view of the application including the user's view of a two dimensional or three dimensional virtual environment and user interface controls, and a defined camera view.
-
30. The system of claim 29, wherein the display signals comprise multiple views from more than one user or teams of users.
-
31. The system of claim 30, wherein the multiple views are arranged in a tiled or stacked window arrangement on one or more screens.
-
32. The system of claim 28, wherein the display signals comprise one or more of:
- camera position information, camera orientation, camera viewing angle, and virtual environment objects.
-
33. The system of claim 28, wherein the display signals comprise compressed or uncompressed image data.
-
34. The system of claim 1, wherein the software application includes program code for facilitating audio and/or visual communication with one or more users.
-
35. The system of claim 3, wherein the intervention control comprises a control component for allocating the usage of controls within the software application between two or more of the user, the facilitator and automatic control by the intervention logic component.
-
36. The system of claim 35, wherein the control component comprises a control interface arranged to receive input from the facilitator and/or from the intervention logic component to directly configure and dynamically update the application logic component.
-
37. The system of claim 35, wherein the control interface allows a facilitator to define and/or the intervention logic component to generate a control as an object for use in the software application.
-
38. The system of claim 35, wherein the controls are predefined and can be selected from a list by a facilitator and/or the intervention logic component.
-
39. The system of claim 35, wherein parameter values of the controls can be configured by a facilitator and/or the intervention logic component.
-
40. The system of claim 35, wherein the controls comprise a range of user interface controls comprising one or more of menus, single buttons, radio buttons, sliders, 2D (x-y coordinate) input means, 3D (x-y-z coordinate) input means, waypoints input means, text, avatar or virtual character control means for head, body, hands, legs, objects carried by avatar, objects held in hand, object manipulation in the environment, virtual vehicle control means (including direction and speed), and virtual creature control means.
-
41. The system of claim 35, wherein the runtime control over any combination of the controls is dynamically assigned to any combination of:
- users, teams of users, facilitators, intervention logic component.
-
42. The system of claim 41, wherein the application logic component comprises a conflict arbitration logic component that determines the outcome in the case of simultaneous exercise of two or more control actions.
-
43. The system of claim 42, wherein the conflict arbitration logic component comprises one of:
- facilitator priority, user priority, intervention logic priority, or the majority, minority, or minimum percentage of user actions ascertained in a poll of users.
-
44. The system of claim 42, wherein the conflict arbitration logic component considers control actions within a specified time interval to determine the outcome.
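Claims 41 through 44 assign runtime control to users, facilitators, or the intervention logic, and resolve simultaneous control actions within a specified time interval by a conflict arbitration rule. A sketch using facilitator-priority arbitration — the priority ordering and window length are assumptions:

```python
# Assumed priority ordering for conflict arbitration (claim 43 lists
# facilitator priority as one possible rule).
PRIORITY = {"facilitator": 3, "intervention_logic": 2, "user": 1}

def arbitrate(actions, window_s=0.5):
    """actions: list of (timestamp, role, command) tuples. Actions within
    `window_s` of the earliest one conflict; the highest-priority role wins."""
    if not actions:
        return None
    actions = sorted(actions)
    t0 = actions[0][0]
    window = [a for a in actions if a[0] - t0 <= window_s]
    _, role, command = max(window, key=lambda a: PRIORITY[a[1]])
    return role, command

winner = arbitrate([
    (0.00, "user", "turn_left"),
    (0.10, "facilitator", "stop"),
    (0.90, "user", "turn_right"),   # outside the arbitration window
])
```

Other rules of claim 43 (user priority, majority polling) would swap out only the `max(...)` selection step.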
-
45. The system of claim 35, wherein the software application comprises a feedback mechanism where the status of control by the facilitator and/or the intervention logic component is displayed to the user.
-
46. The system of claim 35, wherein the software application comprises a feedback mechanism where the status of control by the user and/or the intervention logic component is displayed to the facilitator.
-
47. The system of claim 2, wherein the monitoring system is arranged to generate an alarm for the facilitator based on the user's performance in the software application.
-
48. The system of claim 47, wherein said performance in the software application is a quantified achievement of one or more learning objectives.
-
49. The system of claim 2, wherein the monitoring system is arranged to generate an alarm for the intervention logic component in order to trigger an automatic intervention response.
-
50. The system of claim 49, wherein the intervention response is a command to do one of:
- generate, select, or configure, and takes effect on one or more of cues, controls, views, and communication actions.
-
51. The system of claim 47, wherein the intervention logic receives information about the degree of achievement of a user's learning objectives, user actions and the state of the software application, and determines alarms based on the information and logged historic information.
-
52. The system of claim 47, wherein an alarm is generated based on monitoring one or more of:
- an activity or inactivity level, the exploration by the user of unknown territory, the action by the user of going around in circles or repeating unsuccessful actions, explicit user requests for intervention, usage of peer support, progress over time and achievement rate, and difficulty level of software application.
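Claim 52's alarm conditions can be sketched as a simple predicate over observable signals. The trigger thresholds below are illustrative assumptions, not values from the patent:

```python
def should_alarm(idle_seconds, recent_actions, help_requested,
                 idle_limit=60, repeat_limit=3):
    # An explicit user request for intervention always alarms.
    if help_requested:
        return True
    # Prolonged inactivity.
    if idle_seconds >= idle_limit:
        return True
    # "Going around in circles": the same action repeated without variation.
    if len(recent_actions) >= repeat_limit and len(set(recent_actions)) == 1:
        return True
    return False

assert should_alarm(5, ["jump", "jump", "jump"], False)
```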
-
53. The system of claim 47, wherein said software application comprises one or more measured objectives formed from a related group of software application measurements;
- and is adapted for the assessment of a learning objective, said learning objective being formed from:
the selection of one or more measured objectives; defined outcome conditions for the measured objectives; and selected outcome activities that each invoke a command in the learning application.
-
54. The system of claim 1, wherein a user of the software application can set a preference for a level of intervention to be provided.
-
55. The system of claim 1, wherein a facilitator of the software application can set a preference for any user for a level of intervention to be provided.
-
56. The system of claim 54, wherein the user preference corresponds to a variation of learning objective thresholds that define when the monitoring system triggers the alarm.
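Claims 54 through 56 let a preferred intervention level vary the learning objective thresholds at which the monitoring system alarms. One possible scaling rule, shown purely as an assumption:

```python
BASE_THRESHOLD = 0.5   # alarm when achievement falls below this (assumed)

def alarm_threshold(intervention_level):
    """intervention_level in 0..1: 0 = minimal help, 1 = maximal help.
    A higher preference raises the threshold, so help arrives sooner."""
    return BASE_THRESHOLD * (0.5 + intervention_level)

def triggers_alarm(achievement, intervention_level):
    return achievement < alarm_threshold(intervention_level)
```

With this rule the same achievement score of 0.4 alarms for a learner who asked for maximal help but not for one who asked for minimal help.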
-
57. The system of claim 1, wherein a facilitator can view and intervene in one or more software applications at the same time.
-
58. The system of claim 1, wherein a user can temporarily or permanently assume the role of a facilitator in a peer support capacity.
-
59. The system of claim 1, further comprising a correlation component for the correlation of a user or a team profile to a specific software application, said correlation component comprising an instruction set for a computer comprising:
-
a system for interrogating a user profile and reading a learning objective as a first input; a system for interrogating the software application and reading a learning objective as a second input; a calculation component for determining a relevance of the first input learning objective to the second input learning objective; and a system for adapting the software application in accordance with the determined relevance and/or updating the user profile in accordance with the determined relevance.
-
-
60. The system of claim 59, wherein said system for adapting the software application comprises instructions for applying a weighting factor to at least one of existing experience data in the user profile related to the learning objective of the software application and performance data related to the subsequent use of the software application.
-
61. The system of claim 59, wherein the calculation component comprises a semantic profiling component adapted to determine a quantitative similarity of meaning between the first input learning objective and the second input learning objective for use as the relevance.
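Claims 59 through 61 describe a correlation component that reads a learning objective from the user profile and one from the application, scores their relevance with a semantic profiling component, and applies a weighting factor to profile data. A real semantic profiler would use richer NLP; token-overlap (Jaccard) similarity stands in here as an assumption, and the profile shape and `floor` parameter are hypothetical:

```python
def relevance(objective_a, objective_b):
    """Quantitative similarity of meaning in [0, 1] (claim 61), here
    approximated by word overlap between the two objective statements."""
    a = set(objective_a.lower().split())
    b = set(objective_b.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def adapt_weighting(profile, app_objective, floor=0.25):
    """Weight existing experience data by the determined relevance
    (in the spirit of claim 60)."""
    r = relevance(profile["learning_objective"], app_objective)
    profile["experience_weight"] = max(floor, r)
    return profile

profile = {"learning_objective": "navigate a 3d maze", "experience_weight": 1.0}
adapt_weighting(profile, "navigate a virtual 3d environment")
```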
-
62. A software application for use with the system of claim 1.
-
63. A computer program product comprising the software application of claim 62.
-
64. A method for teaching a learning objective comprising:
-
executing a software application comprising an application logic component and that provides a user view and a facilitator view; and using an intervention control to interface directly with the application logic component. - View Dependent Claims (65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125)
-
65. The method of claim 64, further comprising the step of assessing the performance of a learning objective with a monitoring system.
-
66. The method of claim 65, wherein the intervention control comprises an intervention configuration interface and/or:
an intervention logic component that provides for an automated response to an output from the monitoring system.
-
67. The method of claim 64, being distributed over one or more computing nodes, and comprising the step of communicating data between at least the software application and the intervention control.
-
68. The method of claim 67 when dependent from claim 65 or claim 66, further comprising the step of communicating data between two or more of the software application, the intervention control and the monitoring system.
-
69. The method of claim 64, wherein the software application is distributed over one or more computing nodes and the method comprises the step of communicating data between at least two of a facilitator, a user and the application logic component.
-
70. The method of claim 67, wherein the distributed environment is a client/server environment.
-
71. The method of claim 67, wherein the distributed environment is a peer-to-peer environment.
-
72. The method of claim 67, wherein the software application is concurrently used by one or more users and zero or more facilitators.
-
73. The method of claim 64, wherein the software application is at least one of:
- a game, a learning application, a games-based learning application or a training simulation.
-
74. The method of claim 64, wherein the intervention control comprises a cues component for the provision of cues to the user of the software application.
-
75. The method of claim 74, wherein the cues component comprises a cue controller interface arranged to receive input from the intervention control to directly configure and dynamically update the application logic.
-
76. The method of claim 75, wherein the cue controller interface allows the intervention control to generate a cue as an object for use in the software application.
-
77. The method of claim 75, wherein the cues are predefined and can be selected from a list by a facilitator and/or the intervention control.
-
78. The method of claim 75, wherein parameter values of the cues can be configured by the intervention control.
-
79. The method of claim 74, wherein the cues comprise one or more of:
- graphic elements, text elements, audio elements, video elements or haptic elements.
-
80. The method of claim 74, wherein the cues are context-sensitive.
-
81. The method of claim 74, wherein the cues are user-sensitive.
-
82. The method of claim 80, wherein the arrangement of a cue is computed in relation to a point of reference of an environment of the software application.
-
83. The method of claim 82, wherein the relation between a cue and the point of reference is one of proximity, orientation, relative location, absolute location, presence or absence of other objects and/or users.
-
84. The method of claim 80, wherein the particular form of a cue is computed in relation to an environment of the software application.
-
85. The method of claim 84, wherein the relation between a cue and an environment can be one of size, geometry, colour, texture, shading, transparency, material, contrast, or brightness.
-
86. The method of claim 80, wherein the temporal aspects of a cue are computed in relation to an environment of the software application.
-
87. The method of claim 86, wherein the relation between a cue and an environment can be one of appearance time, lifetime, flash rate, number of repetitions, speed, velocity, or dynamic resizing.
-
88. The method of claim 74, wherein the cues are learning objective-sensitive.
-
89. The method of claim 88, wherein a cue can depend on at least one of past achievements, current objectives, targets, schedule for a user or a team of users.
-
90. The method of claim 74, wherein a cue can be used by and/or configured for a single user and/or a group or team of users.
-
91. The method of claim 64, further comprising the step of transmitting display signals to/from an application view component of the software application.
-
92. The method of claim 91, wherein the display signals comprise a view generated for the facilitator from at least one of:
part or whole of any user's view of the application including the user's view of a two dimensional or three dimensional virtual environment and user interface controls, and a defined camera view.
-
93. The method of claim 92, wherein the display signals comprise multiple views from more than one user or teams of users.
-
94. The method of claim 93, wherein the multiple views are arranged in a tiled or stacked window arrangement on one or more screens.
-
95. The method of claim 91, wherein the display signals comprise one or more of:
- camera position information, camera orientation, camera viewing angle, and virtual environment objects.
-
96. The method of claim 91, wherein the display signals comprise compressed or uncompressed image data.
-
97. The method of claim 64, wherein the method comprises the step of audio and/or visual communication with one or more users via the software application.
-
98. The method of claim 66, wherein the method further comprises the step of allocating the usage of controls within the software application between two or more of the user, the facilitator and automatic control by the intervention logic component, via a control component provided as part of the intervention control.
-
99. The method of claim 98, wherein the control component comprises a control interface arranged to receive input from the facilitator and/or from the intervention logic component to directly configure and dynamically update the application logic component.
-
100. The method of claim 98, wherein the control interface allows a facilitator to define and/or the intervention logic component to generate a control as an object for use in the software application.
-
101. The method of claim 98, wherein the controls are predefined and can be selected from a list by a facilitator and/or the intervention logic component.
-
102. The method of claim 98, wherein parameter values of the controls can be configured by a facilitator and/or the intervention logic component.
-
103. The method of claim 98, wherein the controls comprise a range of user interface controls comprising one or more of menus, single buttons, radio buttons, sliders, 2D (x-y coordinate) input means, 3D (x-y-z coordinate) input means, waypoints input means, text, avatar or virtual character control means for head, body, hands, legs, objects carried by avatar, objects held in hand, object manipulation in the environment, virtual vehicle control means (including direction and speed), and virtual creature control means.
-
104. The method of claim 98, wherein the runtime control over any combination of the controls is dynamically assigned to any combination of:
- users, teams of users, facilitators, intervention logic component.
-
105. The method of claim 104, wherein the application logic component comprises a conflict arbitration logic component that determines the outcome in the case of simultaneous exercise of two or more control actions.
-
106. The method of claim 105, wherein the conflict arbitration logic component comprises one of:
- facilitator priority, user priority, intervention logic priority, or the majority, minority, or minimum percentage of user actions ascertained in a poll of users.
-
107. The method of claim 105, wherein the conflict arbitration logic component considers control actions within a specified time interval to determine the outcome.
-
108. The method of claim 98, wherein the software application comprises a feedback mechanism where the status of control by the facilitator and/or the intervention logic component is displayed to the user.
-
109. The method of claim 98, wherein the software application comprises a feedback mechanism where the status of control by the user and/or the intervention logic component is displayed to the facilitator.
-
110. The method of claim 64, wherein the monitoring system generates an alarm for the facilitator based on the user's performance in the software application.
-
111. The method of claim 110, wherein said performance in the software application is a quantified achievement of one or more learning objectives.
-
112. The method of claim 64, wherein the monitoring system generates an alarm for the intervention logic component in order to trigger an automatic intervention response.
-
113. The method of claim 112, wherein the intervention response is a command to do one of:
- generate, select, or configure, and takes effect on one or more of cues, controls, views, and communication actions.
-
114. The method of claim 110, wherein the intervention logic receives information about the degree of achievement of a user's learning objectives, user actions and the state of the software application, and determines alarms based on this information and logged historic information.
-
115. The method of claim 110, wherein an alarm is generated based on monitoring one or more of:
- an activity or inactivity level, the exploration by the user of unknown territory, the action by the user of going around in circles or repeating unsuccessful actions, explicit user requests for intervention, usage of peer support, progress over time and achievement rate, and difficulty level of software application.
-
116. The method of claim 110, wherein said software application comprises one or more measured objectives formed from a related group of software application measurements;
- and is adapted for the assessment of a learning objective, said learning objective being formed from:
the selection of one or more measured objectives; defined outcome conditions for the measured objectives; and selected outcome activities that each invoke a command in the learning application.
-
117. The method of claim 64, wherein a user of the software application can set a preference for a level of intervention to be provided.
-
118. The method of claim 64, wherein a facilitator of the software application can set a preference for any user for a level of intervention to be provided.
-
119. The method of claim 117, wherein the user preference corresponds to a variation of learning objective thresholds that define when the monitoring system triggers the alarm.
-
120. The method of claim 64, wherein a facilitator can view and intervene in one or more software applications at the same time.
-
121. The method of claim 64, wherein a user can temporarily or permanently assume the role of a facilitator in a peer support capacity.
-
122. The method of claim 64, wherein the method further comprises the step of correlating a user or a team profile to a specific software application, via a correlation component comprising an instruction set for a computer comprising:
-
a system for interrogating a user profile and reading a learning objective as a first input; a system for interrogating the software application and reading a learning objective as a second input; a calculation component for determining a relevance of the first input learning objective to the second input learning objective; and a system for adapting the software application in accordance with the determined relevance and/or updating the user profile in accordance with the determined relevance.
-
-
123. The method of claim 122, wherein said system for adapting the software application comprises instructions for applying a weighting factor to at least one of existing experience data in the user profile related to the learning objective of the software application and performance data related to the subsequent use of the software application.
-
124. The method of claim 122, wherein the calculation component comprises a semantic profiling component adapted to determine a quantitative similarity of meaning between the first input learning objective and the second input learning objective for use as the relevance.
-
125. A computer readable medium having instructions encoded thereon that, when run on a computer, provide a system for teaching a learning objective that performs the steps of claim 64.
-
Current Assignee: ITI Scotland Limited (Scottish Government)
-
Original Assignee: ITI Scotland Limited (Scottish Government)
-
Inventors: Astheimer, Peter
-
Granted Patent
-
Time in Patent Office (days)
-
Field of Search
-
US Class (Current): 717/175
-
CPC Class Codes: G09B 19/00 Teaching not covered by oth...; G09B 5/00 Electrically-operated educa...; G09B 7/00 Electrically-operated teach...