Brush, carbon-copy, and fill gestures
Abstract
Techniques involving gestures and other functionality are described. In one or more implementations, the techniques describe gestures that are usable to provide inputs to a computing device. A variety of different gestures are contemplated, including bimodal gestures (e.g., using more than one type of input) and single modal gestures. Additionally, the gesture techniques may be configured to leverage these different input types to increase the amount of gestures that are made available to initiate operations of a computing device.
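The bimodal versus single-modal distinction described in the abstract can be illustrated with a minimal sketch. The names below (`InputType`, `classify_gesture`) are hypothetical, not taken from the patent:

```python
from enum import Enum, auto

class InputType(Enum):
    """Input modalities contemplated by the abstract."""
    FINGER = auto()
    STYLUS = auto()

def classify_gesture(input_types):
    # A bimodal gesture combines more than one type of input
    # (e.g., a finger touch plus a stylus stroke); a single
    # modal gesture uses only one input type.
    return "bimodal" if len(set(input_types)) > 1 else "single modal"
```

A recognizer built this way can route a finger-plus-stylus sequence to a different operation than the same strokes made with a single modality, which is how leveraging multiple input types increases the number of available gestures.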
20 Claims
1. A method performed on a computing device, the method comprising:

recognizing, by the computing device, a first user input as selecting and holding an object displayed at a first location on a touch screen, the first user input being recognized as a first touch input;

recognizing, by the computing device, a second user input as drawing an empty frame at a second location on the touch screen, where the second location is separate and distinct from the first location, the empty frame recognized as being drawn while the object is being selected and held via the first user input, the second user input being recognized as a second touch input that defines at least two points of the empty frame, where the first touch input and the second touch input are each provided by a finger or a stylus; and

detecting, by the computing device, a fill gesture from the recognized first and second user inputs, the fill gesture effective to use the object selected via the first user input to fill the empty frame defined via the second user input so as to generate and display on the touch screen a filled frame that comprises a version of the selected object within the filled frame at the second location while the selected object remains displayed at the first location.

Dependent claims: 2, 3, 4, 5, 6, 7.
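The steps recited in claim 1 can be sketched as a small detector: hold an object with one touch input, collect frame points from a second touch input, then emit a copy of the held object into the frame. All class and method names below are a hypothetical illustration, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

@dataclass
class DisplayedObject:
    content: str
    location: Point  # where the object is displayed on the touch screen

class FillGestureDetector:
    """Hypothetical sketch of the claimed fill gesture: a first touch
    input selects and holds an object, a second touch input draws an
    empty frame elsewhere, and the held object fills the frame while
    the original stays displayed at its first location."""

    def __init__(self):
        self.held = None        # object held via the first input
        self.frame_points = []  # points defining the empty frame

    def first_input_hold(self, obj):
        # First user input: select and hold the displayed object.
        self.held = obj

    def second_input_point(self, point):
        # Second user input: frame points only count while the
        # object is still being held via the first input.
        if self.held is not None:
            self.frame_points.append(point)

    def detect_fill_gesture(self):
        # The frame must be defined by at least two points at a
        # location separate and distinct from the held object.
        if self.held is None or len(self.frame_points) < 2:
            return None
        if any(p == self.held.location for p in self.frame_points):
            return None
        # Fill the frame with a version (copy) of the selected object;
        # the original remains displayed at the first location.
        return DisplayedObject(self.held.content, self.frame_points[0])
```

For example, holding a photo at one location and then tapping two corner points of a frame elsewhere yields a copy of the photo at the frame's location, while the original photo's location is untouched.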
8. A computing device comprising:

one or more modules implemented at least partially in hardware, the one or more modules configured to perform operations comprising:

recognizing, by the computing device, a first user input as selecting and holding an object displayed at a first location on a touch screen, the first user input being recognized as a first touch input;

recognizing, by the computing device, a second user input as drawing an empty frame at a second location on the display device, where the second location is separate and distinct from the first location, the second user input being recognized as a second touch input that defines at least two points of the empty frame, where the first touch input and the second touch input are each provided by a finger or a stylus, the empty frame recognized as being drawn while the object is being selected and held via the first user input; and

detecting, by the computing device, a fill gesture from the recognized first and second user inputs, the fill gesture effective to use the object selected via the first user input to fill the empty frame defined via the second user input so as to generate and display on the touch screen a filled frame that comprises a version of the selected object within the filled frame at the second location while the selected object remains displayed at the first location.

Dependent claims: 9, 10, 11, 12, 13.
14. One or more hardware computer readable storage devices comprising instructions stored thereon that, responsive to execution by a computing device, cause the computing device to implement a gesture module configured to:

recognize a first user input as selecting and holding an object displayed at a first location on a touch screen, the first user input being recognized as a first touch input;

recognize a second user input as drawing an empty frame at a second location on the display device, where the second location is separate and distinct from the first location, the empty frame recognized as being drawn while the object is being selected and held via the first user input, the second user input being recognized as a second touch input that defines at least two points of the empty frame, where the first touch input and the second touch input are each provided by a finger or a stylus; and

detect a fill gesture from the recognized first and second user inputs, the fill gesture effective to use the object selected via the first user input to fill the empty frame defined via the second user input so as to generate and display on the touch screen a filled frame that comprises a version of the selected object within the filled frame at the second location while the selected object remains displayed at the first location.

Dependent claims: 15, 16, 17, 18, 19, 20.
Specification