Method and system for providing adaptive arrangement and representation of user interface elements
Abstract
An approach is provided for rendering a representation of a three-dimensional object in a user interface. The approach includes determining an arrangement of one or more user interface elements based on user profile information, content information, contextual information, or a combination thereof. The approach also includes rendering a representation of a three-dimensional object in a user interface, wherein the representation includes one or more surface segments. The approach further includes associating the one or more user interface elements respectively with the one or more surface segments based on the arrangement. A user interaction input manipulates the representation of the three-dimensional object within a virtual three-dimensional space to expose the one or more user interface elements associated with the one or more surface segments that are visible in the user interface.
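The "arrangement" and "association" steps in the abstract can be pictured with a minimal sketch: score elements from profile/context data, order them, and map them onto the surface segments of a cube. Every name here (`UIElement`, the relevance scores, the six face labels) is an illustrative assumption, not the patent's disclosed implementation.

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    label: str
    relevance: float  # assumed score derived from user profile / context info

# Assumed three-dimensional shape: a cube with six surface segments.
CUBE_FACES = ["front", "right", "back", "left", "top", "bottom"]

def arrange_elements(elements):
    """Determine the 'arrangement': order elements by descending relevance."""
    return sorted(elements, key=lambda e: e.relevance, reverse=True)

def associate_with_segments(elements):
    """Associate arranged elements with surface segments, most relevant first."""
    arranged = arrange_elements(elements)
    return {face: el for face, el in zip(CUBE_FACES, arranged)}

elements = [
    UIElement("Email", 0.9),
    UIElement("Maps", 0.4),
    UIElement("Music", 0.7),
]
mapping = associate_with_segments(elements)
# With three elements, only the first three faces receive an element;
# the most relevant one lands on the initially visible "front" face.
```

A real renderer would then texture each face with its element's visual representation; this sketch only captures the ordering-and-assignment logic.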
17 Claims
1. A method comprising:

determining an arrangement of one or more user interface elements based on user profile information, content information, contextual information, or a combination thereof;
rendering a representation of a first three-dimensional object in a user interface of a user device, wherein the representation includes one or more surface segments, wherein the first three-dimensional object is a particular three-dimensional shape;
associating the one or more user interface elements respectively with the one or more surface segments based on the arrangement;
determining that the user device has been rotated;
manipulating, based on the rotation of the user device, the representation of the first three-dimensional object within a virtual three-dimensional space to expose the one or more user interface elements associated with the one or more surface segments that are visible in the user interface, wherein a direction of the manipulation is based on a direction of the rotation of the user device;
receiving a first user interaction input that indicates a selection of one of the one or more user interface elements;
rendering, based on the first user interaction input, a representation of a second three-dimensional object to present one or more additional user interface elements that are associated with the selected user interface element, wherein the second three-dimensional object is a same three-dimensional shape as the particular three-dimensional shape of the first three-dimensional object;
receiving a second user interaction input, in which a first finger is held in place over one of the one or more user interface elements and in which a second finger is swiped, the second user interaction input indicating a selection of another one of the one or more user interface elements associated with the first three-dimensional object; and
rendering, based on the second user interaction input, a two-dimensional object that includes at least two facets, of a plurality of facets of the first three-dimensional object, arranged in two dimensions.

View Dependent Claims: 2, 3, 4, 5, 6, 7, 13
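The "manipulating, based on the rotation of the user device" limitation ties the direction of the on-screen rotation to the direction of the physical rotation. A toy model of that step, assuming the object is a cube whose faces step in whole 90-degree increments around one axis (an assumption for illustration, not the claimed rendering pipeline):

```python
# Faces of the cube encountered in order when yawing the device clockwise.
FACE_RING = ["front", "right", "back", "left"]

def manipulate(visible_face, device_rotation_deg):
    """Rotate the rendered cube in the same direction as the device rotation,
    in 90-degree steps, and return the newly exposed face.
    Positive degrees model clockwise device rotation, negative counter-clockwise."""
    steps = round(device_rotation_deg / 90)
    i = FACE_RING.index(visible_face)
    return FACE_RING[(i + steps) % len(FACE_RING)]
```

Rotating the device clockwise exposes the face to the right; rotating it the other way exposes the face to the left, which is the "direction of the manipulation is based on a direction of the rotation" behavior.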
8. A user device comprising:

a non-transitory computer-readable memory device storing processor-executable instructions; and
one or more processors configured to execute the processor-executable instructions, wherein executing the processor-executable instructions causes the one or more processors to:
determine an arrangement of one or more user interface elements based on user profile information, content information, contextual information, or a combination thereof;
render a representation of a first three-dimensional object in a user interface of the user device, wherein the representation includes one or more surface segments situated on a plurality of surfaces of the first three-dimensional object, wherein the first three-dimensional object is a particular three-dimensional shape;
associate the one or more user interface elements respectively with the one or more surface segments based on the arrangement;
determine that the user device has been rotated;
manipulate, based on the rotation of the user device, the representation of the first three-dimensional object within a virtual three-dimensional space to expose the one or more user interface elements associated with the one or more surface segments that are visible in the user interface, wherein a direction of the manipulation is based on a direction of the rotation of the user device;
receive a first user interaction input that indicates a selection of one of the one or more user interface elements;
render, based on the first user interaction input, a representation of a second three-dimensional object to present one or more additional user interface elements that are associated with the selected user interface element, wherein the second three-dimensional object is a same three-dimensional shape as the particular three-dimensional shape of the first three-dimensional object;
receive a second user interaction input, in which a first finger is held in place over one of the one or more user interface elements and in which a second finger is swiped, the second user interaction input indicating a selection of another one of the one or more user interface elements associated with the first three-dimensional object; and
render, based on the second user interaction input, a two-dimensional object that includes at least two surfaces, of the plurality of surfaces of the first three-dimensional object, arranged in two dimensions.

View Dependent Claims: 9, 10, 11, 12, 14
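Claims 1 and 8 both recite a two-finger input in which one finger is held in place while a second finger swipes. A sketch of how such a gesture might be distinguished from other touch input, using assumed pixel thresholds and a simplified touch-track representation (lists of (x, y) samples per finger):

```python
HOLD_TOLERANCE = 10.0  # assumed max travel (px) for the "held" finger
SWIPE_MIN = 50.0       # assumed min travel (px) for the "swiped" finger

def classify_two_finger_input(track_a, track_b):
    """Classify two simultaneous finger tracks.
    Returns 'hold-and-swipe' when one finger stays put (within HOLD_TOLERANCE)
    while the other travels at least SWIPE_MIN; otherwise 'other'."""
    def travel(track):
        (x0, y0), (x1, y1) = track[0], track[-1]
        return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5

    a, b = travel(track_a), travel(track_b)
    if min(a, b) <= HOLD_TOLERANCE and max(a, b) >= SWIPE_MIN:
        return "hold-and-swipe"
    return "other"
```

In the claimed interaction, the held finger identifies which user interface element anchors the gesture, and the swipe selects another element; this sketch only covers recognizing the gesture itself.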
15. A non-transitory computer-readable medium storing processor-executable instructions, which, when executed by one or more processors of a user device, cause the one or more processors to:

render a representation of a first three-dimensional object in a user interface of the user device, wherein the representation includes one or more surface segments that are each associated with one user interface element, wherein the first three-dimensional object is a particular three-dimensional shape that includes a plurality of facets arranged in three dimensions;
determine that the user device has been rotated;
manipulate, based on the rotation of the user device, the representation of the first three-dimensional object within a virtual three-dimensional space to expose the one or more user interface elements associated with the one or more surface segments that are visible in the user interface, wherein a direction of the manipulation is based on a direction of the rotation of the user device;
receive a first user interaction input that indicates a selection of one of the one or more user interface elements;
render, based on the first user interaction input, a representation of a second three-dimensional object to present one or more additional user interface elements that are associated with the selected user interface element, wherein the second three-dimensional object is a same three-dimensional shape as the particular three-dimensional shape of the first three-dimensional object;
receive a second user interaction input that indicates a selection of another one of the one or more user interface elements associated with the first three-dimensional object; and
render, based on the second user interaction input, a two-dimensional object that includes at least two facets, of the plurality of facets of the first three-dimensional object, arranged in two dimensions.

View Dependent Claims: 16, 17
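The final limitation in each independent claim renders at least two facets of the three-dimensional object "arranged in two dimensions", i.e. laid out flat. A minimal sketch of that unfolding step, assuming square facets of a fixed size placed side by side (the size and left-to-right layout are assumptions, not the claimed geometry):

```python
FACET_SIZE = 100  # assumed facet edge length in pixels

def unfold_facets(facets):
    """Arrange the named facets left-to-right in a two-dimensional plane.
    Returns a mapping from facet name to its (x, y) top-left position."""
    return {name: (i * FACET_SIZE, 0) for i, name in enumerate(facets)}
```

For example, unfolding the cube's "front" and "right" facets yields two tiles in a row, each still carrying the user interface element that was associated with that facet in three dimensions.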