The present invention relates to anticipatory lighting from device screens based on user profiles. Systems, methods, and computer-readable storage media are provided for determining the mood of a user, deriving an appropriate lighting scheme, and implementing the lighting scheme on all devices within a predetermined proximity to the user. Furthermore, when the user begins a task, the devices can track the user and use their nearby screens to provide functional lighting for that task.
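A minimal sketch of this mood-to-lighting flow is given below; the class names, the mood-to-scheme lookup, and the proximity threshold are illustrative assumptions rather than details from the disclosure.

```python
from dataclasses import dataclass
import math

@dataclass
class LightingScheme:
    color_temp_kelvin: int   # warmer tones for calming, cooler for alertness
    brightness_pct: int

@dataclass
class Device:
    name: str
    x: float
    y: float
    def apply_scheme(self, scheme: LightingScheme) -> None:
        # Stand-in for driving the device's actual screen output.
        print(f"{self.name}: {scheme.color_temp_kelvin}K at {scheme.brightness_pct}%")

def derive_scheme(mood: str) -> LightingScheme:
    # Simple lookup standing in for whatever mood-to-lighting mapping is used.
    table = {
        "stressed": LightingScheme(2700, 40),
        "focused": LightingScheme(5000, 80),
    }
    return table.get(mood, LightingScheme(4000, 60))

def light_nearby_devices(user_xy, devices, mood, max_distance=3.0):
    # Apply the derived scheme only to devices within the proximity threshold.
    scheme = derive_scheme(mood)
    for d in devices:
        if math.dist(user_xy, (d.x, d.y)) <= max_distance:
            d.apply_scheme(scheme)
```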
A device may detect a negative emotion of a user and, based on detecting the negative emotion, identify a task being performed by the user in relation to an item. The device may then obtain information to aid the user in performing the identified task. The information may include at least one of: information obtained from a memory associated with the device, in the form of a help document, a user manual, or an instruction manual relating to the task; information obtained from a network identifying a document relating to the task; or information identifying a video relating to the task. The device may provide the obtained information to the user.
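This assistance flow can be pictured with a short sketch; the function names, emotion labels, and the web_search callable below are hypothetical placeholders for whatever emotion-detection, activity-recognition, and retrieval components the device actually uses.

```python
def identify_task(observed_activity: str) -> str:
    # Stand-in for an activity-recognition step mapping observations to a task.
    return observed_activity.lower()

def assist_user(emotion: str, observed_activity: str, local_manuals: dict, web_search):
    if emotion not in {"frustrated", "angry", "confused"}:
        return []  # only intervene when a negative emotion is detected

    task = identify_task(observed_activity)   # e.g. "assemble bookshelf"
    results = []

    # 1. Help/instruction documents already stored in device memory.
    if task in local_manuals:
        results.append(local_manuals[task])

    # 2. Documents and videos located over the network.
    results.extend(web_search(f"how to {task} document"))
    results.extend(web_search(f"how to {task} video"))
    return results
```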
Disclosed herein are an “activity assistant” and an “activity assistant user interface” that provide users with dynamically selected “activities” intelligently tailored to the user's world. For example, a graphical UI includes selectable context elements, each of which corresponds to a user attribute whose value provides a signal to the activity assistant. In response to selecting a parameter associated with at least one of the selectable context elements, a first signal is generated and provided to the activity assistant. In response to providing the signal, one or more activities are populated and ordered based, at least in part, on the signal, and are subsequently displayed. In some examples, the parameters may include a current mood of the user, a current location of the user, associations with other users, and a time during which the user desires to carry out the activity.
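One way to picture the signal-driven ordering is sketched below; the Activity structure, the signal dictionary keys, and the tag-overlap scoring rule are assumptions made for illustration, not the assistant's actual ranking logic.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    title: str
    tags: set = field(default_factory=set)

def rank_activities(activities, signal: dict):
    # signal might look like {"mood": "relaxed", "location": "home", "time": "evening"}
    def score(a: Activity) -> int:
        # Count how many signal values the activity's tags satisfy.
        return sum(1 for v in signal.values() if v in a.tags)
    return sorted(activities, key=score, reverse=True)

catalog = [
    Activity("Watch a film", {"relaxed", "home", "evening"}),
    Activity("Go for a run", {"energetic", "outdoors", "morning"}),
]
print([a.title for a in rank_activities(catalog, {"mood": "relaxed", "location": "home"})])
```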
Implementations generally relate to selecting soundtracks. In some implementations, a method includes determining one or more sound mood attributes of one or more soundtracks, where the one or more sound mood attributes are based on one or more sound characteristics. The method further includes determining one or more visual mood attributes of one or more visual media items, where the one or more visual mood attributes are based on one or more visual characteristics. The method further includes selecting one or more of the soundtracks based on the one or more sound mood attributes and the one or more visual mood attributes. The method further includes generating an association between the one or more selected soundtracks and the one or more visual media items, wherein the association enables the one or more selected soundtracks to be played while the one or more visual media items are displayed.
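A compact sketch of the mood-matching idea follows; the mood labels and the exact-match pairing rule are simplifying assumptions, since an actual implementation would derive mood attributes from sound and visual characteristics.

```python
from dataclasses import dataclass

@dataclass
class Soundtrack:
    title: str
    mood: str          # derived from sound characteristics such as tempo or key

@dataclass
class VisualItem:
    filename: str
    mood: str          # derived from visual characteristics such as colour or scene content

def associate(soundtracks, visuals):
    # Pair each visual item with soundtracks whose mood attribute matches,
    # so the matched tracks can play while that item is displayed.
    return {v.filename: [s.title for s in soundtracks if s.mood == v.mood]
            for v in visuals}

pairs = associate(
    [Soundtrack("Sunrise", "happy"), Soundtrack("Nocturne", "calm")],
    [VisualItem("beach.jpg", "happy")],
)
print(pairs)   # {'beach.jpg': ['Sunrise']}
```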