Embodiments are provided for communicating notifications and other textual data associated with applications installed on an electronic device. According to certain aspects, a user can interface with an input device to send (218) a wake up trigger to the electronic device. The electronic device retrieves (222) application notifications and converts (288) the application notifications to audio data. The electronic device also sends (230) the audio data to an audio output device for annunciation (232). The user may also use the input device to send (242) a request to the electronic device to activate the display screen. The electronic device identifies (248) an application corresponding to an annunciated notification, and activates (254) the display screen and initiates the application.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
Using the techniques discussed herein, a set of images is captured by one or more array imagers (106). Each array imager includes multiple imagers configured in various manners. Each array imager captures multiple images of substantially a same scene at substantially a same time. The images captured by each array imager are encoded by multiple processors (112, 114). Each processor can encode sets of images captured by a different array imager, or each processor can encode different sets of images captured by the same array imager. The encoding of the images is performed using various image-compression techniques so that the information that results from the encoding is smaller, in terms of storage size, than the uncompressed images.
H04N 13/282 - Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
H04N 19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/62 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding by frequency transforming in three dimensions
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/75 - Circuitry for compensating brightness variation in the scene by influencing optical camera components
H04N 23/80 - Camera processing pipelines; Components thereof
H04N 23/957 - Light-field or plenoptic cameras or camera modules
3.
SYSTEMS AND METHODS FOR SYNCHRONIZING MULTIPLE ELECTRONIC DEVICES
Embodiments are provided for syncing multiple electronic devices for collective audio playback. According to certain aspects, a master device connects (218) to a slave device via a wireless connection. The master device calculates (224) a network latency via a series of network latency pings with the slave device and sends (225) the network latency to the slave device. Further, the master device sends (232) a portion of an audio file as well as a timing instruction including a system time to the slave device. The master device initiates (234) playback of the portion of the audio file and the slave device initiates (236) playback of the portion of the audio file according to the timing instruction and a calculated system clock offset value.
H04H 20/38 - Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast
H04H 20/61 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
H04H 20/08 - Arrangements for relaying broadcast information among terminal devices
H04H 20/18 - Arrangements for synchronising broadcast or distribution via plural systems
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04H 60/88 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks which are wireless networks
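The ping-based latency and clock-offset calculation described in entry 3 can be sketched as follows (illustrative Python; the NTP-style four-timestamp exchange and formulas are an assumption, not taken from the patent):

```python
def estimate_rtt_and_offset(local_send, remote_recv, remote_send, local_recv):
    """One ping exchange between master and slave.

    local_send/local_recv are master-clock timestamps; remote_recv/
    remote_send are slave-clock timestamps for the same exchange.
    Returns (round-trip network latency, estimated slave clock offset).
    """
    rtt = (local_recv - local_send) - (remote_send - remote_recv)
    offset = ((remote_recv - local_send) + (remote_send - local_recv)) / 2
    return rtt, offset

# Synthetic example: slave clock runs 0.5 s ahead of the master,
# with 0.1 s of one-way network delay in each direction.
rtt, offset = estimate_rtt_and_offset(10.0, 10.6, 10.6, 10.2)
```

A slave that receives a timing instruction carrying master system time `t_start` would then begin playback when its own clock reads `t_start + offset`.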
4.
MOVING CONTENT BETWEEN SET TOP BOX AND END DEVICES IN HOME
A content moving device enables content stored on a first user device, such as a DVR, in a first format and resolution to be provided to a second user device, such as a portable media player (PMP), in a second format and resolution. The content moving device identifies content on the first user device as candidate content which may be desired by the PMP and receives the candidate content from the DVR. The content moving device transcodes the candidate content at times independent of a request from the PMP for the content. The content moving device may provide a list of available transcoded content to the PMP for selection, and provide selected content to the PMP. The content moving device may also provide information relating to any protection schemes of the content provided to the PMP, such as DRM rights and decryption keys. The content moving device performs the often computationally intensive and time-consuming transcoding of user content to enable the user to move content between different user devices in a convenient manner.
H04N 21/4363 - Adapting the video stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/41 - Structure of client; Structure of client peripherals
5.
ALERT PERIPHERAL FOR NOTIFICATION OF EVENTS OCCURRING ON A PROGRAMMABLE USER EQUIPMENT WITH COMMUNICATION CAPABILITIES
An alert peripheral device that provides sensory notification to a user of the device includes: a power subsystem; a communication mechanism by which notification signals are received from a first user equipment (UE) that generates and transmits the notification signals in response to detection of specific events at the first UE; and a response notification mechanism that provides a sensory response of the peripheral device following receipt of a notification of a detected event (NDE) signal. The device further includes an embedded controller coupled to each of the other components and which includes firmware that, when executed on the embedded controller, configures the embedded controller to: establish a communication link between the communication mechanism and the first UE; and in response to detecting a receipt of the NDE signal from the first UE, trigger the response notification mechanism to exhibit the sensory response.
H04M 19/04 - Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone the ringing-current being generated at the substations
H04M 1/72412 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
H04W 4/14 - Short messaging services, e.g. short message service [SMS] or unstructured supplementary service data [USSD]
6.
NAME COMPOSITION ASSISTANCE IN MESSAGING APPLICATIONS
A method includes identifying, at an electronic device, a candidate name responsive to user input indicating a salutational trigger during composition of a body of a message of a messaging application. Identifying the candidate name includes at least one of: parsing a recipient-specific portion of a recipient message address of the message; parsing a display name associated with the recipient message address; parsing a content of the message body; parsing an attachment name associated with an attachment field of the message; identifying the candidate name from a contact record selected from a contacts database based on a recipient-specific portion of a recipient message address of the message; and parsing user-readable content of an application from which composition of the message was triggered. The method further includes facilitating composition of a recipient name in the body of the message based on the candidate name.
H04L 51/48 - Message addressing, e.g. address format or anonymous messages, aliases
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
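One of the parsing options listed in entry 6 — deriving a name from the recipient-specific (local) portion of the message address — might look like this minimal sketch (the separator set and capitalization policy are assumptions, not from the patent):

```python
import re

def candidate_name_from_address(address):
    """Derive a candidate first name from an address's local part,
    e.g. 'jane.doe@example.com' -> 'Jane'."""
    local = address.split("@", 1)[0]
    tokens = [t for t in re.split(r"[._\-+]", local) if t]
    return tokens[0].capitalize() if tokens else None
```

The candidate would then be offered to the user when a salutational trigger (e.g. "Hi ") is detected in the message body.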
7.
SYSTEMS AND METHODS FOR EQUALIZING AUDIO FOR PLAYBACK ON AN ELECTRONIC DEVICE
Embodiments are provided for receiving a request to output audio at a first speaker and a second speaker of an electronic device, determining that the electronic device is oriented in a portrait orientation or a landscape orientation, identifying, based on the determined orientation, a first equalization setting for the first speaker and a second equalization setting for the second speaker, providing, for output at the first speaker, a first audio signal with the first equalization setting, and providing, for output at the second speaker, a second audio signal with the second equalization setting.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
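The orientation-dependent lookup in entry 7 reduces to a table from (orientation, speaker) to an equalization setting; the preset gains below are purely illustrative, not values from the patent:

```python
# Hypothetical EQ presets; the dB gains are invented for illustration.
EQ_PRESETS = {
    ("portrait", "first"):   {"bass_db": 3, "treble_db": 0},
    ("portrait", "second"):  {"bass_db": 0, "treble_db": 2},
    ("landscape", "first"):  {"bass_db": 1, "treble_db": 1},
    ("landscape", "second"): {"bass_db": 1, "treble_db": 1},
}

def equalization_for(orientation, speaker):
    """Identify the equalization setting for a speaker given the
    device's determined orientation."""
    return EQ_PRESETS[(orientation, speaker)]
```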
Method, apparatus and computer-readable media for receiving a multiprogram transport service that includes one or more compressed video services and one or more 3D-2D conversion options, generating an uncompressed video signal by performing a decoding portion of a transcoding operation for one of the one or more compressed video services, determining from the 3D-2D conversion option whether a 3D-2D conversion is to be performed, performing a scale conversion on the uncompressed video according to a specified type of 3D-2D conversion, generating a compressed video service by performing an encoding portion of a transcoding operation on the uncompressed video that has been scale converted, and generating a second multiprogram transport service that includes the compressed video signal that has been 3D-2D converted.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
9.
AUDIO ROUTING SYSTEM FOR ROUTING AUDIO DATA TO AND FROM A MOBILE DEVICE
A method includes receiving sound by a first audio unit installed in an electrical outlet, routing audio data corresponding to the received sound from the first audio unit to a second audio unit installed in a second electrical outlet, and sending the audio data to a mobile device using a wireless link between the mobile device and the second audio unit. Routing the audio data may include receiving the audio data from the first audio unit by a third audio unit and routing the audio data to the second audio unit by the third audio unit serving as a router. The data may be routed using table-driven routing, on-demand routing or some other appropriate routing protocol. The method may also include performing voice recognition on the audio data and detecting a command word and routing command word data to the second audio unit.
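The table-driven routing option mentioned in entry 9 can be sketched with a simple next-hop table (the unit names and table contents below are hypothetical):

```python
def route_path(src, dst, next_hop):
    """Follow next-hop table entries from src toward dst.

    Returns the hop sequence, or None if dst is unreachable or a
    routing loop is detected.
    """
    path, seen = [src], {src}
    while path[-1] != dst:
        hop = next_hop.get(path[-1])
        if hop is None or hop in seen:
            return None
        path.append(hop)
        seen.add(hop)
    return path

# The first unit reaches the mobile-linked second unit via the third
# unit acting as a router.
hops = route_path("unit1", "unit2", {"unit1": "unit3", "unit3": "unit2"})
```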
Techniques and apparatuses for recognizing accented speech are described. In some embodiments, an accent module recognizes accented speech using an accent library based on device data, uses different speech recognition correction levels based on an application field into which recognized words are set to be provided, or updates an accent library based on corrections made to incorrectly recognized speech.
Techniques and/or apparatuses receive an indication that a user has entered a rating of first media content and determine, responsive to the indication that the user has entered the rating of the first media content, whether the user consumed the first media content prior to entering the rating. Responsive to a determination that the user did not consume the first media content prior to entering the rating, the techniques and/or apparatuses provide an indication that the user did not consume the first media content prior to entering the rating or weight the rating based on the determination that the user did not consume the first media content prior to entering the rating of the first media content.
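The rating-weighting branch of this abstract amounts to scaling the entered rating when the user is determined not to have consumed the content first; the weighting factor below is an assumed tuning constant, not a value from the source:

```python
def weighted_rating(rating, consumed_first, unconsumed_weight=0.25):
    """Weight a rating based on whether the user consumed the media
    content prior to entering the rating."""
    return rating * (1.0 if consumed_first else unconsumed_weight)
```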
Disclosed are techniques that provide a “best” picture taken within a few seconds of the moment when a capture command is received (e.g., when the “shutter” button is pressed). In some situations, several still images are automatically (that is, without the user's input) captured. These images are compared to find a “best” image that is presented to the photographer for consideration. Video is also captured automatically and analyzed to see if there is an action scene or other motion content around the time of the capture command. If the analysis reveals anything interesting, then the video clip is presented to the photographer. The video clip may be cropped to match the still-capture scene and to remove transitory parts. Higher-precision horizon detection may be provided based on motion analysis and on pixel-data analysis.
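The "best"-image selection step can be sketched as a scored comparison over the automatically captured stills; the sharpness values and scoring function here are stand-ins, not the patent's actual analysis:

```python
def best_frame(frames, score):
    """Return the frame that maximizes the given quality score
    (a Laplacian-variance sharpness measure is a common stand-in)."""
    return max(frames, key=score)

# Hypothetical burst of stills captured around the shutter press.
frames = [
    {"id": 1, "sharpness": 0.3},
    {"id": 2, "sharpness": 0.9},
    {"id": 3, "sharpness": 0.6},
]
best = best_frame(frames, lambda f: f["sharpness"])
```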
A communication system provides secure communication between two nodes in a self-organizing network without the need for a centralized security or control device. A first node of the two nodes is provisioned with one or more security profiles, auto-discovers a second node of the two nodes, authenticates the second node based on a security profile of the one or more security profiles, selects a security profile of the one or more security profiles to encrypt a communication session between the two nodes, and encrypts the communication session between the two nodes based on the selected security profile. The second node also is provisioned with the same one or more security profiles, authenticates the first node based on a same security profile as is used to authenticate the second node, and encrypts the communication session based on the same security profile as is used for encryption by the first node.
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
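The mutual profile selection described for the two nodes can be sketched as picking the first mutually provisioned security profile in local preference order (the preference-order policy and profile names are assumptions, not from the patent):

```python
def select_profile(local_profiles, peer_profiles):
    """Select a security profile held by both nodes, preferring the
    order in which the local node lists its provisioned profiles."""
    peer = set(peer_profiles)
    for profile in local_profiles:
        if profile in peer:
            return profile
    return None
```

The selected profile would then govern both authentication of the peer and encryption of the session, as both nodes hold the same provisioned set.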
In general, in one aspect, the disclosure describes a Universal Plug and Play (UPnP) Remote Access Server (RAS) to provide a communication channel between UPnP Remote Access Clients (RACs) connected thereto. The UPnP RAS maintains local discovery information for UPnP devices connected to a local network and remote discovery information for remote UPnP devices communicating therewith. The UPnP RAS provides the remote UPnP devices communicating therewith with the local discovery information and the remote discovery information. The remote discovery information is utilized by a first remote UPnP device to discover a second UPnP device and vice versa. After discovery, a first remote UPnP device can communicate with a second UPnP device and vice versa.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
A method includes providing a variant playlist file that identifies a plurality of variant streams each corresponding to a different encoding of a same media presentation; tracking a first set of media segments encoded at a first bitrate that correspond to a first playlist file for a first variant stream associated with the variant playlist file; responsive to a second encoded bitrate associated with a second set of media segments that correspond to a second variant stream being higher than the first encoded bitrate: determining a number of media segments to include in a plurality of media segments from the second set of media segments that correspond to the first set of media segments; and providing, to the client device, a second playlist file that identifies a plurality of media segments from the second set of media segments that correspond to respective ones of the first set of media segments.
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/482 - End-user interface for program selection
H04N 21/658 - Transmission by the client directed to the server
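The segment-matching step — building a second playlist of higher-bitrate segments corresponding to the tracked lower-bitrate ones — can be sketched as a lookup by media-sequence number (the matching key is an assumption; the abstract does not name it):

```python
def matching_segments(tracked_seqs, higher_variant):
    """Return higher-bitrate segment URIs for each tracked
    media-sequence number that the higher variant also provides."""
    return [higher_variant[s] for s in tracked_seqs if s in higher_variant]

# Hypothetical data: sequence numbers tracked for the low-bitrate
# variant, and the corresponding high-bitrate variant's segments.
second_playlist = matching_segments(
    [3, 4, 5],
    {3: "hi/3.ts", 4: "hi/4.ts", 5: "hi/5.ts", 6: "hi/6.ts"},
)
```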
16.
Alert peripheral for notification of events occurring on a programmable user equipment with communication capabilities
An alert peripheral device that provides sensory notification to a user of the device includes: a power subsystem; a communication mechanism by which notification signals are received from a first user equipment (UE) that generates and transmits the notification signals in response to detection of specific events at the first UE; and a response notification mechanism that provides a sensory response of the peripheral device following receipt of a notification of a detected event (NDE) signal. The device further includes an embedded controller coupled to each of the other components and which includes firmware that, when executed on the embedded controller, configures the embedded controller to: establish a communication link between the communication mechanism and the first UE; and in response to detecting a receipt of the NDE signal from the first UE, trigger the response notification mechanism to exhibit the sensory response.
H04M 19/04 - Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone the ringing-current being generated at the substations
H04M 1/72412 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
H04W 4/14 - Short messaging services, e.g. short message service [SMS] or unstructured supplementary service data [USSD]
17.
Audio routing system for routing audio data to and from a mobile device
A method includes receiving sound by a first audio unit installed in an electrical outlet, routing audio data corresponding to the received sound from the first audio unit to a second audio unit installed in a second electrical outlet, and sending the audio data to a mobile device using a wireless link between the mobile device and the second audio unit. Routing the audio data may include receiving the audio data from the first audio unit by a third audio unit and routing the audio data to the second audio unit by the third audio unit serving as a router. The data may be routed using table-driven routing, on-demand routing or some other appropriate routing protocol. The method may also include performing voice recognition on the audio data and detecting a command word and routing command word data to the second audio unit.
A method on a mobile device for a wireless network is described. An audio input is monitored for a trigger phrase spoken by a user of the mobile device. A command phrase spoken by the user after the trigger phrase is buffered. The command phrase corresponds to a call command and a call parameter. A set of target contacts associated with the mobile device is selected based on respective voice validation scores and respective contact confidence scores. The respective voice validation scores are based on the call parameter. The respective contact confidence scores are based on a user context associated with the user. A call to a priority contact of the set of target contacts is automatically placed if the voice validation score of the priority contact meets a validation threshold and the contact confidence score of the priority contact meets a confidence threshold.
Using the techniques discussed herein, a set of images is captured by one or more array imagers (106). Each array imager includes multiple imagers configured in various manners. Each array imager captures multiple images of substantially a same scene at substantially a same time. The images captured by each array imager are encoded by multiple processors (112, 114). Each processor can encode sets of images captured by a different array imager, or each processor can encode different sets of images captured by the same array imager. The encoding of the images is performed using various image-compression techniques so that the information that results from the encoding is smaller, in terms of storage size, than the uncompressed images.
H04N 13/282 - Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
H04N 23/80 - Camera processing pipelines; Components thereof
H04N 23/957 - Light-field or plenoptic cameras or camera modules
H04N 19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/62 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding by frequency transforming in three dimensions
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 23/75 - Circuitry for compensating brightness variation in the scene by influencing optical camera components
G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
20.
Name composition assistance in messaging applications
A method includes identifying, at an electronic device, a candidate name responsive to user input indicating a salutational trigger during composition of a body of a message of a messaging application. Identifying the candidate name includes at least one of: parsing a recipient-specific portion of a recipient message address of the message; parsing a display name associated with the recipient message address; parsing a content of the message body; parsing an attachment name associated with an attachment field of the message; identifying the candidate name from a contact record selected from a contacts database based on a recipient-specific portion of a recipient message address of the message; and parsing user-readable content of an application from which composition of the message was triggered. The method further includes facilitating composition of a recipient name in the body of the message based on the candidate name.
H04L 51/48 - Message addressing, e.g. address format or anonymous messages, aliases
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
H04L 61/4594 - Address books, i.e. directories containing contact information about correspondents
21.
Moving content between set top box and end devices in home
A content moving device enables content stored on a first user device, such as a DVR, in a first format and resolution to be provided to a second user device, such as a portable media player (PMP), in a second format and resolution. The content moving device identifies content on the first user device as candidate content which may be desired by the PMP and receives the candidate content from the DVR. The content moving device transcodes the candidate content at times independent of a request from the PMP for the content. The content moving device may provide a list of available transcoded content to the PMP for selection, and provide selected content to the PMP. The content moving device may also provide information relating to any protection schemes of the content provided to the PMP, such as DRM rights and decryption keys. The content moving device performs the often computationally intensive and time-consuming transcoding of user content to enable the user to move content between different user devices in a convenient manner.
H04N 21/4363 - Adapting the video stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/41 - Structure of client; Structure of client peripherals
Disclosed is a method of associating, at a secondary device, secondary media content with primary media content being output at a primary device. The method includes receiving, at the secondary device, first information based upon the primary content being output at the primary device, wherein the first information includes at least one of an audio and a visual signal, determining at the secondary device second information corresponding to the first information, receiving at the secondary device one or more portions of secondary media content that have been made available by a third device, determining at the secondary device whether one or more of the portions of the secondary media content match one or more portions of the second information, and taking at least one further action upon determining that there is a match.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/658 - Transmission by the client directed to the server
H04N 21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. UMID [Unique Material Identifier]
H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
H04N 21/439 - Processing of audio elementary streams
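The matching step in this abstract — comparing portions of secondary content against information derived from the primary content — can be sketched as a set-membership check over content fingerprints (the fingerprint representation is an assumption, not from the patent):

```python
def matching_portions(primary_info, secondary_portions):
    """Return indices of secondary-content portions that match one
    or more portions of the primary-derived information."""
    primary = set(primary_info)
    return [i for i, p in enumerate(secondary_portions) if p in primary]
```

A non-empty result would then trigger the "at least one further action" the abstract refers to.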
23.
Method and apparatus for providing a secure communication in a self-organizing network
A communication system provides secure communication between two nodes in a self-organizing network without the need for a centralized security or control device. A first node of the two nodes is provisioned with one or more security profiles, auto-discovers a second node of the two nodes, authenticates the second node based on a security profile of the one or more security profiles, selects a security profile of the one or more security profiles to encrypt a communication session between the two nodes, and encrypts the communication session between the two nodes based on the selected security profile. The second node is also provisioned with the same one or more security profiles, authenticates the first node based on a same security profile as is used to authenticate the second node, and encrypts the communication session based on the same security profile as is used for encryption by the first node.
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
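A minimal sketch of the pairwise negotiation this abstract describes: both nodes hold the same provisioned profiles, pick a common one, and prove knowledge of its shared secret with a challenge-response. The HMAC handshake, profile identifiers, and tie-break rule are illustrative assumptions, not the patented scheme.

```python
import hashlib
import hmac

class Node:
    def __init__(self, name, profiles):
        self.name = name
        self.profiles = profiles  # profile id -> pre-shared secret (bytes)

    def prove(self, profile_id, challenge):
        # Proof of knowledge of the profile's shared secret.
        return hmac.new(self.profiles[profile_id], challenge,
                        hashlib.sha256).digest()

    def authenticate(self, peer_proof, profile_id, challenge):
        # Verify the peer knows the same shared secret for this profile.
        expected = hmac.new(self.profiles[profile_id], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, peer_proof)

def select_common_profile(node_a, node_b):
    # Auto-discovery would exchange profile ids over the air; here both
    # sides are local. Tie-break: highest profile id (an assumption).
    common = set(node_a.profiles) & set(node_b.profiles)
    return max(common) if common else None
```

After a successful mutual challenge-response, the agreed profile would also select the cipher used to encrypt the session.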
Methods and systems for a content server to select sets of video streams having different encoding parameters for transmitting the sets of video streams to a media device are disclosed herein. In some embodiments, a method for transmitting video streams for a media program from a server to a media device includes: selecting, by the server, first encoding parameters including a first bitrate for a first set of video streams for the media program based on a first estimated bandwidth capacity for a network linking the server and the media device, transmitting the first set of video streams from the server to the media device, determining, by the server, second encoding parameters including a second bitrate for a second set of video streams for the media program, and transmitting the second set of video streams from the server to the media device.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 47/25 - Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
H04L 65/613 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for the control of the source by the destination
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/6373 - Control signals issued by the client directed to the server or network components for rate control
H04N 21/6379 - Control signals issued by the client directed to the server or network components directed to server directed to encoder
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks
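The server-side selection step above can be sketched as picking the highest rung of a bitrate ladder that fits the estimated bandwidth. The ladder values and the 0.8 safety margin are assumptions for illustration, not parameters from the claims.

```python
# Assumed encoding ladder, in kbit/s, lowest to highest quality.
BITRATE_LADDER_KBPS = [400, 800, 1600, 3200, 6400]

def select_bitrate(estimated_bandwidth_kbps, margin=0.8):
    """Return the highest ladder bitrate that fits within the margin.

    The margin keeps headroom so the stream survives small bandwidth dips;
    if even the lowest rung does not fit, fall back to it anyway.
    """
    budget = estimated_bandwidth_kbps * margin
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]
```

Re-running the selection with a new bandwidth estimate yields the second set of encoding parameters described in the abstract.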
A method in an electronic device, the method includes projecting infrared (“IR”) light from a plurality of light emitting diodes (“LEDs”) disposed proximate to the perimeter of the electronic device, detecting, by a sensor, IR light originating from at least two of the plurality of LEDs reflected off a person, and carrying out a function based on the relative strength of the detected IR light from the LEDs.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 1/16 - Constructional details or arrangements
G06F 1/3231 - Monitoring the presence, absence or movement of users
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
H04M 1/72448 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
H04M 1/02 - Constructional features of telephone sets
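The relative-strength decision in this abstract might look like the following; the LED positions, detection threshold, and gesture mapping are illustrative assumptions.

```python
def detect_gesture(led_strengths, threshold=10):
    """Map reflected IR intensity per perimeter LED to a gesture label.

    led_strengths: mapping of LED position ('left', 'right', 'top',
    'bottom') to detected reflected intensity. Returns a label or None.
    """
    if not led_strengths or max(led_strengths.values()) < threshold:
        return None  # nothing reflected strongly enough: no hand nearby
    # The function carried out depends on which LED's reflection dominates.
    strongest = max(led_strengths, key=led_strengths.get)
    return {"left": "swipe_left", "right": "swipe_right",
            "top": "swipe_up", "bottom": "swipe_down"}.get(strongest)
```

A real implementation would track strengths over time to distinguish swipes from static hover, which this single-frame sketch omits.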
26.
Sharing media among remote access clients in a universal plug and play environment
In general, in one aspect, the disclosure describes a Universal Plug and Play (UPnP) Remote Access Server (RAS) to provide a communication channel between UPnP Remote Access Clients (RACs) connected thereto. The UPnP RAS maintains local discovery information for UPnP devices connected to a local network and remote discovery information for remote UPnP devices communicating therewith. The UPnP RAS provides the remote UPnP devices communicating therewith with the local discovery information and the remote discovery information. The remote discovery information is utilized by a first remote UPnP device to discover a second UPnP device and vice versa. After discovery, a first remote UPnP device can communicate with a second UPnP device and vice versa.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
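The discovery-information merge described above can be sketched as follows; the record shapes and method names are assumptions, not the UPnP Remote Access interfaces.

```python
class RemoteAccessServer:
    """Toy RAS holding local and remote UPnP discovery records."""

    def __init__(self):
        self.local_devices = {}   # UDN -> description (local network)
        self.remote_devices = {}  # UDN -> description (remote clients)

    def register_local(self, udn, description):
        self.local_devices[udn] = description

    def register_remote(self, udn, description):
        self.remote_devices[udn] = description

    def discovery_info_for(self, requesting_udn):
        # Each remote client receives the local records plus the other
        # remote records, so two remote clients can discover each other.
        combined = {**self.local_devices, **self.remote_devices}
        combined.pop(requesting_udn, None)  # a client need not discover itself
        return combined
```

With the merged view in hand, either remote client can then open a media-sharing session with the other, as the abstract describes.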
A method including: transmitting, by a control device, a first portion of content comprising a first portion of a signal corresponding to a multimedia presentation characteristic of a peripheral device; receiving, when an adjustment of the signal is below an adjustment threshold, a first instance of an input indicating a request to change the multimedia presentation characteristic; in response to receiving the first instance of the input, adjusting a second portion of the signal and transmitting a second portion of the content comprising the adjusted second portion of the signal; receiving, when the adjustment of the signal is at or above the adjustment threshold, a second instance of the input; and transmitting, in response to receiving the second instance of the input, a communication signal to the peripheral device to adjust a peripheral device control of an output of the multimedia presentation characteristic.
H04M 1/72412 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
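A hypothetical sketch of the two-stage adjustment above: below the threshold, the control device adjusts the signal it transmits; at or above the threshold, it instead sends a command for the peripheral to adjust its own control. Threshold and step values are assumptions.

```python
class VolumeController:
    """Two-stage volume control between a source device and a peripheral."""

    def __init__(self, threshold=5, step=1):
        self.threshold = threshold
        self.step = step
        self.signal_adjust = 0        # adjustment applied to the signal itself
        self.peripheral_commands = [] # commands sent to the peripheral

    def volume_up(self):
        if self.signal_adjust < self.threshold:
            # Stage 1: scale the transmitted signal.
            self.signal_adjust += self.step
        else:
            # Stage 2: signal adjustment exhausted; command the peripheral.
            self.peripheral_commands.append("peripheral_volume_up")
```

This keeps fine-grained control at the source while still reaching the peripheral's full output range once the signal-level headroom runs out.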
28.
RETRIEVAL OF DATA ACROSS MULTIPLE PARTITIONS OF A STORAGE DEVICE USING DIGITAL SIGNATURES
A system and method for exchanging data among partitions of a storage device is disclosed. For example, data stored in a first partition is exchanged with an application included in the first partition or with a second application included in a second partition. In one embodiment, the second application is associated with a global certificate while the first application is associated with a different platform certificate. A verification module included in the first partition receives a request for data and determines if the request for data is received from the first application. If the request for data is not received from the first application, the verification module determines whether the request is received from the second application and whether the global certificate is an authorized certificate. For example, the verification module determines whether the global certificate is included in a listing of authorized certificates.
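The verification module's decision above reduces to a short check; certificate handling is simplified here to string identifiers against a whitelist, an assumption for illustration.

```python
# Assumed whitelist of authorized (e.g. global) certificates.
AUTHORIZED_CERTIFICATES = {"global-cert-1"}

def verify_request(requesting_app, certificate, first_partition_app):
    """Decide whether a data request against the first partition is allowed.

    Requests from the first partition's own application pass outright;
    any other application must present an authorized certificate.
    """
    if requesting_app == first_partition_app:
        return True
    return certificate in AUTHORIZED_CERTIFICATES
```

In the described system, the whitelist check corresponds to consulting the listing of authorized certificates, and a platform certificate outside that listing is refused.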
Embodiments are provided for syncing multiple electronic devices for collective audio playback. According to certain aspects, a master device connects (218) to a slave device via a wireless connection. The master device calculates (224) a network latency via a series of network latency pings with the slave device and sends (225) the network latency to the slave device. Further, the master device sends (232) a portion of an audio file as well as a timing instruction including a system time to the slave device. The master device initiates (234) playback of the portion of the audio file and the slave device initiates (236) playback of the portion of the audio file according to the timing instruction and a calculated system clock offset value.
H04H 20/38 - Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast
H04H 20/61 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
H04H 20/08 - Arrangements for relaying broadcast information among terminal devices
H04H 20/18 - Arrangements for synchronising broadcast or distribution via plural systems
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04H 60/88 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks which are wireless networks
H04H 20/57 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for mobile receivers
H04H 20/63 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast to plural spots in a confined site, e.g. MATV [Master Antenna Television]
H04H 60/80 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by source locations or destination locations characterised by transmission among terminal devices
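The timing arithmetic implied above (latency from round-trip pings, a clock offset, and a scheduled start on the slave's clock) can be sketched with textbook NTP-style estimates; these formulas are assumptions for illustration, not the patented method.

```python
def estimate_latency(ping_round_trips_ms):
    """One-way network latency: half the mean of round-trip ping times."""
    return (sum(ping_round_trips_ms) / len(ping_round_trips_ms)) / 2.0

def clock_offset(master_time_ms, slave_time_ms, one_way_latency_ms):
    """Offset mapping the master's system clock onto the slave's clock.

    master_time_ms is the timestamp the master placed in its message;
    slave_time_ms is the slave's clock reading when that message arrived.
    """
    return slave_time_ms - (master_time_ms + one_way_latency_ms)

def slave_start_time(master_start_ms, offset_ms):
    """When, on the slave's clock, to begin playback of the audio portion."""
    return master_start_ms + offset_ms
```

Starting playback at the converted time is what keeps the two devices' renditions of the same audio portion aligned.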
30.
Method and apparatus for distribution of 3D television program materials
Method, apparatus and computer readable media for receiving a multiprogram transport service that includes one or more compressed video services and one or more 3D-2D conversion options, generating an uncompressed video signal by performing a decoding portion of a transcoding operation for one of the one or more of the video services, determining from the 3D-2D conversion option whether a 3D-2D conversion is to be performed, performing a scale conversion on the uncompressed video according to a specified type of 3D-2D conversion, generating a compressed video service by performing an encoding portion of a transcoding operation on the uncompressed video that has been scale converted, and generating a second multiprogram transport service that includes the compressed video signal that has been 3D-2D converted.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
31.
Method and apparatus for using image data to aid voice recognition
A device performs a method for using image data to aid voice recognition. The method includes the device capturing image data of a vicinity of the device and adjusting, based on the image data, a set of parameters for voice recognition performed by the device. The set of parameters for the device performing voice recognition include, but are not limited to: a trigger threshold of a trigger for voice recognition; a set of beamforming parameters; a database for voice recognition; and/or an algorithm for voice recognition. The algorithm may include using noise suppression or using acoustic beamforming.
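One way the image-based adjustment could work in practice is sketched below; the field names, the halved trigger threshold, and the beam-direction update are assumptions, not the claimed parameter set.

```python
def adjust_recognition_params(image_info, params):
    """Tune voice-recognition parameters from image-derived cues.

    image_info: e.g. {"face_detected": True, "speaker_angle_deg": 30}
    params: current parameters, e.g. {"trigger_threshold": 0.8}
    Returns a new parameter dict; the input dict is left unchanged.
    """
    updated = dict(params)
    if image_info.get("face_detected"):
        # A visible user makes speech more likely: lower the trigger bar.
        updated["trigger_threshold"] = params["trigger_threshold"] * 0.5
    if "speaker_angle_deg" in image_info:
        # Steer acoustic beamforming toward the located speaker.
        updated["beam_direction_deg"] = image_info["speaker_angle_deg"]
    return updated
```

The same pattern extends to the other parameters named in the abstract, such as swapping in a different recognition database or algorithm.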
A method includes receiving sound by a first audio unit installed in an electrical outlet, routing audio data corresponding to the received sound from the first audio unit to a second audio unit installed in a second electrical outlet, and sending the audio data to a mobile device using a wireless link between the mobile device and the second audio unit. Routing the audio data may include receiving the audio data from the first audio unit by a third audio unit and routing the audio data to the second audio unit by the third audio unit serving as a router. The data may be routed using table driven routing, on-demand routing or some other appropriate routing protocol. The method may also include performing voice recognition on the audio data and detecting a command word and routing command word data to the second audio unit.
Techniques and apparatuses for recognizing accented speech are described. In some embodiments, an accent module recognizes accented speech using an accent library based on device data, uses different speech recognition correction levels based on an application field into which recognized words are set to be provided, or updates an accent library based on corrections made to incorrectly recognized speech.
A recipient communication device and method wherein a user authenticates a message that is being received. The method includes receiving, by a messaging utility of the recipient communication device, a message transmitted from a sender communication device. The messaging utility determines that one of (a) sender authentication of the message and (b) recipient authentication to open the message is required. In response to sender authentication being required, the recipient communication device transmits a request to the sender communication device for sender authentication of the message, and receives a certification of the message based on an authentication of a user input via the sender communication device. When recipient authentication is required, the recipient is prompted to enter biometric input at the recipient device. In one embodiment, a clearinghouse service authenticates a user of a communication device in order for the recipient communication device to receive certification of the user and/or the message.
Disclosed is a method of associating, at a secondary device, secondary media content with primary media content being output at a primary device. The method includes receiving, at the secondary device, first information based upon the primary content being output at the primary device, wherein the first information includes at least one of an audio and a visual signal, determining at the secondary device second information corresponding to the first information, receiving at the secondary device one or more portions of secondary media content that have been made available by a third device, determining at the secondary device whether one or more of the portions of the secondary media content match one or more portions of the second information, and taking at least one further action upon determining that there is a match.
H04H 60/52 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying locations of users
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
H04N 5/445 - Receiver circuitry for displaying additional information
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/41 - Structure of client; Structure of client peripherals
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/658 - Transmission by the client directed to the server
H04N 21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. UMID [Unique Material Identifier]
H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
H04N 21/439 - Processing of audio elementary streams
36.
Method and device with intelligent media management
A method (300) and device (200) with intelligent media management is disclosed. The method (300) can include: streaming (310) media content in a wireless communication device; identifying (320) a media signature of the streamed media content; searching (330) a stored library for the identified media signature; and playing (340) locally stored media content, if the search results in finding a match with the identified media signature in the stored library. Thus, when a match occurs, locally stored media content replaces the streamed media content, to provide substantially lower power consumption and enhanced battery life in connection with wireless communication devices.
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/439 - Processing of audio elementary streams
H04N 21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. UMID [Unique Material Identifier]
H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
H04L 67/5683 - Storage of data provided by user terminals, i.e. reverse caching
H04N 21/61 - Network physical structure; Signal processing
H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
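The match-and-switch logic above can be sketched as a signature lookup; hashing a stream chunk with SHA-256 stands in for real audio/video fingerprinting, which is an assumption of this sketch.

```python
import hashlib

def media_signature(stream_chunk):
    # Stand-in for a real content fingerprint of the streamed media.
    return hashlib.sha256(stream_chunk).hexdigest()

def choose_source(stream_chunk, local_library):
    """Pick local playback when the stream's signature is in the library.

    local_library: mapping of signature -> local file path.
    Returns ("local", path) or ("stream", None).
    """
    sig = media_signature(stream_chunk)
    if sig in local_library:
        return ("local", local_library[sig])  # lower-power local playback
    return ("stream", None)                   # no match: keep streaming
```

Switching to the local copy on a match is what yields the power and battery-life benefit the abstract claims, since the radio can stop pulling the stream.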
37.
Method and apparatus for evaluating trigger phrase enrollment
An electronic device includes a microphone that receives an audio signal that includes a spoken trigger phrase, and a processor that is electrically coupled to the microphone. The processor measures characteristics of the audio signal, and determines, based on the measured characteristics, whether the spoken trigger phrase is acceptable for trigger phrase model training. If the spoken trigger phrase is determined not to be acceptable for trigger phrase model training, the processor rejects the trigger phrase for trigger phrase model training.
G10L 15/18 - Speech classification or search using natural language modelling
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 21/0264 - Noise filtering characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques
G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
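The acceptance decision above might gate on a few measured characteristics of the spoken phrase; the specific metrics (duration, SNR, clipping) and their limits are assumptions for illustration.

```python
def phrase_acceptable(duration_ms, snr_db, clipped_fraction,
                      min_ms=500, max_ms=3000,
                      min_snr_db=10.0, max_clip=0.01):
    """Decide whether a spoken trigger phrase is usable for model training."""
    if not (min_ms <= duration_ms <= max_ms):
        return False   # too short or too long to train a reliable model
    if snr_db < min_snr_db:
        return False   # background noise would corrupt the template
    if clipped_fraction > max_clip:
        return False   # microphone overload distorts the waveform
    return True
```

A rejected phrase would prompt the user to repeat the enrollment utterance rather than feeding a poor sample into the trigger phrase model.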
A method includes receiving sound by a first audio unit installed in an electrical outlet, routing audio data corresponding to the received sound from the first audio unit to a second audio unit installed in a second electrical outlet, and sending the audio data to a mobile device using a wireless link between the mobile device and the second audio unit. Routing the audio data may include receiving the audio data from the first audio unit by a third audio unit and routing the audio data to the second audio unit by the third audio unit serving as a router. The data may be routed using table driven routing, on-demand routing or some other appropriate routing protocol. The method may also include performing voice recognition on the audio data and detecting a command word and routing command word data to the second audio unit.
A system is provided for use with a video input signal and a video unit. The video input signal can be one of a two dimensional video signal and a three dimensional video signal. The video unit can display a three dimensional video and a two dimensional video. The system includes a receiver portion, a processing portion, a switching portion and an output portion. The receiver portion can receive the video input signal. The processing portion can output a first signal in a first mode of operation and can output a second signal in a second mode of operation, wherein the first signal is based on the video input signal and the second signal is based on the video input signal. The switching portion can switch the processing portion from the first mode of operation to the second mode of operation. The output portion can provide an output signal to the video unit, wherein the output signal is based on the first signal when the processing portion operates in the first mode of operation and wherein the output signal is based on the second signal when the processing portion operates in the second mode of operation. The first signal includes a two dimensional video signal, whereas the second signal includes a three dimensional video signal.
H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
H04N 13/139 - Format conversion, e.g. of frame-rate or size
A content moving device enables content stored on a first user device, such as a DVR, in a first format and resolution to be provided to a second user device, such as a portable media player (PMP), in a second format and resolution. The content moving device identifies content on the first user device as candidate content which may be desired by the PMP and receives the candidate content from the DVR. The content moving device transcodes the candidate content at times independent of a request from the PMP for the content. The content moving device may provide a list of available transcoded content to the PMP for selection, and provide selected content to the PMP. The content moving device may also provide information relating to any protection schemes of the content provided to the PMP, such as DRM rights and decryption keys. The content moving device performs the often computationally intense and time consuming transcoding of user content to enable the user to move content between different user devices in a convenient manner.
H04N 21/41 - Structure of client; Structure of client peripherals
H04N 21/4363 - Adapting the video stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
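The transcode-ahead behavior above can be sketched as a small cache filled independently of any request from the portable device; the titles and format names here are illustrative.

```python
class ContentMover:
    """Toy content moving device: transcodes ahead of PMP requests."""

    def __init__(self):
        self.transcoded = {}  # title -> target format it was transcoded to

    def transcode_candidates(self, dvr_titles, target_format="pmp-h264"):
        # Runs at times independent of any PMP request (e.g. overnight),
        # absorbing the computationally intense transcoding work up front.
        for title in dvr_titles:
            self.transcoded[title] = target_format

    def available_list(self):
        # The selection list offered to the PMP.
        return sorted(self.transcoded)

    def fetch(self, title):
        # Serve a pre-transcoded item, or None if it was never prepared.
        return self.transcoded.get(title)
```

Because the expensive work already happened, a PMP request is answered from the cache rather than triggering an on-demand transcode.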
41.
Alert peripheral for notification of events occurring on a programmable user equipment with communication capabilities
An alert peripheral device that provides sensory notification to a user of the device includes: a power subsystem; a communication mechanism by which notification signals are received from a first user equipment (UE) that generates and transmits the notification signals in response to detection of specific events at the first UE; and a response notification mechanism that provides a sensory response of the peripheral device following receipt of a notification of a detected event (NDE) signal. The device further includes an embedded controller coupled to each of the other components and which includes firmware that when executed on the embedded controller configures the embedded controller to: establish a communication link between the communication mechanism and the first UE; and in response to detecting a receipt of the NDE signal from the first UE, trigger the response notification mechanism to exhibit the sensory response.
H04M 19/04 - Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone the ringing-current being generated at the substations
H04M 1/72412 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
H04W 4/14 - Short messaging services, e.g. short message service [SMS] or unstructured supplementary service data [USSD]
42.
Electronic device with gesture detection system and methods for using the gesture detection system
A method in an electronic device, the method includes projecting infrared (“IR”) light from a plurality of light emitting diodes (“LEDs”) disposed proximate to the perimeter of the electronic device, detecting, by a sensor, IR light originating from at least two of the plurality of LEDs reflected off a person, and carrying out a function based on the relative strength of the detected IR light from the LEDs.
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 1/16 - Constructional details or arrangements
G06F 1/3231 - Monitoring the presence, absence or movement of users
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
H04M 1/72448 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
H04M 1/02 - Constructional features of telephone sets
43.
Adaptive method for biometrically certified communication
A communication device and method for authentication of a message being transmitted from the communication device. The method includes receiving, by a messaging utility, content of a message provided for transmission from the communication device. Based on a determination that the message requires user authentication before the message is transmitted to a recipient, the method further includes selecting, based on contextual data, one or more biometric capturing components of the communication device; triggering at least one selected biometric capturing component to capture a corresponding biometric input from a user of the communication device; and transmitting the message when the biometric input is authenticated as belonging to an authorized user of the communication device. In one embodiment, a clearinghouse service authenticates a biometric input from a user of the communication device in order to certify the user and/or the message.
An apparatus obtains application state information for another device and displays a login screen on a display that provides information for at least one application running on the other device. The information displayed may be an icon corresponding to an application running on the other device. The application state information may include an application identifier, a content identifier and a pointer to a location at which a given content is accessed by the application. An apparatus includes a display, application state monitor logic, operative to obtain application state information for another device, and login screen configuration logic, operatively coupled to the display. The login screen configuration logic is operative to configure a login screen on the display to provide information for at least one application running on the other device, based on the application state information for the other device obtained by the application state monitor logic.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/04842 - Selection of displayed objects or displayed text elements
H04W 4/60 - Subscription-based services using application servers or record carriers, e.g. SIM application toolkits
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
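The data flow above, from application state records obtained from the other device to login-screen entries, might be sketched as a simple mapping; the field names are assumptions.

```python
def build_login_screen(app_states):
    """Turn application state records into login-screen entries.

    app_states: list of dicts with 'app_id', 'content_id', and 'pointer'
    (the location at which the content is being accessed by the app).
    Each entry carries an icon for the running app plus resume data.
    """
    return [{"icon": state["app_id"],
             "resume": (state["content_id"], state["pointer"])}
            for state in app_states]
```

Selecting an icon at login could then hand the resume tuple to the local application so it picks up where the other device left off.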
A mobile device system and related method are disclosed by which the device is able to communicate wirelessly not only via a Wide Area Network (WAN) link but also via an alternate link such as a Wi-Fi link. In one embodiment, the method includes receiving a command from a remote server, via the WAN link, to establish a Wi-Fi link when possible. The method further includes, upon establishing the Wi-Fi link, sending a message for receipt by the server indicating that the Wi-Fi link has been established, and receiving software update information from the server, the information being communicated to the mobile device via the Wi-Fi link. Further, the method includes one or both of (1) sending an acknowledgement for receipt by the server indicating that the information has been received and (2) receiving an instruction from the server that communications via the Wi-Fi link be ended.
Methods and systems for a content server to select sets of video streams having different encoding parameters for transmitting the sets of video streams to a media device are disclosed herein. In some embodiments, a method for transmitting video streams for a media program from a server to a media device includes: selecting, by the server, first encoding parameters including a first bitrate for a first set of video streams for the media program based on a first estimated bandwidth capacity for a network linking the server and the media device, transmitting the first set of video streams from the server to the media device, determining, by the server, second encoding parameters including a second bitrate for a second set of video streams for the media program, and transmitting the second set of video streams from the server to the media device.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/6373 - Control signals issued by the client directed to the server or network components for rate control
H04N 21/6379 - Control signals issued by the client directed to the server or network components directed to server directed to encoder
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
H04L 12/825 - Adaptive control, at the source or intermediate nodes, upon congestion feedback, e.g. X-on X-off
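The server-side bitrate adaptation described in the abstract above can be sketched as follows. The bitrate ladder, bandwidth figures, and the `headroom` factor are illustrative assumptions, not values from the text.

```python
# Illustrative sketch of server-side bitrate selection for successive
# sets of video streams, based on an estimated network bandwidth.

BITRATE_LADDER_KBPS = [500, 1200, 2500, 5000]  # hypothetical encoding bitrates

def select_bitrate(estimated_bandwidth_kbps: float, headroom: float = 0.8) -> int:
    """Pick the highest ladder bitrate that fits within the estimated
    bandwidth, leaving some headroom for network variation."""
    budget = estimated_bandwidth_kbps * headroom
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return max(candidates) if candidates else min(BITRATE_LADDER_KBPS)

# First set of streams uses parameters chosen for the first estimate;
# a later, different estimate yields the second encoding parameters.
first = select_bitrate(3200)   # higher estimate -> higher bitrate
second = select_bitrate(1000)  # lower estimate -> lower bitrate
```

The same function serves both selection steps; only the bandwidth estimate changes between the first and second sets of streams.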
A method on a mobile device for processing an audio input is described. A trigger for the audio input is received. At least one parameter is determined for an audio processor based on at least one input characteristic for the audio input. The audio input is routed to the audio processor with the at least one parameter.
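The determine-then-route flow in the audio-input abstract above can be sketched as below; the characteristic names and parameter mapping are purely illustrative assumptions.

```python
# Minimal sketch: derive processor parameters from input characteristics,
# then route the audio input to the processor with those parameters.

def determine_parameters(characteristics: dict) -> dict:
    """Map input characteristics (e.g. noise level, sample rate) to
    processor parameters; this mapping is an illustrative stand-in."""
    params = {}
    if characteristics.get("noise_level", 0.0) > 0.5:
        params["noise_suppression"] = "aggressive"
    else:
        params["noise_suppression"] = "mild"
    params["sample_rate"] = characteristics.get("sample_rate", 16000)
    return params

def route_audio_input(audio_input: bytes, characteristics: dict) -> dict:
    """Deliver the audio input together with its determined parameters."""
    params = determine_parameters(characteristics)
    return {"processor": "audio_processor", "params": params, "payload": audio_input}

routed = route_audio_input(b"...", {"noise_level": 0.7, "sample_rate": 48000})
```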
A navigation system and various methods of using the system are described herein. Search query results are refined by the system and are prioritized based at least in part upon sub-search categories selected during the searching process. Sub-searches can be represented by graphical icons displayed on the user interface.
A method (300) and apparatus (400) for three-dimensional television (3DTV) image adjustment includes loading (342, 344) default 2D-to-3D image setting values from a default settings memory to a user adjustment settings memory, annunciating (346) the default 2D-to-3D image setting values, receiving (361, 362) a 2D-to-3D image settings value adjustment, saving (370) the 2D-to-3D image settings value adjustment in the user adjustment settings memory, and applying (390) the 2D-to-3D image settings value adjustment to a 2D-to-3D converted image. These methods and apparatuses allow individual users to set 3DTV image settings to their personal preferences to compensate for brightness reductions caused by 3DTV glasses, depth perception sensitivities, and other image quality factors.
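The load/adjust/save/apply cycle of the 3DTV settings method above can be sketched as follows. The setting names and the brightness-compensation model are illustrative assumptions.

```python
# Sketch of the settings flow: load defaults into the user-adjustment
# memory (342/344), receive and save an adjustment (361/362, 370), and
# apply it to a 2D-to-3D converted image value (390).

default_settings = {"depth": 5, "brightness": 50}   # hypothetical defaults
user_settings = {}

def load_defaults():
    user_settings.update(default_settings)

def adjust(name: str, delta: int):
    user_settings[name] = user_settings[name] + delta

def apply_to_image(pixel_brightness: int) -> int:
    # compensate, e.g., for brightness loss caused by 3DTV glasses
    return pixel_brightness + user_settings["brightness"] - default_settings["brightness"]

load_defaults()
adjust("brightness", 10)
```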
Embodiments are provided for communicating notifications and other textual data associated with applications installed on an electronic device. According to certain aspects, a user can interface with an input device to send (218) a wake up trigger to the electronic device. The electronic device retrieves (222) application notifications and converts (288) the application notifications to audio data. The electronic device also sends (230) the audio data to an audio output device for annunciation (232). The user may also use the input device to send (242) a request to the electronic device to activate the display screen. The electronic device identifies (248) an application corresponding to an annunciated notification, and activates (254) the display screen and initiates the application.
A method and system for monitoring a location via a called telephony communication device is disclosed. The method at the called telephony communication device includes receiving a request from a calling telephony communication device. Further, the method includes determining whether the received request is for monitoring the location. The method further includes automatically transmitting audio/video data captured via the called telephony communication device to the calling telephony communication device when the received request is determined to be one for monitoring the location.
A method and apparatus for creating video clips is provided herein. A method includes displaying, by a client device, one or more video streams, displaying, by the client device, a plurality of soft buttons, each of the plurality of soft buttons associated with a different length of time, receiving, by the client device, a selection of a soft button from the plurality of soft buttons, and displaying, by the client device, a presentation area including one or more segments of the one or more video streams, the one or more segments of the length of time associated with the selected soft button and captured from the one or more video streams, wherein the presentation area has an x-axis representing time. The method further includes receiving, by the client device, selection information indicating for each of the one or more segments a starting point and an ending point selected by a user from the presentation area, and displaying, by the client device, an indication of the creation of a video clip from the one or more segments based on the selection information.
G11B 27/034 - Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
53.
Name composition assistance in messaging applications
A method includes identifying, at an electronic device, a candidate name responsive to user input indicating a salutational trigger during composition of a body of a message of a messaging application. Identifying the candidate name includes at least one of: parsing a recipient-specific portion of a recipient message address of the message; parsing a display name associated with the recipient message address; parsing a content of the message body; parsing an attachment name associated with an attachment field of the message; identifying the candidate name from a contact record selected from a contacts database based on a recipient-specific portion of a recipient message address of the message; and parsing user-readable content of an application from which composition of the message was triggered. The method further includes facilitating composition of a recipient name in the body of the message based on the candidate name.
A method and apparatus for selecting between multiple gesture recognition systems includes an electronic device determining a context of operation for the electronic device that affects a gesture recognition function performed by the electronic device. The electronic device also selects, based on the context of operation, one of a plurality of gesture recognition systems in the electronic device as an active gesture recognition system for receiving gesturing input to perform the gesture recognition function, wherein the plurality of gesture recognition systems comprises an image-based gesture recognition system and a non-image-based gesture recognition system.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
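The context-driven choice between the two recognition systems in the gesture-recognition abstract above can be sketched as below. The specific context rules (low light or a busy camera disfavoring the image-based system) are illustrative assumptions.

```python
# Sketch: select the active gesture recognition system from the
# operating context of the device.

def select_gesture_system(context: dict) -> str:
    """Return which recognition system should be active for gesturing input."""
    low_light = context.get("ambient_light_lux", 1000) < 10
    camera_busy = context.get("camera_in_use", False)
    if low_light or camera_busy:
        return "non-image-based"   # e.g. IR, ultrasound, or motion sensors
    return "image-based"           # camera-driven recognition

active = select_gesture_system({"ambient_light_lux": 5})
```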
55.
Systems and methods for equalizing audio for playback on an electronic device
A method includes receiving a request to output audio at a speaker of an electronic device, determining whether the speaker of the electronic device is facing substantially towards or away from a support surface, identifying, based on whether the speaker of the electronic device is facing substantially towards or away from the support surface, an equalization setting, and providing, for output at the speaker of the electronic device, an audio signal with the equalization setting.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
H04R 3/04 - Circuits for transducers for correcting frequency response
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
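The orientation-dependent equalization in the abstract above can be sketched as follows. The accelerometer threshold and the setting names are assumptions; the idea is that a speaker firing into a support surface gets boundary bass reinforcement, so a different EQ curve is chosen.

```python
# Sketch: pick an equalization setting from whether the speaker faces
# substantially towards or away from the support surface.

def facing_support_surface(z_accel: float) -> bool:
    """Crude check using the z-axis accelerometer reading (m/s^2):
    a strongly negative z suggests the speaker faces the table."""
    return z_accel < -8.0

def choose_equalization(z_accel: float) -> str:
    if facing_support_surface(z_accel):
        return "bass-cut"    # compensate for boundary reinforcement
    return "flat"

setting = choose_equalization(-9.6)   # device lying speaker-down
```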
In general, in one aspect, the disclosure describes a Universal Plug and Play (UPnP) Remote Access Server (RAS) to provide a communication channel between UPnP Remote Access Clients (RACs) connected thereto. The UPnP RAS maintains local discovery information for UPnP devices connected to a local network and remote discovery information for remote UPnP devices communicating therewith. The UPnP RAS provides the remote UPnP devices communicating therewith with the local discovery information and the remote discovery information. The remote discovery information is utilized by a first remote UPnP device to discover a second UPnP device and vice versa. After discovery, a first remote UPnP device can communicate with a second UPnP device and vice versa.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/12 - Arrangements, apparatus, circuits or systems, not covered by a single one of groups characterised by the data terminal
H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
Method, apparatus and computer readable media for receiving a multiprogram program transport service that includes one or more compressed video services and one or more 3D-2D conversion options, generating an uncompressed video signal by performing a decoding portion of a transcoding operation for one of the one or more of the video services, determining from the 3D-2D conversion option whether a 3D-2D conversion is to be performed, performing a scale conversion on the uncompressed video according to a specified type of 3D-2D conversion, generating a compressed video service by performing an encoding portion of a transcoding operation on the uncompressed video that has been scale converted, and generating a second multiprogram program transport service that includes the compressed video signal that has been 3D-2D converted.
H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
Techniques (300, 400, 500) and apparatuses (100, 200, 700) for recognizing accented speech are described. In some embodiments, an accent module recognizes accented speech using an accent library based on device data, uses different speech recognition correction levels based on an application field into which recognized words are set to be provided, or updates an accent library based on corrections made to incorrectly recognized speech.
The various implementations described herein include methods and systems for collecting media associated with a mobile device. In one aspect, a method is performed at a computing system. The method comprises receiving and storing, without user interaction, video and audio data captured during a predefined time period by a plurality of distributed video devices configured to monitor one or more vicinities, and mobile device presence information from which presence of mobile devices in vicinity of the video devices can be determined throughout the predefined time period. The method further comprises receiving from a requestor a request to identify from the captured video and audio data a first subset associated with a first person. The request includes first information of a mobile device associated with the first person. In response to the request, the first subset based on the mobile device presence information is identified and transmitted to the requestor.
H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
H04W 4/029 - Location-based management or tracking services
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
The various embodiments described herein include methods, devices, and systems for coupling wireless devices. In one aspect, a method includes receiving, at a server, an image captured by a first electronic device; obtaining, based at least in part on the received image, connection information for facilitating a wireless connection between the first electronic device and a second electronic device; and transmitting to the first electronic device the obtained connection information for facilitating a wireless connection between the first electronic device and a second electronic device.
H04L 29/12 - Arrangements, apparatus, circuits or systems, not covered by a single one of groups characterised by the data terminal
G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/62 - Methods or arrangements for recognition using electronic means
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
A method including: transmitting, by a control device, a first portion of content comprising a first portion of a signal corresponding to a multimedia presentation characteristic of a peripheral device; receiving, when an adjustment of the signal is below an adjustment threshold, a first instance of an input indicating a request to change the multimedia presentation characteristic; in response to receiving the first instance of the input, adjusting a second portion of the signal and transmitting a second portion of the content comprising the adjusted second portion of the signal; receiving, when the adjustment of the signal is at or above the adjustment threshold, a second instance of the input; and transmitting, in response to receiving the second instance of the input, a communication signal to the peripheral device to adjust a peripheral device control of an output of the multimedia presentation characteristic.
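The two-stage adjustment in the abstract above can be sketched as follows: while the signal-level adjustment is below the threshold, the control device scales the signal itself; once the threshold is reached, it instead sends a communication signal telling the peripheral to adjust its own control. The gain values and threshold are illustrative assumptions.

```python
# Sketch of threshold-gated adjustment of a multimedia presentation
# characteristic (e.g. volume): local signal adjustment first, then
# a command to the peripheral once local headroom is exhausted.

ADJUST_THRESHOLD = 1.0   # hypothetical: full-scale local gain reached

class ControlDevice:
    def __init__(self):
        self.signal_gain = 0.5
        self.peripheral_commands = []

    def request_increase(self, step: float = 0.25):
        if self.signal_gain + step <= ADJUST_THRESHOLD:
            self.signal_gain += step          # adjust the signal portion locally
        else:
            # at/above threshold: defer to the peripheral device control
            self.peripheral_commands.append("increase_output")

dev = ControlDevice()
dev.request_increase()   # local adjustment
dev.request_increase()   # local adjustment reaches the threshold
dev.request_increase()   # now a command goes to the peripheral
```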
A method (300) and device (200) with intelligent media management is disclosed. The method (300) can include: streaming (310) media content in a wireless communication device; identifying (320) a media signature of the streamed media content; searching (330) a stored library for the identified media signature; and playing (340) locally stored media content, if the search results in finding a match with the identified media signature in the stored library. Thus, when a match occurs, locally stored media content replaces the streamed media content, to provide substantially lower power consumption and enhanced battery life in connection with wireless communication devices.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/439 - Processing of audio elementary streams
H04N 21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. UMID [Unique Material Identifier]
H04N 21/61 - Network physical structure; Signal processing
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
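The signature lookup at the heart of the media-management abstract above can be sketched as below. A real implementation would use an audio/video fingerprint; the hash here is an illustrative stand-in, as are the library entries.

```python
# Sketch: identify a signature for streamed media and, on a library hit,
# switch playback to the locally stored copy to reduce radio power draw.
import hashlib

local_library = {}   # signature -> local file path

def media_signature(content: bytes) -> str:
    # stand-in for a real media fingerprint
    return hashlib.sha256(content).hexdigest()

def store_locally(content: bytes, path: str):
    local_library[media_signature(content)] = path

def playback_source(streamed_chunk: bytes) -> str:
    sig = media_signature(streamed_chunk)
    if sig in local_library:
        return local_library[sig]   # play the local copy, stop streaming
    return "stream"

store_locally(b"song-data", "/music/song.mp3")
src = playback_source(b"song-data")
```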
A method (400, 500) of advertising. The method can include communicating to a client (106) a hyperlink corresponding to a virtual world (300) and associating with the hyperlink an identifier corresponding to an advertisement (302) to be presented to a user (108) in the virtual world during a user session. The method also can include identifying an identifier (114) corresponding to an advertisement to be displayed in a virtual world during a user session in response to receiving a request (116) from a client identifying a uniform resource identifier corresponding to the virtual world, and presenting the advertisement within the virtual world during the user session. A method (600) of providing financial incentives for advertising can include receiving an advertising activity indicator (120, 122) and processing the advertising activity indicator to determine financial incentives to be provided to an entity associated with the website.
Generating a reconstructed frame may include reading a binary codeword corresponding to a transform coefficient for a sub-block of a transform unit, the transform coefficient having a quantized value, identifying a value of a parameter variable as zero in response to a determination that the transform coefficient is a first transform coefficient for the sub-block, and otherwise as an updated parameter variable value, converting the binary codeword into a symbol based on the value of the parameter variable, determining the absolute value of the transform coefficient corresponding to the symbol, wherein the quantized value for the transform coefficient is equal to or greater than a threshold value, by adding the threshold value to the symbol, including the transform coefficient in the sub-block of the transform unit, and generating a portion of the reconstructed frame based on the transform unit.
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
H04N 19/18 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
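The coefficient-reconstruction step above can be sketched as follows. The codeword-to-symbol conversion is a simplified stand-in for the actual parameter-driven binarization, and the threshold value is an assumption; the structure mirrors the abstract: a zero parameter for the first coefficient of a sub-block, then the threshold added back to recover the absolute level.

```python
# Sketch: convert a codeword to a symbol using a parameter variable,
# then recover the coefficient's absolute value by adding the threshold.

THRESHOLD = 3   # hypothetical: levels >= 3 use this escape-coding path

def decode_coefficient(codeword: int, is_first_in_subblock: bool,
                       updated_param: int) -> int:
    # parameter is zero for the first coefficient of the sub-block,
    # otherwise the updated parameter variable value
    param = 0 if is_first_in_subblock else updated_param
    symbol = codeword >> param        # simplified stand-in for de-binarization
    return symbol + THRESHOLD         # absolute value of the transform level

level = decode_coefficient(codeword=4, is_first_in_subblock=True, updated_param=2)
```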
65.
Method for collecting media associated with a mobile device
A media collection system (102) uses media collection devices (107) to record media in the vicinity of a mobile device (104). A method (300) for collecting media associated with a user of a mobile device (104) includes the mobile device detecting (304) a broadcast signal from a communication node of the media collection system (102) at a radio interface of the mobile device. Then the mobile device requests (308) a media collection service of the media collection system. In response, the mobile device receives (314) an access identifier from the media collection system. The access identifier can be used to access media collected by the media collection system. The mobile device can then cease a self-collection activity while in the vicinity of the media collection system.
H04N 5/77 - Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
Using the techniques discussed herein, a set of images is captured by one or more array imagers (106). Each array imager includes multiple imagers configured in various manners. Each array imager captures multiple images of substantially a same scene at substantially a same time. The images captured by each array imager are encoded by multiple processors (112, 114). Each processor can encode sets of images captured by a different array imager, or each processor can encode different sets of images captured by the same array imager. The encoding of the images is performed using various image-compression techniques so that the information that results from the encoding is smaller, in terms of storage size, than the uncompressed images.
H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
H04N 5/238 - Circuitry for compensating for variation in the brightness of the object by influencing optical part of the camera
H04N 19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/62 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding by frequency transforming in three dimensions
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining
67.
Accessing a cloud-based service using a communication device linked to another communication device via a peer-to-peer ad hoc communication link
Arrangements described herein relate to accessing a cloud based service. Responsive to a user of a first communication device initiating access to the cloud based service via the first communication device, a prompt for a valid password to be entered to access the cloud based service can be received by the first communication device. Responsive to the valid password required to access the cloud based service not being stored on the first communication device, the first communication device can automatically retrieve the valid password from a second communication device via a peer-to-peer ad hoc communication link between the first communication device and the second communication device. The valid password can be automatically provided, by the first communication device, to a login service for the cloud based service to obtain access by the first communication device to the cloud based service.
Embodiments are provided for receiving a request to output audio at a first speaker and a second speaker of an electronic device, determining that the electronic device is oriented in a portrait orientation or a landscape orientation, identifying, based on the determined orientation, a first equalization setting for the first speaker and a second equalization setting for the second speaker, providing, for output at the first speaker, a first audio signal with the first equalization setting, and providing, for output at the second speaker, a second audio signal with the second equalization setting.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
H04R 3/04 - Circuits for transducers for correcting frequency response
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
Disclosed are techniques that provide a “best” picture taken within a few seconds of the moment when a capture command is received (e.g., when the “shutter” button is pressed). In some situations, several still images are automatically (that is, without the user's input) captured. These images are compared to find a “best” image that is presented to the photographer for consideration. Video is also captured automatically and analyzed to see if there is an action scene or other motion content around the time of the capture command. If the analysis reveals anything interesting, then the video clip is presented to the photographer. The video clip may be cropped to match the still-capture scene and to remove transitory parts. Higher-precision horizon detection may be provided based on motion analysis and on pixel-data analysis.
In a method for enabling support for backwards compatibility in a User Domain, in one of a Rights Issuer (RI) and a Local Rights Manager (LRM), a Rights Object Encryption Key (REK) and encrypted REK are received from an entity that generated a User Domain Authorization for the one of the RI and the LRM and the REK is used to generate a User Domain Rights Object (RO) that includes the User Domain Authorization and the encrypted REK.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
G11B 20/00 - Signal processing not specific to the method of recording or reproducing; Circuits therefor
71.
Targeting content based on sensor network data while maintaining privacy of sensor network data
Determination of content for presentation by a client device based on item usage data captured by a sensor network is disclosed. Data describing usage of one or more items at a location is received from a sensor network associated with the location. Content is received from a server and a subset of the received content is selected based on attributes of the data from the sensor network and attributes of the content. The subset of the received content is transmitted to a client device for presentation. In one embodiment, data describing interaction with the subset of the received content is received from the client device and transmitted to a content distribution server for use in selecting additional content. In an embodiment, second content determined by the server using interaction with the subset of the received content and data from the sensor network is received from the server.
A method includes receiving sound by a first audio unit installed in an electrical outlet, routing audio data corresponding to the received sound from the first audio unit to a second audio unit installed in a second electrical outlet, and sending the audio data to a mobile device using a wireless link between the mobile device and the second audio unit. Routing the audio data may include receiving the audio data from the first audio unit by a third audio unit and routing the audio data to the second audio unit by the third audio unit serving as a router. The data may be routed using table driven routing, on-demand routing or some other appropriate routing protocol. The method may also include performing voice recognition on the audio data and detecting a command word and routing command word data to the second audio unit.
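The hop-by-hop relay in the outlet-audio abstract above, with a third unit acting as router, can be sketched with a simple static routing table. The unit names and the table itself are illustrative assumptions; the abstract also allows on-demand or other routing protocols.

```python
# Sketch of table-driven routing of audio data between audio units
# installed in electrical outlets, with an intermediate unit relaying.

routing_table = {
    ("unit1", "unit2"): ["unit1", "unit3", "unit2"],  # unit3 serves as router
}

def route_audio(src: str, dst: str, audio: bytes) -> list:
    """Return the hop-by-hop delivery log for the audio data."""
    path = routing_table.get((src, dst), [src, dst])  # direct link by default
    return [(hop, audio) for hop in path]

hops = route_audio("unit1", "unit2", b"pcm-frames")
```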
An electronic device includes a microphone that receives an audio signal that includes a spoken trigger phrase, and a processor that is electrically coupled to the microphone. The processor measures characteristics of the audio signal, and determines, based on the measured characteristics, whether the spoken trigger phrase is acceptable for trigger phrase model training. If the spoken trigger phrase is determined not to be acceptable for trigger phrase model training, the processor rejects the trigger phrase for trigger phrase model training.
G10L 15/18 - Speech classification or search using natural language modelling
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 21/0264 - Noise filtering characterised by the type of parameter measurement, e.g. correlation techniques, zero crossing techniques or predictive techniques
G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
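The accept/reject decision in the trigger-phrase abstract above can be sketched as below. The measured characteristics (energy and peak) and the thresholds are illustrative assumptions; a real implementation would measure richer signal properties.

```python
# Sketch: measure characteristics of the audio containing the spoken
# trigger phrase and reject it for model training if it is too quiet
# (likely no speech) or clipped.

def measure(audio: list) -> dict:
    peak = max(abs(s) for s in audio)
    energy = sum(s * s for s in audio) / len(audio)
    return {"peak": peak, "energy": energy}

def acceptable_for_training(audio: list,
                            min_energy: float = 0.001,
                            max_peak: float = 0.99) -> bool:
    m = measure(audio)
    if m["energy"] < min_energy:
        return False   # too quiet / likely no speech present
    if m["peak"] >= max_peak:
        return False   # clipped samples distort the trigger model
    return True

ok = acceptable_for_training([0.1, -0.2, 0.15, -0.1])
```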
A method, device, system, and article of manufacture are provided for generating an enhanced image of a predetermined scene from images. In one embodiment, a method comprises receiving, by a computing device, a first indication associated with continuous image capture of a predetermined scene being enabled; in response to the continuous image capture being enabled, receiving, by the computing device, from an image sensor, a reference image and a first image, wherein each of the reference image and the first image is of the predetermined scene and has a first resolution; determining an estimated second resolution of an enhanced image of the predetermined scene using the reference image and the first image; and in response to the continuous image capture being disabled, determining the enhanced image using the reference image and the first image, wherein the enhanced image has a second resolution that is at least the first resolution and about the estimated second resolution.
A content moving device enables content stored on a first user device, such as a DVR, in a first format and resolution to be provided to a second user device, such as a portable media player (PMP), in a second format and resolution. The content moving device identifies content on the first user device as candidate content which may be desired by the PMP and receives the candidate content from the DVR. The content moving device transcodes the candidate content at times independent of a request from the PMP for the content. The content moving device may provide a list of available transcoded content to the PMP for selection, and provide selected content to the PMP. The content moving device may also provide information relating to any protection schemes of the content provided to the PMP, such as DRM rights and decryption keys. The content moving device performs the often computationally intense and time consuming transcoding of user content to enable the user to move content between different user devices in a convenient manner.
H04N 21/41 - Structure of client; Structure of client peripherals
H04N 21/4363 - Adapting the video stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
A device (1400) includes an electronic device (100) with one or more processors (501), one or more memory devices (508), a display (101), and a first electrical connector (206). An electronic accessory module (600) includes a second electrical connector (806). A housing (1100) receives the electronic accessory module at a first end of the housing and receives the electronic device at a second end of the housing. The housing biases the first electrical connector and the second electrical connector together and couples to both the electronic device and the electronic accessory module to secure the electronic device and the electronic accessory module within the housing.
An electronic device has an imaging device (such as a still camera or video camera) and is capable of displaying a viewfinder on one side or multiple sides of the device. The device may determine the side or sides on which to display the viewfinder based on factors such as user input, object proximity, grip detection, accelerometer data, and gyroscope data. In one implementation, the device has multiple imaging devices and can select which imaging device to use to capture an image based on the above factors as well.
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G06F 1/16 - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/147 - Digital output to display device using display panels
Disclosed is a method of associating, at a secondary device, secondary media content with primary media content being output at a primary device. The method includes receiving, at the secondary device, first information based upon the primary content being output at the primary device, wherein the first information includes at least one of an audio and a visual signal, determining at the secondary device second information corresponding to the first information, receiving at the secondary device one or more portions of secondary media content that have been made available by a third device, determining at the secondary device whether one or more of the portions of the secondary media content match one or more portions of the second information, and taking at least one further action upon determining that there is a match.
H04H 60/52 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying locations of users
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 13/00 - Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
H04N 5/445 - Receiver circuitry for displaying additional information
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/41 - Structure of client; Structure of client peripherals
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/658 - Transmission by the client directed to the server
H04N 21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. UMID [Unique Material Identifier]
H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
H04N 21/439 - Processing of audio elementary streams
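The matching step in the abstract above can be sketched simply: portions of the secondary media content are compared against the second information derived from the primary content, and a further action is triggered when enough portions match. The function and threshold below are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of matching secondary-content portions against
# information derived from the primary content at the secondary device.

def matches(secondary_portions, derived_info, threshold=1):
    """Return True if at least `threshold` portions of the secondary
    content appear in the information derived from the primary content."""
    known = set(derived_info)
    hits = sum(1 for portion in secondary_portions if portion in known)
    return hits >= threshold

# Example: one shared identifier is enough to trigger the further action.
print(matches(["scene-3", "scene-9"], ["scene-1", "scene-3"]))  # True
```

In practice the "portions" would be audio or visual fingerprints rather than plain identifiers, and the comparison would tolerate noise.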
79.
Electronic device with gesture detection system and methods for using the gesture detection system
A method in an electronic device includes projecting infrared (“IR”) light from a plurality of light emitting diodes (“LEDs”) disposed proximate to the perimeter of the electronic device, detecting, by a sensor, IR light originating from at least two of the plurality of LEDs and reflected off a person, and carrying out a function based on the relative strength of the detected IR light from the LEDs.
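The "relative strength" decision can be illustrated with a small sketch: comparing the reflected IR intensity from two perimeter LEDs selects a function. The function names, margin, and gesture labels below are illustrative assumptions, not the patent's scheme.

```python
# Hypothetical sketch: choose a function from the relative strength of
# reflected IR light detected from two perimeter LEDs.

def gesture_from_strengths(left, right, margin=0.2):
    """Return a gesture label based on which LED's reflection dominates.

    `margin` is a fractional dead-band so near-equal strengths are
    treated as a hover rather than a directional gesture.
    """
    if left > right * (1 + margin):
        return "swipe-left"
    if right > left * (1 + margin):
        return "swipe-right"
    return "hover"

print(gesture_from_strengths(0.9, 0.4))  # swipe-left
```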
A method and apparatus for streaming content is disclosed. A streamer (155) detects a cue for a break in a segment of the program content, wherein the cue includes an identification of an advertising provider associated with the break. The streamer sends, to an advertisement server (115), a request for content associated with the advertising provider, and receives, from the advertisement server, an advertisement associated with the advertising provider. Further, the streamer transcodes the advertisement based on configuration information of an additional device (165) to generate formatted content viewable on the additional device. Moreover, the streamer streams the formatted content to the additional device via a local connection.
H04N 7/10 - Adaptations for transmission by electrical cable
H04N 7/025 - Systems for transmission of digital non-picture data, e.g. of text during the active part of a television frame
H04N 21/436 - Interfacing a local distribution network, e.g. communicating with another STB or inside the home
H04N 21/41 - Structure of client; Structure of client peripherals
H04N 21/4363 - Adapting the video stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/2668 - Creating a channel for a dedicated end-user group, e.g. by inserting targeted commercials into a video stream based on end-user profiles
81.
Alert peripheral for notification of events occurring on a programmable user equipment with communication capabilities
An alert peripheral device that provides sensory notification to a user of the device includes: a power subsystem; a communication mechanism by which notification signals are received from a first user equipment (UE) that generates and transmits the notification signals in response to detection of specific events at the first UE; and a response notification mechanism that provides a sensory response of the peripheral device following receipt of a notification of a detected event (NDE) signal. The device further includes an embedded controller coupled to each of the other components and which includes firmware that when executed on the embedded controller configures the embedded controller to: establish a communication link between the communication mechanism and the first UE; and in response to detecting a receipt of the NDE signal from the first UE, trigger the response notification mechanism to exhibit the sensory response.
H04M 19/04 - Current supply arrangements for telephone systems providing ringing current or supervisory tones, e.g. dialling tone or busy tone the ringing-current being generated at the substations
Embodiments are provided for syncing multiple electronic devices for collective audio playback. According to certain aspects, a master device connects (218) to a slave device via a wireless connection. The master device calculates (224) a network latency via a series of network latency pings with the slave device and sends (225) the network latency to the slave device. Further, the master device sends (232) a portion of an audio file as well as a timing instruction including a system time to the slave device. The master device initiates (234) playback of the portion of the audio file and the slave device initiates (236) playback of the portion of the audio file according to the timing instruction and a calculated system clock offset value.
H04H 20/38 - Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast
H04H 20/61 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
H04H 20/08 - Arrangements for relaying broadcast information among terminal devices
H04H 20/18 - Arrangements for synchronising broadcast or distribution via plural systems
H04H 60/88 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks which are wireless networks
H04H 20/57 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for mobile receivers
H04H 20/63 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast to plural spots in a confined site, e.g. MATV [Master Antenna Television]
H04H 60/80 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by source locations or destination locations characterised by transmission among terminal devices
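The latency-ping and clock-offset mechanism in the abstract above can be sketched briefly. This is a simplified illustration under assumed names (`measure_latency`, `playback_start`), not the claimed synchronization protocol; in particular, whether one-way latency is added on the slave side is a design assumption here.

```python
import time

def measure_latency(ping, n=5):
    """Estimate one-way latency from n round-trip pings to the slave.

    `ping` is a callable that blocks until the slave replies; the best
    (smallest) round trip is halved as the one-way estimate.
    """
    rtts = []
    for _ in range(n):
        t0 = time.monotonic()
        ping()
        rtts.append(time.monotonic() - t0)
    return min(rtts) / 2.0

def playback_start(master_system_time, latency, clock_offset):
    """Slave side: convert the master's start time in the timing
    instruction to the slave's local clock, compensating for the
    measured clock offset and the one-way latency (an assumption)."""
    return master_system_time + clock_offset + latency
```

With a measured offset of 2.5 s and 10 ms latency, a master start time of 100.0 maps to a local start at 102.51 on the slave's clock.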
83.
Systems and methods for synchronizing multiple electronic devices
Embodiments are provided for syncing multiple electronic devices for collective audio playback. According to certain aspects, a master device connects (218) to a slave device via a wireless connection. The master device calculates (224) a network latency via a series of network latency pings with the slave device and sends (225) the network latency to the slave device. Further, the master device sends (232) a portion of an audio file as well as a timing instruction including a system time to the slave device. The master device initiates (234) playback of the portion of the audio file and the slave device initiates (236) playback of the portion of the audio file according to the timing instruction and a calculated system clock offset value.
H04H 20/38 - Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast
H04H 20/08 - Arrangements for relaying broadcast information among terminal devices
H04H 20/18 - Arrangements for synchronising broadcast or distribution via plural systems
H04H 60/88 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks which are wireless networks
H04H 20/61 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
H04H 20/57 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for mobile receivers
H04H 20/63 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast to plural spots in a confined site, e.g. MATV [Master Antenna Television]
H04H 60/80 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by source locations or destination locations characterised by transmission among terminal devices
84.
Systems and methods for synchronizing multiple electronic devices
Embodiments are provided for syncing multiple electronic devices for collective audio playback. According to certain aspects, a master device connects (218) to a slave device via a wireless connection. The master device calculates (224) a network latency via a series of network latency pings with the slave device and sends (225) the network latency to the slave device. Further, the master device sends (232) a portion of an audio file as well as a timing instruction including a system time to the slave device. The master device initiates (234) playback of the portion of the audio file and the slave device initiates (236) playback of the portion of the audio file according to the timing instruction and a calculated system clock offset value.
H04H 20/61 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
H04H 20/08 - Arrangements for relaying broadcast information among terminal devices
H04H 20/18 - Arrangements for synchronising broadcast or distribution via plural systems
H04H 60/88 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks which are wireless networks
H04H 20/38 - Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast
H04H 20/57 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for mobile receivers
H04H 20/63 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast to plural spots in a confined site, e.g. MATV [Master Antenna Television]
H04H 60/80 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by source locations or destination locations characterised by transmission among terminal devices
85.
Systems and methods for synchronizing multiple electronic devices
Embodiments are provided for syncing multiple electronic devices for collective audio playback. According to certain aspects, a master device connects (218) to a slave device via a wireless connection. The master device calculates (224) a network latency via a series of network latency pings with the slave device and sends (225) the network latency to the slave device. Further, the master device sends (232) a portion of an audio file as well as a timing instruction including a system time to the slave device. The master device initiates (234) playback of the portion of the audio file and the slave device initiates (236) playback of the portion of the audio file according to the timing instruction and a calculated system clock offset value.
H04H 20/38 - Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast
H04H 20/61 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
H04H 20/08 - Arrangements for relaying broadcast information among terminal devices
H04H 60/88 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks which are wireless networks
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04H 20/57 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for mobile receivers
H04H 20/63 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast to plural spots in a confined site, e.g. MATV [Master Antenna Television]
H04H 60/80 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by source locations or destination locations characterised by transmission among terminal devices
86.
Systems and methods for synchronizing multiple electronic devices
Embodiments are provided for syncing multiple electronic devices for collective audio playback. According to certain aspects, a master device connects (218) to a slave device via a wireless connection. The master device calculates (224) a network latency via a series of network latency pings with the slave device and sends (225) the network latency to the slave device. Further, the master device sends (232) a portion of an audio file as well as a timing instruction including a system time to the slave device. The master device initiates (234) playback of the portion of the audio file and the slave device initiates (236) playback of the portion of the audio file according to the timing instruction and a calculated system clock offset value.
H04H 60/88 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet accessed over computer networks which are wireless networks
H04H 60/80 - Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by source locations or destination locations characterised by transmission among terminal devices
H04H 20/18 - Arrangements for synchronising broadcast or distribution via plural systems
H04H 20/38 - Arrangements for distribution where lower stations, e.g. receivers, interact with the broadcast
H04H 20/61 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast
H04H 20/08 - Arrangements for relaying broadcast information among terminal devices
H04H 20/57 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for mobile receivers
H04H 20/63 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast to plural spots in a confined site, e.g. MATV [Master Antenna Television]
87.
Retrieval of data across multiple partitions of a storage device using digital signatures
A system and method for exchanging data among partitions of a storage device is disclosed. For example, data stored in a first partition is exchanged with an application included in the first partition or with a second application included in a second partition. In one embodiment, the second application is associated with a global certificate while the first application is associated with a different platform certificate. A verification module included in the first partition receives a request for data and determines if the request for data is received from the first application. If the request for data is not received from the first application, the verification module determines whether the request is received from the second application and whether the global certificate is an authorized certificate. For example, the verification module determines whether the global certificate is included in a listing of authorized certificates.
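The verification module's decision described above can be sketched as a simple check: a request is honored if it comes from the owning application, or from another application whose certificate appears in a listing of authorized certificates. The names and certificate strings below are illustrative, not from the patent.

```python
# Hypothetical sketch of the verification module's access decision.
# The authorized-certificate listing and app/cert names are invented.

AUTHORIZED_CERTS = {"global-cert-1", "platform-cert-A"}

def may_access(requester_app, owner_app, requester_cert):
    """Grant access to data in the owner's partition if the request
    comes from the owning application itself, or from another app
    whose certificate is in the authorized listing."""
    if requester_app == owner_app:
        return True
    return requester_cert in AUTHORIZED_CERTS

print(may_access("app2", "app1", "global-cert-1"))  # True
print(may_access("app2", "app1", "unknown-cert"))   # False
```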
A system is provided for use with a video input signal and a video unit. The video input signal can be one of a two dimensional video signal and a three dimensional video signal. The video unit can display a three dimensional video and a two dimensional video. The system includes a receiver portion, a processing portion, a switching portion and an output portion. The receiver portion can receive the video input signal. The processing portion can output a first signal in a first mode of operation and can output a second signal in a second mode of operation, wherein the first signal is based on the video input signal and the second signal is based on the video input signal. The switching portion can switch the processing portion from the first mode of operation to the second mode of operation. The output portion can provide an output signal to the video unit, wherein the output signal is based on the first signal when the processing portion operates in the first mode of operation and wherein the output signal is based on the second signal when the processing portion operates in the second mode of operation. The first signal includes a two dimensional video signal, whereas the second signal includes a three dimensional video signal.
H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
H04N 13/139 - Format conversion, e.g. of frame-rate or size
A disclosed method includes monitoring an audio signal energy level while having a plurality of signal processing components deactivated and activating at least one signal processing component in response to a detected change in the audio signal energy level. The method may include activating and running a voice activity detector on the audio signal in response to the detected change where the voice activity detector is the at least one signal processing component. The method may further include activating and running the noise suppressor only if a noise estimator determines that noise suppression is required. The method may activate and run a noise type classifier to determine the noise type based on information received from the noise estimator and may select a noise suppressor algorithm, from a group of available noise suppressor algorithms, where the selected noise suppressor algorithm is the most efficient in terms of power consumption.
G10L 15/20 - Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise or of stress induced speech
G10L 21/0364 - Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
G10L 21/0216 - Noise filtering characterised by the method used for estimating noise
G10L 15/28 - Constructional details of speech recognition systems
G10L 21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information
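The gating idea in the abstract above (keep components off until the signal energy changes, then wake the voice activity detector first) can be sketched as follows. The threshold, dictionary layout, and function name are illustrative assumptions, not the disclosed method.

```python
# Hypothetical sketch of energy-gated activation of signal-processing
# components: everything stays off until the frame energy (in dB)
# jumps by more than `delta`, after which the VAD is activated first.

def process_frame(energy_db, prev_energy_db, components, delta=6.0):
    """Activate the VAD when the energy change exceeds `delta` dB.

    `components` maps component names to their active/inactive state;
    the noise suppressor is left off here, since per the method it is
    activated only if a noise estimator later requires suppression.
    """
    if abs(energy_db - prev_energy_db) > delta:
        components["vad"] = True
    return components

state = process_frame(-30.0, -60.0, {"vad": False, "noise_suppressor": False})
# state["vad"] is now True; state["noise_suppressor"] remains False
```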
90.
Methods and devices for efficient adaptive bitrate streaming
Methods and systems for a content server to select sets of video streams having different encoding parameters for transmitting the sets of video streams to a media device are disclosed herein. In some embodiments, a method for transmitting video streams for a media program from a server to a media device includes: selecting, by the server, first encoding parameters including a first bitrate for a first set of video streams for the media program based on a first estimated bandwidth capacity for a network linking the server and the media device, transmitting the first set of video streams from the server to the media device, determining, by the server, second encoding parameters including a second bitrate for a second set of video streams for the media program, and transmitting the second set of video streams from the server to the media device.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/6373 - Control signals issued by the client directed to the server or network components for rate control
H04N 21/6379 - Control signals issued by the client directed to the server or network components directed to server directed to encoder
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging bet
H04L 12/825 - Adaptive control, at the source or intermediate nodes, upon congestion feedback, e.g. X-on X-off
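The server-side selection of encoding parameters described above can be sketched as picking the highest rung of an encoding ladder that fits the estimated bandwidth. The ladder values and headroom factor are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch: server picks a bitrate for the next set of
# video streams from an encoding ladder, given an estimated network
# capacity and a safety headroom fraction.

def pick_bitrate(estimated_bw_kbps, ladder=(400, 800, 1600, 3200, 6000),
                 headroom=0.8):
    """Return the highest ladder bitrate within `headroom` of the
    estimated capacity, falling back to the lowest rung."""
    budget = estimated_bw_kbps * headroom
    viable = [b for b in ladder if b <= budget]
    return viable[-1] if viable else ladder[0]

print(pick_bitrate(2500))  # 1600
print(pick_bitrate(300))   # 400
```

A second estimate later in the session would simply be fed through the same function to choose the second set's bitrate.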
An electronic device includes a display screen for displaying (302) an active first application, a movement sensing assembly for providing signals indicative of movement of an object with respect to the display screen, and a processor in electronic communication with the movement sensing assembly and the display screen. The processor evaluates the signals from the movement sensing assembly to identify (304) a subdividing gesture, and instructs the display screen to display (306) the first application in a first portion of the display screen to one side of the subdividing gesture.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
This document describes techniques (300, 400) and apparatuses (100, 500, 600, 700) for in-band peripheral authentication. These techniques (300, 400) and apparatuses (100, 500, 600, 700) may communicate via a non-media channel allowing a host device (102) to authenticate a peripheral (106), enable an enhanced operational mode of the host device (102), and/or provide content configured for the peripheral (106) without the use of out-of-band signaling.
A method and apparatus for activating a hardware feature of an electronic device includes the electronic device detecting (302) a predetermined motion of the electronic device and measuring (304), in response to detecting the predetermined motion, an orientation of the electronic device. The method further includes the electronic device activating (306), based on the orientation, a hardware feature from a plurality of selectable hardware features of the electronic device, wherein each selectable hardware feature can be activated based on different orientations of the electronic device.
A method and apparatus for selecting between multiple gesture recognition systems includes an electronic device determining a context of operation for the electronic device that affects a gesture recognition function performed by the electronic device. The electronic device also selects, based on the context of operation, one of a plurality of gesture recognition systems in the electronic device as an active gesture recognition system for receiving gesturing input to perform the gesture recognition function, wherein the plurality of gesture recognition systems comprises an image-based gesture recognition system and a non-image-based gesture recognition system.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
95.
System and method for navigating a field of view within an interactive media-content item
A system and method for providing interactive media content with explorable content on a computing device includes: rendering a field of view within a navigable media content item; rendering at least one targetable object within the media content item; receiving a navigation command through a user input mechanism; navigating the field of view within the media based at least in part on the received navigation command; detecting a locking condition based, at least in part, on the targetable object being in the field of view, and entering an object-locked mode with the targetable object; and, in the object-locked mode, automatically navigating the field of view to substantially track the targetable object of the object-locked mode.
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
H04N 21/4725 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
A63F 13/5258 - Changing parameters of virtual cameras by dynamically adapting the position of the virtual camera to keep a game object or game character in its viewing frustum, e.g. for tracking a character or a ball
G06F 1/16 - Constructional details or arrangements
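The lock-and-track behaviour of entry 95 can be sketched as a per-frame update: test the locking condition, then nudge the field of view toward the locked object. The coordinates, lock radius, and tracking gain are illustrative assumptions.

```python
import math

def lock_condition(fov_center, target, fov_radius=1.0):
    """Locking condition: the targetable object lies within the field of view."""
    return math.dist(fov_center, target) <= fov_radius

def track(fov_center, target, gain=0.5):
    """One object-locked-mode tick: move the field-of-view centre part of
    the way toward the target so the view substantially tracks it."""
    fx, fy = fov_center
    tx, ty = target
    return (fx + gain * (tx - fx), fy + gain * (ty - fy))

center, obj = (0.0, 0.0), (0.8, 0.0)
if lock_condition(center, obj):      # object in view -> enter object-locked mode
    center = track(center, obj)      # view centre moves toward the object
print(center)
```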
96.
Radio resource assignment in control channel in wireless communication systems
A method in a wireless communication device includes receiving (410) a composite control channel including at least two control channel elements, wherein each control channel element contains only radio resource assignment information, for example a codeword, exclusively addressed to a single wireless communication entity. The device combines (420) at least two of the control channel elements and decodes (430) the combined control channel elements.
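A toy sketch of the combine (420) and decode (430) steps follows; the byte-level encoding of a control channel element as a resource-block bitmap is purely an illustrative assumption.

```python
def combine_cces(cces):
    """Step (420): combine control channel elements that carry one
    entity's radio resource assignment into a single payload."""
    return b"".join(cces)

def decode_assignment(payload):
    """Step (430): toy decode of the combined payload as a resource-block
    bitmap, returning the indices of the assigned blocks."""
    return [i for i, bit in enumerate(payload) if bit == 1]

cces = [bytes([1, 0, 0]), bytes([0, 1, 1])]   # two CCEs for one entity
print(decode_assignment(combine_cces(cces)))  # [0, 4, 5]
```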
97.
A method and system for identifying the location of a touched body part. The method includes initializing a tracking system for monitoring travel of a pointer useful for indicating a touching operation, wherein the touching operation is performed on a body part. In addition, the method includes monitoring the travel of the pointer from a predetermined first location to a second location, wherein the second location coincides with a touch endpoint on a body part; and identifying the location of the body part that was touched by the pointer.
A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
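The final identification step in the abstract above (mapping the pointer's touch endpoint to a body part) can be sketched as a lookup over calibrated regions; the region map, box layout, and coordinates are hypothetical.

```python
def identify_touched_part(endpoint, regions):
    """Map the touch endpoint of the tracked pointer to the body-part
    region that contains it. Regions are axis-aligned (x0, y0, x1, y1)
    boxes assumed to come from a prior calibration step."""
    x, y = endpoint
    for part, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return part
    return None

regions = {"left knee": (0, 0, 10, 10), "right elbow": (20, 0, 30, 10)}
print(identify_touched_part((5, 5), regions))   # left knee
print(identify_touched_part((25, 3), regions))  # right elbow
```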
98.
Method and device with intelligent media management
A method (300) and device (200) with intelligent media management is disclosed. The method (300) can include: streaming (310) media content in a wireless communication device; identifying (320) a media signature of the streamed media content; searching (330) a stored library for the identified media signature; and playing (340) locally stored media content, if the search results in finding a match with the identified media signature in the stored library. Thus, when a match occurs, locally stored media content replaces the streamed media content, to provide substantially lower power consumption and enhanced battery life in connection with wireless communication devices.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/439 - Processing of audio elementary streams
H04N 21/61 - Network physical structure; Signal processing
H04N 21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. UMID [Unique Material Identifier]
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
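The signature-match-and-substitute flow of entry 98 can be sketched as below; using a short digest of the stream's leading bytes as the media signature is an illustrative assumption, not the patent's method.

```python
import hashlib

def media_signature(chunk: bytes) -> str:
    """Identify streamed media by a short digest of its leading bytes."""
    return hashlib.sha256(chunk).hexdigest()[:16]

def choose_playback(stream_chunk: bytes, local_library: dict):
    """Search the stored library for the identified signature; on a match,
    play the locally stored copy instead of the stream to save power."""
    sig = media_signature(stream_chunk)
    if sig in local_library:
        return ("local", local_library[sig])
    return ("stream", None)

library = {media_signature(b"song-head"): "/music/song.mp3"}
print(choose_playback(b"song-head", library))  # ('local', '/music/song.mp3')
print(choose_playback(b"other", library))      # ('stream', None)
```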
99.
Display device, corresponding systems, and methods therefor
A display system includes a display and a control circuit operable with the display. The display is configured to provide visual output having a presentation orientation. When user input is received, the control circuit can alter the presentation orientation from an initial orientation in response to user input. When non-user events or device events are detected, the control circuit can revert the presentation orientation to the initial orientation in response to the non-user event or device event. Where the presentation orientation has a user input configuration associated therewith, the user input configuration can either be altered with the presentation orientation or retained in an initial disposition.
G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
G06F 1/16 - Constructional details or arrangements
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
A61B 5/145 - Measuring characteristics of blood in vivo, e.g. gas concentration, pH-value
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
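The alter-and-revert behaviour of entry 99 can be sketched as a small state holder: user input changes the presentation orientation, while a non-user or device event reverts it to the initial orientation. Class and method names are hypothetical.

```python
class DisplayOrientation:
    """Alter presentation orientation on user input; revert to the
    initial orientation on a non-user/device event."""

    def __init__(self, initial: str = "portrait"):
        self.initial = initial
        self.current = initial

    def on_user_input(self, orientation: str) -> None:
        # User input alters the presentation orientation.
        self.current = orientation

    def on_device_event(self) -> None:
        # Non-user/device events revert to the initial orientation.
        self.current = self.initial

d = DisplayOrientation()
d.on_user_input("landscape")
print(d.current)     # landscape
d.on_device_event()
print(d.current)     # portrait
```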
100.
A system receives an indication of selection of an item in a broadcast segment from an end device. A broadcast segment is identified by the selection and a broadcast segment schedule. An item ID is determined using the identified broadcast segment and the broadcast segment schedule, and a corresponding sponsor of the item is determined using the item ID and the identified broadcast segment. An anonymized message, including the item ID and a request for information, is sent to the corresponding sponsor. A reply is received from the corresponding sponsor and forwarded to an end-user contact.
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests
H04N 21/254 - Management at additional data server, e.g. shopping server or rights management server
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
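The selection-to-sponsor resolution in the abstract above can be sketched as schedule lookups that produce an anonymized request; the schedule layout and message fields are illustrative assumptions.

```python
def handle_selection(selection_time, schedule, sponsors):
    """Resolve a selection time against the broadcast segment schedule,
    determine the item ID and its sponsor, and build an anonymized
    request (no end-user identity is included in the message)."""
    for start, end, segment_id, item_id in schedule:
        if start <= selection_time < end:
            sponsor = sponsors[item_id]
            message = {"item_id": item_id, "request": "information"}
            return segment_id, sponsor, message
    return None

schedule = [(0, 30, "seg-1", "item-A"), (30, 60, "seg-2", "item-B")]
sponsors = {"item-A": "Acme", "item-B": "Globex"}
print(handle_selection(42, schedule, sponsors))
# ('seg-2', 'Globex', {'item_id': 'item-B', 'request': 'information'})
```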