An apparatus for decoding frames of a compressed video data stream having at least one frame divided into partitions includes a memory and a processor configured to execute instructions stored in the memory to read partition data information indicative of a partition location for at least one of the partitions, decode a first partition of the partitions that includes a first sequence of blocks, and decode a second partition of the partitions that includes a second sequence of blocks identified from the partition data information using decoded information of the first partition.
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
H04N 19/91 - Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
H04N 19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/51 - Motion estimation or motion compensation
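As a rough illustration (not the claimed implementation), the partition scheme in the abstract above can be sketched in Python: a header supplies partition byte offsets, the first partition is decoded, and the second partition is located from the partition data and decoded using state derived from the first. Every name and the toy "context" mechanism here are hypothetical.

```python
def read_partition_offsets(header):
    """Hypothetical: the header maps partition indices to byte offsets."""
    return header["partition_offsets"]

def decode_partition(bitstream, offset, length, context=None):
    """Stand-in decoder: returns the partition's bytes as 'blocks',
    optionally transformed by context decoded from an earlier partition."""
    blocks = list(bitstream[offset:offset + length])
    if context is not None:
        # e.g. entropy-coder state carried over from the first partition
        blocks = [b ^ context for b in blocks]
    return blocks

def decode_frame(bitstream, header):
    offsets = read_partition_offsets(header)
    first = decode_partition(bitstream, offsets[0], offsets[1] - offsets[0])
    # Use decoded information from the first partition (here a toy checksum)
    ctx = sum(first) % 256
    second = decode_partition(bitstream, offsets[1],
                              len(bitstream) - offsets[1], context=ctx)
    return first, second

frame = bytes(range(8))
hdr = {"partition_offsets": [0, 4]}
p1, p2 = decode_frame(frame, hdr)
```

The point of the sketch is only the data flow: the second partition's location comes from partition data information, and its decoding consumes information produced by decoding the first.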
A non-terrestrial, wireless communications network (NTN) (100) can assist User Equipments (UEs) (110) in tracking beams (135) generated by NTN base stations (102) for reselection purposes. The NTN determines one or more candidate beams to which the UE can reselect (e.g., while the UE is in an inactive or idle state with respect to controlling radio resources) based on a geographical location (138) of the UE and respective existing and/or predicted non-terrestrial locations of one or more non-terrestrial base stations of the NTN. The NTN transmits (140) an indication of the one or more candidate beams to the UE (and optionally other beam-related information, such as radio access resources and relative priorities), and the UE can reselect a subsequent beam based on the indication received from the NTN. The UE can locally store (132) a mapping of candidate reselection beams to geographical locations for ease and efficiency of future reselections.
Aspects describe communicating quantized machine-learning, ML, configuration information over a wireless network. A base station selects (605) a quantization configuration for quantizing ML configuration information for a deep neural network, DNN, where the quantization configuration indicates one or more quantization formats associated with quantizing the ML configuration information. The base station transmits (610) an indication of the quantization configuration to a user equipment, UE, and transfers (615), over the wireless network and with the UE, quantized ML configuration information using the quantization configuration.
Methods, devices, systems, and means for user equipment slicing assistance information by a user equipment, UE, are described herein. The UE detects a condition of the UE (610) and, based on the detecting, evaluates one or more preferences (612). Based on evaluating the one or more preferences, the UE sends UE Slicing Assistance Information, USAI, to a core network entity (614), the USAI being based on a current network slice configuration. The UE receives, from a base station, a reduced radio resource configuration for operating using a low-throughput network slice (616) and communicates using the low-throughput network slice (618).
In aspects, a base station schedules air interface resources of a wireless communication system using one or more prediction metrics from a user equipment, UE. The base station receives (505), from the user equipment, user-equipment-prediction-metric capabilities. Based on the user-equipment-prediction-metric capabilities, the base station generates (510) a prediction-reporting request and communicates (515) the prediction-reporting request to the user equipment. The base station receives (520) one or more user-equipment-prediction-metric reports from the UE and schedules (525) the one or more air interface resources of the wireless communication system based on the one or more user-equipment-prediction-metric reports.
H04L 41/149 - Network analysis or design for prediction of maintenance
H04L 41/147 - Network analysis or design for predicting network behaviour
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
H04L 43/0876 - Network utilisation, e.g. volume of load or congestion level
H04L 43/091 - Measuring contribution of individual network components to actual service level
A method (300) for optimizing speech recognition includes receiving a first acoustic segment (121) characterizing a hotword detected by a hotword detector (110) in streaming audio (118) captured by a user device (102), extracting one or more hotword attributes (210) from the first acoustic segment, and adjusting, based on the one or more hotword attributes extracted from the first acoustic segment, one or more speech recognition parameters of an automated speech recognition (ASR) model (320). After adjusting the speech recognition parameters of the ASR model, the method also includes processing, using the ASR model, a second acoustic segment (122) to generate a speech recognition result (322). The second acoustic segment characterizes a spoken query/command that follows the first acoustic segment in the streaming audio captured by the user device.
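A hedged sketch of the idea in the abstract above: attributes extracted from the hotword segment (the names, thresholds, and the specific parameters below are all hypothetical, not from the abstract) are used to adjust recognizer parameters before the follow-on query is processed.

```python
def extract_hotword_attributes(segment):
    """Hypothetical attribute extraction: average loudness and duration
    of the hotword segment (a list of sample amplitudes)."""
    loudness = sum(abs(s) for s in segment) / len(segment)
    return {"loudness": loudness, "duration": len(segment)}

def adjust_asr_parameters(params, attrs):
    """Toy adjustment: a quiet hotword suggests a distant speaker, so
    raise the recognizer's gain; a short hotword shortens the endpoint
    timeout. Both rules are illustrative assumptions."""
    adjusted = dict(params)
    if attrs["loudness"] < 0.2:
        adjusted["gain"] = params["gain"] * 2.0
    if attrs["duration"] < 50:
        adjusted["endpoint_ms"] = params["endpoint_ms"] // 2
    return adjusted

hotword_segment = [0.1, -0.1, 0.15, -0.05] * 10  # 40 quiet samples
params = adjust_asr_parameters({"gain": 1.0, "endpoint_ms": 800},
                               extract_hotword_attributes(hotword_segment))
```

The second acoustic segment (the spoken query) would then be decoded with the adjusted parameters.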
A method for rejecting biased data using a machine learning model includes receiving a cluster training data set including a known unbiased population of data and training a clustering model to segment the received cluster training data set into clusters based on data characteristics of the known unbiased population of data. Each cluster of the cluster training data set includes a cluster weight. The method also includes receiving a training data set for a machine learning model and generating training data set weights corresponding to the training data set for the machine learning model based on the clustering model. The method also includes adjusting each training data set weight of the training data set weights to match a respective cluster weight and providing the adjusted training data set to the machine learning model as an unbiased training data set.
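The reweighting step described above can be sketched as follows, assuming a trivial one-dimensional nearest-centroid "clustering model"; the centroids, weights, and data are invented for illustration only.

```python
def nearest_cluster(point, centroids):
    """Assign a 1-D data point to the nearest cluster centroid."""
    return min(range(len(centroids)), key=lambda i: abs(point - centroids[i]))

def reweight_training_data(train, centroids, cluster_weights):
    """Weight each training example so each cluster's total weight matches
    the weight learned from the known-unbiased population."""
    assignments = [nearest_cluster(x, centroids) for x in train]
    counts = [assignments.count(c) for c in range(len(centroids))]
    # Per-example weight = cluster's target weight / examples in that cluster
    return [cluster_weights[a] / counts[a] for a in assignments]

centroids = [0.0, 10.0]       # fit on the known-unbiased population
cluster_weights = [0.5, 0.5]  # the unbiased population is balanced
train = [0.1, 0.2, 9.8]       # biased sample: cluster 0 over-represented
weights = reweight_training_data(train, centroids, cluster_weights)
```

Here the over-represented cluster's examples are down-weighted (0.25 each) and the under-represented cluster's example is up-weighted (0.5), so each cluster contributes the unbiased target weight in total.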
The present disclosure provides systems and methods for on-device machine learning. In particular, the present disclosure is directed to an on-device machine learning platform and associated techniques that enable on-device prediction, training, example collection, and/or other machine learning tasks or functionality. The on-device machine learning platform can include a context provider that securely injects context features into collected training examples and/or client-provided input data used to generate predictions/inferences. Thus, the on-device machine learning platform can enable centralized training example collection, model training, and usage of machine-learned models as a service to applications or other clients.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
A method for applying a style to an input image to generate a stylized image. The method includes maintaining data specifying respective parameter values for each image style in a set of image styles, receiving an input including an input image and data identifying an input style to be applied to the input image to generate a stylized image that is in the input style, determining, from the maintained data, parameter values for the input style, and generating the stylized image by processing the input image using a style transfer neural network that is configured to process the input image to generate the stylized image.
Provided is a method of spatially referencing a plurality of images captured from a plurality of different locations within an indoor space by determining the location from which the plurality of images was captured. The method may include obtaining a plurality of distance-referenced panoramas of an indoor space. The distance-referenced panoramas may each include a plurality of distance-referenced images each captured from one position in the indoor space and at a different azimuth from the other distance-referenced images, a plurality of distance measurements, and orientation indicators each indicative of the azimuth of the corresponding one of the distance-referenced images. The method may further include determining the location of each of the distance-referenced panoramas based on the plurality of distance measurements and the orientation indicators and associating in memory the determined locations with the plurality of distance-referenced images captured from the determined location.
Implementations relate to causing a command to be executed based on an image. In some implementations, a computer-implemented method includes obtaining and programmatically analyzing an image to determine suggested actions. The method causes a user interface to be displayed that includes user interface elements corresponding to default actions, and to suggested actions that are determined based on analyzing the image. The method receives user input indicative of selection of a particular action from the default actions and the suggested actions. The method causes a command to be executed by a computing device for the particular action that was selected.
A second computing device operator initiates a service request. The second computing device transmits a request identifier and a displayed image to a processing system, which associates the received image and the request identifier. In an example, the second computing device broadcasts, via an audio communication channel, an audio token comprising the request identifier and displayed image. A user associated with a first computing device selects an option to initiate a service request and the first computing device receives the audio token via the audio communication channel. The first computing device displays at least the received image and the user selects the image on the first computing device among a group of displayed images to confirm the service request. In another example, the user selects a different image to cancel the service request. The processing system receives the selected image from the first computing device and processes the service request.
H04M 1/215 - Combinations with auxiliary equipment, e.g. with clocks or memoranda pads by non-intrusive coupling means, e.g. acoustic couplers
G06Q 20/02 - Payment architectures, schemes or protocols involving a neutral third party, e.g. certification authority, notary or trusted third party [TTP]
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
Apparatus and methods related to stored software libraries are provided. A computing device can receive versioned-shared-library information for a first software library used by a software application, where the versioned-shared-library information can include an identifier. The computing device can determine whether the computing device stores a copy of the first software library identified in the versioned-shared-library information by the identifier. The computing device can send a request for one of a full executable and a stripped executable for the software application, where the full executable includes the first software library, and where the stripped executable excludes the first software library. In response to the request, the computing device can receive the full executable or the stripped executable for the software application.
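The request decision in the abstract above reduces to a simple cache check; a minimal sketch, with hypothetical library identifiers, might look like:

```python
def choose_executable(library_id, local_libraries):
    """Request the stripped executable when the versioned shared library
    identified by library_id is already stored locally; otherwise request
    the full executable that bundles the library."""
    return "stripped" if library_id in local_libraries else "full"

cache = {"libfoo-1.2.3"}  # hypothetical locally stored library identifiers
want_a = choose_executable("libfoo-1.2.3", cache)
want_b = choose_executable("libbar-2.0.0", cache)
```

Keying the cache on a versioned identifier is what makes the stripped download safe: only an exact version match lets the device skip re-downloading the library.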
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a data entity that causes a processing unit to process a computational graph. In one aspect, the method includes the actions of receiving data identifying a computational graph, the computational graph including a plurality of nodes representing operations; obtaining compilation artifacts for processing the computational graph on a processing unit; and generating a data entity from the compilation artifacts, wherein the data entity, when invoked, causes the processing unit to process the computational graph by executing the operations represented by the plurality of nodes.
Contextual paste target prediction is used to predict one or more target applications for a paste action, and does so based upon a context associated with the content that has previously been selected and copied. The results of the prediction may be used to present to a user one or more user controls to enable the user to activate one or more predicted applications, and in some instances, additionally configure a state of a predicted application to use the selected and copied content once activated. As such, upon completing a copy action, a user may, in some instances, be provided with an ability to quickly switch to an application into which the user was intending to paste the content. This can provide a simpler user interface in devices such as phones and tablet computers with limited display size and limited input device facilities. It can result in a paste operation into a different application with fewer steps than is conventionally possible.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 17/27 - Automatic analysis, e.g. parsing, orthograph correction
Techniques are described herein for reducing the number of inputs required by a user to utilize copied/cut content to perform various operations. In various implementations, it may be determined that new content has been added to a pasteboard data structure stored in memory of a computing device. The new content may be ready to be provided as input to one or more applications in response to a paste command. The new content may be analyzed to identify attribute(s) of the new content. Additionally or alternatively, dynamic attribute(s) of a state of the computing device may be identified. In various implementations, based on the attribute(s) of the new content and/or the dynamic attribute(s), candidate action(s) may be identified that are performable using the new content as input. Output may be generated and provided that is based on the candidate action(s).
G06F 17/27 - Automatic analysis, e.g. parsing, orthograph correction
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
Implementations disclose a handoff feature for a content sharing platform. A method includes maintaining a session history of a session that occurred at a first client device, the session history identified by a visit identifier (ID) and comprising a set of recently-watched content items on a content sharing platform, determining that a user associated with the session is active on a second client device, transmitting, to the second client device, a session continuation notification associated with the visit ID and a navigation end-point of the session, receiving, from the second client device, a request for a watch page user interface (UI) of a content item corresponding to the navigation end-point of the session, and transmitting, to the second client device, instructions to load the watch page UI and to request additional components of the watch page UI using the visit ID.
Implementations disclose restricted and unrestricted states for content based on installation status of applications. A method includes receiving, by a first content platform, a request to access content via a first application executing on a client device, the first application being associated with the first content platform, determining that the first application is in an unrestricted state based on an ephemeral state machine of the server device, determining an install state of a second application on the client device, the second application being associated with a second content platform, responsive to determining that the install state of the second application is uninstalled, providing the content via the first application in the unrestricted state, and responsive to determining that the install state of the second application is installed, transferring the first application to a restricted state, and providing the content via the first application in the restricted state.
A method includes determining, by an application executing at a computing device, based at least in part on a respective amount of usage of each settings category from a plurality of settings categories, a respective relevancy score for the corresponding settings category. The method also includes determining, by the application, based on the respective relevancy scores, a respective display position for each settings category within an application settings graphical user interface. The method further includes, responsive to determining a display position of each settings category, generating, by the application, based on the display positions of each settings category, the application settings graphical user interface including a respective representation of at least one settings category in the plurality of settings categories at the corresponding display position. The method also includes outputting, by the application, for display at a display device, an indication of the application settings graphical user interface.
A system includes at least one processor and at least one storage device. The storage device(s) store instructions that, when executed, cause the at least one processor to: prior to enabling output of an audio signal based on an audio data stream, detect, within the audio data stream, an indication of a target sound that corresponds to one of a plurality of sounds that are expected to cause distraction, replace, within the audio data stream, the indication of the target sound with an indication of a replacement sound, wherein the replacement sound is a less distracting version of the target sound, and after replacing the indication of the target sound with the indication of the replacement sound, output the audio data stream.
Methods and apparatus related to determining a semantically diverse subset of candidate responses to provide for initial presentation to a user as suggestions for inclusion in a reply to an electronic communication. Selection of one of the provided responses by the user will reduce the number of user inputs that a user must make to formulate the reply, which may reduce the usage of various client device computational resources and/or be of particular benefit to users that have low dexterity (or who otherwise have difficulties making user inputs to a client device). Some of the implementations determine the semantically diverse subset of candidate responses based on generating, over a neural network response encoder model, embeddings that are each based on one of the plurality of the candidate responses. The embedding based on a given candidate response may be compared to embedding(s) of candidate response(s) already selected for the subset, and the given candidate response added to the subset only if the comparing indicates a difference criterion is satisfied.
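The greedy diversity selection described above can be sketched as follows. The embeddings, the cosine-similarity measure, and the 0.9 threshold are assumptions standing in for the neural response encoder and the unspecified difference criterion.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def diverse_subset(candidates, embed, max_similarity=0.9):
    """Greedy selection: add a candidate only if its embedding differs
    enough from every embedding already selected (the difference
    criterion in the abstract)."""
    selected = []
    for cand in candidates:
        e = embed(cand)
        if all(cosine(e, embed(s)) < max_similarity for s in selected):
            selected.append(cand)
    return selected

# Toy embeddings standing in for the neural response encoder's output
toy_embeddings = {
    "Sure!": (1.0, 0.0),
    "Sounds good!": (0.95, 0.05),  # near-duplicate of "Sure!"
    "No, sorry.": (0.0, 1.0),
}
subset = diverse_subset(list(toy_embeddings), toy_embeddings.get)
```

The near-duplicate reply is filtered out, so the user sees one affirmative and one negative suggestion rather than two paraphrases of the same answer.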
A user interface adaptation module identifies a dominant color of a selected portion of a frame of a video and, based on the dominant color, generates colors for components of a user interface in which the video is displayed. The colors of the user interface components are set based upon the generated colors and upon context information such as a playing state of the video. The setting of the component colors in this way allows the user interface to adjust to complement both the played content of the video and the video's context. In one embodiment, the dominant color is identified by partitioning individual pixels of the selected portion based on their respective colors. In one embodiment, a set of primary color variants is generated based on the dominant color, and different colors are generated for each type of user interface component based on the different primary color variants.
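A hedged sketch of the two steps in the abstract above: pixels are partitioned into coarse color buckets to find the dominant color, and lighter/darker variants are derived from it. The bucket size and brightness deltas are arbitrary illustration values.

```python
from collections import Counter

def dominant_color(pixels, bucket=32):
    """Partition pixels into coarse RGB buckets and return the centre of
    the most populated bucket as the dominant color."""
    def quantize(c):
        return tuple((v // bucket) * bucket + bucket // 2 for v in c)
    counts = Counter(quantize(p) for p in pixels)
    return counts.most_common(1)[0][0]

def primary_variants(color, deltas=(-40, 0, 40)):
    """Generate darker/base/lighter variants, one per UI component type."""
    return [tuple(max(0, min(255, v + d)) for v in color) for d in deltas]

frame_region = [(200, 30, 30)] * 6 + [(20, 20, 200)] * 2  # mostly red
dom = dominant_color(frame_region)
variants = primary_variants(dom)
```

Quantizing before counting is what makes the partition robust: visually identical reds that differ by a few intensity levels land in the same bucket instead of splitting the vote.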
A method includes determining whether an application has previously been executed by a computing device. The method includes, responsive to determining that the application has not previously been executed by the computing device, determining, by the application, contextual information associated with the computing device. The method also includes determining, based at least in part on the contextual information, content to include in at least one template graphic user interface of a plurality of template graphical user interfaces for an onboarding tutorial of the application. At least one template graphical user interface is associated with at least one feature of the application. The method also includes generating, based on the at least one template graphical user interface and the content, at least a first graphical user interface of the onboarding tutorial. The method further includes outputting an indication of the first graphical user interface of the onboarding tutorial.
A user device provides a user interface for video manipulation with face replacement. The user device accesses a source video including a group of frames and one or more faces. The user device also provides a set of stickers with alternate face graphics. Upon receiving selection of one of the stickers, one of the faces and one of the frames that includes the face from a user, the user device accesses a face frame sequence. The face frame sequence is a sequence of frames including the selected frame. And each frame of the face frame sequence includes the selected face. The user device sends to a server a request to replace the selected face with the selected sticker in the frame sequence and receives a manipulated video in response to the request, where the selected face is replaced with the selected sticker in each frame of the frame sequence.
A plurality of entities relating to popular search queries are identified. A set of entities representing musical artists or events is selected from the plurality of entities. Based on a history of online actions of a user, a subset of the selected set of entities that is relevant to the user is determined, and personalized music recommendations are created for the user, where the personalized music recommendations comprise music content associated with the determined subset of entities that each represent a musical artist or event relating to the popular search queries. The personalized music recommendations are provided for presentation to the user.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for parameterization of physical dimensions of discrete circuit components for component definitions that define discrete circuit components. The component definitions may be selected for use in a device design. When a parameterization of a particular version of a discrete circuit component definition is changed, the version level of the device design is also changed and the circuit layout for the device design is physically verified for the new version level.
Techniques are described herein for automatically permitting interactive assistant modules to provide access to resources controlled by users. In various implementations, an interactive assistant module may receive a request by a first user for access to a given resource controlled by a second user. The interactive assistant module may lack prior permission to provide the first user access to the given resource. The interactive assistant module may determine attribute(s) of a relationship between the first and second users, as well as attribute(s) of other relationship(s) between the second user and other user(s) for which the interactive assistant module has prior permission to provide access to the given resource. The interactive assistant module may compare the attribute(s) of the relationship with the attribute(s) of the other relationship(s), and may conditionally assume, based on the comparing, permission to provide the first user access to the given resource. Various embodiments describe solutions to the technical problem of managing security in computer-implemented processes carried out by interactive assistant modules.
A method (700) includes receiving a connection request (204) from a network base station (102) on a primary component carrier (CC) (220P) associated with a primary user equipment (UE) (104P), and connecting to the network base station on the primary CC. The method also includes receiving a configuration message (206) from the network base station. The configuration message instructs operation of at least one secondary CC (220S). The at least one secondary CC is associated with at least one secondary UE (104S). The method also includes, in response to receiving the configuration message, instructing the at least one secondary UE to operate on the at least one secondary CC and receive data (208) from the network base station on the at least one secondary CC.
A carrier office includes an optical line terminal (OLT) (120), a first transmit erbium-doped fiber amplifier (EDFA) (430a), and a second transmit-EDFA (430b). The OLT is configured to transmit first and second optical signals (102a, 102b). The first transmit-EDFA is optically coupled to the OLT and a first feeder fiber (110a), and the first feeder fiber is optically coupled to a first remote node (RN). The first transmit-EDFA is operable between a respective enabled state and a respective disabled state. The second transmit-EDFA is optically coupled to the OLT and a second feeder fiber (110b), and the second feeder fiber is optically coupled to a second RN. The second transmit-EDFA is operable between a respective enabled state and a respective disabled state.
A graphical user interface displays inventory data that has been determined based on user supplied data and merchant supplied data. When a user searches for a product on a search engine computing system, the search engine computing system associates the searched items with the user. The search engine computing system logs if a user visits a local merchant location associated with the searched product. The search engine computing system requests inventory data from the user for the product at the local merchant location. The search engine computing system aggregates the user response with other user responses and incorporates the responses with the inventory data provided by the merchant. The inventory display may include one or more inventory metrics to provide more useful inventory data to the user.
A system includes a multiplexer (160) having a pass-band and an optical network unit (ONU) (140) optically coupled to the multiplexer. The ONU includes a tunable laser (310) configured to continuously transmit an optical signal (104) to the multiplexer in a burst-on state and a burst-off state. While in the burst-on state, the ONU is configured to tune the tunable laser to transmit the optical signal at a transmit wavelength within the wavelength pass-band of the multiplexer. The multiplexer is configured to allow passage therethrough of the optical signal at the transmit wavelength. While in the burst-off state, the ONU is configured to tune the tunable laser to transmit the optical signal at a non-transmit wavelength outside of the wavelength pass-band of the multiplexer. The multiplexer is configured to block passage therethrough of the optical signal at the non-transmit wavelength.
H01S 5/0625 - Arrangements for controlling the laser output parameters, e.g. by operating on the active medium by varying the potential of the electrodes in multi-section lasers
A remote node (170) includes a first node input (531), a second node input (532), and an optical switch (600). The optical switch includes a first switch input (610) optically coupled to the first node input, a second switch input (612) optically coupled to the second node input, a first switch output switchably coupled to the first switch input or the second switch input, and a second switch output switchably coupled to the first switch input or the second switch input. The remote node includes a photodiode (520) optically coupled to the second switch output, and a capacitor (540) electrically coupled to the photodiode and the optical switch. When the first switch input is switchably coupled to the first switch output, the second switch input is switchably coupled to the second switch output. Light received by the second switch input passes out the second switch output to the photodiode. The photodiode charges the capacitor to a threshold charge.
A method for updating an application on a computing device includes receiving, at the computing device, a notification that an update is available for the application; responsive to the notification, obtaining, over a first time period having a predetermined length, one or more stability indicators for the application from one or more sources, the one or more stability indicators being generated after the notification is received; and automatically executing the update for the application after the expiration of the first time period when the one or more stability indicators satisfy one or more predetermined vetting rules.
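The vetting decision above can be sketched as a conjunction of rules over the gathered indicators; the specific indicators and thresholds below are invented for illustration.

```python
def should_apply_update(stability_indicators, vetting_rules):
    """Apply the update only if every vetting rule accepts the stability
    indicators gathered during the observation window."""
    return all(rule(stability_indicators) for rule in vetting_rules)

# Hypothetical vetting rules: low crash rate reported by other devices,
# and no rollback signal observed after the update was published.
rules = [
    lambda ind: ind["crash_rate"] < 0.01,
    lambda ind: not ind["rollback_flag"],
]
ok = should_apply_update({"crash_rate": 0.002, "rollback_flag": False}, rules)
bad = should_apply_update({"crash_rate": 0.05, "rollback_flag": False}, rules)
```

Waiting out the fixed observation window before evaluating the rules is what lets indicators "generated after the notification" accumulate at all.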
Methods, systems, and media for analyzing spherical video content are provided. More particularly, methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content by tiling the sphere are provided. In some embodiments, the method comprises: receiving an identifier of a spherical video content item, wherein the spherical video content item has a plurality of views and wherein the spherical video content item is encoded into a plurality of two-dimensional video frames; selecting a first frame of the plurality of two-dimensional video frames associated with the spherical video content item; dividing the first frame into a plurality of tiles spanning the first frame of the spherical video content item; calculating, for each tile of the plurality of tiles, a probability that the tile includes content of a particular type of content; determining, for each tile, whether the probability exceeds a predetermined threshold; in response to determining, for a particular tile, that the probability exceeds the predetermined threshold, causing the content associated with the tile to be analyzed using a video fingerprinting technique; and in response to determining, using the video fingerprinting technique, that the content associated with the tile matches a reference content item of a plurality of reference content items, generating an indication of the match in association with the identifier of the spherical video content item.
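A minimal sketch of the tiling-and-thresholding loop described above, with a list-of-rows "frame", a toy classifier, and an arbitrary 0.5 threshold standing in for the trained probability model:

```python
def tile_frame(frame, tile_rows, tile_cols):
    """Divide a 2-D frame (a list of pixel rows) into a grid of tiles
    spanning the frame."""
    h, w = len(frame), len(frame[0])
    th, tw = h // tile_rows, w // tile_cols
    tiles = []
    for r in range(tile_rows):
        for c in range(tile_cols):
            tiles.append([row[c * tw:(c + 1) * tw]
                          for row in frame[r * th:(r + 1) * th]])
    return tiles

def flag_tiles(tiles, classify, threshold=0.5):
    """Return indices of tiles whose content probability exceeds the
    threshold, to be queued for fingerprint matching."""
    return [i for i, t in enumerate(tiles) if classify(t) > threshold]

frame = [[0, 0, 9, 9],
         [0, 0, 9, 9]]
tiles = tile_frame(frame, 1, 2)
# Toy classifier: mean pixel value scaled to [0, 1]
prob = lambda t: sum(sum(r) for r in t) / (len(t) * len(t[0]) * 9)
flagged = flag_tiles(tiles, prob)
```

Only the flagged tiles are handed to the (comparatively expensive) fingerprinting step, which is the efficiency argument for tiling the sphere in the first place.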
A request of a channel owner is received to enable an online community option to facilitate communications between the channel owner and viewers of a channel of the channel owner on a content sharing platform. The online community option is associated with the channel in a data store, and a channel GUI comprising a GUI element representing the online community option is provided for presentation to the channel owner. An online community GUI is provided to allow the channel owner to submit a post to initiate online conversation with viewers of the channel. The channel GUI is provided for presentation to a viewer of the channel. In response to a selection of the GUI element representing the online community option, the online community GUI comprising the post of the channel owner is provided, and the viewer of the channel is allowed to respond to the post.
A computer-implemented method for speech diarization is described. The method comprises determining temporal positions of separate faces in a video using face detection and clustering. Voice features are detected in the speech sections of the video. The method further includes generating a correlation between the determined separate faces and separate voices based at least on the temporal positions of the separate faces and the separate voices in the video. This correlation is stored in a content store with the video.
Methods, systems, and media for identifying content in stereoscopic videos and, more particularly, for detecting abusive stereoscopic videos by generating fingerprints for multiple portions of a video frame are provided. The method comprises: receiving, from a user device, a video content item for uploading to a content provider; selecting a frame from a plurality of frames of the video content item for generating one or more fingerprints corresponding to the video content item; generating a first fingerprint corresponding to the selected frame, a second fingerprint corresponding to a first encoded portion of the selected frame, and a third fingerprint corresponding to a second encoded portion of the selected frame; comparing each of the first fingerprint, the second fingerprint, and the third fingerprint to a plurality of reference fingerprints corresponding to reference video content items; determining whether at least one of the first fingerprint, the second fingerprint, and the third fingerprint match a reference fingerprint of the plurality of reference fingerprints; and, in response to determining that at least one of the first fingerprint, the second fingerprint, and the third fingerprint match the reference fingerprint, causing an indication of the match to be presented on the user device.
G06F 17/30 - Information retrieval; Database structures therefor
G11B 23/28 - Record carriers not specific to the method of recording or reproducing; Accessories, e.g. containers, specially adapted for co-operation with the recording or reproducing apparatus indicating prior or unauthorised use
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
H04N 21/254 - Management at additional data server, e.g. shopping server or rights management server
H04N 21/835 - Generation of protective data, e.g. certificates
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04N 13/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details thereof
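The multi-fingerprint comparison in the stereoscopic-video abstract above can be sketched as follows. The SHA-256 hash is a toy stand-in for a real perceptual video fingerprint, and the side-by-side left/right split of the frame is an assumption for illustration.

```python
# Hedged sketch of the multi-fingerprint idea: fingerprint the whole frame
# and each half (e.g. the left/right eye views of a side-by-side
# stereoscopic frame), then check each fingerprint against a reference set.

import hashlib

def fingerprint(pixels):
    """Toy fingerprint: SHA-256 over the pixel bytes (illustrative only)."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def frame_fingerprints(frame_rows):
    """Fingerprints for the full frame and its left/right halves."""
    w = len(frame_rows[0])
    full = [p for row in frame_rows for p in row]
    left = [p for row in frame_rows for p in row[:w // 2]]
    right = [p for row in frame_rows for p in row[w // 2:]]
    return [fingerprint(full), fingerprint(left), fingerprint(right)]

def matches_reference(frame_rows, reference_fps):
    """True if any of the three fingerprints appears in the reference set."""
    return any(fp in reference_fps for fp in frame_fingerprints(frame_rows))
```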
39.
FACILITATING CREATION AND PLAYBACK OF USER-RECORDED AUDIO
Methods, apparatus, and computer readable media are described related to recording, organizing, and making audio files available for consumption by voice-activated products. In various implementations, in response to receiving an input from a first user indicating that the first user intends to record audio content, audio content may be captured and stored. Input may be received from the first user indicating at least one identifier for the audio content. The stored audio content may be associated with the at least one identifier. A voice input may be received from a subsequent user. In response to determining that the voice input has particular characteristics, speech recognition performed on the voice input may be biased towards recognition of the at least one identifier. In response to recognizing, based on the biased speech recognition, presence of the at least one identifier in the voice input, the stored audio content may be played.

G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
G09B 5/04 - Electrically-operated educational appliances with audible presentation of the material to be studied
Implementations disclose livestream conversation notifications. A method includes receiving, via a first user device over a network, a livestream video; presenting, via the first user device to a first user, the livestream video; selecting, from contacts of the first user, a set of contacts with whom the livestream video is to be shared, the selecting being based on affinity scores of the contacts; and causing a transmission, to the selected set of contacts, of a notification that at least the first user is watching the livestream video.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
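The affinity-based contact selection in the livestream-notification abstract above can be sketched as follows. The scoring scheme, the cutoff, and the cap on notified contacts are all assumptions not taken from the abstract's claims.

```python
# Minimal sketch (assumed scoring scheme): pick the contacts with the
# highest affinity scores to notify about a livestream, keeping only
# those at or above a cutoff and capping the number notified.

def select_contacts(affinity_scores, max_contacts=3, cutoff=0.5):
    """affinity_scores: dict mapping contact -> affinity score.
    Returns up to max_contacts contacts, highest-affinity first."""
    eligible = [(score, contact) for contact, score in affinity_scores.items()
                if score >= cutoff]
    eligible.sort(reverse=True)  # highest affinity first
    return [contact for _, contact in eligible[:max_contacts]]
```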
41.
DECOMPOSITION OF DYNAMIC GRAPHICAL USER INTERFACES
A system is described that is configured to generate a rendering of a graphical user interface (GUI) for display at a display of a first device and identify a set of dynamic components from the GUI that change during a period of time. The system is further configured to determine respective display information associated with each dynamic component that includes an indication of an image of the corresponding dynamic component; and an indication of a position of the corresponding dynamic component within the GUI during discrete intervals of the period of time. The system is further configured to generate, based on the respective display information, display instructions that configure a second device to display the GUI at a display of the second device, and send, to the second device, the display instructions.
A method (500) includes receiving packets of data (50) from an external network (102) and receiving location information (306) over a wireless communication link (280) from a wireless communication device (150) located inside a building (42). The location information indicates a relative location of the wireless communication device. The method also includes executing beam forming with the wireless communication device based on the received location information to form a communication beam (380) directed toward the wireless communication device. The method also includes transmitting the communication beam over the wireless communication link to the wireless communication device, the communication beam containing the data packets.
H04B 7/02 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas
G01C 21/20 - Instruments for performing navigational calculations
G01S 5/00 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations
G01S 5/02 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using radio waves
G01S 13/48 - Indirect determination of position data using multiple beams at emission or reception
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
H04B 10/2575 - Radio-over-fibre, e.g. radio frequency signal modulated onto an optical carrier
H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
Rendering graphical user interfaces to a user computing device to display commonly categorized entities includes receiving a search request comprising a point of interest query input into a graphical user interface hosted by one or more computing devices. The system determines airports, or other commonly categorized entities, that are closest to the point of interest and displays a list of the entities that are closest to the point of interest. The graphical user interface configures a set of boundaries for a map display on the graphical user interface based on a configured number of entities to be displayed and presents the point of interest and the entities on the map. The graphical user interface displays a transit time for one or more modes of transportation from the point of interest to each of the entities to allow the user to assess the preferred entity.
Methods, systems, and media for enhancing two-dimensional video content items with spherical video content are provided. In some embodiments, the method comprises: receiving an indication of a two-dimensional video content item to be presented on a user device; determining image information associated with one or more image frames of the two-dimensional video content item; identifying spherical video content based on the image information associated with the one or more image frames of the two-dimensional video content item, wherein the spherical video content is related to the determined image information and wherein the spherical video content includes a plurality of views; identifying a position corresponding to a first view of the plurality of views within the related spherical video content at which to insert the two-dimensional video content item; and generating a spherical video content item by inserting the two-dimensional video content item within the related spherical video content at the identified position corresponding to the first view for presentation on the user device, wherein, in response to receiving a user input from the user device to change a viewpoint of the spherical video content item, the related spherical video content within the spherical video content item is modified to a second view of the plurality of views while the two-dimensional content item within the spherical video content item continues to be presented at the identified position.
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
45.
DETERMINATION OF SIMILARITY BETWEEN VIDEOS USING SHOT DURATION CORRELATION
A content system identifies shots in a first video and shots in a second video. Shot durations are determined for the identified shots of each video. A histogram is generated for each video, each histogram dividing the identified shots of the corresponding video into a set of buckets divided according to a range of shot durations. The system determines confidence weights for the buckets of each histogram, with the confidence weight for a bucket based on a likelihood of a particular number of identified shots occurring within the range of shot durations for that bucket. A correlation value is computed for the two videos based on the number of identified shots in each bucket of each respective histogram and based on the confidence weights. The content system determines whether the two videos are similar based on the correlation value and a self-correlation value of each video.
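The histogram and correlation steps in the abstract above can be sketched as follows. The bucket edges and the weighted cosine-style correlation are assumptions for illustration; the abstract does not specify the exact correlation formula or weighting scheme.

```python
# Illustrative sketch: bucket shot durations into a histogram, then
# compute a confidence-weighted correlation between two videos'
# histograms. Bucket edges and the cosine-style measure are assumptions.

import math

def duration_histogram(shot_durations, edges):
    """Count shots falling into each [edges[i], edges[i+1]) bucket."""
    counts = [0] * (len(edges) - 1)
    for d in shot_durations:
        for i in range(len(counts)):
            if edges[i] <= d < edges[i + 1]:
                counts[i] += 1
                break
    return counts

def weighted_correlation(h1, h2, weights):
    """Weighted cosine similarity between two bucket-count histograms."""
    dot = sum(w * a * b for w, a, b in zip(weights, h1, h2))
    n1 = math.sqrt(sum(w * a * a for w, a in zip(weights, h1)))
    n2 = math.sqrt(sum(w * b * b for w, b in zip(weights, h2)))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Two videos whose shots fall into the buckets in the same proportions yield a correlation near 1.0, which a system of this kind might then compare against each video's self-correlation value.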
A computer-implemented method for providing templates for a document to a user, the method comprising detecting a first object in the document, generating a score for each document template in a plurality of document templates by applying a ranking scheme to the document templates, wherein the ranking scheme is based on the first object placed in the document, providing to the user a first subset of the plurality of document templates based on each document template's respective score, receiving, from the user, a selection of a document template from the first subset of the plurality of document templates, and applying the selected document template to the first object in the document.
A method (300) includes receiving an input signal (102) at radio circuitry (200), sampling the input signal, and determining a power level of the sampled input signal. The radio circuitry includes an input switch (230) having an input (232), a first output (234), and a second output (236). The input switch is configured to switch between the first output for the receive mode and the second output for the transmit mode. The method also includes determining whether the power level of the sampled input signal is greater than a threshold power level. When the power level of the sampled input signal is greater than the threshold power level, the method includes switching the input switch to the second output for the transmit mode. When the power level of the sampled input signal is less than or equal to the threshold power level, the method includes switching the input switch to the first output for the receive mode.
H04B 1/48 - Transmit/receive switching in circuits for connecting transmitter and receiver to a common transmission path, e.g. by energy of transmitter
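The power-threshold decision in the abstract above reduces to a simple comparison, sketched here. Computing power as the mean squared sample amplitude is an assumption; the abstract does not specify how the power level is derived from the samples.

```python
# Hedged sketch of the threshold logic: route the input switch to the
# transmit output when sampled power exceeds the threshold, otherwise to
# the receive output. Power here is mean squared sample amplitude.

def mean_power(samples):
    """Mean squared amplitude of the sampled input signal."""
    return sum(s * s for s in samples) / len(samples)

def select_switch_output(samples, threshold_power):
    """Return 'transmit' or 'receive' per the power comparison."""
    return "transmit" if mean_power(samples) > threshold_power else "receive"
```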
48.
APPLICATION PROGRAM INTERFACE FOR MANAGING COMPLICATION DATA
A computing device is described that requests, at a first time and from a data provider, packaged complication data associated with a complication that comprises a graphical notification element on a display device. The computing device receives the packaged complication data that includes a plurality of complication data updates and timing data that defines a respective length of time that each complication data update is to be displayed. The computing device, responsive to receiving the packaged complication data, outputs, for display, a graphical user interface including current time information and the complication including a graphical indication of a first complication data update. The computing device determines a second time at which to output a graphical indication of a second complication data update for display. The computing device replaces, at the second time, the graphical indication of the first complication data update with that of the second complication data update.
Methods, apparatus, and computer readable media related to determining that no resources responsive to a query of a user at a first time satisfy one or more criteria (e.g., one or more quality criteria) and, in response to such a determination: providing for presentation to the user at a later time, content that is based on a given resource that is responsive to the query at the later time and that satisfies the criteria. The given resource that is responsive to the query at the later time may be a resource that is in addition to any resources responsive to the query at the first time or may be a refined version of a resource that was responsive to the query at the first time.
A method (500) of establishing communication between an optical line terminal (120) and an optical network unit (140) within an optical access network (105) includes receiving a signal indication (128) from an optical transceiver (122) of an optical line terminal. The signal indication includes: (i) a loss-of-signal indication (128a) indicating non-receipt of an upstream optical signal (104u) from the optical network unit; or (ii) a signal-received indication (128b) indicating receipt of the upstream optical signal from the optical network unit. The method includes determining whether the signal indication includes the loss-of-signal indication. When the signal indication includes the loss-of-signal indication, the method includes instructing the optical transceiver to cease signal transmission from the optical transceiver to the optical network unit. Moreover, when the signal indication includes the signal-received indication, the method includes instructing the optical transceiver to transmit a downstream optical signal (104d) from the optical transceiver to the optical network unit.
In a streaming application environment, input generated in a remote device may be synchronized with rendered content generated by a virtual streaming application. Frame refresh events passed between the remote device and the streaming application environment enable the streaming application environment to track the frame refresh rate of the remote device, such that input events received from the remote device may be injected into the virtual streaming application at appropriate frame intervals.
Implementations disclose leveraging aggregated network statistics for enhancing quality and user experience for live video streaming from mobile devices. A method includes receiving, by a processing device of a client device, a bandwidth parameter corresponding to aggregated network statistics for at least one of a current geographic location of the client device or a current network of the client device, initializing an upload quality parameter of an upload session based on the received bandwidth parameter, the upload session comprising upload of content from the client device, and modifying, by the processing device, the upload quality parameter during the upload session based on updated bandwidth parameters corresponding to aggregated network conditions for at least one of new geographic locations of the client device or new networks of the client device, the upload quality parameter being used to control a format of the upload session.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/6547 - Transmission by server directed to the client comprising parameters, e.g. for client setup
H04N 21/658 - Transmission by the client directed to the server
H04N 21/647 - Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load or bridging between two different networks, e.g. between IP and wireless
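The initialize-then-adjust flow in the live-streaming abstract above can be sketched as follows. The bandwidth-to-quality ladder and its thresholds are assumptions for illustration, not values from the abstract.

```python
# Illustrative sketch (mapping thresholds are assumptions): initialize an
# upload quality from an aggregated bandwidth estimate, then adjust it as
# updated bandwidth parameters arrive during the upload session.

QUALITY_LADDER = [(0.8, "360p"), (2.5, "720p"), (6.0, "1080p")]

def quality_for_bandwidth(mbps):
    """Highest quality whose minimum bandwidth the estimate satisfies."""
    chosen = "240p"  # fallback below the lowest rung
    for min_mbps, label in QUALITY_LADDER:
        if mbps >= min_mbps:
            chosen = label
    return chosen

def run_upload_session(initial_mbps, updates):
    """Return the quality used at session start and after each updated
    bandwidth parameter (e.g. on a location or network change)."""
    qualities = [quality_for_bandwidth(initial_mbps)]
    for mbps in updates:
        qualities.append(quality_for_bandwidth(mbps))
    return qualities
```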
53.
PROVIDING PROMPT IN AN AUTOMATED DIALOG SESSION BASED ON SELECTED CONTENT OF PRIOR AUTOMATED DIALOG SESSION
Methods, apparatus, and computer readable media related to soliciting feedback from a user regarding one or more content parameters of a suggestion or other content provided by the automated assistant. The user's feedback may be used to influence future suggestions and/or other content subsequently provided, by the automated assistant in future dialog sessions, to the user and/or to other users. In some implementations, content is provided to a user by an automated assistant in a dialog session between the user and the automated assistant, and the automated assistant provides a prompt that solicits user feedback related to the provided content in a future dialog session between the user and the automated assistant. In some of those implementations, the prompt is provided following input from the user and/or output from the automated assistant, in the future dialog session, that is unrelated to the content provided in the previous dialog session.
A head mounted display device (1400) includes a display panel (104) and a lens assembly (124) mounted so that an optical axis of the lens assembly intersects the display panel. The lens assembly includes a lens body (102) having a surface (112) facing the display panel and defining Fresnel prisms (108). A Fresnel prism of the Fresnel prisms has a first facet angle when viewed in a first cross-section and has a second facet angle when viewed in a second cross-section parallel to the first cross-section. The first facet angle is different than the second facet angle.
A method (400) for overlapping spectrum amplification includes receiving an optical signal (102) and splitting the optical signal into a first split signal (102a) having a first wavelength band (λa) and a second split signal (102b) having a second wavelength band (λb). The splitting results in a band gap (G) between the first wavelength band and the second wavelength band. The method further includes delaying the first split signal by a threshold period of time relative to the second split signal and combining the first split signal and the second split signal, resulting in a combined signal (104) having the first wavelength band and the second wavelength band without the band gap therebetween. The path difference between the first split signal along the first signal path (P1) and the second split signal along the second signal path (P2) is within a threshold multipath interference compensation range.
A system for rendering graphical user interfaces that display current and future data to users, the graphical user interfaces generated in response to search queries, comprises a flight search system and an airline system. The flight search system receives current flight data and future flight data for a group of flights from the airline system and stores the data in a database. When the flight search system receives a flight search request comprising desired flight data from a user computing device, the system compares the desired flight data with the stored data to identify one or more flights of the group of flights that match one or more features of the desired flight data. The system presents the current flight data and the future flight data on a graphical user interface to the user when it is likely that the flight data is going to change.
A stream hosting server generates anchors associated with a live stream, each anchor specifying a timestamp of the live stream that represents an opportune moment for a user to join the live stream. When a viewer client device sends a request to join the live stream, the stream hosting server analyzes the anchor list and selects an appropriate anchor. The stream hosting server provides the live stream to the viewer client device beginning at the timestamp specified by the anchor. Thus, the viewer client device can begin displaying the live stream at the opportune moment specified by the anchor. The stream hosting server also creates video on demand (VOD) content that includes a completed live stream as well as anchors associated with the live stream. The viewer client device can display the VOD content beginning at different anchors.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
58.
BITRATE OPTIMIZATION FOR MULTI-REPRESENTATION ENCODING USING PLAYBACK STATISTICS
Implementations disclose bitrate optimization for multi-representation encoding using playback statistics. A method includes generating multiple versions of a segment of a source video, the versions comprising encodings of the segment at different encoding bitrates for each resolution of the segment, measuring a quality metric for each version of the segment, generating rate-quality models for each resolution of the segment based on the measured quality metrics corresponding to the resolutions, generating a probability model to predict requesting probabilities that representations of the segment are requested, the probability model based on a joint probability distribution of network speed and viewport size that is generated from client-side feedback statistics associated with prior playbacks of other videos, determining an encoding bitrate for each of the representations of the segment based on the rate-quality models and the probability model, and assigning determined encoding bitrates to corresponding representations of the segment.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
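The bitrate-assignment idea in the abstract above can be sketched as follows. The rate-quality model shapes, the candidate bitrates, the budget constraint, and the greedy search are all assumptions for illustration; the abstract does not specify the optimization procedure.

```python
# Hedged sketch: given a per-resolution rate-quality model and a
# probability that each representation is requested, pick an encoding
# bitrate per representation, here with a simple greedy pass under a
# total bitrate budget.

def expected_quality(assignment, rate_quality, request_prob):
    """Request-probability-weighted quality summed over representations."""
    return sum(request_prob[res] * rate_quality[res](bitrate)
               for res, bitrate in assignment.items())

def assign_bitrates(resolutions, candidates, rate_quality, request_prob,
                    budget):
    """Greedy pass: for each resolution (in the given priority order),
    pick the best affordable candidate bitrate within the budget."""
    assignment = {}
    remaining = budget
    for res in resolutions:
        affordable = [b for b in candidates[res] if b <= remaining]
        if not affordable:
            continue  # skip this representation if nothing fits
        best = max(affordable, key=lambda b: rate_quality[res](b))
        assignment[res] = best
        remaining -= best
    return assignment
```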
Computer-implemented systems and methods are described for configuring a plurality of privacy properties for a plurality of virtual objects associated with a first user and a virtual environment being accessed using a device associated with the first user, triggering for display, in the virtual environment, the plurality of virtual objects to the first user accessing the virtual environment, and determining whether at least one virtual object is associated with a privacy setting corresponding to the first user. In response to determining that a second user is attempting to access the at least one virtual object, a visual modification may be applied to the at least one virtual object based on the privacy setting. The method may also include triggering for display, to the second user, the visual modification of the at least one virtual object while continuing to trigger display of the at least one virtual object without the visual modification to the first user.
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/5255 - Changing parameters of virtual cameras according to dedicated instructions from a player, e.g. using a secondary joystick to rotate the camera around a player's character
A63F 13/71 - Game security or game management aspects using secure communication between game devices and game servers, e.g. by encrypting game data or authenticating players
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/25 - Output arrangements for video game devices
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/75 - Enforcing rules, e.g. detecting foul play or generating lists of cheating players
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
An assistant executing at at least one processor is described that determines content for a conversation with a user of a computing device and selects, based on the content and information associated with the user, a modality for signaling initiation of the conversation with the user. The assistant further causes, in the selected modality, a signaling of the conversation with the user.
An example method includes receiving, by a computational assistant executing at one or more processors, a representation of an utterance spoken at a computing device; identifying, based on the utterance, a task to be performed; determining a capability level of a first party (1P) agent to perform the task; determining capability levels of respective third party (3P) agents of a plurality of 3P agents to perform the task; responsive to determining that the capability level of the 1P agent does not satisfy a threshold capability level, that a capability level of a particular 3P agent of the plurality of 3P agents is a greatest of the determined capability levels, and that the capability level of the particular 3P agent satisfies the threshold capability level, selecting the particular 3P agent to perform the task; and performing one or more actions determined by the selected agent to perform the task.
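The selection rule in the abstract above can be sketched as follows. The names, the numeric capability scale, and the dictionary representation of third-party (3P) agents are illustrative assumptions.

```python
# Sketch of the described selection rule: prefer the first-party (1P)
# agent when it satisfies the threshold capability level; otherwise pick
# the most capable third-party (3P) agent that satisfies the threshold.

def select_agent(fp_capability, tp_capabilities, threshold):
    """tp_capabilities: dict mapping 3P agent name -> capability level.

    Returns the agent chosen to perform the task, or None if no agent
    satisfies the threshold capability level."""
    if fp_capability >= threshold:
        return "1P"
    best_agent = max(tp_capabilities, key=tp_capabilities.get, default=None)
    if best_agent is not None and tp_capabilities[best_agent] >= threshold:
        return best_agent
    return None
```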
A camera captures an image of a user wearing a head mounted device (HMD) that occludes a portion of the user's face. A 3-D pose that indicates an orientation and a location of the user's face in a camera coordinate system is determined. A representation of the occluded portion of the user's face is determined based on a 3-D model of the user's face. The representation replaces a portion of the HMD in the image based on the 3-D pose of the user's face in the camera coordinate system. Mixed reality images can be generated by combining virtual reality images, unoccluded portions of the user's face, and representations of an occluded portion of the user's face.
An example method includes receiving, by one or more processors, a representation of an utterance spoken at a computing device; identifying, by a first computational agent from a plurality of computational agents and based on the utterance, a multi-element task to be performed, wherein the plurality of computational agents includes one or more first party computational agents and a plurality of third-party computational agents; and performing, by the first computational agent, a first sub-set of elements of the multi-element task, wherein performing the first sub-set of elements comprises selecting a second computational agent from the plurality of computational agents to perform a second sub-set of elements of the multi-element task.
An example method includes receiving, by a computational assistant executing at one or more processors, a representation of an utterance spoken at a computing device; selecting, based on the utterance, an agent from a plurality of agents, wherein the plurality of agents includes one or more first party agents and a plurality of third-party agents; responsive to determining that the selected agent comprises a first party agent, selecting a reserved voice from a plurality of voices; and outputting synthesized audio data using the selected voice to satisfy the utterance.
A system includes: a chip (100) including a superconducting quantum computing circuit element (118); a printed circuit board (102) including a laminate sheet (114), a first superconductor layer including a signal line (110) and a ground line (112) on a first side of the laminate sheet, a second superconductor layer (122) on a second side of the laminate sheet, the second side opposing the first side, and a via (126) extending from the first superconductor layer through the laminate sheet to the second superconductor layer, in which the via includes a third superconductor material (124) that electrically connects the first superconductor layer to the second superconductor layer; and a superconductor coupling element (116a, 116b) that electrically couples the chip to the first superconductor layer of the printed circuit board.
H05K 1/11 - Printed elements for providing electric connections to or between printed circuits
H01L 39/00 - Devices using superconductivity or hyperconductivity; Processes or apparatus specially adapted for the manufacture or treatment thereof or of parts thereof
G06N 99/00 - Subject matter not provided for in other groups of this subclass
66.
DEEP REINFORCEMENT LEARNING FOR ROBOTIC MANIPULATION
Implementations utilize deep reinforcement learning to train a policy neural network that parameterizes a policy for determining a robotic action based on a current state. Some of those implementations collect experience data from multiple robots that operate simultaneously. Each robot generates instances of experience data during iterative performance of episodes that are each explorations of performing a task, and that are each guided based on the policy network and the current policy parameters for the policy network during the episode. The collected experience data is generated during the episodes and is used to train the policy network by iteratively updating policy parameters of the policy network based on a batch of collected experience data. Further, prior to performance of each of a plurality of episodes performed by the robots, the current updated policy parameters can be provided (or retrieved) for utilization in performance of the episode.
G05B 13/02 - Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
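The collect-and-update loop the abstract describes can be sketched without any neural-network machinery. In the toy version below the "policy network" is a single scalar preference, and the task, exploration noise, episode length, and update rule are all hypothetical stand-ins; only the structure (robots fetch the current parameters before each episode, experience is batched, parameters are updated iteratively) follows the abstract:

```python
import random

class PolicyStore:
    """Holds the current policy parameters shared by all robots."""
    def __init__(self, params):
        self.params = params
        self.version = 0

    def update(self, batch, lr=0.2):
        # toy update: move the preference toward the best-rewarded action seen
        best_action = max(batch, key=lambda e: e[2])[1]
        self.params["preference"] += lr * (best_action - self.params["preference"])
        self.version += 1

def run_episode(robot_id, params, steps=5):
    """One exploration episode, guided by the policy parameters fetched
    at the start of the episode (as in the abstract)."""
    experience = []
    for t in range(steps):
        state = (robot_id, t)
        # explore by perturbing the action the current policy prefers
        action = params["preference"] + random.uniform(-1, 1)
        reward = -abs(action - 2.0)  # hypothetical task: act as close to 2.0 as possible
        experience.append((state, action, reward))
    return experience

def train(num_robots=4, rounds=50, batch_size=8):
    store = PolicyStore({"preference": 0.0})
    buffer = []
    for _ in range(rounds):
        # robots operate "simultaneously" in the abstract; sequential here for clarity
        for robot in range(num_robots):
            buffer.extend(run_episode(robot, dict(store.params)))
        # iteratively update the policy parameters on batches of collected experience
        while len(buffer) >= batch_size:
            store.update(buffer[:batch_size])
            del buffer[:batch_size]
    return store.params["preference"]
```

With this stand-in update rule the preference drifts toward the rewarded region around 2.0, which is all the sketch is meant to show.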
This disclosure relates to systems and methods for proactively determining identification information for a plurality of audio segments within a plurality of broadcast media streams, and providing identification information associated with specific audio portions of a broadcast media stream automatically or upon request.
G10L 21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
G10L 25/54 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for retrieval
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for processing of video signals
G10L 25/81 - Detection of presence or absence of voice signals for discriminating voice from music
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
An optical connector assembly (10) includes a spring (400), a ferrule (300), a first housing (210), and a second housing (230) connected to the first housing. The ferrule includes a ferrule body (310) and a lens (360). The ferrule body defines a fiber receiver (312) configured to receive optical fibers (114) of an optical cable (100, 110) and a first spring receiver (314) configured to receive the spring. The first housing defines a first opening (220) configured to slidably receive and guide the ferrule for movement along a first longitudinal axis (A1). The second housing defines a second opening (240) configured to receive the optical cable therethrough along a second longitudinal axis (A2), and a second spring receiver (234) configured to receive the spring. The spring biases movement of the ferrule in the first housing away from the second housing.
A system and method for presentation of media related to a context. A request is received over a network from a requesting device for media related to a context, wherein the request comprises at least one criterion. A query is formulated based on the context criteria so as to search, via the network, for user profile data, social network data, spatial data, temporal data and topical data that is available via the network and relates to the context and to media files so as to identify at least one media file that is relevant to the context criteria. A playlist is assembled via the network containing a reference to the media files. The media files on the playlist are transmitted over the network to the requesting device.
G06F 17/30 - Information retrieval; Database structures therefor
H04W 4/02 - Services making use of location information
H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
Implementations relate to providing animated user identifiers. In some implementations, a computer-executed method includes determining that a video call over a communication network is connected between a first device associated with a first user and a second device associated with a second user. The method stores a set of multiple images that are received by the first device as part of the video call, and forms a motion clip including the set of multiple images and indicating a sequence of the set of multiple images for display. The method assigns the motion clip to a user identifier associated with the second user, and causes display of the motion clip to visually represent the second user in response to the user identifier being displayed in at least one user interface on the first device.
A near-eye display system (100, 600) includes a display panel (102), a beam steering assembly (104) facing the display panel, a display controller (108), and a beam steering controller (110). The beam steering assembly imparts one of a plurality of net deflection angles to incident light. The display controller drives the display panel to display a sequence of images, and the beam steering controller controls the beam steering assembly to impart a different net deflection angle for each displayed image of the sequence. The sequence of images, when displayed within the visual perception interval, may be perceived as a single image having a resolution greater than that of the display panel, or having larger apparent pixel sizes that conceal the black space between pixels of the display. Alternatively, the sequence of images may represent a lightfield, with the angular information represented in the net deflection angles imparted to the images as they are projected.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer software to enable uploading, posting, showing, displaying, tagging, blogging, and sharing electronic media or information over the Internet and other communications networks; computer software for broadcasting, electronic transmission, and streaming of gaming digital media content

Providing temporary use of non-downloadable software to enable uploading, capturing, posting, showing, editing, playing, streaming, viewing, previewing, displaying, tagging, blogging, sharing, manipulating, distributing, publishing, reproducing, and providing electronic media, multimedia content, videos, movies, pictures, images, text, photos, user-generated content, audio content, and information via the Internet and other communications networks; providing temporary use of non-downloadable software to enable sharing of multimedia content and comments among users; online hosting of multimedia content for others; online hosting of multimedia entertainment and educational content for others; hosting computer websites; online hosting of databases for others
A computing unit is disclosed, comprising a first memory bank for storing input activations and a second memory bank for storing parameters used in performing computations. The computing unit includes at least one cell comprising at least one multiply accumulate (“MAC”) operator that receives parameters from the second memory bank and performs computations. The computing unit further includes a first traversal unit that provides a control signal to the first memory bank to cause an input activation to be provided to a data bus accessible by the MAC operator. The computing unit performs one or more computations associated with at least one element of a data array, the one or more computations being performed by the MAC operator and comprising, in part, a multiply operation of the input activation received from the data bus and a parameter received from the second memory bank.
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06F 13/28 - Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access, cycle steal
G06N 3/04 - Architecture, e.g. interconnection topology
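Functionally, the datapath in the abstract reduces to a multiply-accumulate loop over activations and parameters. The sketch below models the two memory banks as plain Python lists and illustrates only the arithmetic, not the hardware, bus, or traversal unit:

```python
def mac_compute(activations, weights):
    """Toy model of the compute unit: input activations arrive from the
    first memory bank over a shared data bus; each row of parameters
    streams from the second bank into a cell's MAC operator."""
    outputs = []
    for row in weights:          # one output element per parameter row
        acc = 0
        for a, w in zip(activations, row):
            acc += a * w         # the multiply-accumulate step
        outputs.append(acc)
    return outputs
```

For example, `mac_compute([1, 2, 3], [[1, 0, 0], [1, 1, 1]])` returns `[1, 6]`.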
75.
Automatic suggestions and other content for messaging applications
A messaging application may automatically analyze content of one or more messages and/or user information to automatically provide suggestions to a user within a messaging application. The suggestions may automatically incorporate particular non-messaging functionality into the messaging application. The automatic suggestions may suggest one or more appropriate responses to be selected by a user to respond in the messaging application, and/or may automatically send one or more appropriate responses on behalf of a user.
A near-eye display device (100, 400) includes a display panel (102, 408, 410) with an array of photon-emitting cells (111, 112, 113, 114, 115) interspersed with photon-detecting cells (121, 122, 123) and a display controller (208) coupled to the display panel, the display controller to control the display panel to display imagery using the array of photon-emitting cells. The device further includes a camera controller (210) coupled to the display panel, the camera controller to control the display panel to capture imagery of an eye (108) of a user using the photon-detecting cells. The device also includes an eye-tracking module (214) coupled to the camera controller, the eye-tracking module to construct a three-dimensional representation (806, 906) of the eye based on the captured imagery.
Systems and methods for identifying related videos based on elements tagged in the videos are presented. In an aspect, a system includes an identification component configured to identify tagged elements in a video, a matching component configured to identify other videos that include one or more of the tagged elements, and a recommendation component configured to recommend the other videos for viewing based on a current or past request to play the video.
The present disclosure can select a communication identifier for a device of a content provider. A system receives a request for content for display. The system identifies a content item responsive to the request. The system determines a feature of a computing device and a feature of the content item. The system selects a type of phone number for a content provider of the content item based on both the feature of the computing device and the feature of the content item. The system identifies a phone number for the content item corresponding to the type of phone number. The system transmits the phone number for the content item for display via the computing device. The system identifies, responsive to an indication to establish a communication corresponding to the phone number for the content item, a phone number for the device of the content provider.
A system and machine-implemented method for providing a font is provided. A request is received from a client device to download a font. The requested font is accessed, where the accessed font includes a corresponding character map and a corresponding glyph table. A supported character list and a modified font are generated based on the corresponding character map, the modified font is compressed, and the supported character list and the compressed modified font are sent to the client device. Character data is also sent to the client device, wherein the character data is for merging at least one character into the modified font based on information in the character data.
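One way to read the flow above is as a subsetting step followed by compression. The sketch below is only an illustration of that reading: the character map and glyph table are plain Python containers, zlib stands in for whatever font compression the system actually uses, and all names are hypothetical:

```python
import zlib

def subset_font(char_map, glyph_table, requested_chars):
    """Build a supported-character list and a modified (subset) font:
    keep only the glyphs for requested characters present in the
    character map, then compress the modified font for the client."""
    supported = sorted(c for c in requested_chars if c in char_map)
    modified = {c: glyph_table[char_map[c]] for c in supported}
    payload = repr(sorted(modified.items())).encode()
    return supported, zlib.compress(payload)
```

A client can later decompress the payload and, per the abstract, merge further characters into the subset as character data for them arrives.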
A display panel (400) comprises a display layer (410) including a plurality of pixel arrays (426) offset from each other by spacing regions (435) and a screen layer (415) disposed over the display layer (410) with each of the pixel arrays (426) aligned to project an image portion onto a corresponding portion of the screen layer (415). The screen layer (415) includes a transparent substrate (450), a Fresnel lens layer (455), a diffusing layer (460) and an array of upper spacer supports (440), made of metal, to support the transparent substrate (450) a first fixed distance from the display layer (410). Each of the upper spacer supports (440) is positioned on one of the spacing regions (435). An array of lower spacer supports (422), aligned with the upper spacer supports (440), is arranged to support the display layer (410) and forms air cavities defining optical pathways (424) which guide light from light sources (421) to the screen layer (415).
G09F 9/302 - Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements in which the desired character or characters are formed by combining individual elements characterised by the form or geometrical disposition of the individual elements
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G03B 21/10 - Projectors with built-in or built-on screen
Techniques of data compression involve performing a separate compression operation on each set of corresponding bits of a sequence of bit strings in which each bit string represents a number having an upper bound. Advantageously, compressing the sets of corresponding bits produces an improved compression ratio over compressing each number in the sequence. Further, decompression is straightforward as long as sequence order is preserved and the upper bound of each number in the sequence is known.
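Under the stated assumptions (a known upper bound and preserved sequence order), the scheme can be sketched as follows, with run-length coding standing in for whichever compressor each bit column actually receives:

```python
from itertools import groupby

def to_bitplanes(values, upper_bound):
    """Split bounded integers into per-position bit columns (bit planes)."""
    width = max(1, upper_bound.bit_length())
    # plane i holds bit i of every value, preserving sequence order
    return [[(v >> i) & 1 for v in values] for i in range(width)], width

def rle(bits):
    """Run-length encode one bit plane as (bit, run_length) pairs."""
    return [(b, len(list(g))) for b, g in groupby(bits)]

def rld(runs):
    return [b for b, n in runs for _ in range(n)]

def compress(values, upper_bound):
    planes, width = to_bitplanes(values, upper_bound)
    return [rle(p) for p in planes], width, len(values)

def decompress(encoded, width, n):
    """Straightforward, since order and the value width are known."""
    planes = [rld(runs) for runs in encoded]
    return [sum(planes[i][j] << i for i in range(width)) for j in range(n)]
```

For the sequence `[3, 3, 2, 2, 7]` with upper bound 7, the middle bit plane is all ones and collapses to a single run, which is where the improved compression ratio over per-number coding comes from.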
Systems and methods are provided for determining candidate pick-up locations. For instance, responsive to receiving a request from a user for a ride, one or more candidate pick-up locations proximate a current location of the user can be determined. The candidate pick-up locations can be determined at least in part by ranking a plurality of locations proximate the current location of the user in view of one or more travel parameters and a destination specified by the user. The user may select a candidate pick-up location as a selected pick-up location, and the selected pick-up location may be provided to a car service or ride share platform to facilitate a pick-up.
A display panel includes an array of display pixels to output an image. The array of display pixels includes a central pixel region and a perimeter pixel region. The central pixel region includes central pixel units each having three different colored sub-pixels. The different colored sub-pixels of the central pixel units are organized according to a central layout pattern that repeats across the central pixel region. The perimeter pixel region is disposed along a perimeter of the central pixel region and includes perimeter pixel units that increase a brightness of the image along edges of the central pixel region to mask gaps around the array of display pixels when tiling the array of display pixels with other arrays of display pixels.
A system and methodology provide for annotating videos with entities and associated probabilities of existence of the entities within video frames. A computer-implemented method identifies an entity from a plurality of entities identifying characteristics of video items. The computer-implemented method selects a set of features correlated with the entity based on a value of a feature of a plurality of features, determines a classifier for the entity using the set of features, and determines an aggregation calibration function for the entity based on the set of features. The computer-implemented method selects a video frame from a video item, the video frame having associated features, and determines a probability of existence of the entity based on the associated features using the classifier and the aggregation calibration function.
A method includes receiving data (10) corresponding to one of streaming data or batch data and a content of the received data for computation. The method also includes determining an event time of the data for slicing the data, determining a processing time to output results of the received data, and emitting at least a portion of the results of the received data based on the processing time and the event time.
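One common reading of the event-time/processing-time split above can be sketched as follows: records are sliced into fixed event-time windows, and a window's result is emitted once a watermark passes the window's end. Taking the watermark to be simply the largest event time seen so far is a simplifying assumption of this sketch, not something the abstract specifies:

```python
from collections import defaultdict

def windowed_sums(records, width):
    """records: (event_time, value) pairs in arrival (processing) order.
    Values are sliced into fixed-width event-time windows; each window's
    sum is emitted once the watermark passes the window's end, and any
    still-open windows are flushed at end of input (the batch case)."""
    windows = defaultdict(list)          # window start -> values
    emitted, watermark = [], float("-inf")
    for event_time, value in records:
        windows[event_time - event_time % width].append(value)
        watermark = max(watermark, event_time)
        # emit (and retire) every window the watermark has moved past
        for start in sorted(w for w in list(windows) if w + width <= watermark):
            emitted.append((start, sum(windows.pop(start))))
    for start in sorted(windows):
        emitted.append((start, sum(windows.pop(start))))
    return emitted
```

For example, `windowed_sums([(1, 1), (5, 2), (12, 3)], 10)` returns `[(0, 3), (10, 3)]`: the first window is emitted as soon as event time 12 arrives, the second only at the final flush.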
Techniques and mechanisms to provide for improved image display in an area of overlapping projections. In an embodiment, a multi-layer projection screen comprises light sources and collimation structures each disposed over a corresponding one of such light sources. A first collimation structure disposed over a first light source collimates first light from the first light source. The first collimation structure further receives and redirects second light from a second light source disposed under a second collimation structure that adjoins the first collimation structure. In another embodiment, the first collimation structure redirects the second light from the second light source away from the direction of collimation of the first light. A stray light rejection layer of the multi-layer projection screen passes a majority of the first light for inclusion as part of a projected image, and prevents a majority of the second light from inclusion in the projected image.
A computing device is described that obtains an indication of movement associated with the computing device, and responsive to determining that the movement does not satisfy an activity threshold indicative of a user of the computing device being in a physically active state, determines, based at least in part on contextual information associated with the computing device, a recommended physical activity for the user to perform, and determines, based at least in part on the contextual information, a current activity associated with the user. Responsive to determining that a degree of likelihood that the recommended physical activity can be performed concurrently with the current activity satisfies a probability threshold, the computing device outputs a notification of the recommended physical activity.
G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00; data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q; healthcare informatics G16H)
88.
Automatic connection of images using visual features
Aspects of the disclosure relate to generating navigation paths between images. A first image taken from a first location and a second image taken from a second location may be selected. A position of the first location in relation to the second location may be determined. First and second frames for the first and second images may be selected based on the position. First and second sets of visual features for each of the first and second image frames may be identified. Matching visual features between the first set of visual features and the second set of visual features may be determined. A confidence level for a line-of-sight between the first and second images may be determined by evaluating one or more positions of the matching visual features. Based on at least the confidence level, a navigation path from the first image to the second image is generated.
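A minimal sketch of the matching step: descriptors are plain float tuples, matching is greedy nearest-neighbour, and the confidence measure is a deliberately naive stand-in (fraction of the smaller feature set that matched). The abstract's evaluation of the matches' positions is omitted here:

```python
def match_features(desc_a, desc_b, max_dist=0.25):
    """Greedily match each descriptor in desc_a to its nearest unused
    neighbour in desc_b, accepting only sufficiently close pairs."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    matches, used = [], set()
    for i, da in enumerate(desc_a):
        j, d = min(((j, dist(da, db)) for j, db in enumerate(desc_b)
                    if j not in used), key=lambda t: t[1], default=(None, None))
        if j is not None and d <= max_dist:
            matches.append((i, j))
            used.add(j)
    return matches

def connection_confidence(desc_a, desc_b, min_matches=3):
    """Toy confidence for a line-of-sight between two images: the
    fraction of the smaller feature set that found a match, zeroed
    below a minimum match count."""
    matches = match_features(desc_a, desc_b)
    if len(matches) < min_matches:
        return 0.0
    return len(matches) / min(len(desc_a), len(desc_b))
```

A navigation path would then be generated only between image pairs whose confidence clears some threshold.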
The present disclosure is directed to providing call context to content providers. A tracker receives a selection of a content item associated with a keyword. The tracker stores, in an impression data structure, tracking data including the keyword. The tracker maps the selected content item to a first virtual number and generates a link therebetween. The tracker receives a call from a client device to initiate a first communication channel via the first virtual number. The tracker performs a lookup in a database using the first virtual number to identify a second virtual number corresponding to the content provider and to identify the tracking data. The tracker establishes, via the second virtual number, a second communication channel between the client device and a content provider device. The tracker provides the tracking data to the content provider via the second communication channel.
In one general aspect, a method can include setting an alarm on a computing device. The setting can include setting a predetermined time to trigger the alarm, indicating a target application to launch when the alarm is triggered, and identifying content for access by the target application when the target application is launched. The method can include launching the target application based on the triggering of the alarm, identifying an external device for execution of the identified content, and providing the identified content for execution on the external device.
An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
Examples described may relate to methods and systems for controlling permission requests for applications running on a computing device to access resources provided by the computing device. A computing device may maintain, in memory, responses to permission requests for a given application. The computing device may receive responses to a first permission request that includes two selectable options to either allow or deny access to a particular resource. The computing device may determine whether a number of the responses to the first request that indicate to deny access exceeds a predefined threshold. If the number exceeds the threshold, the computing device may provide, at a run-time of the application subsequent to presentation of the first request, and based on the application attempting to access the resource, a modified permission request that includes, in addition to the two selectable options, a selectable option to prevent requesting permission to access the resource.
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
Methods and systems are provided for concealing identifying data that may be used to identify a beacon or device in broadcasts, unless an observer device is able, directly or indirectly via an authorized resolver device, to translate an encrypted broadcast into the identifiable information. The wireless security scheme disclosed herein also pertains to resolving the concealed data messages to obtain the identifiable information.
H04W 12/04 - Key management, e.g. using generic bootstrapping architecture [GBA]
H04W 12/02 - Protecting privacy or anonymity, e.g. protecting personally identifiable information [PII]
H04W 4/06 - Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; Services to user groups; One-way selective calling services
H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
H04L 29/06 - Communication control; Communication processing characterised by a protocol
94.
Automatic method for photo texturing geolocated 3D models from geolocated imagery
A method and system for applying photo texture to geolocated 3D models operates within a 3D modeling system. The modeling system includes a modeling application operating on a workstation and a database of geotagged imagery. A 3D model created or edited within the 3D modeling system is geolocated such that every point in the 3D modeling space corresponds to a real world location. For a selected surface, the method and system search the database of imagery to identify one or more images depicting the selected surface of the 3D model. The method and system identify the boundaries of the selected surface within the image by transforming two or more sets of coordinates from the 3D modeling space to a coordinate space corresponding to the image. The portion of the image corresponding to the selected surface is copied and mapped to the selected surface of the 3D model.
In one example, a computing system includes at least one processor, a communication unit, and a predictive knowledge system. The predictive knowledge system is operable by the at least one processor to determine, based at least in part on the current location of the computing device, a particular geographic region from a plurality of defined geographic regions, the particular geographic region including the current location of the computing device, determine, based on an aggregated web access history for a plurality of computing devices, a content source associated with the particular geographic region, receive, from the content source, content designated for use by the predictive knowledge system, and send, via the communication unit and to the computing device, at least a portion of the content.
Certain implementations of the disclosed technology may include systems and methods for providing notifications relating to context-based features of a mobile device. According to an example implementation, a method is provided for receiving an indication of contextual information and an indication of historical information. The method also includes determining an environmental context of the mobile device from the contextual information and the historical information. The method also includes determining whether a usage criterion associated with a context-based feature associated with the environmental context has been met. The method also includes outputting an indication of the determination that the context-based feature has not met the usage criterion, such that the mobile device outputs a notification related to the context-based feature.
A system and method to profile an application use and identify data used for application execution, map the identified data for application execution to a virtual memory associated with application execution, including execution beginning at specific times, states or stages of the application, and transmit the virtual memory to an end user wishing to demonstrate the application on an end user device. The end user device can emulate the application from any desired application start time, state or stage using data at the end user device identified by the virtual memory.
A merchant and a user register with a payment processing system, which establishes a facial template based on a user image. The user signs into a payment application via a user computing device, which receives an identifier from a merchant beacon device to transmit to the payment processing system. The payment processing system transmits facial templates to the merchant point-of-sale (POS) device for the user and for other users who are also signed in to the payment application in range of the merchant beacon device. The merchant POS device determines whether it has a threshold number of facial templates and may request and receive additional facial templates from the payment processing system to meet the threshold. The merchant POS operator selects a facial template corresponding to the user. The merchant POS device transmits transaction details to the payment processing system, which processes a transaction with an issuer system.
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
A method may provide, by a content distribution system, access to interactive content, to a group of users and obtain a social media data indicating an interaction level of the users on a social network. The method may determine a content sharing rating for the users based on the social media data and select a user from the group based on the content sharing rating. The method may determine a recommendation for an incentive to be provided to the user within the interactive content, in exchange for the user performing an action to connect the interactive content to the user on a social network. The method may provide the recommendation to an administrative system that administers the interactive content. The method avoids the sending of an excessive number of invitations to members of a network and thereby avoids waste of network resources.
Parallel processing of data may include a set of map processes and a set of reduce processes. Each map process may include at least one map thread. Map threads may access distinct input data blocks assigned to the map process, and may apply an application specific map operation to the input data blocks to produce key-value pairs. Each map process may include a multiblock combiner configured to apply a combining operation to values associated with common keys in the key-value pairs to produce combined values, and to output intermediate data including pairs of keys and combined values. Each reduce process may be configured to access the intermediate data output by the multiblock combiners. For each key, an application specific reduce operation may be applied to the combined values associated with the key to produce output data.
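Ignoring threading and distribution, the data flow above reads like a word-count pipeline. The single-process sketch below folds the multiblock combiner into the map phase, as the abstract describes; the function names and the word-count example are illustrative, not from the source:

```python
from collections import defaultdict

def map_process(blocks, map_op, combine_op):
    """One map process: apply the application-specific map operation to
    each assigned input block, then let the multiblock combiner fold
    values for common keys across all of the process's blocks."""
    combined = {}
    for block in blocks:
        for key, value in map_op(block):
            combined[key] = combine_op(combined[key], value) if key in combined else value
    return combined   # intermediate data: key -> combined value

def reduce_process(intermediates, reduce_op):
    """Access the intermediate outputs of every map process and apply
    the application-specific reduce operation per key."""
    grouped = defaultdict(list)
    for partial in intermediates:
        for key, value in partial.items():
            grouped[key].append(value)
    return {key: reduce_op(values) for key, values in grouped.items()}

# word count: map emits (word, 1); combiner and reducer both sum
word_map = lambda text: [(w, 1) for w in text.split()]
parts = [map_process(["a b a"], word_map, lambda x, y: x + y),
         map_process(["b b", "a"], word_map, lambda x, y: x + y)]
counts = reduce_process(parts, sum)
```

The combiner's payoff is that each map process ships one combined value per key instead of one pair per occurrence.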