Techniques for using validated communications identifiers of a user's communications profile to resolve entries in another user's contact list are described. When a user imports a contact list, the contact list may include multiple entries related to the same person. The system may identify one of the entries in the contact list that corresponds to a validated communications identifier stored in another user's communications profile. The system may identify other validated communications identifiers in the other user's communications profile and cross-reference them against the entries of the contact list. If the system determines the contact list includes entries for the different validated communications identifiers of the other user, the system may consolidate the entries into a single entry associated with the other user.
H04M 1/27453 - Directories allowing storage of additional subscriber data, e.g. metadata
H04M 1/2757 - Devices whereby a plurality of signals may be stored simultaneously with provision for storing more than one subscriber number at a time using static electronic memories, e.g. chips providing data content by data transmission, e.g. downloading
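The consolidation step described above can be illustrated with a small sketch. This is not the patented implementation; the entry structure, field names, and merge policy are assumptions for illustration only.

```python
# Illustrative sketch: consolidate contact-list entries whose identifiers
# all match validated identifiers from a single user's profile.

def consolidate_contacts(contact_list, validated_ids):
    """contact_list: list of dicts like {"name": ..., "ids": set(...)}.
    validated_ids: set of identifiers validated for one other user."""
    matched, others = [], []
    for entry in contact_list:
        (matched if entry["ids"] & validated_ids else others).append(entry)
    if len(matched) <= 1:
        return contact_list          # nothing to consolidate
    merged = {
        "name": matched[0]["name"],  # keep the first matched entry's name
        "ids": set().union(*(e["ids"] for e in matched)),
    }
    return [merged] + others

contacts = [
    {"name": "Ana", "ids": {"+1-555-0100"}},
    {"name": "Ana W.", "ids": {"ana@example.com"}},
    {"name": "Bob", "ids": {"+1-555-0199"}},
]
profile = {"+1-555-0100", "ana@example.com"}  # validated identifiers
result = consolidate_contacts(contacts, profile)
```

Here the two entries referring to the same person collapse into one entry carrying both validated identifiers, while unrelated entries are left untouched.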
The following description is directed to a configurable logic platform. In one example, a configurable logic platform includes host logic and a reconfigurable logic region. The reconfigurable logic region can include logic blocks that are configurable to implement application logic. The host logic can be used for encapsulating the reconfigurable logic region. The host logic can include a host interface for communicating with a processor. The host logic can include a management function accessible via the host interface. The management function can be adapted to cause the reconfigurable logic region to be configured with the application logic in response to an authorized request from the host interface. The host logic can include a data path function accessible via the host interface. The data path function can include a layer for formatting data transfers between the host interface and the application logic.
Methods and systems for improved quality of a streaming session using multiple simultaneous streams. For the multiple simultaneous streams, an audio/video device (A/V device) records and generates a high-resolution stream and a low-resolution stream for simultaneous transmission to a server. The server selects one of the two streams for retransmission to a destination client device. The server also monitors the streaming session, estimates a total available bandwidth between the server and the A/V device, and assigns a confidence value to the bandwidth estimation. The server periodically transmits the bandwidth estimate and confidence value to the A/V device to improve the efficiency of the streams being generated by the A/V device. The A/V device can use the received bandwidth estimate and confidence value to adapt the resolution of each of the streams to efficiently use the total available bandwidth between the A/V device and the server.
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
H04L 65/65 - Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
H04N 21/238 - Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
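How a device might combine a bandwidth estimate with its confidence value can be sketched as follows. The split ratios and headroom formula are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch: size the two streams from a server-supplied
# bandwidth estimate, discounted by the server's confidence in it.

def adapt_bitrates(bandwidth_kbps, confidence):
    """Return (high_kbps, low_kbps) target bitrates.

    confidence in [0, 1]: lower confidence leaves more headroom
    for estimation error by using less of the estimated bandwidth.
    """
    usable = bandwidth_kbps * (0.5 + 0.5 * confidence)
    high = int(usable * 0.8)   # bulk of the budget to the HQ stream
    low = int(usable * 0.2)    # fallback stream stays small
    return high, low

# 4 Mbps estimate at 50% confidence -> use 3 Mbps of it.
high, low = adapt_bitrates(4000, confidence=0.5)
```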
A technique for merging, via lattice surgery, a color code and a surface code, and subsequently decoding one or more rounds of stabilizer measurements of the merged code is disclosed. Such a technique can be applied to a bottom-up fault-tolerant magic state preparation protocol such that an encoded magic state can be teleported from a color code to a surface code. Decoding the stabilizer measurements of the merged code requires a decoding algorithm specific to the merged code in which error correction involving qubits at the border between the surface and color code portions of the merged code is performed. Error correction involving qubits within the surface code portion and within the color code portion, respectively, may additionally be performed. In some cases, the magic state is prepared in a color code via a technique for encoding a Clifford circuit design problem as an SMT decision problem.
Data is encoded to be incrementally authenticable. A plaintext is used to generate a ciphertext that comprises a plurality of authentication tags. Proper subsets of the authentication tags are usable to authenticate respective portions of plaintexts obtained from the ciphertext. Portions of the plaintext can be obtained and authenticated without decrypting the complete ciphertext.
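A minimal way to picture per-portion authentication is a tag per chunk, each bound to its position. This sketch uses plain HMAC over indexed chunks; real incremental AEAD constructions bind tags to keys, nonces, and positions differently, and the chunk size here is arbitrary.

```python
import hmac, hashlib

CHUNK = 4  # illustrative chunk size

def seal(key, data):
    """Split data into chunks and tag each one, bound to its index."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    tags = [hmac.new(key, bytes([i]) + c, hashlib.sha256).digest()
            for i, c in enumerate(chunks)]
    return chunks, tags

def verify_chunk(key, index, chunk, tag):
    """Authenticate one chunk without touching the rest of the message."""
    expect = hmac.new(key, bytes([index]) + chunk, hashlib.sha256).digest()
    return hmac.compare_digest(expect, tag)

key = b"k" * 32
chunks, tags = seal(key, b"hello world!")
ok = verify_chunk(key, 1, chunks[1], tags[1])   # middle chunk alone
bad = verify_chunk(key, 0, b"XXXX", tags[0])    # tampered chunk fails
```

Because each tag covers its chunk index, a verified chunk is authenticated both in content and in position, without decrypting or verifying the complete message.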
Various embodiments of systems and methods for providing virtualized (e.g., serverless) broker clusters for a data streaming service are disclosed. A data streaming service uses a front-end proxy layer and a back-end broker layer to provide virtualized broker clusters, for example in a Kafka-based streaming service. Resources included in a virtualized broker cluster are monitored and automatically scaled-up, scaled-down, or re-balanced in a way that is transparent to data producing and/or data consuming clients of the data streaming service.
Systems and methods are described herein for implementing a hybrid codec to compress and decompress image data using both lossy and lossless compression. In one example encoding process, it may be determined whether a first block of pixels of a frame of image data contains an edge. A type of compression by which to encode the first block may be selected based on that determination. The first block may be compressed using the selected type of compression. At least one second value associated with the first block of pixels may be set to indicate at least one of the compressed value or the type of compression used to compress the first block.
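The per-block selection step can be sketched as below. The edge metric (maximum neighboring-pixel difference) and the threshold are assumptions for illustration; the disclosure does not specify them.

```python
# Illustrative sketch: pick lossless coding for blocks containing an
# edge (sharp gradients), lossy coding for smooth blocks.

def has_edge(block, threshold=30):
    """Detect an edge via max horizontal/vertical pixel difference."""
    h = max(abs(r[i + 1] - r[i]) for r in block for i in range(len(r) - 1))
    v = max(abs(block[j + 1][i] - block[j][i])
            for j in range(len(block) - 1) for i in range(len(block[0])))
    return max(h, v) > threshold

def choose_codec(block):
    return "lossless" if has_edge(block) else "lossy"

smooth = [[100, 102, 101], [101, 103, 102], [102, 104, 103]]
edgy = [[10, 10, 200], [10, 10, 200], [10, 10, 200]]
```

Preserving edge blocks losslessly avoids the ringing artifacts lossy codecs introduce around sharp transitions, while smooth blocks compress cheaply.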
Availability zone and region recovery are described. For an availability zone (AZ), a recovery availability zone (rAZ) may be identified based on available computing capacity of the recovery availability zone and geographic proximity of the availability zone relative to the recovery availability zone. In an instance in which the availability zone is impacted in which at least one of hardware and software of the availability zone is not fully operational, a virtual private cloud (VPC) is generated that establishes a peered connection between the availability zone and the recovery availability zone. A service is executed in the recovery availability zone, permitting any other services executing in the availability zone to invoke the service and become partially or fully operational.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
10. Automated tier-based transitioning for data objects
An object-based data storage service receives a request to store a data object in a first location corresponding to a first data storage tier. The request may specify a parameter to enable transitioning of the data object to another data storage tier. In response to the request, the object-based data storage service stores the data object in the first location and monitors access of the data object to determine usage data associated with the data object. The object-based data storage service processes the usage data to determine that the data object is to be transitioned to a second data storage tier. As a result of this determination, the object-based data storage service transitions the data object to a second location corresponding to the second data storage tier.
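The transition decision can be pictured as a small policy over usage data. The tier names and day thresholds below are illustrative assumptions, not the service's actual rules.

```python
# Illustrative policy sketch: decide a data object's next storage tier
# from how long it has gone without being accessed.

def next_tier(current_tier, days_since_last_access):
    if current_tier == "standard" and days_since_last_access >= 30:
        return "infrequent_access"
    if current_tier == "infrequent_access" and days_since_last_access >= 90:
        return "archive"
    return current_tier  # usage does not yet justify a transition
```

A monitoring process would evaluate this policy periodically against each object's recorded usage data and move the object when the returned tier differs from its current one.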
Techniques for performing speech processing using multi-modal widget information are described. A system may receive input data corresponding to a user input. The system may also receive widget context data corresponding to one or more multi-modal widgets active at a device. The system may use the widget context data to perform natural language understanding (NLU) processing with respect to the user input and to select a skill component for responding to the user input. The system may send a widget identifier to the skill component when invoking the skill to respond to the user input.
An optical transceiver includes a built-in optical switch to switch between diverse fiber paths in switches in a datacenter or in switches between two datacenters. The built-in optical switch can be used to switch between racks in a datacenter to increase capacity for any rack that requests it. A controller, which receives a signal from a server computer in one of the racks, can be external to the optical transceiver or within the optical transceiver. In either case, the server computer can be provided with additional bandwidth when needed. For connections between datacenters, the built-in optical switch allows for optical line protection, but without the need for a splitter circuit, which incurs a significant power loss and requires a more expensive transceiver. Consequently, the built-in optical switch within an optical transceiver can be used in a variety of contexts to increase efficiency and reduce overall costs for network devices.
H04B 10/079 - Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using an in-service signal using measurements of the data signal
H04B 10/25 - Arrangements specific to fibre transmission
Techniques for change data capture (CDC) log augmentation are described. In some examples, a user configures CDC log augmentation by indicating which data should be included in a CDC log, and the database, when generating a CDC log associated with this configuration, can obtain the associated data and augment the CDC log by inserting this data into it. The augmented data can include one or more fields from a record in a separate database table, where the record can be identified based on the changed record represented by the CDC log.
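The augmentation step can be sketched as a lookup driven by the user's configuration. The record shapes, table, and configuration format here are assumptions for illustration.

```python
# Sketch: augment a change-data-capture (CDC) record with fields pulled
# from a related table, as directed by a user-supplied configuration.

users = {7: {"email": "a@example.com", "plan": "pro"}}  # separate table

def augment_cdc(record, config):
    """config maps a foreign-key field name to the related-table fields
    that should be inlined into the CDC log entry."""
    out = dict(record)
    for fk_field, wanted in config.items():
        related = users.get(record[fk_field], {})
        out.update({f: related[f] for f in wanted if f in related})
    return out

cdc = {"table": "orders", "op": "UPDATE", "user_id": 7, "total": 42}
augmented = augment_cdc(cdc, {"user_id": ["email"]})
```

Only the configured fields are copied in, so downstream consumers of the CDC log get the related context they asked for without a separate join against the source database.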
Disclosed are various embodiments for inferring brand similarities using graph neural networks and selection prediction. In one embodiment, a brand-to-brand graph is generated indicating similarities between a set of brands according to at least one of: click-through data or conversion data. Using a first graph neural network (GNN) tower, the brand-to-brand graph is analyzed to determine brand similarities among a first brand identified from a search query and a first set of other brands. Using a second GNN tower, the brand-to-brand graph is analyzed to determine brand similarities among a second brand and a second set of other brands. A level of similarity between the first brand and the second brand is determined based at least in part on an output of the first GNN tower and an output of the second GNN tower.
A clock disciplining scheme uses a pulse per second (PPS) signal that is distributed throughout a network to coordinate timing. In determining the time, jitter can occur due to latency between detection of the PPS signal and a software interrupt generated therefrom. This jitter affects the accuracy of the clock disciplining process. To eliminate the jitter, extra hardware is used to capture when the PPS signal occurred relative to a hardware clock counter associated with the clock disciplining software. In one embodiment, the extra hardware can be a sampling logic, which captures a state of a hardware clock counter upon PPS detection. In another embodiment, the extra hardware can initiate a counter that calculates a delay by the clock disciplining software in reading the hardware clock counter. The disciplining software can then subtract the calculated delay from a hardware clock counter to obtain the original PPS signal.
H03L 7/10 - Automatic control of frequency or phase; Synchronisation using a reference signal applied to a frequency- or phase-locked loop - Details of the phase-locked loop for assuring initial synchronisation or for broadening the capture range
G11C 7/10 - Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
H03L 7/083 - Automatic control of frequency or phase; Synchronisation using a reference signal applied to a frequency- or phase-locked loop - Details of the phase-locked loop the reference signal being additionally directly applied to the generator
H03L 7/199 - Indirect frequency synthesis, i.e. generating a desired one of a number of predetermined frequencies using a frequency- or phase-locked loop using a frequency divider or counter in the loop a time difference being used for locking the loop, the counter counting between numbers which are variable in time or the frequency divider dividing by a factor variable in time, e.g. for obtaining fractional frequency division with reset of the frequency divider or the counter, e.g. for assuring initial synchronisation
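The arithmetic in the second embodiment is simple: the delay counter measures how long software took to read the clock, and that delay is subtracted back out. The tick values below are illustrative.

```python
# Sketch of the delay-subtraction embodiment: a hardware counter starts
# counting at PPS detection, so software can remove its own read latency.

def true_pps_count(clock_count_at_read, delay_counter_ticks):
    """Recover the clock count at the instant the PPS edge occurred."""
    return clock_count_at_read - delay_counter_ticks

# Software read the clock at tick 1_000_250, but the delay counter shows
# 250 ticks elapsed between the PPS edge and the read.
pps_tick = true_pps_count(1_000_250, 250)
```

The subtraction makes the recovered edge time independent of interrupt latency, which is exactly the jitter source the scheme sets out to eliminate.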
16. Controlling ingestion of streaming data to serverless function executions
Systems and methods are described for controlling ingestion of data items within a data stream by executions of a serverless function on a serverless compute system. A poller device can act as an intermediary between the data stream and the serverless function, iteratively retrieving data items from the data stream and passing them in invocations of the serverless function. To allow for fine-grained control of ingestion without requiring implementation of complex logic at the poller device, the poller device can enable the serverless function to pass instructions controlling subsequent operation of the poller device. Each execution of the serverless function may determine whether subsequent operation of the poller device should be altered, and if so, instruct the poller device accordingly. The poller device can then modify its operation pursuant to the instructions, enabling highly granular control of streaming data ingestion without inhibiting existing benefits of serverless computing.
H04L 67/133 - Protocols for remote procedure calls [RPC]
H04L 67/564 - Enhancement of application control based on intercepted application data
H04L 67/5651 - Reducing the amount or size of exchanged application data
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
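The poller/function feedback loop described above can be sketched as follows. The instruction format (`action`, `batch_size`) and loop shape are illustrative assumptions, not the disclosed protocol.

```python
# Conceptual sketch: a poller forwards stream batches to a function and
# obeys control instructions the function returns with each response.

def run_poller(stream, invoke):
    batch_size, position, results = 2, 0, []
    while position < len(stream):
        batch = stream[position:position + batch_size]
        reply = invoke(batch)                    # serverless invocation
        results.append(reply["processed"])
        instr = reply.get("instruction", {})
        if instr.get("action") == "stop":
            break                                # function halted ingestion
        batch_size = instr.get("batch_size", batch_size)
        position += len(batch)
    return results

def handler(batch):
    # The function asks the poller to grow batches after its first call.
    return {"processed": len(batch), "instruction": {"batch_size": 4}}

out = run_poller(list(range(10)), handler)
```

Because the control signal rides on the invocation response, the function steers ingestion (batch size, pausing, stopping) without the poller needing any application-specific logic of its own.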
Remote Triggered Black Holes (RTBHs) can be precisely placed on networks that are not directly physically connected to a target of an attack. A network source of a potential attack can be determined. A path between the network source and the target can be identified, and a determination can be made as to which networks along that path subscribe to an attack mitigation service. From multiple identified subscriber networks, a subscriber network can be identified that is determined to be appropriate for placement of a black hole to mitigate the attack. Once selected, the identified network can receive attack information and acknowledge placement of the black hole. The subscriber network can then begin discarding traffic for the attack target. A subscriber-owned list of network prefixes can be reviewed before allowing RTBH injection for a corresponding address space.
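The placement decision can be sketched as a walk along the attack path, checking each network against the subscriber set and its prefix allow-list. The nearest-to-source preference and data shapes here are illustrative assumptions.

```python
# Sketch: choose the subscriber network on the attack path at which to
# place a remote triggered black hole (RTBH), honoring the subscriber's
# allow-list of prefixes before permitting injection.

def place_rtbh(path, subscribers, allowed_prefixes, target_prefix):
    """path: networks ordered from attack source toward the target."""
    for network in path:  # prefer placement nearest the attack source
        if (network in subscribers
                and target_prefix in allowed_prefixes.get(network, set())):
            return network
    return None  # no eligible subscriber on this path

path = ["net-a", "net-b", "net-c", "target-net"]
choice = place_rtbh(
    path,
    subscribers={"net-b", "net-c"},
    allowed_prefixes={"net-b": {"203.0.113.0/24"}, "net-c": set()},
    target_prefix="203.0.113.0/24",
)
```

Placing the black hole close to the source discards attack traffic before it consumes transit capacity, and the allow-list check prevents an injection from black-holing address space the subscriber never agreed to filter.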
Methods, systems, and computer-readable media for auto-tuning permissions using a learning mode are disclosed. A plurality of access requests to a plurality of services and resources by an application are determined during execution of the application in a learning mode in a pre-production environment. The plurality of services and resources are hosted in a multi-tenant provider network. A subset of the services and resources that were used by the application during the learning mode are determined. An access control policy is generated that permits access to the subset of the services and resources used by the application during the learning mode. The access control policy is attached to a role associated with the application to permit access to the subset of the services and resources in a production environment.
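Deriving a least-privilege policy from learning-mode observations can be sketched as an aggregation over the recorded accesses. The policy shape below loosely resembles an IAM-style statement but is an illustrative assumption.

```python
# Sketch: build an allow-only access policy from the (action, resource)
# pairs an application was observed using during a learning-mode run.

def build_policy(observed_accesses):
    """observed_accesses: iterable of (action, resource) pairs."""
    statements = {}
    for action, resource in observed_accesses:
        statements.setdefault(resource, set()).add(action)
    return [{"Effect": "Allow", "Action": sorted(actions), "Resource": res}
            for res, actions in sorted(statements.items())]

log = [("s3:GetObject", "arn:bucket/app-data"),
       ("s3:GetObject", "arn:bucket/app-data"),   # duplicates collapse
       ("dynamodb:Query", "arn:table/orders")]
policy = build_policy(log)
```

Attaching the resulting policy to the application's role grants exactly the accesses exercised in pre-production and nothing more, which is the auto-tuning effect the learning mode is meant to achieve.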
Systems and methods are provided for implementing a multi-service file system for a hosted computing instance via a locally-addressable secure compute layer. Software within the instance can submit file operations to the secure compute layer, which the secure compute layer can translate into calls to one or more network-accessible storage services. To provide a multi-service file system, the secure compute layer can obtain mapping data mapping file system objects within the virtualized file system to different network-accessible storage services. On receiving a file operation, the secure compute layer can determine one or more network-accessible storage services corresponding to the file operation, and submit appropriate calls to the one or more network-accessible storage services. By varying the calls for file operations, various functionalities, such as data backup, write staging, read caching, and failover can be implemented independent of both operation of the hosted computing instance and the network-accessible storage services.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/172 - Caching, prefetching or hoarding of files
20. Tiered electronic protection systems for aerial vehicles
Described are systems and methods for monitoring, detecting, and/or protecting various systems of an aerial vehicle, such as an unmanned aerial vehicle (UAV). Embodiments of the present disclosure can provide a multi-tiered system for monitoring, detection, and/or initiation of protection protocols in response to detected faults in connection with the electronics associated with UAV systems, such as the motor drive and/or control systems that may drive the propulsion systems of the UAV.
H02H 7/085 - Emergency protective circuit arrangements specially adapted for specific types of electric machines or apparatus or for sectionalised protection of cable or line systems, and effecting automatic switching in the event of an undesired change from norm for dynamo-electric motors against excessive load
B64C 39/02 - Aircraft not otherwise provided for characterised by special use
B64D 27/24 - Aircraft characterised by the type or position of power plant using steam, electricity, or spring force
B64D 31/00 - Power plant control; Arrangement thereof
H02H 7/08 - Emergency protective circuit arrangements specially adapted for specific types of electric machines or apparatus or for sectionalised protection of cable or line systems, and effecting automatic switching in the event of an undesired change from norm for dynamo-electric motors
B64U 50/19 - Propulsion using electrically powered motors
A latch mechanism for a sidewalk delivery robot or container includes a plunger that extends to engage a latch body on a lid to lock the lid closed, a cable for pulling the plunger out of engagement with the lid to free the lid for opening, and a temporary catch mechanism for holding the plunger in the retracted position during an initial phase of the opening process. The catch, and therefore the plunger, is released when a forward-extending release arm of the catch is engaged by the latch body during the opening process to pivot the catch. A slot hinge mechanism includes a spring piston to guard against finger pinching and a slider that is attached to the plunger of the latch mechanism by the cable. Actuation of the hinge mechanism retracts the plunger to its retracted position.
Techniques for performing machine learning inference calls in database query processing are described. A method for performing machine learning inference calls in database query processing may include generating a query plan to optimize a query for batch processing of data stored in a database service, the query plan including a batch mode operator to execute a function reference and an execution context associated with the batch mode operator, executing the query plan to invoke a function associated with the function reference, wherein the function sends a batch of requests, generated using the execution context, to a remote service and obtains a plurality of responses from the remote service, and generating a query response based on the plurality of responses.
Systems and methods for configuring a virtual machine provided by a remote computing system based on the availability of one or more remote computing resources and respective corresponding prices of the one or more remote computing resources are disclosed. Users are presented with an interface that allows for selection of individual remote computing resources to be included in a custom-configured virtual machine. Also, a customized corresponding price is determined for the custom-configured virtual machine based on user selections and current availability of the selected remote computing resources to be included in the custom-configured virtual machine.
Learning iterations, individual ones of which include a respective bucket group selection phase and a class boundary refinement phase, are performed using a source data set whose records are divided into buckets. In the bucket group selection phase of an iteration, a bucket is selected for annotation based on output obtained from a classification model trained in the class boundary refinement phase of an earlier iteration. In the class boundary refinement phase, records of buckets annotated as positive-match buckets for a target class in the bucket group selection phase are selected for inclusion in a training set for a new version of the model using a model enhancement criterion. The trained version of the model is stored.
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06F 17/18 - Complex mathematical operations for evaluating statistical data
G06F 18/2113 - Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
An encoding of a cryptographic key is obtained in the form of an encrypted key. A request is provided to a service provider, fulfillment of which involves performing a cryptographic operation on data. Upon fulfillment of the request, a response is received indicating the fulfillment of the request.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Apparatus and methods are disclosed herein for remote, direct memory access (RDMA) technology that enables direct memory access from one host computer memory to another host computer memory over a physical or virtual computer network according to a number of different RDMA protocols. In one example, a method includes receiving remote direct memory access (RDMA) packets via a network adapter, deriving a protocol index identifying an RDMA protocol used to encode data for an RDMA transaction associated with the RDMA packets, applying the protocol index to generate RDMA commands from header information in at least one of the received RDMA packets, and performing an RDMA operation using the RDMA commands.
Techniques for emulating a configuration space may include emulating a set of configuration registers in an integrated circuit device for a set of functions corresponding to a type of peripheral device. The type of peripheral device represented by the integrated circuit device can be modified by changing the set of configuration registers being emulated in the integrated circuit device. Multiple sets of configuration registers can also be emulated to support different virtual machines or different operating systems.
Methods, systems, and computer-readable media for tracing service interactions without global transaction identifiers are disclosed. A service monitoring system receives an event message from a first service in a service-oriented system. The event message comprises one or more elements of data from a body of a service request from an upstream service. The first service initiates a sub-task associated with the service request. The service monitoring system receives one or more additional event messages from one or more additional services. The additional event message(s) comprise one or more additional elements of data from one or more additional service requests associated with one or more additional sub-tasks. The service monitoring system determines, based (at least in part) on the element(s) of data in the event message and the additional element(s) of data in the additional event message(s), that the sub-task and the additional sub-task(s) are associated with a higher-level task.
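The correlation idea can be sketched as grouping events that share request-body data elements, with no transaction ID involved. The event shape and the transitive-overlap grouping rule are illustrative assumptions.

```python
# Sketch: associate sub-task events with a higher-level task by shared
# data elements from request bodies, rather than a global transaction ID.

def group_events(events):
    """events: list of (service_name, set_of_data_elements)."""
    groups = []
    for service, elems in events:
        for g in groups:
            if g["elements"] & elems:      # any shared element -> same task
                g["services"].append(service)
                g["elements"] |= elems     # grow the correlation key set
                break
        else:
            groups.append({"services": [service], "elements": set(elems)})
    return groups

events = [
    ("checkout", {"order-123", "cust-9"}),
    ("billing", {"order-123"}),            # shares order-123 with checkout
    ("search", {"query-55"}),              # unrelated task
]
groups = group_events(events)
```

Because membership is inferred from overlapping payload data, services never need to propagate or even know about a shared identifier, which is the point of tracing without global transaction IDs.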
A skid-steer delivery autonomous ground vehicle has a drive train and suspension that aids in maneuverability. The AGV has six wheels, each of which is powered by its own motor. The AGV has features that diminish the dragging effect on the wheels, either by choice of wheel features or by taking weight off the front wheels during turning.
B62D 11/04 - Steering non-deflectable wheels; Steering endless tracks or the like by differentially driving ground-engaging elements on opposite vehicle sides by means of separate power sources
B60C 3/04 - Tyres characterised by transverse section characterised by the relative dimensions of the section, e.g. low profile
B60G 9/02 - Resilient suspensions for a rigid axle or axle housing for two or more wheels the axle or housing being pivotally mounted on the vehicle
B60G 17/015 - Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of vehicle or for locking suspension during use to meet varying vehicular or surface conditions, e.g. due to speed or load, the regulating means comprising electric or electronic elements
B60G 17/016 - Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of vehicle or for locking suspension during use to meet varying vehicular or surface conditions, e.g. due to speed or load, the regulating means comprising electric or electronic elements characterised by their responsiveness, when the vehicle is travelling, to specific motion, a specific condition, or driver input
B62D 11/00 - Steering non-deflectable wheels; Steering endless tracks or the like
B62D 61/10 - Motor vehicles or trailers, characterised by the arrangement or number of wheels, not otherwise provided for, e.g. four wheels in diamond pattern with more than four wheels
Disclosed are various embodiments for implementing passenger profiles for autonomous vehicles. A passenger of the autonomous vehicle is identified. A passenger profile corresponding to the passenger and comprising a passenger preference is identified. A configuration setting of the autonomous vehicle corresponding to autonomous operation of the autonomous vehicle is then adjusted based at least in part on the passenger preference.
H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
H04W 4/48 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for in-vehicle communication
An autonomous mobile device (AMD) moves around a physical space while performing tasks. The AMD may have sensors with fields of view (FOVs) that are forward-facing. As the AMD moves forward, a safe region is determined based on data from those forward-facing sensors. The safe region describes a geographical area clear of obstacles during recent travel. Before moving outside of the current FOV, the AMD determines whether a move outside of the current FOV keeps the AMD within the safe region. For example, if a path that is outside the current FOV would result in the AMD moving outside the safe region, the AMD modifies the path until poses associated with the path result in the AMD staying within the safe region. The resulting safe path may then be used by the AMD to safely move outside the current FOV.
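The safe-region check can be pictured with a grid abstraction: the region is the set of cells the AMD has recently verified as clear, and a candidate path is tested pose by pose. The grid representation and truncation strategy are illustrative assumptions; the disclosure describes modifying the path more generally.

```python
# Simplified sketch: treat the safe region as a set of traversed grid
# cells and reject or truncate path poses that would leave it.

def path_stays_safe(path, safe_region):
    return all(pose in safe_region for pose in path)

def clamp_path(path, safe_region):
    """Truncate the path at the first pose outside the safe region."""
    safe = []
    for pose in path:
        if pose not in safe_region:
            break
        safe.append(pose)
    return safe

safe_region = {(0, 0), (0, 1), (1, 1)}   # cells cleared during recent travel
planned = [(0, 0), (0, 1), (0, 2)]       # last pose leaves the safe region
```

A planner would repair or re-plan from the truncation point, so any motion outside the current sensor field of view stays within ground already verified clear of obstacles.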
A first configurable address decoder can be coupled between a source node and a first interconnect fabric, and a second address decoder can be coupled between the first interconnect fabric and a second interconnect fabric. The first address decoder can be configured with a first address mapping table that can map a first set of address ranges to a first set of target nodes connected to the first interconnect fabric. The second address decoder can be configured with a second address mapping table that can map a second set of address ranges to a second set of target nodes connected to the second interconnect fabric. The second address decoder can be part of the first set of target nodes. The first address decoder and the second address decoder can be configured or re-configured to determine different routes for a transaction from the source node to a target node in the second set of target nodes via the first and second interconnect fabrics.
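Two-level decoding can be sketched with table-driven range lookups, where the second decoder is itself a target of the first. The address ranges and target names below are invented for illustration.

```python
# Sketch: hierarchical address decoding across two interconnect fabrics.
# The first decoder maps an address either to a local target or onward
# to the second decoder, which resolves targets behind fabric 2.

def make_decoder(table, default=None):
    """table: list of ((lo, hi), target) address-range mappings."""
    def decode(addr):
        for (lo, hi), target in table:
            if lo <= addr < hi:
                return target
        return default
    return decode

second = make_decoder([((0x2000, 0x3000), "target-C")])
first = make_decoder([((0x0000, 0x1000), "target-A"),
                      ((0x1000, 0x3000), "decoder-2")])

def route(addr):
    """Return the hop sequence a transaction takes from the source."""
    hop = first(addr)
    return [hop] if hop != "decoder-2" else ["decoder-2", second(addr)]
```

Reprogramming either mapping table changes the route a transaction takes without touching the source node, which reflects the configurability the disclosure describes.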
A database management system receives a command defining a view of the database. The view definition is accepted without determining whether references to schema elements within the view definition are resolvable to existing elements of the database schema. A query of the view is received. In response to the query of the view, the database management system resolves references to schema elements in the view definition by determining whether the references correspond to data available for processing the query.
G06F 16/80 - Information retrieval; Database structures therefor; File system structures therefor of semi-structured data, e.g. markup language structured data such as SGML, XML or HTML
34. Systems for determining image-based search results
When a first search query including an image of an item is received to search for items associated with similar images, a second search query that includes text based on the image is generated. The text may be based on previous queries associated with the depicted item, visual features of the image, or text that is present in the image. The results from the first search query are scored based on their correspondence with the image of the item. Results having a score greater than a threshold are presented first in the output, followed by a selected number of results from the second search query. Results from the first search query that are associated with a score less than the threshold may be presented after the results from the second search query. This presentation increases the likelihood that items presented earlier in the output are relevant to the initial query.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
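The described result ordering can be sketched directly. The scores, threshold, and number of interleaved text results are illustrative assumptions.

```python
# Sketch of the presentation order: image-query results scoring above a
# threshold first, then a few text-query results, then the remainder of
# the image-query results.

def merge_results(image_results, text_results, threshold=0.5, n_text=2):
    """image_results: list of (item, score); text_results: list of items."""
    strong = [item for item, score in image_results if score >= threshold]
    weak = [item for item, score in image_results if score < threshold]
    return strong + text_results[:n_text] + weak

img = [("red mug", 0.9), ("red bowl", 0.3), ("red cup", 0.7)]
txt = ["ceramic mug", "coffee cup", "travel mug"]
ordering = merge_results(img, txt)
```

Front-loading high-scoring image matches, then backfilling with text-derived results, is what raises the likelihood that the earliest items shown are relevant to the original image query.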
Systems and methods are provided to eliminate multiplication operations with zero padding data for convolution computations. A multiplication matrix is generated from an input feature map matrix with padding by adjusting coordinates and dimensions of the input feature map matrix to exclude padding data. The multiplication matrix is used to perform matrix multiplications with respective weight values which results in fewer computations as compared to matrix multiplications which include the zero padding data.
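The effect of excluding padding can be illustrated with a direct convolution that simply skips coordinates falling in the padding region instead of multiplying by zero, which is one simple way to realize the coordinate adjustment described above; this is a sketch, not the patented matrix construction itself.

```python
def conv2d_skip_padding(x, w, pad):
    """Direct 2D convolution with implicit zero padding: positions that fall
    in the padding region are skipped rather than multiplied by zero."""
    H, W = len(x), len(x[0])
    K = len(w)
    out_h, out_w = H + 2 * pad - K + 1, W + 2 * pad - K + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            acc = 0.0
            for ki in range(K):
                for kj in range(K):
                    # Coordinates in the unpadded input.
                    r, c = i + ki - pad, j + kj - pad
                    if 0 <= r < H and 0 <= c < W:   # skip padding positions
                        acc += x[r][c] * w[ki][kj]
            out[i][j] = acc
    return out
```

The output matches a conventional zero-padded convolution, but no multiply is issued for any padding element.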
Disclosed herein are techniques for classifying data with a data processing circuit. In one embodiment, the data processing circuit includes a probabilistic circuit configurable to generate a decision at a pre-determined probability, and an output generation circuit including an output node and configured to receive input data and a weight, and generate output data at the output node for approximating a product of the input data and the weight. The generation of the output data includes propagating the weight to the output node according to a first decision of the probabilistic circuit. The probabilistic circuit is configured to generate the first decision at a probability determined based on the input data.
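The expected-value behavior of this scheme can be sketched in software: if the weight is propagated to the output with probability equal to the input (assumed here to lie in [0, 1]), the mean output over many decisions approximates the product of input and weight. The parameters and trial count are illustrative assumptions.

```python
import random

def stochastic_multiply(input_value, weight, trials=20000, rng=None):
    """Approximate input_value * weight (input_value in [0, 1]) by propagating
    the full weight to the output with probability equal to input_value."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(trials):
        # The probabilistic "decision": fires with probability input_value.
        if rng.random() < input_value:
            total += weight       # propagate the weight to the output node
    return total / trials         # mean output approximates input * weight


approx = stochastic_multiply(0.25, 8.0)   # expected value near 0.25 * 8.0 = 2.0
```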
Techniques for training a machine-learning model are described. In an example, a computer generates a first pseudo-label indicating a first mask associated with a first object detected by a first machine-learning model in a first training image. A transformed image of the first training image can be generated using a transformation. Based on the transformation, a second pseudo-label indicating a second mask detected in the transformed image and corresponding to the first mask can be determined. A second machine-learning model can be trained using the second pseudo-label. The trained, second machine-learning model can detect a third mask associated with a second object based on a second image.
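The mask correspondence under a known transformation can be illustrated with a horizontal flip: the second pseudo-label is obtained by applying the same flip to the first mask. The binary-grid representation is an illustrative assumption.

```python
# Illustrative sketch: a horizontal flip as the transformation, with the mask
# (here a binary grid) transformed the same way to form the second pseudo-label.

def hflip(grid):
    return [list(reversed(row)) for row in grid]


# First pseudo-label: a mask assumed to be predicted on the original image.
mask = [[1, 0, 0],
        [1, 1, 0]]

# The transformed image would be hflip(image); because the transformation is
# known, the second pseudo-label is the correspondingly transformed mask.
mask_transformed = hflip(mask)
```

Training the second model on `(hflip(image), mask_transformed)` pairs enforces consistency under the transformation without additional human labels.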
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A first computing device acquires video data representing a user performing an activity. The first device uses a first pose extraction algorithm to determine a pose of the user within a frame of video data. If the pose is determined to be potentially inaccurate, the user is prompted for authorization to send the frame of video data to a second computing device. If authorization is granted, the second computing device may use a different algorithm to determine a pose of the user and send data indicative of this pose to the first computing device to enable the first computing device to update a score or other output. The second computing device may also use the frame of video data as training data to retrain or modify the first pose extraction algorithm, and may send the modified algorithm to the first computing device for future use.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
G06V 20/40 - Scenes; Scene-specific elements in video content
This disclosure describes systems and methods for using a primary device, communicatively coupled to a remote system, to configure or re-configure a secondary device in the same environment as the primary device. In some instances, the primary device may communicatively couple to the secondary device via a short-range wireless connection and to the remote system via a wide area network (WAN), a wired connection, or the like. Thus, the primary device may act as an intermediary between the secondary device and the remote system for configuring the secondary device.
Systems and methods to measure and affect focus, engagement, and presence of users may include measuring a variety of aspects of users engaged in particular activities. Individual user characteristics or preferences and attributes of activities may be taken into account to determine levels of focus for particular users and activities. A variety of sensors may detect aspects of users engaged in activities to measure levels of focus. In addition, a variety of output devices may initiate actions to affect levels of focus of users engaged in activities. Further, a variety of processing algorithms, including machine learning models, may be trained to identify desired levels of focus, to calculate current levels of focus, and to select actions to change or boost levels of focus. In this manner, activities undertaken by users, as well as interactions between multiple users, may be made more engaging, efficient, and productive.
An acoustic event detection system may employ one or more recurrent neural networks (RNNs) to extract features from audio data, and use the extracted features to determine the presence of an acoustic event. The system may use self-attention to emphasize features extracted from portions of audio data that may include features more useful for detecting acoustic events. The system may perform self-attention in an iterative manner to reduce the amount of memory used to store hidden states of the RNN while processing successive portions of the audio data. The system may process the portions of the audio data using the RNN to generate a hidden state for each portion. The system may calculate an interim embedding for each hidden state. An interim embedding calculated for the last hidden state may be normalized to determine a final embedding representing features extracted from the input data by the RNN.
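The iterative normalization can be sketched as a streaming attention-weighted pooling: each hidden state contributes to running sums, so only the accumulators (not all hidden states) stay in memory, and the final embedding is the normalized accumulator. The dot-product scoring is a simplifying assumption; a real model would learn the score function.

```python
import math

def streaming_attention(hidden_states, score_weights):
    """Attention-weighted pooling computed one hidden state at a time, keeping
    only running sums in memory instead of every hidden state."""
    dim = len(score_weights)
    num = [0.0] * dim     # running attention-weighted sum of hidden states
    den = 0.0             # running sum of attention weights
    for h in hidden_states:
        score = sum(wi * hi for wi, hi in zip(score_weights, h))
        a = math.exp(score)
        den += a
        for i in range(dim):
            num[i] += a * h[i]
        # interim (unnormalized) embedding after this step: num / den
    return [n / den for n in num]   # final, normalized embedding
```

With uniform scores this reduces to the mean of the hidden states; larger scores pull the embedding toward the corresponding portions of the audio.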
To assist a user in the correct performance of an activity, video data is acquired. A pose of the user is determined from the video data and an avatar is generated representing the user in the pose. The pose of the user is compared to one or more other poses representing correct performance of the activity to determine one or more differences that may represent errors by the user. Depending on the activity that is being performed, some errors may be presented to the user during performance of the activity, while other errors may be presented after performance of the activity has ceased. To present an indication of an error, a specific body part or other portion of the avatar that corresponds to a difference between the user's pose and a correct pose may be presented along with an instruction regarding correct performance of the activity.
G16H 20/30 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
A63B 24/00 - Electric or electronic controls for exercising apparatus of groups
A63B 71/06 - Indicating or scoring devices for games or players
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G09B 19/00 - Teaching not covered by other main groups of this subclass
43.
Automatically prioritizing computing resource configurations for remediation
Systems and methods for automatically prioritizing computing resource configurations for remediation include receiving information describing configuration issues that may result in impaired system performance or unauthorized access, parsing that information, and automatically analyzing configuration details of a user's private computing environment to determine that assets provide an environment in which configuration issues may be exploited to produce undesired results. Such systems and methods can generate assessments indicating the likelihood that an issue can be exploited and the potential impacts of the issue being exploited. Such systems and methods can use these assessments to generate a report prioritizing remediation of specific configuration issues for specific vulnerable assets based on the actual configuration of the user's computing resources and the data managed using those resources. Issues deemed to have a higher likelihood of resulting in problems can be prioritized over configuration issues that may appear to have severe consequences but are unlikely to affect the user's resources.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
Techniques for reducing the latency of content retrieval from a content delivery network include receiving a request from a client device for media content, parsing the request for attributes associated with the request and the client device, and providing the attributes to a machine learning model to perform server-side prediction of an estimated retrieval time of the media content. A quality level for the media content is determined based on the estimated retrieval time, and the requested media content is provided to the client device at the determined quality level.
H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/462 - Content or additional data management e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the capabilities of the client device
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
Systems and methods are described for merging customer profiles, such as may be implemented by a computer-implemented contact center service. In some aspects, a subset of profiles may be determined that satisfy merging criteria, where individual profiles include a plurality of data fields. At least one value in a first data field that conflicts between at least two profiles may be identified. Next, a merged value may be selected for the first data field based on data deduplication criteria, where the data deduplication criteria include at least one indicator of accuracy of values of the plurality of data fields. As a result of a determination that at least the subset of profiles of the group of profiles meet the merging criteria, at least the subset of profiles may be combined into a combined profile using the merged value.
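One way to realize the accuracy-indicator selection is to resolve each conflicting field in favor of the profile whose value carries the highest accuracy score. The field names, score layout, and data are illustrative assumptions, not from the source.

```python
# Hypothetical sketch: conflicting field values are resolved by a per-field
# accuracy indicator, here a list of scores parallel to the profiles list.

def merge_profiles(profiles, accuracy):
    """profiles: list of dicts; accuracy: field -> list of per-profile scores."""
    merged = {}
    fields = {f for p in profiles for f in p}
    for field in fields:
        candidates = [(accuracy.get(field, [0] * len(profiles))[i], p[field])
                      for i, p in enumerate(profiles) if field in p]
        # The value with the highest accuracy indicator wins the conflict.
        merged[field] = max(candidates)[1]
    return merged


merged = merge_profiles(
    profiles=[{"name": "J. Doe", "phone": "555-0100"},
              {"name": "Jane Doe", "phone": "555-0100"}],
    accuracy={"name": [0.4, 0.9], "phone": [0.8, 0.8]},
)
```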
Devices, systems, and methods are provided for enhanced geographical caching of estimated arrival times. A method may include receiving respective user inputs indicative of a first user and a second user being in transit to a destination location from within a geographic region; determining, for the first user and the second user, a first estimated time of arrival from a first geographical area to the destination location, the first geographical area including a first location of the first user and a second location of the second user; identifying a third location of a first device of the first user at a third time, wherein the third location is within the first geographical area; determining that a time-to-live (TTL) of the first estimated time of arrival has not expired at the third time; and refraining from recalculating the first estimated time of arrival.
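The caching behavior can be sketched as one cached ETA per geographic area, reused for any user inside that area until its TTL expires. The class name and injected clock are illustrative assumptions.

```python
import time

# Illustrative sketch of TTL-based geographic ETA caching.

class EtaCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._cache = {}   # area_id -> (eta_minutes, computed_at)

    def get(self, area_id, compute_eta):
        now = self.clock()
        hit = self._cache.get(area_id)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]                 # TTL not expired: no recalculation
        eta = compute_eta(area_id)        # cache miss or expired: recompute
        self._cache[area_id] = (eta, now)
        return eta
```

A clock is injected so the TTL behavior can be exercised deterministically; in production the default monotonic clock would be used.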
Systems, devices, and methods are provided for processing images using machine learning. Features may be obtained from an image using a residual network, such as ResNet-101. Features may be analyzed using a classification model such as K-nearest neighbors (K-NN). Features and metadata extracted from images may be used to generate other images. Templates may be used to generate various types of images. For example, assets from two images may be combined to create a third image.
G06V 10/40 - Extraction of image or video features
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
G06T 3/40 - Scaling of a whole image or part thereof
G06T 11/60 - Editing figures and text; Combining figures or text
48.
Agent re-verification and resolution using imaging
Described is a multiple-camera system and process for detecting, tracking, and re-verifying agents within a materials handling facility. In one implementation, a plurality of feature vectors may be generated for an agent and maintained as an agent model representative of the agent. When the object being tracked as the agent is to be re-verified, feature vectors representative of the object are generated and stored as a probe agent model. Feature vectors of the probe agent model are compared with corresponding feature vectors of candidate agent models for agents located in the materials handling facility. Based on the resulting similarity scores, the agent may be re-verified, it may be determined that identifiers used for objects tracked as representative of the agents have been flipped, and/or it may be determined that tracking of the object representing the agent has been dropped.
Server-specified subscription filters for long-lived client requests to fetch data in response to events. In one aspect, the techniques encompass a method performed by a set of one or more computing devices. The method includes the step of receiving a long-lived request to fetch data in response to events sent by a client computing device. The method further includes receiving a server-specified subscription filter for the long-lived request and executing the long-lived request. Executing the long-lived request includes creating a persistent function that uses the server-specified subscription filter to map a source event stream to a response event stream. The response event stream is provided to the client computing device. The server-specified subscription filter facilitates filtering of events fetched for the long-lived request in a way that may not be possible, or may be impractical, if the subscription client were required to specify the filter in the long-lived request.
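The persistent mapping function can be sketched as a generator that applies a server-chosen predicate to the source stream; the client receives only the filtered response stream and never supplies the filter itself. Event shapes and names are illustrative assumptions.

```python
# Illustrative sketch: a persistent function created at subscription time maps
# a source event stream to a response stream using a server-specified filter.

def make_subscription(source_stream, server_filter):
    """Return a generator yielding only events that pass the server-specified
    filter; the subscribing client does not provide the filter."""
    def response_stream():
        for event in source_stream:
            if server_filter(event):
                yield event
    return response_stream()


events = [{"type": "update", "tenant": "a"},
          {"type": "update", "tenant": "b"},
          {"type": "delete", "tenant": "a"}]
# The server decides this client may only see tenant "a" events.
stream = make_subscription(iter(events), lambda e: e["tenant"] == "a")
```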
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
A system for database restoration across service regions. The system includes data storage and backup data storage in a first region. The system includes a frontend for the database service configured to receive, from a client, a request to restore a database to the first region from backups stored in another backup data storage in a second region, and to receive an authentication token for the request from the client. The system also includes a backup restore manager service for the first region configured to send, to another backup restore manager service implemented in the second region, a credential request for a second-region credential authorizing retrieval of the backups from the second region. The backup restore manager service sends a backup restore request to retrieve the backups from the other backup data storage and loads the backups to restore the database in the first region.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
A simulation environment (e.g., multi-player game) hosted by a provider network may implement componentized entities to reduce the amount of resource usage for a simulation (e.g., by reducing the amount of input/state data transmitted through the use of dynamically changing input structures). A user may add or remove any number of components to an entity that is simulated at the local client device. When inputs are received for one or more components, values for predictive states are locally determined for each component. An input packet is generated and sent to the provider network, which includes the inputs as well as data that is based on the values for the locally predicted states (e.g., a fingerprint or other unique ID). If necessary, a correction packet may be generated at the provider network and sent back to the client.
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
H04L 41/147 - Network analysis or design for predicting network behaviour
H04L 43/106 - Active monitoring, e.g. heartbeat, ping or trace-route using time related information in packets, e.g. by adding timestamps
H04L 45/7453 - Address table lookup; Address filtering using hashing
Methods, systems, and computer-readable media for automated management of machine images are disclosed. A machine image management system determines that a trigger for a machine image build process has occurred. The machine image management system performs the machine image build process responsive to the trigger. The machine image build process generates a machine image, and the machine image comprises a plurality of operating system components associated with an application. The machine image is validated by the machine image management system for compliance with one or more policies. The machine image management system provides the machine image to one or more recipients. One or more compute resources are launched using the machine image, and the application is executed on the compute resource(s) launched using the machine image.
Disclosed are systems, methods, and apparatus of an automated and self-service kiosk that allows customers to select inventory items available from the kiosk and walk or move away with selected inventory item(s) without having to process payment, identify the inventory item(s), or provide any other form of checkout. After a customer has picked one or more items and departed the kiosk, the picked items are determined and the customer is charged for the items. For example, one or more of detected weight changes measured at the kiosk and/or images generated at the kiosk may be used to identify items picked by the customer from the kiosk.
An electric pallet jack can be configured to include logic controllers that are connected to a drive system and a steering system of the electric pallet jack. The logic controllers can be in communication with one or more sensors that enable determinations of pallet jack velocity, pallet jack acceleration, and a rate of turning for the electric pallet jack. The logic controllers can be configured to provide maximum velocity, maximum acceleration, maximum deceleration, and maximum rate of turning limitations to maintain control over an object transported by the electric pallet jack. The logic controllers can determine whether the maximum thresholds of the electric pallet jack are exceeded by an operating variable and can modulate the amount of power provided by the drive system to reduce the operating variable below the associated threshold.
B66F 17/00 - Safety devices, e.g. for limiting or indicating lifting force
B60W 10/04 - Conjoint control of vehicle sub-units of different type or different function including control of propulsion units
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B66F 9/065 - Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks non-masted
56.
Distributed automated mobile vehicle routing based on characteristic information satisfying a minimum requirement
This disclosure describes a distributed automated mobile vehicle (“automated mobile vehicle”) system for autonomously delivering orders of items to various delivery locations and/or autonomously returning items to a return location. In some implementations, each user may own or be assigned their own automated mobile vehicle that is associated with the user and an automated mobile vehicle control system maintained by the user. When the user orders an item, the user owned or controlled automated mobile vehicle navigates to a materials handling facility, retrieves the ordered item and delivers it to the user.
Techniques and systems can process data of a dataset to determine when a portion of data is comprised in the data of the dataset. An output generated from processing the data of the dataset can be evaluated, where the output can signify that processing the data of the dataset was unable to locate the portion of data in the data of the dataset. Based on evaluating the output, the data of the dataset can be automatically reprocessed to determine that the portion of data is in the data of the dataset. A result can then be generated from the portion of data determined to be in the data of the dataset.
A speech-processing system may provide access to one or more virtual assistants via a voice-controlled device. A user may leverage a first virtual assistant to translate a natural language command from a first language into a second language, which the device can forward to a second virtual assistant for processing. The device may receive a command from a user and send input data representing the command to a first speech-processing system representing the first virtual assistant. The device may receive a response in the form of a first natural language output from the first speech-processing system along with an indication that the first natural language output should be directed to a second speech-processing system representing the second virtual assistant. For example, the command may be in the first language, and the first natural language output may be in the second language, which is understandable by the second speech-processing system.
Techniques for determining whether audio is machine-outputted or non-machine-outputted are described. A device may receive audio, may process the audio to determine audio data including audio features corresponding to the audio, and may process the audio data to determine audio embedding data. The device may process the audio embedding data to determine whether the audio is machine-outputted or non-machine-outputted. In response to determining that the audio is machine-outputted, the audio may be discarded or not processed further. Alternatively, in response to determining that the audio is non-machine-outputted (e.g., live speech from a user), the audio may be processed further (e.g., using ASR processing).
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
G06N 3/044 - Recurrent networks, e.g. Hopfield networks
G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 15/18 - Speech classification or search using natural language modelling
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
G10L 25/69 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for evaluating synthetic or decoded voice signals
Video output is synchronized to the actions of a user by determining positions of the user's body based on acquired video of the user. The positions of the user's body are compared to the positions of a body shown in the video output to determine corresponding positions in the video output. The video output may then be synchronized so that the subsequent output that is shown corresponds to the subsequent position attempted by the user. The rate of movement of the user may be used to determine output characteristics for the video to cause the body shown in the video output to appear to move at a similar rate to that of the user. If the user moves at a rate less than a threshold or performs an activity erroneously, the video output may be slowed or portions of the video output may be repeated.
Systems and methods are described herein for detecting the inadvertent modification to or deletion of data in a data store and taking automated action to prevent the deletion of data from becoming permanent. The described techniques may also be utilized to detect anomalous changes to a policy affecting storage of data and to take automated action to mitigate the effects of those changes. In one example, events generated as a result of requests to perform operations on data objects in a data storage service may be obtained, where at least some of the events indicate a failure to fulfill respective requests. Data from the events may be input into a model to detect an anomaly indicative of inadvertent modification of data. As a result of detection of the anomaly, a set of operations may be initiated or performed to prevent the inadvertent modification of data from becoming permanent.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
To accelerate the data processing of a processor, a coprocessor subsystem can be used to offload data processing operations from the processor. The coprocessor subsystem can include a coprocessor and an accelerator. The accelerator can offload operations such as data formatting operations from the coprocessor to improve the performance of the coprocessor. The coprocessor subsystem can be used to accelerate database operations.
A technique to execute transpose and compute operations may include retrieving a set of machine instructions from an instruction buffer of a data processor. The instruction buffer has multiple entries, and each entry stores one machine instruction. A machine instruction from the set of machine instructions is executed to transpose a submatrix of an input tensor and perform computations on column elements of the submatrix. The machine instruction combines the transpose operation with computational operations into a single machine instruction.
Systems, methods, and computer-readable media are disclosed for systems and methods for dynamic product summary images. The dynamic product summary images may be displayed on product pages or in association with individual product search results. The dynamic product summary images may comprise a number of different visual icons that provide a customer with quick and easily digestible information about a product. The dynamic product summary image may also be specific to the user such that different users may be presented with different icons based on details about the product that they are likely to find most important. For example, a dynamic product summary image for a laptop may include an icon indicating a processor type, an icon indicating a graphics card type, an icon indicating an operating system, etc. This provides for a more efficient product browsing process and mitigates or eliminates the need for the customer to search the entire product page for important details about the product.
Systems and methods for implementing record locking for transactions using a probabilistic data structure are described. This probabilistic structure enables data records to be added without growing the data structure. The data structure includes a hash table for each of multiple hash functions, where entries in the respective hash tables store a transaction time and locking state. To lock a record, each hash function is applied to a record key to provide an index into a respective hash table and a minimum of the values stored in the hash tables is retrieved. If the retrieved value is less than a transaction time for a transaction attempting to lock the record, locking is permitted and the transaction time is recorded to each of the hash tables. To commit the transaction, the probabilistic data structure is atomically updated as part of the commit operation.
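The lock check described above can be sketched with fixed-size tables, one per hash function, storing the latest transaction time per slot; the minimum across tables bounds the newest lock that could conflict. Table size, salting scheme, and function names are illustrative assumptions.

```python
# Hypothetical sketch of the multi-hash-table lock check.

TABLE_SIZE = 1024

def make_tables(num_hashes):
    return [[0] * TABLE_SIZE for _ in range(num_hashes)]

def _slot(key, salt):
    return hash((salt, key)) % TABLE_SIZE

def try_lock(tables, key, txn_time):
    """Lock is permitted only if txn_time exceeds the minimum value stored
    across all hash tables for this key; on success the transaction time is
    recorded to every table."""
    current = min(tables[i][_slot(key, i)] for i in range(len(tables)))
    if current >= txn_time:
        return False                 # possibly locked by a newer transaction
    for i in range(len(tables)):
        tables[i][_slot(key, i)] = txn_time
    return True
```

Because the tables are fixed-size, adding more records never grows the structure; hash collisions can only cause spurious lock denials, never missed conflicts.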
A multitenant solver execution service provides managed infrastructure for defining and solving large-scale optimization problems. In embodiments, the service executes solver jobs on managed compute resources such as virtual machines or containers. The compute resources can be automatically scaled up or down based on client demand and are assigned to solver jobs in a serverless manner. Solver jobs can be initiated based on configured triggers. In embodiments, the service allows users to select from different types of solvers, mix different solvers in a solver job, and translate a model from one solver to another solver. In embodiments, the service provides developer interfaces to, for example, run solver experiments, recommend solver types or solver settings, and suggest model templates. The solver execution service relieves developers from having to manage infrastructure for running optimization solvers and allows developers to easily work with different types of solvers via a unified interface.
Systems and methods are described for implementing a distributed unit in a radio access network that executes code on behalf of mobile devices. A distributed unit may be implemented on an edge server that is in close physical proximity to a radio unit, with few or no intervening devices. The edge server may thus provide services to mobile devices, such as executing code on behalf of a mobile device in an execution environment on the edge server, at significantly lower latency than more distant cloud-based servers. The edge server may preload computing environments with code for which a mobile device is likely to request execution (e.g., because a particular application is executing on the mobile device), and may determine whether to execute code on the edge server or on a cloud provider network.
A system for managing deployment of quantum circuits is described. The system may include a web server configured to receive, from a consumer, a quantum computing request to perform a job using a given quantum application. The web server may generate a response based on execution of the quantum application and at least a portion of the quantum computing request and return the response to the consumer. The system may also include a deployment service configured to store quantum circuit definitions in a data store. The deployment service may receive, from the web server, a deployment request for executing a quantum circuit. The deployment service may generate a container for implementing the quantum circuit. The deployment service may configure a quantum application in the container for executing a job using the quantum circuit. The deployment service may provide the web server access to results of the execution of the job.
G06N 10/80 - Quantum programming, e.g. interfaces, languages or software-development kits for creating or handling programs capable of running on quantum computers; Platforms for simulating or accessing quantum computers, e.g. cloud-based quantum computing
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06N 10/20 - Models of quantum computing, e.g. quantum circuits or universal quantum computers
A system for providing code suggestions according to licensing criteria is described. The system comprises computing devices that implement a code suggestion service. The code suggestion service receives, via an interface of the code suggestion service, a request that specifies licensing criteria. The code suggestion service parses a plurality of source code files and determines, according to a source code attribution database, respective licenses applicable to the source code files. The code suggestion service generates a set of candidate code suggestions based, at least in part, on the plurality of source code files. The code suggestion service determines, based on the respective licenses, code suggestions from the set of candidate code suggestions that satisfy the licensing criteria. The code suggestion service provides the code suggestions determined to satisfy the licensing criteria.
An Application Programming Interface (API) allows a launching of a virtual machine where a queue count can be configured by a user. More specifically, each virtual machine can be assigned a pool of queues. Additionally, each virtual machine can have multiple virtual networking interfaces and a user can assign a number of queues from the pool to each virtual networking interface. Thus, a new metadata field is described that can be used with requests to launch a virtual machine. The metadata field includes one or more parameters that associate a number of queues with each virtual networking interface. A queue count can be dynamically configured by a user to ensure that the queues are efficiently used given that the user understands the intended application of the virtual machine being launched.
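A minimal sketch of validating such a launch request follows, assuming a hypothetical metadata layout in which each virtual networking interface carries a `queue_count` parameter drawn from the virtual machine's queue pool; the field names and pool size are illustrative assumptions, not the actual API.

```python
# Illustrative sketch: check that per-interface queue counts in a launch
# request's metadata fit within the VM's assigned pool of queues.

QUEUE_POOL_SIZE = 8  # queues allotted to the virtual machine (assumed)

def validate_launch_request(request):
    """Check that queue assignments across interfaces fit the VM's pool."""
    interfaces = request.get("network_interfaces", [])
    total = sum(iface.get("queue_count", 1) for iface in interfaces)
    if total > QUEUE_POOL_SIZE:
        raise ValueError(
            f"requested {total} queues exceeds pool of {QUEUE_POOL_SIZE}")
    return total

request = {
    "image": "example-image",
    "network_interfaces": [
        {"name": "eth0", "queue_count": 4},
        {"name": "eth1", "queue_count": 2},
    ],
}
used = validate_launch_request(request)  # 6 of the 8 pooled queues assigned
```

A user who knows the intended workload of each interface can weight the counts accordingly (e.g., more queues on the data-plane interface than on a management interface).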
Systems and methods for selectively ignoring an occurrence of a wakeword within audio input data are provided herein. In some embodiments, a wakeword may be detected to have been uttered by an individual within a modified time window, which may account for hardware delays and echoing offsets. The detected wakeword that occurs during this modified time window may, in some embodiments, correspond to a word included within audio that is outputted by a voice activated electronic device. This may cause the voice activated electronic device to activate itself, stopping the audio from being outputted. By identifying when these occurrences of the wakeword within outputted audio are going to happen, the voice activated electronic device may selectively determine when to ignore the wakeword and, furthermore, when not to ignore the wakeword.
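The modified-time-window check can be sketched as follows; the specific delay and echo-offset values are assumptions for illustration, as is the interface.

```python
# Illustrative sketch: ignore a wakeword detection if it falls inside the
# window in which the device's own output audio is known to contain the
# wakeword, widened by hardware-delay and echo offsets (values assumed).

HARDWARE_DELAY = 0.05   # seconds; assumed output-pipeline latency
ECHO_OFFSET = 0.10      # seconds; assumed room-echo tail

def should_ignore(detection_time, wakeword_start, wakeword_end):
    """True if the detection overlaps the modified playback window."""
    window_start = wakeword_start + HARDWARE_DELAY
    window_end = wakeword_end + HARDWARE_DELAY + ECHO_OFFSET
    return window_start <= detection_time <= window_end

# Device output contains the wakeword between t=3.0s and t=3.5s.
ignore_self = should_ignore(3.2, 3.0, 3.5)    # self-triggered: ignore
ignore_user = should_ignore(5.0, 3.0, 3.5)    # user utterance: respond
```

Detections outside the window are treated as genuine user utterances, so the device still responds normally while its own audio is playing.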
A system is provided for modifying how an output is presented via a multi-device synchronous configuration based on detecting a speech characteristic in the user input. For example, if the user whispers a request, then the system may temporarily modify how the responsive output is presented to the user via multiple devices. In one example, the system may lower the volume on all devices presenting the output. In another example, the system may present the output via a single device rather than multiple devices. The system may also determine to operate in an alternate output mode based on certain non-audio data.
Systems and processes are described for establishing and using a secure channel. A shared secret may be used for authentication of session initiation messages as well as for generation of a private/public key pair for the session. A number of ways of agreeing on the shared secret are described and include pre-sharing the keys, reliance on a key management system, or via a token mechanism that uses a third entity such as a hub to manage authentication, for example. In some instances, the third entity may also perform endpoint selection (e.g., load balancing) by providing a particular endpoint along with the token.
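A minimal sketch of the dual use of the shared secret, using standard HMAC primitives: the secret both tags the session-initiation message and derives deterministic per-session seed material. The message layout and derivation label are illustrative assumptions, not the described protocol's actual wire format.

```python
import hmac
import hashlib
import os

# Illustrative sketch: one shared secret authenticates session initiation
# and seeds per-session key generation. Labels and layout are assumed.

def authenticate(shared_secret: bytes, message: bytes) -> bytes:
    """Tag a session-initiation message with an HMAC over the shared secret."""
    return hmac.new(shared_secret, message, hashlib.sha256).digest()

def verify(shared_secret: bytes, message: bytes, tag: bytes) -> bool:
    """Constant-time check of a received message's authentication tag."""
    return hmac.compare_digest(authenticate(shared_secret, message), tag)

def derive_session_seed(shared_secret: bytes, nonce: bytes) -> bytes:
    """Derive deterministic seed material for the session's key pair."""
    return hmac.new(shared_secret, b"session-keygen" + nonce,
                    hashlib.sha256).digest()

secret = b"secret agreed via pre-sharing or a key management system"
nonce = os.urandom(16)
msg = b"INIT session" + nonce
tag = authenticate(secret, msg)
ok = verify(secret, msg, tag)
# Both endpoints hold the secret and nonce, so both derive the same seed
# from which the session's private/public key pair can be generated.
same_seed = derive_session_seed(secret, nonce) == derive_session_seed(secret, nonce)
```

In the token variant, the hub would distribute the token (and possibly a selected endpoint) instead of the endpoints pre-sharing the secret directly.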
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
Techniques are described for providing users with access to computer networks, such as to enable users to interact with a remote configurable network service in order to create and configure computer networks that are provided by the configurable network service for use by the users. Computer networks provided by the configurable network service may be configured to be private computer networks that are accessible only by the users who create them, and may each be created and configured by a client of the configurable network service to be an extension to an existing computer network of the client, such as a private computer network extension to an existing private computer network of the client. If so, secure private access between an existing computer network and a new computer network extension being provided may be enabled using one or more VPN connections or other private access mechanisms.
This disclosure describes an aerial vehicle, such as an unmanned aerial vehicle (“UAV”), which includes a plurality of propulsion mechanisms that enable the aerial vehicle to move independently in any of six degrees of freedom (surge, sway, heave, roll, pitch, and yaw).
A data service implements a configurable data compressor/decompressor using a recipe generated for a particular data set type and using compression operators of a common registry (e.g., pantry) that are referenced by the recipe, wherein the recipe indicates at which nodes of a compression graph respective ones of the compression operators of the registry are to be implemented. The configurable data compressor/decompressor provides a customizable framework for compressing data sets of different types (e.g., belonging to different data domains) using a common compressor/decompressor implemented using a common set of compression operators.
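As a hedged sketch of the recipe-and-registry idea, the following code keeps a common registry ("pantry") of operators and runs the sequence an illustrative recipe names, here a linear compression graph. The operator names, recipe shape, and example data type are assumptions for illustration.

```python
import zlib

# Illustrative sketch: a shared operator registry plus a per-data-type
# recipe naming which operators run at each node of a (linear) graph.

def delta_encode(data: bytes) -> bytes:
    """Replace each byte with its difference from the previous byte."""
    return bytes((b - data[i - 1]) % 256 if i else b
                 for i, b in enumerate(data))

def delta_decode(data: bytes) -> bytes:
    """Invert delta_encode by accumulating the differences."""
    out, prev = bytearray(), 0
    for b in data:
        prev = (prev + b) % 256
        out.append(prev)
    return bytes(out)

# Common registry ("pantry") of compression operators referenced by recipes.
REGISTRY = {
    "delta": delta_encode,
    "undelta": delta_decode,
    "deflate": zlib.compress,
    "inflate": zlib.decompress,
}

def apply_recipe(steps, data: bytes) -> bytes:
    """Run the operators named by the recipe, in order."""
    for name in steps:
        data = REGISTRY[name](data)
    return data

# Recipe generated for a hypothetical "slowly varying byte series" type.
recipe = {"compress": ["delta", "deflate"],
          "decompress": ["inflate", "undelta"]}

original = bytes(range(100, 200)) * 20
packed = apply_recipe(recipe["compress"], original)
restored = apply_recipe(recipe["decompress"], packed)
```

A different data set type would carry a different recipe over the same registry, which is how one framework serves multiple data domains.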
Systems and techniques are disclosed for predicting the structural status of an object. An object model, such as a machine learning model, can be trained on sample sensor data indicating vibrations, movements, and/or other reactions of objects with known desired and undesired structural statuses to a stimulus agent, such as a puff of air. A scanning device can output a corresponding stimulus agent towards an object, capture sensor data indicating the reaction of the object to the stimulus agent, and provide the sensor data to the trained object model. Based on the sensor data indicating how the object reacted to the stimulus agent, the object model can predict whether the object has a desired structural status or an undesired structural status.
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 20/17 - Terrestrial scenes taken from planes or by drones
79.
AUTOMATED POLICY REFINER FOR CLOUD-BASED IDENTITY AND ACCESS MANAGEMENT SYSTEMS
Techniques are described for providing a policy refiner application used to analyze and recommend modifications to identity and access management policies created by users of a cloud provider network (e.g., to move the policies toward least-privilege permissions). A policy refiner application receives as input a policy to analyze, and a log of events related to activity associated with one or more accounts of a cloud provider network. The policy refiner application can identify, from the log of events, actions that were permitted based on particular statements contained in the policy. Based on field values contained in the corresponding events, the policy refiner application generates an abstraction of the field values, where the abstraction of the field values may represent a more restrictive version of the field from a policy perspective. These abstractions can be presented to users as recommendations for modifying their policy to reduce the privileges granted by the policy.
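The field-value abstraction step can be sketched as below: observed resource values from the event log are generalized into the narrowest common prefix wildcard, which is narrower than a blanket `*` grant. The resource strings and the prefix-wildcard strategy are illustrative assumptions, not the policy refiner's actual algorithm.

```python
import os

# Illustrative sketch: generalize observed field values into a more
# restrictive wildcard that still covers all observed activity.

def abstract_values(values):
    """Return the most restrictive prefix wildcard covering all values."""
    values = list(values)
    if len(values) == 1:
        return values[0]                    # exact value, no wildcard needed
    prefix = os.path.commonprefix(values)
    return prefix + "*"

# Hypothetical resource values seen in events permitted by a "*" statement.
observed = [
    "arn:example:bucket/app-logs/2024/01/x.log",
    "arn:example:bucket/app-logs/2024/02/y.log",
]
recommendation = abstract_values(observed)
# The recommendation scopes the statement to the shared prefix rather than
# every resource, moving the policy toward least privilege.
```

A production refiner would also fold in structural knowledge of the field (e.g., path segments, account IDs) rather than raw character prefixes, but the presented recommendation serves the same purpose.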
Techniques for customer-initiated virtual machine resource allocation sharing are described. A hardware virtualization service of a cloud provider network receives a request to launch a first virtual machine, wherein the first virtual machine is of a first virtual machine type, the first virtual machine type having a resource amount allocated to virtual machines of the first virtual machine type. The hardware virtualization service causes a launch of the first virtual machine on a host computer system of the cloud provider network. The host computer system shares an allocation of the resource amount from a corresponding resource of the host computer system between the first virtual machine and a second virtual machine, wherein the second virtual machine is of the first virtual machine type.
Connectivity is enabled between a first and second isolated network using a virtual traffic hub that includes a decision master node responsible for determining a routing action for a packet received at the hub. At the hub, a determination is made that a particular domain name system (DNS) message being directed to a first resource in the first isolated network is to include an indication of a second resource in the second isolated network. The second resource is assigned a network address within a private address range of the second isolated network, which overlaps with a private address range being used in the first isolated network. The hub causes a transformed version of the network address to be included in the DNS message delivered to the first resource.
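A small sketch of the hub's transformation step follows: because the second resource's private address overlaps the first network's range, the DNS answer is rewritten to a non-overlapping translated address before delivery. The address ranges and table representation are illustrative assumptions.

```python
# Illustrative sketch: rewrite overlapping private addresses in a DNS
# answer using a translation table maintained by the virtual traffic hub.

# real address in isolated network B -> transformed address routable from A
NAT_TABLE = {"10.0.0.5": "100.64.0.5"}

def transform_dns_answer(answer):
    """Return the answer with any overlapping address transformed."""
    return {
        "name": answer["name"],
        "address": NAT_TABLE.get(answer["address"], answer["address"]),
    }

reply = transform_dns_answer({"name": "db.internal", "address": "10.0.0.5"})
# The first isolated network sees 100.64.0.5 instead of 10.0.0.5, which
# would otherwise collide with its own 10.0.0.0/24 range.
```

Traffic sent to the transformed address is then mapped back to the real address by the hub on the forwarding path.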
Disclosed are various embodiments for seamless insertion of modified media content. In one embodiment, a modified portion of video content is received. The modified portion has a start cue point and an end cue point that indicate approximately when the modification begins and ends relative to the video content. A video coding associated with the video content is identified. The start cue point and/or the end cue point are dynamically adjusted to align the modified portion with the video content based at least in part on the video coding.
Disclosed are various embodiments for a distributed and synchronized core in a radio-based network. In one embodiment, a first radio access network (RAN)-enabled edge server at a first edge location is configured to perform a set of distributed unit (DU) functions for a radio-based network. The first RAN-enabled edge server is also configured to perform a set of core network functions and a set of centralized unit (CU) functions for the radio-based network. State associated with the set of core network functions and the set of CU functions is synchronized between the first RAN-enabled edge server and another server.
Systems and methods are provided for translation of text in an image, and presentation of a version of the image in which the translated text is displayed in a manner consistent with the original image. Text segments are automatically translated from their original source language to a target language. In order to provide presentation of the translated text in a manner that closely matches the source text, various display attributes of the source text (e.g., font size, font color, font style, etc.) are detected and applied to the translated text.
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
A system and method for continual learning in a provider network are described. The method implements, or interfaces with a system that implements, a semi-automated or fully automated architecture of continual machine learning, the architecture providing user-configurable model retraining or hyperparameter tuning enabled by a provider network. This adapts a model over time to new information in the training data while also providing a user-friendly, flexible, and customizable continual learning process.
Systems and methods are disclosed for automated lateral transfer and elevation of sortation shuttles. An example system may include a track having a first portion arranged in a first direction, a shuttle configured to move along the track, and a shuttle carriage system configured to move in a second direction transverse to the first direction, where the shuttle is configured to move from the track to the shuttle carriage system. The shuttle carriage system may include a first frame configured to support the shuttle, a first electromagnet configured to propel the first frame, and a second electromagnet coupled to the first frame, the second electromagnet configured to propel the shuttle off the first frame.
The updating of a definition layer or schema for a large distributed database can be accomplished using a plurality of data store tiers. A distributed database can be made up of many individual data stores, and these data stores can be allocated across a set of tiers based on business logic or other allocation criteria. The update can be applied sequentially to the individual tiers, such that only data stores for a single tier are being updated at any given time. This can help to minimize downtime for the database as a whole, and can help to minimize problems that may result from an unsuccessful update. Such an approach can also allow for simplified error detection and rollback, as well as providing control over a rate at which the update is applied to the various data stores of the distributed database.
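The tier-by-tier rollout can be sketched as follows; a failure halts the rollout and reports which tiers completed so they can be rolled back. The tier names, store names, and return shape are illustrative assumptions.

```python
# Illustrative sketch: apply a schema update to a distributed database one
# tier of data stores at a time, halting on the first failure.

def update_by_tier(tiers, apply_update):
    """Apply `apply_update` to every store, one tier at a time.

    `tiers` is an ordered list of (tier_name, [store, ...]) pairs, allocated
    by business logic or other criteria. Returns a status dict naming any
    failed tier and the tiers eligible for rollback.
    """
    completed = []
    for tier_name, stores in tiers:
        for store in stores:
            if not apply_update(store):
                return {"status": "failed", "tier": tier_name,
                        "rollback": completed}
        completed.append(tier_name)
    return {"status": "ok", "rollback": []}

tiers = [("canary", ["store-1"]), ("main", ["store-2", "store-3"])]
result = update_by_tier(tiers, lambda store: True)
failure = update_by_tier(tiers, lambda store: store != "store-3")
```

Ordering a small canary tier first limits the blast radius of a bad update, and the rate of rollout is controlled simply by how stores are allocated across tiers.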
G06F 16/185 - Hierarchical storage management [HSM] systems, e.g. file migration or policies thereof
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
89.
Automatic index management for a non-relational database
Index management for non-relational database systems may be automatically performed. Performance of queries to a non-relational database may be evaluated to determine whether to create or remove an additional index. An additional index may be automatically created to store a subset of data projected from the non-relational database to utilize when performing a query to the non-relational database instead of accessing data in the non-relational database.
Intelligent query routing may be performed across shards of a scalable database table. A router of a database system may receive an access request directed to one or more database tables. The router may evaluate the access request with respect to metadata obtained for the database tables to determine an assignment distribution of computing resources of the database system to data that can satisfy the access request. The router can select planning locations to perform the access request based on the assignment distribution of the computing resources. The router can cause the access request to be performed according to planning at the selected planning locations.
Working set ratio estimations of data items in a sliding time window are determined to dynamically allocate storage for the data items. A working set ratio may be determined by accessing a fixed-size array that stores respective timestamps of last accesses of data items, to determine which data items are useful for estimating a working set for the application within a range of time. The working set ratio is then determined from the estimated working set and the amount of computing resources allocated to the application. The amount of the computing resources allocated to the application may then be automatically scaled according to the determined working set ratio.
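A minimal sketch of the estimate follows: a fixed-size array of last-access timestamps is scanned for entries inside the sliding window, and the ratio compares that working set against allocated capacity. The window length and array layout are assumed parameters, not the described system's actual values.

```python
import time

# Illustrative sketch: estimate the working set from a fixed-size array of
# per-slot last-access timestamps, then compute the working set ratio.

WINDOW_SECONDS = 300.0  # sliding-window length (assumed)

def working_set_ratio(last_access, allocated_slots, now=None):
    """Fraction of allocated capacity touched within the sliding window."""
    now = time.time() if now is None else now
    working_set = sum(1 for ts in last_access if now - ts <= WINDOW_SECONDS)
    return working_set / allocated_slots

now = 1_000_000.0
# Fixed-size array of last-access timestamps, one per allocated slot.
last_access = [now - 10, now - 50, now - 400, now - 9000]
ratio = working_set_ratio(last_access, allocated_slots=4, now=now)
# Two of four slots were accessed inside the window, so the ratio is 0.5;
# a persistently low ratio could trigger scaling the allocation down.
```

Because the array is fixed-size, the estimate costs constant memory regardless of how many distinct items the application has ever touched.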
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
Devices and techniques are generally described for determining named entity recognition tags. In various examples, first input data representing a natural language input may be determined. In some examples, a first machine learned model may determine first data comprising a first encoded representation of the first input data. In various examples, second data representing a grouping of text of the first input data may be determined based at least in part on the first data. In some examples, first entity data may be determined by searching a memory layer using the second data. In at least some examples, the first entity data and the first data may be combined to generate third data. In various examples, output data comprising a predicted named entity recognition tag may be generated for the grouping of text based at least in part on the third data.
Techniques for performing multi-stage entity resolution (ER) processing are described. A system may determine a portion of a user input corresponding to an entity name, and may request an entity provider component to perform a search to determine one or more entities corresponding to the entity name. The preliminary search results may be sent to a skill selection component for processing, while the entity provider component performs a complete search to determine entities corresponding to the entity name. A selected skill component may request the complete search results to perform its processing, including determining an output responsive to the user input.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
Network services are deployed in a networked environment in association with a user account. Dependencies of a network service, such as other network services, may be identified based on an online analysis and an offline analysis of the network service. Further, anomalies associated with the dependencies may be identified in some situations. A call graph may include nodes corresponding to the network service and its dependencies, and may include an identifier corresponding to a part of the call path that has the anomaly. An inspection of the call graph allows software developers to readily recognize that their service depends on potentially flawed software that may cause a service failure or outage.
An interruption-handling setting for a category of interactions of an application is determined via a programmatic interface. A set of user-generated input is obtained while presentation to a user of a set of output of the category is in progress. A response to the set of user-generated input is prepared based at least in part on the interruption-handling setting.
Technologies directed to a control circuit using dynamic signal compression are described. A control circuit includes a front-end module (FEM) coupled to an RF cable, the FEM having a low-noise amplifier (LNA). The control circuit further includes an automatic gain control (AGC) circuitry coupled to the FEM. The AGC circuitry receives a first radio frequency (RF) signal having a first portion of one or more symbols and a second portion of one or more symbols. The AGC circuitry further amplifies the first portion to generate a first portion of an output signal. The AGC circuitry further compresses the second portion to obtain a second portion of the output signal. The AGC circuitry further sends a control signal to cause the FEM to change a gain state value of the LNA from a first value to a second value based on a comparison between a voltage of the output signal and a reference voltage.
Techniques and systems can receive a query identifying a name linked to performance data of a computer system and a location of the performance data. The name linked to the performance data of the computer system and the location of the performance data can be communicated to a first computer-implemented system. The first computer-implemented system can include identifying data derived from the name and the location of the performance data. Identifying data derived from the name and the location of the performance data can be received from the first computer-implemented system. The identifying data derived from the name and the location of the performance data can be used to retrieve the performance data. The performance data can be hosted by a second computer-implemented system that is different than the first computer-implemented system.
Embodiments of a contextualized visual search (CVS) system are disclosed capable of isolating target images of items that contain instances of a previously-unseen query image from a large database of target images. In embodiments, the system is used to implement an interactive query interface of an e-commerce portal, which allows the user to specify the query image (e.g. a logo) to be searched. The system converts the query image into a feature vector using a first machine learning model, and compares the feature vector to feature vectors of target images using a second machine learning model to find matching target images that contain an instance of the query image. The system then returns a query result indicating a list of items associated with matched target images. In embodiments, the query results may be ranked based on a set of personalized factors associated with the user.
Techniques are provided herein for selecting and transmitting snippets from a messaging application. A “snippet” refers to an audio segment of a song that is less than the whole of the song. A user may request to view various audio segments (e.g., by category, by search, etc.) corresponding to portions of respective songs via a user interface of the messaging application. In some embodiments, an audio segment can be selected and metadata associated with that particular audio segment may be transmitted to another computing device where the audio segment can be played (e.g., streamed). In this manner, these snippets can be employed by the user to enhance their chat or texting conversation.
Techniques for planning resources using block and route information are described. In an example, a computing system determines a demand for item transportation expected during a planning horizon. The computing system determines information about a pre-planned transportation resource available during the planning horizon and costs associated with the pre-planned transportation resource. The computing system uses an optimization model to determine a block having a time length, a tour to transport, during the block, a first portion of the demand using the pre-planned transportation resource, and a second portion of the demand to be transported using an on-demand transportation resource. The computing system indicates, to a first computing device of the pre-planned transportation resource, an assignment of the block to the pre-planned transportation resource.