Systems and methods for configuring a virtual machine provided by a remote computing system based on the availability of one or more remote computing resources and respective corresponding prices of the one or more remote computing resources are disclosed. Users are presented with an interface that allows for selection of individual remote computing resources to be included in a custom-configured virtual machine. In addition, a customized price is determined for the custom-configured virtual machine based on the user's selections and the current availability of the selected remote computing resources to be included in the custom-configured virtual machine.
Learning iterations, individual ones of which include a respective bucket group selection phase and a class boundary refinement phase, are performed using a source data set whose records are divided into buckets. In the bucket group selection phase of an iteration, a bucket is selected for annotation based on output obtained from a classification model trained in the class boundary refinement phase of an earlier iteration. In the class boundary refinement phase, records of buckets annotated as positive-match buckets for a target class in the bucket group selection phase are selected for inclusion in a training set for a new version of the model using a model enhancement criterion. The trained version of the model is stored.
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06F 17/18 - Complex mathematical operations for evaluating statistical data
G06F 18/2113 - Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
An encoding of a cryptographic key is obtained in the form of an encrypted key. A request is provided to a service provider, fulfillment of which involves performing a cryptographic operation on data. Upon fulfillment of the request, a response indicating the fulfillment is received.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Apparatus and methods are disclosed herein for remote direct memory access (RDMA) technology that enables direct memory access from one host computer memory to another host computer memory over a physical or virtual computer network according to a number of different RDMA protocols. In one example, a method includes receiving remote direct memory access (RDMA) packets via a network adapter, deriving a protocol index identifying an RDMA protocol used to encode data for an RDMA transaction associated with the RDMA packets, applying the protocol index to generate RDMA commands from header information in at least one of the received RDMA packets, and performing an RDMA operation using the RDMA commands.
Techniques for emulating a configuration space may include emulating a set of configuration registers in an integrated circuit device for a set of functions corresponding to a type of peripheral device. The type of peripheral device represented by the integrated circuit device can be modified by changing the set of configuration registers being emulated in the integrated circuit device. Multiple sets of configuration registers can also be emulated to support different virtual machines or different operating systems.
Methods, systems, and computer-readable media for tracing service interactions without global transaction identifiers are disclosed. A service monitoring system receives an event message from a first service in a service-oriented system. The event message comprises one or more elements of data from a body of a service request from an upstream service. The first service initiates a sub-task associated with the service request. The service monitoring system receives one or more additional event messages from one or more additional services. The additional event message(s) comprise one or more additional elements of data from one or more additional service requests associated with one or more additional sub-tasks. The service monitoring system determines, based (at least in part) on the element(s) of data in the event message and the additional element(s) of data in the additional event message(s), that the sub-task and the additional sub-task(s) are associated with a higher-level task.
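The correlation idea above can be illustrated with a minimal sketch: event messages carry data elements copied from request bodies (e.g., an order identifier), and sub-tasks whose elements overlap are grouped under one inferred higher-level task. All names and the event structure here are illustrative assumptions, not the disclosed implementation.

```python
def correlate_events(events):
    """Group events whose body-derived data elements overlap.

    events: list of dicts like {"service": str, "elements": set}
    Returns a list of groups (lists of events), each representing one
    inferred higher-level task.
    """
    groups = []  # each group: {"elements": set, "events": list}
    for event in events:
        matched = None
        for group in groups:
            # A shared body element implies a shared higher-level task.
            if group["elements"] & event["elements"]:
                matched = group
                break
        if matched:
            matched["elements"] |= event["elements"]
            matched["events"].append(event)
        else:
            groups.append({"elements": set(event["elements"]),
                           "events": [event]})
    return [g["events"] for g in groups]
```

A production version would likely use union-find so that an event bridging two existing groups merges them transitively; this sequential version only joins the first matching group.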
An autonomous mobile device (AMD) moves around a physical space while performing tasks. The AMD may have sensors with fields of view (FOVs) that are forward-facing. As the AMD moves forward, a safe region is determined based on data from those forward-facing sensors. The safe region describes a geographical area clear of obstacles during recent travel. Before moving outside of the current FOV, the AMD determines whether a move outside of the current FOV keeps the AMD within the safe region. For example, if a path that is outside the current FOV would result in the AMD moving outside the safe region, the AMD modifies the path until poses associated with the path result in the AMD staying within the safe region. The resulting safe path may then be used by the AMD to safely move outside the current FOV.
This disclosure describes systems and methods for using a primary device, communicatively coupled to a remote system, to configure or re-configure a secondary device in the same environment as the primary device. In some instances, the primary device may communicatively couple to the secondary device via a short-range wireless connection and to the remote system via a wireless area network (WAN), a wired connection, or the like. Thus, the primary device may act as an intermediary between the secondary device and the remote system for configuring the secondary device.
Systems and methods to measure and affect focus, engagement, and presence of users may include measuring a variety of aspects of users engaged in particular activities. Individual user characteristics or preferences and attributes of activities may be taken into account to determine levels of focus for particular users and activities. A variety of sensors may detect aspects of users engaged in activities to measure levels of focus. In addition, a variety of output devices may initiate actions to affect levels of focus of users engaged in activities. Further, a variety of processing algorithms, including machine learning models, may be trained to identify desired levels of focus, to calculate current levels of focus, and to select actions to change or boost levels of focus. In this manner, activities undertaken by users, as well as interactions between multiple users, may be made more engaging, efficient, and productive.
Systems and methods are provided to eliminate multiplication operations with zero padding data for convolution computations. A multiplication matrix is generated from an input feature map matrix with padding by adjusting coordinates and dimensions of the input feature map matrix to exclude padding data. The multiplication matrix is used to perform matrix multiplications with respective weight values which results in fewer computations as compared to matrix multiplications which include the zero padding data.
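A toy software analogue of the idea above (my own sketch, not the disclosed hardware design): for a convolution with zero padding, clip each kernel window to the real input coordinates so that multiplications with padded zeros never occur at all.

```python
def conv2d_skip_padding(x, w, pad):
    """2-D cross-correlation (stride 1) that never touches padding.

    x: square input as a list of lists (H x H)
    w: kernel (K x K), pad: padding size
    Output spatial size is H + 2*pad - K + 1.
    """
    h, k = len(x), len(w)
    out_size = h + 2 * pad - k + 1
    out = [[0.0] * out_size for _ in range(out_size)]
    for i in range(out_size):
        for j in range(out_size):
            acc = 0.0
            for ki in range(k):
                for kj in range(k):
                    # Coordinates in the (conceptual) padded input,
                    # shifted back into real-input coordinates.
                    ri, rj = i + ki - pad, j + kj - pad
                    if 0 <= ri < h and 0 <= rj < h:  # skip padded zeros
                        acc += x[ri][rj] * w[ki][kj]
            out[i][j] = acc
    return out
```

The bounds check replaces every multiply-by-zero that a padded matrix multiplication would perform, which is the computation-reduction effect the abstract describes.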
Disclosed are various embodiments for implementing passenger profiles for autonomous vehicles. A passenger of the autonomous vehicle is identified. A passenger profile corresponding to the passenger and comprising a passenger preference is identified. A configuration setting of the autonomous vehicle corresponding to autonomous operation of the autonomous vehicle is then adjusted based at least in part on the passenger preference.
H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
H04W 4/48 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for in-vehicle communication
When a first search query including an image of an item is received to search for items associated with similar images, a second search query that includes text based on the image is generated. The text may be based on previous queries associated with the depicted item, visual features of the image, or text that is present in the image. The results from the first search query are scored based on their correspondence with the image of the item. Results having a score greater than a threshold are presented first in the output, followed by a selected number of results from the second search query. Results from the first search query that are associated with a score less than the threshold may be presented after the results from the second search query. This presentation increases the likelihood that items presented earlier in the output are relevant to the initial query.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
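The result-ordering scheme described in the abstract above can be sketched as follows; the scoring representation and threshold are illustrative assumptions.

```python
def merge_results(image_results, text_results, threshold, n_text):
    """Order results from an image query and a derived text query.

    image_results: list of (item, score) pairs from the image-based query
    text_results: list of items from the generated text query
    Items scoring above `threshold` come first, then up to `n_text`
    text-query results, then the below-threshold image-query results.
    """
    above = [item for item, s in image_results if s > threshold]
    below = [item for item, s in image_results if s <= threshold]
    # Avoid duplicating items already returned by the image query.
    from_text = [item for item in text_results
                 if item not in above and item not in below][:n_text]
    return above + from_text + below
```

For example, with image results `[("a", 0.9), ("b", 0.4), ("c", 0.8)]`, text results `["d", "a", "e"]`, a threshold of 0.5, and two text results requested, the ordering is `a, c, d, e, b`: confident image matches first, then text-derived candidates, then weak image matches.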
13. Adaptive user interface for determining errors in performance of activities
To assist a user in the correct performance of an activity, video data is acquired. A pose of the user is determined from the video data and an avatar is generated representing the user in the pose. The pose of the user is compared to one or more other poses representing correct performance of the activity to determine one or more differences that may represent errors by the user. Depending on the activity that is being performed, some errors may be presented to the user during performance of the activity, while other errors may be presented after performance of the activity has ceased. To present an indication of an error, a specific body part or other portion of the avatar that corresponds to a difference between the user's pose and a correct pose may be presented along with an instruction regarding correct performance of the activity.
G16H 20/30 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
A63B 24/00 - Electric or electronic controls for exercising apparatus of groups
A63B 71/06 - Indicating or scoring devices for games or players
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G09B 19/00 - Teaching not covered by other main groups of this subclass
Disclosed herein are techniques for classifying data with a data processing circuit. In one embodiment, the data processing circuit includes a probabilistic circuit configurable to generate a decision at a pre-determined probability, and an output generation circuit including an output node and configured to receive input data and a weight, and generate output data at the output node for approximating a product of the input data and the weight. The generation of the output data includes propagating the weight to the output node according to a first decision of the probabilistic circuit. The probabilistic circuit is configured to generate the first decision at a probability determined based on the input data.
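A software sketch of the stochastic principle above (an illustration, not the disclosed circuit): if the weight is propagated to the output with a probability set by the input (assumed normalized to [0, 1]), the averaged output approximates input × weight without any multiplier.

```python
import random

def stochastic_product(x, weight, trials, rng):
    """Approximate x * weight by propagating `weight` with probability x.

    x: input in [0, 1] acting as the propagation probability
    weight: value forwarded to the output node on a positive decision
    """
    total = 0.0
    for _ in range(trials):
        if rng.random() < x:   # probabilistic "decision" at probability x
            total += weight    # propagate the weight to the output node
        # otherwise nothing is propagated (contributes zero)
    return total / trials
```

With x = 0.25 and weight = 8.0, the average converges toward 2.0 as the number of trials grows; only comparisons and additions are used.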
A first configurable address decoder can be coupled between a source node and a first interconnect fabric, and a second address decoder can be coupled between the first interconnect fabric and a second interconnect fabric. The first address decoder can be configured with a first address mapping table that can map a first set of address ranges to a first set of target nodes connected to the first interconnect fabric. The second address decoder can be configured with a second address mapping table that can map a second set of address ranges to a second set of target nodes connected to the second interconnect fabric. The second address decoder can be part of the first set of target nodes. The first address decoder and the second address decoder can be configured or re-configured to determine different routes for a transaction from the source node to a target node in the second set of target nodes via the first and second interconnect fabrics.
Techniques for training a machine-learning model are described. In an example, a computer generates a first pseudo-label indicating a first mask associated with a first object detected by a first machine-learning model in a first training image. A transformed image of the first training image can be generated using a transformation. Based on the transformation, a second pseudo-label indicating a second mask detected in the transformed image and corresponding to the first mask can be determined. A second machine-learning model can be trained using the second pseudo-label. The trained, second machine-learning model can detect a third mask associated with a second object based on a second image.
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A database management system receives a command defining a view of the database. The view definition is accepted without determining whether references to schema elements within the view definition are resolvable to existing elements of the database schema. A query of the view is received. In response to the query of the view, the database management system resolves references to schema elements in the view definition by determining whether the references correspond to data available for processing the query.
G06F 16/80 - Information retrieval; Database structures therefor; File system structures therefor of semi-structured data, e.g. markup language structured data such as SGML, XML or HTML
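The late-binding behavior in the abstract above can be shown with a toy catalog (entirely my own construction): the view definition is accepted without checking that the schema elements it references exist, and resolution happens only when the view is queried.

```python
class MiniCatalog:
    """Toy database catalog with late-bound (unvalidated) views."""

    def __init__(self):
        self.tables = {}   # table name -> list of row dicts
        self.views = {}    # view name -> referenced table name

    def create_view(self, view_name, table_name):
        # Accepted without resolving `table_name` against the schema.
        self.views[view_name] = table_name

    def query_view(self, view_name):
        # References are resolved only now, at query time.
        table_name = self.views[view_name]
        if table_name not in self.tables:
            raise LookupError(f"view reference {table_name!r} unresolvable")
        return self.tables[table_name]
```

Creating a view over a not-yet-existing table succeeds; querying it fails until the table appears, after which the same view definition works unchanged.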
18. Systems for improving pose determination based on video data
A first computing device acquires video data representing a user performing an activity. The first device uses a first pose extraction algorithm to determine a pose of the user within a frame of video data. If the pose is determined to be potentially inaccurate, the user is prompted for authorization to send the frame of video data to a second computing device. If authorization is granted, the second computing device may use a different algorithm to determine a pose of the user and send data indicative of this pose to the first computing device to enable the first computing device to update a score or other output. The second computing device may also use the frame of video data as training data to retrain or modify the first pose extraction algorithm, and may send the modified algorithm to the first computing device for future use.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
G06V 20/40 - Scenes; Scene-specific elements in video content
A skid-steer delivery autonomous ground vehicle (AGV) has a drive train and suspension that aid in maneuverability. The AGV has six wheels, each of which is powered by its own motor. The AGV has features that diminish the dragging effect on the wheels, either by choice of wheel features or by taking weight off the front wheels during turning.
B62D 11/04 - Steering non-deflectable wheels; Steering endless tracks or the like by differentially driving ground-engaging elements on opposite vehicle sides by means of separate power sources
B60C 3/04 - Tyres characterised by transverse section characterised by the relative dimensions of the section, e.g. low profile
B60G 9/02 - Resilient suspensions for a rigid axle or axle housing for two or more wheels the axle or housing being pivotally mounted on the vehicle
B60G 17/015 - Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of vehicle or for locking suspension during use to meet varying vehicular or surface conditions, the regulating means comprising electric or electronic elements
B60G 17/016 - Resilient suspensions having means for adjusting the spring or vibration-damper characteristics, for regulating the distance between a supporting surface and a sprung part of vehicle or for locking suspension during use to meet varying vehicular or surface conditions, the regulating means comprising electric or electronic elements characterised by their responsiveness, when the vehicle is travelling, to specific motion, a specific condition, or driver input
B62D 11/00 - Steering non-deflectable wheels; Steering endless tracks or the like
B62D 61/10 - Motor vehicles or trailers, characterised by the arrangement or number of wheels, not otherwise provided for, e.g. four wheels in diamond pattern with more than four wheels
Techniques for reducing the latency of content retrieval from a content delivery network include receiving a request from a client device for media content, parsing the request for attributes associated with the request and the client device, and providing the attributes to a machine learning model to perform server-side prediction of an estimated retrieval time of the media content. A quality level for the media content is determined based on the estimated retrieval time, and the requested media content is provided to the client device at the determined quality level.
H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/462 - Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
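The final selection step of the abstract above (mapping a predicted retrieval time to a quality level) can be sketched as a simple tier lookup. The tier table is an illustrative assumption; the disclosure obtains the estimate from a trained machine-learning model.

```python
QUALITY_TIERS = [          # (max predicted retrieval ms, quality label)
    (150, "1080p"),
    (400, "720p"),
    (900, "480p"),
]

def select_quality(estimated_ms):
    """Pick the highest quality whose latency budget covers the estimate."""
    for max_ms, label in QUALITY_TIERS:
        if estimated_ms <= max_ms:
            return label
    return "240p"  # fallback when retrieval is predicted to be very slow
```

A fast predicted retrieval (100 ms) yields the top tier, while a slow one (2000 ms) falls through to the lowest quality, trading fidelity for reduced start-up latency.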
21. Automatically prioritizing computing resource configurations for remediation
Systems and methods for automatically prioritizing computing resource configurations for remediation include receiving information describing configuration issues that may result in impaired system performance or unauthorized access, parsing that information, and automatically analyzing the configuration details of a user's private computing environment to determine which assets provide an environment in which those configuration issues may be exploited to produce undesired results. Such systems and methods can generate assessments indicating the likelihood that an issue can be exploited and the potential impacts of its exploitation. These assessments can be used to generate a report prioritizing remediation of specific configuration issues for specific vulnerable assets based on the actual configuration of the user's computing resources and the data managed using those resources. Issues deemed to have a higher likelihood of resulting in problems can be prioritized over configuration issues which may appear to have severe consequences but are unlikely to affect the user's resources.
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
An acoustic event detection system may employ one or more recurrent neural networks (RNNs) to extract features from audio data, and use the extracted features to determine the presence of an acoustic event. The system may use self-attention to emphasize features extracted from portions of audio data that may include features more useful for detecting acoustic events. The system may perform self-attention in an iterative manner to reduce the amount of memory used to store hidden states of the RNN while processing successive portions of the audio data. The system may process the portions of the audio data using the RNN to generate a hidden state for each portion. The system may calculate an interim embedding for each hidden state. An interim embedding calculated for the last hidden state may be normalized to determine a final embedding representing features extracted from the input data by the RNN.
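The iterative self-attention described above can be illustrated with an online-softmax accumulation (my own sketch, not the patented method): instead of storing every hidden state and applying softmax at the end, only a running maximum, normalizer, and weighted sum are kept, updated once per portion.

```python
import math

def iterative_attention(hidden_states, scores):
    """Attention-weighted embedding computed one hidden state at a time.

    hidden_states: list of equal-length float lists (one per audio portion)
    scores: one attention score per hidden state
    Only O(dim) memory is retained across the loop.
    """
    dim = len(hidden_states[0])
    max_s = float("-inf")   # running maximum score (for stability)
    norm = 0.0              # running softmax normalizer
    acc = [0.0] * dim       # running weighted sum (interim embedding)
    for h, s in zip(hidden_states, scores):
        new_max = max(max_s, s)
        # Rescale previous accumulations to the new maximum.
        scale = math.exp(max_s - new_max) if norm else 0.0
        w = math.exp(s - new_max)
        norm = norm * scale + w
        acc = [a * scale + w * x for a, x in zip(acc, h)]
        max_s = new_max
    return [a / norm for a in acc]  # final embedding after the last state
```

The result is identical (up to floating-point error) to stacking all hidden states and applying softmax attention in one batch, which is what makes the memory reduction possible.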
Systems, devices, and methods are provided for processing images using machine learning. Features may be obtained from an image using a residual network, such as ResNet-101. Features may be analyzed using a classification model such as K-nearest neighbors (K-NN). Features and metadata extracted from images may be used to generate other images. Templates may be used to generate various types of images. For example, assets from two images may be combined to create a third image.
G06V 10/40 - Extraction of image or video features
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
G06T 3/40 - Scaling of a whole image or part thereof
G06T 11/60 - Editing figures and text; Combining figures or text
Devices, systems, and methods are provided for enhanced geographical caching of estimated arrival times. A method may include receiving, from a first device of a first user at a first location and from a second device of a second user at a second location, respective user inputs indicative of the respective users being in transit to a destination location from within a geographic region; determining, for the first user and the second user, a first estimated time of arrival from a first geographical area to the destination location, the first geographical area including the first location and the second location; identifying a third location of the first device at a third time, wherein the third location is within the first geographical area; determining that a time-to-live (TTL) of the first estimated time of arrival has not expired at the third time; and refraining from recalculating the first estimated time of arrival.
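The caching behavior above can be sketched with a small class; the names and the injectable clock are my assumptions for illustration.

```python
class GeoEtaCache:
    """Cache one ETA per (geographical area, destination) with a TTL."""

    def __init__(self, ttl_seconds, clock):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable time source, for testing
        self._entries = {}          # (area, dest) -> (eta_min, stored_at)

    def put(self, area, dest, eta_minutes):
        self._entries[(area, dest)] = (eta_minutes, self.clock())

    def get(self, area, dest):
        """Return the cached ETA, or None if absent or the TTL expired."""
        entry = self._entries.get((area, dest))
        if entry is None:
            return None
        eta, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            return None             # expired: caller must recalculate
        return eta                  # live: refrain from recalculating
```

Because the cache key is the geographical area rather than an exact coordinate, any user still inside the area before the TTL expires reuses the stored estimate.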
Described is a multiple-camera system and process for detecting, tracking, and re-verifying agents within a materials handling facility. In one implementation, a plurality of feature vectors may be generated for an agent and maintained as an agent model representative of the agent. When the object being tracked as the agent is to be re-verified, feature vectors representative of the object are generated and stored as a probe agent model. Feature vectors of the probe agent model are compared with corresponding feature vectors of candidate agent models for agents located in the materials handling facility. Based on the resulting similarity scores, the agent may be re-verified, it may be determined that identifiers used for objects tracked as representative of the agents have been flipped, and/or it may be determined that tracking of the object representing the agent has been dropped.
Systems and methods are described for merging customer profiles, such as may be implemented by a computer-implemented contact center service. In some aspects, a subset of profiles may be determined that satisfy merging criteria, where individual profiles include a plurality of data fields. At least one value in a first data field that conflicts between at least two profiles may be identified. Next a merged value may be selected for the first data field based on data deduplication criteria, where the data deduplication criteria includes at least one indicator of accuracy of values of the plurality of data fields. As a result of a determination that at least the subset of profiles of the group of profiles meet the merging criteria, at least the subset of profiles may be combined into a combined profile using the merged value.
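The conflict-resolution step above can be sketched as follows; representing the "accuracy indicator" as a per-field confidence score is an assumption standing in for the deduplication criteria.

```python
def merge_profiles(profiles):
    """Merge profiles; for conflicting fields keep the most trusted value.

    profiles: list of dicts mapping field -> (value, confidence)
    Returns a single dict mapping field -> merged value.
    """
    merged = {}
    best_conf = {}
    for profile in profiles:
        for field, (value, conf) in profile.items():
            # Keep the value whose accuracy indicator is highest.
            if field not in merged or conf > best_conf[field]:
                merged[field] = value
                best_conf[field] = conf
    return merged
```

For example, when two profiles disagree on an email address and a phone number, the merged profile takes each field from whichever source profile reports it with greater confidence.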
Server-specified subscription filters for long-lived client requests to fetch data in response to events. In one aspect, the techniques encompass a method performed by a set of one or more computing devices. The method includes the step of receiving a long-lived request to fetch data in response to events sent by a client computing device. The method further includes receiving a server-specified subscription filter for the long-lived request and executing the long-lived request. Executing the long-lived request includes creating a persistent function that uses the server-specified subscription filter to map a source event stream to a response event stream. The response event stream is provided to the client computing device. The server-specified subscription filter facilitates filtering of events fetched for the long-lived request in a way that might not be possible, or would be impractical, if the subscription client were required to specify the filter in the long-lived request.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
A system for database restoration across service regions. The system includes data storage and backup data storage in the first region. The system includes a frontend for the database service configured to receive, from a client, a request to restore a database to the first region from backups stored in another backup data storage in a second region and to receive an authentication token for the request from the client. The system also includes a backup restore manager service for the first region configured to send, to another backup restore manager service implemented in the second region, a credential request for a second region credential authorizing retrieval of the one or more other backups from the second region. The backup restore manager service sends a backup restore request to retrieve the backups from the other backup data storage and loads the backups to restore the database in the first region.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
A simulation environment (e.g., multi-player game) hosted by a provider network may implement componentized entities to reduce the amount of resource usage for a simulation (e.g., by reducing the amount of input/state data transmitted through the use of dynamically changing input structures). A user may add or remove any number of components to an entity that is simulated at the local client device. When inputs are received for one or more components, values for predictive states are locally determined for each component. An input packet is generated and sent to the provider network, which includes the inputs as well as data that is based on the values for the locally predicted states (e.g., a fingerprint or other unique ID). If necessary, a correction packet may be generated at the provider network and sent back to the client.
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
H04L 41/147 - Network analysis or design for predicting network behaviour
H04L 43/106 - Active monitoring, e.g. heartbeat, ping or trace-route using time related information in packets, e.g. by adding timestamps
H04L 45/7453 - Address table lookup; Address filtering using hashing
Methods, systems, and computer-readable media for automated management of machine images are disclosed. A machine image management system determines that a trigger for a machine image build process has occurred. The machine image management system performs the machine image build process responsive to the trigger. The machine image build process generates a machine image, and the machine image comprises a plurality of operating system components associated with an application. The machine image is validated by the machine image management system for compliance with one or more policies. The machine image management system provides the machine image to one or more recipients. One or more compute resources are launched using the machine image, and the application is executed on the compute resource(s) launched using the machine image.
Disclosed are systems, methods, and apparatus of an automated and self-service kiosk that allows customers to select inventory items available from the kiosk and walk or move away with selected inventory item(s) without having to process payment, identify the inventory item(s), or provide any other form of checkout. After a customer has picked one or more items and departed the kiosk, the picked items are determined and the customer charged for the items. For example, one or more of detected weight changes measured at the kiosk and/or images generated at the kiosk may be used to identify items picked by the customer from the kiosk.
An electric pallet jack can be configured to include logic controllers that are connected to a drive system and a steering system of the electric pallet jack. The logic controllers can be in communication with one or more sensors that enable determinations of pallet jack velocity, pallet jack acceleration, and a rate of turning for the electric pallet jack. The logic controllers can be configured to provide maximum velocity, maximum acceleration, maximum deceleration, and maximum rate of turning limitations to maintain control over an object transported by the electric pallet jack. The logic controllers can determine whether the maximum thresholds of the electric pallet jack are exceeded by an operating variable and can modulate the amount of power provided by the drive system to reduce the operating variable below the associated threshold.
B66F 17/00 - Safety devices, e.g. for limiting or indicating lifting force
B60W 10/04 - Conjoint control of vehicle sub-units of different type or different function including control of propulsion units
B60W 10/20 - Conjoint control of vehicle sub-units of different type or different function including control of steering systems
B60W 30/09 - Taking automatic action to avoid collision, e.g. braking and steering
B66F 9/065 - Devices for lifting or lowering bulky or heavy goods for loading or unloading purposes movable, with their loads, on wheels or the like, e.g. fork-lift trucks non-masted
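One plausible form of the modulation rule described in the pallet jack abstract above (names and the proportional scheme are my assumptions): when an operating variable exceeds its maximum threshold, drive power is scaled down in proportion to the overshoot until the variable falls back under the limit.

```python
def modulate_power(requested_power, operating_value, max_threshold):
    """Reduce drive power in proportion to how far a limit is exceeded.

    requested_power: power the operator is requesting
    operating_value: measured variable (e.g., velocity or turn rate)
    max_threshold: the configured maximum for that variable
    """
    if operating_value <= max_threshold:
        return requested_power               # within limits: no change
    # e.g., at 25% over the limit, allow only 1/1.25 = 80% of the power
    return requested_power * (max_threshold / operating_value)
```

A closed-loop controller (e.g., PID) would be the more realistic choice, but the proportional clamp captures the stated behavior: power is modulated downward whenever a maximum threshold is exceeded.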
34. Distributed automated mobile vehicle routing based on characteristic information satisfying a minimum requirement
This disclosure describes a distributed automated mobile vehicle system for autonomously delivering orders of items to various delivery locations and/or autonomously returning items to a return location. In some implementations, each user may own or be assigned their own automated mobile vehicle that is associated with the user and an automated mobile vehicle control system maintained by the user. When the user orders an item, the user owned or controlled automated mobile vehicle navigates to a materials handling facility, retrieves the ordered item and delivers it to the user.
Techniques and systems can process data of a dataset to determine whether a portion of data is included in the data of the dataset. An output generated from processing the data of the dataset can be evaluated, where the output can signify that processing the data of the dataset was unable to locate the portion of data in the data of the dataset. Based on evaluating the output, the data of the dataset can be automatically reprocessed to determine that the portion of data is in the data of the dataset. A result can then be generated from the portion of data determined to be in the data of the dataset.
A speech-processing system may provide access to one or more virtual assistants via a voice-controlled device. A user may leverage a first virtual assistant to translate a natural language command from a first language into a second language, which the device can forward to a second virtual assistant for processing. The device may receive a command from a user and send input data representing the command to a first speech-processing system representing the first virtual assistant. The device may receive a response in the form of a first natural language output from the first speech-processing system along with an indication that the first natural language output should be directed to a second speech-processing system representing the second virtual assistant. For example, the command may be in the first language, and the first natural language output may be in the second language, which is understandable by the second speech-processing system.
Techniques for determining whether audio is machine-outputted or non-machine-outputted are described. A device may receive audio, may process the audio to determine audio data including audio features corresponding to the audio, and may process the audio data to determine audio embedding data. The device may process the audio embedding data to determine whether the audio is machine-outputted or non-machine-outputted. In response to determining that the audio is machine-outputted, the audio may be discarded or not processed further. Alternatively, in response to determining that the audio is non-machine-outputted (e.g., live speech from a user), the audio may be processed further (e.g., using ASR processing).
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
G06N 3/044 - Recurrent networks, e.g. Hopfield networks
G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 15/18 - Speech classification or search using natural language modelling
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
G10L 25/69 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for evaluating synthetic or decoded voice signals
Video output is synchronized to the actions of a user by determining positions of the user's body based on acquired video of the user. The positions of the user's body are compared to the positions of a body shown in the video output to determine corresponding positions in the video output. The video output may then be synchronized so that the subsequent output that is shown corresponds to the subsequent position attempted by the user. The rate of movement of the user may be used to determine output characteristics for the video to cause the body shown in the video output to appear to move at a similar rate to that of the user. If the user moves at a rate less than a threshold or performs an activity erroneously, the video output may be slowed or portions of the video output may be repeated.
Systems and methods are described herein for detecting the inadvertent modification to or deletion of data in a data store and taking automated action to prevent the deletion of data from becoming permanent. The described techniques may also be utilized to detect anomalous changes to a policy affecting storage of data and to take automated action to mitigate the effects of those changes. In one example, events generated as a result of requests to perform operations on data objects in a data storage service may be obtained, where at least some of the events indicate a failure to fulfill respective requests. Data from the events may be input into a model to detect an anomaly indicative of inadvertent modification of data. As a result of detection of the anomaly, a set of operations may be initiated or performed to prevent the inadvertent modification of data from becoming permanent.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 21/52 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure
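A minimal Python sketch of the failure-event signal described in the abstract above. The patent inputs event data into a model; this stand-in uses a simple sliding-window failure-rate threshold instead, and all names, window sizes, and thresholds are illustrative assumptions, not from the patent.

```python
from collections import deque

class DeletionAnomalyDetector:
    """Flag a burst of failed delete/modify requests as a candidate
    inadvertent-modification anomaly. A fixed threshold stands in for
    the patent's trained model; values here are illustrative."""

    def __init__(self, window=100, failure_threshold=0.3, min_events=10):
        self.events = deque(maxlen=window)  # sliding window of recent events
        self.failure_threshold = failure_threshold
        self.min_events = min_events

    def observe(self, event):
        """Record one storage-service event; return True if the window
        now looks anomalous."""
        self.events.append(event)
        failures = sum(1 for e in self.events
                       if e["op"] in ("delete", "modify") and e["failed"])
        return (len(self.events) >= self.min_events
                and failures / len(self.events) > self.failure_threshold)
```

On detection, a real system would initiate the mitigating operations the abstract describes (e.g., suspending permanent deletion) rather than merely returning a flag.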
To accelerate the data processing of a processor, a coprocessor subsystem can be used to offload data processing operations from the processor. The coprocessor subsystem can include a coprocessor and an accelerator. The accelerator can offload operations such as data formatting operations from the coprocessor to improve the performance of the coprocessor. The coprocessor subsystem can be used to accelerate database operations.
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer software for media and design content creation; computer software for image and video rendering, compression, processing and visualization; computer software for administering and managing video render farms; computer software for automating image and video rendering and post-render tasks; computer software for caching animated scene geometry; computer software for particle rendering, shading, texturing, meshing, editing, simulating, manipulation and management; computer software for visualization and processing of digital images, film, video and data relating to computer graphics in the film, broadcast, commercial marketing, video game, website development, computer-aided design, computer-aided manufacturing, engineering, and non-clinical medical visualization industries; computer software for the local and remote management of desktop computers, servers, and workstations, for rendering, processing, execution, and automation of other programs and computer software applications on individual or multiple concurrent computer systems.
Software as a service (SaaS) services featuring software for media and design content creation; software as a service (SaaS) services featuring software for image and video rendering, compression, compositing, processing and visualization; software as a service (SaaS) services featuring software for administering and managing video render farms; software as a service (SaaS) services featuring software for automating image and video rendering and post-render tasks; software as a service (SaaS) services featuring software for caching animated scene geometry; software as a service (SaaS) services featuring software for particle rendering, shading, texturing, meshing, editing, simulating, manipulation and management; software as a service (SaaS) services featuring software for visualization and processing of digital images, photographs, film, video and data relating to computer graphics in the film, broadcast, commercial marketing, video game, website development, computer-aided design, computer-aided manufacturing, engineering, and non-clinical medical visualization industries; software as a service (SaaS) services featuring software for the local and remote management of desktop computers, servers, workstations, tablets, personal digital assistants and smartphones, for rendering, processing, execution, and automation of other programs and computer software applications on individual or multiple concurrent computer systems.
42.
PROGRAMMABLE COMPUTE ENGINE HAVING TRANSPOSE OPERATIONS
A technique to execute transpose and compute operations may include retrieving a set of machine instructions from an instruction buffer of a data processor. The instruction buffer has multiple entries, and each entry stores one machine instruction. A machine instruction from the set of machine instructions is executed to transpose a submatrix of an input tensor and perform computations on column elements of the submatrix. The machine instruction combines the transpose operation with computational operations into a single machine instruction.
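A pure-Python sketch of the effect of the fused instruction described above: transposing a submatrix and computing over what were its column elements in a single step, rather than materializing the transpose and reducing in a separate pass. The sum reduction is an illustrative choice of computation; the patent does not specify the operation.

```python
def transpose_and_column_sum(submatrix):
    """Transpose a 2-D submatrix (list of equal-length rows) and
    reduce over its original columns, returning both results."""
    rows = len(submatrix)
    cols = len(submatrix[0])
    # After the transpose, each original column becomes a row.
    transposed = [[submatrix[r][c] for r in range(rows)] for c in range(cols)]
    # The "compute" half of the fused instruction: here, a sum over
    # the column elements of the original submatrix.
    col_sums = [sum(row) for row in transposed]
    return transposed, col_sums
```

A hardware engine would perform both halves under one machine instruction fetched from the instruction buffer, avoiding a round trip through memory for the intermediate transposed submatrix.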
Systems, methods, and computer-readable media are disclosed for systems and methods for dynamic product summary images. The dynamic product summary images may be displayed on product pages or in association with individual product search results. The dynamic product summary images may comprise a number of different visual icons that provide a customer with quick and easily digestible information about a product. The dynamic product summary image may also be specific to the user, such that different users may be presented with different icons based on details about the product that they are likely to find most important. For example, a dynamic product summary image for a laptop may include an icon indicating a processor type, an icon indicating a graphics card type, an icon indicating an operating system, etc. This provides for a more efficient product browsing process and mitigates or eliminates the need for the customer to search the entire product page for important details about the product.
Systems and methods for implementing record locking for transactions using a probabilistic data structure are described. This probabilistic structure enables adding of data records without growth of the data structure. The data structure includes a hash table for each of multiple hash functions, where entries in the respective hash tables store a transaction time and locking state. To lock a record, each hash function is applied to a record key to provide an index into a respective hash table and a minimum of the values stored in the hash tables is retrieved. If the retrieved value is less than a transaction time for a transaction attempting to lock the record, locking is permitted and the transaction time is recorded to each of the hash tables. To commit the transaction, the probabilistic data structure is atomically updated as part of the commit operation.
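A minimal Python sketch of the locking check described above: one fixed-size table per hash function, each slot storing the last recorded transaction time, with the minimum across tables bounding the true last lock time for a key (hash collisions can only inflate a slot's value). Table count, slot count, and hash construction are illustrative assumptions, and the locking-state bits mentioned in the abstract are omitted for brevity.

```python
import hashlib

class ProbabilisticLockTable:
    """Record locking over a fixed-size probabilistic structure:
    adding records never grows the tables."""

    def __init__(self, num_tables=3, slots=1024):
        self.tables = [[0] * slots for _ in range(num_tables)]
        self.slots = slots

    def _indexes(self, key):
        # One index per table, from independent hashes of the record key.
        for i in range(len(self.tables)):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.slots

    def try_lock(self, key, txn_time):
        """Permit the lock only if the minimum stored value is less
        than the requesting transaction's time, then record that time
        in every table (the commit-time atomic update is not modeled)."""
        current = min(t[idx] for t, idx in zip(self.tables, self._indexes(key)))
        if current >= txn_time:
            return False  # an equal or newer transaction already recorded
        for t, idx in zip(self.tables, self._indexes(key)):
            t[idx] = max(t[idx], txn_time)
        return True
```

Because slots are shared across keys, the structure can spuriously refuse a lock (false conflict) but never grants one that the exact bookkeeping would refuse.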
A data service implements a configurable data compressor/decompressor using a recipe generated for a particular data set type and using compression operators of a common registry (e.g., pantry) that are referenced by the recipe, wherein the recipe indicates at which nodes of a compression graph respective ones of the compression operators of the registry are to be implemented. The configurable data compressor/decompressor provides a customizable framework for compressing data sets of different types (e.g., belonging to different data domains) using a common compressor/decompressor implemented using a common set of compression operators.
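A hedged Python sketch of the recipe-plus-registry idea above. The operator names and the linear chain of recipe nodes are illustrative stand-ins: a real recipe would reference domain-specific operators and arrange them in a general compression graph, not just a pipeline.

```python
import zlib

# Illustrative common registry ("pantry") of compression operators.
REGISTRY = {
    # Byte-wise delta encoding: each output byte is the difference
    # from the previous input byte (modulo 256).
    "delta": lambda data: bytes((b - a) & 0xFF
                                for a, b in zip(b"\x00" + data, data)),
    "deflate": zlib.compress,
}

def compress_with_recipe(data, recipe):
    """Apply registry operators at the nodes the recipe dictates.
    A linear chain of nodes stands in for a full compression graph."""
    for node in recipe["nodes"]:
        data = REGISTRY[node["operator"]](data)
    return data
```

Different data set types would ship different recipes while sharing the one registry, which is the customizable-framework point of the abstract.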
A multitenant solver execution service provides managed infrastructure for defining and solving large-scale optimization problems. In embodiments, the service executes solver jobs on managed compute resources such as virtual machines or containers. The compute resources can be automatically scaled up or down based on client demand and are assigned to solver jobs in a serverless manner. Solver jobs can be initiated based on configured triggers. In embodiments, the service allows users to select from different types of solvers, mix different solvers in a solver job, and translate a model from one solver to another solver. In embodiments, the service provides developer interfaces to, for example, run solver experiments, recommend solver types or solver settings, and suggest model templates. The solver execution service relieves developers from having to manage infrastructure for running optimization solvers and allows developers to easily work with different types of solvers via a unified interface.
Systems and methods are described for implementing a distributed unit in a radio access network that executes code on behalf of mobile devices. A distributed unit may be implemented on an edge server that is in close physical proximity to a radio unit, with few or no intervening devices. The edge server may thus provide services to mobile devices, such as executing code on behalf of a mobile device in an execution environment on the edge server, at significantly lower latency than more distant cloud-based servers. The edge server may preload computing environments with code for which a mobile device is likely to request execution (e.g., because a particular application is executing on the mobile device), and may determine whether to execute code on the edge server or on a cloud provider network.
A system for managing deployment of quantum circuits is described. The system may include a web server configured to receive, from a consumer, a quantum computing request to perform a job using a given quantum application. The web server may generate a response based on execution of the quantum application and at least a portion of the quantum computing request and return the response to the consumer. The system may also include a deployment service configured to store quantum circuit definitions in a data store. The deployment service may receive, from the web server, a deployment request for executing a quantum circuit. The deployment service may generate a container for implementing the quantum circuit. The deployment service may configure a quantum application in the container for executing a job using the quantum circuit. The deployment service may provide the web server access to results of the execution of the job.
G06N 10/80 - Quantum programming, e.g. interfaces, languages or software-development kits for creating or handling programs capable of running on quantum computers; Platforms for simulating or accessing quantum computers, e.g. cloud-based quantum computing
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
G06N 10/20 - Models of quantum computing, e.g. quantum circuits or universal quantum computers
A system for providing code suggestions according to licensing criteria is described. The system comprises computing devices that implement a code suggestion service. The code suggestion service receives a request that specifies licensing criteria via an interface of the code suggestion service. The code suggestion service determines respective licenses for a plurality of source code files according to a source code attribution database generated from parsing the source code files. The code suggestion service generates a set of candidate code suggestions based, at least in part, on the plurality of source code files. The code suggestion service determines code suggestions from the set of candidate code suggestions that satisfy the licensing criteria based on the respective licenses. The code suggestion service provides the code suggestions determined to satisfy the licensing criteria.
An Application Programming Interface (API) allows a launching of a virtual machine where a queue count can be configured by a user. More specifically, each virtual machine can be assigned a pool of queues. Additionally, each virtual machine can have multiple virtual networking interfaces and a user can assign a number of queues from the pool to each virtual networking interface. Thus, a new metadata field is described that can be used with requests to launch a virtual machine. The metadata field includes one or more parameters that associate a number of queues with each virtual networking interface. A queue count can be dynamically configured by a user to ensure that the queues are efficiently used given that the user understands the intended application of the virtual machine being launched.
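A hypothetical Python sketch of the launch-request metadata field described above. The field and parameter names (`NetworkInterfaceQueueCounts`, `DeviceIndex`, `QueueCount`) are invented for illustration and are not the API's actual names.

```python
def build_launch_request(instance_type, queue_pool_size, queue_counts):
    """Assemble a VM launch request whose metadata distributes the
    VM's queue pool across its virtual network interfaces: one entry
    per interface, validated against the pool size."""
    if sum(queue_counts) > queue_pool_size:
        raise ValueError("requested queues exceed the VM's queue pool")
    return {
        "InstanceType": instance_type,
        "Metadata": {
            # Hypothetical field: one (interface, queue count) pair
            # per virtual networking interface on the VM.
            "NetworkInterfaceQueueCounts": [
                {"DeviceIndex": i, "QueueCount": n}
                for i, n in enumerate(queue_counts)
            ]
        },
    }
```

A user who knows the VM's workload (say, one hot interface and two light ones) could pass `[8, 4, 4]` from a 16-queue pool rather than accepting an even split.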
Systems and methods for selectively ignoring an occurrence of a wakeword within audio input data are provided herein. In some embodiments, a wakeword may be detected to have been uttered by an individual within a modified time window, which may account for hardware delays and echoing offsets. The detected wakeword that occurs during this modified time window may, in some embodiments, correspond to a word included within audio that is outputted by a voice activated electronic device. This may cause the voice activated electronic device to activate itself, stopping the audio from being outputted. By identifying when these occurrences of the wakeword within outputted audio are going to happen, the voice activated electronic device may selectively determine when to ignore the wakeword, and furthermore, when not to ignore the wakeword.
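A minimal sketch of the modified-time-window test described above, assuming the device knows when its own output audio will contain the wakeword. The delay and offset values are illustrative placeholders.

```python
def should_ignore_wakeword(detect_time, expected_start, expected_end,
                           hardware_delay=0.15, echo_offset=0.05):
    """Return True if a wakeword detection at detect_time (seconds)
    falls inside the window when the device's own output is expected
    to contain the wakeword, widened by hardware delay and echo
    offset to form the modified time window."""
    window_start = expected_start - echo_offset
    window_end = expected_end + hardware_delay + echo_offset
    return window_start <= detect_time <= window_end
```

Detections inside the window are treated as self-triggering and ignored; detections outside it are handled as genuine user utterances.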
A system is provided for modifying how an output is presented via a multi-device synchronous configuration based on detecting a speech characteristic in the user input. For example, if the user whispers a request, then the system may temporarily modify how the responsive output is presented to the user via multiple devices. In one example, the system may lower the volume on all devices presenting the output. In another example, the system may present the output via a single device rather than multiple devices. The system may also determine to operate in an alternate output mode based on certain non-audio data.
Systems and processes are described for establishing and using a secure channel. A shared secret may be used for authentication of session initiation messages as well as for generation of a private/public key pair for the session. A number of ways of agreeing on the shared secret are described and include pre-sharing the keys, reliance on a key management system, or via a token mechanism that uses a third entity such as a hub to manage authentication, for example. In some instances, the third party may also perform endpoint selection (e.g., load balancing) by providing a particular endpoint along with the token.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
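A Python sketch of the two uses of the shared secret in the abstract above: authenticating a session-initiation message and deriving per-session key material. The patent derives a private/public key pair; a symmetric, HKDF-style derivation stands in here for simplicity, and the labels and iteration count are illustrative.

```python
import hashlib
import hmac

def derive_session_keys(shared_secret: bytes, session_id: bytes):
    """Return (auth_tag, key_material) for a session.

    auth_tag authenticates the session-initiation message; both ends,
    holding the same shared secret, can recompute and compare it.
    key_material is deterministic per (secret, session), so both ends
    derive the same session keys without transmitting them."""
    auth_tag = hmac.new(shared_secret, b"init:" + session_id,
                        hashlib.sha256).digest()
    key_material = hashlib.pbkdf2_hmac("sha256", shared_secret,
                                       session_id, 10_000)
    return auth_tag, key_material
```

How the two endpoints come to share the secret is exactly the part the patent varies: pre-shared keys, a key management system, or a hub-issued token.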
Techniques are described for providing users with access to computer networks, such as to enable users to interact with a remote configurable network service in order to create and configure computer networks that are provided by the configurable network service for use by the users. Computer networks provided by the configurable network service may be configured to be private computer networks that are accessible only by the users who create them, and may each be created and configured by a client of the configurable network service to be an extension to an existing computer network of the client, such as a private computer network extension to an existing private computer network of the client. If so, secure private access between an existing computer network and a new computer network extension that is being provided may be enabled using one or more VPN connections or other private access mechanisms.
This disclosure describes an aerial vehicle, such as an unmanned aerial vehicle (“UAV”), which includes a plurality of propulsion mechanisms that enable the aerial vehicle to move independently in any of six degrees of freedom (surge, sway, heave, roll, pitch, and yaw).
This disclosure describes a verification service within a service provider network for automatically verifying and validating documents. A user may upload a document image to the verification service. A pre-processing service may pre-process the document image. The pre-processed document image may then be forwarded to a first machine learning (ML) model for similarity evaluation. Once the first ML model has completed its evaluation of the document image, the first ML model may forward the document image to a second ML model for symbol recognition, which may then forward the document image to an optical character recognition (OCR) service for OCR validation. If the document image is validated, e.g., is an image of a purported document type, as will be discussed further herein, a publishing service may pre-populate, e.g., publish, information from the document image to an account template.
Techniques for enabling access in a multi-assistant speech processing system are described, where a first assistant system may use components of a second assistant system as data processing components. Runtime operational data and user input data related to the first assistant may be kept separate from the processing data and input data related to the second assistant by propagating a first account ID, for user inputs directed to the first assistant, through the processing pipeline, and using a second account ID for user inputs directed to the second assistant. A mapping between the first account ID and the second account ID may be accessible to a select number of system components. Handoffs between the two assistants are handled in a manner where data related to one assistant is not accessible by the other assistant.
An Application Programming Interface (API) allows a launching of a virtual machine where a queue count can be configured by a user. More specifically, each virtual machine can be assigned a pool of queues. Additionally, each virtual machine can have multiple virtual networking interfaces and a user can assign a number of queues from the pool to each virtual networking interface. Thus, a new metadata field is described that can be used with requests to launch a virtual machine. The metadata field includes one or more parameters that associate a number of queues with each virtual networking interface. A queue count can be dynamically configured by a user to ensure that the queues are efficiently used given that the user understands the intended application of the virtual machine being launched.
Systems and methods are provided for managing provision of—and access to—data sets among instances of function code executing in an on-demand manner. An API is provided by which functions can store data sets to be shared with other functions, and by which functions can access data sets shared by other functions.
Systems, methods, and devices are disclosed for front-lit displays having uniform brightness. In one embodiment, an example display may include an electrophoretic display, a light guide configured to direct light from one or more light emitting diodes, and a cover lens assembly. The cover lens assembly may include a cover glass layer, an anti-glare film coupled to the cover glass layer, and a hot melt adhesive disposed about lateral edge surfaces of the cover glass layer and the anti-glare film, such that the hot melt adhesive forms a perimeter of the cover lens assembly.
G02F 1/167 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on translational movement of particles in a fluid under the influence of an applied field characterised by the electro-optical or magneto-optical effect by electrophoresis
Techniques for customer-initiated virtual machine resource allocation sharing are described. A hardware virtualization service of a cloud provider network receives a request to launch a first virtual machine, wherein the first virtual machine is of a first virtual machine type, the first virtual machine type having a resource amount allocated to virtual machines of the first virtual machine type. The hardware virtualization service causes a launch of the first virtual machine on a host computer system of the cloud provider network. The host computer system shares an allocation of the resource amount from a corresponding resource of the host computer system between the first virtual machine and a second virtual machine, wherein the second virtual machine is of the first virtual machine type.
Connectivity is enabled between a first and second isolated network using a virtual traffic hub that includes a decision master node responsible for determining a routing action for a packet received at the hub. At the hub, a determination is made that a particular domain name system (DNS) message being directed to a first resource in the first isolated network is to include an indication of a second resource in the second isolated network. The second resource is assigned a network address within a private address range of the second isolated network, which overlaps with a private address range being used in the first isolated network. The hub causes a transformed version of the network address to be included in the DNS message delivered to the first resource.
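A Python sketch of the address transformation the hub performs in the DNS message above: when the answered address falls inside the second network's private range (which overlaps the first network's), it is rewritten into a non-overlapping range the hub can route. The mapped-range approach and all CIDR values are illustrative assumptions.

```python
import ipaddress

def transform_dns_answer(answer_ip, second_net_private, hub_mapped_net):
    """Rewrite a DNS answer address from the second isolated network's
    private range into a hub-routable mapped range, preserving the
    host offset; addresses outside the range pass through unchanged."""
    ip = ipaddress.ip_address(answer_ip)
    src = ipaddress.ip_network(second_net_private)
    dst = ipaddress.ip_network(hub_mapped_net)
    if ip not in src:
        return answer_ip  # no overlap: deliver the original address
    offset = int(ip) - int(src.network_address)
    return str(ipaddress.ip_address(int(dst.network_address) + offset))
```

The first network's resource then connects to the transformed address, and the hub reverses the mapping on the data path.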
A multitenant solver execution service provides managed infrastructure for defining and solving large-scale optimization problems. In embodiments, the service executes solver jobs on managed compute resources such as virtual machines or containers. The compute resources can be automatically scaled up or down based on client demand and are assigned to solver jobs in a serverless manner. Solver jobs can be initiated based on configured triggers. In embodiments, the service allows users to select from different types of solvers, mix different solvers in a solver job, and translate a model from one solver to another solver. In embodiments, the service provides developer interfaces to, for example, run solver experiments, recommend solver types or solver settings, and suggest model templates. The solver execution service relieves developers from having to manage infrastructure for running optimization solvers and allows developers to easily work with different types of solvers via a unified interface.
Disclosed are various embodiments for seamless insertion of modified media content. In one embodiment, a modified portion of video content is received. The modified portion has a start cue point and an end cue point that are set relative to a modification to the video content to indicate respectively when the modification approximately begins and ends compared to the video content. A video coding associated with the video content is identified. The start cue point and/or the end cue point are dynamically adjusted to align the modified portion with the video content based at least in part on the video coding.
Disclosed are various embodiments for a distributed and synchronized core in a radio-based network. In one embodiment, a first radio access network (RAN)-enabled edge server at a first edge location is configured to perform a set of distributed unit (DU) functions for a radio-based network. The first RAN-enabled edge server is also configured to perform a set of core network functions and a set of centralized unit (CU) functions for the radio-based network. State associated with the set of core network functions and the set of CU functions is synchronized between the first RAN-enabled edge server and another server.
Systems and methods are provided for translation of text in an image, and presentation of a version of the image in which the translated text is displayed in a manner consistent with the original image. Text segments are automatically translated from their original source language to a target language. In order to provide presentation of the translated text in a manner that closely matches the source text, various display attributes of the source text are detected (e.g., font size, font color, font style, etc.).
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
A system and method for continual learning in a provider network. The method implements or interfaces with a system that provides a semi-automated or fully automated continual machine learning architecture supporting user-configurable model retraining or hyperparameter tuning, enabled by a provider network. This adapts a model over time to new information in the training data while also providing a user-friendly, flexible, and customizable continual learning process.
Systems and techniques are disclosed for predicting the structural status of an object. An object model, such as a machine learning model, can be trained on sample sensor data indicating vibrations, movements, and/or other reactions of objects with known desired and undesired structural statuses to a stimulus agent, such as a puff of air. A scanning device can output a corresponding stimulus agent towards an object, capture sensor data indicating the reaction of the object to the stimulus agent, and provide the sensor data to the trained object model. Based on the sensor data indicating how the object reacted to the stimulus agent, the object model can predict whether the object has a desired structural status or an undesired structural status.
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 20/17 - Terrestrial scenes taken from planes or by drones
70.
AUTOMATED POLICY REFINER FOR CLOUD-BASED IDENTITY AND ACCESS MANAGEMENT SYSTEMS
Techniques are described for providing a policy refiner application used to analyze and recommend modifications to identity and access management policies created by users of a cloud provider network (e.g., to move the policies toward least-privilege permissions). A policy refiner application receives as input a policy to analyze, and a log of events related to activity associated with one or more accounts of a cloud provider network. The policy refiner application can identify, from the log of events, actions that were permitted based on particular statements contained in the policy. Based on field values contained in the corresponding events, the policy refiner application generates an abstraction of the field values, where the abstraction of the field values may represent a more restrictive version of the field from a policy perspective. These abstractions can be presented to users as recommendations for modifying their policy to reduce the privileges granted by the policy.
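A simplified Python sketch of the field-value abstraction step described above: observed resource identifiers from the event log are collapsed into a single pattern that covers exactly their shared prefix, a more restrictive candidate than a bare wildcard while still permitting the observed actions. The prefix-plus-wildcard strategy is an illustrative assumption about how the abstraction is formed.

```python
import os.path

def abstract_field_values(values):
    """Collapse observed field values into one recommendation:
    a single exact value if only one was observed, otherwise the
    longest common prefix followed by a wildcard."""
    if len(values) == 1:
        return values[0]  # one observed value needs no wildcard
    prefix = os.path.commonprefix(values)
    return prefix + "*"
```

The resulting pattern can then be surfaced to the user as a suggested replacement for an over-broad statement in the policy.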
Techniques for customer-initiated virtual machine resource allocation sharing are described. A hardware virtualization service of a cloud provider network receives a request to launch a first virtual machine, wherein the first virtual machine is of a first virtual machine type, the first virtual machine type having a resource amount allocated to virtual machines of the first virtual machine type. The hardware virtualization service causes a launch of the first virtual machine on a host computer system of the cloud provider network. The host computer system shares an allocation of the resource amount from a corresponding resource of the host computer system between the first virtual machine and a second virtual machine, wherein the second virtual machine is of the first virtual machine type.
Systems and methods are described for implementing a distributed unit in a radio access network that executes code on behalf of mobile devices. A distributed unit may be implemented on an edge server that is in close physical proximity to a radio unit, with few or no intervening devices. The edge server may thus provide services to mobile devices, such as executing code on behalf of a mobile device in an execution environment on the edge server, at significantly lower latency than more distant cloud-based servers. The edge server may preload computing environments with code for which a mobile device is likely to request execution (e.g., because a particular application is executing on the mobile device), and may determine whether to execute code on the edge server or on a cloud provider network.
Systems and methods are provided for translation of text in an image, and presentation of a version of the image in which the translated text is displayed in a manner consistent with the original image. Text segments are automatically translated from their original source language to a target language. In order to present the translated text in a manner that closely matches the source text, various display attributes of the source text are detected (e.g., font size, font color, font style, etc.).
A system and method for continual learning in a provider network are described. The method implements or interfaces with a system providing a semi-automated or fully automated architecture for continual machine learning, the architecture supporting user-configurable model retraining or hyperparameter tuning and being enabled by a provider network. This adapts a model over time to new information in the training data while also providing a user-friendly, flexible, and customizable continual learning process.
Systems and methods for implementing record locking for transactions using a probabilistic data structure are described. This probabilistic structure enables adding of data records without growth of the data structure. The data structure includes a hash table for each of multiple hash functions, where entries in the respective hash tables store a transaction time and locking state. To lock a record, each hash function is applied to a record key to provide an index into a respective hash table and a minimum of the values stored in the hash tables is retrieved. If the retrieved value is less than a transaction time for a transaction attempting to lock the record, locking is permitted and the transaction time is recorded to each of the hash tables. To commit the transaction, the probabilistic data structure is atomically updated as part of the commit operation.
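The lock-acquisition check described above can be sketched as follows. This is an illustrative sketch, not the patented implementation; the class name, hash count, and table size are hypothetical parameters:

```python
import hashlib

class ProbabilisticLockTable:
    def __init__(self, num_hashes=3, table_size=1024):
        # One fixed-size hash table per hash function; each entry stores
        # the last transaction time recorded for any key mapping to that
        # slot. The structure never grows as records are added.
        self.num_hashes = num_hashes
        self.table_size = table_size
        self.tables = [[0] * table_size for _ in range(num_hashes)]

    def _indexes(self, key):
        # Derive one slot index per hash function from the record key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.table_size

    def try_lock(self, key, txn_time):
        # The minimum value across the tables bounds the last conflicting
        # transaction time for this key (false conflicts are possible due
        # to collisions; missed conflicts are not).
        slots = list(self._indexes(key))
        if min(self.tables[i][s] for i, s in enumerate(slots)) < txn_time:
            for i, s in enumerate(slots):
                self.tables[i][s] = max(self.tables[i][s], txn_time)
            return True
        return False
```

Because collisions can only raise the retrieved minimum, the sketch may spuriously deny a lock but never grants one that should conflict, which is the safe direction for transaction isolation.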
A system for providing code suggestions according to licensing criteria is described. The system comprises computing devices that implement a code suggestion service. The code suggestion service receives a request that specifies licensing criteria via an interface of the code suggestion service. The code suggestion service determines respective licenses applicable to a plurality of source code files according to a source code attribution database generated from parsing the plurality of source code files. The code suggestion service generates a set of candidate code suggestions based, at least in part, on the plurality of source code files. The code suggestion service determines code suggestions from the set of candidate code suggestions that satisfy the licensing criteria based on the respective licenses. The code suggestion service provides the code suggestions determined from the set of candidate code suggestions that satisfy the licensing criteria.
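The final filtering step described above can be sketched minimally; the data shapes (suggestion-to-file and file-to-license mappings) and names are illustrative assumptions, not the disclosed service interface:

```python
def filter_by_license(candidates, licenses, allowed):
    # candidates: suggestion -> source file it was derived from
    # licenses:   source file -> license identifier (from attribution DB)
    # allowed:    set of license identifiers permitted by the request
    return [s for s, src in candidates.items() if licenses.get(src) in allowed]
```

A suggestion whose originating file has no known or permitted license is simply excluded from the results.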
A distributed database identifies classifications of risk associated with stages of a query plan. The distributed database generates an execution plan in which incompatible risk classifications are assigned to separate stages of an execution plan that is derived from the query plan. The stages are assigned to computing nodes for execution based, at least in part, on the risk classifications. A result for the query is generated based on execution of the stages on the assigned computing nodes.
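The stage-to-node assignment described above might be sketched as follows, assuming (purely for illustration) that each risk classification maps to its own node pool so that incompatible classifications never share a node:

```python
def assign_stages(stages, nodes_by_risk):
    # stages:        list of (stage_name, risk_classification) pairs
    # nodes_by_risk: risk classification -> list of compute node names
    assignment = {}
    for stage, risk in stages:
        pool = nodes_by_risk[risk]
        # Round-robin within the pool; stages with incompatible risk
        # classifications land on disjoint pools by construction.
        assignment[stage] = pool[len(assignment) % len(pool)]
    return assignment
```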
Disclosed are various embodiments for seamless insertion of modified media content. In one embodiment, a modified portion of video content is received. The modified portion has a start cue point and an end cue point that are set relative to a modification to the video content to indicate respectively when the modification approximately begins and ends compared to the video content. A video coding associated with the video content is identified. The start cue point and/or the end cue point are dynamically adjusted to align the modified portion with the video content based at least in part on the video coding.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/8543 - Content authoring using a description language, e.g. MHEG [Multimedia and Hypermedia information coding Expert Group] or XML [eXtensible Markup Language]
81.
MULTI-DOMAIN CONFIGURABLE DATA COMPRESSOR/DE-COMPRESSOR
A data service implements a configurable data compressor/decompressor using a recipe generated for a particular data set type and using compression operators of a common registry (e.g., pantry) that are referenced by the recipe, wherein the recipe indicates at which nodes of a compression graph respective ones of the compression operators of the registry are to be implemented. The configurable data compressor/decompressor provides a customizable framework for compressing data sets of different types (e.g., belonging to different data domains) using a common compressor/decompressor implemented using a common set of compression operators.
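A minimal sketch of recipe-driven dispatch over a registry of operators follows; the operators shown (a delta transform and a zero-filter) are hypothetical examples, and the recipe is reduced to a linear chain rather than a full compression graph:

```python
def run_recipe(recipe, registry, data):
    # Apply, in order, each registry operator that the recipe names.
    for node in recipe:
        data = registry[node](data)
    return data

# A toy common registry ("pantry") of compression operators.
registry = {
    "delta": lambda xs: [xs[0]] + [b - a for a, b in zip(xs, xs[1:])],
    "drop_zeros": lambda xs: [x for x in xs if x != 0],
}
```

Different data set types would reference different recipes, while all recipes draw from the same shared operator registry.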
A multitenant solver execution service provides managed infrastructure for defining and solving large-scale optimization problems. In embodiments, the service executes solver jobs on managed compute resources such as virtual machines or containers. The compute resources can be automatically scaled up or down based on client demand and are assigned to solver jobs in a serverless manner. Solver jobs can be initiated based on configured triggers. In embodiments, the service allows users to select from different types of solvers, mix different solvers in a solver job, and translate a model from one solver to another solver. In embodiments, the service provides developer interfaces to, for example, run solver experiments, recommend solver types or solver settings, and suggest model templates. The solver execution service relieves developers from having to manage infrastructure for running optimization solvers and allows developers to easily work with different types of solvers via a unified interface.
Disclosed are various embodiments for a distributed and synchronized core in a radio-based network. In one embodiment, a first radio access network (RAN)-enabled edge server at a first edge location is configured to perform a set of distributed unit (DU) functions for a radio-based network. The first RAN-enabled edge server is also configured to perform a set of core network functions and a set of centralized unit (CU) functions for the radio-based network. State associated with the set of core network functions and the set of CU functions is synchronized between the first RAN-enabled edge server and another server.
Systems and methods are disclosed for automated lateral transfer and elevation of sortation shuttles. An example system may include a track having a first portion arranged in a first direction, a shuttle configured to move along the track, and a shuttle carriage system configured to move in a second direction transverse to the first direction, where the shuttle is configured to move from the track to the shuttle carriage system. The shuttle carriage system may include a first frame configured to support the shuttle, a first electromagnet configured to propel the first frame, and a second electromagnet coupled to the first frame, the second electromagnet configured to propel the shuttle off the first frame.
The updating of a definition layer or schema for a large distributed database can be accomplished using a plurality of data store tiers. A distributed database can be made up of many individual data stores, and these data stores can be allocated across a set of tiers based on business logic or other allocation criteria. The update can be applied sequentially to the individual tiers, such that only data stores for a single tier are being updated at any given time. This can help to minimize downtime for the database as a whole, and can help to minimize problems that may result from an unsuccessful update. Such an approach can also allow for simplified error detection and rollback, as well as providing control over a rate at which the update is applied to the various data stores of the distributed database.
G06F 16/185 - Hierarchical storage management [HSM] systems, e.g. file migration or policies thereof
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
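The sequential tier-by-tier update described above can be sketched as follows; the halt-on-failure behavior (stopping before later tiers are touched, to simplify rollback) is an illustrative assumption:

```python
def apply_update_by_tier(tiers, apply_fn):
    # tiers:    list of tiers, each a list of data store identifiers
    # apply_fn: applies the schema update to one store, returns success
    updated = []
    for tier in tiers:
        for store in tier:
            if not apply_fn(store):
                # Halt before touching later tiers; only this tier's
                # already-updated stores need inspection or rollback.
                return updated
            updated.append(store)
    return updated
```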
86.
Automatic index management for a non-relational database
Index management for non-relational database systems may be automatically performed. Performance of queries to a non-relational database may be evaluated to determine whether to create or remove an additional index. An additional index may be automatically created to store a subset of data projected from the non-relational database to utilize when performing a query to the non-relational database instead of accessing data in the non-relational database.
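The create-or-not decision described above might reduce to a heuristic like the following; the statistics shape and both thresholds are hypothetical, not values from the disclosure:

```python
def should_create_index(query_stats, scan_threshold=0.5, min_queries=100):
    # Create an index when a large share of recent queries had to scan
    # data without index support; require a minimum sample size first.
    total = query_stats["total_queries"]
    if total < min_queries:
        return False
    return query_stats["unindexed_scans"] / total >= scan_threshold
```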
Intelligent query routing may be performed across shards of a scalable database table. A router of a database system may receive an access request directed to one or more database tables. The router may evaluate the access request with respect to metadata obtained for the database tables to determine an assignment distribution of computing resources of the database system to data that can satisfy the access request. The router can select planning locations to perform the access request based on the assignment distribution of the computing resources. The router can cause the access request to be performed according to planning at the selected planning locations.
Working set ratio estimations of data items in a sliding time window are determined to dynamically allocate storage for the data items. A working set ratio may be determined by accessing a fixed-size array that stores respective timestamps of last accesses of data items to determine which data items are useful to determine an estimate of a working set for the application within a range of time. The working set ratio is then determined from the estimated working set and an amount of computing resources allocated to the application. The amount of the computing resources allocated to the application may then be automatically scaled according to the determined working set ratio.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
Devices and techniques are generally described for determining named entity recognition tags. In various examples, first input data representing a natural language input may be determined. In some examples, a first machine learned model may determine first data comprising a first encoded representation of the first input data. In various examples, second data representing a grouping of text of the first input data may be determined based at least in part on the first data. In some examples, first entity data may be determined by searching a memory layer using the second data. In at least some examples, the first entity data and the first data may be combined to generate third data. In various examples, output data comprising a predicted named entity recognition tag may be generated for the grouping of text based at least in part on the third data.
Techniques for performing multi-stage entity resolution (ER) processing are described. A system may determine a portion of a user input corresponding to an entity name, and may request an entity provider component to perform a search to determine one or more entities corresponding to the entity name. The preliminary search results may be sent to a skill selection component for processing, while the entity provider component performs a complete search to determine entities corresponding to the entity name. A selected skill component may request the complete search results to perform its processing, including determining an output responsive to the user input.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
Network services are deployed in a networked environment in association with a user account. Dependencies of a network service, such as other network services, may be identified based on an online analysis and an offline analysis of the network service. Further, anomalies associated with the dependencies may be identified in some situations. A call graph may include nodes corresponding to the network service and its dependencies, and may include an identifier corresponding to the part of the call path that has the anomaly. An inspection of the call graph allows software developers to readily recognize that their service depends on potentially flawed software that may cause a service failure or outage.
An interruption-handling setting for a category of interactions of an application is determined via a programmatic interface. A set of user-generated input is obtained while presentation to a user of a set of output of the category is in progress. A response to the set of user-generated input is prepared based at least in part on the interruption-handling setting.
Technologies directed to a control circuit using dynamic signal compression are described. A control circuit includes a front-end module (FEM) coupled to an RF cable, the FEM having a low-noise amplifier (LNA). The control circuit further includes an automatic gain control (AGC) circuitry coupled to the FEM. The AGC circuitry receives a first radio frequency (RF) signal having a first portion of one or more symbols and a second portion of one or more symbols. The AGC circuitry further amplifies the first portion to generate a first portion of an output signal. The AGC circuitry further compresses the second portion to obtain a second portion of the output signal. The AGC circuitry further sends a control signal to cause the FEM to change a gain state value of the LNA from a first value to a second value based on a comparison between a voltage of the output signal and a reference voltage.
09 - Scientific and electric apparatus and instruments
35 - Advertising and business services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer search engine software; computer software for answering retail product and shopping inquiries in a conversational interface; computer software for discovering and recommending products of others in a retail store; computer software for learning about, comparing, and selecting products of others in a retail store; computer software in the nature of an AI (artificial intelligence) retail store product expert; computer software in the nature of an AI (artificial intelligence) retail store product assistant; computer software for disseminating knowledge and recommendations to assist retail shoppers; computer software for natural language processing, generation, understanding, and analysis to respond to consumer inquiries in the field of retail shopping; computer chatbot software for simulating conversations with retail shoppers. Shopping facilitation services, namely, providing an online shopping search engine for obtaining retail product and purchasing information; shopping facilitation services, namely, providing an online comparison-shopping search engine for obtaining purchasing information; shopping facilitation services, namely, providing an online shopping search engine for discovery and inspiration while shopping. 
Provision of Internet search engines; Software as a service (SAAS) services featuring computer search engine software; Software as a service (SAAS) services featuring software for answering retail product and shopping inquiries in a conversational interface; Software as a service (SAAS) services featuring software for discovering and recommending products of others in a retail store; Software as a service (SAAS) services featuring software for learning about, comparing, and selecting products of others in a retail store; Software as a service (SAAS) services featuring software in the nature of an AI (artificial intelligence) retail store product expert; Software as a service (SAAS) services featuring software in the nature of an AI (artificial intelligence) retail store product assistant; Software as a service (SAAS) services featuring software for disseminating knowledge and recommendations to assist retail shoppers; Software as a service (SAAS) services featuring software for natural language processing, generation, understanding, and analysis to respond to consumer inquiries in the field of retail shopping; Software as a service (SAAS) services featuring computer chatbot software for simulating conversations with retail shoppers; Providing temporary use of online non-downloadable search engine software; providing temporary use of online non-downloadable computer software for answering retail product and shopping inquiries in a conversational interface; providing temporary use of online non-downloadable computer software for discovering and recommending products of others in a retail store; providing temporary use of online non-downloadable computer software for learning about, comparing, and selecting products of others in a retail store; providing temporary use of online non-downloadable computer software in the nature of an AI (artificial intelligence) retail store product expert; providing temporary use of online non-downloadable computer software in 
the nature of an AI (artificial intelligence) retail store product assistant; providing temporary use of online non-downloadable computer software for disseminating knowledge and recommendations to assist retail shoppers; providing temporary use of online non-downloadable computer software for natural language processing, generation, understanding, and analysis to respond to consumer inquiries in the field of retail shopping; providing temporary use of online non-downloadable computer chatbot software for simulating conversations with retail shoppers.
Techniques and systems can receive a query identifying a name linked to performance data of a computer system and a location of the performance data. The name linked to the performance data of the computer system and the location of the performance data can be communicated to a first computer-implemented system. The first computer-implemented system can include identifying data derived from the name and the location of the performance data. Identifying data derived from the name and the location of the performance data can be received from the first computer-implemented system. The identifying data derived from the name and the location of the performance data can be used to retrieve the performance data. The performance data can be hosted by a second computer-implemented system that is different than the first computer-implemented system.
Embodiments of a contextualized visual search (CVS) system are disclosed capable of isolating target images of items that contain instances of a previously-unseen query image from a large database of target images. In embodiments, the system is used to implement an interactive query interface of an e-commerce portal, which allows the user to specify the query image (e.g. a logo) to be searched. The system converts the query image into a feature vector using a first machine learning model, and compares the feature vector to feature vectors of target images using a second machine learning model to find matching target images that contain an instance of the query image. The system then returns a query result indicating a list of items associated with matched target images. In embodiments, the query results may be ranked based on a set of personalized factors associated with the user.
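As a drastically simplified stand-in for the second matching model described above, the comparison between the query feature vector and target feature vectors could be sketched as a cosine-similarity threshold; the threshold value and all names are illustrative assumptions:

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two dense feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_targets(query_vec, target_vecs, threshold=0.9):
    # Return names of target images whose feature vectors are close
    # enough to the query image's feature vector.
    return [name for name, v in target_vecs.items()
            if cosine(query_vec, v) >= threshold]
```

In the disclosed system the matching is performed by a learned model rather than a fixed similarity metric, and results are further ranked by personalized factors.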
Techniques are provided herein for selecting and transmitting snippets from a messaging application. A “snippet” refers to an audio segment of a song that is less than the whole of the song. A user may request to view various audio segments (e.g., by category, by search, etc.) corresponding to portions of respective songs via a user interface of the messaging application. In some embodiments, an audio segment can be selected and metadata associated with that particular audio segment may be transmitted to another computing device where the audio segment can be played (e.g., streamed). In this manner, these snippets can be employed by the user to enhance their chat or texting conversation.
Techniques for planning resources using block and route information are described. In an example, a computing system determines a demand for item transportation expected during a planning horizon. The computing system determines information about a pre-planned transportation resource available during the planning horizon and costs associated with the pre-planned transportation resource. The computing system uses an optimization model to determine a block having a time length, a tour to transport, during the block, a first portion of the demand using the pre-planned transportation resource, and a second portion of the demand to be transported using an on-demand transportation resource. The computing system indicates, to a first computing device of the pre-planned transportation resource, an assignment of the block to the pre-planned transportation resource.
Systems, methods, and computer-readable media are disclosed for estimating impressions for digital out-of-home (DOOH) advertising spaces (e.g., digital billboards and screens). A DOOH advertising system may determine the location of relevant DOOH advertising spaces and the location of certain consumers with known attributes and a known location. Based on this information, the DOOH advertising system may estimate a number of impressions for a given DOOH advertising space and a given consumer segment associated with attributes of consumers within a certain distance from the DOOH advertising space. Using this information, the DOOH advertising spaces having the highest estimated impressions for a given consumer segment may be identified.
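A toy sketch of the distance-based impression estimate follows; it assumes planar km-scaled coordinates and a simple Euclidean radius, which are illustrative simplifications of whatever geospatial model the system actually uses:

```python
import math

def estimate_impressions(space_loc, consumers, radius_km):
    # Count consumers (in a given segment) located within radius_km of
    # the DOOH advertising space as a proxy for impressions.
    sx, sy = space_loc
    return sum(1 for (x, y) in consumers
               if math.hypot(x - sx, y - sy) <= radius_km)
```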
Described herein is a system for predictive feature analysis to precompute and store data required to respond to a user input in advance of receiving the user input. To determine when to precompute the data, the system uses a prediction model to predict user interactions and when to expect the user input. The system predicts that a user input is about to be received, and starts to process certain data to determine feature data and stores the data in a cache. When the user input is received, the system retrieves the data from the cache for further processing to respond to the user input.
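The precompute-then-serve flow described above can be sketched as a small cache keyed by the predicted input; the class and method names are illustrative assumptions, and the prediction model itself is out of scope here:

```python
class PredictiveCache:
    def __init__(self, compute_fn):
        self.compute_fn = compute_fn
        self.cache = {}

    def on_predicted_input(self, key):
        # The prediction model signaled that this input is imminent:
        # compute the feature data ahead of time and cache it.
        self.cache[key] = self.compute_fn(key)

    def on_user_input(self, key):
        # Serve precomputed data when available; fall back to computing
        # on demand when the prediction missed.
        if key in self.cache:
            return self.cache.pop(key)
        return self.compute_fn(key)
```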