In some aspects, techniques may include monitoring a primary load of a datacenter and a reserve load of the datacenter. The primary load and reserve load can be monitored by a computing device. The primary load of the datacenter can be configured to be powered by one or more primary generator blocks having a primary capacity, and the reserve load of the datacenter can be configured to be powered by one or more reserve generator blocks having a reserve capacity. Also, the techniques may include detecting that the primary load of the datacenter exceeds the primary capacity. In addition, the techniques may include connecting, by the computing device using a switch, the reserve generator blocks to at least one of the primary generator blocks and the primary load.
Disclosed techniques relate to managing power within a power distribution system. Power consumption corresponding to devices (e.g., servers) that receive power from an upstream device (e.g., a bus bar) may be monitored (e.g., by a service) to determine when power consumption corresponding to those devices breaches (or approaches) a budget threshold corresponding to an amount of power allocated to the upstream device. If the budget threshold is breached, or is likely to be breached, the service may initiate operations to distribute power caps for the devices and to initiate a timer. Although distributed, the power caps may be ignored by the devices until they are instructed to enforce the power caps (e.g., upon expiration of the timer). This allows the power consumption of the devices to exceed the budgeted power associated with the upstream device at least until expiration of the timer while avoiding power outage events.
G05B 19/042 - Programme-control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
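The deferred power-capping scheme in the power distribution abstract above can be sketched as follows. This is an illustrative Python sketch only: the class and method names, the proportional cap split, and the return convention are assumptions, not the claimed implementation.

```python
# Illustrative sketch only: names and the proportional cap split are assumptions.
class PowerBudgetMonitor:
    def __init__(self, budget_watts, grace_seconds):
        self.budget_watts = budget_watts
        self.grace_seconds = grace_seconds
        self.cap_deadline = None   # set when the budget is first breached
        self.caps = {}             # device id -> distributed (not yet enforced) cap

    def observe(self, readings, now):
        """readings maps device id -> current draw in watts.
        Returns True once the distributed caps must actually be enforced."""
        total = sum(readings.values())
        if total > self.budget_watts and self.cap_deadline is None:
            # Budget breached: distribute proportional caps and start the
            # timer, but let devices keep drawing past the budget for now.
            scale = self.budget_watts / total
            self.caps = {dev: draw * scale for dev, draw in readings.items()}
            self.cap_deadline = now + self.grace_seconds
        # Devices ignore the caps until the timer expires.
        return self.cap_deadline is not None and now >= self.cap_deadline

monitor = PowerBudgetMonitor(budget_watts=1000, grace_seconds=30)
assert monitor.observe({"srv1": 600, "srv2": 500}, now=0) is False  # caps sent, not enforced
assert monitor.observe({"srv1": 600, "srv2": 500}, now=31) is True  # timer expired
```

The grace window between distribution and enforcement is what lets consumption transiently exceed the upstream budget without triggering a power outage event.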
3.
FRAMEWORK FOR EFFECTIVE STRESS TESTING AND APPLICATION PARAMETER PREDICTION
Techniques disclosed herein can include receiving an instruction to perform a stress test on one or more cloud computing resources of a cloud computing system. Worker nodes of the cloud computing system can be provisioned by a resource manager to perform the stress test on the cloud computing resources. The resource manager can instruct the worker nodes to perform the stress test. Data generated by the worker nodes during the stress test can be received by the resource manager and used to train a projection framework comprising a machine learning model. The projection framework can generate a resource projection, and the projection can be used to provision cloud computing resources to host a cloud service.
Techniques are described for snapshot key inter-dependency resolution during cross-region replications. Dependency between a first type of replication-related information (e.g., crypto keys associated with a parent directory iNode or a file iNode) and a second type of replication-related information (e.g., files, file data/FMAPs, or symbolic links) during a cross-region replication may be resolved to enable non-blocking delta application in a target file system. In some embodiments, temporary dummy entries for the first type of information may be created in the B-tree of the target file system for the out-of-order download (e.g., the second type being downloaded before the first type) of these two types of information. In some embodiments, a consolidation process may be performed between the dummy entries and the later-arriving first type of information.
Novel techniques are disclosed for accessing resources in both CSP-provided infrastructure in a region and a remote infrastructure through various control planes associated with a virtual private label cloud (vPLC). In some embodiments, the CSP-provided infrastructure in a region and a remote infrastructure are connected through a communication channel. In some embodiments, a control plane associated with the CSP-provided infrastructure in a region can provide access to both infrastructures (i.e., the CSP-provided infrastructure in a region and the remote infrastructure). In some embodiments, a control plane associated with the vPLC in the CSP-provided infrastructure in a region can provide access to both infrastructures. Yet, in other embodiments, a control plane associated with the vPLC but located within the remote infrastructure can provide access to both infrastructures.
Discussed herein is a framework that provides for customized processing of different classes of traffic. A network device in a communication path between a source host machine and a destination host machine extracts a tag from a packet received by the network device. The packet originates at a source executing on the source host machine and is destined for the destination host machine. The tag is set by the source and is indicative of a first traffic class to be associated with the packet, the first traffic class being selected by the source from a plurality of traffic classes. The network device determines, based on the tag, that the first traffic class corresponds to latency-sensitive traffic and processes the packet using one or more settings configured at the network device for processing packets associated with the first traffic class.
H04L 47/28 - Flow control; Congestion control in relation to timing considerations
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS, relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/26 - Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
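The tag-to-class-to-settings flow of the traffic-classing abstract above can be sketched as follows; the tag values, class names, and per-class settings in this sketch are invented for illustration.

```python
# Illustrative sketch: tag values, class names, and settings are invented;
# only the tag -> traffic class -> per-class settings flow follows the abstract.
SETTINGS = {
    "latency_sensitive": {"queue": "priority", "max_queue_delay_us": 50},
    "best_effort": {"queue": "default", "max_queue_delay_us": 5000},
}

def process_packet(packet, settings=SETTINGS):
    tag = packet["tag"]  # set by the source, selected from a plurality of classes
    traffic_class = "latency_sensitive" if tag == 0x1 else "best_effort"
    # Process the packet using the settings configured for its class.
    return {"payload": packet["payload"], "class": traffic_class, **settings[traffic_class]}

out = process_packet({"tag": 0x1, "payload": b"rpc-call"})
```

Keeping the classification decision at the source (in the tag) and only the per-class settings at the network device is what allows the same device to serve many classes without deep packet inspection.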
Aspects of the present disclosure include implementing fabric availability and synchronization (FAS) agents within a fabric network. In one example, a first FAS agent executing on a first network device may receive, from a second network device, a command to modify a configuration of the second network device. The first FAS agent may update the configuration of the first network device based on the command, from a current configuration to a new configuration. The first FAS agent may increment a state identifier associated with the configuration of the first network device to a new state identifier associated with the new configuration. The first FAS agent may then transmit a control packet that includes the new state identifier. A second FAS agent executing on the second network device may receive the control packet and execute the command to update the configuration of the second network device to the new configuration.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
H04L 41/0659 - Management of faults, events, alarms or notifications using network fault recovery by isolating or reconfiguring faulty entities
H04L 41/08 - Configuration management of networks or network elements
H04L 41/084 - Configuration by using pre-existing information, e.g. using templates or copying from other elements
H04L 41/0853 - Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
Techniques for implementing an orchestration service for data replication are provided. In one technique, a recipe is stored that comprises (1) a set of configuration parameters and (2) executable logic, for a data replication operation, that comprises multiple sub-steps. Each sub-step corresponds to one or more configuration parameters in the set of configuration parameters, which includes a first parameter that is associated with a default value and a second parameter that is not so associated. User input that specifies a value for the second parameter is received. The set of configuration parameters is updated to associate the value with the second parameter. The data replication operation is then initiated by processing the executable logic, which processing comprises, for each sub-step of one or more sub-steps, making an API call to a data replication service. In response to each API call, a response is received from the data replication service.
G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
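The recipe-driven orchestration described above can be sketched as follows. The recipe layout, the parameter names, and the replication API stub are hypothetical; only the defaults/user-input split and the one-API-call-per-sub-step flow come from the abstract.

```python
# Hypothetical recipe layout and API stub; only the defaults/required split
# and the one-API-call-per-sub-step flow come from the abstract.
def run_recipe(recipe, user_params, call_api):
    params = dict(recipe["defaults"])      # parameters associated with default values
    params.update(user_params)             # user input supplies the rest
    missing = [p for p in recipe["required"] if p not in params]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    responses = []
    for substep in recipe["substeps"]:     # each sub-step makes one API call
        args = {name: params[name] for name in substep["uses"]}
        responses.append(call_api(substep["name"], args))
    return responses

recipe = {
    "defaults": {"region": "us-east"},     # first parameter: has a default value
    "required": ["target_db"],             # second parameter: must come from user input
    "substeps": [
        {"name": "snapshot", "uses": ["target_db"]},
        {"name": "copy", "uses": ["region", "target_db"]},
    ],
}
calls = []
fake_replication_api = lambda step, args: calls.append(step) or {"step": step, "ok": True}
responses = run_recipe(recipe, {"target_db": "ordersdb"}, fake_replication_api)
```

Bundling configuration and executable logic in one recipe lets the orchestrator validate all parameters before the first data replication API call is made.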
9.
SECURE BI-DIRECTIONAL NETWORK CONNECTIVITY SYSTEM BETWEEN PRIVATE NETWORKS
A secure private network connectivity system (SNCS) within a cloud service provider infrastructure (CSPI) is described that provides secure private network connectivity between external resources residing in a customer's on-premise environment and the customer's resources residing in the cloud. The SNCS provides secure private bi-directional network connectivity between external resources residing in a customer's external site representation and resources and services residing in the customer's VCN in the cloud, without a user (e.g., an administrator) of the enterprise having to explicitly configure the external resources, advertise routes, or set up site-to-site network connectivity. The SNCS provides a highly performant, scalable, and highly available site-to-site network connection for processing network traffic between a customer's on-premise environment and the CSPI by implementing a robust infrastructure of network elements and computing nodes that are used to provide the secure site-to-site network connectivity.
Herein is a universal anomaly threshold based on several labeled datasets and transformation of anomaly scores from one or more anomaly detectors. In an embodiment, a computer meta-learns from each anomaly detection algorithm and each labeled dataset as follows. A respective anomaly detector based on the anomaly detection algorithm is trained based on the dataset. The anomaly detector infers respective anomaly scores for tuples in the dataset. The following are ensured in the anomaly scores from the anomaly detector: i) regularity that an anomaly score of zero cannot indicate an anomaly and ii) normality that an inclusive range of zero to one contains the anomaly scores from the anomaly detector. A respective anomaly threshold is calculated for the anomaly scores from the anomaly detector. After all meta-learning, a universal anomaly threshold is calculated as an average of the anomaly thresholds. An anomaly is detected based on the universal anomaly threshold.
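The meta-learning of a universal anomaly threshold described above can be sketched as follows, under stated assumptions: min-max normalisation supplies the regularity and normality properties, and each per-detector threshold is a high quantile of its normalised scores. The abstract does not fix either choice.

```python
# Sketch under assumptions: min-max normalisation gives regularity/normality,
# and each per-detector threshold is a high quantile of normalised scores.
def normalize(scores):
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    # a score of zero cannot indicate an anomaly; all scores fall in [0, 1]
    return [(s - lo) / span for s in scores]

def detector_threshold(scores, quantile=0.95):
    ranked = sorted(normalize(scores))
    return ranked[int(quantile * (len(ranked) - 1))]

def universal_threshold(score_sets):
    # average the per-detector/per-dataset thresholds after all meta-learning
    thresholds = [detector_threshold(s) for s in score_sets]
    return sum(thresholds) / len(thresholds)

tau = universal_threshold([[0.1, 0.2, 0.3, 9.0], [5, 5, 6, 50]])
is_anomaly = lambda normalized_score: normalized_score > tau
```

Because every detector's scores are first mapped into the same [0, 1] range, averaging their thresholds into a single universal value is meaningful across heterogeneous detectors and datasets.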
A computer sorts empirical validation scores of validated training scenarios of an anomaly detector. Each training scenario has a dataset used to train an instance of the anomaly detector that is configured with values for hyperparameters. Each dataset has values for metafeatures. For each predefined ranking percentage, a subset of best training scenarios is selected that consists of the ranking percentage of validated training scenarios having the highest empirical validation scores. Linear optimizers are trained to infer a value for a hyperparameter. Many distinct unvalidated training scenarios are generated, each having metafeature values and hyperparameter values that contain the value inferred for that hyperparameter by a linear optimizer. For each unvalidated training scenario, a validation score is inferred. A best linear optimizer is selected as the one having the highest combined inferred validation score. For a new dataset, the best linear optimizer infers a value of that hyperparameter.
Data can be received that includes information corresponding to a set of users. Privacy protection protocols that apply to the data can be identified. A subset of the data can be identified as being personally identifiable information (PII) data, where the subset includes a set of PII attributes. The PII attributes can be split into categories based on a format of a data field in the PII attributes. The processed PII data can be combined with non-PII data to create processed client data. It can be determined to add noise to part of the processed PII data. An amount of noise can be determined based on the privacy protection protocols. The amount of noise can be added to part of the processed PII data to produce protected data. A machine-learning model can be trained using the protected data.
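The noise-addition step of the privacy pipeline above can be sketched as follows. Laplace noise calibrated by a sensitivity/epsilon ratio is a standard differential-privacy choice assumed here for concreteness; the abstract only requires that the amount of noise follow the applicable privacy protection protocols, and all function and field names are invented.

```python
import math
import random

# Assumed technique: Laplace noise with sensitivity/epsilon calibration.
def laplace_noise(sensitivity, epsilon, rng):
    scale = sensitivity / epsilon   # stricter protocol -> smaller epsilon -> more noise
    u = rng.random() - 0.5          # uniform on (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))  # inverse-CDF sampling

def protect(records, noisy_fields, sensitivity, epsilon, seed=0):
    rng = random.Random(seed)
    protected = []
    for record in records:
        record = dict(record)
        for field in noisy_fields:  # noise is added only to the flagged PII part
            record[field] += laplace_noise(sensitivity, epsilon, rng)
        protected.append(record)
    return protected

raw = [{"age": 34, "clicks": 7}, {"age": 51, "clicks": 2}]
protected = protect(raw, noisy_fields=["age"], sensitivity=1.0, epsilon=0.5)
```

A model trained on `protected` rather than `raw` never sees exact values of the noised PII attributes, while non-PII fields pass through unchanged.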
Techniques are described herein for applying access controls to logical secure elements (LSEs) running on the same secure element (SE) hardware platform. Embodiments include a firmware component that determines whether a message targeting an LSE is authorized to trigger an operation. For example, the firmware component may verify a signature of the received message using a public key, shared secret, or other access control key. Additionally or alternatively, access control policies may be defined to constrain the load of the LSEs on the SE hardware platform and/or to prioritize LSE access. For example, the access control policies may define usage thresholds, such as maximum memory and/or processor utilization rates. As another example, the access controls may restrict the active time for an LSE to a threshold duration. If access constraints are violated or the message cannot be verified, then the firmware component may delay or deny the operation.
Systems and methods for automatic network health checks are disclosed herein. A method for performing an automatic health check includes determining to perform a health check on a portion of a communications network, the communications network including a plurality of hosts that each include a routing agent and an advertising agent. The method includes adding, to a database, a test route indicated as applicable to every host and pointing to an IP address, and receiving the test route from the database with the routing agents of at least some of the plurality of hosts. The method includes providing the test route from the routing agent to the advertising agent, advertising the test route with the advertising agents of the at least some of the plurality of hosts to a plurality of switches within the communications network, and determining success of the health check based on information received from the plurality of switches.
Systems and methods for performing an automatic route flip are disclosed herein. The method can include receiving a request to flip a primary route and a secondary route in a communications network including at least a first host and a second host, each including a routing agent and an advertising agent. The method includes identifying the first host as having a dynamic path length and the second host as having a static path length, updating routing information in a database accessible by the first host to change the path length of the first host from a first path length to a second path length, receiving the updated routing information from the database with the routing agent of the first host, and advertising the updated routing information with the first host to at least one switch within the communications network.
H04L 45/122 - Routing or path finding of packets in data switching networks; Shortest path evaluation by minimising distances, e.g. by selecting a route with a minimum number of hops
H04L 45/00 - Routing or path finding of packets in data switching networks
Novel techniques are disclosed for virtualizing a cloud infrastructure in a region provided by a cloud service provider (CSP) to allow a reseller of the CSP to provide reseller-offered cloud services using a securely isolated portion of the CSP-provided infrastructure in the region and have a direct business relationship with the reseller's customers. In certain embodiments, the CSP-provided infrastructure in a region is organized into one or more data centers. In certain embodiments, the securely isolated portion of the CSP-provided infrastructure comprises at least one compute resource or a memory resource.
G06F 9/455 - Arrangements for executing specific programs; Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Novel techniques for creating service endpoints associated with different virtual private label clouds (vPLCs) for accessing a cloud service are disclosed. In certain embodiments, an endpoint management service (EMS) uses a novel architecture that enables the concurrent use of multiple vPLC-specific service endpoints with one endpoint per cloud service per vPLC to access the same cloud service running on multiple vPLC-specific resources. In some embodiments, each vPLC-specific service endpoint may be associated with a fully qualified domain name (FQDN) and an IP address.
G06F 9/455 - Arrangements for executing specific programs; Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
18.
PREDICTING MARKETING OUTCOMES USING CONTRASTIVE LEARNING
Techniques for predicting marketing outcomes using contrastive learning are disclosed, including: obtaining historical marketing messages; obtaining historical open rates associated respectively with the historical marketing messages; based on the historical marketing messages, generating latent space representations associated respectively with the historical marketing messages; based on the latent space representations and respective contents of the historical marketing messages, training a first machine learning model to map contents of marketing messages to corresponding latent space representations of the marketing messages; based at least on the latent space representations and the historical open rates, training a second machine learning model to map latent space representations of marketing messages to predicted open rates of the marketing messages.
Novel techniques are disclosed for enabling customizable consoles of different virtual private label clouds (vPLCs). In some embodiments, one console server may execute multiple consoles for multiple vPLCs and the CSP. In other embodiments, one console server may be dedicated to a vPLC-specific console. In certain embodiments, console customization, including a customized set of console user interfaces (UIs), may be performed for each vPLC-specific console.
Techniques for presenting a graphical user interface (GUI) for configuring a cloud service workstation are disclosed. The system presents a GUI that displays a plurality of possible workstation configurations and the costs associated with each respective workstation configuration, prior to creation of a workstation. The GUI updates the cost associated with a workstation configuration responsive to receiving a selection from a user to modify the workstation configuration. The user may request a different configuration based on a single user input, without specifying which resources to modify. The GUI may recommend a workstation configuration based on one or more user inputs such as a budget, an application service domain, a duration, or a processing power requirement.
Techniques are disclosed herein for objective function optimization in target-based hyperparameter tuning. In one aspect, a computer-implemented method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that comprises a domain score for each domain, calculated based on a number of instances within an evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial. Each trial of a hyperparameter tuning process includes training the machine learning algorithm to generate a machine learning model, running the machine learning model in different domains using the set of hyperparameter values, evaluating the machine learning model for each domain and, once the machine learning model has reached convergence, outputting at least one machine learning model.
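A minimal sketch of a domain-scored objective function of the kind described above, assuming the per-domain score is a simple accuracy over the evaluation dataset and the combined objective is the mean of the domain scores; the source does not commit to this exact form.

```python
# Assumed concrete form: per-domain accuracy, combined by averaging.
def domain_score(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def objective(per_domain_results):
    """per_domain_results maps domain -> (predictions, labels); returns the
    value a tuner would compare across trials, plus the per-domain scores."""
    scores = {d: domain_score(p, y) for d, (p, y) in per_domain_results.items()}
    return sum(scores.values()) / len(scores), scores

value, scores = objective({
    "retail": ([1, 0, 1], [1, 0, 0]),   # 2 of 3 instances correctly predicted
    "finance": ([0, 0], [0, 0]),        # 2 of 2 correct
})
```

Scoring each domain separately before combining keeps a hyperparameter setting that excels in one domain from masking poor behavior in another.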
Techniques are disclosed for deploying a computing resource (e.g., a service) in response to user input. A computer-implemented method can include operations of receiving (e.g., by a gateway computer of a cloud-computing environment) a request comprising an identifier for a computing component of the cloud-computing environment. The computing device receiving the request may determine whether the identifier exists in a routing table that is accessible to the computing device. If so, the request may be forwarded to the computing component. If not, the device may transmit an error code (e.g., to the user device that initiated the request) indicating the computing component is unavailable and a bootstrap request to a deployment orchestrator that is configured to deploy the requested computing component. Once deployed, the computing component may be added to a routing table such that subsequent requests can be properly routed to and processed by the computing component.
H04L 67/1031 - Controlling of the operation of servers by a load balancer, e.g. adding or removing servers that serve requests
H04L 67/51 - Discovery or management thereof, e.g. service location protocol [SLP] or web services
H04L 67/63 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources, by routing a service request depending on the content or context of the request
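The route-or-bootstrap gateway logic described above can be sketched as follows; the component names and the deployment orchestrator interface are invented for illustration.

```python
# Invented names; only the route-or-bootstrap control flow follows the abstract.
class Gateway:
    def __init__(self, orchestrator):
        self.routing_table = {}          # component id -> backend address
        self.orchestrator = orchestrator

    def handle(self, component_id):
        backend = self.routing_table.get(component_id)
        if backend is not None:
            return {"status": 200, "forwarded_to": backend}
        # Unknown component: return an error AND request its deployment.
        self.orchestrator.bootstrap(component_id)
        return {"status": 503, "error": f"{component_id} unavailable"}

    def register(self, component_id, backend):
        # Called once deployment finishes, so later requests route normally.
        self.routing_table[component_id] = backend

class RecordingOrchestrator:
    def __init__(self):
        self.requested = []
    def bootstrap(self, component_id):
        self.requested.append(component_id)

gateway = Gateway(RecordingOrchestrator())
first = gateway.handle("billing")             # miss: 503 plus a bootstrap request
gateway.register("billing", "10.0.0.7:8080")  # deployment orchestrator reports success
second = gateway.handle("billing")            # hit: routed to the new component
```

The first request both reports the component as unavailable and triggers its deployment, so subsequent requests succeed without any operator intervention.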
23.
METADATA CUSTOMIZATION FOR VIRTUAL PRIVATE LABEL CLOUDS
Novel techniques are disclosed for providing a vPLC-specific metadata service, including customized vPLC-specific metadata. In certain embodiments, each vPLC may generate customized metadata using its corresponding vPLC-specific customization instructions. In some embodiments, a vPLC-specific metadata service may be performed using pre-generated customized vPLC-specific metadata, on-the-fly customized metadata, pre-generated CSP-format metadata, or combinations thereof.
Novel techniques for resource usage monitoring, billing, and enforcement for virtual private label clouds (vPLCs) are disclosed. In some embodiments, resource usage for a vPLC associated with a reseller is monitored at both the reseller level and the customer-of-reseller level using resource IDs, and stored as usage information in two levels, associated with a tenancy ID for the reseller (at the reseller level) and tenancy IDs for customers of the reseller (at the customer-of-reseller level). In some embodiments, a two-level billing process generates invoices using two-level pricing information and sends the generated invoices either to resellers or to customers of resellers directly. In some embodiments, usage enforcement can be performed per vPLC or per customer tenancy of a reseller's customer.
Novel techniques are disclosed that enable the creation of a two-tier marketplace comprising a CSP marketplace and one or more marketplaces for virtual private label clouds (vPLCs). Each marketplace can be created and operated independently. In some embodiments, a publisher may publish a solution offering directly on a vPLC marketplace without involving the CSP marketplace. In other embodiments, a solution offering published on a marketplace may be automatically republished on another marketplace. Yet, in another embodiment, a customer subscribing to a vPLC marketplace can see a composite view of a directly published solution listing and a republished solution listing.
Techniques are disclosed for facilitating connectivity to vPLCs created in a CSP-provided infrastructure in a region. Within the CSP-provided infrastructure in a region, when the destination of a packet is determined to be an endpoint associated with a particular vPLC, the packet is tagged with information related to the particular vPLC. The vPLC-related information for the particular vPLC can include, for example, a vPLC identifier identifying the particular vPLC, an identifier identifying a customer associated with the endpoint, a virtual cloud network identifier identifying a virtual cloud network (VCN) belonging to the particular vPLC and of which the endpoint is part, and other vPLC-related information. The packet is then routed or communicated within the CSP-provided infrastructure in a region along with the tagged vPLC-related information. The vPLC-related information is used as part of the connectivity and for routing of packets within the CSP-provided infrastructure in a region.
Disclosed are techniques for processing user profiles using data structures that are specialized for processing by a GPU. More particularly, the disclosed techniques relate to systems and methods for evaluating characteristics of user profiles to determine whether to offload certain user profiles to the GPU for processing or to process the user profiles locally by one or more central processing units (CPUs). Processing user profiles may include comparing the interest tags included in the user profiles with logic trees, for example, logic trees representing marketing campaigns, to identify user profiles that match the campaigns.
Aspects of the present disclosure include implementing fabric availability and synchronization (FAS) agents within a fabric network. In one example, a first FAS agent executing on a first network device may receive, from a second network device, a command to modify a configuration of the second network device. The first FAS agent may update the configuration of the first network device based on the command, from a current configuration to a new configuration. The first FAS agent may increment a state identifier associated with the configuration of the first network device to a new state identifier associated with the new configuration. The first FAS agent may then transmit a control packet that includes the new state identifier. A second FAS agent executing on the second network device may receive the control packet and execute the command to update the configuration of the second network device to the new configuration.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
H04L 41/0659 - Management of faults, events, alarms or notifications using network fault recovery by isolating or reconfiguring faulty entities
H04L 41/08 - Configuration management of networks or network elements
H04L 41/084 - Configuration by using pre-existing information, e.g. using templates or copying from other elements
H04L 41/0853 - Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
Techniques are disclosed for tuning external invocations utilizing weight-based parameter resampling. In one example, a computer system determines a plurality of samples, each sample being associated with a parameter value of a plurality of potential parameter values of a particular parameter. The computer system assigns weights to each of the parameter values, and then selects a first sample for processing via a first external invocation based on a weight of the parameter value of the first sample. The computer system then determines feedback data associated with a level of performance of the first external invocation. The computer system adjusts the weights of the parameter values of the particular parameter based on the feedback data. The computer system then selects a second sample of the plurality of samples to be processed via execution of a second external invocation based on the adjustment of weights of the parameter values.
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
G06F 16/21 - Design, administration or maintenance of databases
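The weight-based resampling loop described above can be sketched as follows: sample a parameter value in proportion to its weight, observe feedback from the external invocation, then adjust the weight. The multiplicative update rule below is an assumption; the abstract does not specify the adjustment formula.

```python
import random

# Assumed update rule: multiply a value's weight by (0.5 + reward).
class WeightedSampler:
    def __init__(self, values, seed=0):
        self.weights = {v: 1.0 for v in values}   # start with uniform weights
        self.rng = random.Random(seed)

    def pick(self):
        values = list(self.weights)
        # sample a parameter value in proportion to its current weight
        return self.rng.choices(values, weights=[self.weights[v] for v in values])[0]

    def feedback(self, value, reward):
        # reward in [0, 1], derived from the external invocation's performance
        self.weights[value] *= 0.5 + reward

sampler = WeightedSampler([10, 50, 250])
first_value = sampler.pick()       # first sample, selected by weight
sampler.feedback(50, reward=1.0)   # invocation with value 50 performed well
sampler.feedback(250, reward=0.0)  # invocation with value 250 performed poorly
```

After the feedback, later calls to `pick()` favor the value 50, so subsequent external invocations concentrate on parameter values that have performed well.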
In an embodiment, a database management system (DBMS) hosted by a computer receives a request to execute a database statement and responsively generates an interpretable execution plan that represents the database statement. The DBMS decides whether execution of the database statement will or will not entail interpreting the interpretable execution plan and, if not, the interpretable execution plan is compiled into object code based on partial evaluation. In that case, the database statement is executed by executing the object code of the compiled plan, which provides acceleration. In an embodiment, partial evaluation and Turing-complete template metaprogramming (TMP) are based on using the interpretable execution plan as a compile-time constant that is an argument for a parameter of an evaluation template.
Systems and techniques for budget-based management of a cloud infrastructure are disclosed. A system monitors a cloud infrastructure for one or more trigger-action conditions associated with the cloud infrastructure. When a trigger-action condition is detected, the system determines a cloud infrastructure modification action that corresponds to the detected trigger condition. The system may apply the cloud infrastructure modification action to the cloud infrastructure. A cloud infrastructure modification action may modify one or more of the workstation resources such that the rate of budget consumption is changed, for example, by pausing a resource, deleting a resource, resuming a paused resource, or changing from one resource to a different resource.
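The trigger-action flow described above can be sketched as follows; the rule encoding and the pause-at-90%-of-budget example are invented for illustration.

```python
# Invented rule encoding; only the trigger-condition -> modification-action
# flow follows the abstract above.
def evaluate_triggers(infra, rules):
    """Apply the modification action of every rule whose trigger condition holds."""
    applied = []
    for rule in rules:
        if rule["trigger"](infra):
            rule["action"](infra)
            applied.append(rule["name"])
    return applied

infra = {"budget": 100.0, "spent": 95.0, "resources": {"ws1": "running"}}
rules = [{
    "name": "pause-at-90-percent",
    # trigger: 90% of the budget has been consumed
    "trigger": lambda i: i["spent"] / i["budget"] >= 0.9,
    # action: pause resources to slow the rate of budget consumption
    "action": lambda i: i["resources"].update({r: "paused" for r in i["resources"]}),
}]
fired = evaluate_triggers(infra, rules)
```

Encoding the trigger and its action as a pair makes it easy to add further budget responses (delete, resume, swap resource types) without changing the monitoring loop.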
In a computer, each of multiple anomaly detectors infers an anomaly score for each of many tuples. For each tuple, a synthetic label is generated that indicates for each anomaly detector: the anomaly detector, the anomaly score inferred by the anomaly detector for the tuple and, for each of multiple contamination factors, the contamination factor and, based on the contamination factor, a binary class of the anomaly score. For each particular anomaly detector excluding a best anomaly detector, a similarity score is measured for each contamination factor. The similarity score indicates how similar, between the particular anomaly detector and the best anomaly detector, are the binary classes of labels with that contamination factor. For each contamination factor, a combined similarity score is calculated based on the similarity scores for the contamination factor. Based on a contamination factor that has the highest combined similarity score, the computer detects that an additional anomaly detector is inaccurate.
Techniques are described herein for running multiple logical secure elements (LSEs) on the same physical secure element (SE) hardware. For example, embodiments may include running multiple logical Subscriber Identification Module (SIM) cards on the same physical SIM card or universal integrated circuit card (UICC). Additionally or alternatively, embodiments may include running other secure element applications and services on the same SE hardware. The techniques allow mobile device users to access multiple security services, which may originate from different security service providers (SSPs), in a secure manner using the same SE hardware, without requiring the integration of multiple physical slots on a mobile device or the physical exchange of different cards within the same slot.
G06F 21/34 - User authentication involving the use of external additional devices, e.g. dongles or smart cards
G06F 21/72 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure computing or information processing in cryptographic circuits
G06F 21/74 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, operating in dual or compartmented mode, i.e. at least one secure mode
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure storage of data
34.
CONCURRENT AND NON-BLOCKING OBJECT DELETION FOR CROSS-REGION REPLICATIONS
Techniques are described for enabling concurrent and non-blocking replication object deletion during cross-region replications. In some embodiments, in a target file system, a target replication pipeline (part of a cross-region replication) and a deletion pipeline operate in parallel. The deletion pipeline deletes processed objects that reach the last pipeline stage of the target replication pipeline after each checkpoint in the target replication pipeline. In some embodiments, after a non-recoverable failure during the cross-region replication, the cross-region replication can be restarted from the beginning (i.e., a fresh restart) without waiting for its unused objects in the Object Store to be deleted: a generation number associated with each object is used to delete the unused objects in a background process, while processed objects of the freshly restarted cross-region replication are deleted as normal.
Systems and methods for route mismatch identification are disclosed herein. A method of route mismatch identification can include creating, in cache, an expected routing table based on expected routing information received by a routing agent of a host from a database accessible by each of a plurality of hosts. The method can also include creating, in cache, an actual routing table based on actual routing information received by the routing agent of the host from an advertising agent of the host, comparing the actual routing table and the expected routing table, and taking an action based on the comparison of the actual routing table and the expected routing table.
Novel techniques are disclosed for enabling identity cloud service for virtual private label clouds (vPLCs). A vPLC is created for a reseller of a Cloud Services Provider (CSP) using CSP-provided infrastructure in a region such that the reseller can provide one or more reseller-offered cloud services to customers of the reseller. In some embodiments, the identity management may be configured with either a shared identity cloud service (IDCS) stack model or an independent IDCS stack model. In certain embodiments, two-tier vPLC-aware identity management functions are performed for resellers of the CSP and customers of the resellers.
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
37.
RESOURCE ALLOCATION FOR VIRTUAL PRIVATE LABEL CLOUDS
Novel techniques of resource allocation services for virtual private label cloud (vPLC) are disclosed. A vPLC is created for a reseller of a Cloud Services Provider (CSP) using CSP-provided infrastructure in a region such that the reseller can provide one or more reseller-offered cloud services to customers of the reseller. In certain embodiments, the resource allocation services check a first-level policy and a resource database to determine whether a requested resource is allowed and available to be allocated to a vPLC associated with a reseller. The resource allocation services may further check a second-level policy and the resource database to determine whether the requested resource is allowed and available to be allocated to a customer of the reseller. In some embodiments, the resource allocation services may allocate resources for a vPLC according to a partitioning requirement.
Techniques are provided for using context tags in named-entity recognition (NER) models. In one particular aspect, a method is provided that includes receiving an utterance, generating embeddings for words of the utterance, generating a regular expression and gazetteer feature vector for the utterance, generating a context tag distribution feature vector for the utterance, concatenating or interpolating the embeddings with the regular expression and gazetteer feature vector and the context tag distribution feature vector to generate a set of feature vectors, generating an encoded form of the utterance based on the set of feature vectors, generating log-probabilities based on the encoded form of the utterance, and identifying one or more constraints for the utterance.
The disclosed systems, methods, and computer-readable media relate to managing Non-Volatile Memory Express (NVMe) over Transmission Control Protocol (TCP) (NVMeOTCP) connections between a smart network interface card (smartNIC) and a block storage data plane (BSDP) of a cloud computing environment. A software agent (“agent”) executing at the smartNIC may manage a number of network paths (active and, in some cases, passive network paths). The agent may monitor the network traffic (e.g., input/output operations (IOPS)) through the paths (e.g., using established NVMeOTCP connections corresponding to the paths). If a condition is met relating to a performance threshold associated with the monitored paths, the agent may increase or decrease the number of established NVMeOTCP connections to match real-time network conditions.
Techniques for computing global feature explanations using adaptive sampling are provided. In one technique, first and second samples from a dataset are identified. A first set of feature importance values (FIVs) is generated based on the first sample and a machine-learned model. A second set of FIVs is generated based on the second sample and the model. If a result of a comparison between the first and second FIV sets does not satisfy criteria, then: (i) an aggregated set is generated based on the last two FIV sets; (ii) a new sample that is double the size of a previous sample is identified from the dataset; (iii) a current FIV set is generated based on the new sample and the model; (iv) it is determined whether a result of a comparison between the current and aggregated FIV sets satisfies the criteria; and (i)-(iv) are repeated until the result of the last comparison satisfies the criteria.
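The doubling-sample loop can be sketched roughly as below, assuming a stand-in `fiv_fn` for model-based feature attribution and maximum absolute difference as the comparison criterion (both invented for illustration):

```python
import random

def adaptive_fiv(dataset, fiv_fn, start=32, tol=0.05, seed=0):
    """Doubling-sample sketch: compare feature-importance vectors (FIVs)
    from successive samples and stop once they agree within `tol`.
    `fiv_fn(sample)` stands in for model-based importance attribution."""
    rng = random.Random(seed)
    size = start
    prev = fiv_fn(rng.sample(dataset, min(size, len(dataset))))
    curr = fiv_fn(rng.sample(dataset, min(size * 2, len(dataset))))
    while max(abs(a - b) for a, b in zip(prev, curr)) > tol:
        prev = [(a + b) / 2 for a, b in zip(prev, curr)]  # (i) aggregate
        size *= 2                                          # (ii) double sample
        sample = rng.sample(dataset, min(size, len(dataset)))
        curr = fiv_fn(sample)                              # (iii) current FIVs
        if size >= len(dataset):                           # whole dataset used
            break                                          # (iv) loop otherwise
    return [(a + b) / 2 for a, b in zip(prev, curr)]
```

On a dataset of identical rows, with column means as the stand-in attribution, the loop converges immediately.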
Methods and systems are disclosed for automatic generation of content-distribution images. User input corresponding to a content-distribution operation is received and parsed to identify keywords. Image data corresponding to the keywords can be identified, and image-processing operations may be executed on the image data. A generative adversarial network is executed on the processed image data, which includes executing a first neural network on the processed image data to generate first images that correspond to the keywords, the first images being generated based on a likelihood that each of the first images would not be detected as having been generated by the first neural network. A user interface can display the first images with second images that include images that were previously part of content-distribution operations or images that were designated by an entity as being available for content-distribution operations.
Systems, computer-implemented methods, and computer-readable media for facilitating resource balancing based on resource capacities and resource assignments are disclosed. Electronic communications received via interfaces from monitoring devices, identifying resource descriptions of resources, may be monitored. A resource-descriptions data store may be updated to associate each entity of the entities with resource capacities of each resource type of a set of resource types. A first electronic communication, from resource-controlling systems, may be detected. Model data from a model data store may be accessed based on the identified resource descriptions. A first model may be identified based on the model data. A resource assessment may be generated based on whether a threshold is satisfied, the threshold being evaluated using the first model, a first resource capacity of a first resource type, and the first electronic communication. An electronic notification may be transmitted to client devices to identify the resource assessment.
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 47/70 - Admission control; Resource allocation
43.
CONTINUOUS HYPER-PARAMETER TUNING WITH AUTOMATIC DOMAIN WEIGHT ADJUSTMENT BASED ON PERIODIC PERFORMANCE CHECKPOINTS
Techniques are disclosed herein for continuous hyperparameter tuning with automatic domain weight adjustment based on periodic performance checkpoints. In one aspect, a method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that is defined at least in part on a plurality of domains of a search space that is associated with the machine learning algorithm. Each trial of a hyperparameter tuning process includes: running the machine learning algorithm in different domains using the set of hyperparameter values; periodically checking a performance of the machine learning algorithm in the different domains based on the hyperparameter objective function; and continuing hyperparameter tuning with a new set of hyperparameter values after automatically adjusting the domain weights according to a regression status of the different domains. Once the machine learning algorithm has reached convergence, at least one machine learning model is output.
Techniques are described for enabling replication-aware resource management and task management in a cloud infrastructure for cross-region replication. In some embodiments, each replication job is associated with a set of replication-related information. In certain embodiments, the replication-aware resource management allocates resources, using a combination of various resource allocation schemes, to a fleet of replicators to allow the fleet to select replication jobs in a job queue, and perform resource scaling based on monitored performance metrics reported by the fleet. In some embodiments, the replication-aware task management enables replication job selection based on the set of replication-related information to optimize the performance of all cross-region replications in the region.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
In one aspect, a system receives a request to cluster a set of log records. Responsive to receiving the request, the system identifies at least one dictionary that defines a set of tokens and corresponding token weights and generates, based at least in part on the set of tokens and corresponding token weights, a set of clusters such that each cluster in the set of clusters represents a unique combination of two or more tokens from the dictionary and groups a subset of log records mapped to the unique combination of two or more tokens. The system may then perform one or more automated actions based on at least one cluster in the set of clusters.
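A minimal sketch of the dictionary-driven grouping described above, assuming whitespace tokenization and a cluster key built from the `k` highest-weight dictionary tokens present in each record (details the abstract leaves open):

```python
def cluster_logs(records, dictionary, k=2):
    """Group log records by the combination of the `k` highest-weight
    dictionary tokens they contain. `dictionary` maps token -> weight.
    Tokenization and key construction are illustrative assumptions."""
    clusters = {}
    for rec in records:
        present = [t for t in dictionary if t in rec.lower().split()]
        # Keep the k heaviest tokens, then sort for a stable cluster key.
        key = tuple(sorted(sorted(present, key=dictionary.get, reverse=True)[:k]))
        clusters.setdefault(key, []).append(rec)
    return clusters
```

Records sharing the same heavy-token combination land in the same cluster, which is then available for automated actions.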
When a request for accessing a service is received, a user object may be stored in a long-term data store, as well as in a short-term cache. The cache may be divided into a regular cache that stores full versions of the user objects, and a surrogate cache that stores compact versions of the user object. The compact version of the user object may include a field that is derived from the full user object indicating whether a subsequent request for access to a particular service should be granted. After access is granted/denied based on this value in the compact user object, the system can process an update to the full user object offline. This surrogate cache structure may be used to rapidly approve/deny requests, decoupling this procedure from the processing involved with a full user object.
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
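The surrogate-cache fast path might look roughly like this sketch, with a plain dict standing in for the long-term store and an invented role-based rule deriving the compact `allow` field:

```python
from collections import deque

class SurrogateCache:
    """Sketch of the two-tier cache: full user objects in `regular`,
    compact {"allow": bool} records in `surrogate`. Updates to the full
    object are queued for offline processing after the fast decision."""

    def __init__(self, store):
        self.store = store       # long-term data store (dict stand-in)
        self.regular = {}        # regular cache: full user objects
        self.surrogate = {}      # surrogate cache: compact derived records
        self.pending = deque()   # offline full-object update queue

    def login(self, user_id):
        """On an access request, populate both caches from the store."""
        obj = self.store[user_id]
        self.regular[user_id] = obj
        # The derived field; the role rule here is invented for the example.
        self.surrogate[user_id] = {"allow": obj["role"] in ("admin", "member")}

    def check_access(self, user_id):
        """Fast approve/deny from the compact record only; full-object
        processing is decoupled and deferred."""
        self.pending.append(user_id)
        return self.surrogate[user_id]["allow"]
```

The decision never touches the full user object, which is the decoupling the abstract describes.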
A first data accessor acquires a lock associated with a critical section. The first data accessor initiates a help session associated with a first operation of the critical section. In the help session, a second data accessor (which has not acquired the first lock) performs one or more sub-operations of the first operation. The first data accessor releases the lock after at least the first operation has been completed.
Techniques for business-to-business (B2B) chat routing are disclosed, including: receiving, by a B2B chatbot during a chat session with a user, user input including a user-supplied business name; performing a business lookup based at least on the user-supplied business name, to obtain a canonical business name and a unique business identifier associated with the canonical business name; performing a customer relationship management (CRM) system lookup based at least on the unique business identifier, to identify a corresponding business account; and routing the chat session from the B2B chatbot to a human chat agent assigned to the corresponding business account.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
Techniques are described for transmitting metric data between tenancies. Metric data is gathered for resources within a customer tenancy of a multi-tenant environment. This metric data is sent to a service tenancy of the multi-tenant environment, where the service tenancy is separate from the customer tenancy. The metric data is validated and preprocessed within the service tenancy to make sure that all required fields (such as key-value pairs) are located within the metric data. The preprocessed metric data is then sent to a telemetry service for analysis.
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or of input/output operations
G06N 5/04 - Inference or reasoning models
Techniques for generating a schema transformation for application data to monitor and manage the application in a runtime environment are disclosed. A system runs an application plugin in a runtime environment to identify data generated by application modules in one or both of an application build process and an application execution process. The application plugin is a software program executed together with the application build process. The application plugin identifies a source schema associated with application data. The application plugin identifies a target schema associated with an analysis program or machine learning model. The application plugin generates a schema transformation to convert application runtime data into a target data set. The system applies the target data set to an analysis program, such as a machine learning model, to generate output analysis data associated with the application.
Techniques are disclosed for generating machine learning models that are insensitive to drift. A system trains a machine learning model using a divergent training dataset including synthesized data points simulating drift. The system can evaluate the machine learning models in terms of accuracy, latency, efficiency, and other metrics. Based on the evaluation, the system can select a machine learning model least susceptible to drift.
Fingerprint inference of software artifacts includes receiving a request including classes, generating request fingerprints from the classes, and querying at least one index with the request fingerprints to identify a matching set of artifact versions. Fingerprint inference further includes obtaining, for each matching artifact version in the matching set of artifact versions, a count of the request fingerprints matching an indexed fingerprint related, in the at least one index, to the artifact version, and selecting a subset of the matching set of artifact versions having a count that is maximal amongst the matching set of artifact versions. Fingerprint inference further includes returning the subset of the matching set of artifact versions.
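The counting-and-maximum selection can be illustrated with a small sketch; the inverted index mapping fingerprint to artifact versions is assumed, and fingerprint generation from classes is omitted:

```python
def infer_versions(request_fingerprints, index):
    """Count matching request fingerprints per artifact version and keep
    the versions with the maximal count. `index` maps a fingerprint to
    the set of artifact versions containing it (an assumed structure)."""
    counts = {}
    for fp in request_fingerprints:
        for version in index.get(fp, ()):
            counts[version] = counts.get(version, 0) + 1
    if not counts:
        return []
    best = max(counts.values())
    return sorted(v for v, c in counts.items() if c == best)
```

A version matched by all three request fingerprints wins over versions matched by only one.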
Embodiments perform anomaly detection of expense reports in response to receiving an expense report as input data, the expense report including a plurality of expenses. Embodiments create a plurality of groups of expenses, each group corresponding to a different combination of a category of the expense, a location of the expense, and a season of the expense. Embodiments generate and train an unsupervised machine learning model corresponding to each group, assign each of the expenses of the expense report to a corresponding group, and input the expenses into the unsupervised machine learning model corresponding to the group. Embodiments then generate an anomaly prediction at each unsupervised machine learning model for each expense of the expense report.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
54.
BURST DATACENTER CAPACITY FOR HYPERSCALE WORKLOADS
In some aspects, techniques may include monitoring a primary load of a datacenter and a reserve load of the datacenter. The primary load and reserve load can be monitored by a computing device. The primary load of the datacenter can be configured to be powered by one or more primary generator blocks having a primary capacity, and the reserve load of the datacenter can be configured to be powered by one or more reserve generator blocks having a reserve capacity. Also, the techniques may include detecting that the primary load of the datacenter exceeds the primary capacity. In addition, the techniques may include connecting the reserve generator blocks to at least one of the primary generator blocks and the primary load using a computing device switch.
Techniques discussed herein include providing a cloud computing environment in which applications are deployed using virtual-machine-based virtualization with a static pool of computing nodes (e.g., substrate nodes, overlay nodes) and container-based virtualization with a dynamic pool of computing nodes (e.g., nodes managed by a container orchestration platform). The control plane functionality may be invoked by a deployment orchestrator (e.g., using a client of the container orchestration platform). In some embodiments, the control plane may include a set of applications that are configured to communicate with core services for certificate generation and rotation, namespace and quota management, metric monitoring and alarming, node authentication, and cluster membership management.
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
56.
DATA PLANE TECHNIQUES FOR SUBSTRATE MANAGED CONTAINERS
Techniques discussed herein include providing a cloud computing environment in which applications are deployed by a deployment orchestrator using virtual-machine-based virtualization with a static pool of computing nodes (e.g., substrate nodes, overlay nodes) and container-based virtualization with a dynamic pool of computing nodes (e.g., nodes managed by a container orchestration platform). Components of a data plane may be used to deploy containers to micro-virtual machines. A container runtime interface (CRI) may receive a deployment request from the deployment orchestrator. A container networking interface of the data plane may configure network connections and allocate an IP address for the container. A container runtime of the data plane may generate and configure the container with the IP address and run the container within a micro-virtual machine that is compatible with the container orchestration platform.
Techniques are disclosed for automatically inferring software-defined network policies from the observed workload in a computing environment. The disclosed techniques include monitoring network traffic flow originating from network interfaces corresponding to containers that execute components of an application, recording details of a new network connection or a change in the existing network connection, obtaining information concerning the components of the application, identifying metadata for a component involved in the new network connection or the change in an existing network connection based on a comparison of the details of the new network connection or a change in the existing network connection and the information concerning the components of the application, generating a network policy for the component using at least the metadata for the component, and integrating the network policy for the component into a deployment package for the application.
H04L 41/0893 - Assignment of logical groups to network elements
H04L 41/0266 - Standardisation; Integration Exchange or transport of network management information using the Internet; Embedding network management web servers in network elements; Web-services-based protocols using meta-data, objects or commands for formatting management information, e.g. using the eXtensible markup language [XML]
H04L 41/08 - Configuration management of networks or network elements
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04L 43/062 - Generation of reports related to network traffic
H04L 43/0811 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters, by checking availability, by checking connectivity
58.
PROVIDING BULK SERVER-SIDE CUSTOM ACTIONS FOR MULTIPLE ROWS OF A CLIENT-SIDE SPREADSHEET
Server-side custom actions are associated with data fields. The data fields are associated with multiple rows of a client-side spreadsheet. The server-side custom actions are defined by one or more web services. Input is received into the data fields of the client-side spreadsheet. The input from the data fields is transmitted in a single request to the one or more web services. Results of the server-side custom actions are received. The results are displayed in the client-side spreadsheet.
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 40/18 - Text processing Editing, e.g. insertion or deletion using ruled lines of spreadsheets
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 67/51 - Discovery or management thereof, e.g. service location protocol [SLP] or web services
59.
APPLICATION-LEVEL REDIRECT TRAPPING AND CREATION OF NAT MAPPING TO WORK WITH ROUTING INFRASTRUCTURE FOR PRIVATE CONNECTIVITY IN CLOUD AND CUSTOMER NETWORKS
A computer program product, system, and computer-implemented method for application-level redirect trapping and creation of NAT mapping to work with routing infrastructure for private connectivity in cloud and customer networks. The approach disclosed herein generally comprises a method of leveraging a reverse connection endpoint and an IP address mapping controller to capture redirection messages from a private cloud or network (e.g., a service consumer network or a service consumer hybrid cloud). This allows the IP address mapping controller to manage a cloud networking infrastructure so that a service provider network (e.g., a public cloud) can support applications that perform useful work despite the isolation requirements of a private cloud or network. For example, without saddling the private cloud or network user with a heavy pre-configuration burden, the approach disclosed herein supports redirection to dynamically determined IP addresses at the private cloud or network.
A model validation system is described that is configured to automatically validate model artifacts corresponding to models. For a model artifact being validated, the model validation system is configured to dynamically determine the validation checks to be performed for the model artifact, where the validation checks include various validation checks to be performed at the model artifact level and also for individual components included in the model artifact. The checks to be performed are dynamically determined based upon the attributes of the model artifact and of the components within the model artifact. The system is configured to generate a validation report that comprises information regarding the checks performed and the results generated from performing the various validation checks. The validation report may also include information suggesting actions for passing checks that result in a failed check, or for improving the scores of certain validation checks.
Systems and methods for a VLAN switching and routing service (VSRS) are disclosed herein. A method can include generating a table for an instance of a VSRS, which VSRS couples a first virtual layer 2 network (VLAN) with a second network. The table can contain information identifying IP addresses, MAC addresses, and virtual interface identifiers for instances within the virtual layer 2 network. The method can include receiving with the VSRS a packet from a first instance designated for delivery to a second instance within the virtual layer 2 network, identifying with the VSRS the second instance within the virtual layer 2 network for delivery of the packet based on information received with the packet and information contained within the table, and delivering the packet to the identified second instance.
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 45/02 - Topology update or discovery
H04L 45/745 - Address table lookup; Address filtering
H04L 49/00 - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION Packet switching elements
H04L 61/103 - Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]
H04L 61/4552 - Lookup mechanisms between a plurality of directories; Synchronisation of directories, e.g. metadirectories
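The VSRS table lookup described in the abstract above can be illustrated with a toy mapping table; the field names (`dst_ip`, `dst_mac`) are invented stand-ins for the information received with a packet:

```python
class VSRSTable:
    """Sketch of a per-VSRS-instance table mapping IP and MAC addresses
    to virtual interface identifiers for instances within the VLAN."""

    def __init__(self):
        self.by_ip, self.by_mac = {}, {}

    def learn(self, ip, mac, vif):
        """Record an instance's addresses against its virtual interface."""
        self.by_ip[ip] = vif
        self.by_mac[mac] = vif

    def route(self, packet):
        """Identify the destination virtual interface from packet headers,
        preferring an exact IP match and falling back to MAC."""
        vif = (self.by_ip.get(packet.get("dst_ip"))
               or self.by_mac.get(packet.get("dst_mac")))
        if vif is None:
            raise LookupError("unknown destination")
        return vif
```

Delivery then amounts to forwarding the packet to the returned virtual interface.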
Techniques are disclosed for tuning external invocations utilizing weight-based parameter resampling. In one example, a computer system determines a plurality of samples, each sample being associated with a parameter value of a plurality of potential parameter values of a particular parameter. The computer system assigns weights to each of the parameter values, and then selects a first sample for processing via a first external invocation based on a weight of the parameter value of the first sample. The computer system then determines feedback data associated with a level of performance of the first external invocation. The computer system adjusts the weights of the parameter values of the particular parameter based on the feedback data. The computer system then selects a second sample of the plurality of samples to be processed via execution of a second external invocation based on the adjustment of weights of the parameter values.
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
G06F 16/21 - Design, administration or maintenance of databases
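The weight-based resampling loop from the abstract above can be sketched as follows; the multiplicative update factors and the performance target are invented for illustration:

```python
import random

class WeightedResampler:
    """Sketch: each candidate parameter value starts with equal weight;
    samples are selected in proportion to weight, and feedback from an
    external invocation scales the chosen value's weight up or down."""

    def __init__(self, values, seed=0):
        self.weights = {v: 1.0 for v in values}
        self.rng = random.Random(seed)

    def select(self):
        """Pick a parameter value with probability proportional to weight."""
        vals = list(self.weights)
        return self.rng.choices(vals, weights=[self.weights[v] for v in vals])[0]

    def feedback(self, value, performance, target=1.0):
        """Better-than-target performance boosts the weight; worse shrinks it.
        The 1.5x/0.5x factors are placeholders for a real adjustment rule."""
        self.weights[value] *= 1.5 if performance >= target else 0.5
```

Over repeated invocations, well-performing parameter values are resampled more often.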
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR AUTOMATIC CATEGORY 1 MESSAGE FILTERING RULES CONFIGURATION BY LEARNING TOPOLOGY INFORMATION FROM NETWORK FUNCTION (NF) REPOSITORY FUNCTION (NRF)
A method for automatic configuration and use of Category 1 message filtering rules includes, at a network function (NF), subscribing, with an NF repository function (NRF), to receive notification of NF profile changes. The method further includes receiving, from the NRF and as a result of the subscribing, notification of an NF profile change. The method further includes automatically configuring, based on the notification of the NF profile change, at least one Category 1 message filtering rule implemented at the NF. The method further includes using the at least one Category 1 message filtering rule to filter service-based interface (SBI) messages.
Techniques for automatic error mitigation in database systems using alternate plans are provided. After receiving a database statement, an error is detected as a result of compiling the database statement. In response to detecting the error, one or more alternate plans that were used to process the database statement or another database statement that is similar to the database statement are identified. A particular alternate plan of the one or more alternate plans is selected. A result of the database statement is generated based on processing the particular alternate plan.
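A rough sketch of the fallback flow described above, with plans modeled as callables and an invented cache keyed by statement verb standing in for "similar statement" matching:

```python
def execute_with_fallback(statement, compile_fn, plan_cache):
    """On a compile error, fall back to a previously successful plan for
    this (or a similar) statement; otherwise run the freshly compiled plan.
    `plan_cache` maps a statement key to known-good alternate plans."""
    try:
        plan = compile_fn(statement)
    except Exception:
        # Keying by the leading verb is a placeholder for real similarity.
        alternates = plan_cache.get(statement.split()[0].upper(), [])
        if not alternates:
            raise  # no alternate plan: the error surfaces as usual
        plan = alternates[0]  # selection policy is likewise a placeholder
    return plan()
```

The statement still produces a result even though its fresh compilation failed.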
Techniques for UNDO and REDO operations in a computer-user interface are disclosed. The techniques enable users to configure entities for UNDO and REDO operations. The techniques also enable users to roll back an individual entity to its immediately previous state in one UNDO operation and subsequently to earlier states. Other entities are not affected by UNDO operations on that entity.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
66.
MANAGING COMPOSITE TOKENS FOR CONTENT ACCESS REQUESTS
Techniques for managing composite tokens for content access requests are disclosed. A system provides a client device with a composite token to allow the client device to make subsequent requests to access content of a content provider without requiring re-authentication of the client device with each request. The composite token includes an access segment associated with permissions to access content. The composite token further includes a regeneration segment associated with permissions to invalidate the composite token and create a new composite token associated with a same user or session. The system invalidates a previous composite token and regenerates a new composite token if the access segment expires. The system requires re-authentication if the regeneration segment expires or if a composite token is received that is not the most recently-generated composite token.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
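The two-segment token lifecycle described above might be sketched as follows (the TTL values, the `TokenService` API, and the string verdicts are all illustrative assumptions, not the disclosed design):

```python
import secrets
import time

class CompositeToken:
    def __init__(self, access_ttl, regen_ttl, now=None):
        now = time.time() if now is None else now
        self.token_id = secrets.token_hex(8)
        self.access_expiry = now + access_ttl   # access segment: permissions to access content
        self.regen_expiry = now + regen_ttl     # regeneration segment: permissions to regenerate

class TokenService:
    def __init__(self):
        self.latest = {}    # session -> id of the most recently generated token

    def issue(self, session, access_ttl=60, regen_ttl=3600, now=None):
        tok = CompositeToken(access_ttl, regen_ttl, now)
        self.latest[session] = tok.token_id
        return tok

    def check(self, session, tok, now):
        if tok.token_id != self.latest[session]:
            return "re-authenticate"        # not the most recently generated token
        if now < tok.access_expiry:
            return "allow"
        if now < tok.regen_expiry:
            return "regenerate"             # invalidate and mint a new composite token
        return "re-authenticate"            # regeneration segment expired
```

The "regenerate" branch corresponds to the abstract's invalidate-and-recreate step for the same user or session.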
Techniques are provided for synchronizing database system metadata between primary and standby persistent storage systems using an object store. A first persistent storage system is enabled to store first configuration metadata describing the configuration of the first persistent storage system. A first broker process of the first persistent storage system detects receipt, at an object store endpoint, of a new version of an object message sent by a second broker process of a second persistent storage system. The object message specifies a particular value of a configuration attribute of second configuration metadata from the second persistent storage system. In response to detecting receipt of the new version of the object message, the first broker process reads the particular value of the configuration attribute in the object message. The first broker process sets the configuration attribute in the first configuration metadata to the particular value.
Techniques described herein include frameworks and models for identifying, analyzing, and addressing hangs within distributed and heterogenous computing environments. A hang detection framework may model a distributed computing environment as a complex forest of interrelated requests. The hang detection framework may generate hang graphs based upon requests that are being processed and/or waited upon within the distributed environment. For example, a node within an acyclic graph may represent an execution entity that is currently processing one or more requests. Directed edges that connect one node to another may represent requests upon which an execution entity is waiting for another execution node to fulfill. The model may be used to isolate and address the root cause of hangs within the computing environment.
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
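The wait-for relationships described in the hang-detection abstract above can be modeled as directed edges. A minimal sketch (assuming a simple edge-list representation) of isolating likely root causes:

```python
def hang_roots(wait_edges):
    """wait_edges: list of (u, v) pairs where execution entity u waits on v.
    Entities that others wait on, but that wait on nothing themselves,
    are the likely root causes of a hang."""
    waiting = {u for u, _ in wait_edges}
    waited_on = {v for _, v in wait_edges}
    return waited_on - waiting
```

In the full framework the graph is acyclic and per-request, but even this toy version shows how following directed wait edges to their sinks isolates a root cause.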
69.
SYSTEM AND METHOD FOR SHARING VITALS AMONG SERVICE REPLICAS TO ENABLE PROCESSING OF LONG RUNNING AUTOMATION WORKFLOWS IN A CONTAINER ORCHESTRATION SYSTEM
Described herein are systems and methods for sharing vitals among service replicas to enable processing of long running workflows within a container orchestration system. A method can provide a container orchestration system that provides within one or more container orchestration environments, a runtime for containerized workloads and services. The method can provide a healthbus within the container orchestration system, the healthbus comprising a memory. The method can deploy a plurality of pods within the container orchestration system, each pod comprising a memory. The method can periodically publish, by each pod, a health message to the healthbus, the health message comprising at least an indication of an identification of the pod and an indication of a time interval in which the pod has been active. The method can periodically query, by each pod, the healthbus to determine a world view of the container orchestration system.
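A toy in-memory sketch of the healthbus pattern described above (the `HealthBus` class, the staleness window, and the message fields are assumptions for illustration; a real healthbus would be a shared service, not a local object):

```python
class HealthBus:
    """In-memory bus to which each pod periodically publishes its vitals."""
    def __init__(self):
        self._vitals = {}   # pod id -> (active_interval, published_at)

    def publish(self, pod_id, active_interval, now):
        # Health message: pod identification plus the interval it has been active.
        self._vitals[pod_id] = (active_interval, now)

    def world_view(self, now, stale_after=30):
        """A pod's view of which replicas are alive, based on recent messages."""
        return {pod: interval
                for pod, (interval, at) in self._vitals.items()
                if now - at <= stale_after}
```

Pods that stop publishing within the staleness window drop out of the world view, which is how replicas can notice a peer abandoning a long-running workflow.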
A read-write set of a blockchain transaction specifies a delta value by which to add or subtract from the current value of a delta-enabled world state record. In connection with committing the blockchain transaction to a world state record, the then-current value of the world state record is read and adjusted by the delta to determine the actual value to assign to the world state record. The actual value computed is correct even though the version number and current value at the time the read-write set was generated may have changed by the time the commitment of the blockchain transaction has commenced. Multi-version concurrency is foregone.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
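The commit-time delta adjustment above can be illustrated as follows (representing a world state record as a `(value, version)` tuple is an assumption made for brevity):

```python
def commit_delta(world_state, key, delta):
    """Apply a read-write-set delta at commit time. The then-current value is
    read here, at commit, so the result is correct even if the value and
    version changed after the read-write set was generated."""
    value, version = world_state.get(key, (0, 0))
    world_state[key] = (value + delta, version + 1)
    return world_state[key]
```

Because only the delta is recorded, two transactions that each add to the same record can both commit without a version conflict, which is why multi-version concurrency checking is foregone for delta-enabled records.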
Embodiments generate changes to a logical model. Embodiments receive the changes in a configuration file, the changes comprising a declarative configuration, extract the changes, load the changes into a database, and update a corresponding database model. Embodiments generate a first logical model that represents the database model and generate a second logical model that includes the changes. Embodiments automatically generate, in a container and using the declarative configuration, a compiled visualization image from the second logical model, wherein the visualization image is adapted to be used by a business intelligence system to provide a visualization of data that incorporates the changes.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR UNIQUELY IDENTIFYING SESSION CONTEXTS FOR HANDLING SPIRAL CALLS IN A SESSION INITIATION PROTOCOL (SIP) CALL STATEFUL PROXY
A method for identifying session contexts for handling spiral calls includes, at a SIP session manager, receiving, from a first node, a first SIP request message. The method further includes determining that the first SIP request message is a request for establishing a new session or subscription. The method further includes, in response to determining that the first SIP request message is a request for establishing a new session or subscription, generating a first unique identifier. The method further includes using the first unique identifier as or to generate a first session context identifier. The method further includes creating a first session context database record for the first session or subscription. The method further includes inserting the first unique identifier in a Record-Route header. The method further includes adding the Record-Route header to a first outbound SIP request message and routing the first outbound SIP request message.
Techniques are provided for determining the semantic-type of a target column based on “fingerprints” that are created based on the values in the target column. The fingerprint set for the target column is only generated once, not once per semantic-type. Thus, the target column only needs to be scanned once, and resource usage is minimized. Once generated, the fingerprint set of the column is compared against the fingerprint set that corresponds to each semantic-type to generate a “similarity measure”. The semantic-type whose fingerprint set produces the highest similarity measure relative to the target column's fingerprint set is determined to be the semantic-type of the target column.
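One way to picture the single-scan comparison described above (character k-grams and Jaccard similarity are stand-ins for the actual fingerprinting and similarity functions, which the abstract does not specify):

```python
def fingerprints(values, k=3):
    """One-pass fingerprint set for a column: here, character k-grams of each
    value. An illustrative choice, not the patented fingerprinting scheme."""
    fp = set()
    for v in values:
        s = str(v).lower()
        fp.update(s[i:i + k] for i in range(max(1, len(s) - k + 1)))
    return fp

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def infer_semantic_type(column, type_fingerprints):
    col_fp = fingerprints(column)   # computed once, not once per semantic-type
    return max(type_fingerprints,
               key=lambda t: jaccard(col_fp, type_fingerprints[t]))
```

The column is scanned a single time to build `col_fp`; only set comparisons run per candidate semantic-type.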
Principal component analysis (PCA) accelerates and increases accuracy of genetic algorithms. In an embodiment, a computer generates many original chromosomes. Each original chromosome contains a sequence of original values. Each position in the sequences in the original chromosomes corresponds to only one respective distinct parameter in a set of parameters to be optimized. Based on the original chromosomes, many virtual chromosomes are generated. Each virtual chromosome contains a sequence of numeric values. Positions in the sequences in the virtual chromosomes do not correspond to only one respective distinct parameter in the set of parameters to be optimized. Based on the virtual chromosomes, many new chromosomes are generated. Each new chromosome contains a sequence of values. Each position in the sequences in the new chromosomes corresponds to only one respective distinct parameter in the set of parameters to be optimized. The computer may be configured based on a best new chromosome.
Techniques are disclosed for managing aspects of identifying and/or deploying hardware of a dedicated cloud to be hosted at a customer location (a “DRCC”). A DRCC may comprise cloud infrastructure components provided by a cloud provider but hosted by computing devices located at the customer's (a “cloud owner's”) location. Services of the central cloud-computing environment may be similarly executed at the DRCC. A number of user interfaces may be hosted within the central cloud-computing. These interfaces may be used to track deployment and region data of the DRCC. A deployment state may be transitioned from a first state to a second state based at least in part on the tracking and the deployment state may be presented at one or more user interfaces. Using the disclosed user interfaces, a user may manage the entire lifecycle of a DRCC and its corresponding hardware components.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, comprising specially adapted graphical user interfaces [GUI]
H04L 41/0806 - Configuration setting for initial configuration or provisioning, e.g. plug-and-play
H04L 67/1008 - Server selection for load balancing based on parameters of servers, e.g. available memory or workload
H04L 67/53 - Network services using third party service providers
H04L 67/75 - Network services by indicating network or usage conditions on the user display
76.
BRANCH PREDICTION FOR USER INTERFACES IN WORKFLOWS
Systems, methods, and other embodiments associated with branch prediction in workflows are described. In one embodiment, a branch predictor is configured to make branch predictions at decision elements of a workflow that executes serially, by at least: monitoring the workflow to identify when a decision element is encountered during execution. In response to encountering a first decision element in the workflow that includes a plurality of branch paths: (i) executing a prediction that predicts a resulting path of the first decision element to predict a first user interface from a plurality of possible user interfaces that are associated with the workflow; and (ii) pre-building the first user interface into memory including a structure and content configured for being rendered on a display. The pre-built first user interface is then displayed on a display device when the workflow reaches a first terminal element associated with the first user interface.
The present disclosure relates to systems and methods for an intelligent assistant (e.g., a chatbot) that can be used to enable a user to generate a machine learning system. Techniques can be used to automatically generate a machine learning system to assist a user. In some cases, the user may not be a software developer and may have little or no experience in either machine learning techniques or software programming. In some embodiments, a user can interact with an intelligent assistant. The interaction can be aural, textual, or through a graphical user interface. The chatbot can translate natural language inputs into a structural representation of a machine learning solution using an ontology. In this way, a user can work with artificial intelligence without being a data scientist to develop, train, refine, and compile machine learning models as stand-alone executable code.
G06N 20/20 - Ensemble learning
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
78.
SCORE PROPAGATION ON GRAPHS WITH DIFFERENT SUBGRAPH MAPPING STRATEGIES
Techniques for propagating scores in subgraphs are provided. In one technique, multiple path scores are stored, each path score associated with a path (or subgraph), of multiple paths, in a graph of nodes. The path scores may be generated by a machine-learned model. For each path score, a path that is associated with that path score is identified and nodes of that path are identified. For each identified node, a node score for that node is determined or computed based on the corresponding path score and the node score is stored in association with that node. Subsequently, for each node in a subset of the graph, multiple node scores that are associated with that node are identified and aggregated to generate a propagated score for that node. In a related technique, a propagated score of a node is used to compute a score for each leaf node of the node.
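The two-stage propagation described above (path score, then per-node scores, then an aggregated propagated score) might look like this, using mean aggregation as one illustrative choice of aggregation function:

```python
def propagate_scores(path_scores, paths):
    """path_scores: path id -> score (e.g. from a machine-learned model).
    paths: path id -> list of node ids on that path.
    Each node inherits the score of every path it lies on; the propagated
    score for a node is the mean of those inherited node scores."""
    per_node = {}
    for pid, score in path_scores.items():
        for node in paths[pid]:                 # nodes of the scored path
            per_node.setdefault(node, []).append(score)
    return {n: sum(s) / len(s) for n, s in per_node.items()}
```

A node shared by several paths ends up with a propagated score that blends all of its paths' scores, which is the aggregation step the abstract describes.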
Systems that determine relationships between network components of a cluster using packet filters are disclosed. A system can identify objects that implement services of a cluster and network connections associated with respective pairs of the objects. The system can also filter out network connections from the identified network connections. The filtering can remove connections between source objects and destination objects based on the destination objects lacking any components that implement a service in the cluster. The filtering can also retain network connections between source objects and destination objects based on the source objects including components that each implement at least one service, and based on the destination objects including components that each implement at least one service. Additionally, the system can generate relationship maps and network topologies using the determined relationships.
H04L 41/0893 - Assignment of logical groups to network elements
H04L 41/12 - Discovery or management of network topologies
H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions, using analysis and optimisation of the required network resources
80.
TEXTUAL EXPLANATIONS FOR ABSTRACT SYNTAX TREES WITH SCORED NODES
Herein is a machine learning (ML) explainability (MLX) approach in which a natural language explanation is generated based on analysis of a parse tree such as for a suspicious database query or web browser JavaScript. In an embodiment, a computer selects, based on a respective relevance score for each non-leaf node in a parse tree of a statement, a relevant subset of non-leaf nodes. The non-leaf nodes are grouped in the parse tree into groups that represent respective portions of the statement. Based on a relevant subset of the groups that contain at least one non-leaf node in the relevant subset of non-leaf nodes, a natural language explanation of why the statement is anomalous is generated.
Systems and approaches are disclosed for analyzing security threat changes of proposed changes to an infrastructure environment, and for determining actions to be performed based on the security threat changes corresponding to those proposed changes.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
82.
CALIBRATING CONFIDENCE SCORES OF A MACHINE LEARNING MODEL TRAINED AS A NATURAL LANGUAGE INTERFACE
Techniques are disclosed herein for calibrating confidence scores of a machine learning model trained to translate natural language to a meaning representation language. The techniques include obtaining one or more raw beam scores generated from one or more beam levels of a decoder of a machine learning model trained to translate natural language to a logical form, where each of the one or more raw beam scores is a conditional probability of a sub-tree determined by a heuristic search algorithm of the decoder at one of the one or more beam levels, classifying, by a calibration model, a logical form output by the machine learning model as correct or incorrect based on the one or more raw beam scores, and providing the logical form with a confidence score that is determined based on the classifying of the logical form.
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
G06F 40/253 - Grammatical analysis; Style critique
83.
GENERATING TAGGED CONTENT FROM TEXT OF AN ELECTRONIC DOCUMENT
Techniques for generating formatting tags for textual content obtained from a source electronic document are disclosed. A system parses a digital file to obtain information about characters in an electronic document. The system applies tags to text generated based on the textual content of the electronic document by creating segments of textually-consecutive characters and applying corresponding text formatting style tags to the segments. The system further identifies segments of text overlapping bounding boxes in the electronic document. The system generates textual content including a segment of text and a corresponding hyperlink associated with the segment of text. The system further generates textual content by selectively applying line breaks from the source electronic document in the textual content.
G06T 7/70 - Determining position or orientation of objects or cameras
G06F 16/9538 - Presentation of query results
G06F 16/955 - Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
84.
Generating an electronic document with a consistent text ordering
Techniques for generating text content arranged in a consistent read order from a source document including text corresponding to different read orders are disclosed. A system parses a binary file representing an electronic document to identify characters and metadata associated with the characters. The system pre-sorts a character order of characters in each line of the electronic document to generate an ordered list of characters arranged according to the right-to-left reading order. The system performs a layout-mirroring operation to change a position of characters within the modified document relative to a right edge of the document and a left edge of the document. Subsequent to performing layout-mirroring, the system identifies native left-to-right reading-order text in-line with the native right-to-left reading-order text. The system flips the reading order of the native left-to-right read-order characters into the left-to-right reading order to be consistent with the native right-to-left read-order text.
G06F 17/00 - ELECTRIC DIGITAL DATA PROCESSING Digital computing or data processing equipment or methods, specially adapted for specific functions
G06F 40/103 - Formatting, i.e. changing of presentation of documents
Systems, computer instructions encoded on non-transitory computer-accessible storage media and computer-implemented methods are disclosed for improving the inference performance of machine learning systems using a divide-and-conquer technique. An application configured to perform inferences using a trained machine learning model may be evaluated to identify opportunities to execute the portions of the application in parallel. The application may then be divided into multiple independently executable tasks according to the identified opportunities. Weighting values for individual ones of the tasks may be assigned according to expected computational intensity values of the respective tasks. Then, computational resources may be distributed among the tasks according to the respective weighting values and the application executed using the distributed computational resources.
Techniques are disclosed herein for converting a natural language utterance to an intermediate database query representation. An input string is generated by concatenating a natural language utterance with a database schema representation for a database. Based on the input string, a first encoder generates one or more embeddings of the natural language utterance and the database schema representation. A second encoder encodes relations between elements in the database schema representation and words in the natural language utterance based on the one or more embeddings. A grammar-based decoder generates an intermediate database query representation based on the encoded relations and the one or more embeddings. Based on the intermediate database query representation and an interface specification, a database query is generated in a database query language.
Techniques are disclosed herein for training and deploying a named entity recognition model. The techniques include implementing a nested labeling scheme for named entities within the training data and then training a machine learning model on the training data. The techniques further include extracting an entity hierarchy for a predicted class based on a hierarchical template associated with a composite label, where the predicted class is representative of multiple named entity classes comprising at least a parent class and a child class associated with the composite label. The techniques further include increasing the volume of training data via data mining for sequence tags in a language corpus and then training a machine learning model on the training data.
Techniques are disclosed herein for using named entity recognition to resolve entity expression while transforming natural language to a meaning representation language. In one aspect, a method includes accessing natural language text, predicting, by a first machine learning model, a class label for a token in the natural language text, predicting, by a second machine-learning model, operators for a meaning representation language and a value or value span for each attribute of the operators, in response to determining that the value or value span for a particular attribute matches the class label, converting a portion of the natural language text for the value or value span into a resolved format, and outputting syntax for the meaning representation language. The syntax comprises the operators with the portion of the natural language text for the value or value span in the resolved format.
Parameter permutation is performed for federated learning to train a machine learning model. Parameter permutation is performed by client systems of a federated machine learning system on updated parameters of a machine learning model that have been updated as part of training using local training data. An intra-model shuffling technique is performed at the client systems according to a shuffling pattern. Then, the encoded parameters are provided to an aggregation server using Private Information Retrieval (PIR) queries generated according to the shuffling pattern.
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert, and users' encryption keys not requiring secrecy
H04L 9/14 - Arrangements for secret or secure communications; Network security protocols using a plurality of keys or algorithms
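The intra-model shuffling step from the federated-learning abstract above can be sketched as a seeded permutation shared between the shuffling and unshuffling sides; the Private Information Retrieval query machinery is omitted entirely, and all function names are illustrative:

```python
import random

def make_pattern(n, seed):
    """Derive a deterministic shuffling pattern from a shared seed."""
    rng = random.Random(seed)
    pattern = list(range(n))
    rng.shuffle(pattern)
    return pattern

def shuffle_params(params, pattern):
    """Client-side intra-model shuffling of updated parameters before upload."""
    return [params[i] for i in pattern]

def unshuffle_params(shuffled, pattern):
    """Invert the shuffle for a party that knows the pattern."""
    out = [None] * len(pattern)
    for pos, i in enumerate(pattern):
        out[i] = shuffled[pos]
    return out
```

The aggregation server in the disclosed scheme never learns the pattern; it receives the permuted parameters via PIR queries generated according to that same pattern.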
Techniques are disclosed herein for addressing catastrophic forgetting and over-generalization while training a model to transform natural language to a logical form such as a meaning representation language. The techniques include accessing training data comprising natural language examples, augmenting the training data to generate expanded training data, training a machine learning model on the expanded training data, and providing the trained machine learning model. The augmenting includes (i) generating contrastive examples by revising natural language of examples identified to have caused regression during training of a machine learning model with the training data, (ii) generating alternative examples by modifying operators of examples identified within the training data that belong to a concept that exhibits bias, or (iii) a combination of (i) and (ii).
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
G06N 3/006 - Artificial life, i.e. computing arrangements simulating life, based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
G06N 3/084 - Backpropagation, e.g. using gradient descent
91.
Accessing A Parametric Field Within A Specialized Context
A parametric constant resolves to different values in different contexts, but a single value within a particular context. An anchor constant is a parametric constant that allows for a degree of parametricity for an API point. The context for the anchor constant is provided by a caller to the API point. The anchor constant resolves to an anchor value that records specialization decisions for the API point within the provided context. Specialization decisions may include type restrictions, memory layout, and/or memory size. The anchor value together with an unspecialized type of the API point result in a specialized type of the API point. A class object representing the specialized type is created. The class object may be accessible to the caller, but the full value of the anchor value is not accessible to the caller. The API point is executed based on the specialization decisions embodied in the anchor value.
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06F 9/451 - Execution arrangements for user interfaces
G06F 12/02 - Addressing or allocation; Relocation
G06F 9/455 - Arrangements for executing specific programs Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
92.
TECHNIQUES FOR MAINTAINING SNAPSHOT KEY CONSISTENCY INVOLVING GARBAGE COLLECTION DURING FILE SYSTEM CROSS-REGION REPLICATION
Techniques are described for enabling concurrent cross-region replications and garbage collection while maintaining consistency and data integrity among file systems. In some embodiments, techniques for garbage collection fencing utilize a system-level garbage fencing key (GC fencing key) and one or more job-level GC fencing keys in a source file system that performs one or more cross-region replications with one or more target file systems, one replication and one job-level GC fencing key per target file system. In some embodiments, one job-level GC fencing key in a source file system and one job-level GC fencing key in a target file system together provide garbage fencing for a cross-region replication. In certain embodiments, the metadata information in a GC fencing key can inform, instruct, or be used to configure garbage collectors to skip garbage collection for a range of snapshots in a file system.
Techniques are disclosed for augmenting training data for training a machine learning model to generate database queries. Training data comprising a first training example comprising a first natural language utterance, a logical form for the first natural language utterance, and associated first metadata is obtained. From the first training example, a template utterance is generated. A second natural language utterance is generated by filling slots in the template utterance based on a database schema and database values. Updated metadata is produced based on the first metadata and the second natural language utterance. A second training example is generated, comprising the second natural language utterance, the logical form for the first natural language utterance, and the updated metadata. The training data is augmented by adding the second training example. A machine learning model is trained to generate a database query comprising the database operation using the augmented training data set.
Systems and methods identify whether an input utterance is suitable for providing to a machine learning model configured to generate a query for a database. Techniques include generating an input string by concatenating a natural language utterance with a database schema representation for a database; providing the input string to a first machine learning model; based on the input string, generating, by the first machine learning model, a score indicating whether the natural language utterance is translatable to a database query for the database and should be routed to a second machine learning model, the second machine learning model configured to generate a query for the database based on the natural language utterance; comparing the score to a threshold value; and responsive to determining that the score exceeds the threshold value, providing the natural language utterance or the input string to the second machine learning model.
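The score-then-route gate described above might be sketched as follows (the concatenation format, the threshold value, and the model callables are placeholders for the two machine learning models in the abstract):

```python
def route_utterance(utterance, schema, score_model, translate_model, threshold=0.5):
    """Gate a text-to-query model behind a translatability score."""
    input_string = utterance + " | " + schema   # concatenate utterance and schema
    score = score_model(input_string)           # first model: translatability score
    if score > threshold:
        return translate_model(input_string)    # second model: generate the query
    return None  # utterance judged untranslatable for this schema
```

Only utterances the first model scores above the threshold ever reach the (presumably more expensive) query-generation model.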
Systems and methods fine-tune a pretrained machine learning model. For a model having multiple layers, an initial set of configurations is identified, each configuration establishing layers to be frozen and layers to be fine-tuned. A configuration that is optimized with respect to one or more parameters is selected, establishing a set of fine-tuning layers and a set of frozen layers. An input for the model is provided to a remote system. An output of the set of frozen layers of the model, given the provided input, is received back and locally stored. The set of fine-tuning layers of the model is loaded from the remote system. The model is fine-tuned by retrieving the locally stored output of the set of frozen layers, and updating weights associated with the set of fine-tuning layers of the machine learning model.
Systems, methods, and other embodiments for passive component (e.g., spychip) detection through polarizability and advanced pattern recognition are described. In one embodiment a method includes applying an electromagnetic field to a target electronic system while the target electronic system is emitting a test pattern of electromagnetic interference. The method takes measurements of combined electromagnetic field strength emitted by the target electronic system while the electromagnetic field is being applied. The method detects the passive component based on dissimilarity between the measurements and estimates of electromagnetic field strength for the test pattern for a golden electronic system. The golden electronic system is of similar construction to the target electronic system and does not include the passive component. The method generates an electronic alert that the passive component is present in the target electronic system.
G01V 3/10 - Electric or magnetic prospecting or detecting; Measuring magnetic field characteristics of the earth, e.g. declination or deviation, operating with magnetic or electric fields produced or modified by the objects or geological structures to be detected, or by the detecting devices, using induction coils
In various embodiments, a data integration system is disclosed which enables incremental loads into a data warehouse by developing a data partitioning plan and selectively disabling and enabling indexes to facilitate incremental loads into fact tables.
Techniques are disclosed herein for adaptive training data augmentation to facilitate training named entity recognition (NER) models. Adaptive augmentation techniques are disclosed herein that take into consideration the distribution of different entity types within training data. The adaptive augmentation techniques generate adaptive numbers of augmented examples (e.g., utterances) based on the distribution of entities, to ensure that enough examples for minority-class entities are generated during augmentation of the training data.
The present disclosure relates to systems and methods for enhancing data from disjunctive sources using a weighted interaction graph. First data about first entities can be received from a first data source. Second data about second entities at least partially different than the first entities can be received from a second data source. Relationships between each entity of the first entities and second entities can be determined, and a set of classes can be inferred from the first data and from the second data. A weighted interaction graph can be generated. The weighted interaction graph can indicate a likelihood of each entity interacting with a corresponding class. An extended set of data can be generated using the weighted interaction graph. The extended set of data can be output to facilitate communication with third entities that include the first entities and the second entities.
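A minimal sketch of a weighted interaction graph built from observed entity-class interactions, in the spirit of the abstract above (using relative frequency as the likelihood estimate is an illustrative assumption):

```python
def build_interaction_graph(entity_classes):
    """entity_classes: entity -> list of class labels inferred from its data.
    The edge weight from an entity to a class is the fraction of that
    entity's observations falling in the class, read as the likelihood
    of the entity interacting with the class."""
    graph = {}
    for entity, labels in entity_classes.items():
        total = len(labels)
        counts = {}
        for c in labels:
            counts[c] = counts.get(c, 0) + 1
        graph[entity] = {c: n / total for c, n in counts.items()}
    return graph
```

Entities seen in either data source contribute edges, so the graph spans the union of the two entity sets and can be used to extend the data toward entities with similar class weights.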
A computer measures for each column in many rows, a respective frequency of statements that filter the column in a workload of database statements, a respective count of distinct values used for filtration on the column in each statement individually, a respective frequency of each of the counts of distinct values used for filtration on the column across all of the database statements, and a respective value range of the column for each of many storage zones. A respective efficiency is measured for each of many distinct interleaved sorts. Each interleaved sort uses a respective distinct subset of the columns. Each interleaved sort is based on portions of each of the values for each row in a sampled subset of rows in each column of the subset of the columns of the interleaved sort. Efficiency measurement is based on frequencies of statements, value ranges of columns for each storage zone, and frequencies of counts of distinct values.
G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 7/08 - Sorting, i.e. grouping record carriers in numerical or other ordered sequence according to the classification of at least some of the information they carry