Data can be received that includes information corresponding to a set of users. Privacy protection protocols that apply to the data can be identified. A subset of the data can be identified as being personally identifiable information (PII) data, where the subset includes a set of PII attributes. The PII attributes can be split into categories based on a format of a data field in the PII attributes and processed accordingly. The processed PII data can be combined with non-PII data to create processed client data. It can be determined that noise should be added to part of the processed PII data. An amount of noise can be determined based on the privacy protection protocols. The amount of noise can be added to part of the processed PII data to produce protected data. A machine-learning model can be trained using the protected data.
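The noise-addition step described above resembles a standard differential-privacy mechanism, where the noise scale is derived from the privacy parameters (the "privacy protection protocols"). A minimal Python sketch under that assumption, using the Laplace mechanism (the function names are illustrative, not from the patent):

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling for a zero-mean Laplace distribution
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def add_noise(values, sensitivity, epsilon, seed=0):
    # Laplace mechanism: the noise scale grows as the privacy
    # budget (epsilon) shrinks, per the applicable privacy protocol
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [v + laplace_noise(scale, rng) for v in values]
```

A smaller epsilon (stricter protocol) yields a larger scale and therefore stronger protection of the PII values before model training.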
Novel techniques are disclosed for virtualizing a cloud infrastructure in a region provided by a cloud service provider (CSP) to allow a reseller of the CSP to provide reseller-offered cloud services using a securely isolated portion of the CSP-provided infrastructure in the region and have a direct business relationship with the reseller's customers. In certain embodiments, the CSP-provided infrastructure in a region is organized into one or more data centers. In certain embodiments, the securely isolated portion of the CSP-provided infrastructure comprises at least one compute resource or a memory resource.
Novel techniques of resource allocation services for virtual private label cloud (vPLC) are disclosed. A vPLC is created for a reseller of a Cloud Services Provider (CSP) using CSP-provided infrastructure in a region such that the reseller can provide one or more reseller-offered cloud services to customers of the reseller. In certain embodiments, the resource allocation services check a first-level policy and a resource database to determine whether a requested resource is allowed and available to be allocated to a vPLC associated with a reseller. The resource allocation services may further check a second-level policy and the resource database to determine whether the requested resource is allowed and available to be allocated to a customer of the reseller. In some embodiments, the resource allocation services may allocate resources for a vPLC according to a partitioning requirement.
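The two-level policy check described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the patented implementation; the policy and database structures are hypothetical:

```python
def can_allocate(resource_type, vplc_id, customer_id,
                 first_level_policy, second_level_policy, resource_db):
    # First-level policy: is this resource type allowed for the reseller's vPLC?
    if resource_type not in first_level_policy.get(vplc_id, set()):
        return False
    # Second-level policy: is it also allowed for this customer of the reseller?
    if resource_type not in second_level_policy.get(customer_id, set()):
        return False
    # Availability check against the resource database
    return resource_db.get(resource_type, 0) > 0
```

Both policy levels must pass and the resource database must show available capacity before the allocation proceeds.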
Novel techniques for creating service endpoints associated with different virtual private label clouds (vPLCs) for accessing a cloud service are disclosed. In certain embodiments, an endpoint management service (EMS) uses a novel architecture that enables the concurrent use of multiple vPLC-specific service endpoints with one endpoint per cloud service per vPLC to access the same cloud service running on multiple vPLC-specific resources. In some embodiments, each vPLC-specific service endpoint may be associated with a fully qualified domain name (FQDN) and an IP address.
Novel techniques are disclosed for accessing resources in both CSP-provided infrastructure in a region and a remote infrastructure through various control planes associated with a virtual private label cloud (vPLC). In some embodiments, the CSP-provided infrastructure in a region and a remote infrastructure are connected through a communication channel. In some embodiments, a control plane associated with the CSP-provided infrastructure in a region can provide access to both infrastructures (i.e., the CSP-provided infrastructure in a region and the remote infrastructure). In some embodiments, a control plane associated with the vPLC in the CSP-provided infrastructure in a region can provide access to both infrastructures. In yet other embodiments, a control plane associated with the vPLC but located within the remote infrastructure can provide access to both infrastructures.
H04L 41/5041 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
Novel techniques are disclosed for providing vPLC-specific metadata service including customized vPLC-specific metadata. In certain embodiments, each vPLC may generate a customized metadata using its corresponding vPLC-specific customization instructions. In some embodiments, a vPLC-specific metadata service may be performed using pre-generated customized vPLC-specific metadata, on-the-fly customized metadata, pre-generated CSP-format metadata, or combinations thereof.
Techniques for predicting marketing outcomes using contrastive learning are disclosed, including: obtaining historical marketing messages; obtaining historical open rates associated respectively with the historical marketing messages; based on the historical marketing messages, generating latent space representations associated respectively with the historical marketing messages; based on the latent space representations and respective contents of the historical marketing messages, training a first machine learning model to map contents of marketing messages to corresponding latent space representations of the marketing messages; based at least on the latent space representations and the historical open rates, training a second machine learning model to map latent space representations of marketing messages to predicted open rates of the marketing messages.
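The two-model pipeline above (message content → latent representation → predicted open rate) can be illustrated with simple stand-ins. In this sketch, a hand-written encoder replaces the first trained model and a nearest-neighbour lookup in latent space replaces the second; neither is the patent's actual model architecture:

```python
def train_pipeline(messages, open_rates, encode):
    # Stage 1 stand-in: map each historical message to a latent representation
    latents = [encode(m) for m in messages]

    def predict(message):
        # Stage 2 stand-in: return the open rate of the historically
        # closest point in latent space (squared Euclidean distance)
        z = encode(message)
        best = min(range(len(latents)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(z, latents[i])))
        return open_rates[best]

    return predict
```

In the disclosed technique both stages would be learned models trained on the historical messages and open rates; the sketch only shows the direction of the data flow.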
Novel techniques are disclosed for enabling customizable consoles of different virtual private label clouds (vPLCs). In some embodiments, one console server may execute multiple consoles for multiple vPLCs and the CSP. In other embodiments, one console server may be dedicated to a vPLC-specific console. In certain embodiments, console customization, including a customized set of console user interfaces (UIs), may be performed for each vPLC-specific console.
Techniques are disclosed herein for objective function optimization in target-based hyperparameter tuning. In one aspect, a computer-implemented method is provided that includes initializing a machine learning algorithm with a set of hyperparameter values and obtaining a hyperparameter objective function that comprises a domain score for each domain, calculated based on a number of instances within an evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial. Each trial of a hyperparameter tuning process includes training the machine learning algorithm to generate a machine learning model, running the machine learning model in different domains using the set of hyperparameter values, and evaluating the machine learning model for each domain; once the machine learning model has reached convergence, at least one machine learning model is output.
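A minimal sketch of the domain-score objective described above, assuming (hypothetically) that the per-domain scores are simple accuracy fractions aggregated by their mean:

```python
def hyperparameter_objective(per_domain_correct, per_domain_total):
    # Domain score: fraction of evaluation instances correctly
    # predicted by the model within each domain during a trial
    domain_scores = [c / t for c, t in zip(per_domain_correct, per_domain_total)]
    # Aggregate the per-domain scores into one value for the tuner to optimize
    return sum(domain_scores) / len(domain_scores)
```

The tuner would evaluate this objective once per trial and keep the hyperparameter values that maximize it.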
G06F 40/40 - Processing or translation of natural language
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
10. IDENTITY MANAGEMENT FOR VIRTUAL PRIVATE LABEL CLOUDS
Novel techniques are disclosed for enabling identity cloud service for virtual private label clouds (vPLCs). A vPLC is created for a reseller of a Cloud Services Provider (CSP) using CSP-provided infrastructure in a region such that the reseller can provide one or more reseller-offered cloud services to customers of the reseller. In some embodiments, the identity management may be configured with either a shared identity cloud service (IDCS) stack model or an independent IDCS stack model. In certain embodiments, two-tier vPLC-aware identity management functions are performed for resellers of the CSP and customers of the resellers.
Techniques are disclosed for facilitating connectivity to vPLCs created in a CSP-provided infrastructure in a region. Within the CSP-provided infrastructure in a region, when the destination of a packet is determined to be an endpoint associated with a particular vPLC, the packet is tagged with information related to the particular vPLC. The vPLC-related information for the particular vPLC can include, for example, a vPLC identifier identifying the particular vPLC, an identifier identifying a customer associated with the endpoint, a virtual cloud network identifier identifying a virtual cloud network (VCN) belonging to the particular vPLC and where the endpoint is part of the VCN, and other vPLC-related information. The packet is then routed or communicated within the CSP-provided infrastructure in a region along with the tagged vPLC-related information. The vPLC-related information is used as part of the connectivity and for routing of packets within the CSP-provided infrastructure in a region.
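The tagging step described above can be sketched as a lookup and annotation. The field names (`dst_endpoint`, `vplc_tag`, etc.) are illustrative assumptions, not identifiers from the patent:

```python
def tag_packet(packet, endpoint_info):
    # Attach vPLC-related information when the destination endpoint
    # belongs to a particular vPLC; routing then consults this tag
    info = endpoint_info.get(packet["dst_endpoint"])
    if info is None:
        return packet  # destination is not a vPLC endpoint; no tag added
    tagged = dict(packet)  # do not mutate the caller's packet
    tagged["vplc_tag"] = {"vplc_id": info["vplc_id"],
                          "customer_id": info["customer_id"],
                          "vcn_id": info["vcn_id"]}
    return tagged
```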
In some aspects, techniques may include monitoring a primary load of a datacenter and a reserve load of the datacenter. The primary load and reserve load can be monitored by a computing device. The primary load of the datacenter can be configured to be powered by one or more primary generator blocks having a primary capacity, and the reserve load of the datacenter can be configured to be powered by one or more reserve generator blocks having a reserve capacity. Also, the techniques may include detecting that the primary load of the datacenter exceeds the primary capacity. In addition, the techniques may include connecting the reserve generator blocks to at least one of the primary generator blocks and the primary load using a computing device switch.
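The detection step above can be illustrated with a small helper that decides how many reserve generator blocks must be switched in once the primary load exceeds primary capacity. This is a sketch under the assumption that reserve blocks have equal capacity; the function name and units are hypothetical:

```python
import math

def reserve_blocks_needed(primary_load, primary_capacity, reserve_block_capacity):
    # How many reserve generator blocks must be connected to cover the
    # portion of the primary load that exceeds primary generator capacity
    excess = primary_load - primary_capacity
    if excess <= 0:
        return 0  # primary blocks suffice; keep reserve disconnected
    return math.ceil(excess / reserve_block_capacity)
```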
H02J 9/06 - Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting in which the distribution system is disconnected from the normal source and connected to a standby source with automatic change-over
G06F 1/26 - Power supply means, e.g. regulation thereof
Techniques for UNDO and REDO operations in a computer-user interface are disclosed. The techniques enable users to configure entities for UNDO and REDO operations. The techniques also enable users to roll back an individual entity to its immediately previous state in one UNDO operation and subsequently to earlier previous states. Other entities are not affected by the UNDO operations on that entity.
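The per-entity isolation described above amounts to keeping independent UNDO/REDO stacks keyed by entity. A minimal sketch (class and method names are illustrative, not from the patent):

```python
class EntityHistory:
    """Independent UNDO/REDO stacks per entity: rolling one entity
    back leaves every other entity's state untouched."""

    def __init__(self):
        self._undo = {}  # entity id -> stack of previous states
        self._redo = {}  # entity id -> stack of undone states

    def record(self, entity, previous_state):
        # Called before an edit; a new change invalidates the redo history
        self._undo.setdefault(entity, []).append(previous_state)
        self._redo[entity] = []

    def undo(self, entity, current_state):
        if not self._undo.get(entity):
            return current_state  # nothing to roll back for this entity
        self._redo.setdefault(entity, []).append(current_state)
        return self._undo[entity].pop()

    def redo(self, entity, current_state):
        if not self._redo.get(entity):
            return current_state
        self._undo.setdefault(entity, []).append(current_state)
        return self._redo[entity].pop()
```

Repeated `undo` calls walk one entity back through successively earlier states, matching the behavior in the abstract.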
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR AUTOMATIC CATEGORY 1 MESSAGE FILTERING RULES CONFIGURATION BY LEARNING TOPOLOGY INFORMATION FROM NETWORK FUNCTION (NF) REPOSITORY FUNCTION (NRF)
A method for automatic configuration and use of Category 1 message filtering rules includes, at a network function (NF), subscribing, with an NF repository function (NRF), to receive notification of NF profile changes. The method further includes receiving, from the NRF and as a result of the subscribing, notification of an NF profile change. The method further includes automatically configuring, based on the notification of the NF profile change, at least one Category 1 message filtering rule. The method further includes using the at least one Category 1 message filtering rule to filter service-based interface (SBI) messages.
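The subscribe-then-configure flow above can be sketched as two steps: a notification handler that learns a rule from the changed NF profile, and a filter that applies it to SBI messages. All field names here (`nfInstanceId`, `allowedNfTypes`, etc.) are hypothetical stand-ins, not the 3GPP attribute names used by the patent:

```python
def on_nf_profile_change(notification, rules):
    # Learn a Category 1 rule from the changed profile: record which
    # NF types may send SBI messages to this NF instance
    nf_id = notification["nfInstanceId"]
    rules[nf_id] = set(notification["profile"].get("allowedNfTypes", []))
    return rules

def allow_sbi_message(message, rules):
    # Apply the learned Category 1 rule to an incoming SBI message
    allowed = rules.get(message["targetNfInstanceId"])
    if allowed is None:
        return True  # no rule learned yet for this target; pass through
    return message["sourceNfType"] in allowed
```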
Techniques for automatic error mitigation in database systems using alternate plans are provided. After receiving a database statement, an error is detected as a result of compiling the database statement. In response to detecting the error, one or more alternate plans that were used to process the database statement or another database statement that is similar to the database statement are identified. A particular alternate plan of the one or more alternate plans is selected. A result of the database statement is generated based on processing the particular alternate plan.
Techniques are disclosed for managing aspects of identifying and/or deploying hardware of a dedicated cloud to be hosted at a customer location (a "DRCC"). A DRCC may comprise cloud infrastructure components provided by a cloud provider but hosted by computing devices located at the customer's (a "cloud owner's") location. Services of the central cloud-computing environment may be similarly executed at the DRCC. A number of user interfaces may be hosted within the central cloud-computing environment. These interfaces may be used to track deployment and region data of the DRCC. A deployment state may be transitioned from a first state to a second state based at least in part on the tracking, and the deployment state may be presented at one or more user interfaces. Using the disclosed user interfaces, a user may manage the entire lifecycle of a DRCC and its corresponding hardware components.
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 41/50 - Network service management, e.g. ensuring proper service fulfilment according to agreements
17. DYNAMIC INCLUSION OF METADATA CONFIGURATIONS INTO A LOGICAL MODEL
Embodiments generate changes to a logical model. Embodiments receive the changes in a configuration file, the changes comprising a declarative configuration, extract the changes, load them into a database, and update a corresponding database model. Embodiments generate a first logical model that represents the database model and generate a second logical model that includes the changes. Embodiments automatically generate, in a container and using the declarative configuration, a compiled visualization image from the second logical model, wherein the visualization image is adapted to be used by a business intelligence system to provide a visualization of data that incorporates the changes.
Techniques are disclosed herein for calibrating confidence scores of a machine learning model trained to translate natural language to a meaning representation language. The techniques include obtaining one or more raw beam scores generated from one or more beam levels of a decoder of a machine learning model trained to translate natural language to a logical form, where each of the one or more raw beam scores is a conditional probability of a sub-tree determined by a heuristic search algorithm of the decoder at one of the one or more beam levels, classifying, by a calibration model, a logical form output by the machine learning model as correct or incorrect based on the one or more raw beam scores, and providing the logical form with a confidence score that is determined based on the classifying of the logical form.
(e.g., utterances) based on the distribution of entities to make sure enough examples for minority-class entities are generated during augmentation of the training data.
Techniques are disclosed herein for converting a natural language utterance to an intermediate database query representation. An input string is generated by concatenating a natural language utterance with a database schema representation for a database. Based on the input string, a first encoder generates one or more embeddings of the natural language utterance and the database schema representation. A second encoder encodes relations between elements in the database schema representation and words in the natural language utterance based on the one or more embeddings. A grammar-based decoder generates an intermediate database query representation based on the encoded relations and the one or more embeddings. Based on the intermediate database query representation and an interface specification, a database query is generated in a database query language.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
A method for providing a dedicated region cloud at customer is provided. A first physical port of a network virtualization device (NVD) included in a datacenter is communicatively coupled to a first top-of-rack (TOR) switch and a second TOR switch. A second physical port of the NVD is communicatively coupled with a network interface card (NIC) associated with a host machine. The second physical port provides a first logical port and a second logical port for communications between the NVD and the NIC. The NVD receives a packet from the host machine via the first logical port or the second logical port. Upon receiving the packet, the NVD determines a particular TOR, from a group including the first TOR and the second TOR, for communicating the packet. The NVD transmits the packet to the particular TOR to facilitate communication of the packet to a destination host machine.
A method for providing a dedicated region cloud at customer is provided. A first physical port of a network virtualization device (NVD) included in a datacenter is communicatively coupled to a first top-of-rack (TOR) switch and a second TOR switch. A second physical port of the NVD is communicatively coupled to a network interface card (NIC) associated with a host machine. The NVD receives a packet from the host machine via the second physical port of the NVD. The NVD further determines a particular TOR, from a group including the first TOR and the second TOR, for communicating the packet, and transmits the packet to the particular TOR to facilitate communication of the packet to a destination host machine.
Disclosed herein is a method of providing fault domains within a rack. An availability domain comprising a rack is provided, where the rack comprises a plurality of top-of-rack (TOR) switches and a plurality of host machines. A first fault domain is created within the availability domain. The first fault domain comprises a first TOR switch from the plurality of TOR switches and a first subset of host machines from the plurality of host machines. The first subset of host machines is communicatively coupled to the first TOR switch. A second fault domain is created within the availability domain, where the second fault domain comprises a second TOR switch from the plurality of TOR switches and a second subset of host machines from the plurality of host machines. The second subset of host machines is communicatively coupled to the second TOR switch.
Described herein is a network fabric architecture for a DRCC. The fabric includes a plurality of blocks of switches. A compute fabric (CFAB) block is provided that is communicatively coupled to the plurality of blocks of switches. The CFAB block includes: (i) a set of one or more racks, where each rack comprises one or more servers configured to execute one or more workloads of a customer, and (ii) a first plurality of switches organized into a first plurality of levels. The first plurality of switches communicatively couples the set of one or more racks to the plurality of blocks of switches. A network fabric block is provided that is communicatively coupled to the plurality of blocks of switches and includes (i) one or more edge devices, including a first edge device providing connectivity (to a workload) to a first external resource, and (ii) a second plurality of switches organized into a second plurality of levels. The second plurality of switches communicatively couples the one or more edge devices to the plurality of blocks of switches.
Techniques are disclosed for managing aspects of a dedicated region cloud at a customer location (a "DRCC"). A DRCC may comprise cloud infrastructure components provided by a cloud provider and hosted by computing devices located at the customer's (a "cloud owner's") location. Services of the central cloud-computing environment may be similarly executed at the DRCC. The DRCC may include a service configured to collect, store, and/or present data corresponding to the cloud infrastructure components via one or more interfaces (e.g., interfaces provided to the cloud provider and/or the cloud owner). Data collected within the DRCC (e.g., capacity and usage data, etc.) may be provided and accessible to the central cloud at any suitable time. Obtaining such data enables the user to ascertain various operational aspects of the DRCC, while enabling the system and/or user to execute various DRCC-specific operations regarding capacity planning, health and performance, change management, and the like.
H04L 49/25 - Routing or path finding in a switch fabric
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/04 - Network management architectures or arrangements
Techniques for automatically partitioning materialized views are provided. In one technique, a definition of a materialized view is identified. Based on the definition, multiple candidate partitioning schemes are identified. A query is generated that indicates one or more of the candidate partitioning schemes. The query is then executed, where executing the query results in one or more partition counts, each corresponding to a different candidate partitioning scheme of the one or more candidate partitioning schemes. Based on the one or more partition counts, a candidate partitioning scheme is selected from among the plurality of candidate partitioning schemes. The materialized view is automatically partitioned based on the candidate partitioning scheme.
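The count-then-select step described above can be sketched in plain Python, with dictionaries standing in for the generated query and its results. The scheme names, the key functions, and the "closest to a target count" selection heuristic are all illustrative assumptions, not the patent's actual selection criteria:

```python
def partition_counts(rows, candidate_schemes):
    # One scan over the base data yields, per candidate scheme,
    # the number of partitions that scheme would create
    return {name: len({key(r) for r in rows})
            for name, key in candidate_schemes.items()}

def pick_scheme(counts, target):
    # Choose the scheme whose partition count is closest to a target count
    return min(counts, key=lambda name: abs(counts[name] - target))
```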
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
27. TARGETED ENERGY USAGE DEVICE PRESENCE DETECTION USING MULTIPLE TRAINED MACHINE LEARNING MODELS
Embodiments generate machine learning predictions to discover targeted device energy usage. Embodiments train a first machine learning model to predict a presence of a first device, where the training data used to train the first machine learning model is deficient for a second device. Embodiments train a second machine learning model to predict a presence of the second device. Embodiments receive input data of household energy use and weather data and, based on the input data, use the trained first machine learning model to predict the presence of the first device per household. Based on the input data, embodiments use the trained second machine learning model to predict the presence of the second device per household. Embodiments then subtract the households predicted to have the second device from the households predicted to have the first device to generate a prediction of households that have the first device but not the second device.
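The final subtraction step is a set difference over the two models' predictions. A minimal sketch, with the two trained models represented as arbitrary callables:

```python
def households_with_first_only(households, predict_first, predict_second):
    # Apply both trained models, then subtract the second-device
    # households from the first-device households
    has_first = {h for h in households if predict_first(h)}
    has_second = {h for h in households if predict_second(h)}
    return has_first - has_second
```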
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
28. INCREASING OLTP THROUGHPUT BY IMPROVING THE PERFORMANCE OF LOGGING USING PERSISTENT MEMORY STORAGE
Before modifying a persistent online redo log (ORL), a database management system (DBMS) persists redo for a transaction and acknowledges that the transaction is committed. Later, the redo is appended onto the ORL. The DBMS stores first redo for a first transaction into a first PRB and second redo for a second transaction into a second PRB. Later, both redo are appended onto an ORL. The DBMS stores redo of first transactions in volatile session log buffers (SLBs) respectively of database sessions. That redo is stored in a volatile shared buffer that is shared by the database sessions. Redo of second transactions is stored in the volatile shared buffer, but not in the SLBs. During re-silvering and recovery, the DBMS retrieves redo from fast persistent storage and then appends the redo onto an ORL in slow persistent storage. After re-silvering, during recovery, the redo from the ORL is applied to a persistent database block.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
29. EXTREMA-PRESERVED ENSEMBLE AVERAGING FOR ML ANOMALY DETECTION
Systems, methods, and other embodiments associated with preserving signal extrema for ML model training when ensemble averaging time series signals for ML anomaly detection are described. In one embodiment, a method includes identifying locations and values of extrema in a training signal; ensemble averaging the training signal to produce an averaged training signal; placing the values of the extrema into the averaged training signal at the respective locations of the extrema to produce an extrema-preserved averaged training signal; and training a machine learning model using the extrema-preserved averaged training signal to detect anomalies in a signal.
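The average-then-reinsert idea above can be sketched compactly. Here a moving average stands in for the ensemble-averaging step, and only the global minimum and maximum are preserved; the actual embodiment may track many local extrema:

```python
def extrema_preserved_average(signal, window=3):
    # Identify the locations and values of the extrema in the raw signal
    i_min = min(range(len(signal)), key=signal.__getitem__)
    i_max = max(range(len(signal)), key=signal.__getitem__)
    # Moving average as a simple stand-in for ensemble averaging
    half = window // 2
    averaged = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half): i + half + 1]
        averaged.append(sum(chunk) / len(chunk))
    # Re-insert the preserved extrema at their original locations,
    # so smoothing does not erase the peaks the model must learn
    averaged[i_min] = signal[i_min]
    averaged[i_max] = signal[i_max]
    return averaged
```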
INTEGRATION OF ANONYMIZED, MEMBER-DRIVEN CLOUD-BASED GROUPS AND CONTENT DELIVERY SERVICES THAT COLLECT INDIVIDUAL INFORMATION ABOUT CONTENT INTERACTIONS WITHOUT COMPROMISING IDENTITIES OF GROUP MEMBERS
Techniques are described through which groups of individuals and/or other entities may interface with a data cloud blockchain network and/or cloud-based platform to collectively share data in a secure, controlled manner. Decentralized groups that are connected to the data cloud network may be registered and listed in a searchable directory. Entities that are interested in accessing data associated with a group may browse the directory, execute smart contracts within a blockchain, and track online content interactions of a group in a manner that does not compromise the anonymity of individual group members. Data usage and performance metrics may be tracked on the blockchain network using data cloud services, and the metrics may be written to distributed ledgers within the blockchain network. Smart contracts and chaincode within the network may initiate blockchain transactions based on performance metrics and/or other aspects associated with accessing information about a group.
Systems and methods for geometric based flow programming are disclosed herein. The method can include receiving at least one compiled rule at a first Network Virtualization Device ("NVD"), each of the at least one compiled rule being applicable to a class of packets received by the first NVD for delivery to a Virtualized Network Interface Card ("VNIC"). The method can include receiving a first packet at the first NVD for delivery to a first VNIC, determining with the first NVD that a first rule of the at least one compiled rule is applicable to the first packet, and processing with the first NVD the first packet according to the first rule.
In some embodiments, a two-stage augmentation technique is applied to a first set of training data for an intent prediction model by first applying one or more data augmentation techniques followed by an additional augmentation technique to post-process the first-stage result; the first set of training data and the post-processed augmented training data are combined to train the intent prediction model. In another embodiment, an entity-aware ("EA") technique and the two-stage augmentation technique are applied together to result in a second set of training data; the first and the second sets of training data are combined to train the intent prediction model. In another embodiment, one or more negative entity-aware data augmentation techniques are applied to the first set of training data to result in a second set of training data; the first and the second sets of training data are combined to train the intent prediction model.
Techniques are described for efficient replication and for maintaining snapshot data consistency during file storage replication between file systems in different cloud infrastructure regions. In certain embodiments, provenance IDs are used to efficiently identify a starting point (e.g., a base snapshot) for a cross-region replication process, conserving cloud resources while reducing network and IO traffic. In certain embodiments, snapshot creation and deletion requests that occur during cross-region replications may be temporarily withheld until appropriate times to execute such requests safely, depending on the timing relationship between such requests and cross-region replication cycles.
Techniques are described for checkpointing multiple key ranges in parallel and concurrently during file storage replications between file systems in different cloud infrastructure regions. In certain embodiments, multiple range threads processing multiple key ranges, one thread per key range, create checkpoints for their respective key ranges in parallel and concurrently after processing a pre-determined number of B-tree keys. In certain embodiments, each thread requests a lock from a central checkpoint record and takes turns updating a status byte while continuing to process the B-tree keys in its responsible key range. In certain embodiments, upon encountering a failure event, either a system crash or a thread failure, each thread restarts its B-tree key processing from a B-tree key after the most recent checkpoint. In certain embodiments, two generation numbers are assigned to two groups of processed B-tree key-value pairs, one before and one after a failure event, within a key range.
Techniques are described for performing different types of restart operations for a file storage replication between a source file system and a target file system in different cloud infrastructure regions. In certain embodiments, the disclosed techniques perform a restart operation to terminate a current cross-region replication by synchronizing resource cleanup operations in the source file system and the target file system, respectively. In other embodiments, disclosed techniques perform a restart operation to allow a customer to reuse the source file system by identifying a restartable base snapshot in the source file system without dependency on the target file system.
Techniques are described for implementing a container environment where each pod within the container environment is provided with a unique IP address and a virtual communication device such as an IPvlan device. Communications from source pods are directly routed to destination pods within the container environment by one or more virtualized network interface cards (VNICs) utilizing the unique IP addresses of the destination pods, without the need for bridging and encapsulation. This reduces a size of data being transmitted and also eliminates a compute cost necessary to perform encapsulation of data during transmission.
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 45/00 - Routing or path finding of packets in data switching networks
37. SCALABLE AND SECURE CROSS-REGION AND OPTIMIZED FILE SYSTEM REPLICATION FOR CLOUD SCALE
Novel techniques for end-to-end file storage replication and security between file systems in different cloud infrastructure regions are disclosed herein. In one embodiment, a file storage service generates deltas between snapshots in a source file system, and transfers the deltas and associated data through a high-throughput object storage to recreate a new snapshot in a target file system located in a different region during disaster recovery. The file storage service utilizes novel techniques to achieve scalable, reliable, and restartable end-to-end replication. Novel techniques are also described to ensure a secure transfer of information and consistency during the end-to-end replication.
Techniques are described for performing hierarchical key management involving an end-to-end file storage replication between different cloud infrastructure regions. The hierarchical key management comprises three different keys: a first security key for the source region; a session key, valid only for a session, for the transfer of data between two different regions; and a second security key for the target region. Novel techniques are also described for using different file keys for different files of a file system in each region.
Techniques are disclosed for augmenting data sets used for training machine learning models and for generating predictions by trained machine learning models. These techniques may increase a number and diversity of examples within an initial training dataset of sentences by extracting a subset of words from the existing training dataset of sentences. The techniques may conserve scarce sample data in few-shot situations by training a data generation model using general data obtained from a general data source.
Log data that includes a plurality of log records is asynchronously processed to validate a configuration of each log record and data included in each log record. It is determined that one or more attributes of a particular subset of log records of the plurality of log records corresponds to one or more errors. Using the particular subset, one or more enriched log records are generated by augmenting each log record of the particular subset of log records with error information that indicates one or more categories corresponding to the one or more errors. A user interface is generated to facilitate correction of the one or more errors, the user interface comprising a plurality of interactive elements corresponding to a plurality of error metrics of different categories of errors, wherein the one or more categories of the one or more errors are included in the different categories of errors.
The present invention relates to machine learning (ML) explainability (MLX). Herein are local explanation techniques for black box ML models based on coalitions of features in a dataset. In an embodiment, a computer receives a request to generate a local explanation of which coalitions of features caused an anomaly detector to detect an anomaly. During unsupervised generation of a new coalition, a first feature is randomly selected from features in a dataset. Which additional features in the dataset can join the coalition, because they have mutual information with the first feature that exceeds a threshold, is detected. For each feature that is not in the coalition, values of the feature are permuted in imperfect copies of original tuples in the dataset. An average anomaly score of the imperfect copies is measured. Based on the average anomaly score of the imperfect copies, a local explanation is generated that references (e.g. defines) the coalition.
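The permutation step described above can be sketched briefly. In this illustrative fragment (all function and variable names are assumptions, and a toy distance-based scorer stands in for the black-box anomaly detector), every feature outside the coalition is permuted across the dataset to produce the "imperfect copies," whose anomaly scores are then averaged:

```python
import numpy as np

def coalition_anomaly_score(score_fn, X, coalition, rng):
    """For each feature not in the coalition, permute its column across
    the dataset, producing 'imperfect copies' of the original tuples,
    then return the average anomaly score of the copies."""
    Xp = X.copy()
    non_members = [j for j in range(X.shape[1]) if j not in coalition]
    for j in non_members:
        Xp[:, j] = rng.permutation(Xp[:, j])
    return float(np.mean(score_fn(Xp)))

# Toy scorer: distance of each tuple from the column-wise mean.
def score_fn(X):
    return np.linalg.norm(X - X.mean(axis=0), axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))

# Sanity check: a coalition containing every feature permutes nothing,
# so its average score equals the original data's average score.
base = float(np.mean(score_fn(X)))
full = coalition_anomaly_score(score_fn, X, {0, 1, 2, 3}, rng)
assert abs(base - full) < 1e-12
```

A small average score for a coalition's imperfect copies suggests the coalition's own (unpermuted) values carry the anomaly signal, which is what the generated local explanation references.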
Methods, systems, and computer readable media for reporting a reserved load to a network function in a communications network are disclosed. One method includes determining, by a NF service producer, a current compute load metric value for the NF service producer operating in a communications network and detecting a number of active sessions supported at the NF service producer. The method further includes deriving a reserved compute load metric value corresponding to a predicted number of subsequent service requests at the NF service producer based on the number of active sessions and a predictive reserved load percentage value and calculating an adjusted reported compute load metric value amounting to a sum of the current compute load metric value and the reserved compute load metric value.
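The adjusted-load arithmetic can be illustrated with a minimal sketch. The function name, the per-session cost model, and the example values are assumptions made here for illustration; only the sum of current and reserved load follows directly from the method:

```python
def adjusted_compute_load(current_load: float,
                          active_sessions: int,
                          reserved_pct: float,
                          per_session_load: float = 1.0) -> float:
    """Derive a reserved compute load from the active session count and a
    predictive reserved-load percentage, then report the sum of the
    current load and the reserved load."""
    predicted_requests = active_sessions * reserved_pct
    reserved_load = predicted_requests * per_session_load
    return current_load + reserved_load

# e.g. current load 40.0, 100 active sessions, 20% predicted follow-on
# requests, unit cost per request
print(adjusted_compute_load(40.0, 100, 0.20))  # 60.0
```

Reporting the reserved component alongside the current load lets consumers avoid overloading a producer that is about to receive follow-on requests from its existing sessions.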
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 41/0897 - Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities by horizontal or vertical scaling of resources, or by migrating entities, e.g. virtual resources or entities
H04L 41/147 - Network analysis or design for predicting network behaviour
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 43/20 - Arrangements for monitoring or testing data switching networks the monitoring system or the monitored elements being virtualised, abstracted or software-defined entities, e.g. SDN or NFV
H04L 67/51 - Discovery or management thereof, e.g. service location protocol [SLP] or web services
43.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR UTILIZING NETWORK FUNCTION (NF) SERVICE ATTRIBUTES ASSOCIATED WITH REGISTERED NF SERVICE PRODUCERS IN A HIERARCHICAL NETWORK
Methods, systems, and computer readable media for utilizing network function (NF) service attributes associated with registered network function service producers in a hierarchical network are disclosed. One method comprises receiving, by a root network function repository function (NRF) operating in a hierarchical network and from a regional NRF operating in a first region of the hierarchical network, a NF registration message or a NF update message, wherein the NF registration message or NF update message includes an NrfInfo structure that contains one or more NF service attributes specifying one or more NF services provided by at least one NF service producer registered with the regional NRF. The method further includes extracting, by the root NRF, the one or more NF service attributes from the NrfInfo structure, and creating, by the root NRF, one or more indexed entries containing the one or more NF service attributes in a local state information database.
The present invention relates to threshold estimation and calibration for anomaly detection. Herein are machine learning (ML) and extreme value theory (EVT) techniques for normalizing and thresholding anomaly scores without presuming a values distribution. In an embodiment, a computer receives many unnormalized anomaly scores and, according to peak over threshold (POT), selects a highest subset of the unnormalized anomaly scores that exceed a tail threshold. Based on the highest subset of the unnormalized anomaly scores, parameters of a probability density function are trained according to EVT. After training and in a production environment, a normalized anomaly score is generated based on an unnormalized anomaly score and the trained parameters of the probability density function. Anomaly detection compares the normalized anomaly score to an optimized anomaly threshold.
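A compact sketch of the peaks-over-threshold step follows. For brevity it fits an exponential tail (the shape-zero special case of the generalized Pareto family) to the exceedances rather than the full EVT fit described above; the function names and quantile choice are assumptions:

```python
import numpy as np

def fit_tail(scores, tail_quantile=0.95):
    """Peaks-over-threshold: keep scores above a high quantile and fit an
    exponential tail to the exceedances (MLE scale = mean exceedance)."""
    threshold = np.quantile(scores, tail_quantile)
    exceedances = scores[scores > threshold] - threshold
    scale = exceedances.mean()
    return threshold, scale

def normalize(score, threshold, scale):
    """Map an unnormalized score to [0, 1) via the fitted tail CDF."""
    if score <= threshold:
        return 0.0
    return 1.0 - np.exp(-(score - threshold) / scale)

rng = np.random.default_rng(0)
raw = rng.exponential(1.0, size=10_000)
t, s = fit_tail(raw)
print(normalize(t + 3 * s, t, s))  # 1 - e**-3, roughly 0.95
```

Because only the tail is modeled, no distribution needs to be presumed for the bulk of the scores; normalized scores can then be compared against a single optimized anomaly threshold.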
Systems, methods, and other embodiments are described for predicting a future characteristic of a target object/product based on a digital target image. In one embodiment, the method includes a machine learning model identifying a set of similar known product images by comparing the target product image to a group of known product images. For each similar known product image, product attributes are retrieved including historical characteristic/event data associated with each similar known product image. A predicted characteristic model for the target product is generated which is based on a similarity score combined with the historical characteristic/event data associated with each similar known product image to generate a predicted characteristic for the target product.
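One plausible reading of "similarity score combined with the historical characteristic/event data" is a similarity-weighted average over the retrieved known products. The sketch below is an assumption-laden illustration, not the disclosed model:

```python
def predict_characteristic(similar):
    """`similar` is a list of (similarity_score, historical_value) pairs
    for known product images judged similar to the target image.  The
    prediction is the similarity-weighted average of the historical
    values."""
    total = sum(score for score, _ in similar)
    return sum(score * value for score, value in similar) / total

# Three similar known products with historical characteristic values
print(predict_characteristic([(0.9, 100.0), (0.6, 80.0), (0.5, 60.0)]))
```

More-similar known products thus pull the predicted characteristic toward their own historical values.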
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06Q 30/0201 - Market modelling; Market analysis; Collecting market data
46.
SYSTEMS AND METHODS FOR HEADER PROCESSING IN A SERVER COMPUTING ENVIRONMENT
In accordance with an embodiment, described herein are systems and methods for use with a microservices or other computing environment, including a web server together with related libraries and features usable to build cloud-native applications or services. In accordance with various embodiments, the systems and methods can include the use of various components or features that support, for example: (a) header processing, (b) client and server connection abstraction, (c) router abstraction, and (d) identifying a protocol of a connection.
The present invention avoids overfitting in deep neural network (DNN) training by using multitask learning (MTL) and self-supervised learning (SSL) techniques when training a multi-branch DNN to encode a sequence. In an embodiment, a computer first trains the DNN to perform a first task. The DNN contains: a first encoder in a first branch, a second encoder in a second branch, and an interpreter layer that combines data from the first branch and the second branch. The computer then trains the DNN to perform a second task. After the first and second trainings, production encoding and inferencing occur. The first encoder encodes a sparse feature vector into a dense feature vector from which an inference is inferred. In an embodiment, a sequence of log messages is encoded into an encoded trace. An anomaly detector infers whether the sequence is anomalous. In an embodiment, the log messages are database commands.
Techniques for implementing a semi-supervised framework for purpose-oriented anomaly detection are provided. In one technique, a data item is inputted into an unsupervised anomaly detection model, which generates first output. Based on the first output, it is determined whether the data item represents an anomaly. In response to determining that the data item represents an anomaly, the data item is inputted into a supervised classification model, which generates second output that indicates whether the data item is unknown. In response to determining that the data item is unknown, a training instance is generated based on the data item. The supervised classification model is updated based on the training instance.
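The two-stage flow can be sketched as a small pipeline. The detector, classifier, and returned labels below are toy stand-ins and all names are assumptions; the point is the control flow: only flagged items reach the supervised stage, and "unknown" items feed the model update:

```python
def process(item, detect_anomaly, classify, training_set):
    """Route a data item through the unsupervised detector, then the
    supervised classifier; items the classifier cannot place become new
    training instances for updating it."""
    if not detect_anomaly(item):
        return "normal"
    label = classify(item)
    if label == "unknown":
        training_set.append(item)  # training instance for the update
        return "unknown-anomaly"
    return label

training = []
detect = lambda x: x > 10                       # toy unsupervised detector
classify = lambda x: "spike" if x > 100 else "unknown"
print(process(5, detect, classify, training))    # normal
print(process(50, detect, classify, training))   # unknown-anomaly
print(process(500, detect, classify, training))  # spike
```

This division of labor keeps the cheap unsupervised model as a first filter while the supervised model improves only on the anomalies it has actually failed to recognize.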
Discussed herein is a framework that facilitates access to services offered in a target cloud environment for resources deployed in a source cloud environment. The source cloud environment is different from, and independent of, the target cloud environment. A compute instance executed in a source cloud environment generates a request to use a service provided in the target cloud environment. The request is transmitted from the source cloud environment to the target cloud environment via an intercloud service gateway. The service is executed in the target cloud environment based on an access role that is associated with the compute instance.
The present disclosure relates to a framework that provides execution of serverless functions in a cloud environment based on occurrence of events/notifications from services in an entirely different cloud environment. A target agent obtains a notification from a source agent, where the target agent is deployed in a target cloud environment and the source agent is deployed in a source cloud environment that is different than the target cloud environment. The target agent determines a function that is to be invoked based on the notification. Upon successfully verifying whether the target agent is permitted to invoke the function that is deployed in a target customer tenancy of the target cloud environment, the target agent invokes the function in the target customer tenancy of the target cloud environment.
The present embodiments relate to identifying a ransomware attack. One embodiment relates to a method comprising configuring an operating system to collect metrics related to a hardware component. A message can be received from a user space library to validate an instruction detected in a cache, the instruction being associated with the hardware component. A metric can be compared to a threshold metric. The metric can be associated with the hardware component. A likelihood of a ransomware attack can be determined based at least in part on the comparison. A message can be transmitted to the user space library comprising the determination of the likelihood of the ransomware attack.
Aspects of the disclosure include a dynamic cloud workload reallocation based on an active ransomware attack. An example method includes receiving a first message that a computing instance is potentially infected by ransomware. The method further includes receiving a security state-based metric related to the computing instance based at least in part on the first message. The method further includes comparing the security state-based metric to a threshold metric. The method further includes determining a likelihood of a ransomware attack based at least in part on the comparison. The method further includes transmitting a second message to a job scheduler to reschedule workloads directed toward the computing instance based at least in part on the determination.
Techniques are described herein for an integrated in-front database cache ("IIDC") providing an in-memory, consistent, and automatically managed cache for primary database data. An IIDC comprises a database server instance that (a) caches data blocks from a source database managed by a second database server instance, and (b) performs recovery on the cached data using redo records for the database data. The IIDC instance implements relational algebra and is configured to run any complexity of query over the cached database data. Any cache miss results in the IIDC instance fetching the needed block(s) from a second database server instance managing the source database that provides the IIDC instance with the latest version of the requested data block(s) that is available to the second instance. Because redo records are used to continuously update the data blocks in an IIDC cache, the IIDC guarantees consistency of query results.
Discussed herein is a framework that provisions for customized processing for different classes of traffic. A network device in a communication path between a source host machine and a destination host machine extracts a tag from a packet received by the network device. The packet originates at a source executing on the source host machine and whose destination is the destination host machine. The tag is set by the source and is indicative of a first traffic class to be associated with the packet, the first traffic class being selected by the source from a plurality of traffic classes. The network device determines, based on the tag, that the first traffic class corresponds to a latency sensitive traffic and processes the packet using one or more settings configured at the network device for processing packets associated with the first traffic class.
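The tag-to-settings lookup at the network device can be illustrated with a minimal sketch. The tag values, class names, and setting fields below are assumptions, not protocol constants:

```python
# Illustrative tag -> traffic-class -> per-class device settings lookup.
TAG_TO_CLASS = {0x1: "latency-sensitive", 0x2: "bandwidth-sensitive"}
CLASS_SETTINGS = {
    "latency-sensitive":   {"queue": "priority", "pacing": False},
    "bandwidth-sensitive": {"queue": "bulk",     "pacing": True},
}

def process_packet(packet: dict) -> dict:
    """Determine the traffic class from the tag set by the source and
    return the settings the device uses to process the packet."""
    traffic_class = TAG_TO_CLASS[packet["tag"]]
    return CLASS_SETTINGS[traffic_class]

print(process_packet({"tag": 0x1, "payload": b"..."}))
```

Because the source selects the class, the device needs no deep packet inspection; a single table lookup per packet picks the processing path.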
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/263 - Rate modification at the source after receiving feedback
H04L 47/33 - Flow control; Congestion control using forward notification
Discussed herein is a framework that provisions for customized processing for different classes of traffic. A network device in a communication path between a source host machine and a destination host machine extracts a tag from a packet received by the network device. The packet originates at a source executing on the source host machine and whose destination is the destination host machine. The tag is set by the source and is indicative of a first traffic class to be associated with the packet, the first traffic class being selected by the source from a plurality of traffic classes. The network device determines the first traffic class based on the tag extracted from the packet and processes the packet based on the first traffic class.
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/263 - Rate modification at the source after receiving feedback
H04L 47/33 - Flow control; Congestion control using forward notification
Discussed herein is a framework that provisions for customized processing for different classes of traffic. A network device in a communication path between a source host machine and a destination host machine extracts a tag from a packet received by the network device. The packet originates at a source executing on the source host machine and whose destination is the destination host machine. The tag is set by the source and is indicative of a first traffic class to be associated with the packet, the first traffic class being selected by the source from a plurality of traffic classes. The network device determines, based on the tag, that the first traffic class corresponds to a bandwidth sensitive traffic and processes the packet using one or more settings configured at the network device for processing packets associated with the first traffic class.
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/263 - Rate modification at the source after receiving feedback
H04L 47/33 - Flow control; Congestion control using forward notification
In an embodiment, a computer hosts a machine learning (ML) model that infers a particular inference for a particular tuple that is based on many features. The features are grouped into predefined super-features that each contain a disjoint (i.e. nonintersecting, mutually exclusive) subset of features. For each super-feature, the computer: a) randomly selects many permuted values from original values of the super-feature in original tuples, b) generates permuted tuples that are based on the particular tuple and a respective permuted value, and c) causes the ML model to infer a respective permuted inference for each permuted tuple. A surrogate model is trained based on the permuted inferences. For each super-feature, a respective importance of the super-feature is calculated based on the surrogate model. Super-feature importances may be used to rank super-features by influence and/or generate a local ML explainability (MLX) explanation.
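The per-super-feature permutation step can be sketched as follows. For brevity this sketch scores importance as the mean absolute change in the model's inference under permutation, a direct permutation-importance shortcut standing in for the abstract's surrogate-model fit; all names and the toy linear model are assumptions:

```python
import numpy as np

def super_feature_importance(model, X, x, super_features, n_perm, rng):
    """For each super-feature (a disjoint group of columns), draw n_perm
    permuted values from the original tuples X, substitute them into the
    particular tuple x, and score the permuted tuples with the model.
    Importance = mean absolute deviation of permuted inferences from the
    original inference."""
    base = model(x[None, :])[0]
    importances = []
    for cols in super_features:
        rows = rng.permutation(X.shape[0])[:n_perm]
        Xp = np.tile(x, (n_perm, 1))
        Xp[:, cols] = X[np.ix_(rows, cols)]  # respective permuted values
        importances.append(float(np.mean(np.abs(model(Xp) - base))))
    return importances

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
model = lambda M: M[:, 0] * 3.0 + M[:, 1]    # column 0 dominates
x = np.zeros(4)                              # the particular tuple
imp = super_feature_importance(model, X, x, [[0], [1], [2, 3]], 200, rng)
assert imp[0] > imp[1] > imp[2] == 0.0       # ranked by influence
```

Grouping features into super-features keeps the number of permutation rounds proportional to the number of groups rather than the (much larger) number of raw features.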
Techniques are described herein for a graph-organized file system (GOFS) that represents a data graph using a plurality of gnode data structures and a plurality of edge entry data structures. Gnodes and edge entries both store one or more "in-structure" metadata values for graph component properties. A GOFS partition includes dedicated storage for "out-of-structure" graph component metadata values that are accessed using graph component identifiers. Search operations may use the in-structure and/or out-of-structure metadata values to efficiently identify graph search results. Search criteria may involve in-structure metadata values for both nodes and relationships. Accessing in-structure metadata values for a particular search operation may be performed using an index or from within the graph component data structures themselves. When the search criteria of a search operation involve out-of-structure metadata values, generating search operation results can be performed based on accessing dedicated metadata storage using component identifiers, or using an index.
Techniques are disclosed for generating a topology of components based on a set of components provided by a user. The system identifies, for each particular component of the first set of components, one or more characteristics. The characteristics may include at least one of: a rule associated with the particular component, a requirement associated with the particular component, a data input type corresponding to the particular component, and a data output type corresponding to the particular component. Based on the characteristics, the system determines that an additional component not included in the first set of components is required for connecting the first set of components. The system selects the additional component and determines a topology of components that includes the first set of components and the additional component. The system also determines a dataflow between components in the topology of components.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR INTEGRITY PROTECTION FOR SUBSCRIBE/NOTIFY AND DISCOVERY MESSAGES BETWEEN NETWORK FUNCTION (NF) AND NF REPOSITORY FUNCTION (NRF)
A method for integrity protection for subscribe/notify and NF discovery transactions between an NF and an NRF includes receiving, from the NF, a subscribe or discovery request message, determining that the subscribe or discovery request message includes at least one indicator requesting NRF communications integrity protection, and computing an integrity check value of at least a portion of the subscribe or discovery request message and comparing the computed integrity check value to an integrity check value included in the subscribe or discovery request message. The method further includes determining that the computed integrity check value matches the integrity check value included in the subscribe or discovery request message, and formulating a response to the subscribe or discovery request message, generating and adding at least one digital signature to the response message, and transmitting the response message to the NF.
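The integrity-check computation and comparison can be illustrated with a standard keyed MAC. HMAC-SHA256 is one conventional way to realize the "integrity check value"; the key, its distribution, and the message body below are illustrative assumptions:

```python
import hmac
import hashlib

def integrity_check_value(key: bytes, message: bytes) -> bytes:
    """Compute an integrity check value over the protected portion of
    the request using HMAC-SHA256."""
    return hmac.new(key, message, hashlib.sha256).digest()

key = b"shared-nf-nrf-key"  # illustrative pre-shared key
request_body = b'{"nfType":"AMF","subscrCond":"example"}'

received_icv = integrity_check_value(key, request_body)  # sent by the NF
computed_icv = integrity_check_value(key, request_body)  # recomputed by the NRF

# Constant-time comparison, as the method's matching step requires.
assert hmac.compare_digest(received_icv, computed_icv)
```

A mismatch between the computed and received values indicates the request was modified in transit (or keyed differently) and should be rejected before the NRF formulates a response.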
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Various techniques can include systems and methods for using contrastive learning to predict anomalous events in data processing systems. The method can include accessing an unstructured data file and contextual data associated with the unstructured data file. The method can also include generating an event-data input element for the unstructured data file. The event-data input element can include a set of feature vectors. The set of feature vectors can include a first feature vector generated by using a first encoder to process the unstructured data file and a second feature vector generated by using a second encoder to process the contextual data. The method can also include generating a classification result of the unstructured data file by using a machine-learning model to process the event-data input element, in which the classification result includes a prediction of whether a particular event corresponds to an anomalous event.
G06N 3/084 - Backpropagation, e.g. using gradient descent
62.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR PROVIDING NETWORK FUNCTION (NF) REPOSITORY FUNCTION (NRF) WITH CONFIGURABLE PRODUCER NF INTERNET PROTOCOL (IP) ADDRESS MAPPING
A method for supporting configurable producer network function (NF) Internet protocol (IP) address mappings includes, at an NF repository function (NRF), receiving, from a requesting node, a request message for network address and/or service information of a producer NF. The method further includes determining, from the request message, at least one consumer NF parameter. The method further includes locating, using the at least one consumer NF parameter, a producer NF IP address mapping rule. The method further includes, in response to locating the producer NF IP address mapping rule, determining, using the producer NF IP address mapping rule, an IP address to return to the requesting node. The method further includes generating a response message including the IP address and transmitting the response message to the requesting node.
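The rule lookup can be sketched as a small table match. The rule structure, the "slice" parameter, and the addresses below are illustrative assumptions, not values from the disclosure:

```python
# Illustrative mapping rules: consumer NF parameters select which
# producer NF IP address the NRF returns.
MAPPING_RULES = [
    {"match": {"slice": "slice-a"}, "ip": "10.0.1.5"},
    {"match": {"slice": "slice-b"}, "ip": "10.0.2.5"},
]
DEFAULT_IP = "10.0.0.5"

def resolve_producer_ip(consumer_params: dict) -> str:
    """Locate a producer NF IP address mapping rule using the consumer
    NF parameters; return the mapped IP, or a default if no rule
    matches."""
    for rule in MAPPING_RULES:
        if all(consumer_params.get(k) == v
               for k, v in rule["match"].items()):
            return rule["ip"]
    return DEFAULT_IP

print(resolve_producer_ip({"slice": "slice-b"}))  # 10.0.2.5
```

Configuring such rules at the NRF lets different consumers of the same producer NF be steered to different network addresses without changing the producer's registration.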
Techniques are described for identifying resources within a region of a cloud computing environment that may be leveraged during a region build. A Multi-Flock Orchestrator (MFO) may be configured to obtain configuration files corresponding to services to be bootstrapped within the region during a region build process. MFO may determine an order by which the services are to be bootstrapped and transmit a first request in accordance with the order. Planning data may be received (e.g., indicating an intent to create a new resource). MFO may obtain (e.g., via a Resource Identification Service) an identifier corresponding to a previously created resource. MFO can modify the planning data with the identifier and transmit a second request comprising the modified planning data. Transmitting the second request can cause a resource corresponding to the flock configuration file to be bootstrapped within the region using the resource corresponding to the identifier.
The present embodiments relate to identifying and tracking capabilities of a cloud computing environment under build. An orchestration service can identify, from one or more configuration files, a collective set of capabilities individually relating to services or applications to be bootstrapped by the cloud infrastructure orchestration service within the cloud computing environment under build. For each respective capability of the collective set of capabilities, a first set of capabilities on which publishing the respective capability depends may be identified. A visualization can be generated. A first portion of the visualization can specify a first subset of capabilities of the collective set of capabilities that depend on no unpublished capabilities based at least in part on identifying the first set of capabilities. A second portion of the visualization can specify a second subset of capabilities of the collective set of capabilities that depend on one or more currently unpublished capabilities.
Techniques are described for performing an automated region build. An orchestration service (e.g., a Multi-Flock Orchestrator) of a cloud-computing environment may obtain configuration files corresponding to services to be bootstrapped within a region corresponding to one or more data centers. Each of the services may be associated with a respective set of resources comprising at least one infrastructure component or a corresponding software artifact. The orchestration service may identify dependencies between the services based at least in part on the configuration files. An order by which operations for bootstrapping the services are to be executed may be determined based at least in part on the dependencies identified. The orchestration service may incrementally instruct a provisioning and deployment manager to execute corresponding operations for bootstrapping the services in accordance with the determined order.
Techniques are described for scheduling and executing multiple releases for a service. Techniques are described for determining that a new capability is published in a data center. For a flock for a service for which a first release has been previously scheduled and executed, a second release may be scheduled for the flock in response to identifying that the new published capability is an optional capability dependency for the flock for the service. The flock comprises a set of one or more resources for providing the service. The second release for the flock is executed. As a result of the execution of the second release, additional enhanced capabilities may be added to the service.
Techniques are described for monitoring the health of services in a computing environment such as a data center. More particularly, the present disclosure describes techniques for monitoring the health and availability of capabilities in a computing environment such as a data center by enabling alarms to be associated with the capabilities. A capability refers to a set of resources in a data center. By providing the ability to associate an alarm with a capability, the health or availability of the associated capability can be monitored or ascertained by tracking the state of the alarm associated with the capability. For example, if the alarm associated with a particular capability is triggered, it may indicate that the particular capability and the one or more resources corresponding to the particular capability are not in a healthy state. Accordingly, by monitoring alarms associated with capabilities, the health of the associated capabilities can be ascertained.
Techniques are disclosed for migrating services from a virtual bootstrap environment. A distributed computing system can generate a virtual cloud network in a data center of a host region. A virtual bootstrap environment may be implemented in the virtual cloud network. The virtual bootstrap environment can include a plurality of services. The distributed computing system can also deploy an instance of one of the plurality of services to a target region data center. When the instance has been deployed, an indication that the deployment was successful can be received by the distributed computing system. In response, the distributed computing system may identify additional resources associated with the deployed instance of the service and update another service in the virtual bootstrap environment with those resources.
Techniques are described for performing an automated region build using a version set that identifies versions of configuration files and/or artifacts with which the region build is to be performed. A Multi-Flock Orchestrator (MFO) may be configured to maintain multiple version sets identifying a respective set of configuration files associated with various services to be bootstrapped. The MFO may execute a validation process using one version set. A second version set may be identified from the first based on identifying configuration files that successfully passed the validation process. The automated region build can be performed using the second version set.
Techniques are described for identifying resources within a region of a cloud-computing environment. A Resource Identification Service (RIS) may be configured to obtain a flock configuration file comprising resource discovery data associated with a service. The resource discovery data may indicate a set of parameters with which a previously existing resource of the cloud-computing environment is to be identified. RIS may execute operations to identify the previously existing resource based at least in part on matching attributes associated with previously existing resource to the set of parameters of the resource discovery data. The RIS may identify, from the flock configuration file, a set of import operations to perform to obtain an identifier corresponding to the previously existing resource. The identifier may be provided to cause the previously existing resource to be utilized in a region build.
Techniques are described for managing dependencies between components during an automated region build. An orchestration service (e.g., a Multi-Flock Orchestrator (MFO)) obtains configuration files corresponding to services to be bootstrapped to a region. The configuration files provide data from which bootstrapping tasks for bootstrapping the plurality of services within the region are identifiable. MFO 1) identifies dependencies between respective services based at least in part on parsing the plurality of configuration files 2) generates, based at least in part on the parsing, a build dependency graph (e.g., a data structure that identifies the configuration files and the dependencies and indicates a corresponding order with which bootstrapping tasks are to be performed), and 3) incrementally instructs a provisioning and deployment manager to execute bootstrapping tasks based at least in part on traversing the build dependency graph, causing the services to be bootstrapped to the region according to the dependencies identified.
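The dependency-ordered traversal can be illustrated with the standard-library topological sorter. The service names and dependency map below are made up for illustration; the property checked (every service appears after everything it depends on) is the one the build dependency graph must guarantee:

```python
from graphlib import TopologicalSorter

# Illustrative service -> dependencies map, as might be parsed from the
# flock configuration files.
deps = {
    "identity": [],
    "object-storage": ["identity"],
    "compute": ["identity", "object-storage"],
    "streaming": ["object-storage"],
}

# A valid bootstrapping order derived from the build dependency graph.
order = list(TopologicalSorter(deps).static_order())
print(order)
assert order.index("identity") < order.index("object-storage") < order.index("compute")
assert order.index("object-storage") < order.index("streaming")
```

In practice the orchestrator would traverse such a graph incrementally, releasing each service's bootstrapping tasks to the provisioning and deployment manager as its dependencies complete rather than computing one static order up front.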
Techniques are described for performing an automated region build with real time region data. Region data including region identifiers and execution target identifiers for the region of a cloud-computing environment may be maintained (e.g., by a cloud infrastructure orchestration service (CIOS)). A modification of the region data is detected (or new region data is detected). One or more configuration files corresponding to bootstrapping resources (e.g., at the execution targets) within the region are obtained. Operations are executed to cause the configuration files to be updated. This may include recompiling or otherwise injecting region data into the configuration files. A region build may be executed to bootstrap resources within the region using the updated configuration files.
The present embodiments relate to determining a critical path that identifies an order for bootstrapping a subset of resources within a data center under build. A cloud infrastructure orchestration service (CIOS) can identify, from one or more configuration files, a collective set of capabilities individually relating to resources to be bootstrapped by the CIOS within the data center under build. CIOS can identify, for each respective capability, a first set of capabilities on which publishing the respective capability depends. User input identifies a selected flock. For the selected flock, one or more unpublished capabilities corresponding to at least one of the one or more capabilities associated with the selected flock can be identified and a ranking can be derived for those capabilities. A visualization identifying some portion of the unpublished capabilities corresponding to the selected flock is generated, the unpublished capabilities being identified and arranged in accordance with the ranking.
A framework for establishing new regions and/or new realms is disclosed. For example, techniques for establishing new regions and/or new realms include receiving a request to establish a new data center in a region of a cloud service infrastructure, the request including a configuration for the new data center; determining a set of one or more services to be provided by the new data center based at least in part on the configuration; determining, based at least in part on the set of one or more services, whether reference seed instructions have been stored corresponding to the set of one or more services; producing, based on whether the reference seed instructions have been stored corresponding to the set of one or more services, a set of seed instructions for provisioning the set of one or more services to the new data center; and providing the set of seed instructions to provision the new data center to provide the set of one or more services.
A test environment is provided for testing of a flock configuration. A configuration file of a service is parsed to identify one or more capabilities for executing a release of the configuration file of the service. The one or more capabilities correspond to operations performed with respect to one or more resource types. A capability-aware proxy server included in the test environment is configured based on the one or more capabilities identified from the configuration file of the service. The release of the configuration file of the service is executed in the test environment in accordance with the configured capability-aware proxy server. The capability-aware proxy server generates a response message corresponding to an execution result of the release of the configuration file of the service.
Techniques are disclosed for establishing a virtual bootstrap environment. A distributed computing environment may generate a virtual cloud network within a host region corresponding to one or more data centers. The distributed computing system may then implement a virtual bootstrap environment within the virtual cloud network. A first service may be deployed to the virtual bootstrap environment. A network connection may be established between the host region and a target region. The first service in the virtual bootstrap environment can then deploy resources to the target region over the network connection.
Techniques are disclosed for establishing a distributed virtual private network within a virtual bootstrap environment. A distributed computing system can generate a virtual cloud network in a data center of a host region. The virtual cloud network can include a plurality of host instances, including a first instance hosting a virtual private network router. A second instance can provide a secondary network address to the virtual private network router. A third instance can send a request addressed to the secondary network address. The virtual cloud network may route the request to the virtual private network router according to a default route of a routing table. The request may then be forwarded by the virtual private network router to the secondary network address using a networking tunnel established between the first instance and the second instance.
A multi-cloud infrastructure included in a first cloud environment provided by a first cloud services provider generates a set of one or more graphical user interfaces for each of a plurality of external cloud environments that are provided by a plurality of external cloud services providers. The set of one or more graphical user interfaces for an external cloud environment of the plurality of external cloud environments is generated based upon a native graphical user interface provided by the external cloud environment. Responsive to a request received by the first cloud environment from a first external cloud environment of the plurality of external cloud environments, a first graphical user interface from the set of one or more graphical user interfaces generated for the first external cloud environment is provided as a response to the request.
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
Techniques are described for providing, in a first cloud infrastructure (FCI), an adaptor associated with a service provided by the FCI. The adaptor enables the service to be requested by one or more users associated with one or more accounts in a second cloud infrastructure (SCI), where the SCI is different than the FCI. The adaptor receives a first request from a first user associated with a first account in the SCI to create a resource in the FCI. The adaptor executes a workflow to provision the resource using the service, where the workflow includes processing comprising retrieving a resource-principal that is associated with the resource and transmitting a second request to the service provided by the FCI. The second request includes the resource-principal and corresponds to creation of the resource.
Techniques are described for creating a network-link between a first virtual network in a first cloud environment and a second virtual network in a second cloud environment. The first virtual network in the first cloud environment is created to enable a user associated with a customer tenancy in the second cloud environment to access one or more services provided in the first cloud environment. The network-link is created based on one or more link-enabling virtual networks being deployed in the first cloud environment and the second cloud environment.
Techniques are described for providing a multi-cloud control plane (MCCP) in a first cloud infrastructure (included in a first cloud environment provided by a first cloud services provider) that enables services and/or resources provided in the first cloud infrastructure to be utilized by users of a second cloud environment. The first cloud infrastructure receives a request from a user associated with an account in the second cloud infrastructure. The request corresponds to using a service provided by the first cloud infrastructure. A tenancy is created for the user in the first cloud infrastructure to enable the user to utilize the service, and a link-resource object is created that includes information linking the tenancy of the user in the first cloud infrastructure to the account of the user in the second cloud infrastructure, the link-resource object enabling the user to utilize the service provided by the first cloud infrastructure.
Techniques are described for establishing a private network path from a first cloud environment to a second cloud environment. A tenancy associated with the first cloud environment is provided in the second cloud environment. The tenancy includes a set of one or more resources that enable communication between the first cloud environment and the second cloud environment. A request originating in the second cloud environment and associated with a service provided by the first cloud environment is caused to be received by a first resource from the set of one or more resources. Using at least one resource from the set of one or more resources, the request is transmitted from the second cloud environment to the first cloud environment.
Techniques are described for providing a multi-cloud control plane (MCCP) in a first cloud infrastructure (included in a first cloud environment provided by a first cloud services provider) that enables services and/or resources provided in the first cloud infrastructure to be utilized by users of a second cloud environment, where the second cloud environment is different than the first cloud environment. The multi-cloud infrastructure enables a user associated with an account with a second cloud services provider to use, from the second cloud environment, a first service from a set of one or more cloud services provided by the first cloud infrastructure. The multi-cloud infrastructure creates a link between the account with the second cloud services provider and a tenancy created in the first cloud infrastructure to enable the user to use the first service.
Techniques are described for exporting observability data related to execution of a service from a first cloud environment to a second cloud environment. The service is executed in the first cloud environment for a customer of a second cloud environment. Observability data associated with execution of the service is collected in the first cloud environment for the customer of the second cloud environment. The observability data comprises one or more metrics associated with the execution of the service. The observability data collected from the first cloud environment is communicated to the second cloud environment to enable a user associated with the customer of the second cloud environment to access the observability data via the second cloud environment.
Discussed herein are techniques that utilize locality information of host machines included in a cluster network for the execution of graphics processing unit (GPU)-based workloads. For each host machine of a plurality of host machines, locality information for the host machine is stored. The locality information for a host machine identifies a rack comprising the host machine. Responsive to receiving a request requesting execution of a workload, one or more host machines of the plurality of host machines are identified as being available for executing the workload. For each of the one or more host machines, the locality information for the host machine is obtained. Further, linkage information of the one or more host machines is identified. The locality information and the linkage information of the one or more host machines are provided in response to the request.
Discussed herein are techniques that utilize hierarchical locality information of host machines included in a cluster network for the execution of general workloads. Hierarchical locality information for each host machine of a plurality of host machines is stored. The hierarchical locality information for a host machine identifies, for each locality of a plurality of localities, location information for the locality. Responsive to receiving a request requesting execution of a workload, the hierarchical locality information for the plurality of host machines is obtained and provided (e.g., to a customer) in response to the request.
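A minimal sketch of such locality records and of returning locality information for available host machines, grouped by rack (the host names, locality levels, and grouping key below are hypothetical):

```python
from collections import defaultdict

# Hypothetical hierarchical locality records: each host machine is
# annotated with its position at each locality level.
locality = {
    "host-1": {"rack": "r1", "block": "b1"},
    "host-2": {"rack": "r1", "block": "b1"},
    "host-3": {"rack": "r2", "block": "b1"},
    "host-4": {"rack": "r3", "block": "b2"},
}

def locality_report(hosts, available):
    """Return locality info for available hosts, grouped by rack."""
    by_rack = defaultdict(list)
    for host, info in hosts.items():
        if host in available:
            by_rack[info["rack"]].append(host)
    return dict(by_rack)

report = locality_report(locality, available={"host-1", "host-2", "host-4"})
```

A scheduler receiving such a report could prefer co-racked hosts (here, host-1 and host-2) when the workload benefits from low inter-host latency.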
A framework is disclosed for managing authorization for performance of actions with a computing system. For example, techniques for performing authorization of users and/or clients for access to an infrastructure service provided by a cloud servicer provider (CSP) and/or for performance of actions with the infrastructure service include: receiving a request for an action to be performed by the infrastructure service; identifying one or more authorizers from which authorization of the action is to be received; determining one or more operations to be performed by the cloud infrastructure service to complete the action; signing the one or more operations via an elliptic curve digital signature algorithm; storing the signed one or more operations; and initiating an inquiry procedure for the authorization of the action based at least in part on responses received from the one or more authorizers.
A framework for managing credentials for access to a secured entity of an infrastructure service is described herein. For example, a computing system performs operations including: receiving a request for performance of an action from a client device; determining a subscriber corresponding to the client device; determining that the subscriber has provided the client device authorization for performance of the action; generating a credential for access to the secured entity on behalf of the client device; maintaining the credential separate from the client device; and utilizing the credential for performance of the action on behalf of the client device.
A framework is disclosed for transferring workloads between security regions of an infrastructure service. For example, techniques are described for transferring workloads between security regions across a private network based on signatures associated with the security regions.
Systems, methods, and machine-readable media may place workloads of source systems in a migration of data and applications from the source systems to target systems. Data relating to a first number of source nodes and a second number of target nodes may be received and analyzed. A migration plan that specifies placement of workloads from source systems into target systems may be created. The placement may include packing a pluggable environment or a clustered environment. The packing of the pluggable environment or the clustered environment may include: when the second number of target nodes is greater than the first number of source nodes, placing the workload from the source nodes to the target nodes; and, when the second number of target nodes is less than the first number of source nodes, refraining from placing the workload from the source nodes to the target nodes.
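The placement rule above can be sketched as a simple function (node names are hypothetical; the disclosure does not specify the behavior when the node counts are equal, so this sketch places the workload in that case):

```python
def plan_placement(source_nodes, target_nodes):
    """Place a clustered workload only when enough target nodes exist.

    Returns a source-node -> target-node mapping, or an empty plan when
    the target pool is smaller than the source pool.
    """
    if len(target_nodes) < len(source_nodes):
        return {}  # workload is not placed
    return dict(zip(source_nodes, target_nodes))

plan = plan_placement(["src-a", "src-b"], ["tgt-1", "tgt-2", "tgt-3"])
empty = plan_placement(["src-a", "src-b", "src-c"], ["tgt-1", "tgt-2"])
```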
Methods, systems, and computer readable media for restricting a number of hops conducted in a communications network are disclosed. One method includes receiving, by a hypertext transfer protocol (HTTP) proxy element in a first network region, a service request message including a header section that specifies a maximum number of hops value and conducting a search for a producer network function (NF) in the first network region to provide a network service requested in the service request message. The method further includes, if the HTTP proxy element is unable to locate the producer NF in the first network region, determining the maximum number of hops value in the header section of the service request message. The method further includes, if the HTTP proxy element determines that the maximum number of hops value in the header section is greater than zero, reducing the maximum number of hops value by one to derive an updated maximum number of hops value and directing the service request message containing the updated maximum number of hops value to a second HTTP proxy element located in a second network region.
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
H04L 67/51 - Discovery or management thereof, e.g. service location protocol [SLP] or web services
H04L 67/563 - Data redirection of data network streams
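The hop-limiting decision described in the abstract can be sketched as follows (the `Max-Hops` header name and the action labels are illustrative, not the actual header defined by any standard):

```python
def route_request(headers, producer_found):
    """Decide how an HTTP proxy handles a request's hop budget.

    Returns ("serve", headers) when a local producer NF exists,
    ("forward", headers) with the hop count decremented when the budget
    allows another hop, or ("reject", headers) when it is exhausted.
    """
    if producer_found:
        return "serve", headers
    max_hops = int(headers.get("Max-Hops", 0))
    if max_hops > 0:
        updated = dict(headers)  # leave the original message untouched
        updated["Max-Hops"] = str(max_hops - 1)
        return "forward", updated
    return "reject", headers
```

Decrementing at each region bounds how many inter-region forwards a single service request can trigger, preventing routing loops across regions.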
96.
METHOD, SYSTEM, AND COMPUTER READABLE MEDIA FOR DYNAMICALLY UPDATING DOMAIN NAME SYSTEM (DNS) RECORDS FROM REGISTERED NETWORK FUNCTION (NF) PROFILE INFORMATION
A method for dynamically updating domain name system (DNS) records from network function (NF) profile information includes, at an NF repository function (NRF) including at least one processor, receiving a message relating to an NF profile of an NF. The method further includes constructing, based on NF service availability information in the NF profile and local configuration of the NRF, a DNS resource record for the NF. The method further includes utilizing a dynamic DNS update procedure to dynamically update a DNS resource record for the NF with a DNS server.
H04L 61/5076 - Update or notification mechanisms, e.g. DynDNS
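A minimal sketch of constructing DNS resource records from NF service availability information (the profile field names and SRV record parameters below are illustrative, not the exact 3GPP NF profile schema):

```python
def nf_profile_to_srv_records(profile, domain_suffix):
    """Build DNS SRV-style record strings from an NF profile.

    Only services marked REGISTERED are advertised; suspended services
    are omitted so DNS reflects actual availability.
    """
    records = []
    for svc in profile["nf_services"]:
        if svc["status"] != "REGISTERED":
            continue
        name = f"_{svc['name']}._tcp.{domain_suffix}"
        records.append(
            f"{name} 300 IN SRV 0 5 {svc['port']} {profile['fqdn']}."
        )
    return records

# Hypothetical AMF profile with one available and one suspended service.
profile = {
    "fqdn": "amf1.example.5gc",
    "nf_services": [
        {"name": "namf-comm", "port": 8443, "status": "REGISTERED"},
        {"name": "namf-evts", "port": 8444, "status": "SUSPENDED"},
    ],
}
records = nf_profile_to_srv_records(profile, "example.5gc")
```

In the described method, the NRF would then push each such record to a DNS server via a dynamic DNS update procedure.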
97.
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR AUTOMATIC DOMAIN NAME SYSTEM (DNS) CONFIGURATION FOR 5G CORE (5GC) NETWORK FUNCTIONS (NFS) USING NF REPOSITORY FUNCTION (NRF)
A method for automatic domain name system (DNS) configuration for 5G core (5GC) network functions (NFs) includes, at an NF repository function (NRF) including at least one processor, receiving a message concerning a 5GC network function. The method further includes determining a first DNS resource record parameter for the 5GC NF. The method further includes determining a second DNS resource record parameter for the 5GC NF. The method further includes automatically configuring a DNS with a mapping between the first and second DNS resource record parameters for the 5GC NF.
In some aspects, a network interface card (NIC) may receive, at a first node of a network interface card associated with a disconnected network, a message intended for the disconnected network and sent using a first communication protocol. The network interface card may send the message from the first node to a second node of the network interface card using a second communication protocol, the second communication protocol being configured for unidirectional communication. The network interface card may receive the message at the second node. The network interface card may send, from the second node, the message to a destination node of the disconnected network using a third communication protocol. Numerous other aspects are described.
Techniques for using machine learning model validated sensor data to generate recommendations for remediating issues in a monitored system are disclosed. A machine learning model is trained to identify correlations among sensors for a monitored system. Upon receiving current sensor data, the machine learning model identifies a subset of the current sensor data that cannot be validated. The system generates estimated values for the sensor data that cannot be validated based on the learned correlations among the sensor values. The system generates the recommendations for remediating the issues in the monitored system based on validated sensor values and the estimated sensor values.
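A minimal sketch of substituting estimates for sensor readings that fail validation, assuming (for illustration only) that the learned correlations reduce to per-sensor linear relations against a correlated peer sensor:

```python
def estimate_failed_sensors(readings, validated, correlations):
    """Replace unvalidated readings with estimates from correlated sensors.

    `correlations` maps a sensor to (peer, slope, intercept) learned
    offline, standing in for a trained ML model's learned correlations.
    """
    estimates = dict(readings)
    for sensor in readings:
        if sensor in validated:
            continue  # reading passed validation; keep it
        peer, slope, intercept = correlations[sensor]
        estimates[sensor] = slope * readings[peer] + intercept
    return estimates

# Hypothetical data: the outlet sensor's reading failed validation.
readings = {"inlet_temp": 21.0, "outlet_temp": 99.0}
correlations = {"outlet_temp": ("inlet_temp", 1.5, 2.0)}
fixed = estimate_failed_sensors(
    readings, validated={"inlet_temp"}, correlations=correlations
)
```

Remediation recommendations would then be generated from the validated values plus these estimates, rather than from the suspect raw readings.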
Techniques are provided for improved training of a machine-learning model that includes multiple layers and is configured to process textual language input. The machine-learning model includes one or more blocks in which each block includes a multi-head self-attention network, a first connection for providing input to the multi-head self-attention network, and a second (residual) connection for providing the input to a normalization layer, bypassing the multi-head self-attention network. During training, the second connection is dropped out according to a dropout parameter. Additionally, or alternatively, an attention weight matrix is used for dropout by blocking diagonal entries in the attention weight matrix. As a result, the machine-learning model increasingly focuses on contextual information, which provides more accurate language processing results.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
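The diagonal-blocking variant can be sketched with NumPy (a minimal sketch of masking self-attention scores before the softmax; the uniform zero scores used below are purely illustrative):

```python
import numpy as np

def masked_attention_weights(scores):
    """Softmax over attention scores with diagonal entries blocked.

    Setting the diagonal to -inf zeroes each token's attention to itself
    after the softmax, forcing attention onto *other* tokens and thereby
    emphasizing contextual information.
    """
    masked = scores.astype(float).copy()
    np.fill_diagonal(masked, -np.inf)  # block self-attention entries
    # Numerically stable softmax over the last axis.
    exp = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

weights = masked_attention_weights(np.zeros((3, 3)))
```

With uniform scores over three tokens, each row assigns zero weight to its own position and splits the remainder evenly over the other two.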