An example method of automatically deploying a containerized workload on a hypervisor-based device is provided. The method generally includes booting the device running a hypervisor; in response to booting the device, automatically obtaining, by the device, one or more intended state configuration files from a server external to the device, the one or more intended state configuration files defining a control plane configuration for providing services for at least deploying and managing the containerized workload and workload configuration parameters for the containerized workload; deploying a control plane pod configured according to the control plane configuration; deploying one or more worker nodes based on the control plane configuration; and deploying one or more workloads identified by the workload configuration parameters on the one or more worker nodes.
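The boot-time ordering the method describes (fetch intended state, then control plane pod, then workers, then workloads) can be sketched as follows. This is a minimal illustration, not any real product API; the dictionary keys and names are assumptions for the example.

```python
def fetch_intended_state(server):
    """Stand-in for downloading intended-state configuration files from
    an external server; here the 'server' is simply a dict."""
    return server

def deploy_from_intended_state(intended_state):
    """Deploy in the order the method describes: control plane pod first,
    then worker nodes, then the configured workloads."""
    deployed = []
    cp = intended_state["control_plane"]
    deployed.append(("control-plane-pod", cp["image"]))
    for i in range(cp["worker_count"]):
        deployed.append((f"worker-{i}", "worker"))
    for wl in intended_state["workloads"]:
        deployed.append((wl["name"], wl["image"]))
    return deployed

state = {
    "control_plane": {"image": "cp:1.0", "worker_count": 2},
    "workloads": [{"name": "app", "image": "app:2.3"}],
}
print(deploy_from_intended_state(fetch_intended_state(state)))
```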
Some embodiments of the invention provide a method for implementing a software-defined private mobile network (SD-PMN) for an entity. At a physical location of the entity, the method deploys a first set of control plane components for the SD-PMN, the first set of control plane components including a security gateway, a user-plane function (UPF), an AMF (access and mobility management function), and an SMF (session management function). At an SD-WAN (software-defined wide area network) PoP (point of presence) belonging to a provider of the SD-PMN, the method deploys a second set of control plane components for the SD-PMN that includes a subscriber database that stores data associated with users of the SD-PMN. The method uses an SD-WAN edge router located at the physical location of the entity and an SD-WAN gateway located at the SD-WAN PoP to establish a connection from the physical location of the entity to the SD-WAN PoP.
H04W 84/04 - Large scale networks; Deep hierarchical networks
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
H04W 24/02 - Arrangements for optimising operational condition
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
Today, stateful services (e.g., firewall services, load balancing services, encryption services, etc.) running inside guest machines (e.g., guest virtual machines (VMs)) can be very expensive, particularly for applications that need to handle large volumes of firewall, load balancing, and VPN (virtual private network) traffic. In some such cases, these stateful services can cause bottlenecks for traffic going in and out of the datacenter, and result in significant negative impacts on customer experiences. Additionally, service-critical guest machines may need to migrate from one host to another, and must maintain service capability and throughput before and after the migration such that, from a user perspective, the service is not only uninterrupted but also performant.
Some embodiments of the invention provide a method for defining a telecommunications network deployment for a particular geographic region that includes a set of sub-regions. The telecommunications network includes an access network, an edge network, and a core network. The method is performed for each sub-region in the set of sub-regions. The method determines population density of UEs (user equipment) within the sub-region. Based on the determined population density, the method identifies an area type for the sub-region from a set of area types. The method simulates performance of the telecommunications network to explore, based on the identified area type, multiple configurations for access nodes that connect the UEs to the telecommunications network, each configuration in the multiple configurations indicating (1) a number of access nodes to be included in the telecommunications network deployment and (2) locations at which each access node is to be deployed. The method selects a particular configuration for access nodes from the multiple configurations for use in defining the telecommunications network deployment.
Computer-implemented methods, media, and systems for automating secured deployment of containerized workloads on edge devices are disclosed. One example computer-implemented method includes receiving, by a software-defined wide area network (SD-WAN) edge device and from a remote manager, resource quotas for a compute service to be enabled at the SD-WAN edge device. Pre-deployment sanity checks are performed by confirming availability of resources satisfying the resource quotas, where the resources are at the SD-WAN edge device. In response to the confirmation of the availability of resources satisfying the resource quotas, one or more security constructs are set up to isolate SD-WAN network functions at the SD-WAN edge device from the compute service at the SD-WAN edge device. The compute service is attached to an SD-WAN network by the SD-WAN edge device. An acknowledgement that the compute service is enabled at the SD-WAN edge device is sent to the remote manager.
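The gate-then-enable flow above (sanity-check quotas, then isolate, attach, and acknowledge) can be sketched as below. Resource names, step names, and the acknowledgement format are illustrative assumptions, not part of the disclosed method.

```python
def sanity_check(available, quotas):
    """Pre-deployment sanity check: every quota must be satisfiable by
    resources available locally on the edge device."""
    return all(available.get(res, 0) >= need for res, need in quotas.items())

def enable_compute_service(available, quotas):
    """Returns the acknowledgement sent back to the remote manager, or
    None when the sanity check fails and nothing is deployed."""
    if not sanity_check(available, quotas):
        return None
    steps = ["isolate-network-functions", "attach-to-sdwan"]
    return {"ack": "compute-service-enabled", "steps": steps}

quotas = {"cpu_cores": 2, "memory_mb": 2048}
print(enable_compute_service({"cpu_cores": 4, "memory_mb": 8192}, quotas))
print(enable_compute_service({"cpu_cores": 1, "memory_mb": 8192}, quotas))
```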
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04L 41/5051 - Service on demand, e.g. definition and deployment of services in real time
H04L 41/5054 - Automatic deployment of services triggered by the service manager, e.g. service implementation by automatic configuration of network components
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
6.
METHOD TO REALIZE SCANNER REDIRECTION BETWEEN A CLIENT AND AN AGENT USING DIFFERENT SCANNING PROTOCOLS
A scanner redirection method includes the steps of: receiving, from an application running on a host server, a request for scanner properties; acquiring properties of the physical scanner; converting the properties of the physical scanner that are described according to a first scanning protocol to properties of the physical scanner that are described according to a second scanning protocol; transmitting the properties of the physical scanner that are described according to the second scanning protocol to the application; in response to detecting a user selection made on an image of a user interface, transmitting the user selection to the application; and in response to the user selection, receiving from the application, a request for a scanned image, and transmitting a request to an image capture core to acquire the scanned image from the physical scanner.
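The property-conversion step between the two scanning protocols amounts to a vocabulary translation. A minimal sketch follows; the capability names are illustrative (loosely TWAIN-flavored) and the target field names are hypothetical, not taken from any actual protocol.

```python
# Hypothetical mapping between capability names in a first scanning
# protocol and field names in a second protocol.
FIELD_MAP = {
    "ICAP_XRESOLUTION": "resolution_x",
    "ICAP_YRESOLUTION": "resolution_y",
    "ICAP_PIXELTYPE": "color_mode",
}

def convert_properties(props_first_protocol):
    """Translate scanner properties described per the first protocol into
    the second protocol's vocabulary, dropping fields the second
    protocol does not model."""
    return {
        FIELD_MAP[name]: value
        for name, value in props_first_protocol.items()
        if name in FIELD_MAP
    }

print(convert_properties({"ICAP_XRESOLUTION": 300, "ICAP_PIXELTYPE": "RGB"}))
```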
Disclosed herein is a system and method for controlling network traffic among namespaces in which various entities, such as virtual machines, pod virtual machines, and a container orchestration system, such as Kubernetes, reside and operate. The entities have access to a network that includes one or more firewalls. The traffic that is permitted to flow over the network among and between the namespaces is defined by a security policy definition. The security policy definition is posted to a master node in a supervisor cluster that supports and provisions the namespaces. The master node invokes a network manager to generate a set of firewall rules and program the one or more firewalls in the network to enforce the rules.
Disclosed are various examples for controlling and managing data access to increase user privacy and minimize intentional or inadvertent misuse of accessed information. Upon detecting a request for an administrator review of a user client device, permission for administrator access can be obtained from a user associated with the user client device. The client device identifier can be obfuscated such that the administrator accessing the data is not provided the actual device identifier. An administrator review session between the user client device and an administrator client device can be established to allow the administrator client device access to the permitted client device data.
Some embodiments provide a method for one of multiple shared API processing services in a container cluster that implements a network policy manager shared between multiple tenants. The method receives a configuration request from a particular tenant to modify a logical network configuration for the particular tenant. Configuration requests from the multiple tenants are balanced across the multiple shared API processing services. Based on the received configuration request, the method posts a logical network configuration change to a configuration queue in the cluster. The configuration queue is dedicated to the logical network of the particular tenant. Services are instantiated separately in the container cluster for each tenant to distribute configuration changes from the respective configuration queues for the tenants to datacenters that implement the tenant logical networks such that configuration changes for one tenant do not slow down processing of configuration changes for other tenants.
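The per-tenant queue isolation described above can be sketched with one queue per tenant, so draining one tenant's changes never touches another's. Class and method names here are illustrative only.

```python
from collections import defaultdict, deque

class SharedPolicyManager:
    """Each tenant gets a dedicated configuration queue; distributing one
    tenant's changes cannot delay another tenant's."""
    def __init__(self):
        self.queues = defaultdict(deque)

    def post_change(self, tenant, change):
        self.queues[tenant].append(change)

    def drain(self, tenant):
        """Distribute (here: just pop) all queued changes for one tenant."""
        out = []
        q = self.queues[tenant]
        while q:
            out.append(q.popleft())
        return out

mgr = SharedPolicyManager()
mgr.post_change("tenant-a", "add-segment")
mgr.post_change("tenant-b", "update-firewall")
mgr.post_change("tenant-a", "add-router")
print(mgr.drain("tenant-a"))   # tenant-b's queue is untouched
```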
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/342 - Signalling channels for network management communication between virtual entities, e.g. orchestrators, SDN or NFV entities
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
10.
METHOD FOR MODIFYING AN SD-WAN USING METRIC-BASED HEAT MAPS
Some embodiments provide a method for using a heat map to modify an SD-WAN (software-defined wide-area network) deployed for a set of geographic locations. From a set of managed forwarding elements (MFEs) that forward multiple data message flows through the SD-WAN to a set of destination clusters, the method collects multiple metrics associated with the multiple data message flows. Based on the collected multiple metrics, the method generates a heat map that accounts for (1) the multiple data message flows, (2) locations of the set of MFEs, and (3) locations of the set of destination clusters. The method uses the generated heat map to identify at least one modification to make to the SD-WAN to improve forwarding of the multiple data message flows.
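A heat map of this kind can be sketched as an aggregation of per-flow metrics keyed by (MFE location, destination cluster location), from which the link most in need of modification is picked. The metric choice (worst observed latency) and all location names are assumptions for illustration.

```python
from collections import defaultdict

def build_heat_map(flow_records):
    """Aggregate per-flow metrics into cells keyed by
    (MFE location, destination cluster location)."""
    heat = defaultdict(lambda: {"flows": 0, "latency_ms": 0.0})
    for rec in flow_records:
        cell = heat[(rec["mfe"], rec["cluster"])]
        cell["flows"] += 1
        # keep the worst latency seen on this MFE-to-cluster link
        cell["latency_ms"] = max(cell["latency_ms"], rec["latency_ms"])
    return heat

def worst_cell(heat):
    """Identify the link most in need of modification: highest latency."""
    return max(heat, key=lambda k: heat[k]["latency_ms"])

records = [
    {"mfe": "paris", "cluster": "frankfurt", "latency_ms": 12.0},
    {"mfe": "lyon", "cluster": "frankfurt", "latency_ms": 85.0},
    {"mfe": "paris", "cluster": "frankfurt", "latency_ms": 14.0},
]
print(worst_cell(build_heat_map(records)))
```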
H04L 41/122 - Discovery or management of network topologies of virtualised topologies e.g. software-defined networks [SDN] or network function virtualisation [NFV]
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
H04L 43/026 - Capturing of monitoring data using flow identification
Disclosed are various embodiments for coordinating the rollback of installed operating systems to an earlier, consistent state. In response to determining that a data processing unit (DPU) installed on a computing device has failed to successfully boot a first time, the computing device can be power cycled for a first time. In response to determining that the DPU has successfully booted a second time, a first version of a host operating system can be booted. A DPU operating system (DPU OS) is then booted from a DPU alternate boot image. In response to determining that the first version of the host operating system fails to match an executing version of the DPU OS, the computing device can be power cycled a second time and the host operating system is then booted from a host alternate boot image.
Some embodiments provide a method for performing data message processing at a smart NIC of a computer that executes a software forwarding element (SFE). The method determines whether a received data message matches an entry in a data message classification cache stored on the smart NIC based on data message classification results of the SFE. When the data message matches an entry, the method determines whether the matched entry is valid by comparing a timestamp of the entry to a set of rules stored on the smart NIC. When the matched entry is valid, the method processes the data message according to the matched entry without providing the data message to the SFE executing on the computer.
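The timestamp-based validity check can be sketched as: a cached classification is usable only if it is newer than every rule update it might depend on; otherwise the message takes the slow path to the SFE. The data shapes and action strings are illustrative assumptions.

```python
def entry_is_valid(entry, rule_update_times):
    """A cached classification is valid only if it was created after the
    most recent update to every rule it could depend on."""
    return all(entry["timestamp"] >= t for t in rule_update_times)

def process(message, cache, rule_update_times):
    """Fast path on the smart NIC: act on a valid cache hit; otherwise
    hand the message to the software forwarding element (simulated)."""
    entry = cache.get(message["flow"])
    if entry is not None and entry_is_valid(entry, rule_update_times):
        return entry["action"]          # handled entirely on the NIC
    return "punt-to-sfe"                # slow path

cache = {"flow-1": {"action": "forward:port2", "timestamp": 100}}
print(process({"flow": "flow-1"}, cache, rule_update_times=[90]))   # valid hit
print(process({"flow": "flow-1"}, cache, rule_update_times=[150]))  # stale entry
```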
A version control interface provides for time travel with metadata management under a common transaction domain as the data. Examples generate a time-series of master branch snapshots for data objects stored in a data lake, with each snapshot comprising a tree data structure, such as a hash tree, and associated with a time indication. Readers select a master branch snapshot from the time-series, based on selection criteria (e.g., time) and use references in the selected master branch snapshot to read data objects from the data lake. This provides readers with a view of the data as of a specified time.
Some embodiments provide a method of implementing context-aware routing for a software-defined wide-area network, at an SD-WAN edge forwarding element (FE) located at a branch network connected to the SD-WAN. The method receives, from an SD-WAN controller, geolocation route weights for each of multiple cloud datacenters across which a set of application resources is distributed. The application resources are all reachable at a same virtual network address. For each of the cloud datacenters, the method installs a route for the virtual network address between the branch network and the cloud datacenter. The routes have different total costs based at least in part on the geolocation route weights received from the SD-WAN controller. The SD-WAN edge FE selects between the routes to establish connections to the set of application resources.
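The cost model above can be sketched as: per-datacenter total cost = base path cost + controller-provided geolocation weight, with the edge FE choosing the minimum. The numbers and datacenter names are purely illustrative, and the additive cost model is an assumption for the sketch.

```python
def install_routes(geolocation_weights, base_costs):
    """One route per datacenter to the shared virtual network address;
    total cost combines the base path cost and the geolocation weight."""
    return {dc: base_costs[dc] + w for dc, w in geolocation_weights.items()}

def select_route(routes):
    """The edge FE prefers the route with the lowest total cost."""
    return min(routes, key=routes.get)

weights = {"dc-east": 5, "dc-west": 40, "dc-eu": 90}
bases = {"dc-east": 10, "dc-west": 10, "dc-eu": 10}
routes = install_routes(weights, bases)
print(routes)
print(select_route(routes))
```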
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
Systems, apparatus, articles of manufacture, and methods are disclosed to manage a deployment of virtual machines in a cluster by: in a first host of a plurality of hosts, monitoring, with first control plane services, an availability of second control plane services at a second host of the plurality of hosts, wherein the first control plane services and the second control plane services support implementation of application programming interface (API) requests in association with managing the cluster; after a determination that the second control plane services at the second host are not available, assigning the first control plane services at the first host to operate in place of the second control plane services at the second host; and, in the first host, assigning, via the first control plane services at the first host, resources of one or more hosts in the cluster to support the API requests.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
A version control interface provides for accessing a data lake with transactional semantics. Examples generate a plurality of tables for data objects stored in the data lake. The tables each comprise a set of name fields and map a space of columns or rows to a set of the data objects. Transactions read and write data objects and may span a plurality of tables with properties of atomicity, consistency, isolation, durability (ACID). Performing the transaction comprises: accumulating transaction-incomplete messages, indicating that the transaction is incomplete, until a transaction-complete message is received, indicating that the transaction is complete. Upon this occurring, a master branch is updated to reference the data objects according to the transaction-incomplete messages and the transaction-complete message. Tables may be grouped into data groups that provide atomicity boundaries so that different groups may be served by different master branches, thereby improving the speed of master branch updates.
Some embodiments provide a method for sending data messages at a network interface controller (NIC) (100) of a computer (135). From a network stack executing on the computer (135), the method receives (i) a header for a data message to send and (ii) a logical memory (155) address of a payload for the data message. The method translates the logical memory address into a memory address for accessing a particular one of multiple devices (115, 140, 150) connected to the computer. The method reads payload data from the memory address of the particular device (115, 140, 150). The method sends the data message with the header received from the network stack and the payload data read from the particular device (115, 140, 150).
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H04L 49/901 - Buffering arrangements using storage descriptor, e.g. read or write pointers
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 3/06 - Digital input from, or digital output to, record carriers
18.
IN-MEMORY SCANNING FOR FILELESS MALWARE ON A HOST DEVICE
The disclosure herein describes the processing of malware scan requests from VCIs by an anti-malware scanner (AMS) on a host device. A malware scan request is received by the AMS from a VCI, the malware scan request including script data of a script from a memory buffer of the VCI. The AMS scans the script data of the malware scan request, outside of the VCI, and determines that the script includes malware. The AMS notifies the VCI that the script includes malware, whereby the VCI is configured to prevent execution of the script or take other mitigating action. The AMS provides scanning for fileless malware to VCIs on a host device without consuming or otherwise affecting resources of the VCIs.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
19.
AUTHENTICATION ORCHESTRATION ACROSS REMOTE APPLIANCES
Bootstrapping a new remote appliance, based on a request received at a main appliance and on established trust between the two appliances, can be implemented as computer-implemented methods, media, and systems. A request is received at an authentication orchestrator at the main appliance to perform an operation requested by a user for execution on a remote appliance. The authentication orchestrator at the main appliance obtains an authentication token issued by an identity provider at the main appliance for the user associated with the request. The authentication orchestrator requests to exchange the authentication token issued by the identity provider at the main appliance for a new authentication token that is issued by an identity provider at the remote appliance. The authentication orchestrator at the main appliance initiates an authentication of the user at an appliance manager at the remote appliance based on providing the new authentication token.
G06F 21/41 - User authentication where a single sign-on provides access to a plurality of computers
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Techniques for delivering remote applications to servers in an on-demand fashion (i.e., as end-users need them) are provided. In one set of embodiments, these techniques include packaging the installed contents (e.g., executable code and configuration data) of the remote applications into containers, referred to as application packages, that are placed on shared storage and dynamically attaching (i.e., mounting) an application package to a server at a time an end-user requests access to a remote application in that package, thereby enabling the server to launch the application.
Disclosed are various examples of hosting a data processing unit (DPU) management operating system using an operating system software stack of a preinstalled DPU operating system. The preinstalled DPU operating system of the DPU is leveraged to provide a virtual machine environment. A DPU management operating system is executed within the virtual machine environment of the preinstalled DPU operating system. A third-party DPU function or a management service function is provided using the DPU hardware resources accessed through the DPU management operating system and the virtual machine environment.
A method for opening unknown files in a malware detection system, is provided. The method generally includes receiving a request to open a file classified as an unknown file, opening the file in a container, collecting at least one of a log of events carried out by the file or observed behavior traces of the file while open in the container, transmitting, to a file analyzer, at least one of the file, the log of events, or the behavior traces for static analysis, determining a final verdict for the file, based on at least one of the file, the log of events, or the behavior traces, wherein the final verdict for the file is based on the static analysis or dynamic analysis of the file, and taking one or more actions based on a policy configured for the first endpoint and the final verdict.
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure by executing in a restricted environment, e.g. sandbox or secure virtual machine
23.
AUTOMATED DISCOVERY OF VULNERABLE ENDPOINTS IN AN APPLICATION SERVER
The disclosure provides an approach for discovering vulnerable application server endpoints. Embodiments include retrieving, from an application server, an object representing a front controller of the application server. Embodiments include extracting, from the object, values for a plurality of variables. Embodiments include constructing, based on the values for the plurality of variables, one or more universal resource locators (URLs) corresponding to one or more methods of the front controller. Embodiments include sending one or more unauthenticated requests to one or more resources indicated by the one or more URLs. Embodiments include determining, based on a given response to a given unauthenticated request of the one or more unauthenticated requests, whether a given URL of the one or more URLs is vulnerable. Embodiments include performing one or more actions based on the determining of whether the given URL is vulnerable.
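The URL-construction and unauthenticated-probing steps can be sketched as below. To keep the example self-contained, the HTTP client is replaced by a lookup into canned responses; the base URL, method names, and the success-means-vulnerable heuristic are all assumptions for illustration.

```python
def construct_urls(base_url, controller_methods):
    """Build one candidate URL per front-controller method, using values
    that would be extracted from the controller object."""
    return [f"{base_url}/{method}" for method in controller_methods]

def find_vulnerable(urls, probe):
    """Flag a URL when an unauthenticated request succeeds (HTTP 200);
    `probe` stands in for a real HTTP client."""
    return [u for u in urls if probe(u) == 200]

# Canned responses simulating the server: one endpoint answers an
# unauthenticated request, the other demands authentication.
responses = {
    "https://app.example/admin/export": 200,
    "https://app.example/admin/login": 401,
}
urls = construct_urls("https://app.example/admin", ["export", "login"])
print(find_vulnerable(urls, lambda u: responses[u]))
```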
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
VMWARE INFORMATION TECHNOLOGY (CHINA) CO., LTD. (China)
VMWARE, INC. (USA)
Inventor
Shen, Jianjun
Gu, Ran
Jiang, Caixia
Fauser, Yves
Abstract
Some embodiments of the invention provide a method for adding routable subnets to a logical network that connects multiple machines and is implemented by a software-defined network (SDN). The method receives an intent-based API that includes a request to add a routable subnet to the logical network. The method defines (i) a VLAN (virtual local area network) tag associated with the routable subnet, (ii) a first identifier associated with a first logical switch to which at least a first machine in the multiple machines that executes a set of containers belonging to the routable subnet attaches, and (iii) a second identifier associated with a second logical switch designated for the routable subnet. The method generates an API call that maps the VLAN tag and the first identifier to the second identifier. The method provides the API call to a management and control cluster of the SDN to direct the management and control cluster to implement the routable subnet.
A method for locating malware in a malware detection system, is provided. The method generally includes storing, at a first endpoint, a mapping of a first file hash and a first file path for a first file classified as an unknown file, opening, at the first endpoint, the first file prior to determining whether the first file is benign or malicious, determining, at the first endpoint, a first verdict for the first file, the first verdict indicating the first file is benign or malicious, locating the first file using the mapping of the first file hash and the first file path, and taking one or more actions based on a policy configured for the first endpoint and the first verdict indicating the first file is benign or malicious.
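The hash-to-path mapping that lets a later verdict locate the file can be sketched as a small index. The class name, quarantine framing, and use of SHA-256 are assumptions for the example; the abstract only requires some mapping of a file hash to a file path.

```python
import hashlib

class UnknownFileIndex:
    """Maps the hash of each file classified as unknown to its path, so a
    later verdict can locate the file without rescanning the disk."""
    def __init__(self):
        self._hash_to_path = {}

    def record(self, path, contents: bytes):
        """Store the mapping when the file is first classified unknown."""
        digest = hashlib.sha256(contents).hexdigest()
        self._hash_to_path[digest] = path
        return digest

    def locate(self, digest):
        """Locate the file once a verdict arrives for this hash."""
        return self._hash_to_path.get(digest)

index = UnknownFileIndex()
h = index.record("/tmp/unknown.bin", b"suspicious bytes")
print(index.locate(h))   # path to act on if the verdict is malicious
```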
Provisioning a data processing unit (DPU) management operating system (OS). A management hypervisor installer executed on a host device launches or causes a server component to provide a DPU management OS installer image at a particular URI accessible over a network internal to the host device. A baseboard management controller (BMC) transfers the DPU management OS installer image to the DPU device. A volatile-memory-based virtual disk is created using the DPU management OS installer image. The DPU device is booted to a DPU management OS installer on the volatile-memory-based virtual disk. The DPU management OS installer installs a DPU management operating system to a nonvolatile memory of the DPU device on reboot of the DPU device.
Some embodiments provide a method that identifies a first number of requests received at a first application. Based on the first number of requests received at the first application, the method determines that a second application that processes requests after processing by the first application requires additional resources to handle a second number of requests that will be received at the second application. The method increases the amount of resources available to the second application prior to the second application receiving the second number of requests.
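The proactive-scaling arithmetic implied above can be sketched as: predict the downstream load from the upstream request count, then size the second application before those requests arrive. The `fanout` and `per_replica_capacity` parameters are assumed names for this illustration.

```python
import math

def required_replicas(upstream_requests, per_replica_capacity, fanout=1.0):
    """Predict the second application's resource need from the request
    count observed at the first application, before the downstream
    requests arrive; `fanout` is how many downstream requests each
    upstream request produces."""
    expected = upstream_requests * fanout
    return max(1, math.ceil(expected / per_replica_capacity))

# 900 requests at the first app, each producing 2 downstream requests,
# with each replica of the second app handling 500 requests:
print(required_replicas(900, per_replica_capacity=500, fanout=2.0))  # 4
```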
Described herein are systems, methods, and software to manage the identification of control packets in an encapsulation header. In one implementation, a computing system may receive a Geneve packet at a network interface and determine that the Geneve packet includes an Operations and Management (OAM) flag. Once the OAM flag is identified, the computing system can select a processing queue from a plurality of processing queues for a main processing system of the computing system based on the OAM flag and assign the Geneve packet to the processing queue.
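The flag test and queue selection can be sketched directly from the Geneve header layout: byte 0 carries version and option length, and bit 7 of byte 1 is the O (control/OAM) flag (RFC 8926). The queue-assignment policy below (dedicated OAM queue, flow-hashed data queues) is an illustrative assumption.

```python
def geneve_is_oam(header: bytes) -> bool:
    """Per RFC 8926, byte 0 of the Geneve header is version + option
    length; bit 7 of byte 1 is the O (control/OAM) flag."""
    return bool(header[1] & 0x80)

def select_queue(header: bytes, flow_hash: int, oam_queue: int,
                 data_queues: list) -> int:
    """OAM packets go to a dedicated queue; data packets are spread
    across the remaining queues by flow hash."""
    if geneve_is_oam(header):
        return oam_queue
    return data_queues[flow_hash % len(data_queues)]

oam_header = bytes([0x00, 0x80, 0x65, 0x58])    # O flag set
data_header = bytes([0x00, 0x00, 0x65, 0x58])   # O flag clear
print(select_queue(oam_header, 7, oam_queue=0, data_queues=[1, 2, 3]))
print(select_queue(data_header, 7, oam_queue=0, data_queues=[1, 2, 3]))
```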
A combined data processing unit (DPU) and server solution with DPU operating system (OS) integration is described. A DPU OS is executed on a DPU or other computing device, where the DPU OS exercises secure calls provided by the DPU's trusted firmware component that may be invoked by DPU OS components to abstract DPU vendor-specific and server vendor-specific integration details. An invocation of one of the secure calls made on the DPU to communicate with its associated server computing device is identified. In an instance in which one of the secure calls is invoked, the invoked secure call is translated into a call or request specific to an architecture of the server computing device and the call is performed, which may include sending a signal to the server computing device in a format interpretable by the server computing device.
VMWARE INFORMATION TECHNOLOGY (CHINA) CO., LTD. (China)
VMWARE, INC. (USA)
Inventor
Tang, Qiang
Xiao, Zhaoqian
Abstract
Some embodiments of the invention provide a method of sending data in a network that includes multiple worker nodes, each worker node executing at least one set of containers, a gateway interface, and a virtual local area network (VLAN) tunnel interface. The method configures the gateway interface of each worker node to associate the gateway interface with multiple subnets. Each subnet is associated with a namespace, a first worker node executes a first set of containers of a first namespace, and a second worker node executes a second set of containers of the first namespace and a third set of containers of a second namespace. The method sends data between the first set of containers and the second set of containers through a VLAN tunnel between the first and second worker nodes. The method sends data between the first set of containers and the third set of containers through the gateway interface.
Systems and methods are described for providing a virtual machine ("VM") as a service. A user device can install a VM to enable itself as an edge node. The user device can then use a portion of its computing resources to provide the service to an endpoint device by running the VM. In an example, an edge node can directly receive a request for a service from an endpoint device. The edge node can determine that it needs assistance from another device to jointly provide the service. Then another user device which is available to operate as an edge node can join the edge team.
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
32.
TRAFFIC REDUNDANCY DEDUPLICATION FOR BLOCKCHAIN RECOVERY
In some embodiments, a method receives data for a block in a blockchain during a recovery process in which a recovering replica is recovering the block for a first instance of the blockchain being maintained by the recovering replica. The block is received from a second instance of the blockchain being maintained by a source replica. The method splits the data for the block into a plurality of chunks, each chunk including a portion of the data for the block. The method determines whether the recovering replica can recover a chunk in the plurality of chunks using a representation of the chunk. In response to determining that the recovering replica can recover the chunk, the method sends the representation of the chunk to the recovering replica. In response to determining that the recovering replica cannot recover the chunk, the method sends the data for the chunk to the recovering replica.
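The split-and-deduplicate decision per chunk can be sketched as follows, using a hash as the short "representation" of a chunk. The fixed chunk size and the use of SHA-256 are assumptions for the sketch; the abstract does not specify either.

```python
import hashlib

def plan_block_transfer(block_data: bytes, chunk_size: int,
                        recoverable_hashes: set):
    """Split a block into chunks; for each chunk, send either a short
    representation (its hash) when the recovering replica can reconstruct
    the chunk locally, or the chunk data itself when it cannot."""
    messages = []
    for i in range(0, len(block_data), chunk_size):
        chunk = block_data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in recoverable_hashes:
            messages.append(("hash", digest))
        else:
            messages.append(("data", chunk))
    return messages

# The recovering replica already holds a chunk equal to b"AAAA".
known = {hashlib.sha256(b"AAAA").hexdigest()}
plan = plan_block_transfer(b"AAAABBBB", chunk_size=4, recoverable_hashes=known)
print([kind for kind, _ in plan])   # first chunk deduplicated, second sent
```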
A version control interface for data provides a layer of abstraction that permits multiple readers and writers to access data lakes concurrently. An overlay file system, based on a data structure such as a tree, is used on top of one or more underlying storage instances to implement the interface. Each tree node is identified and accessed by a universally unique identifier. Copy-on-write with the tree data structure implements snapshots of the overlay file system. The snapshots support a long-lived master branch, with point-in-time snapshots of its history, and one or more short-lived private branches. As data objects are written to the data lake, the private branch corresponding to a writer is updated. The private branches are merged back into the master branch using any merging logic, and conflict resolution policies are implemented. Readers read from the updated master branch or from any of the private branches.
Some embodiments provide a method for a first smart NIC of multiple smart NICs of a host computer. Each of the smart NICs executes a smart NIC operating system that performs virtual networking operations for a set of data compute machines executing on the host computer. The method receives a data message sent by one of the data compute machines executing on the host computer. The method performs virtual networking operations on the data message to determine that the data message is to be transmitted from a port of a second smart NIC of the multiple smart NICs. The method passes the data message to the second smart NIC via a private communication channel connecting the plurality of smart NICs.
H04L 41/0668 - Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
A method of managing configurations of a plurality of data centers that are each managed by one or more management servers, includes the steps of: in response to a change made to the configurations of one of the data centers, updating a desired state document that specifies a desired state of each of the data centers, the updated desired state document including the change; and instructing each of the data centers to update the configurations thereof according to the desired state specified in the updated desired state document. The management servers include a virtual infrastructure management server and a virtual network management server and the configurations include configurations of software running in the virtual infrastructure management server and the virtual network management server, and configurations of the data center managed by the virtual infrastructure management server and the virtual network management server.
H04L 41/0266 - Exchanging or transporting network management information using the Internet; Embedding network management web servers in network elements; Web-services-based protocols using meta-data, objects or commands for formatting management information, e.g. using eXtensible markup language [XML]
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
H04L 41/085 - Retrieval of network configuration; Tracking network configuration history
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
Some embodiments provide a method for forwarding multicast data messages at a forwarding element on a host computer. The method receives a multicast data message from a routing element executing on the host computer along with metadata appended to the multicast data message by the routing element. Based on a destination address of the multicast data message, the method identifies a set of recipient ports for a multicast group with which the multicast data message is associated. For each recipient port, the method uses the metadata appended to the multicast data message by the routing element to determine whether to deliver a copy of the multicast data message to the recipient port.
A method of upgrading an application executing in a software-defined data center (SDDC) includes: expanding a database of a first version of the application, while services of the first version of the application are active, to generate an expanded database, the expanded database supporting both the services of the first version of the application and services of a second version of the application; replicating the database of the first version to a database of the second version of the application while the services of the second version are inactive; and contracting, in response to activation of the services of the second version and deactivation of the services of the first version, the database of the second version, while the services of the second version are active, to generate a contracted database, the contracted database supporting the services of the second version.
This disclosure relates generally to configuring an application or service with reconfigurable cryptographic features taking the form of cryptographic algorithms, protocols or functions. The application or service can be configured with a cryptographic provider configured to receive abstracted cryptographic API calls and retrieve specific cryptographic features based on established cryptographic policies. This configuration allows for rapid updates to the cryptographic framework and for the cryptographic framework to be managed remotely in enterprise environments.
This relates generally to configuring and automatically selecting a cipher solution for secure communication. An example method includes, at an electronic device, receiving a request initiated by a requestor for one or more cryptographic operations, determining contextual information associated with the requestor, selecting a cipher solution for processing the request based on the contextual information and a policy engine, and processing the request for the one or more cryptographic operations by executing one or more cryptographic algorithms in accordance with the selected cipher solution.
The disclosure provides an approach for cryptographic agility. Embodiments include receiving, by a cryptographic agility system associated with an application, a request to establish a secure communication session. Embodiments include, prior to establishing the secure communication session, selecting, by the cryptographic agility system, a first cryptographic technique and a second cryptographic technique for the secure communication session. Embodiments include, during the secure communication session, utilizing the first cryptographic technique for securely communicating a first set of data. Embodiments include determining that a condition has been met for switching from the first cryptographic technique to the second cryptographic technique. Embodiments include, based on the determining that the condition has been met, utilizing the second cryptographic technique for securely communicating a second set of data.
A software-defined wide area network (SD-WAN) environment that leverages network virtualization management deployment is provided. Edge security services managed by the network virtualization management deployment are made available in the SD-WAN environment. Cloud gateways forward SD-WAN traffic to managed service nodes to apply security services. Network traffic is encapsulated with corresponding metadata to ensure that services can be performed according to the desired policy. Point-to-point tunnels are established between cloud gateways and the managed service nodes to transport the metadata to the managed service nodes using an overlay logical network. Virtual network identifiers (VNIs) in the metadata are used by the managed service nodes to identify tenants/policies. A managed service node receiving a packet uses provider service routers (T0-SR) and tenant service routers (T1-SRs) based on the VNI to apply the prescribed services for the tenant, and the resulting traffic is returned to the cloud gateway that originated the traffic.
Described herein are systems, methods, and software to manage replay windows in multipath connections between gateways. In one implementation, a first gateway may receive a packet directed toward a second gateway and identify a path from a plurality of paths to the second gateway. Once identified, the first gateway may increment a sequence number associated with the path and encapsulate the packet with a header that includes a unique identifier for the path and the incremented sequence number. The first gateway then communicates the encapsulated packet to the second gateway.
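The mechanism above amounts to one sequence-number counter per path on the sender and one replay window per path on the receiver. The sketch below illustrates that idea only; the path-selection rule, window size, and all names are assumptions, not the described implementation.

```python
class MultipathSender:
    """Sender side: an independent sequence counter per path."""
    def __init__(self, paths):
        self.seq = {p: 0 for p in paths}

    def select_path(self, packet):
        # Placeholder selection; real gateways use path metrics.
        return min(self.seq, key=self.seq.get)

    def send(self, packet):
        path = self.select_path(packet)
        self.seq[path] += 1  # increment before encapsulating
        return {"header": {"path_id": path, "seq": self.seq[path]},
                "payload": packet}


class ReplayWindow:
    """Receiver side: one replay window per path, so reordering across
    paths does not cause false replay drops."""
    def __init__(self, size=64):
        self.size = size
        self.highest = {}
        self.seen = {}

    def accept(self, path_id, seq):
        high = self.highest.get(path_id, 0)
        seen = self.seen.setdefault(path_id, set())
        if seq <= high - self.size or seq in seen:
            return False  # outside the window, or a replay
        seen.add(seq)
        self.highest[path_id] = max(high, seq)
        return True
```

Because windows are keyed by the path identifier carried in the header, the same sequence number arriving on two different paths is not mistaken for a replay.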
Some embodiments of the invention provide a method for evaluating multiple candidate resource elements that are candidates for deploying a set of one or more tenant deployable elements in a public cloud. For each particular tenant deployable element, the method deploys in the public cloud at least one instance of each of a set of one or more candidate resource elements and at least one agent to execute on the deployed resource element instance. The method communicates with each deployed agent to collect metrics for quantifying performance of the agent's respective resource element instance. The method then aggregates the collected metrics in order to generate a report that quantifies performance of each candidate resource element in the set of candidate resource elements for deploying the particular tenant deployable element in the public cloud.
Some embodiments provide a method that collects metrics for one or more paths of a first tunnel implementing a first security association (SA) and for one or more paths of a second tunnel implementing a second SA. The method selects a path based on the collected metrics of the paths of the first and second tunnels. When the selected path belongs to the first tunnel, the method encrypts data transmitted as encrypted payload of the first SA and transmits the encrypted payload in the first tunnel. When the selected path belongs to the second tunnel, the method encrypts data to be transmitted as encrypted payload of the second SA and transmits the encrypted payload in the second tunnel.
The present disclosure is directed to a leader-based partially synchronous BFT SMR protocol that improves upon existing protocols by exhibiting two rounds of communication latency, linear authenticator complexity, and optimistic responsiveness. This is achieved through the novel use of an aggregate signature scheme as part of the protocol's view-change procedure.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
46.
ALLOCATING ADDITIONAL BANDWIDTH TO RESOURCES IN A DATACENTER THROUGH DEPLOYMENT OF DEDICATED GATEWAYS
Some embodiments provide policy-driven methods for deploying edge forwarding elements in a public or private SDDC for tenants or applications. For instance, the method of some embodiments allows administrators to create different traffic groups for different applications and/or tenants, deploys edge forwarding elements for the different traffic groups, and configures forwarding elements in the SDDC to direct data message flows of the applications and/or tenants through the edge forwarding elements deployed for them. The policy-driven method of some embodiments also dynamically deploys edge forwarding elements in the SDDC for applications and/or tenants after detecting the need for the edge forwarding elements based on monitored traffic flow conditions.
Some embodiments of the invention provide a method of facilitating routing through a software-defined wide area network (SD-WAN) defined for an entity. A first edge forwarding node located at a first multi-machine site of the entity, the first multi-machine site at a first physical location and including a first set of machines, serves as an edge forwarding node for the first set of machines by forwarding packets between the first set of machines and other machines associated with the entity via other forwarding nodes in the SD-WAN. The first edge forwarding node receives configuration data specifying for the first edge forwarding node to serve as a hub forwarding node for forwarding a set of packets from a second set of machines associated with the entity and operating at a second multi-machine site at a second physical location to a third set of machines associated with the entity and operating at a third multi-machine site at a third physical location. The first edge forwarding node serves as a hub forwarding node to forward the set of packets from the second set of machines to the third set of machines.
Some embodiments of the invention provide a method for micro-segmenting traffic flows in a software defined wide area network (SD-WAN). At a first edge forwarding node of a first multi-machine site in the SD-WAN, the method receives, from a particular forwarding element, a first packet of a packet flow originating from a second multi-machine site that is external to the SD-WAN, the packet flow destined for a particular machine at the first multi-machine site. The method uses deep packet inspection (DPI) on the first packet to identify contextual information not provided by the particular forwarding element about the first packet and the packet flow. Based on the identified contextual information, the method applies one or more policies to the first packet before forwarding the first packet to the particular machine.
VMWARE INFORMATION TECHNOLOGY (CHINA) CO., LTD. (China)
VMWARE, INC. (USA)
Inventor
Liu, Wenfeng
Shen, Jianjun
Gu, Ran
Cao, Rui
Han, Donghai
Abstract
Some embodiments provide a method of tracking errors in a container cluster network overlaying a software defined network (SDN), sometimes referred to as a virtual network. The method sends a request to instantiate a container cluster network object to an SDN manager of the SDN. The method then receives an identifier of a network resource of the SDN for instantiating the container cluster network object. The method associates the identified network resource with the container cluster network object. The method then receives an error message regarding the network resource from the SDN manager. The method identifies the error message as applying to the container cluster network object. The error message, in some embodiments, indicates a failure to initialize the network resource. The container cluster network object may be a namespace, a pod of containers, or a service.
To provide a low latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high speed interfaces between these machines. Some or all of these interfaces operate in non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed due to multiple requests causing one or more components to stall. In addition, each of these RIC components also has an internal architecture that is designed to operate in a non-blocking manner so that no one process of a component can block the operation of another process of the component. All of these low latency features allow the near RT RIC to serve as a high speed IO between the E2 nodes and the xApps.
The disclosure provides an approach for a non-disruptive system upgrade. Embodiments include installing an upgraded version of an operating system (OS) on a computing system while a current version of the OS continues to run. Embodiments include entering a maintenance mode on the computing system, including preventing the addition of new applications and modifying the handling of storage operations on the computing system for the duration of the maintenance mode. Embodiments include, during the maintenance mode, configuring the upgraded version of the OS. Embodiments include, after configuring the upgraded version of the OS, suspending a subset of applications running on the computing system, transferring control over resources of the computing system to the upgraded version of the OS, and resuming the subset of the applications running on the computing system. Embodiments include exiting the maintenance mode on the computing system.
Some embodiments provide a method for performing radio access network (RAN) functions in a cloud at a medium access control (MAC) scheduler application that executes on a machine deployed on a host computer in the cloud. The method receives data, via a RAN intelligent controller (RIC), from a first RAN component. The method uses the received data to generate a MAC scheduling output. The method provides the MAC scheduling output to a second RAN component via the RIC.
H04B 7/06 - Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
53.
MANAGING INTERNET PROTOCOL (IP) ADDRESS ALLOCATION TO TENANTS IN A COMPUTING ENVIRONMENT
Described herein are systems, methods, and software to manage internet protocol (IP) address allocation for tenants in a computing environment. In one implementation, a logical router associated with a tenant in the computing environment requests a public IP address for a new segment instance from a controller. In response to the request, the controller may select a public IP address from a pool of available IP addresses and update networking address translation (NAT) on the logical router to associate the public IP address with a private IP address allocated to the new segment instance.
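The controller-side allocation described above can be sketched as a pool draw plus a NAT-rule update on the tenant's logical router. This is an illustrative model only; the class names, the first-fit pool policy, and the example addresses are assumptions.

```python
class PublicIPPool:
    """Controller-side pool of available public IP addresses."""
    def __init__(self, addresses):
        self.available = list(addresses)

    def allocate(self):
        if not self.available:
            raise RuntimeError("public IP pool exhausted")
        return self.available.pop(0)  # simple first-available policy


class LogicalRouter:
    """Per-tenant logical router holding NAT rules."""
    def __init__(self):
        self.nat_rules = {}  # public IP -> private IP of the segment

    def request_segment_ip(self, pool, private_ip):
        """Request a public IP for a new segment instance and install
        the network address translation rule associating it with the
        segment's private IP."""
        public_ip = pool.allocate()
        self.nat_rules[public_ip] = private_ip
        return public_ip
```

Each new segment instance thus receives a distinct public address from the shared pool, and the router's NAT table records the public-to-private association.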
Some embodiments of the invention provide a method for proactively optimizing network performance for a software-defined wide area network (SD-WAN), which connects multiple devices operating in multiple network segments, during an active network flow. The method monitors the SD-WAN for network events related to the active network flow. The method detects a particular network event at a first device in a first segment in the SD-WAN traversed by the active network flow. Based on the particular network event, the method performs a proactive action on at least a second device in a second network segment in the SD-WAN that will be traversed by the active network flow in order to mitigate a potential negative impact of the particular network event on the performance of the SD-WAN to improve overall network performance.
H04L 41/5025 - Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
H04L 41/083 - Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability for increasing network speed
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
Some embodiments of the invention provide a method for network-aware load balancing for data messages traversing a software-defined wide area network (SD-WAN) (e.g., a virtual network) including multiple connection links between different elements of the SD-WAN. The method includes receiving, at a load balancer in a multi-machine site, link state data relating to a set of SD-WAN datapaths including connection links of the multiple connection links. The load balancer, in some embodiments, provides load balancing for data messages sent from a machine in the multi-machine site to a set of destination machines (e.g., web servers, database servers, etc.) connected to the load balancer over the set of SD-WAN datapaths. The load balancer selects, for each data message, a particular destination machine (e.g., a frontend machine for a set of backend servers) in the set of destination machines by performing a load balancing operation based on the received link state data.
Described herein are systems, methods, and software to manage the compression of route tables for communication between networking elements. In one implementation, a network device identifies network keys for a route table by replacing attributes in the tables with values. The network device further generates a compressed route table using the route keys and associating each of the route keys with one or more additional attributes. The network device also generates a dictionary to associate each of the values for the route keys to a corresponding attribute of the attributes.
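The dictionary-based compression described above can be sketched as replacing each repeated route attribute with a small integer value and emitting a dictionary that maps values back to attributes. The function names, the tuple layout, and the integer-value encoding are illustrative assumptions, not the described implementation.

```python
def compress_route_table(routes):
    """routes: list of (prefix, attribute) tuples. Repeated attributes
    are replaced with small integer values; the returned dictionary
    maps each value back to its original attribute."""
    dictionary = {}   # value -> attribute
    value_of = {}     # attribute -> value
    compressed = []
    for prefix, attr in routes:
        if attr not in value_of:
            value_of[attr] = len(value_of)
            dictionary[value_of[attr]] = attr
        compressed.append((prefix, value_of[attr]))
    return compressed, dictionary


def decompress_route_table(compressed, dictionary):
    """Restore the original table using the dictionary."""
    return [(prefix, dictionary[v]) for prefix, v in compressed]
```

When many routes share the same next-hop attributes, the compressed table stores each bulky attribute once in the dictionary instead of once per route.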
Some embodiments provide a method for performing services on a host computer that executes several machines in a datacenter. The method configures a first set of one or more service containers for a first machine executing on the host computer, and a second set of one or more service containers for a second machine executing on the host computer. Each configured service container performs a service operation (e.g., a middlebox service operation, such as firewall, load balancing, encryption, etc.) on data messages associated with a particular machine (e.g., on ingress and/or egress data messages to and/or from the particular machine). For each particular machine, the method also configures a module along the particular machine's datapath to identify a subset of service operations to perform on a set of data messages associated with the particular machine, and to direct the set of data messages to a set of service containers configured for the particular machine to perform the identified set of service operations on the set of data messages. In some embodiments, the first and second machines are part of one logical network or one virtual private cloud that is deployed over a common physical network in the datacenter.
Some embodiments provide a method of providing distributed storage services to a host computer from a network interface card (NIC) of the host computer. At the NIC, the method accesses a set of one or more external storages operating outside of the host computer through a shared port of the NIC that is not only used to access the set of external storages but also for forwarding packets not related to an external storage. In some embodiments, the method accesses the external storage set by using a network fabric storage driver that employs a network fabric storage protocol to access the external storage set. The method presents the external storage as a local storage of the host computer to a set of programs executing on the host computer. In some embodiments, the method presents the local storage by using a storage emulation layer on the NIC to create a local storage construct that presents the set of external storages as a local storage of the host computer.
Some embodiments provide a method for operating a physical server in a network. The method stores multiple copies of a virtual machine (VM) image at a network-accessible storage. The method uses a first copy of the VM image as a virtual disk to execute a VM on a hypervisor of a first physical computing device. The method uses a second copy of the VM image as a virtual disk accessible via a smart network interface controller (NIC) of a second physical computing device to execute an operating system of the second physical computing device.
Some embodiments of the invention provide a method for providing flow processing offload (FPO) for a host computer at a physical network interface card (pNIC) connected to the host computer. A set of compute nodes executing on the host computer are each associated with a set of interfaces that are each assigned a locally-unique virtual port identifier (VPID) by a flow processing and action generator. The pNIC includes a set of interfaces that are assigned physical port identifiers (PPIDs) by the pNIC. The method includes receiving a data message at an interface of the pNIC and matching the data message to a stored flow entry that specifies a destination using a VPID. The method also includes identifying, using the VPID, a PPID as a destination of the received data message by performing a lookup in a mapping table storing a set of VPIDs and a corresponding set of PPIDs and forwarding the data message to an interface of the pNIC associated with the identified PPID.
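The lookup chain described above (flow entry names a VPID; a mapping table translates it to a PPID) can be sketched minimally. All identifiers and the class shape below are assumptions for illustration, not the patented design.

```python
class FlowOffload:
    """pNIC-side flow processing: stored flow entries specify
    destinations by VPID; a mapping table translates each VPID to
    the pNIC-assigned PPID used for forwarding."""
    def __init__(self, vpid_to_ppid):
        self.vpid_to_ppid = dict(vpid_to_ppid)
        self.flows = {}  # flow key -> destination VPID

    def add_flow(self, flow_key, dest_vpid):
        self.flows[flow_key] = dest_vpid

    def forward(self, flow_key):
        """Match the data message to a stored flow entry, then resolve
        the VPID destination to a physical port identifier."""
        vpid = self.flows[flow_key]
        return self.vpid_to_ppid[vpid]
```

The indirection lets the flow processing and action generator speak only in locally unique VPIDs while the pNIC remains free to number its physical ports independently.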
Some embodiments of the invention provide a method for configuring multiple hardware offload units of a host computer to perform operations on packets associated with machines (e.g., virtual machines or containers) executing on the host computer and to pass the packets between each other efficiently. For instance, in some embodiments, the method configures a program executing on the host computer to identify a first hardware offload unit that has to perform a first operation on a packet associated with a particular machine and to provide the packet to the first hardware offload unit. The packet in some embodiments is a packet that the particular machine has sent to a destination machine on the network, or is a packet received from a source machine through a network and destined to the particular machine.
Some embodiments provide a method for network management and control system that manages one or more logical networks. From a first user, the method receives a definition of one or more security zones for a logical network. Each security zone definition includes a set of security rules for data compute nodes (DCNs) assigned to the security zone. From a second user, the method receives a definition of an application to be deployed in the logical network. The application definition specifies a set of requirements. Based on the specified set of requirements, the method assigns DCNs implementing the application to one or more of the security zones for the logical network.
Some embodiments provide a method, at a host computer, of provisioning a first program for enabling resource sharing on a smart network interface card (NIC) of the host computer. The method receives the first program at the host computer along with a second program for sharing resources of the host computer. The method installs the second program on the host computer. The method provides the first program to the smart NIC for the smart NIC to install on the smart NIC.
Some embodiments provide a method for deploying edge forwarding elements in a public or private software defined datacenter (SDDC). For an entity, the method deploys a default first edge forwarding element to process data message flows between machines of the entity in a first network of the SDDC and machines external to the first network of the SDDC. The method subsequently receives a request to allocate more bandwidth to a first set of the data message flows entering or exiting the first network of the SDDC. In response, the method deploys a second edge forwarding element to process the first set of data message flows of the entity in order to allocate more bandwidth to the first set of the data message flows, while continuing to process a second set of data message flows of the entity through the default first edge node. The method in some embodiments receives the request for more bandwidth by first receiving a request to create a traffic group and then receiving a list of network addresses that are associated with the traffic group. In some embodiments, the method receives the list of network addresses associated with the traffic group by receiving a prefix of network addresses and receiving a request to associate the prefix of network addresses with the traffic group. Based on this request, the method then creates an association between the traffic group and the received prefix of network addresses.
Some embodiments of the invention provide a novel network architecture for advertising routes in an availability zone (e.g., a datacenter providing a set of hardware resources). The novel network architecture, in some embodiments, also provides a set of distributed services at the edge of a virtual private cloud (VPC) implemented in the availability zone (e.g., using the hardware resources of a datacenter) at a set of host computers in the AZ. The novel network architecture includes a set of route servers for receiving advertisements of network addresses (e.g., internet protocol (IP) addresses) as being available in the availability zone (AZ) from different routers in the AZ. The route servers also advertise the received network addresses to other routers in the AZ. In some embodiments, the other routers include routers executing on host computers in the AZ and gateway devices of the availability zone.
Some embodiments of the invention provide a method for connecting deployed machines in a set of one or more software-defined datacenters (SDDCs) to a virtual private cloud (VPC) in an availability zone (AZ). The method deploys network plugin agents (e.g., listening agents) on multiple host computers and configures the network plugin agents to receive notifications of events related to the deployment of network elements from a set of compute deployment agents executing on the particular deployed network plugin agent's host computer. The method, in some embodiments, is performed by a network manager that receives notifications from the deployed network plugin agents regarding events relating to the deployed machines and, in response to the received notifications, configures network elements to connect one or more sets of the deployed machines.
Workloads are scheduled on a common set of resources distributed across a cluster of hosts using at least two schedulers that operate independently. The resources include CPU, memory, network, and storage, and the workloads may be virtual objects, including VMs, and also operations including live migration of virtual objects, network file copy, reserving spare capacity for high availability restarts, and selecting hosts that are to go into maintenance mode. In addition, the at least two independent schedulers are assigned priorities such that the higher priority scheduler is executed to schedule workloads in its inventory on the common set of resources before the lower priority scheduler is executed to schedule workloads in its inventory on the common set of resources.
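The priority ordering described above (the higher-priority scheduler places its inventory on the common resources before the lower-priority one runs) can be sketched as follows. The single scalar "capacity", the greedy placement rule, and all names are simplifying assumptions for illustration.

```python
class Scheduler:
    """An independent scheduler with its own workload inventory."""
    def __init__(self, priority, inventory):
        self.priority = priority
        self.inventory = list(inventory)  # (workload, resource demand)

    def schedule(self, free_capacity, placements):
        """Greedily place inventory onto the remaining common capacity."""
        for workload, demand in self.inventory:
            if demand <= free_capacity:
                free_capacity -= demand
                placements[workload] = demand
        return free_capacity


def run_schedulers(schedulers, capacity):
    """Run schedulers in strict priority order over the common
    resource pool shared across the cluster of hosts."""
    placements = {}
    for s in sorted(schedulers, key=lambda s: s.priority, reverse=True):
        capacity = s.schedule(capacity, placements)
    return placements, capacity
```

With this ordering, a lower-priority scheduler only ever sees the capacity left over after the higher-priority scheduler has placed its workloads.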
Some embodiments of the invention provide novel methods for facilitating a distributed SNAT (dSNAT) middlebox service operation for a first network at a host computer in the first network on which the dSNAT middlebox service operation is performed and a gateway device between the first network and a second network. The novel methods enable dSNAT that provides stateful SNAT at multiple host computers, thus avoiding the bottleneck problem associated with providing stateful SNAT at gateways and also significantly reducing the need to redirect packets received at the wrong host by using a capacity of off-the-shelf gateway devices to perform IPv6 encapsulation for IPv4 packets and assigning locally unique IPv6 addresses to each host executing a dSNAT middlebox service instance that are used by the gateway device.
Disclosed are various embodiments for securely storing data while an application is executing in a background state. An application can receive a message containing data, wherein the message is received by the application while the application is executing in a background state. The application can then encrypt the data in the message using a public key accessible to the application to generate encrypted data. Next, the application can store the encrypted data in an alternate data store. Subsequently, the application can authenticate a user of the computing device and switch execution to the foreground in response. Then, the application can decrypt a secure data store using an application specific encryption key. Next, the application can decrypt the encrypted data using a respective private key for the public key to generate decrypted data. The application can then store the decrypted data in the decrypted secure data store.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
Today, single clusters of forwarding hub nodes in software-defined wide area networks (SD-WANs) are tied to fixed scale-out ratios. For example, an N node cluster would have a scale-out factor of 1:N as a fixed ratio. If the first assigned cluster node is overloaded, the next node (i.e., second node) in the cluster takes over, and so on until the span reaches all available N nodes. The clustering services today are oblivious to application requirements and impose a rigid scheme for providing clustering services to multiple peering edge nodes (e.g., in a hub and spoke topology). In this manner, a high priority real time application traffic flow is treated the same way as that of a low priority (e.g., bulk) traffic flow with respect to the scale-out ratio within the cluster. This can subsequently lead to sub-optimal performance for provisioning and load balancing traffic within the cluster, and, in some cases, under-utilization of cluster resources.
Some embodiments provide a novel method for performing network address translation to share a limited number of external source network addresses among a large number of connections. Instead of allocating an external source network address for an egressing packet just based on its internal source network address, the method of some embodiments allocates the external source network address based on the egressing packet's source network address and destination network address. This allows a limited number of external source network addresses to be re-used for different destination network addresses. For instance, in some embodiments, the method's network address allocation scheme allows the same 64K (e.g., 2^16) external source ports to be used for 64K connections for each destination network address.
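The per-destination allocation scheme can be sketched as follows. This is a simplified illustration with hypothetical names (`PerDestinationNAT`, the example addresses), not the patented implementation; the point is only that keying the port pool on the destination lets the same external port be handed out once per destination:

```python
# Hypothetical sketch: allocate an external (address, port) pair keyed
# by (internal source, destination), so the external port pool is
# reused independently for each destination address.

class PerDestinationNAT:
    def __init__(self, external_ip, num_ports=65536):
        self.external_ip = external_ip
        self.num_ports = num_ports
        self.next_port = {}   # destination -> next free port
        self.mappings = {}    # (internal src, destination) -> external port

    def allocate(self, internal_src, destination):
        key = (internal_src, destination)
        if key in self.mappings:
            # Existing connection: reuse its mapping.
            return self.external_ip, self.mappings[key]
        port = self.next_port.get(destination, 1024)
        if port >= self.num_ports:
            raise RuntimeError("port pool exhausted for this destination")
        self.next_port[destination] = port + 1
        self.mappings[key] = port
        return self.external_ip, port

nat = PerDestinationNAT("203.0.113.7")
ip1, p1 = nat.allocate("10.0.0.5", "198.51.100.1")
ip2, p2 = nat.allocate("10.0.0.6", "198.51.100.2")
```

Because the two flows target different destinations, both receive the same external port (1024) without ambiguity: return traffic is disambiguated by the remote peer's address.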
As more networks move to the cloud, it is more common for one corporation or other entity to have networks spanning multiple sites. While logical networks that operate within a single site are well established, there are various challenges in having logical networks span multiple physical sites (e.g., datacenters). The sites should be self-contained, while also allowing for data to be sent from one site to another easily. Various solutions are required to solve these issues.
Some embodiments provide novel methods for providing a set of services for a logical network associated with an edge forwarding element acting between a logical network and an external network. In some embodiments, the services are provided using a logical service forwarding plane that connects the edge forwarding element to a set of service nodes that each provide a service in the set of services. The service classification operation of some embodiments identifies a chain of multiple service operations that has to be performed on the data message. In some embodiments, identifying the chain of service operations includes selecting a service path to provide the multiple services. After selecting the service path, the data message is sent along the selected service path to have the services provided. The data message is returned to the edge forwarding element by a last service node in the service path that performs the last service operation and the edge forwarding element performs next hop forwarding on the data message.
Some embodiments provide a system for implementing a logical network that spans multiple datacenters. The system includes, at each of the datacenters, a set of host computers that execute (i) data compute nodes (DCNs) belonging to the logical network and (ii) managed forwarding elements (MFEs) that implement the logical network to process data messages for the DCNs executing on the host computers. The system also includes, at each of the datacenters, a set of computing devices implementing logical network gateways for logical forwarding elements (LFEs) of the logical network. The logical network gateways are connected to the logical network gateways for the LFEs at the other datacenters. The MFEs executing on the host computers in a first datacenter communicate with the MFEs executing on the host computers in a second datacenter via the logical network gateways of the first and second datacenters.
H04L 12/713 - Route fault prevention or recovery, e.g. rerouting, route redundancy, virtual router redundancy protocol [VRRP] or hot standby router protocol [HSRP] using node redundancy, e.g. VRRP
H04L 12/715 - Hierarchical routing, e.g. clustered networks or inter-domain routing
H04L 12/721 - Routing procedures, e.g. shortest path routing, source routing, link state routing or distance vector routing
H04L 12/775 - Router architecture multiple routing entities, e.g. multiple software or hardware instances
75.
PARSING LOGICAL NETWORK DEFINITION FOR DIFFERENT SITES
As more networks move to the cloud, it is more common for corporations or other entities to have networks spanning multiple sites. While logical networks that operate within a single site are well established, there are various challenges in having logical networks span multiple physical sites (e.g., datacenters). The sites should be self-contained, while also allowing for data to be sent from one site to another easily. Various solutions are required to solve these issues.
VMWARE INFORMATION TECHNOLOGY (CHINA) CO., LTD. (China)
VMWARE, INC. (USA)
Inventor
Shen, Jianjun
Zhou, Zhensheng
Liu, Danting
Raut, Abhishek
Liu, Yang
Su, Kai
Sun, Qian
Liu, Vicky
Han, Donghai
Lan, Jackie
Abstract
A method for deploying network elements for a set of machines in a set of one or more datacenters, wherein the datacenter set is part of one availability zone. The method receives intent-based API (Application Programming Interface) requests and parses these API requests to identify a set of network elements to connect and/or perform services for the set of machines. The API is a hierarchical document that can specify multiple different compute and/or network elements at different levels of a compute and/or network element hierarchy. The method performs automated processes to define a virtual private cloud (VPC) to connect the set of machines to a logical network that segregates the set of machines from other machines in the datacenter set, wherein the set of machines includes virtual machines and containers, the VPC is defined with a supervisor cluster namespace, and the API requests are provided as YAML files.
An example method of orchestrating a software-defined (SD) network layer of a virtualized computing system is described, the virtualized computing system including a host cluster, a virtualization management server, and a network management server each connected to a physical network, the host cluster having hosts and a virtualization layer executing on hardware platforms of the hosts. The method includes receiving, at the virtualization management server, a declarative specification describing a proposed state of an SD network for the host cluster, deploying, by the virtualization management server, virtualized infrastructure components in the host cluster in response to the proposed state in the declarative specification, and deploying, by the virtualization management server in cooperation with the network management server, logical network services supported by the virtualized infrastructure components in response to the proposed state in the declarative specification.
H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
H04L 29/10 - Communication control; Communication processing characterised by an interface, e.g. the interface between the data link level and the physical level
78.
COMPUTING AND USING DIFFERENT PATH QUALITY METRICS FOR DIFFERENT SERVICE CLASSES
Some embodiments provide a method for quantifying quality of several service classes provided by a link between first and second forwarding nodes in a wide area network (WAN). At a first forwarding node, the method computes and stores first and second path quality metric (PQM) values based on packets sent from the second forwarding node for the first and second service classes. The different service classes in some embodiments are associated with different quality of service (QoS) guarantees that the WAN offers to the packets. In some embodiments, the computed PQM value for each service class quantifies the QoS provided to packets processed through the service class. In some embodiments, the first forwarding node adjusts the first and second PQM values as it processes more packets associated with the first and second service classes. The first forwarding node also periodically forwards to the second forwarding node the first and second PQM values that it maintains for the first and second service classes. In some embodiments, the second forwarding node performs a similar set of operations to compute first and second PQM values for packets sent from the first forwarding node for the first and second service classes, and to provide these PQM values to the first forwarding node periodically.
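One common way to maintain a per-class path quality metric that adjusts as more packets arrive is an exponentially weighted moving average. The sketch below is a hypothetical illustration (the class names, `alpha`, and latency as the metric are all assumptions, not details from the abstract):

```python
# Hypothetical sketch: per-service-class path quality metric (PQM)
# maintained as an exponentially weighted moving average (EWMA) of
# observed per-packet latency, adjusted as more packets are processed.

class PathQualityTracker:
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.pqm = {}   # service class -> smoothed latency (ms)

    def observe(self, service_class, latency_ms):
        prev = self.pqm.get(service_class)
        if prev is None:
            self.pqm[service_class] = latency_ms
        else:
            # Blend the new sample into the running estimate.
            self.pqm[service_class] = (
                self.alpha * latency_ms + (1 - self.alpha) * prev)

    def report(self):
        # Snapshot to forward periodically to the peer forwarding node.
        return dict(self.pqm)

tracker = PathQualityTracker()
for latency in (10.0, 12.0):
    tracker.observe("realtime", latency)
tracker.observe("bulk", 80.0)
snapshot = tracker.report()
```

Each forwarding node would run one such tracker per direction and exchange snapshots periodically, matching the symmetric exchange the abstract describes.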
Some embodiments of the invention provide novel methods for providing a stateful service at a network edge device (e.g., an NSX edge) that has a plurality of north-facing interfaces (e.g., interfaces to an external network) and a plurality of corresponding south-facing interfaces (e.g., interfaces to a logical network). In some embodiments, the network edge device receives data messages from a first gateway device from a logical network, provides the stateful network service to the data message, and forwards the data message towards the destination through a corresponding interface connected to a physical network.
H04L 12/709 - Route fault prevention or recovery, e.g. rerouting, route redundancy, virtual router redundancy protocol [VRRP] or hot standby router protocol [HSRP] using path redundancy using M+N parallel active paths
H04L 12/713 - Route fault prevention or recovery, e.g. rerouting, route redundancy, virtual router redundancy protocol [VRRP] or hot standby router protocol [HSRP] using node redundancy, e.g. VRRP
H04L 12/741 - Header address processing for routing, e.g. table lookup
80.
SINGLE SIGN ON (SSO) CAPABILITY FOR SERVICES ACCESSED THROUGH MESSAGES
Disclosed are examples of providing single sign-on (SSO) capability for services accessed through messages (e.g., email) received by a user. A user can receive a message that includes an embedded URL or link that opens in a third-party service that requires authentication. Instead of requiring the user to enter authentication credentials for accessing the third-party service, a tunnel service can be used to intercept requests for authentication and redirect the requests to an identity manager that can issue a SSO token following an authentication of the user and device. Upon supplying the third-party service with the SSO token, the user can access the content associated with the third-party service without entering authentication credentials.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
81.
COLLECTING AND ANALYZING DATA REGARDING FLOWS ASSOCIATED WITH DPI PARAMETERS
Some embodiments provide a method for performing deep packet inspection (DPI) for an SD-WAN (software-defined wide area network) established for an entity by a plurality of edge nodes and a set of one or more cloud gateways. At a particular edge node, the method uses local and remote deep packet inspectors to perform DPI for a packet flow. Specifically, the method initially uses the local deep packet inspector to perform a first DPI operation on a set of packets of a first packet flow to generate a set of DPI parameters for the first packet flow. The method then forwards a copy of the set of packets to the remote deep packet inspector to perform a second DPI operation to generate a second set of DPI parameters. In some embodiments, the remote deep packet inspector is accessible by a controller cluster that configures the edge nodes and the gateways. In some such embodiments, the method forwards the copy of the set of packets to the controller cluster, which then uses the remote deep packet inspector to perform the remote DPI operation. The method receives the result of the second DPI operation, and when the generated first and second DPI parameters are different, generates a record regarding the difference.
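The final comparison step, recording where the local and remote inspectors disagree, amounts to a field-by-field diff of two parameter sets. A minimal sketch, with hypothetical parameter names chosen for illustration:

```python
# Hypothetical sketch: compare DPI parameter sets produced by the local
# and remote inspectors and record the fields on which they disagree.

def diff_dpi_results(local_params, remote_params):
    keys = set(local_params) | set(remote_params)
    return {
        k: (local_params.get(k), remote_params.get(k))
        for k in keys
        if local_params.get(k) != remote_params.get(k)
    }

local = {"app": "video-conf", "protocol": "udp", "risk": "low"}
remote = {"app": "video-conf", "protocol": "udp", "risk": "medium"}
record = diff_dpi_results(local, remote)
```

Only the disagreeing field survives into the record, which is what the edge node would report upstream.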
An asynchronous state machine replication solution in a system of replicas includes executing multiple instances of a consensus protocol, referred to as leader-based views (LBVs), in each replica, where each replica is a leader participant in one of the LBV instances. Each replica drives a decision based on the consensus being reached among the LBV instances, rather than relying on the expiration of timers and view changes to drive progress.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/18 - Error detection or correction of the data by redundancy in hardware using passive fault-masking of the redundant circuits, e.g. by quadding or by majority decision circuits
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
Some embodiments of the invention provide novel methods for performing services on data messages passing through a network connecting one or more datacenters, such as software defined datacenters (SDDCs). The method of some embodiments uses service containers executing on host computers to perform different chains (e.g., ordered sequences) of services on different data message flows. For a data message of a particular data message flow that is received or generated at a host computer, the method in some embodiments uses a service classifier executing on the host computer to identify a service chain that specifies several services to perform on the data message. For each service in the identified service chain, the service classifier identifies a service container for performing the service. The service classifier then forwards the data message to a service forwarding element to forward the data message through the service containers identified for the identified service chain. The service classifier and service forwarding element are implemented in some embodiments as processes that are defined as hooks in the virtual interface endpoints (e.g., virtual Ethernet ports) of the host computer's operating system (e.g., Linux operating system) over which the service containers execute.
Some embodiments provide a method for monitoring the status of a network connection between first and second host computers. The method is performed in some embodiments by a tunnel monitor executing on the first host computer that also separately executes a machine, where the machine uses a tunnel to send and receive messages to and from the second host computer. The method establishes a liveness channel with the machine to iteratively determine whether the first machine is operational. The method further establishes a monitoring session with the second host computer to iteratively determine whether the tunnel is operational. When a determination is made through the liveness channel that the machine is no longer operational, the method terminates the monitoring session with the second host computer. When a determination is made that the tunnel is no longer operational, the method notifies the machine through the liveness channel.
Various examples are disclosed for dynamic kernel slicing for virtual graphics processing unit (vGPU) sharing in serverless computing systems. A computing device is configured to provide a serverless computing service, receive a request for execution of program code in the serverless computing service in which a plurality of virtual graphics processing units (vGPUs) are used in the execution of the program code, determine a slice size to partition a compute kernel of the program code into a plurality of sub-kernels for concurrent execution by the vGPUs, the slice size being determined for individual ones of the sub-kernels based on an optimization function that considers a load on a GPU, determine an execution schedule for executing the individual ones of the sub-kernels on the vGPUs in accordance with a scheduling policy, and execute the sub-kernels on the vGPUs as partitioned in accordance with the execution schedule.
The disclosure provides an approach for overcoming the limitations of a cloud provider network when a data center with a software-defined network and multiple hosts, each with multiple virtual machines, operates on the cloud provider network. Single-host-aware routers and a multiple-host-aware distributed router are combined into a hybrid router in each host. The hybrid router receives a route table from the control plane of the data center and updates the received table based on the locations of VMs, such as edge VMs and management VMs, on each of the hosts. An agent in each host also updates a router in the cloud provider network based on the locations of the virtual machines on the hosts. Thus, the hybrid routers maintain local routing information and global routing information for the virtual machines on the hosts in the data center.
H04L 12/715 - Hierarchical routing, e.g. clustered networks or inter-domain routing
H04L 12/741 - Header address processing for routing, e.g. table lookup
H04L 12/713 - Route fault prevention or recovery, e.g. rerouting, route redundancy, virtual router redundancy protocol [VRRP] or hot standby router protocol [HSRP] using node redundancy, e.g. VRRP
87.
PERFORMING SLICE BASED OPERATIONS IN DATA PLANE CIRCUIT
Some embodiments of the invention provide a novel method of performing network slice-based operations on a data message at a hardware forwarding element (HFE) in a network. For a received data message flow, the method has the HFE identify a network slice associated with the received data message flow. This network slice in some embodiments is associated with a set of operations to be performed on the data message by several network elements, including one or more machines executing on one or more computers in the network. Once the network slice is identified, the method has the HFE process the data message flow based on a rule that applies to data messages associated with the identified slice.
Techniques for ensuring sufficient available storage capacity for data resynchronization or data reconstruction in a cluster of a hyper-converged infrastructure (HCI) deployment are provided. In one set of embodiments, a computer system can receive a request to provision or reconfigure an object on the cluster. The computer system can further calculate one or more storage capacity reservations for one or more host systems in the cluster, where the one or more storage capacity reservations indicate one or more amounts of local storage capacity to reserve on the one or more host systems respectively in order to ensure successful data resynchronization or data reconstruction in the case of a host system failure or maintenance event. If placement of the object on the cluster will result in a conflict with the one or more storage capacity reservations, the computer system can deny the request to provision or reconfigure the object.
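The admission check at the end of the abstract reduces to simple capacity arithmetic per host: an object may be placed only if it fits in the free capacity left over after the resync/reconstruction reservation. A hypothetical sketch (host names, field names, and a first-fit policy are all assumptions for illustration):

```python
# Hypothetical sketch: deny a provisioning request if placing the
# object would eat into capacity reserved for data resynchronization
# or reconstruction after a host failure or maintenance event.

def can_place(hosts, object_size_gb):
    # hosts: dicts with free capacity and a storage capacity
    # reservation that must remain untouched. First-fit placement.
    for host in hosts:
        usable = host["free_gb"] - host["reserved_gb"]
        if object_size_gb <= usable:
            return host["name"]
    return None   # request conflicts with the reservations: deny

hosts = [
    {"name": "host-1", "free_gb": 100, "reserved_gb": 90},
    {"name": "host-2", "free_gb": 200, "reserved_gb": 50},
]
placement = can_place(hosts, object_size_gb=120)   # fits on host-2
denied = can_place(hosts, object_size_gb=160)      # fits nowhere
```

Note that host-1 has 100 GB free but only 10 GB usable once its reservation is honored, so the 120 GB object lands on host-2, and the 160 GB request is denied outright.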
Some embodiments provide a novel method for configuring managed forwarding elements (MFEs) to handle data messages for multiple logical networks that are implemented in a data center at the MFEs and to provide gateway service processing (e.g., firewall, DNS, etc.). A controller, in some embodiments, identifies logical networks implemented in the datacenter and MFEs available to provide gateway service processing and assigns gateway service processing for each logical network to a particular MFE. The MFEs, in some embodiments, receive data messages from endpoints in the logical networks that are destined for an external network. In some embodiments, the MFEs identify that the data messages require gateway service processing before being sent to the external network. The MFEs, in some embodiments, identify a particular MFE that is assigned to provide the gateway service processing for logical networks associated with the data messages.
H04L 12/713 - Route fault prevention or recovery, e.g. rerouting, route redundancy, virtual router redundancy protocol [VRRP] or hot standby router protocol [HSRP] using node redundancy, e.g. VRRP
Some embodiments provide a novel method for deploying different virtual networks over several public cloud datacenters for different entities. For each entity, the method (1) identifies a set of public cloud datacenters of one or more public cloud providers to connect a set of machines of the entity, (2) deploys managed forwarding nodes (MFNs) for the entity in the identified set of public cloud datacenters, and then (3) configures the MFNs to implement a virtual network that connects the entity's set of machines across its identified set of public cloud datacenters. In some embodiments, the method identifies the set of public cloud datacenters for an entity by receiving input from the entity's network administrator. In some embodiments, this input specifies the public cloud providers to use and/or the public cloud regions in which the virtual network should be defined. Conjunctively, or alternatively, this input in some embodiments specifies actual public cloud datacenters to use.
The present disclosure relates to centralized volume encryption key management for edge devices with trusted platform modules (TPMs). In some aspects, a volume encryption key is generated for a gateway device. A sealing authorization policy is also generated for the gateway device. The sealing authorization policy is generated based on a predetermined platform configuration register (PCR) mask and expected PCR values. The volume encryption key and the sealing authorization policy are transmitted from the management service to the gateway device to provision the gateway device with the volume encryption key.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
92.
MEMORY-AWARE PLACEMENT FOR VIRTUAL GPU ENABLED SYSTEMS
Disclosed are aspects of memory-aware placement in systems that include graphics processing units (GPUs) that are virtual GPU (vGPU) enabled. In some embodiments, a computing environment is monitored to identify graphics processing unit (GPU) data for a plurality of virtual GPU (vGPU) enabled GPUs of the computing environment, a plurality of vGPU requests are received. A respective vGPU request includes a GPU memory requirement. GPU configurations are determined in order to accommodate vGPU requests. The GPU configurations are determined based on an integer linear programming (ILP) vGPU request placement model. Configured vGPU profiles are applied for vGPU enabled GPUs, and vGPUs are created based on the configured vGPU profiles. The vGPU requests are assigned to the vGPUs.
Disclosed are various approaches for signing documents using mobile devices. A request is sent to a certificate authority for a signing certificate. The signing certificate is then received from the certificate authority. The signing certificate is then stored in memory. Next, a file is received from a client application executed by the processor of the computing device. Then, the file is signed with the signing certificate to create a signed file. The signed file is then returned to the client application.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Examples of the present disclosure can include a method. The method may include (1) identifying, by a virtual infrastructure manager ("VIM"), a virtual network function ("VNF") descriptor from information obtained from the integrated network, (2) selectively generating at least one container on the physical network based on the VNF descriptor, (3) determining, by the VIM, an integrated network requirement based on state information associated with the integrated network, (4) providing, by the VIM, to a container management platform, the integrated network requirement, and (5) causing a VNF to be generated in the container to fulfill the integrated network requirement. Corresponding systems, non-transitory computer-readable media, and methods are also disclosed.
Various examples are described for defining automations for client devices enrolled with a management service. A computing environment can cause one or more user interfaces to be shown in a display of an administrator device that include at least one field for generating an automation that includes a trigger, a condition, and an action to automatically be performed when the condition is satisfied. The trigger defines a time at which the management service compares the condition to device profiles generated for client devices enrolled with the management service. The user interface can forecast a number of client devices that will be affected or subject to an automation, and can display results of the automation as it is executed in real time.
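An automation as described here is a (trigger, condition, action) triple evaluated against device profiles; the set of matching devices doubles as the forecast shown to the administrator. A hypothetical sketch (the device fields, the version condition, and the action are invented for illustration):

```python
# Hypothetical sketch: evaluate an automation's condition against
# enrolled-device profiles and apply its action to matching devices.

def run_automation(devices, condition, action):
    # The matched set also serves as the "forecast" of affected
    # devices that the admin UI would display before execution.
    affected = [d for d in devices if condition(d)]
    for device in affected:
        action(device)
    return affected

devices = [
    {"name": "laptop-1", "os_version": "10.1", "compliant": True},
    {"name": "laptop-2", "os_version": "9.3", "compliant": True},
]

# Condition: OS version below 10 (numeric compare, not string compare).
outdated = lambda d: float(d["os_version"]) < 10.0

def mark_noncompliant(d):
    d["compliant"] = False

affected = run_automation(devices, outdated, mark_noncompliant)
```

A real management service would fire this on the trigger's schedule; here only the outdated device is matched and acted upon.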
Examples described here include systems and methods for refreshing the operating system ("OS") of a device enrolled in a management platform. Execution of a first command file ensures that necessary components of the management platform residing on the device are stored in a partitioned portion of the device hard drive to preserve them during the OS refresh. After a new instance of the OS has been installed, execution of a second command file migrates the necessary components from the partitioned portion of the hard drive to the new OS instance. When the user logs back into the refreshed device, a third command file installs all necessary device management components at the new OS instance and re-enrolls the device with the management platform. In this manner, the OS of a managed device can be refreshed and re-enrolled in the management platform without significant input from a user or administrator.
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 21/34 - User authentication involving the use of external additional devices, e.g. dongles or smart cards
A switch in a slice-based network can be used to enforce quality of service ("QoS"). Agents can run in the switches, such as in the core of each switch. The switches can sort ingress packets into slice-specific ingress queues in a slice-based pool. The slices can have different QoS prioritizations. A switch-wide policing algorithm can move the slice-specific packets to egress interfaces. Then, one or more user-defined egress policing algorithms can prioritize which packets are sent out into the network first based on slice classifications.
A system can reduce congestion in slice-based networks, such as a virtual service network ("VSN"). The system can include a monitoring module that communicates with agents on switches, such as routers or servers. The switches report telematics data to the monitoring module, which determines slice-specific performance attributes such as slice latency and slice throughput. These slice-specific performance attributes are compared against service level agreement ("SLA") requirements. When the SLA is not met, the monitoring module can implement a new slice path for the slice to reduce the congestion.
In a slice-based network, switches can be programmed to perform routing functions based on a slice identifier. The switch can receive a packet and determine a slice identifier for the packet based on packet header information. The switch can use the slice identifier to determine a next hop. Using the slice identifier with a multi-path table, the switch can select an egress interface for sending the packet to the next hop. The multi-path table can ensure that traffic for a slice stays on the same interface link to the next hop, even when a link aggregation group ("LAG") is used for creation of a virtual channel across multiple interfaces or ports.
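The pinning behavior described above, one slice always using the same LAG member link toward the next hop, can be achieved with a deterministic hash over the slice identifier. A hypothetical sketch (the hash function and interface names are assumptions; real switches would do this in the data plane):

```python
# Hypothetical sketch: pick an egress interface by hashing the slice
# identifier over the LAG members, so a slice's traffic stays pinned
# to one member link of the aggregation group.

def hash_slice(slice_id):
    # Simple stable string hash. (Python's built-in hash() is salted
    # per process, so it would not give repeatable pinning.)
    h = 0
    for ch in str(slice_id):
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h

def select_egress(slice_id, lag_members):
    # Deterministic per-slice choice from the multi-path table.
    return lag_members[hash_slice(slice_id) % len(lag_members)]

lag = ["eth0", "eth1", "eth2"]
first = select_egress("slice-7", lag)
again = select_egress("slice-7", lag)   # same slice -> same link
```

Because the mapping depends only on the slice identifier and the member list, every packet of a slice takes the same virtual channel across the LAG, as the abstract requires.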