Examples of hybrid cooling systems for data centers are disclosed. In an example, the hybrid cooling system includes a chiller plant to provide a supply of coolant, an air-cooling unit (ACU), and a coolant distribution line. The coolant distribution line comprises a first portion, a second portion, and a third portion in series fluid communication. The ACU receives the supply of coolant from the chiller plant via the first portion. The hybrid cooling system further includes a coolant distribution unit (CDU) coupled to an electronic component in a data hall. The ACU and the CDU are in series fluid communication via the second portion of the coolant distribution line, and the coolant egressing the ACU passes through the second portion to be fed back to the CDU. The hybrid cooling system includes a heat exchanger in series fluid communication with the CDU via the third portion of the coolant distribution line.
A process includes, in a computer system, acquiring a first measurement that corresponds to a software container. Acquiring the first measurement includes a hardware processor of the computer system measuring a given layer of a plurality of layers of a layered file system structure corresponding to the software container. The given layer includes a plurality of files, and the first measurement includes a measurement of the plurality of files. The process includes storing the first measurement in a secure memory of the computer system. A content of the secure memory is used to verify an integrity of the software container.
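The layer-measurement idea can be sketched in a few lines of Python: hash every file of a layer in a stable order to produce one layer digest, then fold that digest into an accumulating register, TPM-PCR style, so later tampering with any measured layer changes the final register value. The function names and the dict-of-files layer representation are illustrative assumptions, not the patented implementation.

```python
import hashlib

def measure_layer(files: dict[str, bytes]) -> bytes:
    """Hash each file of a layer in sorted-path order so the layer
    digest is deterministic regardless of enumeration order."""
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode())
        h.update(files[path])
    return h.digest()

def extend(register: bytes, measurement: bytes) -> bytes:
    """PCR-style extend: the secure register accumulates measurements,
    so any change to a measured layer changes the final value."""
    return hashlib.sha256(register + measurement).digest()
```

Because the register is extend-only, a verifier holding the expected final value can detect modification of any layer without storing every per-file hash.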
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
3.
MITIGATION OF A DENIAL OF SERVICE ATTACK IN A DEVICE PROVISIONING PROTOCOL (DPP) NETWORK
Systems and methods are provided for mitigating denial-of-service attacks that can disrupt the onboarding of internet-of-things (IoT) devices onto a network, while ensuring that legitimate IoT devices are onboarded. Example implementations include receiving, at an access point (AP) from a device, a chirp signal comprising a hash of data including a first public key of an IoT device. Upon verification of the first public key, the AP generates a context based on the first public key received from the authenticator. The context comprises information for onboarding the IoT device without subsequent communications among the AP, the configurator, and the authenticator. The AP can use the context to create and transmit authentication authorization requests responsive to chirp signals. In some examples, a chirp table can be created by a configurator for tracking serving APs. The chirp table can be utilized in provisioning APs for future chirp signals as needed.
In implementations of the present disclosure, there is provided an approach for generating post-quantum pre-shared keys (PPKs). A first device sends a first request including first data generated by the first device to a second device, and the first device receives from the second device a first response including second data generated by the second device. The first device derives a PPK based on the first data, the second data, and a seed PPK, and then sends a second request including an identification of the seed PPK to the second device to enable the second device to determine the derived PPK. The first device then receives from the second device a second response including a confirmation of the second request. Implementations of the present disclosure can improve the efficiency of PPK configuration and relieve the administrator device of tedious manual work.
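The key step above — both devices independently deriving the same PPK from the two exchanged values and the shared seed — can be sketched with a standard keyed-hash construction. This is a minimal sketch assuming an HMAC-based derivation; the abstract does not specify the actual key-derivation function, and the function name is hypothetical.

```python
import hashlib
import hmac

def derive_ppk(first_data: bytes, second_data: bytes, seed_ppk: bytes) -> bytes:
    """Derive a fresh PPK from both parties' contributions, keyed by the
    shared seed PPK, so either side can recompute it independently."""
    return hmac.new(seed_ppk, first_data + second_data, hashlib.sha256).digest()
```

Since only the seed PPK identification (not the derived key) crosses the wire in the second request, an eavesdropper without the seed cannot reproduce the derived PPK.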
A scheduling platform for scheduling serverless application tasks in persistent memory (PMEM) is provided. A profiler receives application requests from processes of serverless applications. The profiler categorizes the processes as persistent or non-persistent based on the application requests. A read/write batcher creates batches of the persistent requests, including read requests and write requests, and assigns the batches to persistent memory banks. A scheduler creates a schedule of the batches to the persistent memory banks in a manner that enables optimization of job completion time.
One aspect of the present technology can provide a system for facilitating in-service software upgrade (ISSU) for a switch in a virtual switching stack. During operation, the system can initiate an ISSU that facilitates uninterrupted traffic flow. The system can upgrade a first set of daemons of the switch that manage operations of the switch. The system can also upgrade a database stored on the switch. The database can store operational information of the switch. The system can further upgrade a second set of daemons of the switch that configure forwarding information on the forwarding hardware of the switch and facilitate data-plane operations for the switch. The forwarding information configured on the forwarding hardware can remain unchanged during the upgrade. The system can configure the upgraded second set of daemons to obtain control-plane information from a standby switch or a conductor switch of the virtual switching stack.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
H04L 41/0895 - Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
A network interface controller (NIC) facilitating incast management at a computing system is provided. During operation, the NIC can receive, via a network, a request to send data from a remote computing system. The NIC can determine that the request is among a plurality of requests from a plurality of remote computing systems accessible via the network. Based on a descriptor in the request, the NIC can determine a storage location of the data at the remote computing system. The NIC can then determine a level of congestion associated with the plurality of requests at the computing system. The NIC can schedule a data retrieval in response to the request based on the level of congestion and with respect to the plurality of requests. The NIC can then retrieve the data from the storage location based on remote access.
An apparatus facilitating efficient key refresh in a node is provided. During operation, the apparatus can determine a collective operation initiated by the node. The node can include a processor and can be in a distributed system comprising a plurality of nodes. The collective operation can be performed by a subset of the plurality of nodes in conjunction with each other. The apparatus can generate a new key based on a previous key maintained at the apparatus. Here, a respective key can be used for encrypting an inter-node packet in the distributed system. The apparatus can maintain the new and previous keys for the duration of the collective operation. Either of the new and previous keys can be used for decrypting messages received at the apparatus from other nodes of the distributed system. Upon determining a threshold point of the collective operation, the apparatus can discard the previous key.
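The two-key window described above — keeping both the new and previous keys alive for the duration of a collective operation, then discarding the old one at a threshold point — can be sketched as a small state machine. The class name, the hash-chain key derivation, and the method names are illustrative assumptions, not the patented mechanism.

```python
import hashlib

class KeyWindow:
    """Hold the current and previous keys during a collective operation;
    either may decrypt in-flight packets until the threshold point."""

    def __init__(self, initial_key: bytes):
        self.current = initial_key
        self.previous = None

    def refresh(self):
        # Hypothetical derivation: hash-chain the current key forward.
        self.previous = self.current
        self.current = hashlib.sha256(self.current).digest()

    def candidate_keys(self):
        # Packets encrypted under either key remain decryptable.
        return [k for k in (self.current, self.previous) if k is not None]

    def retire_previous(self):
        # Threshold point of the collective operation reached.
        self.previous = None
```

Keeping both keys during the window avoids a synchronization barrier: nodes that refreshed early can still read packets encrypted by nodes that have not yet refreshed.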
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A piece of networking equipment facilitating efficient congestion management is provided. During operation, the equipment can receive, via a network, a plurality of packets that include portions of a data segment sent from a sender device to a receiver device. The equipment can identify, among the plurality of packets, one or more payload packets comprising payload of the data segment, and at least a header packet comprising header information of the data segment and a header-packet indicator. The equipment can determine whether congestion is detected at the receiver device based on a number of sender devices sending packets to the receiver device via the equipment. Upon determining congestion at the receiver device, the equipment can perform flow trimming by forwarding the header packet to the receiver device and dropping a subset of the one or more payload packets.
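The flow-trimming decision above reduces to a simple filter: when congestion is detected at the receiver, forward the header packet(s) and drop some or all payload packets of the data segment. The following sketch uses a hypothetical dict-per-packet representation with an `is_header` flag; it illustrates the trimming policy, not the equipment's actual packet format.

```python
def trim_flow(packets, congested: bool, keep_payload: int = 0):
    """Under congestion, forward header packets and at most
    `keep_payload` payload packets; drop the remaining payload."""
    if not congested:
        return list(packets)
    forwarded, kept = [], 0
    for pkt in packets:
        if pkt["is_header"]:
            forwarded.append(pkt)  # header always reaches the receiver
        elif kept < keep_payload:
            forwarded.append(pkt)
            kept += 1
    return forwarded
```

Delivering the header while dropping payload lets the receiver learn that the segment exists (and, e.g., schedule a retrieval later) without the payload contributing to the congestion.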
In some examples, a system identifies sub-portions of a database query, assigns identifiers to the identified sub-portions, and adds the identifiers to a data structure. The system generates a fingerprint representing the database query based on applying a fingerprint function on the data structure including the identifiers.
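The fingerprinting pipeline — identify sub-portions, assign them identifiers, collect the identifiers in a data structure, and hash that structure — can be sketched directly. The tokenized query representation and the use of SHA-256 as the fingerprint function are assumptions for illustration.

```python
import hashlib

def fingerprint(query_parts: list[str], id_map: dict[str, int]) -> str:
    """Assign each identified sub-portion a stable integer identifier,
    then apply a fingerprint function to the identifier sequence."""
    ids = [id_map.setdefault(part, len(id_map)) for part in query_parts]
    data = ",".join(str(i) for i in ids).encode()
    return hashlib.sha256(data).hexdigest()
```

Two queries built from the same sub-portions in the same order map to the same fingerprint, which makes the fingerprint usable as a cache or dedup key.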
A network interface controller (NIC) capable of facilitating fine-grain flow control (FGFC) is provided. The NIC can be equipped with a network interface, an FGFC logic block, and a traffic management logic block. During operation, the network interface can determine that a control frame from a switch is associated with FGFC. The network interface can then identify a data flow indicated in the control frame for applying the FGFC. The FGFC logic block can insert information from the control frame into an entry of a data structure stored in the NIC. The traffic management logic block can identify the entry in the data structure based on one or more fields of a packet belonging to the flow. Subsequently, the traffic management logic block can determine whether the packet is allowed to be forwarded based on the information in the entry.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
G06F 13/14 - Handling requests for interconnection or transfer
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/2466 - Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic
H04L 47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 47/52 - Queue scheduling by attributing bandwidth to queues
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
H04L 47/629 - Ensuring fair share of resources, e.g. weighted fair queuing [WFQ]
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 49/9047 - Buffering arrangements including multiple buffers, e.g. buffer pools
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
In some examples, a memory device includes a plurality of rows of memory cells, a plurality of victim counters associated with respective rows of memory cells of the plurality of rows of memory cells, and a plurality of aggressor counters associated with the respective rows of memory cells. A first victim counter of the plurality of victim counters is associated with a first row of the plurality of rows of memory cells, the first victim counter to advance in response to advances in counts of aggressor counters associated with neighboring rows of memory cells that are neighbors of the first row.
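The counter pairing above — a row's victim counter advances whenever an aggressor counter of a neighboring row advances — is the core of a rowhammer-style mitigation and can be sketched in software. The class, the single-neighbor adjacency, and the refresh-threshold check are illustrative simplifications of what the memory device would do in hardware.

```python
class RowCounters:
    """Per-row aggressor and victim counters: activating a row bumps its
    aggressor counter and the victim counters of its physical neighbors."""

    def __init__(self, nrows: int):
        self.aggressor = [0] * nrows
        self.victim = [0] * nrows

    def record_activation(self, row: int):
        self.aggressor[row] += 1          # the activated row is an aggressor
        for nb in (row - 1, row + 1):     # its neighbors are potential victims
            if 0 <= nb < len(self.victim):
                self.victim[nb] += 1

    def needs_refresh(self, row: int, threshold: int) -> bool:
        # A victim counter crossing the threshold would trigger mitigation.
        return self.victim[row] >= threshold
```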
Systems, devices, and methods are provided for all-optical reconfigurable activation devices for realizing various activation functions using low input optical power. The devices and systems disclosed herein include a directional coupler comprising a first phase-shift mechanism, and an interferometer coupled to the directional coupler. The interferometer comprises at least one microring resonator and a second phase-shift mechanism coupled thereto. The interferometer and the directional coupler comprise waveguides formed of a first material, while the microring resonator comprises a waveguide formed of a second material and a third phase-shift mechanism. The second material is a low-loss material having a high Kerr effect and a large bandgap, to generate various nonlinear activation functions. The first, second, and third phase-shift mechanisms are configured to control biases within the disclosed systems and devices to achieve a desired activation function.
G02F 1/21 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour by interference
G02F 1/225 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour by interference in an optical waveguide structure
G06N 3/04 - Architecture, e.g. interconnection topology
G06N 3/067 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using optical means
An example method of allocating channels to access points (APs) is presented. A network manager may determine weights of the APs based on a channel metric and a location metric for the APs. Further, the network manager may identify a first set of APs based on the weights and allocate a dedicated first channel to each of the first set of APs for its sole use. Accordingly, each of the first set of APs may use a respective dedicated first channel, thereby reducing the performance impact on other APs. Further, the network manager may identify a second set of APs based on the weights and allocate second channels for sharing among the second set of APs. Since the second set of APs does not share any channel with the first set of APs, the performance of the second set of APs may not be impacted due to the first set of APs.
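The two-tier allocation can be sketched as: rank APs by weight, give the top-ranked set a dedicated channel each, and spread the remainder over a shared pool. The function signature, the top-N cutoff, and the round-robin sharing policy are assumptions for illustration; the abstract does not specify how the sets are cut or how shared channels are distributed.

```python
def allocate_channels(weights: dict[str, float],
                      dedicated_channels: list[int],
                      shared_channels: list[int],
                      top_n: int) -> dict[str, int]:
    """Give the top_n heaviest APs a dedicated channel each; remaining
    APs share the pool of shared channels round-robin."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    first_set, second_set = ranked[:top_n], ranked[top_n:]
    alloc = {ap: ch for ap, ch in zip(first_set, dedicated_channels)}
    for i, ap in enumerate(second_set):
        alloc[ap] = shared_channels[i % len(shared_channels)]
    return alloc
```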
One aspect can provide a system and method for configuring a plurality of branch gateway (BGW) devices coupled to a virtual private network concentrator (VPNC). The VPNC negotiates with a respective BGW device a transmission-bandwidth contract; receives, from the BGW device, a request for additional transmission bandwidth; analyzes traffic patterns to identify one or more BGW devices with unused bandwidth; allocates the requested additional transmission bandwidth to the respective BGW device by reducing transmission bandwidth allocated to the identified one or more BGW devices; and transmits contract-update notifications to the BGW devices to allow each BGW device to update a corresponding transmission-bandwidth contract, which comprises increasing the upper bandwidth limit at the respective BGW device while reducing the upper bandwidth limit at the identified BGW devices. In response to expiration of a timer, the VPNC revokes the additional transmission bandwidth allocated to the respective branch gateway device.
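The reallocation step — granting a requester extra bandwidth by reclaiming unused bandwidth from other BGW devices — conserves the total contracted bandwidth and can be sketched as a greedy pass over the other contracts. The flat dict representation of contracts and usage is a simplification; the actual negotiation and notification protocol is not modeled here.

```python
def reallocate_bandwidth(contracts: dict[str, int], requester: str,
                         amount: int, usage: dict[str, int]) -> dict[str, int]:
    """Grant `amount` extra bandwidth to `requester` by reducing the
    limits of other BGWs with spare (unused) bandwidth."""
    contracts = dict(contracts)
    remaining = amount
    for bgw, limit in contracts.items():
        if bgw == requester or remaining == 0:
            continue
        spare = limit - usage[bgw]      # bandwidth the BGW is not using
        take = min(spare, remaining)
        contracts[bgw] -= take
        remaining -= take
    if remaining == 0:                  # only commit if fully covered
        contracts[requester] += amount
    return contracts
```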
Some examples relate to firmware updates. In an example, a first core system determines firmware updates to be installed on the first core system and auxiliary systems of a data center. The first core system sends firmware update package generation instructions to a cloud infrastructure comprising data center firmware updates. The first core system downloads a firmware update package from the cloud infrastructure. The first core system updates firmware of the first core system, based on one or more firmware updates in the firmware update package. The first core system receives a request for a specific firmware update from an auxiliary system. Based on the request received from the auxiliary system, the first core system identifies the specific firmware update for the auxiliary system from the firmware update package. The first core system provides the specific firmware update to the auxiliary system.
A process includes, responsive to a request to load a kernel module, determining, by an operating system kernel, a hash digest for the kernel module. The kernel module is associated with a name. The process includes determining, by the operating system kernel, whether an expected kernel module list contains an entry that contains the name and associates the name with the hash digest. The process includes, responsive to the determination of whether the expected kernel module list contains the entry, generating, by the operating system kernel, an alert that is associated with the kernel module.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
A process includes coupling a push-pull driver of a first bus device to a plurality of communication lines that are associated with a first bus to allow the first bus device to access the first bus using push-pull signaling. The process includes sharing a set of the communication lines with a second bus. The sharing includes coupling an open drain driver of a second bus device to the set of communication lines to allow the second bus device to access the second bus using open drain signaling. The sharing includes using the set of communication lines in first time periods in which the first bus device accesses the first bus, and using the set of communication lines in second time periods, other than the first time periods, in which the second bus device accesses the second bus. The sharing includes isolating the push-pull driver of the first bus device from the set of communication lines during the second time periods.
Examples of performing selective Fine Timing Measurement (FTM) are described. For an FTM scan cycle, a first AP (i.e., an initiator) may determine scanning parameter values of a plurality of second APs (i.e., potential responders) based on previously performed FTM scans. The first AP may determine weights of the plurality of second APs based on the scanning parameter values and select a set of target APs based on the weights. The first AP may then scan the set of target APs for the FTM scan so that the rest of the plurality of second APs are relieved from participating in the FTM scan thereby reducing the performance impact on the rest of the plurality of second APs. After the FTM scan cycle is completed, the first AP may update the weights and select another set of target APs based on the updated weights to perform another FTM scan cycle.
Power-over-Ethernet (PoE) powered devices (PDs) may be coupled to PoE power sourcing equipment (PSE). The PDs may send to the PSE a link-layer protocol communication that comprises an alternative-power field indicating whether the sending PD has an alternative power source (e.g., a battery or a local power supply). The PSE may listen for and receive the communications and read their alternative-power fields. The PSE may set respective power priorities for the PDs based at least in part on whether the PDs have respective alternative power sources, as indicated by the respective alternative-power fields of their communications. The PSE may reduce the power priority of those PDs that have alternative power sources, relative to the priority those PDs would otherwise have been given.
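The priority rule above is a simple demotion: a PD that reports an alternative power source gets a lower PoE priority than it would otherwise receive. The sketch below assumes a numeric convention where a lower number means higher priority and demotes by one level; the field names and the size of the demotion are illustrative.

```python
def assign_priorities(pds: dict[str, dict]) -> dict[str, int]:
    """Compute PoE power priorities (lower number = higher priority),
    demoting PDs whose alternative-power field indicates a backup source."""
    priorities = {}
    for name, info in pds.items():
        prio = info["base_priority"]
        if info["has_alt_power"]:
            prio += 1  # demote: the PD can survive on its alternative source
        priorities[name] = prio
    return priorities
```

Under power budget pressure, the PSE can then shed load starting from the lowest-priority (highest-numbered) PDs, which by construction are those with a fallback source.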
A network interface controller (NIC) capable of efficient operation management for host accelerators is provided. The NIC can be equipped with a host interface and a triggering logic block. During operation, the host interface can couple the NIC to a host device. The triggering logic block can obtain, via the host interface from the host device, an operation associated with an accelerator of the host device. The triggering logic block can determine whether a triggering condition has been satisfied for the operation based on an indicator received from the accelerator. If the triggering condition has been satisfied, the triggering logic block can obtain, from a memory location, a piece of data generated by the accelerator and execute the operation using the piece of data.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
G06F 13/14 - Handling requests for interconnection or transfer
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/2466 - Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic
H04L 47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 47/52 - Queue scheduling by attributing bandwidth to queues
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
H04L 47/629 - Ensuring fair share of resources, e.g. weighted fair queuing [WFQ]
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 49/9047 - Buffering arrangements including multiple buffers, e.g. buffer pools
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
22.
SYSTEM AND METHOD FOR FACILITATING EFFICIENT MESSAGE MATCHING IN A NETWORK INTERFACE CONTROLLER (NIC)
A network interface controller (NIC) capable of performing message passing interface (MPI) list matching is provided. The NIC can include a host interface, a network interface, and a hardware list-processing engine (LPE). The host interface can couple the NIC to a host device. The network interface can couple the NIC to a network. During operation, the LPE can receive a match request and perform MPI list matching based on the received match request.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
G06F 13/14 - Handling requests for interconnection or transfer
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/2466 - Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic
H04L 47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 47/52 - Queue scheduling by attributing bandwidth to queues
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
H04L 47/629 - Ensuring fair share of resources, e.g. weighted fair queuing [WFQ]
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 49/9047 - Buffering arrangements including multiple buffers, e.g. buffer pools
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
23.
SYSTEM AND METHOD FOR FACILITATING EFFICIENT ADDRESS TRANSLATION IN A NETWORK INTERFACE CONTROLLER (NIC)
A network interface controller (NIC) capable of facilitating efficient memory address translation is provided. The NIC can be equipped with a host interface, a cache, and an address translation unit (ATU). During operation, the ATU can determine an operating mode. The operating mode can indicate whether the ATU is to perform a memory address translation at the NIC. The ATU can then determine whether a memory address indicated in a memory access request is available in the cache. If the memory address is not available in the cache, the ATU can perform an operation on the memory address based on the operating mode.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
G06F 13/14 - Handling requests for interconnection or transfer
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/2466 - Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic
H04L 47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 47/52 - Queue scheduling by attributing bandwidth to queues
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
H04L 47/629 - Ensuring fair share of resources, e.g. weighted fair queuing [WFQ]
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 49/9047 - Buffering arrangements including multiple buffers, e.g. buffer pools
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
The systems and methods disclosed herein optimize the deregistration of user equipment (UE) from the 5GC. For example, when the UE operates in single-registration mode, for an AMF 3GPP access registration with the drFlag attribute set to false (or absent) in UDM/UDR, a new attribute, singleRegIndication, is defined. The AMF sets the singleRegIndication value to “NO_INDICATION” when there is no N26 interface connection to the MME. UDM will not instruct HSS to cancel the MME (and SGSN/VLR) if there is an old AMF 3GPP registration for which “NO_INDICATION” or “DEREGISTER_SN” is set as the singleRegIndication value (i.e., the old AMF was already registered with single registration). Otherwise, the AMF sets the singleRegIndication value to “DEREGISTER_SN” when the AMF has established an N26 interface connection to the old MME, and UDM can instruct HSS to cancel the MME (and SGSN/VLR).
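The indication-setting and cancellation decision described above can be sketched as two simple predicates. This is a minimal illustration only; the function names are hypothetical and not 3GPP-normative.

```python
def set_single_reg_indication(has_n26_to_old_mme):
    """AMF side: choose the singleRegIndication value for the registration.

    With an N26 connection to the old MME, the AMF signals that the serving
    node should be deregistered; otherwise it signals no indication.
    """
    return "DEREGISTER_SN" if has_n26_to_old_mme else "NO_INDICATION"


def udm_should_cancel_mme(old_amf_indication):
    """UDM side: instruct HSS to cancel the MME (and SGSN/VLR) only when no
    prior AMF 3GPP registration already carried a single-registration
    indication ("NO_INDICATION" or "DEREGISTER_SN")."""
    return old_amf_indication not in ("NO_INDICATION", "DEREGISTER_SN")
```

In this sketch, a `None` value for `old_amf_indication` represents the absence of any old AMF registration, in which case UDM may proceed with the cancellation.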
A process includes generating, by a canary circuit of a semiconductor package, an output value. The semiconductor package includes a hardware root-of-trust engine for an electronic system. The process includes comparing, by the canary circuit, the output value to an expected value. The process includes, responsive to a result of the comparison, regulating, by the semiconductor package, a response of the electronic system to a reset request.
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
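The compare-and-regulate flow in the abstract above can be sketched as follows. This is illustrative only; the mismatch policy shown (deferring the reset) is an assumption, not the disclosed mechanism.

```python
def regulate_reset(canary_output, expected_value):
    """Compare the canary circuit's output to the expected value and gate
    the electronic system's response to a reset request accordingly.

    A mismatch may indicate environmental tampering (e.g., a glitch attack),
    so the sketch defers the reset in that case; the actual regulation
    policy is an implementation choice.
    """
    if canary_output == expected_value:
        return "reset_allowed"
    return "reset_deferred"  # assumed policy for a canary mismatch
```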
26.
RESTORING STATES OF REAL TIME CLOCK DEVICES FOR MULTIPLE HOSTS
A process includes, responsive to a primary power source being enabled, receiving, from a real time clock (RTC) device of a computer platform, an indication of a first time. Responsive to the primary power source being enabled, the process includes storing first data in a first non-volatile storage of the computer platform representing a snapshot of the first time and repeatedly updating the snapshot to cause the snapshot to track the first time. Responsive to the primary power source being disabled to begin a power outage, the process includes providing, by a timer of the computer platform powered by a secondary power source, a timer output that represents an accumulated time that corresponds to the power outage. The process includes restoring a state of the RTC device responsive to the primary power source being reenabled to end the power outage. The restoration includes, based on the accumulated time and the snapshot, determining a second time, and storing second data in the RTC device that represents the second time.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 1/14 - Time supervision arrangements, e.g. real time clock
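The restoration described in the abstract above reduces to adding the secondary-powered timer's accumulated outage time to the last tracked snapshot. A minimal sketch, assuming second-granularity counts and using an in-memory stand-in for the non-volatile storage:

```python
class RtcSnapshotter:
    """Track the RTC time while primary power is on, then restore it after
    an outage from the snapshot plus the accumulated outage time."""

    def __init__(self):
        self.nv_snapshot = None  # stand-in for non-volatile storage

    def track(self, rtc_time_s):
        """Repeatedly update the snapshot so it tracks the RTC time."""
        self.nv_snapshot = rtc_time_s

    def restore(self, accumulated_outage_s):
        """Restored RTC time = last snapshot + accumulated outage time."""
        return self.nv_snapshot + accumulated_outage_s
```

For example, if the last snapshot was 1010 s and the timer accumulated 50 s during the outage, the RTC device is restored to 1060 s.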
27.
TIMESTAMPING TAMPERING EVENTS THAT OCCUR DURING PRIMARY POWER OUTAGES
A process includes performing actions responsive to secondary power when primary power for a computer platform is unavailable. The actions include providing, by a timer of a computer platform, a timer output that is associated with a first time domain and corresponds to an accumulated time that primary power is unavailable. Moreover, these actions include using secondary power to detect tampering with the computer platform, and responsive to detecting the tampering, reading the timer output to provide a first timestamp that represents a time of detection of the tampering. The process further includes actions which are performed responsive to primary power being subsequently available. These actions include reading data from a non-volatile storage of the computer platform. The data represents a snapshot of a real time clock (RTC) device time that is provided by an RTC device that is powered by the primary power and corresponds to a second time domain. The actions further include transforming the first timestamp into a second timestamp associated with the second time domain based on the snapshot.
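The domain transformation described above amounts to offsetting the timer-domain timestamp by the snapshotted RTC time. A minimal sketch, assuming both quantities are in seconds and that the timer begins counting at the instant captured by the snapshot:

```python
def to_rtc_domain(tamper_timer_ts_s, rtc_snapshot_s):
    """Transform a tamper timestamp from the timer's time domain into the
    RTC device's time domain.

    The timer counts elapsed time since primary power became unavailable,
    and the snapshot holds the RTC time at that moment, so the tamper
    event's RTC-domain timestamp is their sum.
    """
    return rtc_snapshot_s + tamper_timer_ts_s
```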
In some examples, a device includes a first processing core comprising a resistive memory array to perform an analog computation, and a digital processing core comprising a digital memory programmable with different values to perform different computations responsive to respective different conditions. The device further includes a controller to selectively apply input data to the first processing core and the digital processing core.
A crossbar array includes a number of memory elements, a vector input register, and a vector output register. An analog-to-digital converter (ADC) is electronically coupled to the vector output register. A digital-to-analog converter (DAC) is electronically coupled to the vector input register. A processor is electronically coupled to the ADC and to the DAC. The processor may be configured to determine whether division of input vector data by output vector data from the crossbar array is within a threshold value and, if not within the threshold value, determine changed data values as between the output vector data and the input vector data, and write the changed data values to the memory elements of the crossbar array.
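The threshold check and incremental write-back described above can be sketched as follows. This is illustrative only: it assumes nonzero vector elements and a ratio-style threshold, and the changed-value representation is a hypothetical choice.

```python
def incremental_crossbar_update(input_vec, output_vec, threshold):
    """If element-wise input/output ratios stray beyond the threshold,
    return the (index, value) pairs whose values differ so only those
    memory elements need to be rewritten; otherwise return no updates.

    Assumes all elements are nonzero so the division is defined.
    """
    ratios = [i / o for i, o in zip(input_vec, output_vec)]
    if all(abs(r - 1.0) <= threshold for r in ratios):
        return []  # within threshold: no rewrite needed
    return [(idx, i)
            for idx, (i, o) in enumerate(zip(input_vec, output_vec))
            if i != o]
```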
A system comprising processing circuitry and a memory storing instructions that cause the system to detect a code change to source code included in a code repository, identify a relationship between the code change and an associated product feature, determine one or more dependent product features impacted by the code change, select a set of test cases including a subset of test cases related to the associated product feature and a subset of test cases related to the one or more dependent product features, execute the set of test cases, and update the code-to-feature mapping using results of executing the set of test cases.
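The selection step can be illustrated with a toy dependency graph. All names here are hypothetical; the abstract does not prescribe this data representation.

```python
def select_tests(changed_feature, dependents, tests_by_feature):
    """Select tests for the feature tied to a code change plus tests for
    the features that depend on it.

    dependents: feature -> set of dependent features impacted by a change
    tests_by_feature: feature -> set of test-case identifiers
    """
    impacted = {changed_feature} | dependents.get(changed_feature, set())
    selected = set()
    for feature in impacted:
        selected |= tests_by_feature.get(feature, set())
    return selected
```

For example, a change to an "auth" feature on which "billing" depends would select the test suites for both features while leaving unrelated suites unexecuted.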
Systems and methods are provided for employing a current input analog content addressable memory (CI-aCAM). The CI-aCAM is structured as an aCAM that allows the analog signal input into the aCAM cell to be received as current. A larger hardware architecture that combines two core analog compute circuits, namely a dot product engine (DPE) circuit for matrix multiplications and an aCAM circuit for search operations, can also be realized using the disclosed CI-aCAM. For instance, a DPE circuit, which outputs current signals, can be directly connected to the input of a CI-aCAM, which is designed to receive current signals in a manner that eliminates conversion steps and circuits (e.g., analog-to-digital and current-to-voltage). By leveraging CI-aCAMs, a combined DPE-aCAM hardware architecture can be realized as a substantially compact structure.
G06F 7/544 - Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using unspecified devices for evaluating functions by calculation
G11C 15/04 - Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
Systems and methods are provided for performing object store offloading. A user query can be received from a client device to access a data object. The semantic structure associated with the data object can be identified, as well as one or more relationships associated with the semantic structure of the data object. A view of the data object can be determined based on the one or more relationships and said view can be provided to a user interface.
Examples provide new roaming test systems for network deployments that can be implemented remotely using a single physical AP. Examples achieve this by emulating a physical network deployment using a group of VAPs provisioned on the single physical AP (a VAP may refer to a logical or virtual AP instance on a physical AP). Each VAP of the VAP group may be configured to represent a physical AP of the physical network deployment (such a network deployment may be a prospective deployment or an actual/set-up deployment). Examples can simulate/emulate a wireless client physically moving between physical APs of the network deployment by varying the transmission power associated with each VAP as a function of time, in a manner that mirrors how a wireless client would perceive transmission power varying for the physical APs of the network deployment (represented by the VAPs) as the wireless client moves across the geographical site of the network deployment.
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
H04W 36/16 - Performing reselection for specific purposes
34.
GOVERNING RESPONSES TO RESETS RESPONSIVE TO TAMPERING ACTIVITY DETECTION
A process includes receiving a given reset indication to reset a semiconductor package. The given reset indication is one of a time sequence of recent indications received by the semiconductor package. The semiconductor package includes a hardware root-of-trust. The process includes detecting an activity associated with the semiconductor package that is consistent with a tampering activity. The process includes governing a response of the semiconductor package to the given reset indication responsive to the detection of the activity.
G06F 21/75 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information by inhibiting the analysis of circuitry or operation, e.g. to counteract reverse engineering
G06F 1/14 - Time supervision arrangements, e.g. real time clock
G06F 21/72 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in cryptographic circuits
35.
SYSTEM AND METHOD FOR FACILITATING ON-DEMAND PAGING IN A NETWORK INTERFACE CONTROLLER (NIC)
A network interface controller (NIC) capable of on-demand paging is provided. The NIC can be equipped with a host interface, an operation logic block, and an address logic block. The host interface can couple the NIC to a host device. The operation logic block can obtain, from a remote device, a request for an operation based on a virtual memory address. The address logic block can obtain, from the operation logic block, a request for an address translation for the virtual memory address and issue an address translation request to the host device via the host interface. If the address translation is unsuccessful, the address logic block can send a page request to a processor of the host device via the host interface. The address logic block can then determine that a page has been allocated in response to the page request and reissue the address translation request.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
G06F 13/14 - Handling requests for interconnection or transfer
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/2466 - Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic
H04L 47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/52 - Queue scheduling by attributing bandwidth to queues
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
H04L 47/629 - Ensuring fair share of resources, e.g. weighted fair queuing [WFQ]
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
H04L 47/80 - Actions related to the user profile or the type of traffic
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 49/9047 - Buffering arrangements including multiple buffers, e.g. buffer pools
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
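The on-demand paging flow of the NIC's address logic block described above can be sketched as a translate / page-request / retry sequence. The objects below are stand-ins, not the NIC's actual interfaces; a dictionary plays the role of the translation table.

```python
class HostStub:
    """Stand-in for the host device: the processor allocates and maps a
    page when the NIC sends a page request."""

    def __init__(self, translations):
        self.translations = translations
        self.next_frame = 0

    def page_request(self, virtual_addr):
        self.translations[virtual_addr] = self.next_frame
        self.next_frame += 1


def translate_with_paging(virtual_addr, translations, host):
    """Attempt the address translation; on failure, send a page request to
    the host processor and reissue the translation once the page exists."""
    physical = translations.get(virtual_addr)
    if physical is None:
        host.page_request(virtual_addr)        # host allocates the page
        physical = translations.get(virtual_addr)  # reissued translation
    return physical
```

A first access to an unmapped address triggers the page request and succeeds on the reissued translation; subsequent accesses hit the existing mapping directly.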
36.
OPTICAL DEVICE HAVING UNIDIRECTIONAL MICRORING RESONATOR LASER CAPABLE OF SINGLE-MODE OPERATION
Examples described herein relate to an optical device. The optical device includes a first microring resonator (MRR) laser having a first resonant frequency and a first free spectral range (FSR). The first FSR is greater than a channel spacing of the optical device. Further, the optical device includes a first frequency-dependent filter formed along a portion of the first MRR laser via a common bus waveguide to attenuate one or more frequencies different from the first resonant frequency. A length of the common bus waveguide is chosen to achieve a second FSR of the common bus waveguide to be substantially equal to the channel spacing to enable a single-mode operation for the optical device. Moreover, the optical device includes a first reflector formed at a first end of the common bus waveguide to enhance a unidirectionality of optical signal within the first MRR laser.
H01S 3/08 - Construction or shape of optical resonators or components thereof
H01S 3/10 - Controlling the intensity, frequency, phase, polarisation or direction of the emitted radiation, e.g. switching, gating, modulating or demodulating
H01S 3/106 - Controlling the intensity, frequency, phase, polarisation or direction of the emitted radiation, e.g. switching, gating, modulating or demodulating by controlling devices placed within the cavity
H01S 3/107 - Controlling the intensity, frequency, phase, polarisation or direction of the emitted radiation, e.g. switching, gating, modulating or demodulating by controlling devices placed within the cavity using electro-optic devices, e.g. exhibiting Pockels or Kerr effect
37.
OPTIMIZING THE OPERATION OF A MICROSERVICE CLUSTER
Example implementations relate to optimizing operation of microservice clusters comprising multiple nodes, each executing a common self-sufficient microservice instance. The method includes, at a first node having a database instance comprising a plurality of rows and a plurality of columns, calculating a distinct hash per row to create a hash list, each hash identifying data contained in the respective row; publishing a distinct hash and/or the hash list to one or more of the plurality of nodes, each node having respective database instances and respective hash lists; at a second node, comparing the distinct hash and/or the hash list published by the first node to the hash list of the second node to identify any missing rows of data; and, in response to identifying, based on the comparison, a missing row(s) in the second node's database instance, updating the second node's database instance to include the missing row(s) of data.
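The per-row hashing and missing-row comparison can be sketched as follows. The hash function choice and the row serialization are assumptions for illustration; the publish mechanism between nodes is out of scope.

```python
import hashlib

def row_hash(row):
    """Compute one distinct hash per row, identifying the row's data."""
    return hashlib.sha256(repr(row).encode()).hexdigest()

def missing_rows(published_rows, local_hashes):
    """Compare the published rows' hashes against the local hash list and
    return the rows the local database instance is missing."""
    return [row for row in published_rows
            if row_hash(row) not in local_hashes]
```

The second node would append each returned row to its own database instance and extend its hash list accordingly.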
One aspect described in this application provides an apparatus that includes an external tray hose with an integrated booster pump for cooling a computing device. The booster pump can boost fluid flow and fluid pressure of coolant fluid pumped toward the computing device on a server tray. The booster pump includes a stator mounted around a booster pump housing with a bore. The stator includes one or more C-shaped lamination stacks for supporting coil windings. Energized coil windings generate a varying magnetic field for rotating an impeller within a housing bore to pump fluid along the housing bore of the booster pump. The apparatus includes a monitor module to monitor a set of sensors to obtain information about one or more of fluid temperature, fluid pressure, and device temperature; and a control module to adjust performance of the booster pump based on the information obtained from the set of sensors.
Example implementations relate to a cooling module, and a method of cooling an electronic circuit module. The cooling module includes first and second cooling components fluidically connected to each other. The first cooling component includes a first fluid channel having supply, return, and body sections, and a second fluid channel. The second cooling component includes an intermediate fluid channel. The body section is bifurcated into first and second body sections, and the first and second body sections are further merged into a third body section. The supply section is connected to the first and second body sections. The return section is connected to the third body section and the intermediate fluid channel via an inlet fluid-flow path established between the first and second cooling components. The second fluid channel is connected to the intermediate fluid channel via an outlet fluid-flow path established between the first and second cooling components.
A process includes testing a motor vehicle using a distributed computing system. The distributed computing system includes a plurality of hardware components and a plurality of software components. The plurality of hardware components includes first hardware components of the vehicle and second hardware components that are separate from the vehicle. The plurality of software components includes first software components of the vehicle and second software components separate from the vehicle. The process includes, responsive to the testing, generating, by the distributed computing system, an audit record. Generating the audit record includes determining, by the distributed computing system, integrity measurements of the first hardware components, the second hardware components, the first software components and the second software components. Generating the audit record further includes comparing, by the distributed computing system, the integrity measurements to reference measurements that correspond to a reference hardware configuration for the distributed computing system and a reference software configuration for the distributed computing system. Generating the audit record includes providing, by the distributed computing system, responsive to the comparison, digitally signed data for the audit record attesting to the distributed computing system having the reference hardware configuration and the reference software configuration in connection with the testing.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 11/267 - Reconfiguring circuits for testing, e.g. LSSD, partitioning
G06F 11/273 - Tester hardware, i.e. output processing circuits
41.
UNIFIED NETWORK MANAGEMENT FOR HETEROGENEOUS EDGE ENTERPRISE NETWORK
A system and method are provided which facilitate a unified network management for a heterogeneous edge enterprise network. In one aspect, the system establishes, by a cloud controller, a secure connection to an edge device in an edge network. The cloud controller deploys to the edge device a module which includes a proxy agent, thereby allowing the module to be installed on the edge device. The system monitors data obtained from the edge device based on a set of rules. Responsive to detecting a trigger condition based on the monitored data, the system performs a predetermined action corresponding to the detected trigger condition. The predetermined action can comprise sending a configuration command to be executed on the edge device or sending an action to be performed by the proxy agent on other entities communicatively coupled to the edge network (e.g., client devices and applications).
Systems and methods are provided for checking whether training data to be inputted into a training phase of a machine learning (ML) model is independent and identically distributed (IID) data, and for taking action based on that determination. One example of the present disclosure provides a method implemented by an edge node operating in a distributed swarm learning blockchain network. The method includes receiving and executing a smart contract that includes an agreed-upon definition of conforming data. The method further includes receiving one or more batches of training data for training the ML model, checking whether each batch of training data conforms to the agreed-upon definition of conforming data, tagging and isolating non-conforming batches of training data, and inputting conforming batches of training data into a training phase of the machine learning model. The conforming batches of training data are IID data.
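The conforming / non-conforming gate can be sketched as a simple partition. The `conforms` predicate below stands in for the smart contract's agreed-upon definition of conforming data; the tagging and isolation are represented by placing batches in a quarantine list.

```python
def partition_batches(batches, conforms):
    """Route batches that satisfy the conforming-data predicate toward
    training; tag and isolate the rest by collecting them separately."""
    training, quarantined = [], []
    for batch in batches:
        (training if conforms(batch) else quarantined).append(batch)
    return training, quarantined
```

Only the `training` list would be fed into the ML model's training phase.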
A method for securing a plurality of compute nodes includes authenticating a hardware architecture of each of a plurality of components of the compute nodes. The method also includes authenticating a firmware of each of the plurality of components. Further, the method includes generating an authentication database comprising a plurality of authentication descriptions that are based on the authenticated hardware architecture and the authenticated firmware. Additionally, a policy for securing a specified subset of the plurality of compute nodes is implemented by using the authentication database.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
44.
SYSTEM AND METHOD FOR FACILITATING EFFICIENT PACKET FORWARDING USING A MESSAGE STATE TABLE IN A NETWORK INTERFACE CONTROLLER (NIC)
One embodiment provides a network interface controller (NIC). The NIC can include a storage device, a network interface, a hardware list-processing engine (LPE), and a message state table (MST) logic block. The storage device can store an MST. The network interface can couple the NIC to a network. The LPE can perform message matching on a first packet of a message received via the network interface. The MST logic block can store results of the message matching in the MST and receive a request to read the results of the message matching from the MST if the NIC receives a second packet associated with the message.
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
G06F 13/14 - Handling requests for interconnection or transfer
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/2466 - Traffic characterised by specific attributes, e.g. priority or QoS using signalling traffic
H04L 47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 47/52 - Queue scheduling by attributing bandwidth to queues
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
H04L 47/6275 - Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
H04L 47/629 - Ensuring fair share of resources, e.g. weighted fair queuing [WFQ]
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 49/9047 - Buffering arrangements including multiple buffers, e.g. buffer pools
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
45.
DETECTING AN ANOMALY EVENT IN LOW DIMENSIONAL SPACE NETWORKS
Systems and methods are provided for reducing a number of performance metrics generated by network functions to a number of reduced dimension metrics, which can be used to detect anomalous behavior and generate a warning signal of the detected anomalous behavior. The disclosed systems and methods transform raw performance metrics in a high dimensionality space to a reduced number of metrics in a lower dimensionality space through dimensionality reduction techniques. Anomalous behavior in network performance is detected in the lower dimensionality space using the reduced dimension metrics. The systems and methods disclosed herein convert the reduced dimension metrics back to the high dimensionality space, such that the performance metrics from network functions can be utilized to understand and address potential problems in the network.
Example implementations relate to scheduling of jobs for a plurality of graphics processing units (GPUs) providing concurrent processing by a plurality of virtual GPUs (vGPUs). According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs, and allocations of existing jobs to one or more vGPUs are updated. The operational cost of operating the one or more GPUs and the migration cost of allocating the new job are minimized, and the allocations of the existing jobs on the one or more vGPUs are updated. The new job and the existing jobs are processed by the one or more GPUs in the computing system.
Example implementations relate to an integrated circuit (IC) package, an electronic device having the IC package, and a method of assembling the IC package to a printed circuit board (PCB) of the electronic device. The IC package includes a substrate, a chip, and an electromagnetic shield. The chip is coupled to the substrate. The electromagnetic shield is coupled to the substrate such that the chip is enclosed between the substrate and the electromagnetic shield. The electromagnetic shield includes a ferromagnetic material. Further, the electromagnetic shield protrudes beyond the substrate and is electrically grounded to the PCB to prevent an electromagnetic interference (EMI) noise from radiating through the IC package.
H05K 9/00 - Screening of apparatus or components against electric or magnetic fields
H01L 23/473 - Arrangements for cooling, heating, ventilating or temperature compensation involving the transfer of heat by flowing fluids by flowing liquids
H02M 7/00 - Conversion of ac power input into dc power output; Conversion of dc power input into ac power output
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
One aspect can provide a system and method for enforcing a single-domain registration of a user equipment (UE) roaming across different provider networks. During operation, the system can receive, at a subscriber-management entity (SME) from a first service node within a first provider's network, a location-update message associated with the UE. The system can identify a second service node within a second provider's network with which the UE has a previous registration and query a subscriber-information database to determine whether a single-domain-registration feature is enabled at the SME for the UE. In response to determining that the single-domain-registration feature is enabled, the system can send a location-cancellation message to the second service node to cause the second service node to cancel the previous registration of the UE and register the UE at the SME.
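The location-update handling at the subscriber-management entity can be sketched as follows. All names are hypothetical; the location-cancellation message to the old service node is represented by the returned value.

```python
def on_location_update(ue, new_node, registrations, feature_enabled):
    """On a location-update message from a new service node, cancel the
    UE's previous registration at a different service node when the
    single-domain-registration feature is enabled for the UE.

    registrations: UE -> currently registered service node
    feature_enabled: predicate consulting the subscriber-information DB
    Returns the node whose registration should be cancelled, or None.
    """
    cancelled = None
    old_node = registrations.get(ue)
    if old_node is not None and old_node != new_node and feature_enabled(ue):
        cancelled = old_node  # location-cancellation message target
    registrations[ue] = new_node  # register the UE at the new node
    return cancelled
```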
A method for optimizing operations of high-performance computing (HPC) systems includes collecting data associated with a plurality of workload performance profiling counters associated with a workload during runtime of the workload in an HPC system. Based on the collected data, the method includes using a machine-learning technique to classify the workload by determining a workload-specific fingerprint for the workload. The method includes identifying an optimization metric to optimize during running of the workload in the HPC system. The method includes determining an optimal setting for a plurality of tunable hardware execution parameters as measured against the optimization metric by varying at least a portion of the plurality of tunable hardware execution parameters. The method includes storing the workload-specific fingerprint, the optimization metric, and the optimal setting for the plurality of tunable hardware execution parameters as measured against the optimization metric in an architecture-specific knowledge database.
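The classify-then-tune flow above can be sketched briefly. The counter names, the energy metric, and the candidate frequency settings below are illustrative assumptions, not part of the disclosed system; a simple normalized-mean vector stands in for the ML-derived fingerprint:

```python
# Hypothetical sketch: derive a workload-specific fingerprint from
# performance-counter samples and pick the best tunable setting for a metric.
from statistics import mean

def fingerprint(samples):
    """Reduce per-interval counter samples to a normalized mean vector."""
    keys = sorted(samples[0])
    means = [mean(s[k] for s in samples) for k in keys]
    total = sum(means) or 1.0
    return tuple(round(m / total, 4) for m in means)

def best_setting(candidate_settings, measure):
    """Vary tunable parameters and keep the setting best for the metric."""
    return max(candidate_settings, key=measure)

# Assumed counter samples collected during the workload's runtime.
samples = [
    {"ipc": 1.8, "llc_misses": 120, "mem_bw": 40},
    {"ipc": 2.0, "llc_misses": 100, "mem_bw": 44},
]
fp = fingerprint(samples)

# Assumed optimization metric: a toy energy-efficiency model per setting.
settings = [{"freq_ghz": 2.0}, {"freq_ghz": 2.6}, {"freq_ghz": 3.4}]
best = best_setting(settings, lambda s: 1.0 / s["freq_ghz"] ** 2)

# Store fingerprint, metric, and optimal setting in the knowledge database.
knowledge_db = {fp: {"metric": "energy", "setting": best}}
```

The fingerprint tuple serves as the key into the architecture-specific knowledge database, so a later run with a matching fingerprint can reuse the stored setting.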
Examples of the presently disclosed technology provide automated firmware recommendation systems that inject the intelligence of machine learning into the firmware recommendation process. To accomplish this, examples train a machine learning model on troves of historical customer firmware update data on a dynamic basis (e.g., examples may train the machine learning model on a weekly basis to predict accepted firmware updates made by a vendor's customers across the most recent 6 months). From this dynamic training, the machine learning model can learn to predict/recommend an optimal firmware version for a customer/network device cluster based on firmware-related features, recent customer preferences, and other customer-specific factors. Once trained, examples can deploy the machine learning model to make highly tailored firmware recommendations for individual network device clusters of individual customers, taking the above-described factors into account.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
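A trivial stand-in for the trained model illustrates the recency-windowed recommendation idea: score candidate firmware versions by accepted updates inside a sliding window. The field names, window length, and version strings are assumptions for illustration only:

```python
# Illustrative sketch, not the disclosed ML model: recommend the firmware
# version with the most accepted updates inside a recent time window.
from datetime import date, timedelta

def recommend(candidates, history, today, window_days=180):
    cutoff = today - timedelta(days=window_days)
    scores = {c: 0 for c in candidates}
    for event in history:
        if event["accepted"] and event["when"] >= cutoff:
            if event["version"] in scores:
                scores[event["version"]] += 1
    return max(candidates, key=lambda c: scores[c])

today = date(2024, 6, 1)
history = [
    {"version": "10.4.2", "accepted": True,  "when": date(2024, 5, 20)},
    {"version": "10.4.2", "accepted": True,  "when": date(2024, 4, 2)},
    {"version": "10.3.9", "accepted": True,  "when": date(2023, 9, 1)},   # outside window
    {"version": "10.5.0", "accepted": False, "when": date(2024, 5, 30)},  # rejected
]
pick = recommend(["10.3.9", "10.4.2", "10.5.0"], history, today)
```

Retraining on a weekly cadence, as described above, would correspond to recomputing these scores (or model weights) as the window slides forward.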
One aspect provides a printed circuit board (PCB). The PCB can include a plurality of layers and a plurality of plated through-hole (PTH) vias extending through the plurality of layers. The plurality of layers can include at least a top layer for mounting components, a second surface layer, and a first power layer positioned between the top layer and the second surface layer. The plurality of PTH vias can include at least one power via coupled to the first power layer to provide power to components mounted on the top layer. A stub length of the power via can be less than a distance between the power layer and the second surface layer.
Example implementations relate to backup operations in a storage system. An example includes a medium storing instructions to: detect a trigger event to initiate a backup restoration of a data entity at a local storage system; determine a user preference between a speed priority and a cost priority; based at least on the determined user preference, select between: an indirect restoration option in which a first portion of backup data stored on a remote storage system is combined with a second portion of backup data stored on a gateway device to restore the data entity at the local storage system; and a direct restoration option in which the backup data stored on the remote storage system is restored at the local storage system without being combined with other backup data; and restore, using the selected restoration option, the data entity at the local storage system.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
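The selection between the two restoration options reduces to a small decision function. The inputs below (an egress-cost figure and a flag for whether the gateway holds a portion of the backup) are assumed stand-ins for whatever signals an implementation would actually consult:

```python
# Minimal sketch of choosing a restoration path from the user preference.
# The cost figure and gateway flag are illustrative assumptions.
def select_restoration(preference, cloud_egress_cost, gateway_has_portion):
    """Return 'indirect' when cost is prioritized and part of the backup
    already sits on the gateway device; otherwise restore directly."""
    if preference == "cost" and gateway_has_portion and cloud_egress_cost > 0:
        return "indirect"   # combine gateway portion with remote portion
    return "direct"         # pull everything from the remote system

choice_cost = select_restoration("cost", 0.09, True)    # favors indirect
choice_speed = select_restoration("speed", 0.09, True)  # favors direct
```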
53.
Secure configuration of a headless networking device
The secure configuration of a headless networking device is described. A label associated with the headless networking device is scanned and a public key is determined based on the scanned label. A configuration process is initiated for the networking device using the public key associated with the networking device.
G09C 5/00 - Ciphering or deciphering apparatus or methods not provided for in other groups of this subclass, e.g. involving the concealment or deformation of graphic data such as designs, written or printed messages
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
In embodiments of the present disclosure, there is provided an approach for selecting a channel puncturing scheme based on channel qualities. A method comprises detecting channel interference between neighbor access points (APs), and determining whether preamble puncturing needs to be enabled based on a set of puncturing conditions. After determining that preamble puncturing needs to be enabled, scores for candidate puncturing patterns are calculated based on channel qualities of the sub-channels in the channel, and a proper puncturing pattern can be selected based on the respective scores. Embodiments of the present disclosure can help an AP achieve a more effective puncturing scheme in real deployments.
Implementations of the present disclosure relate to setting a system time of an access point (AP) for server certificate validation. A method comprises obtaining a default time as a system time of the AP after the AP boots up. The method also comprises obtaining a memory time from a flash memory of the AP. The method also comprises updating the system time with the memory time obtained from the flash memory. The method also comprises validating a server certificate received from an authentication server based on the system time. The system time is synchronized with a network time if the server certificate is successfully validated based on the system time. The synchronized system time is then written into the flash memory. In this way, the authentication can be performed based on a reasonable system time even if the AP reboots.
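The boot-time logic can be sketched as follows. Epoch timestamps stand in for real clock and flash APIs, and the certificate window check is a simplified placeholder for full X.509 validation:

```python
# Sketch: prefer the time persisted in flash over the firmware's build-time
# default, so a certificate's validity window can be checked before NTP sync.
def boot_system_time(default_time, flash_time):
    """Use the flash-persisted time as the system time when it is newer."""
    if flash_time is not None and flash_time > default_time:
        return flash_time
    return default_time

def cert_valid(system_time, not_before, not_after):
    """Simplified validity-window check for the server certificate."""
    return not_before <= system_time <= not_after

default_epoch = 1_600_000_000          # assumed firmware build default
flash_epoch = 1_700_000_000            # assumed value read back from flash
t = boot_system_time(default_epoch, flash_epoch)
ok = cert_valid(t, 1_690_000_000, 1_750_000_000)
```

After successful validation and NTP synchronization, the synchronized time would be written back to flash for the next reboot.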
Example implementations relate to deduplication operations in a storage system. An example includes, in response to initiation of a new backup process to store a first data stream, initializing a temporary sparse index to be stored in a memory of a deduplication storage system; identifying a cloned portion of the first data stream; identifying at least one container index associated with the cloned portion of the first data stream; identifying a set of hook points included in the at least one container index; and populating the temporary sparse index with a set of entries, the set of entries mapping the identified set of hook points to the at least one container index.
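The sparse-index population step can be illustrated with plain dictionaries and sets; the hook-point values and container identifiers are made up for the example:

```python
# Sketch: seed a temporary sparse index mapping hook points (fingerprints)
# to the container indexes that contain them, restricted to the hook points
# found in the cloned portion of the incoming data stream.
def build_temp_sparse_index(container_indexes, cloned_hooks):
    """container_indexes: {container_id: set of hook points it holds}."""
    temp = {}
    for cid, hooks in container_indexes.items():
        for h in hooks & cloned_hooks:
            temp.setdefault(h, set()).add(cid)
    return temp

containers = {"C1": {0xAA, 0xBB}, "C2": {0xBB, 0xCC}}
cloned = {0xBB, 0xCC}                       # hook points of the cloned portion
temp_index = build_temp_sparse_index(containers, cloned)
```

During deduplication, an incoming chunk whose hook point appears in the temporary sparse index can be matched against the mapped container index without consulting the full on-disk index.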
In embodiments of the present disclosure, there is provided an approach for aligning target beacon transmission time (TBTT) for multi-link connection. A method comprises setting up a first link and a second link between an access point (AP) and a wireless device based on multi-link operation (MLO), and obtaining a first TBTT of the first link and a second TBTT of the second link. The method further comprises aligning the first TBTT and the second TBTT at a start time, and then transmitting beacon frames on the first link and the second link according to the aligning of the first TBTT and second TBTT. Embodiments of the present disclosure synchronize and align the beacon TBTTs for different links, and can reduce the wake-up time on all active links, thereby saving power for the wireless device.
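One simple way to realize the alignment described above is to schedule both links' beacons from the first common boundary of their beacon intervals. The time-unit values below are illustrative, and this least-common-multiple rule is an assumed alignment strategy, not necessarily the disclosed one:

```python
# Sketch: compute a common start time at which the TBTTs of two links
# coincide, by rounding `now` up to the next shared interval boundary.
from math import lcm

def aligned_start(now, interval_a, interval_b):
    """First time >= now that is a TBTT boundary for both intervals."""
    period = lcm(interval_a, interval_b)
    return ((now + period - 1) // period) * period

# Assumed beacon intervals in time units (TUs) for the two links.
start = aligned_start(250, 100, 150)      # next shared boundary after t=250
already = aligned_start(600, 100, 150)    # t=600 is already aligned
```

Once aligned, the wireless device can wake once per shared boundary to receive beacons on both links, which is the power saving noted above.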
Handling frequently accessed pages is disclosed. An indication is received of a stalling event caused by a requested portion of memory being inaccessible. It is determined that the requested portion of memory is a frequently updated portion of memory. The stalling event is handled based at least in part on the determination that the requested portion of memory is a frequently updated portion of memory.
Examples described herein mitigate adjacent channel interference for dual radios of a network device. Examples described herein may associate, by a network device, a first client device with a first radio of the network device, associate, by the network device, a second client device with a second radio of the network device, determine that the first and second client devices are within a steering threshold, and based on the determination that the first and second client devices are within the steering threshold, steer, by the network device, the second client device from the second radio to the first radio. Examples described herein may communicate, using the first radio of the network device, with the first and second client devices.
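The steering decision hinges on whether the two clients fall within the steering threshold. As a sketch, a Euclidean distance between assumed client positions serves as a proxy for whatever proximity metric (e.g., RSSI-based) an implementation would use:

```python
# Sketch: decide whether to steer the second client onto the first radio.
# Positions and the distance metric are illustrative stand-ins for an
# RSSI-derived proximity estimate.
def should_steer(pos_a, pos_b, threshold):
    dist = ((pos_a[0] - pos_b[0]) ** 2 + (pos_a[1] - pos_b[1]) ** 2) ** 0.5
    return dist <= threshold

near = should_steer((0, 0), (3, 4), 5.0)    # within threshold -> steer
far = should_steer((0, 0), (30, 40), 5.0)   # beyond threshold -> keep split
```

When `should_steer` returns true, both clients are served by the first radio, freeing the second radio from operating on an adjacent channel at close range.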
One aspect provides a system and method for provisioning an access point (AP) in a wireless mesh network. During operation, a controller can obtain a set of published global encryption parameters comprising a master public key, apply an identity-based encryption (IBE) scheme to encrypt a configuration message based at least on the master public key, and transmit the encrypted configuration message to a proxy device, which forwards the encrypted configuration message to the AP. The proxy device is coupled to the controller via a previously established secure communication channel and coupled to the AP via an open communication channel. The AP can decrypt the encrypted configuration message using an AP-specific secret key generated based on a unique identifier of the AP and a master private key corresponding to the master public key, thereby facilitating provisioning of the AP based on the configuration message.
Examples described herein relate to a security management system to secure a container ecosystem. In some examples, the security management system may protect one or more entities such as container management applications, container images, containers, and/or executable applications within the containers. The security management system may make use of digital cryptography to generate digital signatures corresponding to one or more of these entities and verify them during the execution so that any compromised entities can be blocked from execution and the container ecosystem may be safeguarded from any malicious network attacks.
G06F 21/71 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
64.
MACHINE LEARNING FACETS FOR DATASET PREPARATION IN STORAGE DEVICES
Examples described herein relate to preparing datasets in a storage device for machine learning (ML) applications. Examples include maintaining ML facet mappings between ML facets and dataset preparation tags, deriving ML facets of a dataset stored in the storage device, and generating a filtered dataset from the dataset using the ML facets and ML facet mappings. The filtered dataset is associated with improved dataset quality compared to the unfiltered dataset. The storage device transmits the filtered dataset to ML applications requesting the dataset. Some examples include recommending, by the storage device, ML facets to the ML application based on performance metrics.
Embodiments of the present disclosure relate to transmission of network access information for wireless devices. A method comprises transmitting an authorization request for the wireless device to a server upon receiving a presence announcement message from a wireless device. The method further comprises receiving an authorization response from the server including network access information and bootstrapping information of the wireless device. The method further comprises performing authentication with the wireless device based on the bootstrapping information. The method also comprises transmitting the network access information to the wireless device. The network access information includes a service set identifier (SSID) for a wireless local area network (WLAN) and credential information for the wireless device to access the WLAN. By automatically distributing network access information to the wireless device without requiring any user input, the efficiency of device provisioning and the security of the WLAN can be improved.
Examples to restore a trusted backup configuration for a node. Example techniques include failover to an alternate firmware of the node, in response to an unverifiable condition of an existing firmware of the node. The node may validate a first configuration file stored in the node. The first configuration file includes a first backup configuration. The node may validate a second configuration file stored in the node based on the validation of the first configuration file. The second configuration file includes a second backup configuration. In response to the validation of at least one of the first configuration file and the second configuration file, the node may select one of the first backup configuration and the second backup configuration, and apply the selected backup configuration to the node.
G06F 21/73 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information by creating or determining hardware identification, e.g. serial numbers
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/72 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in cryptographic circuits
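The validate-then-select flow for the two backup configurations can be sketched with a digest check standing in for whatever signature or integrity mechanism the node actually uses; timestamps and payloads below are invented for illustration:

```python
# Sketch: validate stored configuration files and apply the newest one
# that passes validation. SHA-256 digests stand in for the real
# integrity/signature check.
import hashlib

def validate(blob, expected_digest):
    return hashlib.sha256(blob).hexdigest() == expected_digest

def select_backup(configs):
    """configs: list of (timestamp, blob, expected_digest) tuples."""
    valid = [(t, b) for t, b, d in configs if validate(b, d)]
    if not valid:
        return None
    return max(valid)[1]   # newest validated configuration

d_old = hashlib.sha256(b"cfg-old").hexdigest()
d_new = hashlib.sha256(b"cfg-new").hexdigest()
# The newer file is corrupted, so its digest check fails.
configs = [(1, b"cfg-old", d_old), (2, b"corrupt", d_new)]
chosen = select_backup(configs)
```

Here the corrupted second configuration is rejected and the node falls back to the older validated backup, mirroring the selection between the first and second backup configurations described above.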
A bracket for a printed circuit board assembly (PCA) may comprise a first portion configured to be coupled to a first face of a first printed circuit board (PCB) and a second portion configured to be coupled to a first face of a second PCB. In an assembled state of the PCA, the first and second PCBs are in a stacked arrangement with the respective first faces thereof facing in a first direction. Moreover, in the assembled state, the bracket controls a distance between the respective first faces of the first and second PCBs along the first direction independent of respective thicknesses of the first and second PCBs along the first direction.
H01R 12/73 - Coupling devices for rigid printing circuits or like structures coupling with the edge of the rigid printed circuits or like structures connecting to other rigid printed circuits or like structures
One aspect of the application can provide a system and method for replacing a failing node with a spare node in a non-uniform memory access (NUMA) system. During operation, in response to determining that a node-migration condition is met, the system can initialize a node controller of the spare node such that accesses to a memory local to the spare node are to be processed by the node controller, quiesce the failing node and the spare node to allow state information of processors on the failing node to be migrated to processors on the spare node, and subsequent to unquiescing the failing node and the spare node, migrate data from the failing node to the spare node while maintaining cache coherence in the NUMA system and while the NUMA system remains in operation, thereby facilitating continuous execution of processes previously executed on the failing node.
Examples described herein relate to techniques for concurrent deployment of a set of network function (NF) instances of a network slice. In some examples, each NF instance may be registered at the NRF with its status indicator set to DEPLOYING. Further, a determination may be made as to whether a first NF instance is waiting to be deployed and deployment of the first NF instance is dependent on a second NF instance that is pending deployment. Responsive to the determination, a status indicator of the first NF instance may be updated from DEPLOYING to WAIT_REGISTERING. Further, the first NF instance may subscribe to be notified at the NRF of a change in the status indicator of the second NF instance. Responsive to being notified, the first NF instance may update its status indicator to REGISTERED, such that the first NF instance is discoverable by other NF instances of the set of NF instances.
H04L 41/0893 - Assignment of logical groups to network elements
H04L 41/122 - Discovery or management of network topologies of virtualised topologies e.g. software-defined networks [SDN] or network function virtualisation [NFV]
H04L 41/5051 - Service on demand, e.g. definition and deployment of services in real time
H04W 60/00 - Affiliation to network, e.g. registration; Terminating affiliation with the network, e.g. de-registration
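The DEPLOYING → WAIT_REGISTERING → REGISTERED transitions form a small state machine. In this sketch a plain dictionary stands in for the NRF registry, and the NF names are illustrative, not tied to any particular 5G deployment:

```python
# Sketch of the NF status transitions during concurrent slice deployment.
# `registry` stands in for the NRF; `subscribers` tracks who is waiting
# to be notified about a dependency's status change.
registry = {}
subscribers = {}

def register(nf):
    registry[nf] = "DEPLOYING"

def wait_on(nf, dependency):
    registry[nf] = "WAIT_REGISTERING"
    subscribers.setdefault(dependency, []).append(nf)

def mark_registered(nf):
    registry[nf] = "REGISTERED"
    for waiter in subscribers.pop(nf, []):
        registry[waiter] = "REGISTERED"   # notification completes registration

register("nf-a"); register("nf-b")
wait_on("nf-a", "nf-b")     # nf-a depends on nf-b, which is still deploying
mark_registered("nf-b")     # notification flips nf-a to REGISTERED as well
```

Only instances in the REGISTERED state are discoverable by the other NF instances of the slice, which is what prevents a dependent instance from being discovered prematurely.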
Examples described herein relate to a scheduling assistance sub-system for deploying a container in a cluster comprising member nodes. The scheduling assistance sub-system receives a container deployment request to deploy the container and forwards it to a container scheduler after determining the resource requirements of the container. In some examples, at the time of receiving the container deployment request, the member nodes collectively host a plurality of already-deployed containers. Responsive to receiving the container deployment request, the scheduling assistance sub-system determines if the container deployment request is assigned a pending status by the container scheduler. Further, the scheduling assistance sub-system may identify a set of preemptable containers on a single member node based on the resource requirements of the container. Furthermore, the scheduling assistance sub-system may preempt the set of preemptable containers on the single member node thereby releasing resources for deployment of the container on the single member node.
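The preemption-candidate search can be sketched as a scan for a single member node whose preemptable containers would free enough resources. Node names, resource units, and the first-fit policy below are assumptions for illustration:

```python
# Sketch: find one member node whose preemptable containers free enough
# CPU and memory for the pending container (first fit over sorted names).
def find_preemption_node(nodes, need_cpu, need_mem):
    """nodes: {name: [(cpu, mem, preemptable), ...]} of running containers."""
    for name, containers in sorted(nodes.items()):
        cpu = sum(c for c, _, p in containers if p)
        mem = sum(m for _, m, p in containers if p)
        if cpu >= need_cpu and mem >= need_mem:
            victims = [c for c in containers if c[2]]
            return name, victims
    return None, []

nodes = {
    "node-a": [(2, 4, True), (1, 2, False)],
    "node-b": [(4, 8, True), (2, 4, True)],
}
name, victims = find_preemption_node(nodes, need_cpu=5, need_mem=10)
```

Only `node-b` can release enough resources on a single node, so its preemptable containers are selected as victims and the pending container is scheduled there.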
In some examples, a system includes a processor, a management controller, and a programmable device to provide input/output (I/O) expansion emulation to support communication with a plurality of I/O devices of a subsystem coupled to the system, where the programmable device provides a plurality of virtual registers as part of the I/O expansion emulation, the virtual registers associated with respective I/O devices of the plurality of I/O devices. The processor writes a value to a first virtual register of the plurality of virtual registers to trigger an output event relating to a first I/O device of the plurality of I/O devices at the subsystem. The management controller reads the first virtual register and, in response to the value written to the first virtual register, interacts with the subsystem to issue the output event relating to the first I/O device at the subsystem.
An electromagnetic interference (EMI) shield may include a frame configured to be coupled to a printed circuit board (PCB). The frame may include a horizontal body and a plurality of vertical walls extending perpendicularly from the horizontal body. The plurality of vertical walls defines a portion of a perimeter of the EMI shield, the perimeter including a concave corner defined virtually by a first and second vertical wall of the plurality of vertical walls. The first vertical wall does not extend all the way to the second wall, and an opening is formed between them. The first vertical wall includes an attached portion connected to the horizontal body and an extension portion connected to the attached portion by way of a first fold. The extension portion at least partially overlays, abuts, and extends beyond the attached portion toward the second wall, thereby at least partially covering the opening.
A system for facilitating sender-side congestion control is provided. During operation, the system, on a sender node, can determine the utilization of a buffer at a last-hop switch to a receiver node based on in-flight packets to the receiver node. The receiver node can be reachable from the sender node via the last-hop switch. The system can then determine a fraction of available space in the buffer for packets from the sender node based on the utilization of the buffer. Subsequently, the system can determine whether the fraction of the available space in the buffer can accommodate a next packet from the sender node while avoiding congestion at the buffer at the receiver node. If the fraction of the available space in the buffer can accommodate the next packet, the system can allow the sender node to send the next packet to the receiver node.
H04L 47/722 - Admission control; Resource allocation using reservation actions during connection setup at the destination endpoint, e.g. reservation of terminal resources or buffer space
H04L 47/12 - Avoiding congestion; Recovering from congestion
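The admission check described above can be sketched in a few lines. The buffer size, the equal-share fairness rule, and the sender count are assumptions; a real implementation would derive these from network state:

```python
# Sketch of the sender-side check: estimate free last-hop buffer space
# from in-flight bytes, take this sender's (assumed equal) share of it,
# and admit the next packet only if it fits within that share.
def can_send(buffer_bytes, in_flight_bytes, n_senders, next_pkt_bytes):
    free = max(buffer_bytes - in_flight_bytes, 0)
    my_share = free / max(n_senders, 1)
    return next_pkt_bytes <= my_share

ok = can_send(64_000, 48_000, 4, 1500)        # 4 KB share, 1500 B packet fits
blocked = can_send(64_000, 62_000, 4, 1500)   # 500 B share, packet held back
```

Holding the packet back when the share is exhausted is what keeps the last-hop buffer from overflowing without relying on drops or explicit congestion signals from the switch.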
Systems and methods are provided for receiving, from an access point, attributes of an Internet of Things (IoT) device connected to the access point, determining a stored device, in a database of a server, sharing a subset of the attributes of the IoT device, and generating a code bundle based on the subset of the shared attributes between the stored device and the IoT device.
H04L 43/065 - Generation of reports related to network devices
H04L 43/0811 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking connectivity
G16Y 40/35 - Management of things, i.e. controlling in accordance with a policy or in order to achieve specified objectives
75.
PRESSURE REGULATOR ASSEMBLY FOR A COOLANT DISTRIBUTION UNIT
Example implementations relate to a pressure regulator assembly for a closed fluid loop of a coolant distribution unit (CDU). The pressure regulator assembly has a cylinder having an internal volume, and first and second hollow pistons slidably connected to the cylinder to split the internal volume into a first volume portion having cooling fluid, a second volume portion having driver fluid, and a third volume portion having compressible matter. The first volume portion is fluidically connected to the closed fluid loop. The first hollow piston is reciprocated by the compressible matter to maintain an operating pressure of the cooling fluid in the closed fluid loop. The second hollow piston is driven by the driver fluid, in response to a predefined pressure drop of the cooling fluid during a predefined time period, to inject additional cooling fluid from the first volume portion into the closed fluid loop to restore the pressure level of the cooling fluid to the operating pressure.
One aspect of the instant application describes a system that includes a plurality of stacked mezzanine boards communicatively coupled to a motherboard and a metal enclosure enclosing the motherboard and mezzanine boards. A respective mezzanine board can include a number of solder pads, and the metal enclosure can include a plurality of metal strips, a respective metal strip to make contact with a solder pad of a corresponding mezzanine board. The system can further include a logic module positioned on the respective mezzanine board to determine a location of the respective mezzanine board based on a contact pattern between the metal strips and solder pads of the respective mezzanine board.
A system for facilitating efficient port reconfiguration at a switch is provided. During operation, the system can identify a target port of the switch for reconfiguration based on one or more reconfiguration parameters indicating how a set of logical ports are generated from the target port. The system can disable the target port at the control plane of the switch, which disables features provided to the target port from the control plane. The control plane can provide a set of features supported by the switch at a port-level granularity for facilitating operations of the switch. The system can then configure the forwarding hardware based on the reconfiguration parameters to accommodate the set of logical ports. When the reconfiguration of the target port is complete, the system can enable a respective logical port at the control plane, which enables one or more features for the logical port from the control plane.
A method and system are provided which facilitate synchronization of client IP binding databases across an extended network by leveraging the BGP control plane. During operation, a switch configures a first synchronization identifier indicating validated Internet Protocol (IP) binding information of an associated client. The switch receives a Border Gateway Protocol (BGP) update message associated with a first client, wherein the BGP update message includes a second synchronization identifier. Responsive to determining that the second synchronization identifier matches the first synchronization identifier, the switch: extracts from the BGP update message reachability information, which includes media access control (MAC) and IP information associated with the first client; validates the MAC and IP information based on security policies; and adds the MAC and IP information to a local IP binding database, thereby allowing synchronization of the validated IP binding information of the first client between the switch and other switches.
In some examples, a system computes a measure of data overwrites to a data segment stored in a storage structure, where the measure of data overwrites indicates a quantity of overwrites of data in the data segment. The system compares the measure of data overwrites to a criterion. In response to determining that the measure of data overwrites has a first relationship with respect to the criterion, the system disables data reduction for the data segment.
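The overwrite-tracking decision reduces to a counter compared against a criterion. The threshold value below is an arbitrary assumption; the disclosed system would use its own criterion:

```python
# Sketch: count overwrites per data segment and disable data reduction
# (dedup/compression) once the count reaches an assumed threshold.
def record_overwrite(state, segment, threshold=8):
    """Returns True while reduction should stay enabled for the segment."""
    state[segment] = state.get(segment, 0) + 1
    return state[segment] < threshold

state = {}
results = [record_overwrite(state, "seg-0") for _ in range(8)]
```

After the eighth overwrite the function returns False, signaling that further reduction effort on this frequently rewritten segment would be wasted.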
In some examples, a system of an enterprise network sends, in response to a request for authentication transmitted in response to a request by an electronic device to access the enterprise network, an authentication request from the system to a server that is part of a carrier network. The system receives, in response to the authentication request, an authentication response that contains a value representing a mobile number for the electronic device, and checks whether the mobile number represented by the value in the authentication response is present in a user information repository. The system performs authorization of the electronic device based on the check of whether the mobile number represented by the value in the authentication response is present in the user information repository, the authorization for the electronic device to determine an access permission of the electronic device in the enterprise network.
A switch architecture for a data-driven intelligent networking system is provided. The system can accommodate dynamic traffic with fast, effective congestion control. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform flow control on a per-flow basis.
H04L 45/28 - Routing or path finding of packets in data switching networks using route fault recovery
H04L 45/028 - Dynamic adaptation of the update intervals, e.g. event-triggered updates
H04L 45/125 - Shortest path evaluation based on throughput or bandwidth
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 45/122 - Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 45/021 - Ensuring consistency of routing table updates, e.g. by using epoch numbers
H04L 47/12 - Avoiding congestion; Recovering from congestion
G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/122 - Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H04L 43/10 - Active monitoring, e.g. heartbeat, ping or trace-route
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
A signal cable for an AC-coupled link may include: a signal conductor; a dielectric surrounding the signal conductor; and a ground sheath having a conductive layer disposed at least partially around the signal conductor such that the dielectric is positioned between the ground sheath and the signal conductor, wherein the conductive layer comprises a first portion extending in a first direction along the cable and a second portion extending in a second direction, opposite the first direction, along the cable, and further wherein the first and second portions of the conductive layer are separated from each other by a gap, the gap being dimensioned to provide a determined amount of capacitance in series in the ground sheath. The gap may form a complete separation between the first and second portions of the conductive layer.
Embodiments of the disclosure provide a system, method, or computer readable medium for providing a differentiable content addressable memory (aCAM) that implements an analog-input, analog-storage, and analog-output learning memory. The analog output of the differentiable aCAM can provide input to a learning algorithm, which may compute the gradients in comparison to historic values and reduce data inaccuracies and power consumption.
G11C 15/04 - Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
G11C 7/16 - Storage of analogue signals in digital stores using an arrangement comprising analogue/digital [A/D] converters, digital memories and digital/analogue [D/A] converters
H03M 1/18 - Automatic control for modifying the range of signals the converter can handle, e.g. gain ranging
84.
Method for selectively connecting mobile devices to 4G or 5G networks and network federation which implements such method
A method of selectively connecting mobile devices to 4G and/or 5G networks includes preparing a plurality of isolated 4G and/or 5G networks configured to define a network federation and each having a Radio Access Network (RAN), a PDN Gateway node or a User Plane Function (UPF), and an application server; preparing a plurality of mobile devices; connecting the mobile devices to the networks to exchange data traffic with the application server via, at least, the RAN and the PDN Gateway or UPF node; preparing a connectivity network configured to selectively connect the networks to each other; selecting one reference PDN Gateway or UPF node associated with a network of the federation; and migrating the data traffic associated with all the mobile devices connected to the networks other than the reference network to the application server associated with the reference network that includes the previously selected PDN Gateway or UPF node.
H04W 76/16 - Setup of multiple wireless link connections involving different core network technologies, e.g. a packet-switched [PS] bearer in combination with a circuit-switched [CS] bearer
85.
ITERATIVE PROGRAMMING OF ANALOG CONTENT ADDRESSABLE MEMORY
Embodiments of the disclosure provide a system, method, or computer readable medium for programming a target analog voltage range of an analog content addressable memory (aCAM) row. The method may comprise calculating a threshold current sufficient to switch a sense amplifier (SA) on and discharge a match line (ML) connected to a cell of the aCAM; and based on calculating the threshold current, programming a match threshold value by setting a memristor conductance in association with the target analog voltage range applied to a data line (DL) input. The target analog voltage range may comprise a target analog voltage range vector.
G11C 27/00 - Electric analogue stores, e.g. for storing instantaneous values
G11C 15/04 - Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
86.
SIMPLIFIED RAID IMPLEMENTATION FOR BYTE-ADDRESSABLE MEMORY
One aspect of the instant application can provide a storage system. The storage system can include a plurality of byte-addressable storage devices and a plurality of media controllers. A respective byte-addressable storage device is to store a parity block or a data block of a data stripe, and a respective media controller is coupled to a corresponding byte-addressable storage device. Each media controller can include a tracker logic block to serialize critical sections of multiple media-access sequences associated with an address on the corresponding byte-addressable storage device. Each media-access sequence comprises one or more read and/or write operations, and the data stripe may be inconsistent during a critical section of a media-access sequence.
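As a rough illustration of the serialization idea above (not the disclosed hardware implementation), the sketch below uses one lock per address so that concurrent media-access sequences touching the same address run their critical sections one at a time, keeping the data block and its parity consistent at the end of each sequence. The `Tracker` class and the toy XOR parity are invented for this sketch.

```python
# Sketch: serialize critical sections of media-access sequences per address,
# so reads never observe a half-updated (data, parity) pair.
import threading

class Tracker:
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def lock_for(self, address):
        # One lock per address; sequences to different addresses proceed freely.
        with self._guard:
            return self._locks.setdefault(address, threading.Lock())

data = {0: 0}
parity = {0: 0}
tracker = Tracker()

def write_sequence(address, value):
    # Critical section: the data block and parity block must change together.
    with tracker.lock_for(address):
        data[address] = value
        parity[address] = value ^ 0xFF  # toy parity

threads = [threading.Thread(target=write_sequence, args=(0, v)) for v in range(8)]
for t in threads: t.start()
for t in threads: t.join()
consistent = parity[0] == (data[0] ^ 0xFF)
print(consistent)  # parity matches data after every serialized sequence
```

Whichever sequence finishes last, data and parity were updated inside the same critical section, so they agree once all sequences complete.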
Systems and methods are provided for maintaining a desired efficiency of use of resources in a computing system, such as a high performance computing (HPC) system in conjunction with a desired quality of service (QoS) associated with performance of an application executed by the resources. Efficiency and QoS may be considered together, and the provided systems and methods optimize both during application runtime.
A data center for data backup and replication, including a pool of multiple storage units for storing a journal of I/O write commands issued at respective times, wherein the journal spans a history window of a pre-specified time length, and a journal manager for dynamically allocating more storage units for storing the journal as the journal size increases, and for dynamically releasing storage units as the journal size decreases.
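The allocation behavior described above can be sketched minimally as follows; the `JournalManager` class, the unit size, and the entry format are all invented for illustration and are not taken from the disclosure.

```python
# Sketch: a journal manager that grows and shrinks a pool of fixed-size
# storage units as the journal size changes.
UNIT_SIZE = 4  # capacity of one storage unit, in journal entries (illustrative)

class JournalManager:
    def __init__(self):
        self.units = [[]]  # pool of storage units, each a list of entries

    def append(self, entry):
        if len(self.units[-1]) == UNIT_SIZE:
            self.units.append([])       # dynamically allocate another unit
        self.units[-1].append(entry)

    def trim_before(self, cutoff):
        """Drop entries older than the history window; release empty units."""
        for unit in self.units:
            unit[:] = [e for e in unit if e[0] >= cutoff]
        # Release units that became empty, keeping at least one.
        self.units = [u for u in self.units if u] or [[]]

jm = JournalManager()
for t in range(10):
    jm.append((t, f"write-{t}"))   # (timestamp, I/O write command)
grown = len(jm.units)              # pool grew to hold 10 entries
jm.trim_before(8)                  # window now covers timestamps >= 8
shrunk = len(jm.units)             # units released as the journal shrank
print(grown, shrunk)
```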
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
89.
CLIENT UPDATE OF DATA MODIFICATION TRACKING STRUCTURE
In some examples, a client system, in response to a request to modify a first data page at a memory server in a remote access by a client over a network, sends, to the memory server, a request to update a data modification tracking structure stored by the memory server to indicate that the first data page is modified. The client system initiates an incremental data backup from the memory server to a backup storage system of data pages indicated as modified by the data modification tracking structure stored at the memory server.
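A minimal sketch of this flow, assuming a simple dirty-page flag array as the tracking structure (the class and method names are invented for illustration):

```python
# Sketch: the client marks a page as modified in a tracking structure held by
# the memory server; an incremental backup copies only the flagged pages.
class MemoryServer:
    def __init__(self, num_pages):
        self.pages = [b""] * num_pages
        self.dirty = [False] * num_pages   # data modification tracking structure

    def mark_modified(self, page_no):
        self.dirty[page_no] = True         # client-issued update request

    def write_page(self, page_no, content):
        self.pages[page_no] = content

def incremental_backup(server, backup):
    for i, is_dirty in enumerate(server.dirty):
        if is_dirty:
            backup[i] = server.pages[i]    # copy only modified pages
            server.dirty[i] = False        # reset flag after backup
    return backup

srv = MemoryServer(4)
srv.mark_modified(2)                        # client signals intent to modify
srv.write_page(2, b"new data")
backup = incremental_backup(srv, {})
print(sorted(backup))  # only the modified page number appears
```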
In some examples, a system determines whether a chain of functions violates a constraint, based on accessing a tracking structure populated with entries as the functions are invoked by respective server processes launched during execution of a database operation, where each entry of the entries of the tracking structure identifies a respective invoked function that is associated with a corresponding program instance, and detecting, using the tracking structure, related functions that form the chain, the related functions being identified as related if associated with a same program instance. In response to determining that the chain of the functions violates the constraint, the system blocks an invocation of a further function to be added to the chain.
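The chain check can be sketched as below; the tracking structure here is a flat list of (function, instance) entries, and the constraint is an invented maximum chain length, chosen only to make the blocking decision concrete.

```python
# Sketch: entries sharing a program instance form a chain; a new invocation is
# blocked if adding it would make the chain violate the constraint.
MAX_CHAIN = 3  # illustrative constraint

tracking = []  # tracking structure: (function_name, program_instance) entries

def record(function_name, instance):
    tracking.append((function_name, instance))

def may_invoke(instance):
    # Related functions: those associated with the same program instance.
    chain = [f for f, inst in tracking if inst == instance]
    return len(chain) + 1 <= MAX_CHAIN  # block if the chain would grow too long

record("f1", "inst-A"); record("f2", "inst-A"); record("f3", "inst-A")
record("g1", "inst-B")
allowed_b = may_invoke("inst-B")  # chain of length 1, may still grow
allowed_a = may_invoke("inst-A")  # chain already at the limit: blocked
print(allowed_a, allowed_b)
```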
In some examples, a system provides, to a first server, access to a copy of a volume of data associated with a second server, where the first server is protected against unauthorized access. The first server receives first signatures generated by an agent in the second server based on applying a function on data objects of the volume. The first server generates second signatures based on applying the function on data objects of the copy of the volume. The first server determines whether malware that performs unauthorized data encryption of data of the second server is present, based on comparing the second signatures to the first signatures.
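The signature comparison can be sketched as follows, using SHA-256 as a stand-in for the unspecified function and an invented mismatch threshold; widespread mismatches against the known-good copy suggest the live objects were encrypted.

```python
# Sketch: hash each data object on both sides and flag widespread mismatch,
# which can indicate unauthorized encryption of the live volume.
import hashlib

def signatures(objects):
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in objects.items()}

def encryption_suspected(agent_sigs, copy_sigs, threshold=0.5):
    mismatched = sum(1 for name in copy_sigs
                     if agent_sigs.get(name) != copy_sigs[name])
    return mismatched / len(copy_sigs) >= threshold

# Signatures the first server derives from its protected copy of the volume.
copy_sigs = signatures({"a.txt": b"hello", "b.txt": b"world"})

# Agent-reported signatures: once from clean data, once from ciphertext.
clean = signatures({"a.txt": b"hello", "b.txt": b"world"})
infected = signatures({"a.txt": b"\x93\x1f\x00\x41", "b.txt": b"\x02\x77\x5e"})

print(encryption_suspected(clean, copy_sigs),
      encryption_suspected(infected, copy_sigs))
```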
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
92.
DATA INTAKE BUFFERS FOR DEDUPLICATION STORAGE SYSTEM
Example implementations relate to data storage. An example includes a method comprising: receiving a data stream to be stored in a persistent storage of a deduplication storage system; assigning new data units to container indexes; storing the new data units of the data stream in a plurality of intake buffers, where each new data unit is stored in the intake buffer associated with the container index it is assigned to; determining whether a cumulative amount stored in the plurality of intake buffers exceeds a first threshold; in response to a determination that the cumulative amount exceeds the first threshold, determining a least recently updated intake buffer of the plurality of intake buffers; generating a first container entity group object comprising a set of data units stored in the least recently updated intake buffer; and writing the first container entity group object from memory to the persistent storage.
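The intake-buffer flow in the method above can be sketched as below; the class name, threshold value, and the use of string lengths as a stand-in for data-unit sizes are all invented for illustration.

```python
# Sketch: new data units land in the buffer for their assigned container
# index; when the cumulative amount exceeds a threshold, the least recently
# updated buffer is flushed as one container entity group object.
THRESHOLD = 10  # illustrative cumulative-size threshold

class IntakeBuffers:
    def __init__(self):
        self.buffers = {}      # container index -> list of data units
        self.last_update = {}  # container index -> logical time of last append
        self.clock = 0
        self.persisted = []    # stands in for persistent storage

    def add(self, container_index, unit):
        self.clock += 1
        self.buffers.setdefault(container_index, []).append(unit)
        self.last_update[container_index] = self.clock
        total = sum(len(u) for b in self.buffers.values() for u in b)
        if total > THRESHOLD:
            self.flush_least_recent()

    def flush_least_recent(self):
        idx = min(self.last_update, key=self.last_update.get)
        group_object = (idx, self.buffers.pop(idx))  # container entity group
        del self.last_update[idx]
        self.persisted.append(group_object)

ib = IntakeBuffers()
ib.add(1, "aaaa")
ib.add(2, "bbbb")
ib.add(2, "cccc")  # cumulative 12 > threshold; buffer 1 is least recent
print(ib.persisted)
```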
In some examples, a system receives workload information of a workload collection, and applies a machine learning model on the workload information, the machine learning model trained using training information including features of different types of workloads. The system produces, by the machine learning model, an identification of a first file system from among different types of file systems, the machine learning model producing an output value corresponding to the first file system that is a candidate for use in storing files of the workload collection.
Systems and methods for determining a physical location of access points in a wireless network that include a plurality of access points, the plurality of access points in the wireless network including anchor access points with respective known locations and unanchored access points without respective known locations, may include: a first plurality of unanchored access points neighboring the anchor access points receiving known-location information from the anchor access points, performing range measurements to the anchor access points to determine their respective locations in the wireless network, thereby becoming pseudo-anchor access points; a second plurality of unanchored access points in communicative contact with a plurality of the pseudo-anchor access points receiving the determined-location information from the pseudo-anchor access points, performing range measurements to the pseudo-anchor access points to determine their respective locations in the wireless network, thereby becoming pseudo-anchor access points.
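The localization step, where an unanchored access point estimates its position from range measurements to anchors with known locations, can be illustrated with a toy 2-D trilateration; the specific method of combining ranges is an assumption for this sketch, not taken from the disclosure.

```python
# Toy 2-D trilateration: estimate a position from ranges to three anchors by
# subtracting the circle equations pairwise to obtain a 2x2 linear system.
def trilaterate(anchors, ranges):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Pairwise subtraction eliminates the quadratic terms.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # known anchor locations
true_pos = (3.0, 4.0)
# Noise-free range measurements from the unanchored AP to each anchor.
ranges = [((true_pos[0] - x)**2 + (true_pos[1] - y)**2) ** 0.5
          for x, y in anchors]
est = trilaterate(anchors, ranges)
print(round(est[0], 6), round(est[1], 6))  # recovers the true position
```

With noisy real-world ranges, a least-squares fit over more than three anchors would replace the exact 2x2 solve.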
Systems and methods are provided for a modular switch system that comprises disaggregated components, plugins, and managers that enable flexibility to adjust the dynamic configuration of a switch system. This can create modularity and customizability at different times of the lifecycle of the currently configured switch system.
Systems and methods are provided for in-service software upgrades using centralized database versioning and migrations. The systems and methods described herein can intercept protocol messages between a client and a network device and run a first control plane comprising an origin state database and a plurality of un-migrated services. The system can generate a target state data model, wherein an origin state data model associated with the origin state database migrates to the target state data model, and copy the origin state database. The system can migrate second control plane software to the target state database and operate un-migrated services in accordance with the first control plane software and the copied origin state database while operating migrated services in accordance with the second control plane software and the target state database.
A system for facilitating packet mirroring triggered by a hardware module of a switch is provided. During operation, the hardware module can process a received packet and determine whether the processing of the packet changes a state of the hardware module. If a change to the state is detected, the hardware module can determine whether the changed state satisfies a trigger condition for initiating packet mirroring and, if it does, issue a hardware interrupt. The system can then identify a set of packets that are to be mirrored based on one or more mirroring parameters indicated by the trigger condition. Here, the set of packets are subsequent to the packet and to be processed by the hardware module. Accordingly, the system can mirror the set of packets to a target. If the trigger condition expires, the system can terminate the mirroring of the set of packets.
One aspect of the instant application can provide a system and method for balancing load among multiple network sockets established between a local node and a remote node. During operation, the system can encapsulate the multiple network sockets to form a local transport-layer virtual socket comprising a write interface and a read interface. The system can receive, at the write interface of the local transport-layer virtual socket, a packet; select, based on a load-balancing policy, a network socket from the multiple network sockets; and forward the packet to a socket-specific incoming queue associated with the selected network socket to allow the packet to be sent to the read interface of a corresponding remote transport-layer virtual socket via the selected network socket.
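The write-interface path can be sketched minimally as below, using round-robin as a stand-in for the unspecified load-balancing policy and plain queues as stand-ins for the socket-specific incoming queues; all names are invented for the sketch.

```python
# Sketch: a packet arriving at the virtual socket's write interface is
# assigned to one underlying network socket by a load-balancing policy and
# placed on that socket's incoming queue.
from collections import deque
from itertools import count

class VirtualSocket:
    def __init__(self, num_sockets):
        # Socket-specific incoming queues, one per encapsulated network socket.
        self.queues = [deque() for _ in range(num_sockets)]
        self._counter = count()

    def write(self, packet):
        # Load-balancing policy: simple round-robin across sockets.
        sock = next(self._counter) % len(self.queues)
        self.queues[sock].append(packet)
        return sock

vs = VirtualSocket(3)
placements = [vs.write(f"pkt{i}") for i in range(6)]
print(placements)  # packets spread evenly across the three sockets
```

A production policy would more likely weight sockets by queue depth or measured throughput rather than pure rotation.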
Systems and methods are provided for executing NWDAF use cases with a better tradeoff between accuracy, latency, and resource consumption, thereby improving network performance or services. For example, the systems and methods can provide smart NWDAF prediction optimization that includes grouping 3GPP subscription requests based on common attributes of analysis tasks, creating a single analysis prediction workload for each group of the analysis tasks, and selecting a forecasting algorithm for executing each single analysis prediction workload based on a prediction time window classification.
Systems and methods are provided for passing data amongst a plurality of switches having a plurality of links attached between the plurality of switches. At a switch, a plurality of load signals are received from a plurality of neighboring switches. Each of the plurality of load signals is made up of a set of values indicative of a load at the neighboring switch providing the load signal. Each value within the set of values provides an indication, for each link of the plurality of links attached thereto, as to whether the link is busy or quiet. Based upon the plurality of load signals, an output link for routing a received packet is selected, and the received packet is routed via the selected output link.
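The selection step can be sketched as below; the busy/quiet encoding, the candidate ordering, and the fallback rule are assumptions made for illustration only.

```python
# Sketch: each neighbor reports one busy/quiet bit per attached link; the
# switch picks the first candidate output link reported quiet, falling back
# to the first candidate if every link is busy.
def select_output_link(load_signals, candidate_links):
    """load_signals maps neighbor -> {link: True if busy, False if quiet}."""
    for link in candidate_links:
        neighbor, port = link
        if not load_signals[neighbor].get(port, False):  # quiet link
            return link
    return candidate_links[0]  # all busy: fall back to the first candidate

signals = {
    "sw1": {0: True, 1: False},   # sw1 reports link 0 busy, link 1 quiet
    "sw2": {0: True, 1: True},    # sw2 reports both links busy
}
chosen = select_output_link(signals, [("sw2", 0), ("sw1", 1), ("sw1", 0)])
print(chosen)  # the first quiet candidate is chosen
```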
H04L 45/28 - Routing or path finding of packets in data switching networks using route fault recovery
H04L 45/028 - Dynamic adaptation of the update intervals, e.g. event-triggered updates
H04L 45/125 - Shortest path evaluation based on throughput or bandwidth
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 45/122 - Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 45/021 - Ensuring consistency of routing table updates, e.g. by using epoch numbers
H04L 47/12 - Avoiding congestion; Recovering from congestion
G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/122 - Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H04L 43/10 - Active monitoring, e.g. heartbeat, ping or trace-route
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network