One aspect provides a printed circuit board (PCB). The PCB can include a plurality of layers and a plurality of plated through-hole (PTH) vias extending through the plurality of layers. The plurality of layers can include at least a top layer for mounting components, a second surface layer, and a first power layer positioned between the top layer and the second surface layer. The plurality of PTH vias can include at least one power via coupled to the first power layer to provide power to components mounted on the top layer. A stub length of the power via can be less than a distance between the power layer and the second surface layer.
Examples of the presently disclosed technology provide automated firmware recommendation systems that inject the intelligence of machine learning into the firmware recommendation process. To accomplish this, examples train a machine learning model on troves of historical customer firmware update data on a dynamic basis (e.g., examples may train the machine learning model on a weekly basis to predict accepted firmware updates made by a vendor's customers across the most recent 6 months). From this dynamic training, the machine learning model can learn to predict/recommend an optimal firmware version for a customer/network device cluster based on firmware-related features, recent customer preferences, and other customer-specific factors. Once trained, examples can deploy the machine learning model to make highly tailored firmware recommendations for individual network device clusters of individual customers, taking the above-described factors into account.
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
3.
DETECTING AN ANOMALY EVENT IN LOW DIMENSIONAL SPACE
Systems and methods are provided for reducing a number of performance metrics generated by network functions to a number of reduced dimension metrics, which can be used to detect anomalous behavior and generate a warning signal of the detected anomalous behavior. The disclosed systems and methods transform raw performance metrics in a high dimensionality space to a reduced number of metrics in a lower dimensionality space through dimensionality reduction techniques. Anomalous behavior in network performance is detected in the lower dimensionality space using the reduced dimension metrics. The systems and methods disclosed herein convert the reduced dimension metrics back to the high dimensionality space, such that the performance metrics from network functions can be utilized to understand and address potential problems in the network.
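The pipeline above — reduce, detect, then map back to the original metric space — can be sketched with principal component analysis. This is an illustrative reading of the abstract, not the patented method: the function names, the reconstruction-error detector, and the threshold are all assumptions.

```python
import numpy as np

# Hypothetical sketch: reduce raw performance metrics with PCA, flag
# anomalies via reconstruction error, and map the reduced metrics back
# to the original high-dimensional metric space.
def fit_pca(X, k):
    """Return the mean and top-k principal components of metric matrix X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                      # components: (k, n_metrics)

def detect_anomalies(X, mu, components, threshold):
    """Flag samples whose reconstruction error exceeds the threshold."""
    Z = (X - mu) @ components.T            # reduced-dimension metrics
    X_hat = Z @ components + mu            # converted back to high-dim space
    errors = np.linalg.norm(X - X_hat, axis=1)
    return errors > threshold, X_hat

rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 10))        # 10 raw metrics per sample
mu, comps = fit_pca(normal, k=3)
test = np.vstack([normal[:5], normal[:1] + 8.0])   # last row is anomalous
flags, _ = detect_anomalies(test, mu, comps, threshold=5.0)
print(flags)                               # the shifted final sample is flagged
```

The reconstructed `X_hat` plays the role of converting reduced metrics back to the high dimensionality space, so an operator can see which raw metrics diverge.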
Example implementations relate to scheduling of jobs for a plurality of graphics processing units (GPUs) providing concurrent processing by a plurality of virtual GPUs (vGPUs). According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs, and the allocations of existing jobs to the one or more vGPUs are updated, such that the operational cost of operating the one or more GPUs and the migration cost of allocating the new job and updating the allocations of the existing jobs are minimized. The new job and the existing jobs are then processed by the one or more GPUs in the computing system.
Example implementations relate to an integrated circuit (IC) package, an electronic device having the IC package, and a method of assembling the IC package to a printed circuit board (PCB) of the electronic device. The IC package includes a substrate, a chip, and an electromagnetic shield. The chip is coupled to the substrate. The electromagnetic shield is coupled to the substrate such that the chip is enclosed between the substrate and the electromagnetic shield. The electromagnetic shield includes a ferromagnetic material. Further, the electromagnetic shield protrudes beyond the substrate and is electrically grounded to the PCB to prevent an electromagnetic interference (EMI) noise from radiating through the IC package.
H05K 9/00 - Screening of apparatus or components against electric or magnetic fields
H01L 23/473 - Arrangements for cooling, heating, ventilating or temperature compensation involving the transfer of heat by flowing fluids by flowing liquids
H02M 7/00 - Conversion of ac power input into dc power output; Conversion of dc power input into ac power output
H05K 7/20 - Modifications to facilitate cooling, ventilating, or heating
One aspect can provide a system and method for enforcing a single-domain registration of a user equipment (UE) roaming across different provider networks. During operation, the system can receive, at a subscriber-management entity (SME) from a first service node within a first provider's network, a location-update message associated with the UE. The system can identify a second service node within a second provider's network with which the UE has a previous registration and query a subscriber-information database to determine whether a single-domain-registration feature is enabled at the SME for the UE. In response to determining that the single-domain-registration feature is enabled, the system can send a location-cancellation message to the second service node to cause the second service node to cancel the previous registration of the UE and register the UE at the SME.
A method for optimizing operations of high-performance computing (HPC) systems includes collecting data associated with a plurality of workload performance profiling counters associated with a workload during runtime of the workload in an HPC system. Based on the collected data, the method includes using a machine-learning technique to classify the workload by determining a workload-specific fingerprint for the workload. The method includes identifying an optimization metric to optimize during running of the workload in the HPC system. The method includes determining an optimal setting for a plurality of tunable hardware execution parameters as measured against the optimization metric by varying at least a portion of the plurality of tunable hardware execution parameters. The method includes storing the workload-specific fingerprint, the optimization metric, and the optimal setting for the plurality of tunable hardware execution parameters as measured against the optimization metric in an architecture-specific knowledge database.
Example implementations relate to backup operations in a storage system. An example includes a medium storing instructions to: detect a trigger event to initiate a backup restoration of a data entity at a local storage system; determine a user preference between a speed priority and a cost priority; based at least on the determined user preference, select between: an indirect restoration option in which a first portion of the backup data stored on a remote storage system is combined with a second portion of the backup data stored on a gateway device to restore the data entity at the local storage system; and a direct restoration option in which the backup data stored on the remote storage system is restored at the local storage system without being combined with other backup data; and restore, using the selected restoration option, the data entity at the local storage system.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
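The selection between the two restoration options can be sketched as a small decision function. The function name, the string-valued preference, and the gateway-cache flag are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch: choose between indirect and direct restoration
# based on whether the user prioritizes speed or cost.
def select_restoration_option(preference, gateway_has_portion):
    """Pick a restoration option for a backup held on a remote system.

    preference: "speed" or "cost".
    gateway_has_portion: True if a portion of the backup is cached on
    the local gateway device.
    """
    if preference == "speed" and gateway_has_portion:
        # Combine the gateway-cached portion with the remote portion:
        # faster, because less data crosses the WAN link.
        return "indirect"
    # Pull everything from the remote system: slower, but avoids the
    # cost of maintaining and reading the gateway cache.
    return "direct"

print(select_restoration_option("speed", True))   # indirect
print(select_restoration_option("cost", True))    # direct
```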
9.
Secure configuration of a headless networking device
The secure configuration of a headless networking device is described. A label associated with the headless networking device is scanned, and a public key is determined based on the scanned label. A configuration process is initiated for the networking device using the public key associated with the networking device.
G09C 5/00 - Ciphering or deciphering apparatus or methods not provided for in other groups of this subclass, e.g. involving the concealment or deformation of graphic data such as designs, written or printed messages
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
In embodiments of the present disclosure, there is provided an approach for selecting a channel puncturing scheme based on channel qualities. A method comprises detecting channel interference between neighbor access points (APs), and determining whether preamble puncturing needs to be enabled based on a set of puncturing conditions. After determining that preamble puncturing needs to be enabled, scores for candidate puncturing patterns are calculated based on the channel qualities of the sub-channels in the channel, and a proper puncturing pattern can be selected based on the respective scores. Embodiments of the present disclosure can help an AP achieve a more effective puncturing scheme in real deployments.
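The pattern-scoring step can be illustrated with a toy model. The scoring function below (bandwidth kept, derated by the worst usable sub-channel) is an assumption for illustration only; the disclosure does not specify this formula.

```python
# Hypothetical scoring sketch: each candidate puncturing pattern is a
# set of punctured 20 MHz sub-channel indices.
def score_pattern(qualities, punctured):
    kept = [q for i, q in enumerate(qualities) if i not in punctured]
    if not kept:
        return 0.0
    # Toy model: score = sub-channels kept, derated by the worst usable
    # sub-channel (interference there throttles the whole transmission).
    return len(kept) * min(kept)

def select_pattern(qualities, candidates):
    """Return the candidate pattern with the highest score."""
    return max(candidates, key=lambda p: score_pattern(qualities, p))

# 80 MHz channel = four 20 MHz sub-channels; sub-channel 2 suffers
# interference from a neighboring AP (low measured quality).
qualities = [0.9, 0.8, 0.1, 0.85]
candidates = [frozenset(), frozenset({2}), frozenset({1, 2})]
best = select_pattern(qualities, candidates)
print(sorted(best))   # puncturing only sub-channel 2 scores highest
```

Under this toy model, not puncturing at all scores 4 × 0.1 = 0.4, while puncturing only the interfered sub-channel scores 3 × 0.8 = 2.4, so the selective pattern wins.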
Implementations of the present disclosure relate to setting a system time of an access point (AP) for server certificate validation. A method comprises obtaining a default time as a system time of the AP after the AP boots up. The method also comprises obtaining a memory time from a flash memory of the AP. The method also comprises updating the system time with the memory time obtained from the flash memory. The method also comprises validating a server certificate received from an authentication server based on the system time. The system time is synchronized with a network time if the server certificate is successfully validated based on the system time. The synchronized system time is then written into the flash memory. In this way, the authentication can be performed based on a reasonable system time even if the AP reboots.
Example implementations relate to deduplication operations in a storage system. An example includes, in response to initiation of a new backup process to store a first stream of data, initializing a temporary sparse index to be stored in a memory of a deduplication storage system; identifying a cloned portion of the first data stream; identifying at least one container index associated with the cloned portion of the first data stream; identifying a set of hook points included in the at least one container index; and populating the temporary sparse index with a set of entries, the set of entries mapping the identified set of hook points to the at least one container index.
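The sparse-index population step can be sketched with plain dictionaries and sets. These structures are illustrative stand-ins, not the product's on-disk or in-memory format.

```python
# Minimal sketch of populating a temporary sparse index for a cloned
# portion of a backup stream.
def build_temp_sparse_index(cloned_chunks, container_indexes):
    """Map hook points found in matching container indexes back to the
    identifier of the container index that holds them."""
    temp_sparse = {}
    for ci_id, ci in container_indexes.items():
        # Only hook points that appear in the cloned portion are added,
        # keeping the in-memory index sparse.
        for hp in ci["hook_points"] & cloned_chunks:
            temp_sparse[hp] = ci_id
    return temp_sparse

cloned = {"h1", "h4", "h9"}            # hook points of the cloned portion
cis = {
    "CI-7": {"hook_points": {"h1", "h2"}},
    "CI-9": {"hook_points": {"h4", "h9", "h5"}},
}
index = build_temp_sparse_index(cloned, cis)
print(index["h4"])   # CI-9
```

Later deduplication lookups against the new backup stream can then consult `index` instead of the full on-disk sparse index.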
In embodiments of the present disclosure, there is provided an approach for aligning target beacon transmission time (TBTT) for multi-link connection. A method comprises setting up a first link and a second link between an access point (AP) and a wireless device based on multi-link operation (MLO), and obtaining a first TBTT of the first link and a second TBTT of the second link. The method further comprises aligning the first TBTT and the second TBTT at a start time, and then transmitting beacon frames on the first link and the second link according to the aligning of the first TBTT and second TBTT. Embodiments of the present disclosure synchronize and align the beacon TBTTs for different links, and can reduce the wake-up time on all active links, thereby saving power for the wireless device.
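The alignment arithmetic can be sketched as finding the next instant that falls on both links' beacon schedules. The function below and its parameters are illustrative assumptions; time is expressed in time units (TUs).

```python
from math import gcd

# Illustrative arithmetic: the earliest time at or after `now` that both
# links can treat as a common, aligned TBTT, given each link's beacon
# interval in TUs.
def aligned_start(now, interval_a, interval_b):
    # Beacons of both links coincide every lcm(interval_a, interval_b)
    # TUs; round `now` up to the next multiple of that common period.
    period = interval_a * interval_b // gcd(interval_a, interval_b)
    return ((now + period - 1) // period) * period

start = aligned_start(now=1234, interval_a=100, interval_b=100)
print(start)   # 1300: the next common multiple of both intervals
```

With the typical 100 TU beacon interval on both links, the common period is simply 100 TU, so alignment reduces to shifting one link's TBTT to the other's schedule.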
One aspect provides a system and method for provisioning an access point (AP) in a wireless mesh network. During operation, a controller can obtain a set of published global encryption parameters comprising a master public key, apply an identity-based encryption (IBE) scheme to encrypt a configuration message based at least on the master public key, and transmit the encrypted configuration message to a proxy device, which forwards the encrypted configuration message to the AP. The proxy device is coupled to the controller via a previously established secure communication channel and coupled to the AP via an open communication channel. The AP can decrypt the encrypted configuration message using an AP-specific secret key generated based on a unique identifier of the AP and a master private key corresponding to the master public key, thereby facilitating provisioning of the AP based on the configuration message.
Examples described herein relate to a security management system to secure a container ecosystem. In some examples, the security management system may protect one or more entities such as container management applications, container images, containers, and/or executable applications within the containers. The security management system may make use of digital cryptography to generate digital signatures corresponding to one or more of these entities and verify them during the execution so that any compromised entities can be blocked from execution and the container ecosystem may be safeguarded from any malicious network attacks.
G06F 21/71 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Examples described herein relate to a security management system to secure a container ecosystem. In some examples, the security management system may protect one or more entities such as container management applications, container images, containers, and/or executable applications within the containers. The security management system may make use of digital cryptography to generate digital signatures corresponding to one or more of these entities and verify them during the execution so that any compromised entities can be blocked from execution and the container ecosystem may be safeguarded from any malicious network attacks.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Handling frequently accessed pages is disclosed. An indication is received of a stalling event caused by a requested portion of memory being inaccessible. It is determined that the requested portion of memory is a frequently updated portion of memory. The stalling event is handled based at least in part on the determination that the requested portion of memory is a frequently updated portion of memory.
Examples described herein mitigate adjacent channel interference for dual radios of a network device. Examples described herein may associate, by a network device, a first client device with a first radio of the network device, associate, by the network device, a second client device with a second radio of the network device, determine that the first and second client devices are within a steering threshold, and based on the determination that the first and second client devices are within the steering threshold, steer, by the network device, the second client device from the second radio to the first radio. Examples described herein may communicate, using the first radio of the network device, with the first and second client devices.
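The steering-threshold check can be sketched as a comparison of the two clients' signal measurements. Reading the threshold as an RSSI-difference test is an assumption on my part; the abstract does not define what quantity the threshold applies to, and the function name and default value are made up.

```python
# Hypothetical sketch: treat two clients as "within a steering
# threshold" when their RSSI values at the AP differ by at most a
# configured margin, suggesting both can be served by the first radio.
def should_steer(rssi_first_dbm, rssi_second_dbm, threshold_db=6):
    return abs(rssi_first_dbm - rssi_second_dbm) <= threshold_db

print(should_steer(-48, -52))   # True: comparable signal, steer together
print(should_steer(-45, -80))   # False: second client is much weaker
```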
Examples described herein relate to a security management system to secure a container ecosystem. In some examples, the security management system may protect one or more entities such as container management applications, container images, containers, and/or executable applications within the containers. The security management system may make use of digital cryptography to generate digital signatures corresponding to one or more of these entities and verify them during the execution so that any compromised entities can be blocked from execution and the container ecosystem may be safeguarded from any malicious network attacks.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
20.
MACHINE LEARNING FACETS FOR DATASET PREPARATION IN STORAGE DEVICES
Examples described herein relate to preparing datasets in a storage device for machine learning (ML) applications. Examples include maintaining ML facet mappings between ML facets and dataset preparation tags, deriving ML facets of a dataset stored in the storage device, and generating a filtered dataset from the dataset using the ML facets and ML facet mappings. The filtered dataset is associated with improved dataset quality compared to the unfiltered dataset. The storage device transmits the filtered dataset to ML applications requesting the dataset. Some examples include recommending, by the storage device, ML facets to the ML application based on performance metrics.
Embodiments of the present disclosure relate to transmission of network access information for wireless devices. A method comprises transmitting an authorization request for a wireless device to a server upon receiving a presence announcement message from the wireless device. The method further comprises receiving an authorization response from the server including network access information and bootstrapping information of the wireless device. The method further comprises performing authentication with the wireless device based on the bootstrapping information. The method also comprises transmitting the network access information to the wireless device. The network access information includes a service set identifier (SSID) for a wireless local area network (WLAN) and credential information for the wireless device to access the WLAN. By automatically distributing network access information to the wireless device without requiring any user input, the efficiency of device provisioning and the security of the WLAN can be improved.
Examples described herein relate to restoring a trusted backup configuration for a node. Example techniques include failing over to an alternate firmware of the node in response to an unverifiable condition of an existing firmware of the node. The node may validate a first configuration file stored in the node. The first configuration file includes a first backup configuration. The node may validate a second configuration file stored in the node based on the validation of the first configuration file. The second configuration file includes a second backup configuration. In response to the validation of at least one of the first configuration file and the second configuration file, the node may select one of the first backup configuration and the second backup configuration, and apply the selected backup configuration to the node.
G06F 21/73 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information by creating or determining hardware identification, e.g. serial numbers
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
G06F 21/72 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in cryptographic circuits
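The validate-then-fall-back flow can be illustrated with a small sketch. Using an HMAC over the configuration payload is one possible integrity mechanism chosen here for illustration; the disclosure does not specify how validation is performed, and all names below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical validation sketch: each stored configuration file carries
# a MAC computed with a device-held key; the node prefers the first
# backup configuration and falls back to the second.
def validate(config, key):
    expected = hmac.new(key, config["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, config["mac"])

def select_backup_config(first, second, key):
    if validate(first, key):
        return first["payload"]
    if validate(second, key):      # fall back only if the first fails
        return second["payload"]
    return None                    # no trusted configuration available

key = b"device-secret"
good = {"payload": b"cfg-v2",
        "mac": hmac.new(key, b"cfg-v2", hashlib.sha256).hexdigest()}
bad = {"payload": b"cfg-v1", "mac": "tampered"}
print(select_backup_config(bad, good, key))   # falls back to b'cfg-v2'
```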
A bracket for a printed circuit board assembly (PCA) may comprise a first portion configured to be coupled to a first face of a first printed circuit board (PCB) and a second portion configured to be coupled to a first face of a second PCB. In an assembled state of the PCA, the first and second PCBs are in a stacked arrangement with the respective first faces thereof facing in a first direction. Moreover, in the assembled state, the bracket controls a distance between the respective first faces of the first and second PCBs along the first direction independent of respective thicknesses of the first and second PCBs along the first direction.
H01R 12/73 - Coupling devices for rigid printing circuits or like structures coupling with the edge of the rigid printed circuits or like structures connecting to other rigid printed circuits or like structures
One aspect of the application can provide a system and method for replacing a failing node with a spare node in a non-uniform memory access (NUMA) system. During operation, in response to determining that a node-migration condition is met, the system can initialize a node controller of the spare node such that accesses to a memory local to the spare node are to be processed by the node controller, quiesce the failing node and the spare node to allow state information of processors on the failing node to be migrated to processors on the spare node, and subsequent to unquiescing the failing node and the spare node, migrate data from the failing node to the spare node while maintaining cache coherence in the NUMA system and while the NUMA system remains in operation, thereby facilitating continuous execution of processes previously executed on the failing node.
Examples described herein relate to a scheduling assistance sub-system for deploying a container in a cluster comprising member nodes. The scheduling assistance sub-system receives a container deployment request to deploy the container and forwards it to a container scheduler after determining the resource requirements of the container. In some examples, at the time of receiving the container deployment request, the member nodes collectively host a plurality of already-deployed containers. Responsive to receiving the container deployment request, the scheduling assistance sub-system determines if the container deployment request is assigned a pending status by the container scheduler. Further, the scheduling assistance sub-system may identify a set of preemptable containers on a single member node based on the resource requirements of the container. Furthermore, the scheduling assistance sub-system may preempt the set of preemptable containers on the single member node thereby releasing resources for deployment of the container on the single member node.
In some examples, a system includes a processor, a management controller, and a programmable device to provide input/output (I/O) expansion emulation to support communication with a plurality of I/O devices of a subsystem coupled to the system, where the programmable device provides a plurality of virtual registers as part of the I/O expansion emulation, the virtual registers associated with respective I/O devices of the plurality of I/O devices. The processor writes a value to a first virtual register of the plurality of virtual registers to trigger an output event relating to a first I/O device of the plurality of I/O devices at the subsystem. The management controller reads the first virtual register and, in response to the value written to the first virtual register, interacts with the subsystem to issue the output event relating to the first I/O device at the subsystem.
Examples described herein relate to techniques for concurrent deployment of a set of network function (NF) instances of a network slice. In some examples, each NF instance may be registered at the network repository function (NRF) with its status indicator set to DEPLOYING. Further, a determination may be made whether a first NF instance is waiting to be deployed and its deployment is dependent on a second NF instance that is pending deployment. Responsive to the determination, a status indicator of the first NF instance may be updated from DEPLOYING to WAIT_REGISTERING. Further, the first NF instance may subscribe to be notified at the NRF of a change in the status indicator of the second NF instance. Responsive to being notified, the first NF instance may update its status indicator to REGISTERED, such that the first NF instance is discoverable by other NF instances of the set of NF instances.
H04L 41/0893 - Assignment of logical groups to network elements
H04L 41/122 - Discovery or management of network topologies of virtualised topologies e.g. software-defined networks [SDN] or network function virtualisation [NFV]
H04L 41/5051 - Service on demand, e.g. definition and deployment of services in real time
H04W 60/00 - Affiliation to network, e.g. registration; Terminating affiliation with the network, e.g. de-registration
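The status transitions above can be sketched as a tiny state machine. The class, the in-process "subscription," and the NF names are illustrative stand-ins for the NRF registration and notification interactions.

```python
# Toy sketch of the DEPLOYING -> WAIT_REGISTERING -> REGISTERED
# transitions: an NF instance waiting on a dependency parks itself in
# WAIT_REGISTERING and flips to REGISTERED when notified.
DEPLOYING = "DEPLOYING"
WAIT_REGISTERING = "WAIT_REGISTERING"
REGISTERED = "REGISTERED"

class NfInstance:
    def __init__(self, name, depends_on=None):
        self.name = name
        self.depends_on = depends_on
        self.status = DEPLOYING          # registered at the NRF as DEPLOYING

    def check_dependency(self):
        if self.depends_on and self.depends_on.status != REGISTERED:
            # Stand-in for subscribing at the NRF to the dependency's
            # status changes.
            self.status = WAIT_REGISTERING

    def on_dependency_registered(self):
        if self.status == WAIT_REGISTERING:
            self.status = REGISTERED     # now discoverable by peer NFs

smf = NfInstance("smf")
amf = NfInstance("amf", depends_on=smf)
amf.check_dependency()
print(amf.status)                        # WAIT_REGISTERING
smf.status = REGISTERED                  # dependency finishes deploying
amf.on_dependency_registered()
print(amf.status)                        # REGISTERED
```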
An electromagnetic interference (EMI) shield may include a frame configured to be coupled to a printed circuit board (PCB). The frame may include a horizontal body and a plurality of vertical walls extending perpendicularly from the horizontal body. The plurality of vertical walls defines a portion of a perimeter of the EMI shield, the perimeter including a concave corner defined virtually by a first and second vertical wall of the plurality of vertical walls. The first vertical wall does not extend all the way to the second vertical wall, and an opening is formed between them. The first vertical wall includes an attached portion connected to the horizontal frame and an extension portion connected to the attached portion by way of a first fold. The extension portion at least partially overlays, abuts, and extends beyond the attached portion toward the second wall, thereby at least partially covering the opening.
Example implementations relate to a pressure regulator assembly for a closed fluid loop of a coolant distribution unit (CDU). The pressure regulator assembly has a cylinder having an internal volume, and first and second hollow pistons slidably connected to the cylinder to split the internal volume into a first volume portion having cooling fluid, a second volume portion having driver fluid, and a third volume portion having compressible matter. The first volume portion is fluidically connected to the closed fluid loop. The first hollow piston is reciprocated by the compressible matter to maintain operating pressure of the cooling fluid in the closed fluid loop. The second hollow piston is driven by the driver fluid in response to a predefined pressure drop of the cooling fluid during a predefined time period, to inject additional cooling fluid from the first volume portion into the closed fluid loop to restore the pressure level of the cooling fluid to the operating pressure.
A system for facilitating sender-side congestion control is provided. During operation, the system, on a sender node, can determine the utilization of a buffer at a last-hop switch to a receiver node based on in-flight packets to the receiver node. The receiver node can be reachable from the sender node via the last-hop switch. The system can then determine a fraction of available space in the buffer for packets from the sender node based on the utilization of the buffer. Subsequently, the system can determine whether the fraction of the available space in the buffer can accommodate a next packet from the sender node while avoiding congestion at the buffer of the last-hop switch. If the fraction of the available space in the buffer can accommodate the next packet, the system can allow the sender node to send the next packet to the receiver node.
H04L 47/722 - Admission control; Resource allocation using reservation actions during connection setup at the destination endpoint, e.g. reservation of terminal resources or buffer space
H04L 47/12 - Avoiding congestion; Recovering from congestion
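The admission decision can be sketched with simple arithmetic. The equal-split fair share and the parameter values below are illustrative assumptions; the disclosure does not specify how the sender's fraction of the buffer is computed.

```python
# Simplified sender-side sketch: estimate last-hop buffer utilization
# from in-flight bytes and gate the next packet on the sender's share
# of the remaining space.
def can_send(next_packet_bytes, in_flight_bytes, buffer_bytes, num_senders):
    utilization = in_flight_bytes / buffer_bytes
    available = buffer_bytes * (1.0 - utilization)   # free buffer space
    fair_share = available / num_senders             # this sender's fraction
    return next_packet_bytes <= fair_share

# 64 KiB last-hop buffer shared by 4 senders, 32 KiB already in flight:
print(can_send(1500, 32 * 1024, 64 * 1024, 4))   # True  (8 KiB share)
print(can_send(9000, 60 * 1024, 64 * 1024, 4))   # False (1 KiB share)
```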
Systems and methods are provided for receiving, from an access point, attributes of an Internet of Things (IoT) device connected to the access point, determining a stored device, in a database of a server, sharing a subset of the attributes of the IoT device, and generating a code bundle based on the subset of the shared attributes between the stored device and the IoT device.
H04L 43/065 - Generation of reports related to network devices
H04L 43/0811 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking connectivity
G16Y 40/35 - Management of things, i.e. controlling in accordance with a policy or in order to achieve specified objectives
One aspect of the instant application describes a system that includes a plurality of stacked mezzanine boards communicatively coupled to a motherboard and a metal enclosure enclosing the motherboard and mezzanine boards. A respective mezzanine board can include a number of solder pads, and the metal enclosure can include a plurality of metal strips, a respective metal strip to make contact with a solder pad of a corresponding mezzanine board. The system can further include a logic module positioned on the respective mezzanine board to determine a location of the respective mezzanine board based on a contact pattern between the metal strips and solder pads of the respective mezzanine board.
A system for facilitating efficient port reconfiguration at a switch is provided. During operation, the system can identify a target port of the switch for reconfiguration based on one or more reconfiguration parameters indicating how a set of logical ports are generated from the target port. The system can disable the target port at the control plane of the switch, which disables features provided to the target port from the control plane. The control plane can provide a set of features supported by the switch at a port-level granularity for facilitating operations of the switch. The system can then configure the forwarding hardware based on the reconfiguration parameters to accommodate the set of logical ports. When the reconfiguration of the target port is complete, the system can enable a respective logical port at the control plane, which enables one or more features for the logical port from the control plane.
A method and system are provided which facilitate synchronization of client IP binding databases across an extended network by leveraging the BGP control plane. During operation, a switch configures a first synchronization identifier indicating validated Internet Protocol (IP) binding information of an associated client. The switch receives a Border Gateway Protocol (BGP) update message associated with a first client, wherein the BGP update message includes a second synchronization identifier. Responsive to determining that the second synchronization identifier matches the first synchronization identifier, the switch: extracts from the BGP update message reachability information, which includes media access control (MAC) and IP information associated with the first client; validates the MAC and IP information based on security policies; and adds the MAC and IP information to a local IP binding database, thereby allowing synchronization of the validated IP binding information of the first client between the switch and other switches.
In some examples, a system computes a measure of data overwrites to a data segment stored in a storage structure, where the measure of data overwrites indicates a quantity of overwrites of data in the data segment. The system compares the measure of data overwrites to a criterion. In response to determining that the measure of data overwrites has a first relationship with respect to the criterion, the system disables data reduction for the data segment.
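The overwrite accounting can be sketched as a per-segment counter checked against a threshold. The class, the threshold, and the "count at or above threshold" relationship are illustrative choices; the disclosure leaves the criterion and the relationship abstract.

```python
# Illustrative sketch: count overwrites per data segment and disable
# data reduction (e.g., deduplication/compression) once a segment
# proves write-hot, since reduced data is costly to rewrite in place.
class SegmentStats:
    def __init__(self, overwrite_threshold):
        self.overwrite_threshold = overwrite_threshold
        self.overwrites = {}             # segment id -> overwrite count
        self.reduction_disabled = set()  # segments with reduction off

    def record_write(self, segment_id, is_overwrite):
        if is_overwrite:
            n = self.overwrites.get(segment_id, 0) + 1
            self.overwrites[segment_id] = n
            # "First relationship with the criterion": here, count at or
            # above the threshold.
            if n >= self.overwrite_threshold:
                self.reduction_disabled.add(segment_id)

stats = SegmentStats(overwrite_threshold=3)
for _ in range(3):
    stats.record_write("seg-42", is_overwrite=True)
stats.record_write("seg-7", is_overwrite=False)
print("seg-42" in stats.reduction_disabled)   # True
print("seg-7" in stats.reduction_disabled)    # False
```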
In some examples, a system of an enterprise network sends, in response to a request for authentication transmitted in response to a request by an electronic device to access the enterprise network, an authentication request from the system to a server that is part of a carrier network. The system receives, in response to the authentication request, an authentication response that contains a value representing a mobile number for the electronic device, and checks whether the mobile number represented by the value in the authentication response is present in a user information repository. The system performs authorization of the electronic device based on the check of whether the mobile number represented by the value in the authentication response is present in the user information repository, the authorization for the electronic device to determine an access permission of the electronic device in the enterprise network.
A switch architecture for a data-driven intelligent networking system is provided. The system can accommodate dynamic traffic with fast, effective congestion control. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow are acknowledged after reaching the egress point of the network, and the acknowledgement packets are sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform flow control on a per-flow basis.
H04L 45/28 - Routing or path finding of packets in data switching networks using route fault recovery
H04L 45/028 - Dynamic adaptation of the update intervals, e.g. event-triggered updates
H04L 45/125 - Shortest path evaluation based on throughput or bandwidth
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 45/122 - Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 45/021 - Ensuring consistency of routing table updates, e.g. by using epoch numbers
H04L 47/12 - Avoiding congestion; Recovering from congestion
G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/122 - Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H04L 43/10 - Active monitoring, e.g. heartbeat, ping or trace-route
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
A signal cable for an AC-coupled link may include: a signal conductor; a dielectric surrounding the signal conductor; and a ground sheath having a conductive layer disposed at least partially around the conductor such that the dielectric is positioned between the ground sheath and the signal conductor, wherein the conductive layer comprises a first portion extending in a first direction along the cable and a second portion extending in a second direction, opposite the first direction, along the cable and further wherein the first and second portions of the conductive layer are separated from each other by a gap, the gap being dimensioned to provide a determined amount of capacitance in series in the ground sheath. The gap may form a complete separation between the first and second portions of the conductive layer.
Embodiments of the disclosure provide a system, method, or computer readable medium for providing a differentiable content addressable memory (aCAM) that implements an analog-input, analog-storage, and analog-output learning memory. The analog output of the differentiable CAM can provide input to a learning algorithm, which may compute the gradients in comparison to historic values and reduce data inaccuracies and power consumption.
G11C 15/04 - Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
G11C 7/16 - Storage of analogue signals in digital stores using an arrangement comprising analogue/digital [A/D] converters, digital memories and digital/analogue [D/A] converters
H03M 1/18 - Automatic control for modifying the range of signals the converter can handle, e.g. gain ranging
40.
Method for selectively connecting mobile devices to 4G or 5G networks and network federation which implements such method
A method of selectively connecting mobile devices to 4G and/or 5G networks includes preparing a plurality of isolated 4G and/or 5G networks configured to define a network federation and each having a Radio Access Network (RAN), a PDN Gateway node or a User Plane Function (UPF), and an application server; preparing a plurality of mobile devices; connecting the mobile devices to the networks to exchange data traffic with the application server via, at least, the RAN and the PDN Gateway or UPF node; preparing a connectivity network configured to selectively connect the networks to each other; selecting one reference PDN Gateway or UPF node associated with a network of the federation; and migrating the data traffic associated with all the mobile devices connected to the networks other than the reference network to the application server associated with the reference network that includes the previously selected PDN Gateway or UPF node.
H04W 76/16 - Setup of multiple wireless link connections involving different core network technologies, e.g. a packet-switched [PS] bearer in combination with a circuit-switched [CS] bearer
41.
ITERATIVE PROGRAMMING OF ANALOG CONTENT ADDRESSABLE MEMORY
Embodiments of the disclosure provide a system, method, or computer readable medium for programming a target analog voltage range of an analog content addressable memory (aCAM) row. The method may comprise calculating a threshold current sufficient to switch a sense amplifier (SA) on and discharge a match line (ML) connected to a cell of the aCAM; and based on calculating the threshold current, programming a match threshold value by setting a memristor conductance in association with the target analog voltage range applied to a data line (DL) input. The target analog voltage range may comprise a target analog voltage range vector.
G11C 27/00 - Electric analogue stores, e.g. for storing instantaneous values
G11C 15/04 - Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
42.
SIMPLIFIED RAID IMPLEMENTATION FOR BYTE-ADDRESSABLE MEMORY
One aspect of the instant application can provide a storage system. The storage system can include a plurality of byte-addressable storage devices and a plurality of media controllers. A respective byte-addressable storage device is to store a parity block or a data block of a data stripe, and a respective media controller is coupled to a corresponding byte-addressable storage device. Each media controller can include a tracker logic block to serialize critical sections of multiple media-access sequences associated with an address on the corresponding byte-addressable storage device. Each media-access sequence comprises one or more read and/or write operations, and the data stripe may be inconsistent during a critical section of a media-access sequence.
Systems and methods are provided for maintaining a desired efficiency of use of resources in a computing system, such as a high performance computing (HPC) system in conjunction with a desired quality of service (QoS) associated with performance of an application executed by the resources. Efficiency and QoS may be considered together, and the provided systems and methods optimize both during application runtime.
In some examples, a client system, in response to a request to modify a first data page at a memory server in a remote access by a client over a network, sends, to the memory server, a request to update a data modification tracking structure stored by the memory server to indicate that the first data page is modified. The client system initiates an incremental data backup from the memory server to a backup storage system of data pages indicated as modified by the data modification tracking structure stored at the memory server.
In some examples, a system determines whether a chain of functions violates a constraint, based on accessing a tracking structure populated with entries as the functions are invoked by respective server processes launched during execution of a database operation, where each entry of the entries of the tracking structure identifies a respective invoked function that is associated with a corresponding program instance, and detecting, using the tracking structure, related functions that form the chain, the related functions being identified as related if associated with a same program instance. In response to determining that the chain of the functions violates the constraint, the system blocks an invocation of a further function to be added to the chain.
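The chain-tracking logic above can be sketched in Python. The tracking structure, the constraint (a maximum chain depth), and the grouping key are all hypothetical stand-ins for whatever the described system actually uses:

```python
from collections import defaultdict

MAX_CHAIN_DEPTH = 3   # hypothetical constraint on a chain of related functions

def record_invocation(tracking, program_instance, function_name):
    """Append an entry to the tracking structure, or block the invocation if
    adding the function would make the chain violate the constraint.

    Functions are treated as related (forming one chain) when they are
    associated with the same program instance.
    """
    chain = tracking[program_instance]
    if len(chain) + 1 > MAX_CHAIN_DEPTH:
        return False              # block invocation of the further function
    chain.append(function_name)
    return True

tracking = defaultdict(list)
results = [record_invocation(tracking, "instance-1", f)
           for f in ("f1", "f2", "f3", "f4")]
```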
In some examples, a system provides, to a first server, access to a copy of a volume of data associated with a second server, where the first server is protected against unauthorized access. The first server receives first signatures generated by an agent in the second server based on applying a function on data objects of the volume. The first server generates second signatures derived based on applying the function on data objects of the copy of the volume. The first server determines whether malware that performs unauthorized data encryption of data of the second server is present, based on comparing the second signatures to the first signatures.
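A simplified Python sketch of the signature-comparison idea in this abstract; SHA-256 as the signature function, the alert threshold, and the object layout are assumptions made for illustration only:

```python
import hashlib

def signatures(objects):
    """Apply a function (here SHA-256, an assumption) to each data object."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in objects.items()}

def detect_unauthorized_encryption(first_sigs, second_sigs, threshold=0.5):
    """Flag possible malware when too many objects differ between the live
    volume's signatures and those of its protected copy."""
    common = set(first_sigs) & set(second_sigs)
    if not common:
        return False
    changed = sum(first_sigs[name] != second_sigs[name] for name in common)
    return changed / len(common) >= threshold

clean    = {"a.txt": b"hello", "b.txt": b"world"}
infected = {"a.txt": b"\x9f\x03encrypted", "b.txt": b"\x11\x22encrypted"}
alert = detect_unauthorized_encryption(signatures(infected), signatures(clean))
```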
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 21/78 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
47.
DATA INTAKE BUFFERS FOR DEDUPLICATION STORAGE SYSTEM
Example implementations relate to data storage. An example includes a method comprising: receiving a data stream to be stored in a persistent storage of a deduplication storage system; assigning new data units to container indexes; storing the new data units of the data stream in a plurality of intake buffers, where each new data unit is stored in the intake buffer associated with the container index it is assigned to; determining whether a cumulative amount stored in the plurality of intake buffers exceeds a first threshold; in response to a determination that the cumulative amount exceeds the first threshold, determining a least recently updated intake buffer of the plurality of intake buffers; generating a first container entity group object comprising a set of data units stored in the least recently updated intake buffer; and writing the first container entity group object from memory to the persistent storage.
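The intake-buffer flow described above maps naturally to a small data structure. This Python sketch models the cumulative-size check and least-recently-updated eviction; the flush threshold, tick-based ordering, and in-memory representation are illustrative assumptions:

```python
import itertools

_counter = itertools.count()   # monotonic ticks for least-recently-updated ordering

class IntakeBuffers:
    """Per-container-index intake buffers with a cumulative-size flush policy."""
    def __init__(self, flush_threshold):
        self.flush_threshold = flush_threshold
        self.buffers = {}     # container index -> (last_update_tick, [data units])
        self.flushed = []     # container entity group objects "written" to storage

    def add(self, container_index, data_unit):
        tick = next(_counter)
        _, units = self.buffers.get(container_index, (tick, []))
        units.append(data_unit)
        self.buffers[container_index] = (tick, units)
        total = sum(len(u) for _, us in self.buffers.values() for u in us)
        if total > self.flush_threshold:
            self._flush_least_recently_updated()

    def _flush_least_recently_updated(self):
        idx = min(self.buffers, key=lambda k: self.buffers[k][0])
        _, units = self.buffers.pop(idx)
        self.flushed.append((idx, units))  # stand-in for the persistent-storage write

b = IntakeBuffers(flush_threshold=8)
b.add(1, b"aaaa")
b.add(2, b"bbbb")
b.add(2, b"cc")     # cumulative size 10 > 8: buffer 1 (least recently updated) flushes
```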
In some examples, a system receives workload information of a workload collection, and applies a machine learning model on the workload information, the machine learning model trained using training information including features of different types of workloads. The system produces, by the machine learning model, an identification of a first file system from among different types of file systems, the machine learning model producing an output value corresponding to the first file system that is a candidate for use in storing files of the workload collection.
Systems and methods for determining a physical location of access points in a wireless network that include a plurality of access points, the plurality of access points in the wireless network including anchor access points with respective known locations and unanchored access points without respective known locations, may include: a first plurality of unanchored access points neighboring the anchor access points receiving known-location information from the anchor access points, performing range measurements to the anchor access points to determine their respective locations in the wireless network, thereby becoming pseudo-anchor access points; a second plurality of unanchored access points in communicative contact with a plurality of the pseudo-anchor access points receiving the determined-location information from the pseudo-anchor access points, performing range measurements to the pseudo-anchor access points to determine their respective locations in the wireless network, thereby becoming pseudo-anchor access points.
Systems and methods are provided for a modular switch system that comprises disaggregated components, plugins, and managers that enable flexibility to adjust the dynamic configuration of a switch system. This can create modularity and customizability at different times of the lifecycle of the currently configured switch system.
Systems and methods are provided for in-service software upgrades using centralized database versioning and migrations. The systems and methods described herein can intercept protocol messages between a client and a network device and run a first control plane comprising an origin state database and a plurality of un-migrated services. The system can generate a target state data model, wherein an origin state data model associated with the origin state database migrates to the target state data model, and copy the origin state database. The system can migrate second control plane software to the target state database and operate un-migrated services in accordance with the first control plane software and the copied origin state database while operating migrated services in accordance with the second control plane software and the target state database.
A system for facilitating packet mirroring triggered by a hardware module of a switch is provided. During operation, the hardware module can process a received packet and determine whether the processing of the packet changes a state of the hardware module. If a change to the state is detected, the hardware module can determine whether the changed state of the hardware module satisfies a trigger condition for initiating packet mirroring, and if it does, issue a hardware interrupt. The system can then identify a set of packets that are to be mirrored based on one or more mirroring parameters indicated by the trigger condition. Here, the set of packets are subsequent to the packet and to be processed by the hardware module. Accordingly, the system can mirror the set of packets to a target. If the trigger condition expires, the system can terminate the mirroring of the set of packets.
One aspect of the instant application can provide a system and method for balancing load among multiple network sockets established between a local node and a remote node. During operation, the system can encapsulate the multiple network sockets to form a local transport-layer virtual socket comprising a write interface and a read interface. The system can receive, at the write interface of the local transport-layer virtual socket, a packet; select, based on a load-balancing policy, a network socket from the multiple network sockets; and forward the packet to a socket-specific incoming queue associated with the selected network socket to allow the packet to be sent to the read interface of a corresponding remote transport-layer virtual socket via the selected network socket.
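A minimal Python sketch of the virtual-socket write path above. Round-robin is used as the load-balancing policy purely as an assumption (the abstract leaves the policy open), and the socket-specific queues are modeled as in-memory deques:

```python
import itertools
from collections import deque

class VirtualSocket:
    """Transport-layer virtual socket that spreads writes over member sockets."""
    def __init__(self, n_sockets):
        # one socket-specific incoming queue per encapsulated network socket
        self.queues = [deque() for _ in range(n_sockets)]
        # round-robin selection stands in for the load-balancing policy
        self._rr = itertools.cycle(range(n_sockets))

    def write(self, packet):
        """Receive a packet at the write interface and forward it to the
        incoming queue of the selected network socket."""
        idx = next(self._rr)
        self.queues[idx].append(packet)
        return idx

vs = VirtualSocket(3)
chosen = [vs.write(f"pkt{i}") for i in range(6)]
```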
A data center for data backup and replication, including a pool of multiple storage units for storing a journal of I/O write commands issued at respective times, wherein the journal spans a history window of a pre-specified time length, and a journal manager for dynamically allocating more storage units for storing the journal as the journal size increases, and for dynamically releasing storage units as the journal size decreases.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/16 - Error detection or correction of the data by redundancy in hardware
55.
ALGORITHMS FOR USE OF LOAD INFORMATION FROM NEIGHBORING NODES IN ADAPTIVE ROUTING
Systems and methods are provided for passing data amongst a plurality of switches having a plurality of links attached between the plurality of switches. At a switch, a plurality of load signals are received from a plurality of neighboring switches. Each of the plurality of load signals is made up of a set of values indicative of a load at each of the plurality of neighboring switches providing the load signal. Each value within the set of values provides an indication for each link of the plurality of links attached thereto as to whether the link is busy or quiet. Based upon the plurality of load signals, an output link for routing a received packet is selected, and the received packet is routed via the selected output link.
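One way to act on such busy/quiet load signals is to prefer the output link whose neighbor reports the fewest busy links. The Python sketch below is an illustrative selection rule, not the patented algorithm; signal encoding and tie-breaking are assumptions:

```python
def select_output_link(load_signals):
    """Pick the output link whose neighboring switch reports the fewest busy
    links; ties break on the lower link id.

    load_signals maps each candidate output link to the neighbor's per-link
    busy/quiet indications (True = busy, False = quiet).
    """
    return min(load_signals, key=lambda link: (sum(load_signals[link]), link))

signals = {
    0: [True, True, False, True],    # neighbor reached via link 0: 3 busy links
    1: [False, False, True, False],  # neighbor reached via link 1: 1 busy link
    2: [True, False, True, False],   # neighbor reached via link 2: 2 busy links
}
best = select_output_link(signals)
```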
H04L 45/28 - Routing or path finding of packets in data switching networks using route fault recovery
H04L 45/028 - Dynamic adaptation of the update intervals, e.g. event-triggered updates
H04L 45/125 - Shortest path evaluation based on throughput or bandwidth
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 45/122 - Shortest path evaluation by minimising distances, e.g. by selecting a route with minimum of number of hops
H04L 47/76 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
H04L 69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
H04L 49/9005 - Buffering arrangements using dynamic buffer space allocation
H04L 47/34 - Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
G06F 13/16 - Handling requests for interconnection or transfer for access to memory bus
H04L 45/021 - Ensuring consistency of routing table updates, e.g. by using epoch numbers
H04L 47/12 - Avoiding congestion; Recovering from congestion
G06F 13/42 - Bus transfer protocol, e.g. handshake; Synchronisation
H04L 47/2441 - Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
H04L 47/30 - Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
H04L 47/62 - Queue scheduling characterised by scheduling criteria
H04L 47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
H04L 47/122 - Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities
G06F 12/1036 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
G06F 15/173 - Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star or snowflake
H04L 43/10 - Active monitoring, e.g. heartbeat, ping or trace-route
G06F 12/0862 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
G06F 12/1045 - Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
H04L 47/32 - Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
H04L 47/762 - Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
An analog error correction circuit is disclosed that implements an analog error correction code. The analog circuit includes a crossbar array of memristors or other non-volatile tunable resistive memory devices. The crossbar array includes a first crossbar array portion programmed with values of a target computation matrix and a second crossbar array portion programmed with values of an encoder matrix for correcting computation errors in the matrix multiplication of an input vector with the computation matrix. The first and second crossbar array portions share the same row lines and are connected to a third crossbar array portion that is programmed with values of a decoder matrix, thereby enabling single-cycle error detection. A computation error is detected based on output of the decoder matrix circuitry and a location of the error is determined via an inverse matrix multiplication operation whereby the decoder matrix output is fed back to the decoder matrix.
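The encoder/decoder scheme above resembles algorithm-based fault tolerance (ABFT) checksums for matrix multiplication. The pure-Python sketch below is only a digital analogy of the analog crossbar idea: an appended encoder row of column sums lets a single extra output act as a checksum on the product:

```python
def matvec(matrix, vec):
    """Plain matrix-vector product (the crossbar's analog computation, digitized)."""
    return [sum(a * x for a, x in zip(row, vec)) for row in matrix]

def augmented(matrix):
    """Append an encoder row of column sums to the computation matrix.

    For a correct product, the checksum output equals the sum of the other
    outputs; a mismatch indicates a computation error.
    """
    check_row = [sum(col) for col in zip(*matrix)]
    return matrix + [check_row]

A = [[1, 2],
     [3, 4]]            # target computation matrix
x = [5, 6]              # input vector
y = matvec(augmented(A), x)     # last entry is the checksum output
*result, check = y
error_detected = (check != sum(result))
```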
H03M 13/00 - Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
H03M 13/15 - Cyclic codes, i.e. cyclic shifts of codewords produce other codewords, e.g. codes defined by a generator polynomial, Bose-Chaudhuri-Hocquenghem [BCH] codes
G11C 13/00 - Digital stores characterised by the use of storage elements not covered by groups , , or
57.
MATCHING OPERATION FOR A DEDUPLICATION STORAGE SYSTEM
Example implementations relate to metadata operations in a storage system. An example includes generating, by a storage controller of a deduplication storage system, a candidate list of container indexes for matching operations of a received data segment, each container index in the candidate list having an associated match cost; identifying, by the storage controller, a journal group associated with a first container index listed in the candidate list; reducing, by the storage controller, a match cost associated with the first container index in response to a determination that the identified journal group is in a modified state; and performing, by the storage controller, the matching operations of the received data segment based at least on the reduced match cost of the first container index.
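The cost-discounting step above can be sketched as a simple ranking over candidate container indexes. The entry layout, discount factor, and selection of a single best candidate are hypothetical simplifications:

```python
def pick_container_index(candidates, modified_journal_groups, discount=0.5):
    """Rank candidate container indexes by match cost, reducing the cost of
    any index whose associated journal group is already in a modified state
    (updating it again is assumed cheaper)."""
    def effective_cost(entry):
        cost = entry["match_cost"]
        if entry["journal_group"] in modified_journal_groups:
            cost *= discount          # reduce the match cost
        return cost
    return min(candidates, key=effective_cost)

candidates = [
    {"index": "ci-1", "match_cost": 10, "journal_group": "jg-a"},
    {"index": "ci-2", "match_cost": 7,  "journal_group": "jg-b"},
]
# jg-a is in a modified state, so ci-1's effective cost drops to 5
chosen = pick_container_index(candidates, modified_journal_groups={"jg-a"})
```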
Example implementations relate to a cooling assembly of a host circuit device, a circuit assembly including the host circuit device and a removable circuit device, and a method for thermal management of the removable circuit device removably connected to the host circuit device. The cooling assembly includes a cooling component and a thermal gap pad having an elastomer component movably connected to the cooling component and a plurality of beams embedded in the elastomer component. Each beam includes a first end portion, a second end portion, and a body portion extended between the first and second end portions. The first end portion of each of one or more beams is disposed in a first thermal contact with the cooling component and a second end portion of each of the one or more beams is disposed in a second thermal contact with a heat sink of the removable circuit device.
In some examples, a computer identifies a plurality of memory servers accessible by the computer to perform remote access over a network of data stored by the plurality of memory servers, sends allocation requests to allocate memory segments to place interleaved data of the computer across the plurality of memory servers, and receives, at the computer in response to the allocation requests, metadata relating to the memory segments at the plurality of memory servers, the metadata comprising addresses of the memory segments at the plurality of memory servers. The computer uses the metadata to access, by the computer, the interleaved data at the plurality of memory servers, the interleaved data comprising blocks of data distributed across the memory segments.
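The interleaving described above can be illustrated with a round-robin block placement across memory segments. Server names, base addresses, and the 64-byte block size below are invented for the example:

```python
def place_blocks(blocks, segments, block_size=64):
    """Distribute blocks of data round-robin across memory segments allocated
    on different memory servers, using the segments' metadata (server name
    and base address) to compute each block's location."""
    placement = {}
    for i, block in enumerate(blocks):
        seg = segments[i % len(segments)]
        offset = (i // len(segments)) * block_size
        placement[block] = (seg["server"], seg["base_addr"] + offset)
    return placement

segments = [{"server": "mem-srv-0", "base_addr": 0x1000},
            {"server": "mem-srv-1", "base_addr": 0x8000}]
placement = place_blocks(["blk0", "blk1", "blk2", "blk3"], segments)
```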
Examples described herein relate to an optical device that entails phase shifting an optical signal. The optical device includes an optical waveguide having a first semiconductor material region and a second semiconductor material region formed adjacent to each other and defining a junction therebetween. Further, the optical device includes an insulating layer formed on top of the optical waveguide. Moreover, the optical device includes a III-V semiconductor layer formed on top of the insulating layer causing an optical mode of an optical signal passing through the optical waveguide to overlap with the first semiconductor material region, the second semiconductor material region, the insulating layer, and the III-V semiconductor layer thereby resulting in a phase shift in the optical signal passing through the optical waveguide.
G02F 1/025 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on semiconductor elements with at least one potential jump barrier, e.g. PN, PIN junction in an optical waveguide structure
G02B 6/12 - Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings of the optical waveguide type of the integrated circuit kind
G02B 6/34 - Optical coupling means utilising prism or grating
Example implementations relate to storing data in a storage system. An example includes receiving, by a storage controller of a storage system, a data unit to be stored in persistent storage of the storage system. The storage controller determines maximum and minimum entropy values for the received data unit. The storage controller determines, based on at least the minimum entropy value and the maximum entropy value, whether the received data unit is viable for data reduction. In response to a determination that the received data unit is viable for data reduction, the storage controller performs at least one reduction operation on the received data unit.
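One common way to obtain minimum and maximum entropy values for a data unit is to compute Shannon entropy over fixed-size sub-blocks. The viability policy below (viable when the minimum sub-block entropy is under a threshold) is an illustrative assumption, not the patented test:

```python
import math
from collections import Counter

def shannon_entropy(data):
    """Bits per byte (0..8) of a byte string."""
    counts, n = Counter(data), len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def entropy_bounds(data, chunk=256):
    """Minimum and maximum entropy over fixed-size sub-blocks of the data unit."""
    values = [shannon_entropy(data[i:i + chunk]) for i in range(0, len(data), chunk)]
    return min(values), max(values)

def viable_for_reduction(data, threshold=7.0):
    """One possible policy: viable when at least one sub-block is compressible,
    i.e. the minimum entropy falls below the threshold. High-entropy data
    (already compressed or encrypted) is a poor reduction target."""
    lo, _hi = entropy_bounds(data)
    return lo < threshold
```

Repetitive data (`b"A" * 1024`) has sub-block entropy 0 and is viable; a byte string containing every value equally often (`bytes(range(256)) * 4`) has sub-block entropy 8 and is not.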
Implementations disclosed herein provide semiconductor resonator based optical multiplexers that achieve enhanced bandwidth range of light emitted therefrom. The present disclosure integrates silicon devices into resonator structures, such as micro-ring resonators, that couple a side mode with a lasing mode and resonantly amplify coupled light to output light having an enhanced bandwidth with respect to the lasing mode. In some examples, the optical multiplexers disclosed herein include a bus waveguide; a first resonator structure optically coupled to the bus waveguide and comprising an optical amplification mechanism that generates light and a single mode filter to force the generated light into single-mode operation; and a second resonator structure optically coupled to the first resonator structure and comprising a phase-tuning mechanism. The phase-tuning mechanism can be controlled to detune phase of light in the second resonator relative to the light in the first resonator.
Systems and methods are provided for ensuring client device connectivity in a deployment pursuant to regulatory updates impacting channel usage or availability. An access point (AP) or AP controller may obtain a client-supported channel list, and detect existence of a regulatory update impacting channel usage in the network device. An AP-supported channel list is compared to the client-supported channel list, and a determination is made regarding which client devices associated to an AP do not support channel usage changes resulting from the regulatory update. Distribution of channels of the AP-supported channel list is controlled among one or more radios of APs in a deployment based on AP density of the deployment, and the determination regarding which client devices do not support the channel usage changes.
Systems and methods are provided for executing NWDAF use cases with an improved tradeoff between accuracy, latency, and resource consumption, thereby improving network performance or services. For example, the systems and methods can provide smart NWDAF prediction optimization that includes grouping 3GPP subscription requests based on common attributes of analysis tasks, creating a single analysis prediction workload for each group of the analysis tasks, and selecting a forecasting algorithm for executing each single analysis prediction workload based on a prediction time window classification.
A network infrastructure management console configures one or more network switch stacks each comprising a plurality of switches. A monitoring component monitors a current conductor switch of the stack. A user interface (UI) backend component comprises a cache memory and receives a user request to configure the stack. The UI backend component receives from the monitoring component notification of the current conductor and stores in a cache memory segment associated with the current conductor the requested configuration changes. If the current conductor switch of the stack has changed due to a failover event, the configuration changes stored in the cache memory segment associated with the previous current conductor are written to a cache memory segment associated with the new current conductor. A configuration push component receives the configuration changes and transmits the configuration changes to the network switch stack.
H04L 41/0654 - Management of faults, events, alarms or notifications using network fault recovery
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 41/0663 - Performing the actions predefined by failover planning, e.g. switching to standby network elements
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
H04L 41/0823 - Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
66.
COOLING MODULE FOR A CIRCUIT MODULE HAVING A PLURALITY OF CHIPSETS
Example implementations relate to a cooling module, a circuit assembly having one or more circuit modules and the cooling module, and a method of forming the cooling module. The cooling module includes a first cooling component and a second cooling component disposed on the first cooling component. The first cooling component includes a plurality of microchannel blocks thermally coupled to a plurality of chipsets of the circuit module. The second cooling component includes an inlet port, an outlet port, and a plurality of distribution conduits fluidically coupled to the inlet port and outlet port. Each distribution conduit is disposed on one or more microchannel blocks of the plurality of microchannel blocks and directs coolant from the inlet port to the outlet port through the one or more microchannel blocks to absorb waste heat transferred to the one or more microchannel blocks from at least one chipset of the plurality of chipsets.
Example implementations relate to metadata operations in a storage system. An example includes receiving, by a storage controller of a deduplication storage system, a plurality of data streams to be stored in persistent storage of the deduplication storage system; identifying, by the storage controller, a set of journals in a first journal group that are modified during a first backup process; determining, by the storage controller, a count of the set of journals that are modified during the first backup process; comparing, by the storage controller, the determined count to a migration threshold; and migrating, by the storage controller, at least one journal of the set of journals to a second journal group based at least on a comparison of the determined count to the migration threshold.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 3/06 - Digital input from, or digital output to, record carriers
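The count-versus-threshold migration decision in the abstract above can be sketched as follows (the function and names are hypothetical illustrations, not the patented implementation):

```python
def journals_to_migrate(modified_journals, migration_threshold):
    """Return the journals to move from the first journal group to a
    second journal group, based on how many journals in the group were
    modified during the backup process."""
    count = len(modified_journals)
    if count >= migration_threshold:
        # The group is heavily modified: migrate the modified journals
        # so frequently-changing metadata is separated out.
        return list(modified_journals)
    return []
```

A group whose modified-journal count stays below the threshold is left untouched.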
Systems and methods provide for a federated workflow solution to orchestrate entire machine learning (ML) workflows comprising multiple tasks, across silos. In other words, one or more sets/pluralities of tasks making up an ML workflow can be executed across multiple resource partitions or domains. Federated workflow state can be maintained and shared through some form of distributed database/ledger, such as a blockchain. Agents deployed locally at the silos may orchestrate an ML workflow at a particular resource domain, each such agent having access, via the blockchain (acting as a globally visible/consistent state store), to the aforementioned workflow state. Such systems are capable of operating regardless of the existence of heterogeneous resources/aspects of a silo.
Systems and methods are provided for deterministically estimating whether the location of a computing device, fixed or mobile, is inside a building, using location data. Particularly, the system may compare a 3D geoposition uncertainty region of a location to a 3D volumetric structure shape of the location, and determine a confidence value, based on the comparison, associated with a likelihood that the location associated with the computing device is indoors or outdoors. Various states of the computing device are supported by the substance of the disclosure, including indoor and outdoor states.
G01S 19/45 - Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
A process includes determining, by an operating system agent of a computer system, a first profile that is associated with an input/output (I/O) peripheral of the computer system. The first profile is associated with an error register of the I/O peripheral, and the first profile represents a configuration of the computer system that is associated with the I/O peripheral. The process includes, responsive to a notification of an error being associated with the I/O peripheral, determining, by the operating system agent, a second profile that is associated with the I/O peripheral. The second profile is associated with the error register. Moreover, responsive to the notification of the error, the process includes comparing, by a baseboard management controller of the computer system, the second profile to the first profile. Based on the comparison, the process includes determining, by the baseboard management controller, whether the error is attributable to a driver for the I/O peripheral.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
In some examples, a system identifies resource contention for a resource in a storage system, and determines that a workload collection that employs data reduction is consuming the resource. The system identifies relative contributions to consumption of the resource attributable to storage volumes in the storage system, where workloads of the workload collection that employs data reduction are performed on data of the storage volumes. The system determines whether storage space savings due to application of the data reduction for a given storage volume of the storage volumes satisfy a criterion, and in response to determining that the storage space savings for the given storage volume do not satisfy the criterion, the system indicates that the data reduction is not to be applied for the given storage volume.
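The savings criterion described above can be sketched as a simple ratio check (the function name, parameters, and the fractional-savings form of the criterion are assumptions for illustration):

```python
def keep_data_reduction(logical_bytes, stored_bytes, min_savings):
    """Decide whether data reduction stays enabled for a volume: the
    fraction of space saved must satisfy the criterion."""
    if logical_bytes == 0:
        return False  # nothing written yet; no demonstrated savings
    savings = 1.0 - stored_bytes / logical_bytes
    return savings >= min_savings
```

A volume saving 60% of its logical space passes a 50% criterion and keeps reduction enabled; one saving only 20% would have reduction disabled.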
Examples described herein relate to a centralized service discovery management (CSDM) system and method for distributing service discovery records to access points (APs). The CSDM system receives a service discovery record from a source AP of a plurality of APs deployed in an information technology (IT) infrastructure. The CSDM system is deployed outside the IT infrastructure. The service discovery record comprises information about an application server. Further, the CSDM system generates a neighbor AP group for the application server based on nearness between the source AP and the rest of the plurality of APs, wherein the neighbor AP group includes a set of neighbor APs identified from the plurality of APs. Moreover, the CSDM system transmits the service discovery record to the set of neighbor APs so that the application server is discoverable by client devices connected to the set of neighbor APs.
A first electronic device may comprise a chassis and first fins. The chassis may be configured to removably couple with a second electronic device. The first fins are configured to interleave with second fins of the second electronic device in a coupled state of the first and second electronic devices. A corrugated thermal interface device comprises folded fins. The folded fins are coupled to the first fins and are also removably couplable to the second fins in the coupled state of the first and second electronic devices. Each folded fin comprises one or more lateral walls, and the corrugated thermal interface device further comprises a plurality of spring fingers coupled to and extending at least partially in a lateral direction from the lateral walls. The spring fingers may be contacted and displaced by the second fins in the coupled state of the first and second electronic devices.
A heat removal apparatus may comprise a vapor chamber device and a cover coupled to the vapor chamber device. The vapor chamber device comprises a base, a folded fin structure coupled to the base, with the base and folded fin structure defining a vapor chamber containing a wick and a working fluid. The cover and the vapor chamber device define a liquid chamber configured to receive liquid coolant. The folded fin structure comprises a plurality of folded fins defining a first plurality of grooves on a first side of the folded fin structure and defining a second plurality of grooves on a second side of the folded fin structure. The first plurality of grooves are part of the vapor chamber. The second plurality of grooves are part of the liquid chamber.
Systems and methods are provided for discoverability detection of network services. The present disclosure provides for a cloud-based network insight server that collects performance information of a network and a network agent, communicating with the cloud-based network insight server, that monitors discoverability of network services hosted by devices on the network. The network agent receives configuration information from the cloud-based network insight server and transmits discoverability states of the devices to the cloud-based network insight server based on executing a service discovery process through an access point on the network.
A system for facilitating remote reachability checks for a switch. During operation, the system can receive one or more control messages from a management platform. Here, a respective control message can include one or more type-length-value (TLV) data structures. If the system identifies a first TLV data structure associated with validation in a first control message, the system can determine a validating plane based on a value of the first TLV data structure. The system can then validate the first control message at the validating plane. Upon identifying, in a second control message, a second TLV data structure associated with a plurality of parameters for a request in the second control message, the system can determine a subset of active parameters from the plurality of parameters based on an indicator in the second TLV data structure. The system can then process the request based on the subset of active parameters.
Examples increase precision for aCAMs by converting an input signal (x) received by a circuit into a first analog voltage signal (V(xMSB)) representing the most significant bits of the input signal (x) and a second analog voltage signal (V(xLSB)) representing the least significant bits of the input signal (x). By dividing the input signal (x) bit-wise into the first analog voltage signal (V(xMSB)) and the second analog voltage signal (V(xLSB)), the circuit can utilize aCAM sub-circuits implementing a combination of Boolean operations to search the input signal (x) against 2^(2M) programmable levels, where "M" represents the number of programmable bits for each aCAM sub-circuit. Thus, using similar circuit hardware, example circuits square the number of programmable levels of conventional aCAMs (which generally only have 2^M programmable levels). Accordingly, examples provide new aCAMs that can carry out more complex computations than conventional aCAMs of comparable cost, size, and power consumption.
G11C 15/04 - Digital stores in which information comprising one or more characteristic parts is written into the store and in which information is read-out by searching for one or more of these characteristic parts, i.e. associative or content-addressed stores using semiconductor elements
G11C 16/12 - Programming voltage switching circuits
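The bit-wise division of the input signal can be illustrated in software (a simplified sketch; the analog voltage conversion performed by the circuit is omitted, and the function name is hypothetical):

```python
def split_input(x, m):
    """Split a 2M-bit input code into its M most significant bits and
    M least significant bits, mirroring how the circuit derives the
    values encoded as V(xMSB) and V(xLSB) before the analog search."""
    mask = (1 << m) - 1
    x_lsb = x & mask          # least significant M bits
    x_msb = (x >> m) & mask   # most significant M bits
    return x_msb, x_lsb

# Searching two M-bit halves in combination distinguishes
# 2^M * 2^M = 2^(2M) levels: the square of a single aCAM's 2^M levels.
```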
Examples described herein relate to an optical resonating device. The optical resonating device includes a primary waveguide, a microring resonator, and a microring resonator photodiode. The primary waveguide allows a passage of an optical signal. The microring resonator is formed adjacent to the primary waveguide to couple therein a portion of the optical signal passing through the primary waveguide. Furthermore, the microring resonator photodiode is formed adjacent to the microring resonator to measure an intensity of the portion of the optical signal coupled into the microring resonator.
G02B 6/293 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals with wavelength selective means
Examples described herein relate to selection and variation of extent size thresholds in a storage volume. Examples may select an extent size threshold for a volume according to a type of application that is to store data on the volume. Examples may dynamically vary the extent size threshold based on data reduction metrics, such as deduplication ratio and/or compression ratio. Examples may dynamically vary the extent size threshold for the volume based on cache performance metrics, such as cache miss rate. Examples may dynamically vary the extent size threshold for the volume based on an amount of dead storage space corresponding to partially overwritten extents in the volume.
Examples of cloud-based provisioning of a computing system are disclosed. In an example, a baseboard management controller (BMC) of the computing system may be configured to establish a secure cloud provisioning connection between a cloud manager and the BMC. UEFI configuration may be received from the cloud manager over the secure cloud provisioning connection. A UEFI shell may be executed during a startup of the computing system initiated by the cloud manager. Based on the UEFI configuration, a provisioning proxy server communicatively coupled to a cloud repository may be identified. A startup script may be requested from the cloud repository over a network connection using a UEFI network stack. The startup script may download an image file via the provisioning proxy server from the cloud repository over the network connection and provision the computing system from the image file.
Examples for analyzing the usage of virtualized network instances within a communication network are described. In an example, a usage monitoring service may receive a data change notification from a unified data repository. The data change notification is generated by the unified data repository in response to a data modification in subscriber data of a virtualized network instance. In an example, the data modification retrieved from the generated data change notification is utilized to obtain usage indication information for the virtualized network instance.
Examples of implementing a communication session within a cluster computing environment are described. In an example, a communication session, initially established by a first application server instance, is continued through a second application server instance. Thereafter, a mid-session request received from a communication network may be directed to the second application server instance, wherein the mid-session request pertains to the communication session.
Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to "warm" resources and initiate a fault process if the resource is overloaded or a cache-miss is identified (e.g., by restarting or rebooting the resource). The identified warm instances or accelerators associated with the allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators.
Examples described herein relate to a centralized overlay multicast orchestrator in a software-defined wide area network (SD-WAN). The overlay multicast orchestrator receives and maintains state information from multicast agents deployed on overlay network nodes. Based on the state information, the overlay multicast orchestrator identifies a first set of overlay network nodes connected to a source and a second set of overlay network nodes connected to hosts requesting a multicast stream. The overlay multicast orchestrator computes and distributes a multicast tree representing a path for transmission of the multicast stream to the requesting hosts.
In some examples, a computing device includes a first reset domain including a test controller and a configurable test logic. The computing device includes a second reset domain including a subsystem to be measured by the configurable test logic. The first reset domain is to enter a reset mode, and after exiting the reset mode, receive configuration information that configures the configurable test logic. The test controller of the first reset domain is to maintain the second reset domain in a reset mode after the first reset domain has exited the reset mode of the first reset domain, and responsive to the received configuration information for configuring the configurable test logic, provide a reset release indication to the second reset domain to allow the second reset domain to exit the reset mode of the second reset domain.
In an example, a network switch is to receive a loop detect packet from an access network connected to a data center network (DCN). The DCN includes a VXLAN overlay and the network switch is configured as a VTEP. The network switch compares the VNI of a source VTEP from which the loop detect packet originates with a locally configured VNI. In response to a match, it is determined that the network switch is configured as a peer VTEP. The import RT in the loop detect packet is compared with an export RT of the peer VTEP, and the export RT in the loop detect packet is compared with an import RT of the peer VTEP. Based on the comparison, it is determined whether a VXLAN tunnel is configured between the peer and the source VTEPs. In response to the VXLAN tunnel being configured, the switch may determine that a network loop is present.
H04L 45/645 - Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
Systems and methods are provided for using historic periodic input power data from a server in an IT data center to train a machine learning (ML) model to obtain forecasted power consumption data of the server for a future time period. Time windows of hotspots or coldspots are then identified in the forecasted power consumption data, hotspots being defined as areas or regions of over-utilization in time series data, and coldspots being defined as areas or regions of under-utilization in time series data. The hotspots and coldspots are identified by calculating an exponential moving average (EMA) of the forecasted power consumption data, taking points above the EMA as hotspots and points below the EMA as coldspots. The identified hotspots and coldspots can be used to schedule workloads for a server or a data center, to more efficiently plan existing workloads, or to introduce new workloads at more optimal time periods.
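The EMA-based hotspot/coldspot identification above can be sketched as follows (the smoothing factor and function names are assumptions for illustration):

```python
def ema(series, alpha=0.3):
    """Exponential moving average of a forecast series."""
    out, prev = [], series[0]
    for v in series:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

def hotspots_coldspots(forecast, alpha=0.3):
    """Indices above the EMA are hotspots (over-utilization);
    indices below it are coldspots (under-utilization)."""
    avg = ema(forecast, alpha)
    hot = [i for i, v in enumerate(forecast) if v > avg[i]]
    cold = [i for i, v in enumerate(forecast) if v < avg[i]]
    return hot, cold
```

On a flat forecast with one spike, only the spike is flagged hot and the dip immediately after it (while the average is still elevated) is flagged cold.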
Implementations of the present disclosure relate to a method for decoding error correction. The method comprises detecting a failure to decode a received frame. After the failure is detected, a type of the received frame is determined based on a probability that the received frame follows a prior frame of the received frame in a frame sequence. The method further comprises obtaining a template that includes fixed values corresponding to the type of the received frame, and decoding the received frame based on the fixed values in the template. With these implementations, the correction ability of the decoding can be markedly improved with the assistance of the constructed template.
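The type-prediction and template-application steps can be sketched as follows (a simplified illustration; the decoder itself and the transition-probability model are omitted, and all names are hypothetical):

```python
def predict_frame_type(prior_type, transition_probs):
    """Pick the frame type most likely to follow the prior frame,
    given per-type transition probabilities."""
    return max(transition_probs[prior_type],
               key=transition_probs[prior_type].get)

def apply_template(received_bits, template):
    """Overwrite received bits at the positions the template fixes for
    the predicted type, before re-running the decoder."""
    bits = list(received_bits)
    for pos, val in template.items():
        bits[pos] = val
    return bits
```

Pinning bit positions known to be fixed for the frame type reduces the effective error count the decoder must correct.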
Bias in machine learning (ML) arises when an ML algorithm incompletely learns relevant and important patterns from a dataset, or learns those patterns incorrectly. Such inaccuracy can cause the algorithm to miss important relationships between patterns and features in data, resulting in inaccurate predictions. Systems and methods for detecting potential ML bias in input image datasets are described herein. After a target image is received, a subset of images related to the target image is extracted. The target image and subset of images are analyzed under an imbalance assessment and a data bias assessment to determine the presence of any potential data bias in an ML training pipeline. If any data bias is detected, one or more messages summarizing the assessments and including explanations to enable more accurate predictions in image assessments are sent to the user.
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
90.
MONITORING LICENSE CONSTRAINTS IN A CONTAINER ORCHESTRATION SYSTEM
Embodiments described herein are generally directed to a cloud-native approach to software license enforcement in a container orchestration system. According to an example, information indicative of a number of sets of containers that are making use of one or more components of an application is received. The application is licensed by the tenant and the sets of containers are running within a namespace of the tenant within a cluster of a container orchestration system. Overuse of the application by the tenant is determined based on whether the number exceeds a licensing constraint for the application specified within an Application Programming Interface (API) object of the container orchestration system corresponding to the application. Responsive to a determination that the application is being overused by the tenant, the tenant is caused to be notified regarding the overuse.
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
In some examples, a system identifies sub-portions of a database query, assigns identifiers to the identified sub-portions, and adds the identifiers to a data structure. The system generates a fingerprint representing the database query based on applying a fingerprint function on the data structure including the identifiers.
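The fingerprinting flow above can be sketched as follows (the choice of SHA-256 as the fingerprint function and the sorted-list data structure are assumptions for illustration):

```python
import hashlib

def fingerprint_query(sub_portions):
    """Assign each identified sub-portion an identifier, collect the
    identifiers in a data structure, and apply a fingerprint function
    over that structure to represent the whole database query."""
    identifiers = sorted(hashlib.sha256(p.encode()).hexdigest()[:8]
                         for p in sub_portions)
    return hashlib.sha256("|".join(identifiers).encode()).hexdigest()
```

Because the identifiers are sorted before hashing, queries whose sub-portions appear in a different order still produce the same fingerprint, while any change to a sub-portion changes it.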
One aspect provides a method and system for saturation of multiple I/O slots by multiple testing ports and verification of link health in between. During operation, the system detects a testing card with a plurality of test ports which are coupled to a plurality of input/output (I/O) slots of a computing device. The system communicates with the plurality of test ports via the plurality of I/O slots. The system generates, by the computing device, a script for each test port, wherein the script comprises a series of read and write operations to be executed by the testing card on a memory device associated with the computing device. The system allows the plurality of test ports to execute the script and perform the corresponding read operations and write operations, thereby facilitating testing of the I/O slots of the computing device in parallel by the test ports of the single testing card.
Example implementations relate to a system and method for storing configuration files of a host computing device in a secure storage of a Baseboard Management Controller (BMC). The secure storage includes configuration files associated with the host computing device. The BMC is communicatively connected to the host computing device using a communication link. The secure storage is emulated as a storage device to the host computing device. The BMC monitors the secure storage to detect changes in the configuration files. When there is a change in a configuration file, the BMC performs a security action in the host computing device.
Systems and methods are provided for optimally allocating resources used to perform multiple tasks/jobs, e.g., machine learning training jobs. The possible resource configurations or candidates that can be used to perform such jobs are generated. A first batch of training jobs can be randomly selected and run using one of the possible resource configuration candidates. Subsequent batches of training jobs may be performed using other resource configuration candidates that have been selected using an optimization process, e.g., Bayesian optimization. Upon reaching a stopping criterion, the resource configuration resulting in a desired optimization metric (e.g., fastest job completion time) can be selected and used to execute the remaining training jobs.
G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
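The batch selection loop above can be sketched as follows. This is a simplified illustration: the Bayesian optimization step is replaced by a greedy placeholder, and `run_job(job, cfg)` is a hypothetical callable returning a job's completion time:

```python
import random

def select_config(candidates, run_job, jobs, n_initial=3, seed=0):
    """Run an initial batch of jobs on randomly chosen configuration
    candidates, then reuse the fastest observed configuration for the
    remaining jobs (greedy stand-in for Bayesian optimization)."""
    rng = random.Random(seed)
    times = {}
    for job in jobs[:n_initial]:
        cfg = rng.choice(candidates)
        times.setdefault(cfg, []).append(run_job(job, cfg))
    best = min(times, key=lambda c: min(times[c]))
    for job in jobs[n_initial:]:
        run_job(job, best)
    return best
```

A real implementation would use the observed (configuration, completion time) pairs to fit a surrogate model and pick each next candidate from its acquisition function.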
A dynamic, context-specific power dormancy and management architecture for virtualized RANs that include a PHY layer on an expansion card. The architecture includes (1) a power control agent in a programmable environment on the expansion card that obtains data from subcomponents in the programmable environment on the expansion card, correlates the data to at least a first power control policy stored at the expansion card, implements the correlated first power control policy on the expansion card, and facilitates communication of the selected correlation data and/or raw data to a non-transitory computer-readable medium at a data center; (2) a Power Control Policy Function at the data center where data is obtained from vRAN infrastructure and optimized power control policies that may be shared with the vRANs are developed; and (3) an out-of-band management channel that allows for direct communication between the power control agent and the data center.
A memristor-integrated Mach-Zehnder Interferometer (MZI) device is implemented having the capability to function as a new type of photonic device that can be further leveraged to implement a wide range of photonic applications, such as photonic chips, PICs, optical FPGAs, and the like. The memristor-integrated MZI device distinctly incorporates the photonic capabilities of an MZI with the resistive memory capabilities of a memristor, in order to create a photonic device that supports optical/photonic functions on a component level. For example, MZI circuitry can include two waveguides coupled to an output terminal, wherein the MZI circuitry produces an optical signal as output and propagates the output optical signal to the output terminal; and a memristor integrated on one of the two waveguides of the MZI circuitry, wherein the memristor receives an electrical signal as input and causes a phase shift in the output optical signal from the MZI circuitry.
G02F 1/225 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour by interference in an optical waveguide structure
G02F 1/21 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour by interference
G02F 1/025 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on semiconductor elements with at least one potential jump barrier, e.g. PN, PIN junction in an optical waveguide structure
Examples for identification and authentication of hardware. Techniques may include receiving a node identifier during an initial phase of the node. The node identifier may include an initial unique identifier of the node. The node may receive a latest change identifier during a phase change of the node, wherein the phase change may cause a hierarchical change of the node. The latest change identifier is configured to incorporate a latest unique identifier corresponding to a latest system and one or more unique identifiers corresponding to one or more earlier systems of the node. Further, responsive to receiving the latest change identifier, the node may delete an earlier change identifier and send the latest change identifier to a management service, in response to a request for authentication of the node by the management service.
Systems and methods are provided for automatically constructing data lineage representations for distributed data processing pipelines. These data lineage representations (which are constructed and stored in a central repository shared by the multiple data processing sites) can be used to, among other things, clone the distributed data processing pipeline for quality assurance or debugging purposes. Examples of the presently disclosed technology construct data lineage representations for distributed data processing pipelines by (1) generating a hash content value for universally identifying each data artifact of the distributed data processing pipeline across the multiple processing stages/processing sites of the distributed data processing pipeline; and (2) creating a data processing pipeline abstraction hierarchy for associating each data artifact to input and output events for given executions of given data processing stages (performed by the multiple data processing sites).
G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
G06F 16/25 - Integrating or interfacing systems involving database management systems
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
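The two mechanisms described in the abstract above (content hashing and event association) can be sketched as follows (SHA-256 and the record layout are assumptions for illustration):

```python
import hashlib

def artifact_hash(content: bytes) -> str:
    """A content-derived hash identifies a data artifact identically at
    every processing stage and site that sees the same bytes."""
    return hashlib.sha256(content).hexdigest()

def lineage_event(stage, inputs, outputs):
    """Associate artifact hashes with the input and output events of
    one execution of a data processing stage."""
    return {"stage": stage,
            "inputs": [artifact_hash(b) for b in inputs],
            "outputs": [artifact_hash(b) for b in outputs]}
```

Because the identifier is derived from content alone, two sites that independently produce the same bytes record the same artifact ID in the shared repository, letting lineage edges join across sites.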
An apparatus in a first computing device is provided. During operation, the apparatus can present, to a processor of the first computing device, a virtual interface switch (VIS) coupled to an interface port of the processor. The apparatus can present to the processor that a target device, which is reachable via a remote apparatus of a second computing device, is coupled to the VIS. The apparatuses can be coupled via at least a first fabric and a second fabric. A respective fabric may facilitate communication based on a fabric switching protocol. The apparatus can obtain a set of packets, which can be issued from the interface port via the VIS and directed to the target device. The apparatus can then forward, to the remote apparatus, a first subset of the set of packets via the first fabric and a second subset of the set of packets via the second fabric.
Examples of sensor-based roaming issue identification in a wireless network are disclosed. In an example, a wireless roam for a sensor may be initiated from a source AP to a target AP. Management traffic between the sensor and one of the source AP or the target AP during the wireless roam may be monitored. In response to detecting an anomaly in the management traffic, the wireless roam for the sensor from the source AP to the target AP may be re-initiated. Management traffic capture on the sensor during the reinitiated wireless roam may be initiated. A roaming issue for the sensor may be identified based on analysis of the captured management traffic and the same may be reported.