Systems and methods are provided for performing a dynamic decoupled controlled-Z gate operation. A superconducting circuit of an exemplary system can include a first qubit and a second qubit transversely coupled to the first qubit. The system can apply an external magnetic flux to the second qubit to bring a frequency of the second qubit into resonance with a frequency of the first qubit. The system can apply a continuous alternating drive with continuous phase to the second qubit, a duration and a magnitude of the continuous alternating drive configured to synchronize a gate time of the dynamic decoupled controlled-Z gate operation to an integer number of Rabi oscillation periods. The system can read out a state of the quantum computing system after providing the continuous alternating drive.
H03K 19/20 - Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits characterised by logic function, e.g. AND, OR, NOR, NOT circuits
2.
NETWORK DEVICE MANAGEMENT SYSTEM, METHOD, AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Embodiments of the present invention provide a network device management system, method, and apparatus, a device, and a storage medium. The system comprises a configuration device, and a first device and a second device between which a connection is to be established. The connection establishment process may comprise: the first device sends the configuration device a first connection request comprising its own attribute information; in response to the connection request, the configuration device determines a second device and the attribute information thereof from devices registered on the configuration device, and sends the attribute information of the second device to the first device; the first device can then use the attribute information of the second device as configuration information for establishing a communication connection, thereby finally establishing the communication connection between the first device and the second device. In this process, the attribute information used when the first device establishes the communication connection is automatically fed back by the configuration device, and no manual step is involved in the entire connection establishment process, thereby ensuring the efficiency of establishing the communication connection.
Zero skipping sparsity techniques for reduced data movement between memory and accelerators and reduced computational workload of accelerators. The techniques include detection of zero and near-zero values on the memory. The non-zero values are transferred to the accelerator for computation. The zero and near-zero values are written back within the memory as zero values.
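As an illustrative sketch of the technique described above, the zero-detection and write-back steps can be modeled in NumPy; the `threshold` value and the function name are assumptions, not part of the disclosure:

```python
import numpy as np

def zero_skip(values, threshold=1e-3):
    """Split a buffer into the non-zero payload sent to the accelerator and
    the zeroed image written back to memory (illustrative sketch)."""
    values = np.asarray(values, dtype=float)
    mask = np.abs(values) > threshold            # values worth computing
    nonzero_payload = values[mask]               # only these move to the accelerator
    written_back = np.where(mask, values, 0.0)   # zero and near-zero become exact zeros
    return nonzero_payload, written_back

payload, memory_image = zero_skip([0.0, 2.5, 1e-6, -3.0, 0.0002])
```

Only `payload` would cross the memory-to-accelerator link; `memory_image` is what is written back so later passes see exact zeros.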
Disclosed in the present application are a virtualization processing system, method and apparatus, and an electronic device. The system comprises: a virtualization infrastructure and a management and control virtual machine, wherein the virtualization infrastructure is deployed on a virtual machine management and control board side and is used for constructing a virtualization system, so as to manage a user virtual machine; and the management and control virtual machine is deployed on a host machine side and is used for managing and controlling the user virtual machine by using host machine resources. The system uses a management and control manner based on a virtual node, such that management and control are encapsulated inside a management and control virtual machine, which can be deployed on a host machine side and can also be deployed on a management and control board side. In this way, when the resources of a management and control board are insufficient due to a surge in user access to virtual machines, one or more virtual management and control nodes can be dynamically started on the host machine side; these nodes can share some management and control tasks of the management and control board and perform virtual machine management and control by using host machine resources. Therefore, the dynamic scalability of management and control resources can be continuously ensured.
A data processing method and apparatus. The data processing method comprises: processing input data on the basis of a data processing network to obtain intermediate data; obtaining a fixed attention feature output from an attention network, the fixed attention feature being obtained after a model is trained by using at least an initialized attention feature, and the attention weights comprised in the initialized attention feature being not completely the same; and processing the intermediate data according to the fixed attention feature on the basis of a data aggregation network in the model to obtain output data. The fixed attention network does not receive any input; that is, the attention feature output from the attention network is independent of the input data. The fixed attention feature not only allows more important parameters in the model to play a greater role, improving the accuracy of the model and determining the importance of the parameters, but is also conducive to further compressing the model, and avoids the problem of identical input features that some normalization layers introduce into a traditional attention network.
Disclosed are a multi-core processor task scheduling method and apparatus, and a device and a storage medium. The method comprises: acquiring a target task to be executed; selecting a target processor from a multi-core processor on the basis of attribute information of the target task, wherein the attribute information comprises binding relationship information and priority information, the binding relationship information is used for describing whether the target task needs to be run on processors having a binding relationship, and the priority information is used for describing the priority of the target task; scheduling the target task to the target processor; and running the target task on the target processor. The present invention solves the technical problem that existing multi-core scheduling algorithms cannot accurately determine a target processor for running a task to be executed.
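The selection logic described above can be sketched as follows; the priority convention, field names, and the use of core load as the tiebreaker are illustrative assumptions, not the patented algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    priority: int  # assumed convention: higher value = more urgent
    bound_cores: frozenset = field(default_factory=frozenset)  # empty = no binding

def select_target_core(task, core_loads):
    """The binding relationship restricts the eligible cores; among those,
    prefer the least-loaded core (an assumed tiebreaker)."""
    eligible = task.bound_cores or set(core_loads)
    return min(eligible, key=lambda core: core_loads[core])

def schedule(tasks, core_loads):
    """Dispatch higher-priority tasks first, each to its selected core."""
    placement = {}
    for task in sorted(tasks, key=lambda t: -t.priority):
        core = select_target_core(task, core_loads)
        placement[task.name] = core
        core_loads[core] += 1  # account for the newly placed task
    return placement

loads = {0: 5, 1: 2, 2: 7, 3: 1}
tasks = [Task("worker", priority=1),
         Task("io-task", priority=3, bound_cores=frozenset({0, 2}))]
placement = schedule(tasks, loads)
```

Here `io-task` is restricted to its bound cores {0, 2} and lands on core 0; the unbound `worker` goes to the least-loaded core overall.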
Provided in the embodiments of the present application are an information processing method and apparatus, and a computing device. The method comprises: in response to a parameter optimization request for a parameter to be optimized, when global feature information of said parameter satisfies a preset local switching rule, determining local feature information of said parameter; collecting and obtaining a first candidate solution of said parameter according to the local feature information; and if the first candidate solution satisfies a parameter use condition, determining the first candidate solution to be a target solution of said parameter. By means of the embodiments of the present application, the effectiveness of parameter optimization is improved.
Disclosed are an improved application management method and apparatus. The method comprises: collecting objects in application instances for serverless computing, and generating a reference tree according to a reference hierarchical relationship between the objects; performing memory object rearrangement on a plurality of application instances according to the reference tree; and performing memory merging on the plurality of application instances subjected to rearrangement, so that similar application instances enter a low-power state. In the present invention, by rearranging and merging the memory of application instances, especially their heap memory areas, maximum memory sharing can be realized among a plurality of similar running instances, thereby reducing the system resources occupied by instances in a standby state and improving the average utilization rate of the resources. Thus, with the same system resources, more running instances can be started into a standby state, so as to cope with a scenario of rapid elastic capacity expansion.
A storage engine may obtain one or more object access properties of an object to be received, and determine a type of storage device that is suitable or desirable for storing the object from among different types of storage devices based at least in part on the one or more object access properties of the object to be received. In response to determining the type of storage device, the storage engine may allocate a storage device of such type for the object. The storage engine may then receive the object, and store the object into the allocated storage device.
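A minimal sketch of the allocation decision described above, assuming hypothetical access-property names and tiering thresholds (the disclosure does not specify them):

```python
def choose_device_type(access_props):
    """Map an object's access properties to a storage device type
    (illustrative property names and thresholds)."""
    reads = access_props.get("reads_per_day", 0)
    if access_props.get("latency_sensitive") or reads > 1000:
        return "nvme-ssd"   # hot or latency-critical objects
    if reads > 10:
        return "sata-ssd"   # warm objects
    return "hdd"            # cold objects
```

Once the type is chosen, the engine would allocate a device of that type and stream the object into it.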
A storage engine may be configured to receive data including a plurality of records from a client device, and generate a plurality of record headers for the plurality of records. The storage engine may then transfer a group of record headers of the plurality of record headers to a storage device to cause the storage device to store the group of record headers consecutively in a data sector of the storage device, and further transfer a subset of payload data of one or more records associated with the group of record headers to the storage device to cause the storage device to store the one or more records after the group of record headers in the data sector of the storage device.
An audio signal processing method, a conference recording and presentation method, a device, a system, and a medium. The audio signal processing method comprises: performing sound source positioning on an audio signal collected in a multi-person speaking scenario, so as to obtain a change point of a sound source position (101b); according to the change point of the sound source position, segmenting the audio signal into a plurality of audio segments, and extracting voiceprint features of the plurality of audio segments (102b); according to the duration, the voiceprint features and sound source positions of the plurality of audio segments, performing hierarchical clustering on the plurality of audio segments, so as to obtain audio segments corresponding to the same speaker (103b); and adding the same user tag to the audio segments corresponding to the same speaker, so as to obtain an audio signal to which the user tag is added (104b).
Provided in the embodiments of the present application are an edge cloud system, an edge management and control method, a management and control node, and a storage medium. In the embodiments of the present application, a cloud-edge fusion architecture is provided. For the cloud-edge fusion architecture, management and control over both a center and an edge can greatly improve the edge autonomy capability of the cloud-edge fusion architecture and greatly improve the service capability of an edge containerized application. Furthermore, after a center management and control node satisfies a management and control condition again, a containerized application in an edge cluster is managed and controlled again according to the management and control status that the center management and control node held for the edge cluster before it ceased to satisfy the management and control condition, instead of the management and control status of an edge management and control node for the edge cluster. In this way, the two management and control nodes are loosely coupled to each other and independently perform management and control, and the edge autonomy capability is more flexible.
The embodiments of the present disclosure relate to an object identification code processing method and apparatus, an object publishing method and apparatus, and a device and a medium. In at least one embodiment of the present disclosure, by means of object information, of an original market, that is associated with an object identification code, the object identification code is aggregated with at least some identification codes in an identification code library of a target market, such that when objects are published, a plurality of aggregated objects can be published on the basis of one object identification code, thereby improving the object publishing efficiency; in addition, the object information, of the original market, that is associated with the object identification code is mapped into object information of the target market, and association relationships between the object identification code, aggregation information, and the object information, in the target market, of the object identification code are thus stored in the identification code library of the target market, such that when objects are published, the object information, in the target market, of the object identification code can be found by searching the identification code library of the target market, thereby ensuring successful object publishing.
The systems and methods are configured to efficiently and effectively prime and initialize a memory. A memory controller (130) includes a normal data path (131) and a priming path (132). The normal data path (131) directs storage operations during a normal memory read/write operation after power startup of a memory chip. The priming path (132) includes a priming module (133), wherein the priming module (133) directs memory priming operations during a power startup of the memory chip, including forwarding a priming pattern for storage in a write pattern mode register of a memory chip and selection of a memory address in the memory chip for initialization with the priming pattern. The priming pattern includes information corresponding to proper initial data values. The priming pattern can also include proper corresponding error correction code (ECC) values. The priming module (133) can include a priming pattern register that stores the priming pattern.
The present disclosure provides methods for performing training and executing of a multi-density neural network in video processing. An exemplary method comprises: receiving a video stream comprising a plurality of pictures; processing the plurality of pictures using a first branch of a first block in the neural network, wherein the neural network is configured to reduce blocking artifacts in video compression of the video stream and the first branch comprises one or more residual blocks; and processing the plurality of pictures using a second branch of the first block in the neural network, wherein the second branch comprises a down-sampling processing, an up-sampling processing, and one or more residual blocks.
Core-aware caching systems and methods for non-inclusive non-exclusive shared caching based on core sharing behaviors of the data and/or instructions. The caching between a shared cache level and a core specific cache level can be based on physical page number (PPN) and core identifier sets for previous accesses to the respective physical page numbers. The caching between a shared cache level and a core specific cache level can also be based on physical page number and core valid bit vector sets for previous accesses to the respective physical page numbers by each of the plurality of cores.
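The core valid bit vector bookkeeping can be sketched as follows; a page becomes a candidate for the shared cache level when more than one core's bit is set (the function names and the promotion rule are assumptions for illustration):

```python
def record_access(core_vectors, ppn, core_id):
    """Set the valid bit for core_id in the page's core valid bit vector."""
    core_vectors[ppn] = core_vectors.get(ppn, 0) | (1 << core_id)

def is_shared(core_vectors, ppn):
    """A physical page counts as core-shared when more than one bit is set."""
    return bin(core_vectors.get(ppn, 0)).count("1") > 1

vectors = {}
record_access(vectors, ppn=0x1A2B, core_id=0)
record_access(vectors, ppn=0x1A2B, core_id=2)  # second core touches the same PPN
record_access(vectors, ppn=0x3C4D, core_id=1)
```

Under this sketch, page `0x1A2B` would be kept at the shared cache level, while the core-private page `0x3C4D` could stay in the core-specific level.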
G06F 12/128 - Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
17.
QUALITY ESTIMATION FOR AUTOMATIC SPEECH RECOGNITION
A method, a system, and a computer-readable storage medium are provided for implementing quality estimation for automatic speech recognition, and more specifically training an ASR model, and training a QE model to perform word error rate prediction upon the trained ASR model. The ASR model may be a transformer learning model having an architecture including an encoder including multi-head attention layers, and a memory encoder including a masking multi-head attention layer. The QE model may include a binary classification model and a regression model, where the binary classification model is based on a discrete statistical distribution, and the regression model is based on a continuous statistical distribution. Training the ASR model may produce output having variable word error rates, and the QE model may be trained based on empirical word error rates of the ASR model. The QE model may predict performance of the ASR model without labor-intensive labeling to generate ground truth.
Embodiments of the present application provide an information processing method and device, a server and a user equipment. The method comprises: detecting a parameter optimization request initiated by a target user, and determining a parameter sampling algorithm matching the target user; in response to a sample acquisition request initiated by the target user, calling the parameter sampling algorithm to generate a test sample; determining a simulation result of the test sample on the basis of a preset target function; and outputting the simulation result of the test sample for the target user. The embodiments of the present disclosure improve the parameter optimization efficiency.
A client device may transmit a data stream including an object (such as a data file or record) to a storage system for storing the object in a storage device. In response to receiving the data stream, the storage system may store or write data of the object into a plurality of logical blocks of the storage device with an end-to-end data protection based at least in part on a comparison of a combination of check codes of a header of the object, the object and padding data with a combination of a plurality of check codes that are generated separately for metadata associated with the object and corresponding parts of the object stored in the plurality of logical blocks. The storage system may further provide an end-to-end data protection for reading data of an object stored in multiple logical blocks of a storage device.
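The comparison step can be illustrated with CRC32 as a stand-in for the unspecified check codes: a check code accumulated block by block over the header, object, and padding must equal the single end-to-end check code before the write is accepted.

```python
import zlib

def whole_object_crc(header, body, padding):
    """One check code over header + object + padding, end to end."""
    return zlib.crc32(header + body + padding)

def accumulated_block_crc(blocks):
    """Check code accumulated block by block, as generated while the object
    is written into successive logical blocks (uses CRC32's running value)."""
    crc = 0
    for block in blocks:
        crc = zlib.crc32(block, crc)
    return crc

header, body, padding = b"hdr", b"x" * 10000, b"\x00" * 5
stream = header + body + padding
blocks = [stream[i:i + 4096] for i in range(0, len(stream), 4096)]  # logical blocks
```

A mismatch between the two values would indicate corruption somewhere along the write path.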
A roadside terminal, a traffic light control method, and a related system. The roadside terminal comprises a housing (10) and a circuit board (11) disposed inside the housing (10); the circuit board (11) is provided with a video interface (12), a processor (14), and a first communication interface (15); the video interface (12) is configured to receive the intersection video data captured by a roadside camera device; the processor (14) is configured to perform analysis according to the intersection video data to obtain traffic parameters of the intersection, generate, on the basis of the traffic parameters, a control instruction for controlling a traffic light at the intersection, and send the control instruction to a traffic light control device of the intersection by means of the first communication interface (15). Compared with an existing traffic light management and control system, the roadside terminal locally performs real-time processing on obtained data, so that the real-time response is improved, and the real-time control of a local traffic light can be realized.
Methods and systems are provided for localizing subjects captured in image data by non-stereo vision and localizing source of captured audio by a computing system, to implement responsive conducting of transactional services by an interactive terminal. Prior to routine operation, the computing system of the interactive terminal may be calibrated by fitting subject bearing angles and subject distances relative to the interactive terminal, from captured images, to two respective regression models. During routine operation, the computing system may perform regression computations upon measurements of detected subjects from captured images to localize subject bearing angles and subject distances. Such image localizations may be further enhanced based on audio localizations, further based on calibration. In these manners, tradeoffs are made to accept margins of error to reduce computational complexity, enabling the interactive terminal as a whole to be implemented and deployed with greater flexibility in components and cost than stereo vision systems.
Disclosed are a data processing method, apparatus and system. The method comprises: separating a preset data processing module from a machine learning model to generate a security application module, wherein the security application module is used for performing encryption calculation on data input into the machine learning model; taking, by means of a preset operator, an output value of an operation layer in the machine learning model after separation as an input value for the security application module, and inputting the input value into the security application module; performing, by means of the security application module and according to the input value, subgraph calculation in an isolated operation environment, so as to obtain a calculation result; and returning the calculation result to the preset operator. The present invention solves the technical problem in the related art that different inference frameworks need to be adapted within the TEE model according to different customer requirements, which places heavy operational pressure on the TEE model.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
23.
DISEASE TYPE IDENTIFICATION METHOD, DEVICE AND SYSTEM, AND STORAGE MEDIUM
Provided are a disease type identification method, device and system, and a storage medium. The method comprises: performing disease feature extraction on medical record data of a patient to be reviewed, and performing disease type identification according to the disease features of the patient to be reviewed, so as to determine the disease type of the patient to be reviewed, thereby achieving automated identification of the disease type of a patient, and facilitating improving the disease type identification efficiency.
G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
24.
OBJECT TRACKING METHOD, GROUND OBJECT TRACKING METHOD, DEVICE, SYSTEM, AND STORAGE MEDIUM
Embodiments of the present application provide an object tracking method, a ground object tracking method, a device, a system, and a storage medium. In the object tracking method, by performing instance segmentation on a bi-temporal image, a tracked object in the bi-temporal image can be finely detected, and by performing change detection on pixel coordinates corresponding to the bi-temporal image, a pixel-level change state detection result can be obtained. On the basis of the tracked object obtained by segmentation and the pixel-level change state detection result, a change state of the tracked object in the bi-temporal image can be accurately obtained, and the accuracy and reliability of an object tracking result can be improved.
Embodiments of the present application provide an information flow identification method, a network chip, and a network device. In the embodiments of the present application, an information flow is identified by using an on-chip memory in combination with an off-chip memory. First, the identification of a potential flow is performed by means of the advantage of a greater access bandwidth of the on-chip memory, such that the speed of identifying a potential flow can be improved. Second, only the potential flow, instead of all information flows, is further identified by means of the advantage of the larger storage space of the off-chip memory, such that the number of accesses to the off-chip memory can be reduced, thereby facilitating the improvement of the efficiency and accuracy of identifying a target flow.
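A sketch of the two-stage identification: a fast first pass (standing in for the high-bandwidth on-chip memory) nominates potential flows, and only those are counted exactly in the second pass (standing in for the larger off-chip memory). The thresholds and plain counter tables are simplifying assumptions:

```python
from collections import Counter

ON_CHIP_THRESHOLD = 3   # assumed: a flow this frequent on-chip is "potential"
TARGET_THRESHOLD = 5    # assumed: confirmed target flow after the off-chip pass

def identify_flows(packets):
    """Two-stage flow identification sketch over a stream of flow IDs."""
    on_chip = Counter()
    potential = set()
    for flow_id in packets:                 # stage 1: cheap on-chip counting
        on_chip[flow_id] += 1
        if on_chip[flow_id] >= ON_CHIP_THRESHOLD:
            potential.add(flow_id)
    # stage 2: only potential flows touch the off-chip table
    off_chip = Counter(f for f in packets if f in potential)
    return {f for f, n in off_chip.items() if n >= TARGET_THRESHOLD}
```

Because stage 2 only counts nominated flows, accesses to the slower table are limited to a small subset of the traffic.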
Provided are a data processing method and apparatus, a computing device, and a test simplification device. The data processing method comprises: in response to a parameter optimization request, determining a target function corresponding to the parameter optimization request (101); during the process of performing a parameter test on any candidate parameter in the target function, calling a test simplification module to determine whether the parameter test satisfies simplification conditions, and obtaining a determination result (102); if the determination result is that the parameter test satisfies the simplification conditions, stopping the parameter test of the candidate parameter (103); and if the determination result is that the parameter test does not satisfy the simplification conditions, continuing to execute the parameter test of the candidate parameter to obtain a test result of the candidate parameter (104). The parameter testing efficiency is improved.
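Steps 102-104 can be sketched as an early-stopping loop; the step-wise objective and the example simplification condition below are illustrative assumptions, since the disclosure does not specify the conditions:

```python
def run_parameter_test(candidate, objective, should_simplify, max_steps=100):
    """Run a parameter test step by step, consulting the simplification
    module after each step and stopping early when its conditions hold."""
    history = []
    for step in range(max_steps):
        history.append(objective(candidate, step))
        if should_simplify(history):          # step 102: ask the module
            return history, "stopped-early"   # step 103: abandon this candidate
    return history, "completed"               # step 104: full test result

def worsening(history):
    """Example condition: the last three scores are all worse (larger,
    assuming minimization) than the best score seen so far."""
    return len(history) >= 3 and all(s > min(history) for s in history[-3:])
```

For a candidate whose scores keep deteriorating, the test stops as soon as three consecutive steps fail to beat the best score.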
When performing a recycling operation on a storage device, a storage system may use or create a data buffer in the storage device, and designate the data buffer to temporarily store data of data blocks to be recycled in the storage device using direct memory access (DMA) operations that are performed internally in the storage device, without the need of reading the data of the data blocks from the storage device and writing the data into a host memory of the storage system, thereby saving or reducing the consumptions of the communication bandwidth of a communication channel between the storage system and the storage device, and the memory bandwidth of the host memory.
A storage record engine implemented on a storage system is provided. The storage record engine further organizes hosted storage of the storage system into superblocks and chunks organized by respective metadata, the chunks being further organized into chunk segments amongst superblocks. Persistent storage operations may cause modifications to the metadata, which may be recorded in a transaction log, records of which may be replayed to commit the modifications to hosted storage. The replay functionality may establish recovery of data following a system failure, wherein replay of records of transaction logs in a fashion interleaved with checkpoint metadata avoids preemption of normal storage device activity during a recovery process, and improves responsiveness of the storage system from the perspective of end devices.
Methods and systems for in-memory metadata reduction in cloud storage system are provided. According to an aspect, a method comprises receiving a first command to write a data stream to a storage device; writing the data stream into a plurality of fragments having logical addresses corresponding to physical addresses on the storage device; and generating an index for individual fragment of the plurality of fragments, the index indicating information to locate the physical addresses of the individual fragment. Individual records in the individual fragment have a same pre-set logical size and all individual records in the individual fragment are continuous, and the index indicates the information including at least: an offset value of the individual record in the individual fragment; the pre-set logical size of the individual record; and a pre-set physical size of the individual record.
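Because all records in a fragment are continuous and share pre-set logical and physical sizes, a record's physical location reduces to arithmetic over the small per-fragment index; the field names below are assumptions for illustration:

```python
def locate_record(index, record_no):
    """Compute the physical offset of the record_no-th record in a fragment
    from its index; no per-record metadata is needed in memory."""
    return index["base_offset"] + record_no * index["physical_size"]

fragment_index = {
    "base_offset": 4096,    # physical start of the fragment on the device
    "logical_size": 128,    # pre-set logical size of each record
    "physical_size": 132,   # pre-set physical size (record plus overhead)
}
```

Storing only these few fields per fragment, instead of an entry per record, is what reduces the in-memory metadata footprint.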
A storage engine may be configured to employ different formats for index fragments and index entries of respective records in the index fragments based at least in part on record properties of the respective records in the index fragments, to reduce an amount of memory space that is consumed or used for storing the index fragments in a memory associated with the storage engine, without compromising the efficiency of searching the records stored in a storage device. Using different formats for index fragments covering records of different record properties, the storage engine may further be configured to create, maintain, and update index mappings for records stored or included in the storage device, to provide functionalities of point-lookup, range query, deletion, and additions of the records in the storage device.
A visual tracking system helps to detect or locate an active object (such as an active sound source) in a live video or recorded video, and provides a way to help a user who views the live video or recorded video to focus on the active object, without easily being disturbed or distracted by other objects that are present with the active object. The visual tracking system may further track the active object, or detect a new active object when the live video or recorded video is played.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G01S 5/18 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
32.
SERVICE REQUEST AND PROVISION METHOD, DEVICE, AND STORAGE MEDIUM
Through further capability exposure of a network slice management system, an application server is enabled to request, from the network slice management system, a network slice template service or a network slice capability service in addition to the network slice service, thus making full use of the slice management capabilities and resources in the slice management system, which in turn enhances the efficiency in using slice management capabilities and resources.
A method for obtaining trademark similarity is disclosed in the present application, and includes: obtaining character information of a first trademark and character information of a second trademark; constructing a feature information set according to the character information of the first trademark and the character information of the second trademark; and obtaining a degree of similarity between the first trademark and the second trademark based on the feature information set. By automatically constructing multiple pieces of feature information for evaluating trademark similarity, this method can quickly and accurately obtain a degree of similarity between trademarks, and can also avoid the problems of relying on manually designed rules and of inaccurate calculation by such rules.
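A minimal sketch of the feature-construction step with three hypothetical character-level features (the disclosure does not enumerate the actual features used):

```python
import os

def trademark_features(a, b):
    """Automatically construct a small feature set from the character
    information of two marks (illustrative features only)."""
    a_low, b_low = a.lower(), b.lower()
    chars_a, chars_b = set(a_low), set(b_low)
    return {
        "char_jaccard": len(chars_a & chars_b) / len(chars_a | chars_b),
        "prefix_match": len(os.path.commonprefix([a_low, b_low])),
        "length_ratio": min(len(a), len(b)) / max(len(a), len(b)),
    }

features = trademark_features("StarBrew", "StarBrow")
```

A similarity score could then be computed from this feature set, for example as a learned or weighted combination of the features.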
A number of domain specific accelerators (DSA1-DSAn) are integrated into a conventional processing system (100) to operate on the same chip by adding additional instructions to a conventional instruction set architecture (ISA), and further adding an accelerator interface unit (130) to the processing system (100) to respond to the additional instructions and interact with the DSAs.
G06F 5/06 - Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising
35.
HYBRID MEMORY MANAGEMENT SYSTEMS AND METHODS WITH IN-STORAGE PROCESSING AND ATTRIBUTE DATA MANAGEMENT
A memory module can include a hybrid media controller coupled to a volatile memory (105), a non-volatile memory (110), a non-volatile memory buffer (145) and a set of memory mapped input/output (MMIO) registers. The hybrid media controller (155) can be configured for reading and writing data to a volatile memory (105) of a memory mapped space of a memory module. The hybrid media controller (155) can also be configured for reading and writing bulk data to a non-volatile memory (110) of the memory mapped space. The hybrid media controller (155) can also be configured for reading and writing data of a random-access granularity to the non-volatile memory (110) of the memory mapped space. The hybrid media controller (155) can also be configured for self-indexed movement of data between the non-volatile memory (110) and the volatile memory (105) of the memory module.
A computer-implemented method includes: obtaining a model trained based at least on knowledge distillation between a teacher network and a student network according to a modified triplet loss; obtaining an image and a plurality of videos; feeding the image and the plurality of videos to the model to obtain one or more first features of the image and one or more second features of each of the plurality of videos; and determining one or more of the plurality of videos that match the image according to the one or more first features and the one or more second features.
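For reference, the plain margin-based triplet loss is shown below; the modified variant used in the disclosure is not specified, so this is the standard formulation only, with illustrative feature vectors:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the anchor toward the positive and push
    it from the negative by at least the margin."""
    d_pos = np.sum((anchor - positive) ** 2)   # anchor-to-match distance
    d_neg = np.sum((anchor - negative) ** 2)   # anchor-to-non-match distance
    return float(max(0.0, d_pos - d_neg + margin))

image_feat = np.zeros(3)       # e.g. a feature of the query image
matching_video = np.zeros(3)   # feature of a video that matches the image
other_video = np.ones(3)       # feature of a non-matching video
```

In the image-to-video matching setting above, the image feature would serve as the anchor, with matching and non-matching video features as positive and negative.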
An information provision method, comprising: receiving a reference notification message that a target account is referred, the reference notification message being sent by a first information system after receiving a request initiated by a first user for first information content by referring to the target account, the first information content comprising information content generated in the first information system, and the target account being associated with a second information system; obtaining the first information content; obtaining, from an information library associated with the second information system, second information content related to the first information content; and providing the second information content for the first user. The method can conveniently provide associated commodity object information for content of interest to users of a social network system.
A video data processing method includes: receiving a bitstream; decoding a first index from the bitstream; determining a maximum number of an adaptive loop filter (ALF) for a component of a picture based on the first index; and processing pixels in the picture with the ALF.
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
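A minimal sketch of the index-to-count step in the preceding abstract: a decoded index selects the maximum number of adaptive loop filters (ALFs) for a component. The table values below are purely illustrative and not taken from any actual codec specification.

```python
# Hypothetical index -> maximum ALF count mapping (illustrative values).
ALF_MAX_TABLE = {0: 1, 1: 2, 2: 4, 3: 8}

def max_alf_count(first_index: int) -> int:
    """Return the maximum number of ALFs signalled by the decoded index."""
    if first_index not in ALF_MAX_TABLE:
        raise ValueError(f"unsupported ALF index: {first_index}")
    return ALF_MAX_TABLE[first_index]
```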
39.
PACKET PROCESSING METHOD, DEVICE, SYSTEM, AND STORAGE MEDIUM
A packet processing method, a device, a system, and a storage medium. A programmable device may provide a packet header of a packet to be processed to a CPU for processing, and splice the packet header processed by the CPU with a payload part of the packet to be processed to obtain a target packet. Not only is the payload part of the packet processed with high performance using programmable device hardware, but the complicated business logic of packet header processing is flexibly handled by software in the CPU; and because the packet header is relatively short, the performance loss that CPU software incurs when copying long packets does not occur, thereby improving network forwarding performance.
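The split-and-splice flow above can be sketched as follows. This is a toy illustration: `process_header` stands in for the CPU-side business logic, and the fixed header length is an assumption for demonstration only.

```python
HEADER_LEN = 14  # e.g. an Ethernet header length; assumed for illustration

def process_header(header: bytes) -> bytes:
    # Placeholder for the complex business logic run in CPU software.
    return header

def splice(packet: bytes) -> bytes:
    """Split off the header, process it in 'software', and splice the
    processed header back onto the untouched payload."""
    header, payload = packet[:HEADER_LEN], packet[HEADER_LEN:]
    return process_header(header) + payload
```

The point of the scheme is that only `HEADER_LEN` bytes cross into software, so the copy cost stays constant regardless of payload size.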
Near-memory processing systems for graph neural network processing can include a central core coupled to one or more memory units. The memory units can include one or more controllers and a plurality of memory devices. The system can be configured to offload aggregation, combination, and similar operations from the central core to the controllers of the one or more memory units. The central core can sample the graph neural network and schedule memory accesses for execution by the one or more memory units. The central core can also schedule aggregation, combination, or similar operations associated with one or more memory accesses for execution by the controller. The controller can access data in accordance with the data access requests from the central core. One or more computation units of the controller can also execute the aggregation, combination, or similar operations associated with one or more memory accesses. The central core can then execute further aggregation, combination, or similar operations, or computations of end-use applications, on the data returned by the controller.
Disclosed are a commodity object settlement processing method and apparatus, and an electronic device. The method comprises: determining a plurality of commodity objects to be settled, wherein price attribute information of at least two commodity objects is associated with different first currency types; determining a second currency type required to be used when a payment operation is executed; creating at least two transaction orders according to the plurality of commodity objects, and determining settlement result information respectively corresponding to the at least two transaction orders, wherein the settlement result information is described by means of the second currency type; and providing a combined payment operation option for performing combined payment on the at least two transaction orders. By means of the embodiments of the present application, user operations can be simplified in scenarios involving multi-currency quotation and multi-currency payment.
A video data processing method for cross-component sample adaptive offset is provided. The method includes receiving a bitstream; determining a category index of a target chroma sample, wherein the category index is determined based on a first reconstructed value associated with a co-located luma sample and a second reconstructed value associated with the target chroma sample; decoding an index indicating an offset corresponding to the category index from the bitstream; determining the offset based on the index; and adding the offset to a third reconstructed value associated with the target chroma sample.
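A hypothetical sketch of the cross-component classification-and-offset step described above: a chroma sample is classified by a band derived jointly from its co-located luma reconstruction and its own reconstruction, an offset is looked up for that category, and the offset is added to the chroma value. The band count and classifier are illustrative assumptions, not the actual standardized scheme.

```python
NUM_BANDS = 4  # assumed number of bands per component used for classification

def category_index(luma_rec: int, chroma_rec: int, bit_depth: int = 8) -> int:
    """Joint band classifier combining the co-located luma band and
    the chroma band into a single category index (illustrative)."""
    max_val = (1 << bit_depth) - 1
    luma_band = luma_rec * NUM_BANDS // (max_val + 1)
    chroma_band = chroma_rec * NUM_BANDS // (max_val + 1)
    return luma_band * NUM_BANDS + chroma_band

def apply_ccsao(chroma_rec: int, luma_rec: int, offsets: dict) -> int:
    """Add the decoded offset for this sample's category to the chroma
    reconstruction (missing categories default to a zero offset)."""
    cat = category_index(luma_rec, chroma_rec)
    return chroma_rec + offsets.get(cat, 0)
```

In a real decoder the `offsets` table would be populated from indices decoded from the bitstream, as the abstract describes.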
Methods and apparatuses for video processing include: in response to receiving a video sequence, determining a plurality of reconstructed pictures before a target picture of the video sequence in a temporal order; determining a prediction picture based on an optical flow between two of the plurality of reconstructed pictures; determining a virtual reference picture by inputting the plurality of reconstructed pictures and the prediction picture into a prediction model determined using a machine learning technique; and encoding or decoding the target picture using the virtual reference picture as a reference picture.
Methods and techniques are provided for simulating a quantum circuit. A system can perform operations including generating a transformed Hamiltonian corresponding to a quantum circuit. The transformed Hamiltonian can include transformed local and coupling Hamiltonians. Generation of the transformed Hamiltonian can include obtaining a charge coupling matrix and a flux coupling matrix of an original Hamiltonian corresponding to the quantum circuit and at least partially diagonalizing the charge coupling matrix and the flux coupling matrix. The operations can further include determining a limited eigenbasis including a number of eigenvectors of the transformed local Hamiltonian, projecting the transformed coupling Hamiltonian and the transformed local Hamiltonian onto the limited eigenbasis, and generating an at least partially decoupled Hamiltonian by combining the projection of the transformed coupling and local Hamiltonians. The operations can further include simulating a behavior of the quantum circuit using the at least partially decoupled Hamiltonian.
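The projection-onto-a-limited-eigenbasis step above can be illustrated numerically. The sketch below assumes small dense Hermitian matrices purely for demonstration; a real circuit simulator would operate in far larger Hilbert spaces and with the charge/flux transformations the abstract describes.

```python
import numpy as np

def project_onto_limited_eigenbasis(h_local, h_coupling, num_eigvecs):
    """Keep the lowest `num_eigvecs` eigenvectors of the (Hermitian)
    local Hamiltonian, project both the local and coupling terms onto
    that limited basis, and combine them into a reduced Hamiltonian."""
    _, vecs = np.linalg.eigh(h_local)       # eigenvalues ascending
    basis = vecs[:, :num_eigvecs]           # columns = kept eigenvectors
    h_local_p = basis.conj().T @ h_local @ basis
    h_coupling_p = basis.conj().T @ h_coupling @ basis
    return h_local_p + h_coupling_p
```

Truncating the basis this way trades accuracy for a smaller matrix to simulate, which is the practical point of the projection.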
The present disclosure provides optimizing a causal additive model conforming to structural constraints of directedness and acyclicity, and also encoding both positive and negative relationship constraints reflected by prior knowledge, so that the model, during fitting to one or more sets of observed variables, will tend to match expected observations as well as domain-specific reasoning regarding causality, and will conform to directedness and acyclicity requirements for Bayesian statistical distributions. Computational workload is decreased and computational efficiency is increased due to the implementation of causal additive model improvements to reduce search space and enforce directedness, while intuitive correctness of the outcome causality is ensured by prioritizing encoding of prior knowledge over optimizing a loss function.
A method for optimizing a quantum circuit is disclosed. The method comprises acquiring a representation of a quantum circuit comprising one or more qubits; transforming, by linear transformation, a first Hamiltonian corresponding to the quantum circuit to generate a second Hamiltonian in which free modes are decoupled from non-free modes; generating a third Hamiltonian by removing the free modes from the second Hamiltonian; simulating a behavior of the quantum circuit using the third Hamiltonian; and adjusting a design of the quantum circuit based on the simulated behavior of the quantum circuit.
One embodiment described herein provides a system for monitoring performance of fibers in an optical transport network. The system can include a plurality of optical time-domain reflectometer (OTDR) modules and an OTDR control-and-management module coupled to the plurality of OTDR modules. A respective OTDR module is embedded in a network element of the optical transport network and is coupled to an optical fiber span. The OTDR control-and-management module configures the respective OTDR module to monitor, in real time, performance of the coupled optical fiber span, which can include detecting a fault in the coupled optical fiber span and identifying a location of the detected fault.
H04B 10/071 - Arrangements for monitoring or testing transmission systems; Arrangements for fault measurement of transmission systems using a reflected signal, e.g. using optical time domain reflectometers [OTDR]
48.
ERROR DETECTION, PREDICTION AND HANDLING TECHNIQUES FOR SYSTEM-IN-PACKAGE MEMORY ARCHITECTURES
A system-in-package including a logic die and one or more memory dice can include a reliability availability serviceability (RAS) memory management unit (MMU) for memory error detection, memory error prediction and memory error handling. The RAS MMU can receive memory health information, on-die memory error information, system error information and read address information for the one or more memory dice. The RAS MMU can manage the memory blocks of the one or more memory dice based on the memory health information, on-die memory error type, system error type and read address. The RAS MMU can also further manage the memory blocks based on received on-die memory temperature information and/or system temperature information.
The present disclosure provides methods and apparatuses for applying intra prediction refinement to intra predicted samples. An exemplary method includes: determining a filter based on neighboring samples of intra predicted samples of a picture; generating an offset value based on the neighboring samples; refining the intra predicted samples by adding the offset value; and applying the filter to the intra predicted samples.
A graphics card memory management method and apparatus, a device, and a system. The method comprises: determining the priorities of a plurality of machine learning tasks run by means of a graphics processing unit (GPU); if graphics card memory resources are to be allocated to a high-priority task and the allocatable graphics card memory resources are less than the graphics card memory resource demand of the high-priority task, releasing at least some of the graphics card memory resources occupied by a low-priority task; and allocating the graphics card memory resources to the high-priority task, so as to run the high-priority task at least according to tensor data of a graphics card memory space. With this processing method, when allocatable graphics card memory resources are insufficient, graphics card memory resources occupied by a low-priority task are reallocated to a high-priority task, thereby dynamically scaling the GPU graphics card memory resources occupied by a plurality of machine learning tasks running in parallel on a GPU. In this way, while the performance of the high-priority task is guaranteed, the resource utilization rate of the whole cluster can be improved.
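The reclaim-then-allocate decision above can be sketched as a small priority-driven loop. This is a hedged toy model, not the patent's mechanism: the data structures and the rule of reclaiming from the least-important tasks first are assumptions for illustration.

```python
def allocate(free: int, demand: int, running: dict) -> tuple:
    """Decide whether a high-priority demand can be satisfied.

    `running` maps task name -> (priority, held_memory); a lower
    priority value means less important. Memory is reclaimed from the
    least-important tasks first until the demand fits. Returns
    (ok, released_tasks); no actual allocation is performed here."""
    released = []
    for task, (prio, held) in sorted(running.items(), key=lambda kv: kv[1][0]):
        if free >= demand:
            break
        free += held           # reclaim this task's graphics memory
        released.append(task)
    return free >= demand, released
```

A real system would additionally swap the released tensor data out (and later restore it) so the low-priority tasks can resume, which is the "dynamic scaling" the abstract refers to.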
Methods and systems implement a virtual copy operation that improves support for preemptive garbage collection policies in a distributed file system. A garbage collection process preemptively performs copying on data in a log-structured file system. To avoid distributing thread-blocking write operations across nodes of a distributed file system node cluster, and the resulting degradation of computational performance, a virtual copy operation is provided which, based on master node metadata, locates each chunk node storing blocks to be virtually copied and calls a remap API of a logical address mapper provided by a local file system of the chunk node. The logical address mapper of each chunk node remaps disk addresses from source logical block addresses to destination logical block addresses without relocating data from one disk address to another. The results of these remaps may be stored as metadata at the master node, replacing previously mapped metadata.
The present disclosure provides a superconducting qubit. The superconducting qubit includes a Josephson junction and a non-Josephson junction area, wherein the non-Josephson junction area includes a first layer of superconducting material deposited on the non-Josephson junction area before ion milling is performed on the Josephson junction and the non-Josephson junction area during preparation of the superconducting qubit.
H01L 39/24 - Processes or apparatus specially adapted for the manufacture or treatment of devices provided for in group or of parts thereof
H01L 27/18 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including components exhibiting superconductivity
H01L 39/02 - Devices using superconductivity or hyperconductivity; Processes or apparatus specially adapted for the manufacture or treatment thereof or of parts thereof - Details
Methods, apparatuses, and devices for Josephson junction preparation include: obtaining a first pattern structure for generating a first Josephson junction of a first type and a plurality of second pattern structures for generating a plurality of second Josephson junctions of a second type; evaporating a material on the first pattern structure and the plurality of second pattern structures based on a first evaporation direction to generate a first electrode layer for implementing information transmission; forming an insulating layer on the first electrode layer, the insulating layer including a compound corresponding to the material; evaporating the material on the first pattern structure and the plurality of second pattern structures based on a second evaporation direction to generate a second electrode layer for implementing information transmission; and forming the first Josephson junction and the plurality of second Josephson junctions.
Systems and methods for exchanging synchronization information between processing units using a synchronization network are disclosed. The disclosed systems and methods include a device including a host and associated neural processing units. Each of the neural processing units can include a command communication module and a synchronization communication module. The command communication module can include circuitry for communicating with the host device over a host network. The synchronization communication module can include circuitry enabling communication between neural processing units over a synchronization network. The neural processing units can be configured to each obtain a synchronized update for a machine learning model. This synchronized update can be obtained at least in part by exchanging synchronization information using the synchronization network. The neural processing units can each maintain a version of the machine learning model and can synchronize it using the synchronized update.
Disclosed are a container-based application management method and apparatus. A serverless computing system achieved on the basis of a container is configured to allow application instances to be in one of an online state and a low power consumption state at runtime. In response to performing a capacity reduction process on an application, at least one first application instance of the application in the online state is brought into the low power consumption state; and in response to performing a capacity expansion process on the application, at least one second application instance of the application in the low power consumption state is brought into the online state. Thus, application instances can be rapidly and elastically scaled while the costs of the application instances are reduced.
A data transmission method, a device, a network system, and a storage medium. According to the method, in an application scenario of data fragmentation, a PCIe request end requests data from a PCIe destination end according to a Non-Posted data transmission mode; and on the PCIe link, the Non-Posted data transmission mode is converted into a Posted data transmission mode by means of pre-reading, without the PCIe request end or the PCIe destination end being aware of the conversion, so that the delay of the PCIe link is reduced and the utilization of the bandwidth of the PCIe link is improved.
A computer system includes a memory-side component that executes instructions to implement memory-side pointer chasing in, for example, a graph neural network applied to the analysis of graphs.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing embedding operations, are described. An exemplary method may be implemented on a computing system comprising a plurality of interconnected electronic cards each storing at least a portion of one or more embedding tables, and the method comprises: obtaining a machine-learning job for a machine-learning model, the machine-learning job comprising a plurality of features, wherein the plurality of features comprise a plurality of sparse features; performing, by the plurality of electronic cards, a plurality of embedding tasks based on the one or more embedding tables to obtain a plurality of embedding results; and aggregating the obtained embedding results to obtain an input for the machine-learning model to process the machine-learning job. By implementing near-memory computation in a memory pool for large embedding processing, the method provides technical benefits by solving memory capacity and bandwidth challenges and reducing latency.
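The sharded lookup-then-aggregate flow above can be sketched as follows. This is a deliberately simplified toy: the modulo routing rule, the dict-backed shards, and summation as the aggregation are all assumptions standing in for the cards, embedding tables, and aggregation described in the abstract.

```python
def shard_for(feature_id: int, num_cards: int) -> int:
    """Route a sparse-feature id to the card holding its table slice
    (simple hash-style routing; assumed for illustration)."""
    return feature_id % num_cards

def lookup_and_aggregate(feature_ids, shards):
    """shards[i] is card i's dict of feature id -> embedding vector.
    Look up each id on its owning shard and sum the vectors."""
    total = None
    for fid in feature_ids:
        vec = shards[shard_for(fid, len(shards))][fid]
        total = vec if total is None else [a + b for a, b in zip(total, vec)]
    return total
```

The aggregated vector is what would then be fed into the downstream machine-learning model.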
The embodiments of the present disclosure relate to a short message processing method and apparatus, a device, and a storage medium. Said method comprises: acquiring a short message received by a terminal device; performing type identification processing and key information extraction processing on the short message, so as to obtain the type and key information of the short message; according to the identified short message type, acquiring rich media resources matching the short message type; and generating a corresponding intelligent short message according to the acquired key information and rich media resources. The embodiments of the present disclosure can make the display of a short message more interesting and attractive, making them suitable for short messages for marketing, promotion, and new-client acquisition.
H04M 1/72436 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. SMS or e-mail
A sound source tracking method and apparatus, and a device, a system and a storage medium. The method comprises: acquiring an acoustic signal stream collected by a microphone array in at least one time frame (100); performing sound source orientation estimation on the basis of the acoustic signal stream, so as to obtain an information stream that includes sound source orientation information in the at least one time frame (101); converting the information stream into visualization data that describes an orientation distribution state of a sound source (102); and performing sound source tracking according to the visualization data (103). In the method, an information stream that includes sound source orientation information is converted into visualization data that describes an orientation distribution state of a sound source, and sound source tracking is performed on the basis of the visualization data. By means of the method, the accuracy of sound source tracking can be effectively improved, and the adaptability to various complicated environments can be improved.
G01S 5/18 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
61.
COMMODITY OBJECT PROCESSING METHOD, COMMODITY OBJECT INFORMATION PROCESSING METHOD, PAGE, APPARATUS, AND ELECTRONIC DEVICE
A commodity object processing method, a commodity object information processing method, a page, an apparatus, and an electronic device. The method comprises: determining a plurality of commodity objects in a target set associated with a user (S201); determining, from the target set, at least one target commodity object supporting a same target logistics scheme (S202); and generating a page corresponding to the target set, and aggregating the at least one target commodity object into a same area for display (S203). The method allows users to better enjoy a higher-quality logistics service provided by the target logistics scheme.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
G06Q 30/06 - Buying, selling or leasing transactions
62.
SUPPLEMENTAL ENHANCEMENT INFORMATION MESSAGE IN VIDEO CODING
The present disclosure provides methods, apparatuses and non-transitory computer-readable media for processing video data. According to certain disclosed embodiments, a method for determining an object in a picture includes: decoding a message from a bitstream, including: decoding a first list of labels; and decoding a first index, to the first list of labels, of a first label associated with the object; and determining the object based on the message.
H04N 19/136 - Incoming video signal characteristics or properties
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
A method for cache management. The method includes predicting, based on a prediction model, eviction time offset values respectively for items stored in a cache; storing, in a queue, predicted lifetime values respectively for the items in the cache according to the eviction time offset values; and in response to an eviction operation, selecting a target item to be removed from the cache based on the predicted lifetime values in the queue.
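The lifetime-queue eviction scheme above can be sketched with a minimal priority queue: a predicted eviction-time offset is converted into an absolute predicted lifetime at insert time, and eviction removes the item predicted to expire soonest. `predict_offset` is a hypothetical stand-in for the prediction model; the real model would be learned.

```python
import heapq

def predict_offset(item: str) -> int:
    # Hypothetical prediction model: longer keys are assumed to live
    # longer. A real system would use a trained model here.
    return len(item)

class LifetimeCache:
    def __init__(self):
        self.queue = []   # min-heap of (predicted_lifetime, item)
        self.clock = 0    # logical insertion time

    def insert(self, item: str):
        self.clock += 1
        lifetime = self.clock + predict_offset(item)
        heapq.heappush(self.queue, (lifetime, item))

    def evict(self) -> str:
        """Remove and return the item with the smallest predicted
        lifetime, i.e. the one expected to die soonest."""
        return heapq.heappop(self.queue)[1]
```

Keeping the queue ordered by predicted lifetime makes each eviction decision O(log n) regardless of how the offsets were produced.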
The present disclosure relates to a system and method for video coding. In some embodiments, an exemplary video coding system includes: a region of interest (ROI) detector having circuitry configured to determine a plurality of regions in a frame of a video; a rate controller communicatively coupled with the ROI detector and having circuitry configured to perform bit allocation for the plurality of regions based on demanded quality information for the plurality of regions and generate region bit allocation information; and a video encoder communicatively coupled with the ROI detector and the rate controller and having circuitry configured to encode the frame based on the region bit allocation information.
A method, an apparatus, an electronic device, and a storage medium for network communication are provided by the embodiments of the present invention. The method includes: a first network entity sending a first request message to a second network entity, wherein the first request message includes at least one piece of first network parameter type information, the first request message is used to trigger the second network entity to send a first message to a third network entity according to a preset event, and the first message includes at least one piece of network parameter information corresponding to the first network parameter type information; and the first network entity receiving a first reply message returned by the second network entity in response to the first request message. Using the embodiments of the present invention, the first network entity can achieve dynamic management of an instantiation of the third network entity. Furthermore, the first network entity can dynamically manage the authority of the third network entity to obtain network parameters, thus improving the flexibility of deployment of the third network entity.
Aspects of the present technology are directed toward three-dimensional (3D) stacked processing systems characterized by high memory capacity, high memory bandwidth, low power consumption and small form factor. The 3D stacked processing systems include a plurality of processor chiplets and input/output circuits directly coupled to each of the plurality of processor chiplets.
H01L 25/18 - Assemblies consisting of a plurality of individual semiconductor or other solid state devices the devices being of types provided for in two or more different subgroups of the same main group of groups , or in a single subclass of ,
H01L 23/60 - Protection against electrostatic charges or discharges, e.g. Faraday shields
G11C 11/4074 - Power supply or voltage generation circuits, e.g. bias voltage generators, substrate voltage generators, back-up power, power control circuits
H01L 25/065 - Assemblies consisting of a plurality of individual semiconductor or other solid state devices all the devices being of a type provided for in the same subgroup of groups , or in a single subclass of , , e.g. assemblies of rectifier diodes the devices not having separate containers the devices being of a type provided for in group
H01L 23/52 - Arrangements for conducting electric current within the device in operation from one component to another
A configurable processing unit including a core processing element and a plurality of assist processing elements can be coupled together by one or more networks. The core processing element can include a large processing logic, large non-volatile memory, input/output interfaces and multiple memory channels. The plurality of assist processing elements can each include smaller processing logic, smaller non-volatile memory and multiple memory channels. One or more bitstreams can be utilized to configure and reconfigure computation resources of the core processing element and memory management of the plurality of assist processing elements.
A coding scheduling method, a server, a client terminal, and a system for acquiring a remote desktop. Hardware coding resources are scheduled on the basis of coding requirement information of a coding task, the information being determined from information reflecting remote desktop creation requirements; when occupied hardware coding resources are no longer needed, they are released promptly, saving hardware coding resources; and the released hardware coding resources can be provided to other coding tasks, increasing the number of concurrent channels and improving the rate of utilisation of hardware coding resources.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
69.
TRUSTED VERIFICATION SYSTEM AND METHOD, MOTHERBOARD, MICRO-BOARD CARD, AND STORAGE MEDIUM
Provided in the embodiments of the present application are a trusted verification system and method, a motherboard, a micro-board card, and a storage medium. According to the solution provided in the embodiments of the present application, when the system is powered on, trusted verification of a micro-board card is performed on the basis of a first trusted platform control module (TPCM) on the micro-board card; after verification succeeds, the other components of the micro-board card are controlled to exit the reset state, and trusted verification is performed on the motherboard by means of a motherboard verification component.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
70.
METHOD AND APPARATUS FOR STARTING DEVICE, ELECTRONIC DEVICE, AND COMPUTER STORAGE MEDIUM
A method and apparatus for starting a device, an electronic device, and a computer storage medium. A BIOS of a device reads first startup configuration information from a personality configuration table during the startup stage, compares the first startup configuration information with corresponding second startup configuration information in a management configuration table, and determines startup configuration information according to the comparison result, so as to start the device according to the determined startup configuration information.
A model processing method and apparatus, a device, and a computer-readable storage medium. The method comprises: obtaining a first computational graph corresponding to a model to be trained and a parallelization strategy of said model, the parallelization strategy of said model comprising at least one of pipeline parallelism, model parallelism, data parallelism, and operator splitting (S601); adding parallelization information in the first computational graph according to the parallelization strategy of said model to obtain a second computational graph (S602); determining a distributed computational graph according to the second computational graph and computing resources (S603); and training said model according to the distributed computational graph (S604). Multiple parallelization strategies are supported by a computational graph-based graph editing technique, so that multiple parallelization strategies can be integrated into a system, thereby realizing a distributed training framework capable of supporting multiple parallelization strategies.
A processing-in-memory (PIM) device includes a memory array configured to store data and a computing circuit. The computing circuit is configured to execute a set of instructions to cause the PIM device to: select between multiple computation modes, including first and second sorting modes, based on a configuration from a host communicatively coupled to the PIM device; access data elements in a memory array of the PIM device; and in the first or second sorting mode, output the top K data elements among the data elements to the memory array or to the host. K is an integer greater than a threshold value when the first sorting mode is selected and is an integer smaller than or equal to the threshold value when the second sorting mode is selected.
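The threshold-based choice between the two sorting modes can be sketched functionally. This is a software analogy only: in both modes the result is the top K elements, but a real PIM device would select different hardware paths; the threshold value and the specific per-mode algorithms below are assumptions.

```python
import heapq

K_THRESHOLD = 8  # assumed mode-selection threshold

def top_k(data, k):
    """Return the top k elements in descending order, choosing a
    strategy by k (a software stand-in for the two hardware modes)."""
    mode = "first" if k > K_THRESHOLD else "second"
    if mode == "second":
        # Small K: a K-sized heap pass over the data.
        return sorted(heapq.nlargest(k, data), reverse=True)
    # Large K: full sort, then truncate.
    return sorted(data, reverse=True)[:k]
```

Both branches return the same answer; the split exists because the cheapest way to compute top-K typically differs for small versus large K, which mirrors the two-mode design in the abstract.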
One embodiment described herein provides an optical add-drop multiplexer. The optical add-drop multiplexer can include a plurality of network interface modules and an optical cross connect module coupling the plurality of network interface modules. The plurality of network interface modules can include at least a first network interface module comprising a first optical component from a first vendor and a second network interface module comprising a second optical component from a second vendor. The first and second optical components have different optical parameters. Another embodiment provides an optical network that includes a plurality of interconnected optical add-drop multiplexers, including both homogeneous optical add-drop multiplexers and heterogeneous optical add-drop multiplexers. A heterogeneous optical add-drop multiplexer can include at least a first network interface module comprising a first optical component from a first vendor and a second network interface module comprising a second optical component from a second vendor.
A database system may include at least one load balancing node and a plurality of computing nodes. The load balancing node may receive a distributed transaction that includes a stored procedure, and select a predictor function for the stored procedure from among a plurality of predictor functions. The load balancing node may extract one or more input parameters from the stored procedure, and determine a computing node from among the plurality of computing nodes based at least in part on the one or more input parameters using the predictor function. The load balancing node may then forward the stored procedure to the computing node for processing the stored procedure.
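The predictor-based routing step above can be sketched as a lookup of a per-procedure function that maps input parameters to a node index. The modulo and hash predictors here are illustrative assumptions; the abstract does not specify what the predictor functions compute.

```python
# Hypothetical per-procedure predictor functions: each maps the
# extracted input parameters and the node count to a node index.
PREDICTORS = {
    "get_account": lambda params, n: params[0] % n,   # route by account id
    "default":     lambda params, n: hash(tuple(params)) % n,
}

def route(proc_name, params, num_nodes):
    """Select the predictor registered for this stored procedure (or a
    fallback) and use it to pick the target computing node."""
    predictor = PREDICTORS.get(proc_name, PREDICTORS["default"])
    return predictor(params, num_nodes)
```

Routing by a parameter that determines data placement (such as an account id) sends the transaction to the node that likely owns the touched rows, avoiding cross-node coordination.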
Disclosed in the present invention are an image reconstruction method and apparatus, a computer-readable storage medium, and a processor. The method comprises: upon detecting that an interaction operation occurs on an operation interface, obtaining a target position to which a virtual viewpoint on the operation interface is displaced; controlling transition of the virtual viewpoint from the target position to a predetermined position, wherein the predetermined position is a camera position that coincides with the position of the virtual viewpoint in spatial degrees of freedom; and processing, by using an image deformation model, an image when the virtual viewpoint is located at the predetermined position, to obtain a reconstructed image. The present invention solves the technical problem of poor timeliness in reconstructing an image at a virtual viewpoint.
The embodiments of the present application provide a data query method and apparatus. Said method comprises: acquiring a query statement, the query statement comprising parameter information; searching for a preset template statement matching the query statement, and acquiring preprocessed information corresponding to the template statement; using the preprocessed information and the parameter information to generate a target execution statement; and using the target execution statement to query a preset database for target data. In this way, a query statement sent by a user can be converted into a target execution statement, and the target execution statement is used to query a database for target data, so that the database can directly execute a query task without a statement parsing and optimization process, thereby increasing the query efficiency of the database.
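The template-matching idea can be sketched as below: literals in the incoming query are normalized into placeholders so the query maps onto a preset template whose preprocessed plan is cached. The template table, plan names, and digit-only normalization are simplifying assumptions for illustration.

```python
import re

# Hypothetical preset templates -> preprocessed information
# (e.g. a cached, already-optimized execution plan).
TEMPLATES = {
    "SELECT * FROM users WHERE id = ?": "PLAN_users_by_id",
}

def to_execution_statement(query):
    # Extract literal parameters and recover the template shape.
    params = re.findall(r"\d+", query)
    template = re.sub(r"\d+", "?", query)
    plan = TEMPLATES.get(template)
    if plan is None:
        return None  # no match: fall back to full parse/optimize
    # Combine the cached plan with the extracted parameters.
    return (plan, params)
```

A matched query skips parsing and optimization entirely; only unmatched queries take the slow path.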
Video coding techniques can include determining one or more regions-of-interest in a plurality of levels of interest. Encoding bitrates can be determined for the regions-of-interest in each of the plurality of levels of interest. A compressed bitstream can be generated based on the regions-of-interest in the plurality of levels of interest using the corresponding encoding bitrate of the regions-of-interest.
Provided in embodiments of the present description are a conference processing method and apparatus. A specific embodiment of the method comprises: receiving, from a transit server, conference information of a conference to be scheduled, the conference information being submitted by an initiator of the conference by means of a target window; in response to the conference information comprising attendee information and an attendance time, the attendee information indicating a plurality of attendees, confirming whether each invited attendee among the plurality of attendees can attend the conference at the attendance time; and in response to an invited attendee with a time conflict being present among the plurality of attendees, performing a conference time coordination operation for that invited attendee by interactive means, and determining the attendance time according to the coordination result.
Embodiments of the present invention provide a memory smart card, a device, a network, a method, and a computer storage medium. The memory smart card comprises: a processing module and a memory interconnection interface; at least one memory slot is provided on the memory smart card, and is used for allowing for insertion of a memory bank; the memory interconnection interface is used for communicating with other memory smart cards, and is further used for encapsulating or decapsulating data according to a memory interconnection protocol; the processing module is used for reading data from the memory bank or writing data to the memory bank by means of the memory slot. Data transmission is performed among the memory smart cards by means of the memory interconnection interface, thereby increasing the data transmission rate, and also reducing performance overhead of a processor.
The present disclosure provides intra prediction methods for video or image coding. An exemplary method includes: performing an intra predicting process for a target block, wherein performing the intra predicting process comprises: determining an intra prediction mode for the target block; in response to the intra prediction mode being an angular mode, determining a filtered value by applying an N-tap interpolation filter to a plurality of reference samples surrounding the target block based on the angular mode, wherein N is an integer greater than 4; and determining a predicted value of a sample of the target block based on the filtered value.
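Applying an N-tap interpolation filter to a reference sample line can be sketched as below with N = 6. The tap coefficients here are illustrative only (chosen to sum to 32 for integer normalization) and are not taken from this disclosure or any standard.

```python
def filter_sample(refs, pos, taps=(1, -5, 20, 20, -5, 1)):
    # Hypothetical 6-tap interpolation over a 1-D reference sample
    # line; 'pos' is the index of the leftmost tap.
    acc = sum(c * refs[pos + i] for i, c in enumerate(taps))
    # Normalize by the tap sum (32 here) with integer division.
    return acc // sum(taps)
```

A flat reference line passes through unchanged, which is the usual sanity check for an interpolation filter whose taps sum to the normalization constant.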
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/80 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
81.
SYSTEMS AND METHODS FOR INTRA PREDICTION SMOOTHING FILTER
Methods, apparatus, and non-transitory computer readable medium for video processing are provided. The method for video processing includes: dividing an intra prediction block into one or more sub-blocks; performing a padding process for the one or more sub-blocks; and filtering the one or more sub-blocks with a parallel intra prediction smoothing (IPS) process.
Network communication methods and apparatuses are provided in the embodiments of the present invention. The user equipment receives an update message sent by a second network entity through a third network entity. The update message is used to update a terminal route selection policy rule. The terminal route selection policy rule includes a network identification, and the network identification is used to enable the user equipment to select a corresponding network according to a target application. The user equipment sends an update reply message for the update message to the third network entity, to cause the third network entity to send the update reply message to the second network entity. Through the embodiments of the present invention, the user equipment can determine a corresponding private network, access the network through the application, and obtain a service of a specified application, thus improving the efficiency of network services.
Systems and methods provide a multi-version shard plan to improve horizontal scaling and load balancing across distributed shards while avoiding downtime resulting from data migration. Under serverless architectures, application hosting service providers may provision customers with distributed computing resources partitioned into shards, which may be dynamically assigned, scaled, and load-balanced for architecture-agnostic workflow applications. A versioned shard plan may be implemented at a database, storing key ranges which enable workflow application executions to be uniquely mapped to particular shards. The database controller may immutably assign a shard to each execution of a workflow application, and publish a new version of the shard plan accessible to both a frontend and a backend of the customer's hosted applications. With these solutions, horizontal scaling of the computing resources assigned to workflow applications avoids data migration between shards and the downtime that would otherwise result.
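The versioned-plan mechanism described above can be sketched as follows: each plan version maps key ranges to shards, and an execution is immutably pinned to the shard it was first assigned, so publishing a new version never migrates in-flight executions. The class, boundary encoding, and dictionary names are illustrative assumptions.

```python
import bisect

class ShardPlan:
    # Hypothetical plan: sorted upper bounds define one key range
    # per shard; shard i owns keys below boundaries[i].
    def __init__(self, version, boundaries):
        self.version = version
        self.boundaries = boundaries

    def shard_for(self, key):
        return bisect.bisect_left(self.boundaries, key)

plans = {1: ShardPlan(1, [100, 200]),
         2: ShardPlan(2, [50, 100, 150, 200])}  # scaled-out version
assignments = {}  # execution_id -> (plan_version, shard)

def assign(execution_id, key, current_version):
    # Immutable assignment: once pinned, an execution keeps its
    # shard even after a newer plan version is published.
    if execution_id not in assignments:
        plan = plans[current_version]
        assignments[execution_id] = (plan.version, plan.shard_for(key))
    return assignments[execution_id]
```

New executions pick up the latest published version, while existing ones finish on their original shard, which is how data migration (and its downtime) is avoided.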
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
84.
JOSEPHSON JUNCTION, JOSEPHSON JUNCTION PREPARATION METHOD AND APPARATUS, AND SUPERCONDUCTING CIRCUIT
A superconducting circuit having a Josephson junction includes a first electrode layer for signal transmission; a second electrode layer for signal transmission; and an insulating layer arranged between the first electrode layer and the second electrode layer to form a Josephson junction, wherein, the first electrode layer and the second electrode layer are composed of a preset material, the insulating layer is composed of a compound corresponding to the preset material, and the preset material includes a non-aluminum superconducting material to prolong a coherence time of superconducting qubits.
H01L 39/02 - Devices using superconductivity or hyperconductivity; Processes or apparatus specially adapted for the manufacture or treatment thereof or of parts thereof - Details
G06N 10/00 - Quantum computing, i.e. information processing based on quantum-mechanical phenomena
H01L 39/08 - Devices using superconductivity or hyperconductivity; Processes or apparatus specially adapted for the manufacture or treatment thereof or of parts thereof - Details characterised by the shape of the element
H01L 39/10 - Devices using superconductivity or hyperconductivity; Processes or apparatus specially adapted for the manufacture or treatment thereof or of parts thereof - Details characterised by the means for switching
H01L 39/12 - Devices using superconductivity or hyperconductivity; Processes or apparatus specially adapted for the manufacture or treatment thereof or of parts thereof - Details characterised by the material
85.
IMAGE FRAME CODING METHOD, OBJECT SEARCH METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM
Disclosed by embodiments of the present application are an image frame coding method, an object search method, a computer device, and a storage medium. The image frame coding method comprises: dividing a target image frame into a plurality of first data blocks; splitting the plurality of first data blocks into batches and inputting them into a plurality of coding units that operate in sequence, wherein at least two coding units process data synchronously within a portion of the time segments during which the plurality of coding units are operating; and coding the target image frame according to coding parameters respectively corresponding to the plurality of first data blocks. Compared to sequential operation using video frames as units, the present method can greatly reduce the time spent determining coding parameters, improve video coding efficiency, and reduce data block processing delay. Because running the plurality of coding units in parallel allows resource reuse on a hardware unit, chip size can be reduced, hardware overhead lowered, hardware costs reduced, and hardware implementation facilitated.
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
86.
NOISE CONTROL IN PROCESSOR-IN-MEMORY ARCHITECTURES
Runtime hardware configuration based on multiple electronic noise models is provided to control electronic noise emitted by a processor-in-memory (PIM) architecture, which may be harvested as entropy for a random number generator. Outside of PIM runtime, one or more noise models calibrate parameters of the noise model based on runtime feedback from the PIM; a configuration solver determines a runtime hardware configuration for the PIM to satisfy per-layer randomness requirement(s) of a learning model; the configuration solver transmits the runtime hardware configuration to a noise control module; the noise control module receives real-time runtime feedback regarding the PIM; the noise control module determines, based on the real-time runtime feedback, configuration instructions for the PIM so that runtime conditions approach the configuration parameters of the runtime hardware configuration; and the noise control module performs runtime hardware configuration by outputting the determined configuration instructions to one or more hardware modules of the PIM.
Apparatus, method, and system provided herein are directed to prioritizing cache line writing of compressed data. The memory controller comprises a cache line compression engine that receives raw data, compresses the raw data, determines a compression rate between the raw data and the compressed data, determines whether the compression rate is greater than a predetermined rate, and outputs the compressed data as data-to-be-written if the compression rate is greater than the predetermined rate. In response to determining that the compression rate is greater than the predetermined rate, the cache line compression engine generates a compression signal indicating the data-to-be-written is the compressed data and sends the compression signal to a scheduler of a command queue in the memory controller where writing of compressed data is prioritized.
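The compression-rate gate described above can be sketched in a few lines. Using `zlib` as a stand-in compressor and a ratio threshold of 2.0 are illustrative assumptions; the abstract does not name a compression algorithm or a specific predetermined rate.

```python
import zlib

def prepare_write(raw: bytes, min_ratio: float = 2.0):
    # Hypothetical sketch of the cache line compression engine:
    # compress, measure the ratio, and set the compression signal
    # only when the ratio exceeds the predetermined rate.
    compressed = zlib.compress(raw)
    ratio = len(raw) / len(compressed)
    if ratio > min_ratio:
        return compressed, True   # compression signal asserted
    return raw, False             # write raw data, no signal
```

The boolean here plays the role of the compression signal sent to the command-queue scheduler, which would then prioritize the compressed write.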
The present disclosure provides video decoding method. An exemplary method includes: decoding a first parameter for a coding unit from a bitstream, and determining a candidate for the coding unit based on the first parameter; determining a value of a second parameter associated with the coding unit based on a value of a second parameter associated with the candidate, wherein the second parameter indicates whether a bi-directional prediction correction is enabled; and in response to the value of the second parameter associated with the coding unit indicating the bi-directional prediction correction being enabled, performing the bi-directional prediction correction on the coding unit.
H04N 19/189 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
H04N 19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
89.
SPEAKER ROLE INFORMATION PROCESSING METHOD AND APPARATUS
Provided are a speaker role information processing method and apparatus, one specific embodiment of the method comprising: receiving, from a speech processing system, first prompt information for prompting a role presentation, the first prompt information comprising identity information and position information of a current speaker; according to the first prompt information, displaying role information of the current speaker on a target interface, the role information at least comprising the identity information and the position information.
Provided are a network reconnection method, and a device, a system and a storage medium. In the embodiments of the present application, a network element in a mobile network and a central management and control node cooperate with each other to realize the change in an edge cloud node and the migration of an edge cloud service, and a session connection between a user terminal and the changed edge cloud node is established by means of the network element in the mobile network. In this way, a changed edge cloud node can continue to provide a service for a user terminal, such that the probability of service interruption can be reduced or the quality of service can be prevented from deteriorating, thereby facilitating an improvement in the quality of service of a network system.
PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL (China)
Inventor
Wang, Ronggang
Cai, Yangang
Gu, Song
Sheng, Xiaojie
Abstract
A video processing method, an apparatus, an electronic device, and a storage medium, wherein the video processing method comprises: stitching frame-synchronized multi-viewpoint texture maps into a texture map stitched image; stitching depth maps corresponding to the frame-synchronized multi-viewpoint texture maps into a depth map stitched image; and separately coding the texture map stitched image and the depth map stitched image to obtain a corresponding texture map compressed image and depth map compressed image. The described solution can lower the requirement on the decoding capability of a decoding end and reduce compression loss.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 13/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details thereof
92.
FREE VIEWPOINT VIDEO RECONSTRUCTION AND PLAYING PROCESSING METHOD, DEVICE, AND STORAGE MEDIUM
PEKING UNIVERSITY SHENZHEN GRADUATE SCHOOL (China)
Inventor
Wang, Ronggang
Cai, Yangang
Gu, Song
Sheng, Xiaojie
Abstract
Free viewpoint video reconstruction and playing processing methods, a device, and a storage medium. The video reconstruction method comprises: acquiring a free viewpoint video frame, wherein the video frame comprises synchronous original texture images of a plurality of original viewpoints, and original depth images of the corresponding viewpoints; acquiring a target video frame corresponding to a virtual viewpoint; using original texture images of the plurality of original viewpoints and corresponding original depth images in the target video frame to synthesize a texture image of the virtual viewpoint; acquiring background texture images and background depth images of the corresponding viewpoints in the target video frame, and acquiring a background texture image of the virtual viewpoint according to the background texture images and the background depth images of the corresponding viewpoints; and using the background texture image of the virtual viewpoint to perform hole filling on a hole region in the texture image of the virtual viewpoint, and then performing processing to obtain a reconstructed image of the virtual viewpoint. By means of the scheme, the hole filling quality can be improved, thereby improving the image quality of a free viewpoint video.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
A video compression method and apparatus, a video decompression method and apparatus, an electronic device, and a storage medium. The video compression method comprises: obtaining a free viewpoint video frame, the video frame comprising a stitched image formed by synchronized texture maps of multiple viewpoints and depth maps of the corresponding viewpoints; identifying a texture map region and a depth map region in the stitched image; and coding the texture map region by using a preset first coding mode and coding the depth map region by using a preset second coding mode to obtain a compressed video frame, wherein the second coding mode is a region of interest-based coding mode. The solution can reduce compression loss of an image, and then can improve the image quality of a free viewpoint video.
The embodiments of the present application disclose a live streaming method and apparatus, and an electronic device. Said method comprises: a first server receiving a request for creating multi-language live streaming submitted by a first client; after the multi-language live streaming is successfully created, obtaining, according to a source live stream acquired by the first client, a translated target live stream corresponding to at least one target language; and after a request for pulling the live stream submitted by a second client is received, determining a target language required by a user associated with the second client, and providing the target live stream corresponding to the target language to the second client for playback. By means of the embodiments of the present application, live streaming techniques can be better applied in systems such as a cross-border commodity object information service system.
An embodiment of the present application provides a network data processing system and method, a network element device, and a server. Said system comprises: a first network element device, configured to send first data to a second network element device, replicate the first data to obtain second data, and send the second data to a target server; and the target server, configured to execute a corresponding network traffic monitoring task according to the received second data. According to the technical solution provided in the embodiments of the present application, in cases where there is no need to connect a plurality of complex auxiliary devices, such as an optical splitter and a shunting switch, to a network interface, the first network element device can directly complete the replication operation of data traffic, and then send the replicated data traffic to the target server for executing a monitoring task. The network data processing system of the present solution has a simple structure, low costs and high construction efficiency.
Disclosed in the embodiments of the present application are a user group interaction method and apparatus, and an electronic device. The method comprises: providing an operation option for creating a subgroup to a target group; after a request for creating a subgroup is received by means of the operation option, creating a subgroup, and establishing the association relationship between the subgroup and the target group; after the subgroup is created, providing a label option for the subgroup in an interaction window associated with the target group; and after a switching operation request is received by means of the label option of the subgroup, switching to the subgroup for group interaction. By means of the embodiments of the present application, the problem of excessive difficulty in management or search caused by an excessive number of groups in a conversation list can be avoided, thereby improving the flexibility and convenience of group information interaction.
A data processing method and apparatus, and an intelligent network card and a server. On the basis of the method, before an auxiliary processor is upgraded, a process corresponding to a dynamic area in the auxiliary processor can be preset in a memory of a load device connected to the auxiliary processor. During the upgrade of the auxiliary processor, a dynamic area to be upgraded in the auxiliary processor is first determined as the target dynamic area, the target data input into the target dynamic area is identified, and the transmission path of the target data is then modified from pointing to the target dynamic area to pointing to the target process corresponding to the target dynamic area, such that processing of the target data can continue during the upgrade by calling the target process. After the transmission path of the target data has been modified in this manner, the target dynamic area is upgraded, so that the upgrade of the auxiliary processor does not affect the processing of the target data on the auxiliary processor, thereby ensuring the stability of data processing on the auxiliary processor.
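The path-redirection step described above can be reduced to a small routing-table update. The table, area, and stand-in process names below are purely hypothetical placeholders for illustration.

```python
# Hypothetical routing table: where data destined for a dynamic
# area currently flows. Initially each area receives its own data.
routes = {"dyn_area_0": "dyn_area_0"}
# Pre-loaded stand-in processes, one per dynamic area.
standins = {"dyn_area_0": "proc_dyn_area_0"}

def begin_upgrade(area):
    # Redirect the area's input to its stand-in process so data
    # processing continues while the area itself is upgraded.
    routes[area] = standins[area]

def finish_upgrade(area):
    # Restore the original transmission path after the upgrade.
    routes[area] = area
```

The key property is that the redirect happens before the upgrade starts and is reversed only after it completes, so no target data is dropped in between.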
Provided are a content provision method and apparatus, a content display method and apparatus, and an electronic device and a storage medium. The content display method can comprise: acquiring target content, the target content matching a target behavior of a subject in a target video; and in response to the target behavior, displaying the target content in an associated manner in a display interface of the target video.
The present disclosure provides methods, apparatus, and non-transitory computer readable medium for processing video data. According to certain disclosed embodiments, a method includes: determining a set of parameters from a plurality of sets of parameters, wherein the set of parameters includes a scaling factor; determining a predicted sample value of a first chroma component based on the set of parameters, a reconstructed sample value of a luma component, and a reconstructed sample value of a second chroma component; and signaling an index associated with the set of parameters in a bitstream.
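A minimal sketch of the cross-component prediction step follows, assuming a linear model. The model form, parameter names (`alpha`, `beta`, `offset`), and values are assumptions for illustration; the abstract only states that a parameter set including a scaling factor combines the reconstructed luma and second-chroma samples.

```python
# Hypothetical parameter sets; the signaled index selects one.
PARAM_SETS = [
    {"alpha": 0.5, "beta": 0.25, "offset": 16},
    {"alpha": 1.0, "beta": 0.0,  "offset": 0},
]

def predict_chroma(index, luma, chroma2):
    # Predict the first chroma sample from the reconstructed luma
    # and second chroma samples using the indexed parameter set.
    p = PARAM_SETS[index]
    return round(p["alpha"] * luma + p["beta"] * chroma2 + p["offset"])
```

The decoder reads the index from the bitstream and applies the same parameter set, so only the index, not the parameters themselves, needs to be signaled.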
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
100.
QUANTUM CHIP PREPARATION METHOD, APPARATUS, AND DEVICE AND QUANTUM CHIP
Methods, apparatuses, and devices for quantum chip preparation include acquiring a coplanar waveguide in a quantum chip; and establishing a connecting bridge on the coplanar waveguide using a bonding machine, wherein the connecting bridge is configured to connect a first reference ground and a second reference ground located on two sides of the coplanar waveguide to change the chip's electromagnetic resonance frequency. A quantum chip includes a transmission line configured for signal transmission; and a resonant cavity coupled to the transmission line and configured to regulate an operating state of qubits on the quantum chip, wherein the transmission line and the resonant cavity are both composed of a coplanar waveguide, the coplanar waveguide is provided with a connecting bridge, and the connecting bridge is configured to connect a first reference ground and a second reference ground on two sides of the coplanar waveguide to change the chip's electromagnetic resonance frequency.