A system and method for multimodal video segmentation in a multi-speaker scenario are provided. A transcript of a video with a plurality of speakers is segmented into a plurality of sentences. Speaker change information is detected between each two adjacent sentences of the plurality of sentences based on at least one of audio content or visual content of the video. The video is segmented into a plurality of video clips based on the transcript of the video and the speaker change information.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/14 - By phonemic categorisation or speech recognition prior to speaker identification or verification
G10L 25/60 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination, for measuring the quality of voice signals
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06F 40/284 - Lexical analysis, e.g. segmentation into units or co-occurrence
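The clip-segmentation step described in the abstract above can be sketched in a few lines. This is a toy illustration only: the function name is invented, and the audio/visual speaker-change detection is stubbed as a list of booleans rather than derived from the video.

```python
def segment_by_speaker(sentences, speaker_change):
    """Group consecutive transcript sentences into clips, splitting where
    the speaker changes.  speaker_change[i] is True when a new speaker
    starts at the boundary between sentences[i] and sentences[i + 1]."""
    clips = [[sentences[0]]]
    for sentence, changed in zip(sentences[1:], speaker_change):
        if changed:
            clips.append([sentence])    # new speaker: start a new clip
        else:
            clips[-1].append(sentence)  # same speaker: extend current clip
    return clips

clips = segment_by_speaker(
    ["Hello and welcome.", "Today we have a guest.",
     "Thanks for having me.", "Glad to be here."],
    [False, True, False],
)
```

Each resulting sentence group would then be mapped back to its time span in the video to produce a video clip.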
2.
SYSTEM AND METHOD FOR UNSUPERVISED SUPERPIXEL-DRIVEN INSTANCE SEGMENTATION OF REMOTE SENSING IMAGE
A system and method for unsupervised superpixel-driven instance segmentation of a remote sensing image are provided. The remote sensing image is divided into one or more image patches. The one or more image patches are processed to generate one or more superpixel aggregation patches based on a graph-based aggregation model, respectively. The graph-based aggregation model is configured to learn at least one of a spatial affinity or a feature affinity of a plurality of superpixels from each image patch and aggregate the plurality of superpixels based on the at least one of the spatial affinity or the feature affinity of the plurality of superpixels. The one or more superpixel aggregation patches are combined into an instance segmentation image.
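The aggregation step described above can be illustrated with a minimal union-find sketch. Names and the thresholding rule are assumptions; the patent learns affinities with a graph-based model, which is stubbed here as a dictionary of pairwise scores.

```python
def aggregate_superpixels(n, affinity, threshold=0.5):
    """Merge superpixels 0..n-1 whose pairwise affinity meets a threshold,
    using union-find; returns one cluster label per superpixel."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (i, j), score in affinity.items():
        if score >= threshold:
            parent[find(i)] = find(j)      # union the two clusters
    return [find(i) for i in range(n)]

labels = aggregate_superpixels(4, {(0, 1): 0.9, (1, 2): 0.1, (2, 3): 0.8})
```

Superpixels sharing a label form one aggregated instance; patch results would then be stitched into the full instance segmentation image.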
The present disclosure describes a computer-implemented method for image landmark detection. The method includes receiving an input image for the image landmark detection, generating a feature map for the input image via a convolutional neural network, initializing an initial graph based on the generated feature map, the initial graph representing initial landmarks of the input image, performing a global graph convolution of the initial graph to generate a global graph, where landmarks in the global graph move closer to target locations associated with the input image, and iteratively performing a local graph convolution of the global graph to generate a series of local graphs, where landmarks in the series of local graphs iteratively move further towards the target locations associated with the input image.
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
The present application relates to artificial intelligence, and discloses a text classification method, including: preprocessing original text data to obtain a text vector; matching a tag to the text vector to obtain a tagged text vector and an untagged text vector; inputting the tagged text vector into a BERT model to obtain a word vector feature; training the untagged text vector with a convolution neural network model according to the word vector feature to obtain a virtually tagged text vector; and using a random forest model to perform multi-tag classification on the tagged text vector and the virtually tagged text vector to obtain a text classification result. The present application also provides a text classification apparatus and a computer-readable storage medium. The present application can realize accurate and efficient text classification.
The present disclosure provides a system, a method, an electronic device, and a storage medium for identifying risk events based on social information. The system includes an obtaining module configured for obtaining social information released by various predetermined social accounts from a predetermined social server; an analysis module configured for analyzing the social information to obtain a company name and/or a product name contained in the social information; a resolution module configured for, after the company name and/or product name contained in the social information is obtained, resolving the social information to obtain key point information corresponding to the social information; and an identifying module configured for identifying an information-directing classification corresponding to the key point information using a pre-trained classifier, such that the social information corresponding to the predetermined information-directing classification and the social account releasing the social information are sent to a predetermined terminal.
G06Q 10/0635 - Risk analysis related to the activities of enterprises or organisations
G06Q 50/00 - Systems or methods specially adapted for a specific sector of economic activity, e.g. utilities or tourism
A crop disease diagnosis system is disclosed. The crop disease diagnosis system includes a communication module, a crop disease database and a crop feature classification module. The communication module is configured to receive a crop image. The crop disease database stores at least one crop disease sample case. The crop feature classification module is configured to extract a feature vector representation of the crop image, compare the feature vector representation of the crop image with the at least one crop disease sample case, and classify a crop disease associated with the crop image. The feature vector representation of the crop image is extracted by a feature extraction network, and a fully connected layer is removed from the feature extraction network during classification of the crop disease.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces on social networks
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
7.
Method for positioning vertebra in CT image, apparatus, device, and computer readable storage medium
The present disclosure provides a method of positioning vertebrae in a CT image, an apparatus, a computer device, and a computer-readable storage medium. The method includes: pre-processing vertebra CT image data; inputting the pre-processed vertebra CT image data into a pre-trained neural network to obtain regression results of heat maps of key points corresponding to the pre-processed vertebra CT image data; regressing 3D heat maps corresponding to the positions of the key points of the vertebral mass centers based on the regression results of the heat maps of the key points and the pre-processed vertebra CT image data; and, using the 3D heat maps corresponding to the positions of the key points of the vertebral mass centers as labels, regressing 3D heat map information with the network to position the vertebrae. Effects caused by differences between scanning machines and by scanning noise are avoided, and vertebrae with complex forms are accurately positioned.
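Heat-map labels used in this kind of key-point regression are conventionally Gaussian blobs centred on each key point. A minimal sketch of generating such a 3D label follows; all names are assumptions and the grid is kept tiny for illustration.

```python
import math

def gaussian_heatmap_3d(shape, centre, sigma=1.5):
    """Build a 3D Gaussian heat-map label peaking at a key-point centre.
    Values fall off with squared distance from the centre voxel."""
    d, h, w = shape
    cz, cy, cx = centre
    return [[[math.exp(-((z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2)
                       / (2 * sigma ** 2))
              for x in range(w)]
             for y in range(h)]
            for z in range(d)]

hm = gaussian_heatmap_3d((5, 5, 5), (2, 2, 2))
```

The network is then trained to reproduce such maps, and the predicted peak locations give the vertebral mass-center positions.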
A system and a method for monitoring emission of a greenhouse gas are disclosed. A plurality of satellite observations associated with the emission of the greenhouse gas in a first region of interest are received from a plurality of satellite data sources, respectively. The plurality of satellite observations are fused to generate a fused input data set. An emission estimation model is used to generate a first emission estimate of the greenhouse gas in the first region of interest based on the fused input data set.
G01N 33/00 - Investigating or analysing materials by specific methods not covered by the groups
G06F 30/27 - Design optimisation, verification or simulation of the designed object using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
9.
FUNDUS COLOR PHOTO IMAGE GRADING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
A fundus color photo image grading method and apparatus, a computer device, and a storage medium are provided. The method comprises: obtaining an original image, enhancing the original image, and obtaining a target image (S100); performing color processing on the original image and the target image, and respectively obtaining a first processed image and a second processed image (S200); and using a pre-trained grading model to process the first processed image and the second processed image, and obtaining a target grading result (S300). The first and second processed images, drawn from multiple color spaces, act as model inputs, and prediction is performed by means of features fused at the whole-image scale to implement the classification and grading of common fundus color photo diseases, so as to automatically screen fundus color photos with pathological changes, achieve a pre-screening effect, and improve operational efficiency.
G06V 10/56 - Extraction of image or video features relating to colour
G06T 3/40 - Scaling of a whole image or part of an image
G06T 5/50 - Image enhancement or restoration using several images, e.g. averaging or subtraction
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for the detection, monitoring or modelling of epidemics or pandemics, for computer-aided diagnosis, e.g. based on medical expert systems
10.
SYSTEMS AND METHODS FOR ESTIMATING MONETARY LOSS TO AN ACCIDENT DAMAGED VEHICLE
Embodiments of the disclosure provide systems and methods for estimating an amount of monetary loss to an accident damaged vehicle. An exemplary system includes a communication interface configured to receive one or more accident images taken of an accident damaged vehicle. It also includes a database for storing loss data of one or more past vehicles, and each past vehicle is associated with a historical accident. It further includes a processor coupled to the communication interface and the database. The processor is configured to detect the accident damaged vehicle in the one or more accident images, identify one or more most similar past vehicles, and estimate the amount of monetary loss to the accident damaged vehicle based on the loss data of the one or more past vehicles.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/62 - Methods or arrangements for recognition using electronic means
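The retrieval-and-estimate step in the abstract above can be sketched as a k-nearest-neighbour average over the historical loss data. This is an assumption for illustration only: the patent does not specify the similarity measure, and all names and figures here are invented.

```python
def estimate_loss(query, past_vehicles, k=2):
    """Average the losses of the k past vehicles whose damage-feature
    vectors are closest (squared Euclidean distance) to the query's."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(past_vehicles, key=lambda pv: sq_dist(pv[0], query))[:k]
    return sum(loss for _, loss in nearest) / len(nearest)

# past_vehicles: (damage-feature vector, historical monetary loss)
loss = estimate_loss(
    [0.9, 0.1],
    [([1.0, 0.0], 4000.0), ([0.8, 0.2], 5000.0), ([0.0, 1.0], 900.0)],
)
```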
11.
SYSTEMS AND METHODS FOR DETERMINING FAULT FOR A VEHICLE ACCIDENT
Embodiments of the disclosure provide systems and methods for determining fault for a vehicle accident. An exemplary system includes a communication interface configured to receive a video signal from a camera. The video signal includes a sequence of image frames. The system further includes at least one processor coupled to the communication interface. The at least one processor detects one or more vehicles and one or more road identifiers in the image frames, transforms a perspective of each image frame from a camera view to a top view, determines a trajectory of each detected vehicle in the transformed image frames, identifies an accident based on the determined trajectory of each vehicle, and determines a type of the accident and a fault of each vehicle involved in the accident.
B60W 30/095 - Predicting the path or the probability of collision
B60W 30/12 - Lane keeping
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
12.
SYSTEM AND METHOD FOR REMOVING HAZE FROM REMOTE SENSING IMAGES
A system and a method for removing haze from remote sensing images are disclosed. One or more hazy input images with at least four spectral channels and one or more target images with the at least four spectral channels are generated. The one or more hazy input images correspond to the one or more target images, respectively. A dehazing deep learning model is trained using the one or more hazy input images and the one or more target images. The dehazing deep learning model is provided for haze removal processing.
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
13.
Method, Device, Electronic Equipment and Storage Medium for Positioning Macular Center in Fundus Images
The application relates to the technical field of artificial intelligence, and provides a method, device, electronic equipment and storage medium for positioning macular center in fundus images. The method comprises: acquiring a detection result of the fundus image detection model, wherein the detection result includes an optic disc area, and a first detection block and a first confidence score corresponding to the optic disc area, and a macular area, and a second detection block and a second confidence score corresponding to the macular area; calculating a center point coordinate of the optic disc area according to the first detection block, and calculating a center point coordinate of the macular area according to the second detection block; identifying whether the to-be-detected fundus image is a left eye fundus image or a right eye fundus image, and correcting a center point of the macular area using different correction models.
G06V 10/98 - Arrangements for image or video recognition or understanding; Evaluation of the quality of the acquired patterns
G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration and reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
G06V 10/776 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration and reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Performance evaluation
14.
METHOD, DEVICE, EQUIPMENT AND MEDIUM FOR DETERMINING CUSTOMER TABS BASED ON DEEP LEARNING
Disclosed are a method, device, equipment and medium for determining customer tabs based on deep learning. The method comprises the following steps: acquiring a conversation between a customer and a robot customer service, inputting the conversation content into a preset multi-factor intent classifier to obtain a recognition result of product purchase intention output by the preset multi-factor intent classifier, setting a customer tab for the customer according to the recognition result of product purchase intention, and determining whether to provide manual service for the customer; acquiring a result of the manual service and conversation data of the customer in the manual service if the manual service is provided for the customer; and updating the customer tab of the customer according to the result of the manual service, and updating the preset multi-factor intent classifier according to the conversation data of the customer.
A system and a method for detecting animals in a region of interest are disclosed. An image that captures a scene in the region of interest is received. The image is fed to an animal detection model to produce a group of probability maps for a group of key points and a group of affinity field maps for a group of key point sets. One or more connection graphs are determined based on the group of probability maps and the group of affinity field maps. Each connection graph outlines a presence of an animal in the image. One or more animals present in the region of interest are detected based on the one or more connection graphs.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/32 - Aligning or centring of the image sensor or the image area
G06K 9/62 - Methods or arrangements for recognition using electronic means
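Building the per-animal connection graphs described above typically reduces to pairing key-point candidates by their affinity-field scores. A greedy matching sketch follows; the names and the greedy rule are assumptions, and the affinity scores are stubbed as a dictionary.

```python
def match_parts(pair_scores, threshold=0.3):
    """Greedily pair part candidates (e.g. head/body) by descending
    affinity score, so each candidate is used at most once."""
    matches, used_a, used_b = [], set(), set()
    for (a, b), score in sorted(pair_scores.items(), key=lambda kv: -kv[1]):
        if score >= threshold and a not in used_a and b not in used_b:
            matches.append((a, b))
            used_a.add(a)
            used_b.add(b)
    return matches

matches = match_parts({("h0", "b0"): 0.9, ("h0", "b1"): 0.4,
                       ("h1", "b1"): 0.7, ("h1", "b0"): 0.1})
```

Each matched chain of parts outlines one animal, and the number of chains gives the animal count in the region of interest.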
The present disclosure provides a method, device, computer apparatus, and storage medium for training recognition model and recognizing fundus features. The method includes: obtaining a color fundus image sample associated with a label value, inputting the color fundus image sample into a preset recognition model containing initial parameters; extracting a red channel image; inputting the red channel image into the first convolutional neural network to obtain a first recognition result and a feature image of the red channel image; combining the color fundus image sample with the feature image to generate a combined image, and inputting the combined image into the second convolutional neural network to obtain a second recognition result; obtaining a total loss value through a loss function, and when the total loss value is less than or equal to a preset loss threshold, ending the training of the preset recognition model.
G06T 7/90 - Determination of colour characteristics
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
The application belongs to the field of big data, and particularly relates to an official document processing method, device, computer equipment and storage medium. The method includes the following steps of: performing format analysis on the to-be-reviewed official document, then acquiring the to-be-reviewed official document of standard file type, and identifying all file components and contents in the to-be-reviewed official document of standard file type; performing text format detection, text content detection and frame layout detection synchronously by a preset text processing model, obtaining a format detection result, a content detection result and a layout detection result; generating a detected error content according to the format detection result, content detection result and layout detection result, calling out a standard writing rule corresponding to the detected error content, marking the detected error content and the standard writing rule in the to-be-reviewed official document.
G06F 40/40 - Processing or translation of natural language
G06F 40/103 - Formatting, i.e. changing the appearance of documents
G06V 30/418 - Document matching, e.g. of document images
G06V 30/412 - Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
18.
System and method for super-resolution image processing in remote sensing
A system and a method for super-resolution image processing in remote sensing are disclosed. One or more sets of multi-temporal images with an input resolution and one or more first target images with a first output resolution are generated from one or more data sources. The first output resolution is higher than the input resolution. Each set of multi-temporal images is processed to improve an image match in the corresponding set of multi-temporal images. The one or more sets of multi-temporal images are associated with the one or more first target images to generate a training dataset. A deep learning model is trained using the training dataset. The deep learning model is provided for subsequent super-resolution image processing.
A system and a method for image-based crop identification are disclosed. The image-based crop identification system includes a database, a communication module and a model library. The database stores sample aerial data and annotated aerial data. The communication module is coupled to the database, and is configured to provide the sample aerial data to a user and receive the annotated aerial data from the user. The model library is coupled to the database, and is configured to obtain the annotated aerial data, train a crop classification model based on the annotated aerial data, and provide the trained crop classification model for subsequent crop identification. The annotated aerial data include determination of the type of the crop appearing in the sample aerial data.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06F 16/58 - Retrieval characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually
20.
Method, device, and storage medium for weakly-supervised universal lesion segmentation with regional level set loss
The present disclosure provides a computer-implemented method, a device, and a storage medium. The method includes inputting an image into an attention-enhanced high-resolution network (AHRNet) to extract feature maps for generating a first feature map; generating a first probability map which is concatenated with the first feature map to form a concatenated first feature map, and updating the AHRNet using the first segmentation loss; generating a second feature map, and scaling the second feature map to form a third feature map; generating a second probability map which is concatenated with the third feature map to form a concatenated third feature map, and updating the AHRNet using the second segmentation loss; generating a fourth feature map, and scaling the fourth feature map to form a fifth feature map; updating the AHRNet using the third segmentation loss and the regional level set loss; and outputting the third probability map.
G06T 3/40 - Scaling of a whole image or part of an image
G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
A method for cutting or extracting video clips from a video, including the audio content relevant to points of particular interest, and combining them for instruction or training on particular points. A computing device applying the method extracts text information from the spoken audio content of a video to be cut, and obtains multiple paragraph segmentation positions as candidates for inclusion in the desired finished presentation by analyzing the text representing the spoken audio content with a semantic segmentation model. Candidate text segments are obtained by splitting the text according to the paragraph segmentation positions. Time stamps of the candidate text segments are acquired, and candidate video clips are obtained by cutting the video according to the acquired time stamps.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/46 - Extraction of features or characteristics of the image
G10L 15/26 - Speech-to-text systems
G10L 21/10 - Transforming into visible information
G06K 9/72 - Methods or arrangements for recognition using electronic means, using context analysis based on the provisional identity assigned to a series of successive patterns, e.g. of a word
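Turning paragraph segmentation positions and time stamps into candidate clips, as the abstract above describes, can be sketched as follows. The names are illustrative, and the per-sentence time stamps are assumed to come from the speech-to-text transcript.

```python
def clips_from_boundaries(sentence_starts, boundaries, total_duration):
    """Convert paragraph boundaries (indices of the sentences that open
    each new paragraph) into (start, end) cut times for the video."""
    starts = [0.0] + [sentence_starts[i] for i in boundaries]
    ends = starts[1:] + [total_duration]
    return list(zip(starts, ends))

clips = clips_from_boundaries(
    sentence_starts=[0.0, 8.5, 21.0, 37.2],  # time stamp of each sentence
    boundaries=[2, 3],       # paragraphs start at sentences 2 and 3
    total_duration=60.0,
)
```

Each (start, end) pair is a candidate clip that can be cut from the source video.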
The present disclosure provides a method, a device, and a storage medium for prior-guided dual-path network (PDNet). The method includes inputting an image into a split-attention network to extract a feature map at each scale and compressing the feature map to form a compressed feature map of each scale, by an image encoder, inputting the compressed feature map and a three-channel image into a prior encoder to generate an attention enhanced feature map of each scale, and outputting the attention enhanced feature map to a decoder; concatenating, by the decoder, an attention enhanced feature map at a current scale, in combination with up-sampled feature maps and/or down-sampled feature maps from other scales, to form a concatenated feature map of the current scale; and attaching a deconvolutional layer to a highest-level scale SA to segment a lesion and predict a RECIST diameter based on concatenated feature maps.
An intelligent text cleaning method includes: acquiring a text set, and preprocessing the text set to obtain a word vector text set; subjecting the word vector text set to a full-text matrix numeralization to generate a principal word vector matrix and a text word vector matrix; inputting the principal word vector matrix to a BiLSTM model to generate an intermediate text vector; inputting the text word vector matrix to a convolution neural network model to generate a target text vector; and concatenating the intermediate text vector and the target text vector to obtain combined text vectors, inputting the combined text vectors to a pre-constructed semantic recognition classifier model, outputting an aggregated text vector, subjecting the aggregated text vector to reverse recovery using a word2vec reverse algorithm, and outputting a standard text. The present application realizes accurate text cleaning.
G06F 17/00 - ELECTRIC DIGITAL DATA PROCESSING; Digital computing or data processing equipment or methods, specially adapted for specific functions
Chang Gung Memorial Hospital, Linkou (Taiwan, Province of China)
Inventor(s)
Wang, Fakai
Kuo, Chang-Fu
Zheng, Kang
Miao, Shun
Wang, Yirui
Lu, Le
Abstract
A method of opportunistic screening of osteoporosis includes obtaining a plain film chest X-ray (CXR); extracting regions of interest (ROIs) from the plain film CXR; and providing individual bone mineral density (BMD) scores corresponding to the extracted ROIs and a joint BMD corresponding to the plain film CXR based on a multi-ROI model by performing: inputting the extracted ROIs into a backbone network to generate individual feature vectors, each individual feature vector corresponding to one of the extracted ROIs; concatenating the individual feature vectors into a joint feature vector; individually decoding the individual feature vectors by a shared fully connected (FC) layer to generate the individual BMDs, each individual BMD corresponding to one of the individual feature vectors; and decoding the joint feature vector by a separate FC layer to generate the joint BMD.
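The decoding structure described above, one shared FC layer applied to each ROI feature plus a separate FC layer for the concatenated joint vector, can be sketched with plain dot products. The weights and dimensions are toy values for illustration; no training is shown.

```python
def dot(w, v):
    """Inner product of a weight vector and a feature vector."""
    return sum(wi * vi for wi, vi in zip(w, v))

def multi_roi_bmd(roi_features, shared_fc, joint_fc):
    """Decode each ROI feature with one shared linear layer (individual
    BMDs), and the concatenated joint feature with a separate layer."""
    individual = [dot(shared_fc, f) for f in roi_features]
    joint_feature = [x for f in roi_features for x in f]  # concatenation
    return individual, dot(joint_fc, joint_feature)

individual, joint = multi_roi_bmd(
    roi_features=[[1.0, 2.0], [3.0, 4.0]],
    shared_fc=[0.5, 0.5],
    joint_fc=[0.25, 0.25, 0.25, 0.25],
)
```

Sharing one decoder across ROIs keeps the per-ROI scores comparable, while the separate joint decoder can weigh all ROIs together.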
A method for estimating bone mineral density (BMD) includes obtaining an image and cropping one or more regions-of-interest (ROIs) in the image, taking the one or more ROIs as input to a network model for estimating BMDs, training the network model on the labeled one or more ROIs with one or more loss functions to obtain a pre-trained model in a supervised pre-training stage, and fine-tuning the pre-trained model on a first plurality of data representing the labeled one or more ROIs and a second plurality of data representing unlabeled region to determine a fine-tuned network model for estimating BMDs in a semi-supervised self-training stage. The one or more loss functions includes a specific adaptive triplet loss (ATL) configured to encourage distances between one or more feature embedding vectors correlated to differences among the BMDs.
The present disclosure relates to the field of artificial intelligence using a neural network, and discloses a method for confirming a cup-disc ratio based on a neural network, an apparatus, a computer device, and a computer-readable storage medium. The method includes: obtaining a retinal image, and detecting an optic disc region in the retinal image to obtain the optic disc region; inputting the optic disc region into a pre-trained neural network to obtain a predicted cup-disc ratio and segmented images of the optic cup and the optic disc; computing a cup-disc ratio based on the segmented images of the optic cup and the optic disc; and confirming a practical cup-disc ratio based on the predicted cup-disc ratio and the computed cup-disc ratio.
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
A61B 3/00 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes
A61B 3/12 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes, of the objective-measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for examining the fundus of the eye, e.g. ophthalmoscopes
A61B 3/14 - Arrangements specially adapted for eye photography
A61B 3/10 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes, of the objective-measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions
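The confirmation step in the cup-disc abstract above can be sketched as follows, assuming, purely for illustration, that the predicted and computed ratios are fused by simple averaging; the patent does not state the fusion rule, and all names are invented.

```python
def confirm_cup_disc_ratio(predicted_ratio, cup_diameter, disc_diameter):
    """Compute a cup-disc ratio from the segmentation diameters and fuse
    it with the network's predicted ratio (averaging is an assumption)."""
    computed_ratio = cup_diameter / disc_diameter
    return (predicted_ratio + computed_ratio) / 2

ratio = confirm_cup_disc_ratio(0.42, cup_diameter=3.0, disc_diameter=7.5)
```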
27.
PDAC IMAGE SEGMENTATION METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
A Pancreatic Ductal Adenocarcinoma (PDAC) image segmentation method, an electronic device, and a storage medium are provided. In the PDAC image segmentation method, a first model is trained using a first data set; and a second model is trained using a second data set. A third data set is obtained by annotating a to-be-annotated data set using the first model and the second model and a third model is trained using a fourth data set. A training set is obtained by modifying the first data set and the third data set using the third model and a segmentation model is obtained by training an nnUNet using the training set. A to-be-segmented PDAC image is input into the segmentation model, and a segmentation result is obtained. By utilizing the PDAC image segmentation method, a more accurate PDAC image segmentation is achieved.
A liver fibrosis recognition method based on medical images, and a computing device applying the method, obtain a plurality of first binary images by segmenting a region of interest in each of a plurality of medical images of a liver. A rectangular region is created for each first binary image, and a plurality of second binary images is obtained by generating a second binary image from each rectangular region and the corresponding first binary image. A feature map is obtained from each liver medical image, and final images corresponding to the plurality of feature maps are generated from the second binary images. A recognition model is iteratively trained on the plurality of final images, and recognition of liver fibrosis in patients is then achievable using the model.
Disclosed are a method and apparatus for selecting answers to idiom fill-in-the-blank questions, a computer device, and a storage medium. The method includes: obtaining a question text of idiom fill-in-the-blank questions, the question text including a fill-in-the-blank text and n candidate idioms, and the fill-in-the-blank text including m fill-in-the-blanks to be filled in with the candidate idioms; obtaining an explanatory text of all the candidate idioms; obtaining, through an idiom selection fill-in-the-blank model, a confidence that each fill-in-the-blank is filled in with each candidate idiom; selecting m idioms from the n candidate idioms to form multiple groups of answers; calculating a sum of the confidences that the fill-in-the-blanks are filled in with the candidate idioms in each group of answers; and obtaining a group of answers with the highest confidence sum as answers to the idiom fill-in-the-blank questions. The present application implements answers to idiom fill-in-the-blank questions with high accuracy.
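As an illustration of the answer-selection step in the abstract above, the following sketch enumerates assignments of m distinct candidate idioms to m blanks and keeps the group with the highest confidence sum. All names are illustrative and not from the patent; the model that produces the confidence matrix is assumed and omitted.

```python
from itertools import permutations

def select_answers(confidence, m):
    """Pick m distinct idioms (one per blank) maximizing the confidence sum.

    confidence[i][j] is the confidence that blank i is filled with candidate
    idiom j. Returns (assignment, score); assignment[i] is the idiom index
    chosen for blank i.
    """
    n = len(confidence[0])
    best, best_score = None, float("-inf")
    # Enumerate every ordered choice of m distinct idioms out of n candidates.
    for combo in permutations(range(n), m):
        score = sum(confidence[i][j] for i, j in enumerate(combo))
        if score > best_score:
            best, best_score = combo, score
    return best, best_score

# Two blanks, three candidate idioms: blank 0 prefers idiom 0, blank 1 prefers idiom 2.
conf = [[0.9, 0.2, 0.1],
        [0.3, 0.1, 0.8]]
assignment, score = select_answers(conf, 2)
```

Exhaustive enumeration is exponential in m, which is acceptable for the small m typical of fill-in-the-blank questions; a real system might prune or use assignment algorithms instead.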
A voiceprint recognition method includes: obtaining a target speech information set to be recognized that includes speech information corresponding to at least one object; extracting target feature information from the target speech information set by using a preset algorithm, and optimizing the target feature information based on a first loss function to obtain a first voiceprint recognition result; obtaining target speech channel information of a target speech channel, where the target speech channel information includes channel noise information, and the target speech channel is used to transmit the target speech information set; extracting target feature vectors in the channel noise information, and optimizing the target feature vectors based on a second loss function to obtain a second voiceprint recognition result; and fusing the first voiceprint recognition result and the second voiceprint recognition result to determine a final voiceprint recognition result.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/10 - Multimodal systems, i.e. based on the integration of multiple recognition engines or fusion of expert systems
An incidence rate monitoring method, apparatus, and device based on historical disease information, and a computer-readable storage medium are provided. The incidence rate monitoring method includes: forming a prediction model for incidence rate monitoring through continuous and autonomous learning of historical medical record data, based on a combination of a preset gated recurrent neural network and an ensemble learning algorithm, and then inputting disease data of the to-be-predicted disease into the prediction model for prediction and monitoring. The prediction model is formed by capturing patterns in the historical medical record data through the combination of the above-mentioned neural network and algorithm.
A method for voiceprint recognition of an original speech is used to reduce information losses and system complexity of a model for data recognition of a speaker's original speech. The method includes: obtaining original speech data, and segmenting the original speech data based on a preset time length to obtain segmented speech data; performing tail-biting convolution processing and discrete Fourier transform on the segmented speech data through a preset convolution filter bank to obtain voiceprint feature data; pooling the voiceprint feature data through a preset deep neural network to obtain a target voiceprint feature; performing embedded vector transformation on the target voiceprint feature to obtain corresponding voiceprint feature vectors; and performing calculation on the voiceprint feature vectors through a preset loss function to obtain target voiceprint data, where the loss function includes a cosine similarity matrix loss function and a minimum mean square error matrix loss function.
G10L 17/06 - Decision making techniques; Pattern matching strategies
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the type of extracted parameters, the extracted parameters being power information
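The first step of the voiceprint method above, segmenting raw speech by a preset time length, can be sketched as follows. This is a simplified illustration with hypothetical names; the convolution filter bank, pooling, and loss stages of the abstract are not reproduced.

```python
def segment_speech(samples, sample_rate, seg_seconds):
    """Split raw speech samples into fixed-length segments based on a preset
    time length, dropping a trailing remainder shorter than one segment."""
    seg_len = int(sample_rate * seg_seconds)
    return [samples[i:i + seg_len]
            for i in range(0, len(samples) - seg_len + 1, seg_len)]

# 10 samples at a (toy) rate of 2 samples/s, cut into 2-second segments:
segments = segment_speech(list(range(10)), sample_rate=2, seg_seconds=2)
```

Dropping the short tail is one design choice; zero-padding the final segment would be an equally plausible reading of the abstract.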
33.
IMAGE ENHANCEMENT PROCESSING METHOD, DEVICE, EQUIPMENT, AND MEDIUM BASED ON ARTIFICIAL INTELLIGENCE
An image enhancement processing method includes: acquiring an initial image, preprocessing the initial image, and acquiring an original feature image containing a target feature; performing an edge detection on the original feature image using an edge detection algorithm to obtain an original gradient image, obtaining a statistics ring based on the original feature image, and performing an iterative process on the statistics ring; obtaining a to-be-processed image based on an inner diameter of the statistics ring, and determining to-be-processed parameters of the to-be-processed image; acquiring a standard image corresponding to the target feature, determining a standard area corresponding to the standard image, and acquiring standard image parameters corresponding to the standard area; performing a migration process on the to-be-processed image to obtain a migration image; and performing a restricted contrast adaptive histogram equalization process on the migration image to obtain a target enhanced image.
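For orientation, the final equalization step can be illustrated with plain global histogram equalization on a flat list of pixel values. Note that the abstract specifies the contrast-limited *adaptive* variant (CLAHE), which additionally clips the histogram and operates on image tiles; this sketch shows only the underlying CDF-remapping idea.

```python
def equalize(gray, levels=256):
    """Global histogram equalization: remap each pixel through the
    cumulative distribution of intensities, stretching contrast."""
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(gray)
    cdf_min = next(c for c in cdf if c > 0)  # smallest non-zero CDF value
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
            for v in gray]
```

A constant image (n == cdf_min) would need a guard in production code; CLAHE avoids the over-amplification of noise that this global version can exhibit.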
A preoperative survival prediction method and a computing device applying the method include constructing a data set according to a plurality of enhanced medical images and a resection margin of each enhanced medical image, and obtaining a plurality of training data sets from the constructed data set. For each training data set, multi-task prediction models are trained. A target multi-task prediction model is selected from the plurality of trained models, and a resection margin prediction value and a survival risk prediction value are obtained by predicting an enhanced medical image to be measured through the target multi-task prediction model. The multi-task prediction model more effectively captures the changes over time of the tumor in multiple stages, so as to enable a joint prediction of a resection margin prediction value and a survival risk prediction value.
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
35.
METHOD FOR SELECTING IMAGE SAMPLES AND RELATED EQUIPMENT
The present disclosure relates to the technology field of artificial intelligence and provides a method for selecting image samples and related equipment. The method trains an instance segmentation model with first image samples and trains a score prediction model with third image samples. An information quantum score of second image samples is calculated through the score prediction model, and feature vectors are extracted. The second image samples are clustered according to their feature vectors, and sample clusters of the second image samples are obtained. Target image samples are selected from the second image samples according to the information quantum score of the second image samples and the sample clusters. Target image samples selected from the image samples for labelling improve the accuracy of sample selection.
G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; bootstrap methods, e.g. bagging or boosting
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces on social networks
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 20/70 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements; Labelling scene content, e.g. deriving syntactic or semantic representations
A knowledge distillation method for fracture detection includes obtaining medical images in chest X-rays, including region-level labeled images, image-level diagnostic positive images, and image-level diagnostic negative images; performing a supervised pre-training process on the region-level labeled images and the image-level diagnostic negative images to train a neural network to generate pre-trained weights; and performing a semi-supervised training process on the image-level diagnostic positive images using the pre-trained weights. A teacher model is employed to produce pseudo ground-truths (GTs) on the image-level diagnostic positive images for supervising training of a student model, and the pseudo GTs are processed by an adaptive asymmetric label sharpening (AALS) operator to produce sharpened pseudo GTs to provide positive detection responses on the image-level diagnostic positive images.
A method for detecting groceries in a corridor is provided. The method includes: obtaining an image collected from a corridor, and performing pedestrian detection and grocery detection on the image collected from the corridor to obtain a pedestrian detection result and a grocery detection result; performing, if there is a pedestrian image in the pedestrian detection result, an image processing on the image collected from the corridor according to the pedestrian image; comparing the image that is collected from the corridor and has been processed with a preset corridor image to obtain an image similarity; generating, if there is a grocery image in the grocery detection result, or if the image similarity is less than or equal to a similarity threshold, a grocery cleaning instruction according to an identifier of the corridor in the image collected from the corridor; and sending a grocery cleaning prompt according to the grocery cleaning instruction.
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06T 7/70 - Determining position or orientation of objects or cameras
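The trigger condition in the grocery-detection abstract above reduces to a simple decision rule, sketched here with illustrative names (the detection and image-processing stages are assumed to have already run):

```python
def needs_cleaning(grocery_detected, image_similarity, similarity_threshold):
    """Issue a grocery cleaning instruction when a grocery image is present
    in the detection result, or when the processed corridor image is too
    dissimilar from the preset reference image."""
    return grocery_detected or image_similarity <= similarity_threshold
```

The similarity comparison acts as a fallback: even when the grocery detector misses an object, a corridor image that deviates from the reference still triggers cleaning.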
38.
Method, device, and storage medium for pancreatic mass segmentation, diagnosis, and quantitative patient management
A method for pancreatic mass diagnosis and patient management includes: receiving CT images of a pancreas of a patient, the pancreas of the patient including a mass; performing a segmentation process on the CT images of the pancreas and the mass to obtain a segmentation mask of the pancreas and the mass of the patient; performing a mask-to-mesh process on the segmentation mask of the pancreas and the mass of the patient to obtain a mesh model of the pancreas and the mass of the patient; performing a classification process on the mesh model of the pancreas and the mass of the patient to identify a type and a grade of a segmented pancreatic mass; and outputting updated CT images of the pancreas of the patient, the updated CT images including the segmented pancreatic mass highlighted thereon and the type and the grade of the segmented pancreatic mass annotated thereon.
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G16H 20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, steering therapy or monitoring patient compliance
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
39.
Method, device, and computer program product for self-supervised learning of pixel-wise anatomical embeddings in medical images
The present disclosure provides a method, a device, and a computer program product using a self-supervised anatomical embedding (SAM) method. The method includes randomly selecting a plurality of images; for each image of the plurality of images, performing random data augmentation to obtain a patch pair, generating global and local embedding tensors for each patch of the patch pair, and selecting positive pixel pairs from the patch pair and obtaining positive embedding pairs; for each positive pixel pair, computing global and local similarity maps, finding global hard negative embeddings, selecting global random negative embeddings, pooling the global hard negative embeddings and the global random negative embeddings to obtain final global negative embeddings, and finding local hard negative embeddings using the global and local similarity maps, and randomly sampling final local negative embeddings from the local hard negative embeddings; and minimizing a final info noise contrastive estimation (InfoNCE) loss.
G06V 30/262 - Post-processing techniques, e.g. correcting the recognition result, using context analysis, e.g. lexical, syntactic or semantic context
G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
40.
Method, device, and computer program product for deep lesion tracker for monitoring lesions in four-dimensional longitudinal imaging
The present disclosure provides a computer-implemented method, a device, and a computer program product for a deep lesion tracker. The method includes inputting a search image into a first three-dimensional DenseFPN (feature pyramid network) of an image encoder and inputting a template image into a second three-dimensional DenseFPN of the image encoder to extract image features; encoding anatomy signals of the search image and the template image as Gaussian heatmaps, and inputting the Gaussian heatmap of the template image into a first anatomy signal encoder (ASE) and inputting the Gaussian heatmap of the search image into a second ASE to extract anatomy features; inputting the image features and the anatomy features into a fast cross-correlation layer to generate correspondence maps, and computing a probability map according to the correspondence maps; and performing supervised learning or self-supervised learning to predict a lesion center in the search image.
G06T 3/00 - Geometric image transformations in the plane of the image
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06V 10/50 - Extraction of image or video features using the summing of image-intensity values; Projection analysis
G06V 10/44 - Local feature extraction by analysing parts of the pattern, e.g. by detecting edges, contours, loops, corners, bars or intersections; Connectivity analysis, e.g. of connected components
41.
Method and device for vertebra localization and identification
A vertebra localization and identification method includes: receiving one or more images of vertebrae of a spine; applying a machine learning model on the one or more images to generate three-dimensional (3-D) vertebra activation maps of detected vertebra centers; performing a spine rectification process on the 3-D vertebra activation maps to convert each 3-D vertebra activation map into a corresponding one-dimensional (1-D) vertebra activation signal; performing an anatomically-constrained optimization process on each 1-D vertebra activation signal to localize and identify each vertebra center in the one or more images; and outputting the one or more images, wherein on each of the one or more outputted images, a location and an identification of each vertebra center are specified.
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
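The conversion of a 3-D vertebra activation map into a 1-D signal, as described in the vertebra localization abstract above, can be sketched by max-pooling each cross-sectional slice along the spine axis. This is a deliberate simplification: the patented pipeline first performs spine rectification so that a curved spine projects meaningfully onto a straight axis, a step omitted here.

```python
def activation_to_1d(activation_3d):
    """Collapse a 3-D activation map (indexed [z][y][x], z along the spine
    axis) into a 1-D activation signal: one max-pooled value per z-slice."""
    return [max(max(row) for row in z_slice) for z_slice in activation_3d]

# Toy 2x2x2 volume: the strongest response in slice 0 is 0.3, in slice 1 it is 0.9.
vol = [[[0.1, 0.2], [0.3, 0.0]],
       [[0.9, 0.1], [0.2, 0.2]]]
signal = activation_to_1d(vol)
```

Peaks in the resulting 1-D signal then correspond to candidate vertebra centers, which the anatomically-constrained optimization refines and labels.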
42.
DEVICE AND METHOD FOR GLAUCOMA AUXILIARY DIAGNOSIS, AND STORAGE MEDIUM
A device and method for glaucoma auxiliary diagnosis, and a non-transitory storage medium are provided. The device includes an obtaining unit and a processing unit. The obtaining unit is configured to obtain a color fundus image of a patient. The processing unit is configured to perform feature extraction on the color fundus image to obtain a first feature map. The processing unit is further configured to perform image segmentation on the color fundus image according to the first feature map to obtain an optic disc image in the color fundus image, where the optic disc image corresponds to an optic disc area in the color fundus image. The processing unit is further configured to perform feature extraction on the optic disc image and the color fundus image according to the first feature map to obtain a probability that the patient has glaucoma.
This application discloses a knowledge graph-based case retrieval method, device and equipment, and a storage medium. The method includes: constructing a legal case knowledge graph based on text information; performing random-walk sampling on node set data constructed based on the legal case knowledge graph, so as to obtain a plurality of pieces of sequence data; training a model by using a word2vec algorithm based on the plurality of pieces of sequence data, so as to obtain an updated target model; obtaining target text information, and analyzing the target text information by using the target model, so as to construct a to-be-retrieved knowledge graph; retrieving the legal case knowledge graph based on the to-be-retrieved knowledge graph, so as to obtain case information associated with the to-be-retrieved knowledge graph; and obtaining outputted case information based on a first similarity and a second similarity of the case information.
A method, device, equipment for user grouping, and a non-transitory computer-readable storage medium are provided, which are applicable to the field of medical technology. The method includes the following. Net benefits of multiple users in a target project are obtained. According to the net benefits of the multiple users in the target project and a solution of the target project, a net-benefit coefficient corresponding to the solution is determined. For each grouping variable of the target project, a fluctuation value corresponding to the grouping variable is determined according to the net-benefit coefficient. According to a grouping variable with the largest fluctuation value, the multiple users are divided into multiple user groups. For each user group obtained by division, users in the user group are divided according to a fluctuation value corresponding to each grouping variable of the target project, until a user group meeting a preset condition is obtained.
G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. for analysing previous cases of other patients
G06F 16/28 - Databases characterised by their models, e.g. relational or object models
G16H 10/20 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
G06Q 10/06 - Resources, workflow, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
45.
Data detection method and device, computer equipment and storage medium
Disclosed are a data detection method and device, a computer equipment, and a storage medium. The method includes: obtaining a designated identification picture including a human face; correcting the designated identification picture to be placed in a preset standard posture to obtain an intermediate picture; inputting the intermediate picture into a preset face feature point detection model to obtain multiple face feature points; calculating a cluster center position of the face feature points, and generating a minimum bounding rectangle of the face feature points; retrieving a standard identification picture from a preset database; scaling the standard identification picture in proportion to obtain a scaled picture; overlapping a reference center position in the scaled picture and a cluster center position in the intermediate picture, so as to obtain an overlapping part in the intermediate picture; and marking the overlapping part as an identification body of the designated identification picture.
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces on social networks
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
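Two of the geometric steps in the data detection abstract above, computing a cluster center of the face feature points and their minimum bounding rectangle, can be sketched as follows (names are illustrative; the feature-point detection model itself is assumed):

```python
def center_and_bbox(points):
    """Centroid and axis-aligned minimum bounding rectangle of a list of
    2-D feature points given as (x, y) tuples."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    center = (sum(xs) / len(xs), sum(ys) / len(ys))
    bbox = (min(xs), min(ys), max(xs), max(ys))  # (left, top, right, bottom)
    return center, bbox

# Four corner landmarks of a 2x2 square:
center, bbox = center_and_bbox([(0, 0), (2, 0), (2, 2), (0, 2)])
```

The centroid here is the mean of all points; the abstract's "cluster center position" may instead involve clustering outlier-rejected landmarks, which this sketch does not attempt.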
46.
METHOD AND DEVICE FOR NEURAL NETWORK-BASED OPTICAL COHERENCE TOMOGRAPHY (OCT) IMAGE LESION DETECTION, AND MEDIUM
A method and device for neural network-based optical coherence tomography (OCT) image lesion detection, and a medium are provided. The method includes the following. An OCT image is obtained. The OCT image is inputted into a lesion-detection network model. A position, a category score, and a positive score of each lesion box in the OCT image are outputted through the lesion-detection network model. A lesion detection result of the OCT image is obtained according to the position, the category score, and the positive score of each lesion box. The lesion-detection network model includes a category detection branch configured to obtain, for each of the anchor boxes, a position and a category score of the anchor box, and a lesion positive score regression branch configured to obtain, for each of the anchor boxes, a positive score of whether the anchor box belongs to a lesion, to reflect the severity of the lesion.
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
47.
METHOD FOR DRUG CLASSIFICATION, TERMINAL DEVICE, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A method for drug classification, a terminal device, and a non-transitory computer-readable storage medium are provided. An attribute feature vector of each of n atoms in a drug molecule to be detected and an attribute feature vector of a virtual atom are obtained. An adjacency matrix is constructed according to a connection relationship between the virtual atom and each of the n atoms and between the n atoms. An atom attribute feature matrix is constructed according to the attribute feature vector of each atom. The adjacency matrix and the atom attribute feature matrix are inputted into a graph neural network to determine a transfer feature matrix of the n atoms and the virtual atom. A molecular feature vector corresponding to the drug molecule to be detected is determined according to the transfer feature matrix. The molecular feature vector is inputted into a classifier to output a drug category.
G16C 20/20 - Identification of molecular entities, parts thereof or of chemical compositions
G16H 70/40 - ICT specially adapted for the handling or processing of medical references relating to drugs, e.g. their side effects or intended usage
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
A method for model deployment, a terminal device, and a non-transitory computer-readable storage medium are provided. The method includes the following. A to-be-deployed model and an input/output description file of the to-be-deployed model are obtained. Output verification is performed on the to-be-deployed model based on the input/output description file. If the output verification of the to-be-deployed model passes, an inference service resource is determined from multiple running environments and the inference service resource is allocated to the to-be-deployed model. An inference parameter value of executing an inference service by the to-be-deployed model based on the inference service resource is determined. A resource configuration file and an inference service interface of the to-be-deployed model are generated according to the inference service resource, if the inference parameter value is greater than or equal to a preset inference parameter threshold.
A method, applied to an apparatus for mammographic multi-view mass identification, includes receiving a main image, a first auxiliary image, and a second auxiliary image. The main image and the first auxiliary image are images of a breast of a person, and the second auxiliary image is an image of the other breast of the person. The method further includes detecting a nipple location based on the main image and the first auxiliary image; generating a first probability map of the main image based on the main image, the first auxiliary image, and the nipple location; generating a second probability map of the main image based on the main image, the second auxiliary image, and the nipple location; and generating and outputting a fused probability map based on the first probability map and the second probability map.
G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
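The final fusion step of the multi-view mass identification abstract above can be illustrated element-wise. The abstract does not specify the fusion operator; a geometric mean is assumed here as one plausible choice, since it suppresses responses on which the two views disagree.

```python
import math

def fuse_probability_maps(p1, p2):
    """Element-wise fusion of two probability maps (nested lists of equal
    shape) via the geometric mean; this operator is an assumption, not
    taken from the patent."""
    return [[math.sqrt(a * b) for a, b in zip(row1, row2)]
            for row1, row2 in zip(p1, p2)]
```

An arithmetic mean, a max, or a learned fusion layer would be equally consistent with the abstract's wording.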
50.
Method and device for image generation and colorization
A method for image generation and colorization includes displaying a drawing board interface; obtaining semantic labels of an image to be generated based on user input on the drawing board interface, each semantic label indicating a content of a region in the image to be generated; obtaining a color feature of the image to be generated; and automatically generating the image using a generative adversarial network (GAN) model according to the semantic labels and the color feature. The color feature is a latent vector input to the GAN model.
G06V 10/56 - Extraction of image or video features relating to colour
G06V 30/262 - Post-processing techniques, e.g. correcting the recognition result, using context analysis, e.g. lexical, syntactic or semantic context
51.
User-guided domain adaptation for rapid annotation from user interactions for pathological organ segmentation
The present disclosure provides a computer-implemented method, a device, and a computer program product using a user-guided domain adaptation (UGDA) architecture. The method includes training a combined model using a source image dataset by minimizing a supervised loss of the combined model to obtain first sharing weights for a first FCN and second sharing weights for a second FCN; training a discriminator by inputting extreme-point/mask prediction pairs for each of the source image dataset and a target image dataset and by minimizing a discriminator loss to obtain discriminator weights; and finetuning the combined model by predicting extreme-point/mask prediction pairs for the target image dataset to fool the discriminator by matching a distribution of the extreme-point/mask prediction pairs for the target image dataset with a distribution of the extreme-point/mask prediction pairs for the source image dataset.
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
A method and device for image generation are provided. The method includes: obtaining a text describing a content of an image to be generated; extracting, using a text encoder, a text feature vector from the text; determining a semantic mask as spatial constraints of the image to be generated; and automatically generating the image using a generative adversarial network (GAN) model according to the semantic mask and the text feature vector.
The present disclosure provides a method and system for responding to a video call service, including: receiving a video call service request by a video call device; calling a video call connection process to establish a video call data transmission link with a call peer based on a communication address; if a file transmission request sent by the call peer is received, locally acquiring a target file as indicated by the file transmission request, and determining a link number of the file transmission link for transmitting the target file according to the communication address and a file type of the target file; uploading the target file to a file push server through a file uplink if the link number is not included in a local link list; and transmitting the target file to the call peer through the file transmission link corresponding to the link number.
H04L 65/402 - Prise en charge des services ou des applications dans laquelle les services impliquent une session principale en temps réel et une ou plusieurs sessions parallèles additionnelles non-temps-réel, p.ex. le téléchargement d’un fichier lors d’une session FTP parallèle, l’introduction d’un courriel ou d
H04L 67/06 - Protocoles spécialement adaptés au transfert de fichiers, p.ex. protocole de transfert de fichier [FTP]
An image processing method for performing image alignment includes: acquiring a moving image generated by a first imaging modality; acquiring a fixed image generated by a second imaging modality; jointly optimizing a generator model, a register model, and a segmentor model applied to the moving image and the fixed image according to a plurality of cost functions; and applying a spatial transformation corresponding to the optimized register model to the moving image to align the moving image to the fixed image; wherein: the generator model generates a synthesized image from the moving image conditioned on the fixed image; the register model estimates the spatial transformation to align the synthesized image to the fixed image; and the segmentor model estimates segmentation maps of the moving image, the fixed image, and the synthesized image.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
An exclusive agent pool allocation method includes collecting business data of agents; grouping the agents according to their business data to form multiple exclusive agent pools; calculating business skill values of the agents according to their business data and classifying the agents' priorities; classifying priorities of the agent pools according to the business data of the exclusive agent pools; and allocating a calling user to the corresponding agent in an exclusive agent pool according to a predetermined allocation strategy. The method matches users and agents by region and business level, allocates agent resources according to business-skill priority, achieves a close match between an agent's business skills and the business handled by the user, improves the pertinence and effectiveness of agent service, and increases user satisfaction.
H04M 3/523 - Dispositions centralisées de réponse aux appels demandant l'intervention d'un opérateur avec répartition ou mise en file d'attente des appels
G06Q 10/06 - Ressources, gestion de tâches, des ressources humaines ou de projets; Planification d’entreprise ou d’organisation; Modélisation d’entreprise ou d’organisation
H04M 3/51 - Dispositions centralisées de réponse aux appels demandant l'intervention d'un opérateur
56.
CLAIM SETTLEMENT ANTI-FRAUD METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM BASED ON GRAPH COMPUTATION TECHNOLOGY
A claim settlement anti-fraud method, an apparatus, a computer device, and a storage medium are provided. The method includes generating a sub-graph of doctor and patient, a sub-graph of doctor and medical advice, and a fused large graph according to medical data. A patient relationship network with several community closed loops is generated by mapping the sub-graph of doctor and patient according to the fused large graph. A similarity between any two vertexes in the patient relationship network is computed, and an average similarity of each community closed loop is computed. Insurance fraud actions are confirmed based on the average similarity.
G06Q 20/40 - Autorisation, p.ex. identification du payeur ou du bénéficiaire, vérification des références du client ou du magasin; Examen et approbation des payeurs, p.ex. contrôle des lignes de crédit ou des listes négatives
G16H 10/60 - TIC spécialement adaptées au maniement ou au traitement des données médicales ou de soins de santé relatives aux patients pour des données spécifiques de patients, p.ex. pour des dossiers électroniques de patients
G16H 40/20 - TIC spécialement adaptées à la gestion ou à l’administration de ressources ou d’établissements de santé; TIC spécialement adaptées à la gestion ou au fonctionnement d’équipement ou de dispositifs médicaux pour la gestion ou l’administration de ressources ou d’établissements de soins de santé, p.ex. pour la gestion du personnel hospitalier ou de salles d’opération
G16H 50/70 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicales; TIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour extraire des données médicales, p.ex. pour analyser les cas antérieurs d’autres patients
G16H 50/20 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicales; TIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour le diagnostic assisté par ordinateur, p.ex. basé sur des systèmes experts médicaux
G06F 16/901 - Indexation; Structures de données à cet effet; Structures de stockage
The present application relates to a system language switching method, a computer readable storage medium, a terminal device, and a device. The method includes first obtaining a preset image for setting the system language of a target terminal, then extracting text information from the image and determining a target language corresponding to the text information, and finally switching the system language of the target terminal to the target language. Through the present application, the user only needs to prepare in advance an image for setting the system language of the target terminal, for example, a piece of paper with Chinese text written on it, and the system can obtain the text information from the image through image acquisition, text information extraction, and the like, determine that the text information is Chinese, and finally switch the system language of the target terminal to Chinese.
G06F 16/583 - Recherche caractérisée par l’utilisation de métadonnées, p.ex. de métadonnées ne provenant pas du contenu ou de métadonnées générées manuellement utilisant des métadonnées provenant automatiquement du contenu
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06V 20/62 - Texte, p.ex. plaques d’immatriculation, textes superposés ou légendes des images de télévision
58.
Method for synthesizing image based on conditional generative adversarial network and related device
A method includes: obtaining a plurality of clinical red blood cell images, dividing red blood cells of different shapes at different positions in each of the red blood cell images into a plurality of submasks, and synthesizing the submasks corresponding to each of the red blood cell images to generate one mask to obtain a plurality of masks corresponding to the red blood cell images; collecting shape data of a plurality of red blood cells from the masks to obtain a training data set, calculating a segmentation boundary of each red blood cell in the training data set, and establishing a red blood cell shape data set based on the segmentation boundary of each red blood cell; collecting distribution data of each red blood cell in the red blood cell shape data set; and synthesizing the red blood cell shape data set into a plurality of red blood cell images.
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/46 - Extraction d'éléments ou de caractéristiques de l'image
G06T 7/194 - Découpage; Détection de bords impliquant une segmentation premier plan-arrière-plan
G16H 30/40 - TIC spécialement adaptées au maniement ou au traitement d’images médicales pour le traitement d’images médicales, p.ex. l’édition
G16H 10/40 - TIC spécialement adaptées au maniement ou au traitement des données médicales ou de soins de santé relatives aux patients pour des données relatives aux analyses de laboratoire, p.ex. pour des analyses d’échantillon de patient
G06F 18/21 - Conception ou mise en place de systèmes ou de techniques; Extraction de caractéristiques dans l'espace des caractéristiques; Séparation aveugle de sources
G06F 18/2132 - Extraction de caractéristiques, p.ex. en transformant l'espace des caractéristiques; Synthétisations; Mappages, p.ex. procédés de sous-espace basée sur des critères de discrimination, p.ex. l'analyse discriminante
G06F 18/2137 - Extraction de caractéristiques, p.ex. en transformant l'espace des caractéristiques; Synthétisations; Mappages, p.ex. procédés de sous-espace basée sur des critères de préservation de la topologie, p.ex. positionnement multidimensionnel ou cartes auto-organisatrices
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
G06F 18/23213 - Techniques non hiérarchiques en utilisant les statistiques ou l'optimisation des fonctions, p.ex. modélisation des fonctions de densité de probabilité avec un nombre fixe de partitions, p.ex. K-moyennes
G06V 10/762 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant le regroupement, p.ex. de visages similaires sur les réseaux sociaux
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
59.
Face recognition method, device and electronic equipment, and computer non-volatile readable storage medium
A face recognition method includes: detecting keypoints when a first face image is received; acquiring a recognition score of each detectable keypoint and the serial numbers of missing keypoints; acquiring, when the influence score of the missing keypoints is higher than a predetermined score threshold, a plurality of target keypoints from the detectable keypoints that have a predetermined face feature association relationship with the missing keypoints; acquiring a target face feature template whose degree of position combination with the plurality of target keypoints is greater than a predetermined combination degree threshold; and stitching the target face feature template and the plurality of target keypoints on the first face image to obtain a second face image, so that all the keypoints can be detected from the second face image for performing the face recognition.
G06V 40/16 - Visages humains, p.ex. parties du visage, croquis ou expressions
G06V 10/75 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques utilisant l’analyse de contexte; Sélection des dictionnaires
Systems and methods for characterizing a region of interest (ROI) in a medical image are provided. An exemplary system may include a memory storing instructions and at least one processor communicatively coupled to the memory to execute the instructions which, when executed by the processor, may cause the processor to perform operations. The operations may include detecting one or more candidate ROIs from the medical image using a three-dimensional (3D) machine learning network. The operations may also include determining a key slice for each candidate ROI. The operations may further include selecting a primary ROI from the one or more candidate ROIs based on the respective key slices. In addition, the operations may include classifying the primary ROI into one of a plurality of categories using a texture-based classifier based on the key slice corresponding to the primary ROI.
A long short-term memory (LSTM) model-based disease prediction method and apparatus, a computer device, and a storage medium are provided. The method includes: obtaining first medical data of a target object and second medical data of an associated object; inputting the first medical data and the second medical data into a first LSTM network in the LSTM model, to obtain a hidden state vector sequence in the first LSTM network; inputting the hidden state vector sequence into a second LSTM network for operation, to obtain a disease prediction result; selecting a predicted disease with an incidence rate higher than a preset threshold, and recording the predicted disease as a designated disease, and obtaining, based on a preset disease association network, an associated disease directly connected to the designated disease; and outputting the disease prediction result and the associated disease, thereby improving the prediction accuracy.
G16H 50/50 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicales; TIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour la simulation ou la modélisation des troubles médicaux
G06N 3/0442 - Réseaux récurrents, p.ex. réseaux de Hopfield caractérisés par la présence de mémoire ou de portes, p.ex. mémoire longue à court terme [LSTM] ou unités récurrentes à porte [GRU]
G16H 50/20 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicales; TIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour le diagnostic assisté par ordinateur, p.ex. basé sur des systèmes experts médicaux
G16H 50/30 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicales; TIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour l’évaluation des risques pour la santé d’une personne
A neural network model training method and apparatus, a computer device, and a storage medium are provided. The method includes: obtaining a model prediction value of each of all reference samples based on a trained deep neural network model, calculating a difference measurement index between the model prediction value of each reference sample and a real annotation corresponding to the reference sample, and using a target reference sample whose difference measurement index is less than or equal to a preset threshold as a comparison sample; using a training sample whose similarity with the comparison sample meets a preset augmentation condition as a to-be-augmented sample; and performing data augmentation on the to-be-augmented sample, and using the obtained target training sample as a training sample to train the trained deep neural network model until model prediction values of all verification samples in a verification set meet a preset training ending condition.
A data storage method includes: acquiring target data to be stored, and classifying refresh rates of the target data according to a front-end system; applying a hash calculation to the target data classified as having high refresh rates and to the target data classified as having low refresh rates, to obtain a first-type hash value and a second-type hash value; determining the storage data segments corresponding to the first-type hash value and the second-type hash value according to a preset storage-data-segment determination relationship; and storing the target data with high refresh rates and the target data with low refresh rates into the storage data segments corresponding to the first-type hash value and the second-type hash value, respectively.
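The hashing-and-routing idea in the abstract above can be sketched as follows; the segment names, the choice of MD5, and the two-pool layout are illustrative assumptions, not details taken from the patent.

```python
import hashlib

# Hypothetical storage segments for the two refresh-rate classes.
HIGH_SEGMENTS = ["hot-0", "hot-1"]   # frequently refreshed data
LOW_SEGMENTS = ["cold-0", "cold-1"]  # rarely refreshed data

def segment_for(key: str, high_refresh: bool) -> str:
    """Pick a storage segment from the key's hash and its refresh class."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    bucket = int(digest, 16)
    pool = HIGH_SEGMENTS if high_refresh else LOW_SEGMENTS
    return pool[bucket % len(pool)]
```

Because the hash is deterministic, the same record always maps to the same segment within its refresh-rate class.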
A method for tracking a target includes: if a locating request for tracking a target is received, acquiring original position information of an original target point selected by a user from the locating request; performing target prediction on a current frame image according to a preset target prediction model to obtain a target prediction result; calculating a Euclidean distance between each target to be tracked and the original target point according to the target position information and the original coordinates of each of the target regions, to obtain N distances; and selecting the distance with the smallest value from the N distances as a target distance, acquiring the target position information corresponding to the target distance, and determining the target to be tracked in the target region corresponding to that target position information as the tracked target corresponding to the original target point.
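The nearest-candidate selection step in the tracking abstract above can be sketched in a few lines; representing each target region by a single center point is an assumption made for illustration.

```python
import math

def nearest_target(origin, candidates):
    """Return the candidate region center whose Euclidean distance to the
    originally selected point is smallest; ties keep the first candidate."""
    return min(candidates, key=lambda c: math.dist(origin, c))
```

For example, given an original point at the image origin and three predicted region centers, the closest one is chosen as the tracked target.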
An image segmentation method includes generating a CTN (contour transformer network) model for image segmentation, where generating the CTN model includes providing an annotated image, the annotated image including an annotated contour, providing a plurality of unannotated images, pairing the annotated image to each of the plurality of unannotated images to obtain a plurality of image pairs, feeding the plurality of image pairs to an image encoder to obtain a plurality of first-processed image pairs, and feeding the plurality of first-processed image pairs to a contour tuner to obtain a plurality of second-processed image pairs.
Embodiments of the present application disclose a traffic data self-recovery processing method, including: monitoring an operation result of a traffic data synchronization operation of a target system; if the monitored operation result is that the traffic data synchronization has failed, repeatedly performing the traffic data synchronization operation of the target system until the synchronization succeeds or the cumulative number of synchronization failures exceeds a failure frequency threshold; clearing the cumulative number if the monitored operation result is that the traffic data synchronization has succeeded; and stopping the traffic data synchronization operation of the target system and sending out a message indicating the synchronization failure if the cumulative number of synchronization failures exceeds the failure frequency threshold, wherein the failure frequency threshold is determined by, and positively correlated with, the current network signal intensity of the target system.
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p.ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
H04L 43/04 - Traitement des données de surveillance capturées, p.ex. pour la génération de fichiers journaux
The present disclosure describes a computer-implemented method for processing clinical three-dimensional images. The method includes training a fully supervised segmentation model using a labelled image dataset containing images for a disease at a predefined set of contrast phases or modalities, to allow the segmentation model to segment images at the predefined set of contrast phases or modalities; finetuning the fully supervised segmentation model through co-heterogeneous training and adversarial domain adaptation (ADA) using an unlabelled image dataset containing clinical multi-phase or multi-modality image data, to allow the segmentation model to segment images at contrast phases or modalities other than the predefined set; and further finetuning the fully supervised segmentation model using domain-specific pseudo-labelling to identify pathological regions missed by the segmentation model.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
G16H 30/40 - TIC spécialement adaptées au maniement ou au traitement d’images médicales pour le traitement d’images médicales, p.ex. l’édition
G06V 10/46 - Descripteurs pour la forme, descripteurs liés au contour ou aux points, p.ex. transformation de caractéristiques visuelles invariante à l’échelle [SIFT] ou sacs de mots [BoW]; Caractéristiques régionales saillantes
68.
Method, apparatus, computer device and storage medium of page displaying
A method of page displaying includes: obtaining page data of a current page of an application, the page data including a screenshot as well as view identifiers and view names of a plurality of views; adding the plurality of view identifiers to a plurality of arrays having different levels according to a preset rule; building a multi-fork tree corresponding to the current page of the application using the arrays; generating hierarchical paths corresponding to the plurality of views according to the multi-fork tree; adding corresponding burial point frames to the corresponding views according to the hierarchical paths; and transmitting the screenshot provided with the burial point frames to a preset terminal, so that the preset terminal displays the screenshot with the burial point frames.
G06F 17/00 - TRAITEMENT ÉLECTRIQUE DE DONNÉES NUMÉRIQUES Équipement ou méthodes de traitement de données ou de calcul numérique, spécialement adaptés à des fonctions spécifiques
G06F 16/958 - Organisation ou gestion de contenu de sites Web, p.ex. publication, conservation de pages ou liens automatiques
G06F 16/901 - Indexation; Structures de données à cet effet; Structures de stockage
The present application relates to a blockchain system based on Ethereum, including a master node and a plurality of backup nodes. The master node is configured to receive a transaction request transmitted by a client terminal, perform transaction processing by calling a smart contract deployed in a consortium blockchain according to the transaction request to obtain transaction data, use the transaction data to generate a block, and broadcast the block to the plurality of backup nodes. Each backup node is configured to receive the block and verify the transaction data of the block. The master node is further configured to generate a first-stage certificate using complete block information and transmit the first-stage certificate to the plurality of backup nodes. Each backup node is further configured to respectively generate a second-stage certificate and a third-stage certificate according to a block hash value in the first-stage certificate, the second-stage certificate and the third-stage certificate being respectively used to negotiate on the block to obtain a negotiation result. When the block verification is passed and the negotiation result is a successful negotiation, the master node and the plurality of backup nodes are each configured to add the block to their local copy of the consortium blockchain.
G06F 11/14 - Détection ou correction d'erreur dans les données par redondance dans les opérations, p.ex. en utilisant différentes séquences d'opérations aboutissant au même résultat
H04L 9/32 - Dispositions pour les communications secrètes ou protégées; Protocoles réseaux de sécurité comprenant des moyens pour vérifier l'identité ou l'autorisation d'un utilisateur du système
H04L 29/08 - Procédure de commande de la transmission, p.ex. procédure de commande du niveau de la liaison
H04L 12/24 - Dispositions pour la maintenance ou la gestion
H04L 29/06 - Commande de la communication; Traitement de la communication caractérisés par un protocole
H04L 67/60 - Ordonnancement ou organisation du service des demandes d'application, p.ex. demandes de transmission de données d'application en utilisant l'analyse et l'optimisation des ressources réseau requises
H04L 41/0663 - Gestion des fautes, des événements, des alarmes ou des notifications en utilisant la reprise sur incident de réseau en réalisant des actions prédéfinies par la planification du basculement, p.ex. en passant à des éléments de réseau de secours
70.
MACHINE LEARNING BASED MEDICAL DATA CLASSIFICATION METHOD, COMPUTER DEVICE, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A machine learning based medical data classification method is provided. The method includes: receiving a medical data classification request including medical record information; obtaining a preset medical term base and performing word segmentation on the medical record information according to the medical terms in the term base to obtain multiple text vectors; extracting features of the multiple text vectors to obtain the multiple text vectors and their corresponding feature dimension values; training a target classifier with multiple pieces of medical data, and traversing and calculating the multiple text vectors and the corresponding feature dimension values; and, once a target node corresponding to the multiple text vectors is traversed, calculating class probabilities corresponding to the multiple text vectors according to the target node, obtaining a class result corresponding to the medical record information according to the class probabilities, and pushing the class result to a terminal.
G16H 10/60 - TIC spécialement adaptées au maniement ou au traitement des données médicales ou de soins de santé relatives aux patients pour des données spécifiques de patients, p.ex. pour des dossiers électroniques de patients
G16H 50/20 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicales; TIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour le diagnostic assisté par ordinateur, p.ex. basé sur des systèmes experts médicaux
G16H 70/20 - TIC spécialement adaptées au maniement ou au traitement de références médicales concernant des pratiques ou des directives
G16H 50/70 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicales; TIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour extraire des données médicales, p.ex. pour analyser les cas antérieurs d’autres patients
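The word-segmentation step against a medical term base, described in the classification abstract above, can be illustrated with a greedy longest-match tokenizer; the sample terms and the greedy strategy are assumptions for illustration, not the patent's actual segmenter.

```python
def segment(text, term_base):
    """Greedy longest-match segmentation: at each position, consume the
    longest term from the term base that matches; otherwise emit one
    character and advance."""
    terms = sorted(term_base, key=len, reverse=True)  # try longer terms first
    tokens, i = [], 0
    while i < len(text):
        for t in terms:
            if text.startswith(t, i):
                tokens.append(t)
                i += len(t)
                break
        else:
            tokens.append(text[i])  # no term matches: fall back to a single char
            i += 1
    return tokens
```

Greedy longest match is a common baseline for term-base segmentation; more elaborate segmenters resolve ambiguities with statistics rather than term length alone.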
The present application is applicable to the technical field of insurance type information processing, and provides a method for conducting statistics on insurance type state information of a policy, a terminal device, and a storage medium. The method includes receiving a unique identifier of an insurance type of a policy; searching a log table for all state change records corresponding to the unique identifier; sorting all the found state change records in chronological order; determining whether two adjacent state change records are the same; when the two adjacent state change records are different, subtracting the time point of the previous state change record from the time point of the latter state change record to obtain a time interval; and determining the duration of a valid state based on the time interval. Through the above method, data processing efficiency can be greatly improved.
G06F 7/08 - Tri, c. à d. rangement des supports d'enregistrement dans un ordre de succession numérique ou autre, selon la classification d'au moins certaines informations portées sur les supports
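The interval arithmetic in the statistics method above can be sketched as follows; attributing each interval to the earlier state is one plausible reading of the abstract, and the record layout is an assumption.

```python
from datetime import datetime

def valid_state_durations(records):
    """records: list of (timestamp, state) pairs, assumed already sorted
    chronologically. For each pair of adjacent records whose states differ,
    add the interval between them to the earlier state's total duration."""
    durations = {}
    for (t_prev, s_prev), (t_next, s_next) in zip(records, records[1:]):
        if s_prev != s_next:
            interval = (t_next - t_prev).total_seconds()
            durations[s_prev] = durations.get(s_prev, 0.0) + interval
    return durations
```

For example, a policy that becomes "lapsed" one day after becoming "valid" accrues one day of valid-state duration.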
72.
Device and method for detecting clinically important objects in medical images with distance-based decision stratification
A method for performing a computer-aided diagnosis (CAD) includes: acquiring a medical image set; generating a three-dimensional (3D) tumor distance map corresponding to the medical image set, each voxel of the tumor distance map representing a distance from the voxel to a nearest boundary of a primary tumor present in the medical image set; and performing neural-network processing of the medical image set to generate a predicted probability map to predict presence and locations of oncology significant lymph nodes (OSLNs) in the medical image set, wherein voxels in the medical image set are stratified and processed according to the tumor distance map.
G16H 30/40 - TIC spécialement adaptées au maniement ou au traitement d’images médicales pour le traitement d’images médicales, p.ex. l’édition
G16H 50/20 - TIC spécialement adaptées au diagnostic médical, à la simulation médicale ou à l’extraction de données médicales; TIC spécialement adaptées à la détection, au suivi ou à la modélisation d’épidémies ou de pandémies pour le diagnostic assisté par ordinateur, p.ex. basé sur des systèmes experts médicaux
G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
G06F 18/21 - Conception ou mise en place de systèmes ou de techniques; Extraction de caractéristiques dans l'espace des caractéristiques; Séparation aveugle de sources
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/80 - Fusion, c. à d. combinaison des données de diverses sources au niveau du capteur, du prétraitement, de l’extraction des caractéristiques ou de la classification
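The tumor distance map described in the CAD abstract above can be illustrated with a brute-force Euclidean distance transform; this toy version measures distance to the nearest tumor voxel (the abstract uses the nearest tumor boundary), and a real pipeline would use an exact method such as `scipy.ndimage.distance_transform_edt`.

```python
import numpy as np

def distance_map(tumor_mask):
    """For every voxel, compute the Euclidean distance to the nearest
    tumor voxel (True entry in the mask). Brute force: fine for toy
    grids, O(V * T) for V voxels and T tumor voxels."""
    coords = np.argwhere(tumor_mask)                # tumor voxel coordinates
    grid = np.argwhere(np.ones_like(tumor_mask, dtype=bool))  # all voxels
    d = np.linalg.norm(grid[:, None, :] - coords[None, :, :], axis=-1)
    return d.min(axis=1).reshape(tumor_mask.shape)
```

The resulting map can then stratify voxels into near/far bands relative to the primary tumor, as the abstract describes.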
73.
Data noise reduction method, device, computer apparatus and storage medium
A data noise reduction method based on data resources is provided. The method includes: acquiring a corresponding characteristic combination according to a received noise reduction request; acquiring corresponding initial data according to the characteristic combination; calculating a discrimination degree of the characteristic combination; screening the discrimination degree using a preset initial discrimination degree threshold, and acquiring the characteristic combination whose discrimination degree meets a preset requirement; generating an initial characteristic combination according to the corresponding characteristic combination; extracting an available characteristic combination from the initial characteristic combination according to a preset evaluation index; and performing noise reduction on the initial data according to the available characteristic combination, deleting noise data from the initial data to acquire available data, and sending the available data to the terminal.
G06F 16/00 - Recherche d’informations; Structures de bases de données à cet effet; Structures de systèmes de fichiers à cet effet
G06F 16/215 - Amélioration de la qualité des données; Nettoyage des données, p.ex. déduplication, suppression des entrées non valides ou correction des erreurs typographiques
G06F 16/28 - Bases de données caractérisées par leurs modèles, p.ex. des modèles relationnels ou objet
A method of harvesting lesion annotations includes conditioning a lesion proposal generator (LPG) based on a first two-dimensional (2D) image set to obtain a conditioned LPG, including adding lesion annotations to the first 2D image set to obtain a revised first 2D image set, forming a three-dimensional (3D) composite image according to the revised first 2D image set, reducing false-positive lesion annotations from the revised first 2D image set according to the 3D composite image to obtain a second-revised first 2D image set, and feeding the second-revised first 2D image set to the LPG to obtain the conditioned LPG, and applying the conditioned LPG to a second 2D image set different from the first 2D image set to harvest lesion annotations.
G06T 5/50 - Amélioration ou restauration d'image en utilisant plusieurs images, p.ex. moyenne, soustraction
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
75.
User permission data query method and apparatus, electronic device and medium
A user permission data query method which includes: obtaining a first data table including staff identification numbers and departments corresponding to the staff identification numbers, and obtaining a second data table including a correspondence relationship among the staff identification numbers, roles, and administration authority information; obtaining, from the second data table, a plurality of data records having the same staff identification number and the same role, and calculating an MD5 value corresponding to the staff identification number and the role; screening the various MD5 values that are different from each other, and obtaining the management departments and the management staff respectively corresponding to the various MD5 values; and, when a permission query request is received, obtaining an MD5 value corresponding to the permission query request and determining the management departments and the management staff corresponding to the MD5 value as permission data of a user.
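The MD5 grouping step can be sketched as follows: records sharing a staff identification number and role hash to the same digest, so the distinct digests identify the distinct permission groups. Field names and values are illustrative, not from the patent.

```python
# Minimal sketch of MD5-keyed grouping of permission records; the key fields
# (staff_id, role) follow the abstract, everything else is illustrative.
import hashlib

records = [
    {"staff_id": "1001", "role": "admin", "dept": "HR"},
    {"staff_id": "1001", "role": "admin", "dept": "HR"},   # same group
    {"staff_id": "1002", "role": "viewer", "dept": "IT"},
]

def md5_key(record):
    """MD5 digest of the staff identification number and role."""
    raw = (record["staff_id"] + "|" + record["role"]).encode("utf-8")
    return hashlib.md5(raw).hexdigest()

groups = {}
for rec in records:
    groups.setdefault(md5_key(rec), []).append(rec)
# Each distinct MD5 value now corresponds to one (staff_id, role) group,
# against which a permission query's own MD5 value can be matched.
```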
A method for topic early warning includes: acquiring a self-defined keyword; calculating similarity between the self-defined keyword and each word in a corpus, and acquiring extended keywords related to the self-defined keyword from the corpus according to the similarity; selecting a target keyword from the extended keywords according to a type of the extended keywords and similarity between the extended keywords and the self-defined keyword, and adding the target keyword to a target keyword list; performing real-time monitoring according to the target keyword in the target keyword list; and performing topic early warning when it is monitored that the number of topics corresponding to the target keyword reaches a preset threshold.
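The keyword-expansion step can be illustrated with cosine similarity over word vectors. The 2-D toy embeddings and the threshold below are assumptions; the patent does not specify how corpus words are vectorized.

```python
# Hedged sketch of expanding a self-defined keyword by embedding similarity;
# the embeddings and threshold are illustrative assumptions.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

embeddings = {
    "flood": (0.9, 0.1),
    "rain":  (0.8, 0.2),
    "stock": (0.1, 0.9),
}

def expand(keyword, threshold=0.9):
    """Return corpus words whose similarity to the keyword meets the threshold."""
    seed = embeddings[keyword]
    return [w for w, vec in embeddings.items()
            if w != keyword and cosine(seed, vec) >= threshold]

extended = expand("flood")
```

A related word ("rain") passes the similarity threshold and becomes an extended keyword, while an unrelated one ("stock") does not.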
A deep learning based license plate identification method, device, equipment, and storage medium. The deep learning based license plate identification method comprises: extracting features of an original captured image by using a single shot multi-box detector to obtain a target license plate image; correcting the target license plate image to obtain a corrected license plate image; identifying the corrected license plate image by using a bi-directional long short-term memory model to obtain target license plate information. When the deep learning based license plate identification method performs license plate identification, the identification efficiency is high and the accuracy is higher.
A method of scanning website vulnerability comprising: reading a vulnerability scan task in a scan task pool; finding a website corresponding to the vulnerability scan task, acquiring access data of the website, and obtaining a popularity coefficient of the website according to the access data; acquiring historical vulnerability scan data and a vulnerability risk level table, and obtaining a security risk coefficient of the vulnerability scan task according to the historical vulnerability scan data and the vulnerability risk level table; acquiring update time data of the vulnerability scan task, and calculating a time coefficient of the vulnerability scan task according to the update time data; inputting the popularity coefficient, the security risk coefficient, and the time coefficient into a preset priority evaluation model for processing, and obtaining an execution priority weight of the vulnerability scan task; and executing vulnerability scan tasks in the scan task pool in descending order according to the execution priority weights.
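The priority-evaluation step can be sketched with a weighted sum of the three coefficients. The patent leaves the evaluation model unspecified, so the linear form and the weights below are assumptions.

```python
# Illustrative priority evaluation model: a weighted sum of the popularity,
# security risk, and time coefficients (weights are assumed, not specified).
def priority_weight(popularity, risk, time_coeff,
                    w_pop=0.4, w_risk=0.4, w_time=0.2):
    """Execution priority weight of one vulnerability scan task."""
    return w_pop * popularity + w_risk * risk + w_time * time_coeff

tasks = [
    ("site-a", priority_weight(0.9, 0.8, 0.5)),
    ("site-b", priority_weight(0.2, 0.9, 0.9)),
    ("site-c", priority_weight(0.5, 0.3, 0.1)),
]

# Execute scan tasks in descending order of priority weight.
queue = sorted(tasks, key=lambda t: t[1], reverse=True)
```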
A method and device for stratified image segmentation are provided. The method includes: obtaining a three-dimensional (3D) image data set representative of a region comprising at least three levels of objects; generating a first segmentation result indicating boundaries of anchor-level objects in the region based on a first neural network (NN) model corresponding to the anchor-level objects; generating a second segmentation result indicating boundaries of mid-level objects in the region based on the first segmentation result and a second NN model corresponding to the mid-level objects; and generating a third segmentation result indicating small-level objects in the region based on the first segmentation result, a third NN model corresponding to the small-level objects, and cropped regions corresponding to the small-level objects.
A method for performing a computer-aided diagnosis (CAD) for universal lesion detection includes: receiving a medical image; processing the medical image to predict lesion proposals and generating cropped feature maps corresponding to the lesion proposals; for each lesion proposal, applying a plurality of lesion detection classifiers to generate a plurality of lesion detection scores, the plurality of lesion detection classifiers including a whole-body classifier and one or more organ-specific classifiers; for each lesion proposal, applying an organ-gating classifier to generate a plurality of weighting coefficients corresponding to the plurality of lesion detection classifiers; and for each lesion proposal, performing weight gating on the plurality of lesion detection scores with the plurality of weighting coefficients to generate a comprehensive lesion detection score.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/62 - Methods or arrangements for recognition using electronic means
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
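The weight-gating step of the CAD method above amounts to blending the per-classifier detection scores with the organ-gating weights. The scores, weights, and classifier names below are made up for illustration.

```python
# Sketch of weight gating: organ-gating weights blend whole-body and
# organ-specific lesion detection scores into one comprehensive score.
def gated_score(detection_scores, gating_weights):
    """Weighted combination of per-classifier lesion detection scores."""
    assert len(detection_scores) == len(gating_weights)
    return sum(s * w for s, w in zip(detection_scores, gating_weights))

# Scores from [whole-body, liver-specific, lung-specific] classifiers
scores = [0.6, 0.9, 0.1]
# Weighting coefficients from the organ-gating classifier (assumed normalized)
weights = [0.2, 0.7, 0.1]

comprehensive = gated_score(scores, weights)
```

For a proposal gated mostly toward the liver classifier, the comprehensive score is dominated by the liver-specific detection score.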
The present disclosure provides a computer-implemented method, a device, and a computer program product for radiographic bone mineral density (BMD) estimation. The method includes receiving a plain radiograph, detecting landmarks for a bone structure included in the plain radiograph, extracting a region of interest (ROI) from the plain radiograph based on the detected landmarks, and estimating the BMD for the ROI extracted from the plain radiograph by using a deep neural network.
G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
G06F 18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
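The ROI-extraction step can be sketched by taking the axis-aligned bounding box of the detected landmarks, padded by a margin. This exact rule is an assumption; the patent does not prescribe how the ROI is derived from the landmarks, and the coordinates are illustrative.

```python
# Hedged sketch of landmark-based ROI extraction: the ROI is assumed to be
# the padded bounding box of the detected landmarks (rule and margin are
# illustrative, not from the patent).
def landmark_roi(landmarks, margin=5):
    """Bounding box (x1, y1, x2, y2) around landmarks, padded by `margin`."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Detected landmarks on a radiograph, in pixel coordinates (illustrative)
landmarks = [(120, 200), (180, 210), (150, 260)]
roi = landmark_roi(landmarks)
# The cropped ROI would then be fed to the deep neural network for BMD
# estimation.
```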
82.
Method, equipment, computing device and computer-readable storage medium for knowledge extraction based on TextCNN
The application discloses a method for knowledge extraction based on TextCNN, comprising: S10, collecting first training data, and constructing a character vector dictionary and a word vector dictionary; S20, constructing a first convolutional neural network, and training the first convolutional neural network based on a first optimization algorithm, wherein the first convolutional neural network comprises a first embedding layer, a first multilayer convolution, and a first softmax function connected in turn; S30, constructing a second convolutional neural network, and training the second convolutional neural network based on a second optimization algorithm, wherein the second convolutional neural network comprises a second embedding layer, a second multilayer convolution, a pooling layer, two fully-connected layers, and a second softmax function connected in turn; S40, extracting a knowledge graph triple of the to-be-predicted data according to an entity tagging prediction output by the trained first convolutional neural network and an entity relationship prediction output by the trained second convolutional neural network.
A method for performing computer-aided diagnosis (CAD) based on a medical scan image includes: pre-processing the medical scan image to produce an input image, a flipped image, and a spatial alignment transformation corresponding to the input image and the flipped image; performing Siamese encoding on the input image to produce an encoded input feature map; performing Siamese encoding on the flipped image to produce an encoded flipped feature map; performing a feature alignment using the spatial alignment transformation on the encoded flipped feature map to produce an encoded symmetric feature map; and processing the encoded input feature map and the encoded symmetric feature map to generate a diagnostic result indicating presence and locations of anatomical abnormalities in the medical scan image.
A target cell marking method, including: determining an original image format of the original scanned image, and converting the original scanned image into a first image in a preset image format; segmenting the first image into a plurality of image blocks and recording arrangement positions of the image blocks in the first image; respectively inputting the image blocks into a preset deep learning detection model to obtain first position information of target cells in the image blocks; determining second position information of the target cells in the first image according to the first position information and the corresponding arrangement positions; integrating the image blocks according to the arrangement positions to obtain a second image, and marking the target cells in the second image; and converting the second image marked by the target cells into a third image in the original image format, and displaying the third image.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G09G 5/37 - Control arrangements or circuits for visible indicators common to cathode-ray tube indicators and to indicators using other display means, characterised by the display of individual graphic patterns using a bit-mapped memory - Details of the processing of graphic patterns
G06F 3/14 - Digital output to display device
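The position-mapping step of the target cell marking method above can be sketched as a coordinate translation: a detection inside an image block is shifted by that block's arrangement position to give its position in the full first image. Block size and coordinates below are illustrative.

```python
# Minimal sketch of mapping a detected cell's box from block-local to
# full-image coordinates, given the block's arrangement position.
BLOCK_W, BLOCK_H = 256, 256  # assumed block size

def to_global(block_row, block_col, local_box):
    """Translate an (x1, y1, x2, y2) box from block to full-image coordinates."""
    x1, y1, x2, y2 = local_box
    dx, dy = block_col * BLOCK_W, block_row * BLOCK_H
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

# A cell detected at (10, 20)-(40, 60) inside the block at row 1, column 2
global_box = to_global(1, 2, (10, 20, 40, 60))
```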
85.
Cultivated land recognition method in satellite image and computing device
A cultivated land recognition method in a satellite image includes: segmenting a satellite image of the Earth into a plurality of standard images; and recognizing cultivated land areas in each of the standard images using a cultivated land recognition model to obtain a plurality of first images. Edges of ground-level entities in each of the standard images are detected using an edge detection model to obtain a plurality of second images. Each of the first images and a corresponding one of the second images are merged to obtain a plurality of third images, and cultivated land images are obtained by segmenting each of the third images using a watershed segmentation algorithm. Not only can the result of recognizing cultivated land in satellite images of the Earth be improved, but the efficiency of recognizing the cultivated land can also be improved. A computing device employing the method is also disclosed.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
In a face swap method carried out by an electronic device, a first head image is segmented from a destination image. First facial landmarks and a first hair mask are obtained according to the first head image. A second head image is segmented from a source image. Second facial landmarks and a second hair mask are obtained according to the second head image. If at least one eye landmark in the second facial landmarks is covered by hair, the second head image and the second hair mask are processed and repaired so as to obtain a swapped-face image with eyes not covered by hair.
G06T 11/60 - Editing figures and text; Combining figures or text
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/34 - Segmentation of touching or overlapping patterns in the image field
A device providing intelligent assistance in mobility for disabled people and others includes a mobility device and a lifting device detachably mounted on the mobility device. The lifting device includes a base frame, a retractable bracket structure, several wheels, a sitting pad, and a backrest. The wheels are mounted on a lower surface of the base frame and drive the lifting device to move. The retractable bracket structure is mounted on an upper surface of the base frame. The sitting pad is detachably mounted on the retractable bracket structure, and the backrest is rotatably mounted on the retractable bracket structure.
An environment monitoring method and an electronic device are provided. The method divides a satellite image into a plurality of first divided images with overlapping areas, and obtains a first multi-dimensional feature map by inputting the plurality of first divided images into an environment monitoring model. The environment monitoring model fully combines the correlation between environmental information of different dimensions: the environmental features of a plurality of different dimensions are correlated through an association network. By utilizing the environment monitoring method, environment monitoring of a large area is effectively realized, and the accuracy of environmental detection is improved.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/46 - Extraction of features or characteristics of the image
G06K 9/62 - Methods or arrangements for recognition using electronic means
In a crop identification method, multi-temporal sample remote sensing images labeled with first planting blocks of a specific crop are acquired. Normalized difference vegetation index (NDVI) data of the sample remote sensing images are calculated, and noise of the NDVI data is reduced. A first multivariate Gaussian model is fitted based on the de-noised NDVI data of the sample remote sensing images. Multi-temporal target remote sensing images are acquired, and an NDVI time series of each pixel in the target remote sensing images is constructed. The NDVI time series is input to the first multivariate Gaussian model to obtain a likelihood value of each pixel displaying the specific crop in the remote sensing images. Second planting blocks of the specific crop in the target remote sensing images are determined accordingly. An accurate and robust identification result is thereby achieved.
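The NDVI and likelihood steps above can be sketched as follows. NDVI is the standard ratio (NIR - red) / (NIR + red); for the likelihood, a diagonal-covariance Gaussian stands in here for the patent's multivariate model, and all reflectance values and model parameters are illustrative.

```python
# Hedged sketch: per-date NDVI for one pixel, then the log-likelihood under
# a diagonal-covariance Gaussian (an assumed simplification of the patent's
# multivariate Gaussian model; all numbers are illustrative).
import math

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def log_likelihood(series, means, variances):
    """Diagonal-Gaussian log-likelihood of an NDVI time series."""
    ll = 0.0
    for x, mu, var in zip(series, means, variances):
        ll += -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
    return ll

# NDVI time series of one pixel across three acquisition dates
series = [ndvi(0.6, 0.2), ndvi(0.7, 0.1), ndvi(0.5, 0.3)]
# Model parameters fitted from the labeled planting blocks (illustrative)
means, variances = [0.5, 0.7, 0.3], [0.02, 0.02, 0.02]

ll = log_likelihood(series, means, variances)
```

Pixels whose series tracks the fitted seasonal profile score a high likelihood and are assigned to the crop's planting blocks.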
An image processing method and an electronic device are provided. The method extracts a first object mask of a texture image and a second object mask of a to-be-optimized image. An image recognition model is used to obtain a first content matrix, a first texture matrix, a second content matrix, a second texture matrix, a first mask matrix, and a second mask matrix. A total loss of the to-be-optimized image is determined, and the total loss is minimized by adjusting a value of each pixel of the to-be-optimized image, thereby obtaining an optimized image. By utilizing the image processing method, the quality of the final image is improved.
A method for generating a model for facial sculpture based on a generative adversarial network (GAN) includes training a predetermined GAN based on a three-dimensional (3D) face dataset of multiple 3D face images to obtain an initial sculpture generation model. A curvature conversion is performed on each of the multiple 3D face images to obtain a curvature value distribution map, and the curvature value distribution map of each of the multiple 3D face images is added as attention information to the initial sculpture generation model, to train and generate a face sculpture generation model. Target 3D face data and predetermined face curvature parameters are received, and the target 3D face data and the predetermined face curvature parameters are inputted into the face sculpture generation model to generate a face sculpture model. A computing device using the method is also provided.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
92.
Vehicle damage detection method based on image analysis, electronic device and storage medium
A vehicle damage detection method based on image analysis, an electronic device, and a storage medium are provided. In the vehicle damage detection method, query images are obtained by filtering received images through a pre-trained Single Shot MultiBox Detector (SSD) object detection model, and a feature vector of each of the query images is obtained by inputting each of the query images into a residual network. Target output data is obtained using a Transformer model, and similar images of the query images are obtained by processing the target output data using a pre-trained similarity judgment model. Loss of a current vehicle damage assessment case is evaluated based on similar cases, and the evaluated loss is outputted. By utilizing the vehicle damage detection method, the effectiveness of vehicle damage detection is improved, and automatic loss evaluation is achieved.
A driving model training method, a driver identification method, apparatuses, a device, and a medium are provided. The driving model training method comprises: acquiring training behavior data of a user, wherein the training behavior data are associated with a user identifier; acquiring training driving data associated with the user identifier based on the training behavior data; acquiring positive and negative samples from the training driving data based on the user identifier, and dividing the positive and negative samples into a training set and a test set; training the training set using a bagging algorithm, and acquiring an original driving model; and testing the original driving model using the test set, and acquiring a target driving model. The driving model training method effectively enhances the generalization of the driving model and solves the problem of poor identification results of current driver identification models.
In a method for training an image generation model, a first generator generates a first sample matrix, a first converter generates a sample contour image, a first discriminator optimizes the first generator and the first converter, a second generator generates a second sample matrix according to the first sample matrix, a second converter generates a first sample grayscale image, a second discriminator optimizes the second generator and the second converter, a third generator generates a third sample matrix according to the second sample matrix, a third converter generates a second sample grayscale image, a third discriminator optimizes the third generator and the third converter, a fourth generator generates a fourth sample matrix according to the third sample matrix, a fourth converter generates a sample color image, and a fourth discriminator optimizes the fourth generator and the fourth converter. The image generation model can be trained easily.
A method for accelerated detection of objects in videos, a server, and a non-transitory computer readable storage medium are provided. The method realizes the detection of a target object in a video by dividing all frame images in video images into preset groups of frame images, each group of frame images including a keyframe image and a non-keyframe image, using a detection box of a target in the keyframe image to generate a preselection box in the non-keyframe image, and detecting the location of the target in the preselection box.
G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
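The preselection-box step of the accelerated detection method above can be sketched by enlarging the keyframe detection box so the target stays inside it in nearby non-keyframes despite small motion. The enlargement rule and scale factor are assumptions; the patent does not specify them.

```python
# Sketch of generating a preselection box in a non-keyframe from a keyframe
# detection box by scaling it about its center (rule and factor assumed).
def preselection_box(keyframe_box, scale=1.5):
    """Enlarge an (x1, y1, x2, y2) box about its center by `scale`."""
    x1, y1, x2, y2 = keyframe_box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    half_w = (x2 - x1) * scale / 2
    half_h = (y2 - y1) * scale / 2
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)

# Target detected in the keyframe at (100, 100)-(200, 200); only the
# enlarged preselection region is searched in the non-keyframes.
box = preselection_box((100, 100, 200, 200))
```

Restricting detection in non-keyframes to this smaller region is what yields the acceleration over full-frame detection.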
96.
Text-based speech synthesis method, computer device, and non-transitory computer-readable storage medium
A text-based speech synthesis method, a computer device, and a non-transitory computer-readable storage medium are provided. The text-based speech synthesis method includes: a target text to be recognized is obtained; each character in the target text is discretely characterized to generate a feature vector corresponding to each character; the feature vector is input into a pre-trained spectrum conversion model, to obtain a Mel-spectrum corresponding to each character in the target text output by the spectrum conversion model; and the Mel-spectrum is converted to speech to obtain speech corresponding to the target text.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme-to-phoneme conversion, prosody generation or stress or intonation determination
G10L 13/047 - Architecture of speech synthesisers
G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups, characterised by the type of extracted parameters, the extracted parameters being spectral information of each sub-band
G10L 25/24 - Speech or voice analysis techniques not restricted to a single one of groups, characterised by the type of extracted parameters, the extracted parameters being the cepstrum
97.
Scoring information matching method and device, storage medium and server
A scoring information matching method and device, a storage medium, and a server are provided. The scoring information matching method comprises: obtaining target scoring information and a target scoring message which corresponds to the target scoring information; obtaining a first telephone number which sends out the target scoring message; obtaining a second telephone number which sends out the target scoring information; extracting a first identity number from the first telephone number; searching preset service records for a service record of which an identity number is the same as the first identity number, a telephone number of a recipient of a corresponding scoring message is the same as the second telephone number, and a transmission time of the corresponding scoring message satisfies a preset condition; and determining the found service record as a target service record that matches the target scoring information.
A finger vein comparison method, a computer equipment, and a storage medium are provided. The finger vein comparison method includes: two finger vein images to be compared are obtained (S10); image channel fusion is performed on the two finger vein images to be compared to obtain a two-channel target finger vein image to be compared (S20); the target finger vein image to be compared is input into a feature extractor, and a feature vector of the target finger vein image to be compared is extracted by the feature extractor (S30); the feature vector of the target finger vein image to be compared is input into a dichotomy classifier to obtain a dichotomy result (S40); and it is determined according to the dichotomy result whether the two finger vein images to be compared come from the same finger (S50).
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
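The channel-fusion step (S20) of the finger vein method above can be sketched as stacking the two single-channel vein images into one two-channel image before feature extraction. The tiny 2 x 2 images below are illustrative.

```python
# Minimal sketch of image channel fusion: two H x W grayscale finger vein
# images become one H x W x 2 two-channel image (toy data, plain lists).
def fuse_channels(img_a, img_b):
    """Stack two H x W grayscale images into an H x W x 2 image."""
    assert len(img_a) == len(img_b) and len(img_a[0]) == len(img_b[0])
    return [[[a, b] for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

img_a = [[0, 10], [20, 30]]
img_b = [[5, 15], [25, 35]]
fused = fuse_channels(img_a, img_b)
# `fused` would then go to the feature extractor (S30), whose feature vector
# feeds the dichotomy classifier (S40).
```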
A certificate image extraction method, including: step S101, obtaining an original image containing a certificate image, wherein the original image is obtained by a camera device by means of photographing; step S102, performing white balance processing on the original image to obtain a balance image according to component values of pixel points in the original image in red, green and blue color components; step S103, determining a position of the certificate image in the balance image according to a pre-trained certificate feature model; wherein the certificate feature model is obtained by training based on historical certificate images, a certificate image model and a preset initial weight value; and step S104, extracting the certificate image from the balance image according to the position of the certificate image. By performing the certificate image extraction method, the accuracy of extracting the certificate image from the original image is improved.
A method for detecting and locating a lesion in a medical image is provided. A target medical image of a lesion is obtained and input into a deep learning model to obtain a target sequence. A first feature map output from the last convolution layer in the deep learning model is extracted. A weight value of each network unit corresponding to each preset lesion type in a fully connected layer is extracted. For each preset lesion type, a fusion feature map is calculated according to the first feature map and the corresponding weight value and resampled to the size of the target medical image to generate a generic activation map. The maximum connected area in each generic activation map is determined, and a mark border surrounding the maximum connected area is created. A mark border corresponding to each preset lesion type is added to the target medical image.
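The fusion-feature-map step above can be sketched as a class-activation-style computation: for each preset lesion type, the channels of the last convolution layer's feature map are summed with that type's fully connected weights. The tiny 2-channel, 2 x 2 feature maps below are illustrative, not from the patent.

```python
# Hedged sketch of the fusion feature map: a weighted sum over the channels
# of the last conv layer, using one lesion type's FC weights (toy data).
def activation_map(feature_maps, weights):
    """Weighted channel sum: feature_maps is [C][H][W], weights is [C]."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(weights[c] * feature_maps[c][i][j]
                 for c in range(len(weights)))
             for j in range(w)]
            for i in range(h)]

features = [
    [[1.0, 0.0], [0.0, 2.0]],   # channel 0 of the first feature map
    [[0.0, 3.0], [1.0, 0.0]],   # channel 1 of the first feature map
]
cam = activation_map(features, weights=[0.5, 2.0])
```

In the full method, this map is resampled to the target image size, its maximum connected area is found, and a mark border around that area localizes the lesion.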