Ping An Technology (Shenzhen) Co., LTD.

China


1-100 of 5,847 for Ping An Technology (Shenzhen) Co., LTD.
Refine by
Jurisdiction
        International 5,662
        United States 185
Date
January 2024 1
December 2023 5
2024 (AACJ) 1
2023 233
2022 561
IPC Class
G06K 9/62 - Methods or arrangements for recognition using electronic means 465
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints 462
G06F 17/30 - Information retrieval; Database structures therefor 372
G06N 3/04 - Architecture, e.g. interconnection topology 262
H04L 29/06 - Communication control; Communication processing characterised by a protocol 251
Status
Pending 38
Registered / In Force 5,809
Results for patents

1.

SYSTEM AND METHOD FOR MULTIMODAL VIDEO SEGMENTATION IN MULTI-SPEAKER SCENARIO

      
Application Number 17867667
Status Pending
Filing Date 2022-07-18
First Publication Date 2024-01-18
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wu, Xinyi
  • Xia, Tian
  • Yu, Xinlu
  • Chen, Ziyi
  • Chu, Iek-Heng
  • Xu, Sirui
  • Han, Mei
  • Xiao, Jing
  • Chang, Peng

Abstract

A system and method for multimodal video segmentation in a multi-speaker scenario are provided. A transcript of a video with a plurality of speakers is segmented into a plurality of sentences. Speaker change information is detected between each two adjacent sentences of the plurality of sentences based on at least one of audio content or visual content of the video. The video is segmented into a plurality of video clips based on the transcript of the video and the speaker change information.

IPC Classes

  • G06V 20/40 - Image or video recognition or understanding; Scene-specific elements in video content
  • G10L 17/18 - Artificial neural networks; Connectionist approaches
  • G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
  • G10L 17/14 - Use of phonemic categorisation or speech recognition prior to speaker identification or verification
  • G10L 25/60 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination, for measuring the quality of voice signals
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
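
As an illustration only, the final step of the abstract above — segmenting the video into clips from the transcript sentences and the detected speaker changes — could be sketched as follows; the function name and data shapes are assumptions, not the patent's actual implementation:

```python
def segment_by_speaker_change(sentences, change_flags):
    """Group consecutive transcript sentences into clips, opening a new
    clip wherever a speaker change was detected between two adjacent
    sentences. change_flags[i] is True when a speaker change was
    detected between sentences[i] and sentences[i + 1]."""
    clips = [[sentences[0]]] if sentences else []
    for i, changed in enumerate(change_flags):
        if changed:
            clips.append([sentences[i + 1]])  # speaker changed: start a new clip
        else:
            clips[-1].append(sentences[i + 1])  # same speaker: extend current clip
    return clips
```

In the patent, the change flags are derived from the audio and/or visual content of the video; here they are simply given as inputs.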

2.

ORDER PROCESSING METHOD AND SYSTEM BASED ON HANDHELD TERMINALS, COMPUTER DEVICE, AND MEDIUM

      
Application Number CN2022120484
Publication Number 2023/245892
Status Granted - In Force
Filing Date 2022-09-22
Publication Date 2023-12-28
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Wu, Pulin

Abstract

An order processing method and system based on handheld terminals, a computer device and a medium. The method comprises: a server determines a target store identifier according to order data to be processed, and sends the order data to be processed to a handheld terminal corresponding to the target store identifier (S101); when the handheld terminal acquires a scanned item bar code, the handheld terminal generates an actual item information list according to the scanned item bar code (S102); the handheld terminal verifies, according to the received order data to be processed, whether the item information in the actual item information list is correct, and generates a verification result (S103); and the handheld terminal generates order sorting completion information on the basis of the verification result, and sends the order sorting completion information to the server (S104). The handheld terminal is used to scan the item bar code to obtain the actual item information list, and meanwhile, the handheld terminal can check the actual item information list according to the order data from the server. The approach that combines hardware and software effectively prevents sorting errors caused by human errors, thereby improving the order completion rate and the platform operation efficiency.

IPC Classes

  • G06Q 30/06 - Buying, selling or leasing transactions
  • G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
  • G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
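
The verification step S103 described in the abstract above amounts to comparing the scanned item list against the order data. A minimal sketch, in which the function name and data shapes are illustrative assumptions:

```python
from collections import Counter

def verify_order(order_items, scanned_barcodes, barcode_to_item):
    """Check the actual item list built from scanned bar codes (S102)
    against the order data received from the server (S103).
    Returns (ok, differences), where differences maps each mismatched
    item name to (expected quantity, scanned quantity)."""
    actual = Counter(barcode_to_item.get(code, code) for code in scanned_barcodes)
    differences = {}
    for item in set(order_items) | set(actual):
        expected, scanned = order_items.get(item, 0), actual.get(item, 0)
        if expected != scanned:
            differences[item] = (expected, scanned)
    return (not differences, differences)
```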

3.

INFORMATION MANAGEMENT METHOD AND APPARATUS, AND DEVICE AND MEDIUM

      
Application Number CN2022121285
Publication Number 2023/240830
Status Granted - In Force
Filing Date 2022-09-26
Publication Date 2023-12-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Wu, Feiwang

Abstract

Disclosed in the present application are an information management method and apparatus, and a device and a medium. The method comprises: according to a preset interface selection rule, determining, from among at least one candidate access interface corresponding to each hospital, a target access interface that matches each hospital; by means of the target access interface that matches each hospital, acquiring appointment availability information of each hospital, and generating an information presentation page corresponding to each hospital; by means of calling a preset script, updating or modifying the information presentation page corresponding to each hospital; and according to a preset hospital abnormality detection rule, detecting a registration state of each hospital, and if there is a target hospital of which the registration state is abnormal, adding preset abnormality early-warning information to an information presentation page corresponding to the target hospital. By means of the embodiments of the present application, effective management over appointment availability information of each hospital can be realized, and the appointment availability information is updated or modified in a timely manner, such that the validity and timeliness of the appointment availability information provided for a user are ensured, thereby preventing invalid or expired appointment availability information from being provided for the user.

IPC Classes

  • G16H 40/20 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices, for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms

4.

SERVICE PACKAGE GENERATION METHOD, APPARATUS AND DEVICE BASED ON PATIENT DATA, AND STORAGE MEDIUM

      
Application Number CN2022121728
Publication Number 2023/240837
Status Granted - In Force
Filing Date 2022-09-27
Publication Date 2023-12-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Ye, Jiebao

Abstract

The present application relates to the technical field of big data. Disclosed are a service package generation method, apparatus and device based on patient data, and a storage medium. The method comprises: performing de-identification processing on collected original case data, so as to obtain target treatment data of a patient; extracting a plurality of key events from the target treatment data, and performing fusion processing on the key events, so as to obtain a medical information set of a disease corresponding to a disease type; inputting the target treatment data into a preset Bilstm model, so as to obtain a medical feature vector of medical data, and performing pooling analysis on the medical feature vector, so as to obtain a medical feedforward vector; and according to the medical feedforward vector, performing clustering analysis on the medical information set on the basis of a preset cosine similarity algorithm, so as to obtain a medical service package corresponding to the disease. By means of the present application, original case data is clustered to obtain service packages corresponding to different types of diseases with common features, thereby solving the technical problem of low medical service efficiency, and relieving the medical pressure.

IPC Classes

  • G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for patient-specific data, e.g. for electronic patient records

5.

MACHINE TRANSLATION METHOD AND APPARATUS, AND COMPUTER DEVICE AND STORAGE MEDIUM

      
Application Number CN2022122036
Publication Number 2023/240839
Status Granted - In Force
Filing Date 2022-09-28
Publication Date 2023-12-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) He, Aofei

Abstract

The present application relates to the technical fields of artificial intelligence and speech processing. Provided are a machine translation method and apparatus, and a computer device and a storage medium. The method comprises: acquiring source language data to be translated; performing forward maximum matching on said source language data, and determining a field proper noun in said source language data; inputting the field proper noun into a target machine translation model for translation, so as to obtain a proper noun translation result, and inputting said source language data into the target machine translation model for translation, so as to obtain translation target language data, wherein the target machine translation model is obtained by means of performing training by using sample data; and replacing a corresponding translation result in the translation target language data with the proper noun translation result, so as to obtain a machine translation result. By using the method, the accuracy of translating a field proper noun by means of a target machine translation model can be improved, thereby obtaining a machine translation result with accurate translation.

IPC Classes

  • G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices, or for real-time translation
  • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking
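
Forward maximum matching, used in the abstract above to find field proper nouns in the source text, is a standard greedy algorithm: at each position, take the longest lexicon entry that matches, otherwise emit a single character. A self-contained sketch (representing the proper-noun lexicon as a plain set is an assumption for illustration):

```python
def forward_maximum_match(text, lexicon, max_len=None):
    """Greedy forward maximum matching over a lexicon of known terms."""
    if max_len is None:
        max_len = max((len(term) for term in lexicon), default=1)
    tokens, i = [], 0
    while i < len(text):
        # Try the longest window first, falling back to a single character.
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens
```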

6.

MICRO-SERVICE-BASED MULTI-CHANNEL INFORMATION ISOLATION METHOD AND APPARATUS, AND COMPUTER DEVICE

      
Application Number CN2022122102
Publication Number 2023/240840
Status Granted - In Force
Filing Date 2022-09-28
Publication Date 2023-12-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Zhang, Wenhui

Abstract

The present application relates to a micro-service-based multi-channel information isolation method and apparatus, and a computer device. The method comprises: receiving a service request sent by a user terminal to a server, wherein the server provides a micro-service chain for the user terminal; parsing the user channel information carried by the service request to obtain a user access channel identifier; writing the user channel identifier into the attachment information of a temporary state recorder; and, for a target micro-service in the micro-service chain that has a channel isolation requirement, acquiring the user channel identifier from the attachment information, isolating the channel information, and processing the service request. By using the method, the transformation difficulty and transformation cost for a micro-service can be reduced in the field of digital medical treatment.

IPC Classes

  • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
  • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]

7.

AUTOMATIC VIDEO EDITING METHOD AND SYSTEM, AND TERMINAL AND STORAGE MEDIUM

      
Application Number CN2022089560
Publication Number 2023/184636
Status Granted - In Force
Filing Date 2022-04-27
Publication Date 2023-10-05
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Tang, Xiaochu
  • Shu, Chang
  • Chen, Youxin

Abstract

Disclosed in the present application are an automatic video editing method and system, and a terminal and a storage medium. The method comprises: acquiring key frames of a video to be edited, and self-tagging the key frames by using an image comparison algorithm, so as to generate unsupervised vector representations of the key frames; acquiring corpus information of said video, and acquiring an unsupervised vector representation of the corpus information by using a text comparison algorithm; segmenting said video according to the key frames, so as to generate video clips corresponding to the number of key frames; and calculating the similarity between adjacent video clips according to the unsupervised vector representations of the key frames and the unsupervised vector representation of the corpus information, and combining adjacent video clips between which the similarity is greater than a set similarity threshold value, so as to generate a video editing result of said video. In the embodiments of the present application, image information and text information are used, thereby avoiding manual data labeling, realizing automatic editing of a video, and greatly improving the video editing efficiency.

IPC Classes

  • G06V 20/40 - Image or video recognition or understanding; Scene-specific elements in video content
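
The merging step at the end of the abstract above — combining adjacent clips whose similarity exceeds a set threshold — can be sketched as follows. Representing clips as plain lists of frame indices is an assumption for illustration, not the patent's actual data model:

```python
def merge_similar_clips(clips, similarities, threshold):
    """Combine adjacent video clips whose pairwise similarity exceeds
    the threshold; similarities[i] compares clips[i] and clips[i + 1]."""
    merged = [clips[0]] if clips else []
    for similarity, clip in zip(similarities, clips[1:]):
        if similarity > threshold:
            merged[-1] = merged[-1] + clip  # similar enough: concatenate
        else:
            merged.append(clip)  # dissimilar: keep as a separate clip
    return merged
```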

8.

DISEASE RISK ESTIMATION NETWORK OPTIMIZATION METHOD AND APPARATUS, MEDIUM, AND DEVICE

      
Application Number CN2022089727
Publication Number 2023/178789
Status Granted - In Force
Filing Date 2022-04-28
Publication Date 2023-09-28
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Xu, Zhuoyang
  • Zhao, Tingting
  • Hu, Gang
  • Sun, Xingzhi
  • Zhao, Yue

Abstract

The present application relates to the technical fields of artificial intelligence and digital medical treatment, and discloses a disease risk estimation network optimization method and system, a storage medium, and a computer device. The method comprises: obtaining a patient sample library; randomly selecting at least three patient samples from the patient sample library; inputting sample information of the at least three patient samples into a preset neural network in pairs, and calculating a first distance between every two patient samples by using the neural network, wherein the neural network is used for estimating a disease risk of a patient; calculating a loss value of the neural network according to the first distances; writing the loss value into a loss value list, and determining whether the loss value list meets a preset convergence condition; and if not, adjusting parameters of the neural network according to the loss value, and returning to the step of randomly selecting at least three patient samples from the patient sample library until the loss value list meets the preset convergence condition. The method of the present application improves the accuracy of the neural network for disease risk estimation.

IPC Classes

  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for computer-aided diagnosis, e.g. based on medical expert systems

9.

METHOD AND APPARATUS FOR TRAINING DUAL-PERSPECTIVE GRAPH NEURAL NETWORK MODEL, DEVICE, AND MEDIUM

      
Application Number CN2022090086
Publication Number 2023/178793
Status Granted - In Force
Filing Date 2022-04-28
Publication Date 2023-09-28
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Wang, Jun

Abstract

The present application relates to the technical field of artificial intelligence, and provides a method and apparatus for training a dual-perspective graph neural network model, a device, and a medium. The method comprises: obtaining a plurality of pieces of graph data having labels, the graph data comprising nodes, edges, and attribute information; exchanging the locations of the nodes and the edges in the graph data to obtain exchanged graph data; inputting a node feature matrix and a node adjacency matrix into a node perspective network to obtain a first determination result of the graph data; inputting an edge feature matrix and an edge adjacency matrix into an edge perspective network to obtain a second determination result of the graph data; weighting the first determination result of the graph data and the second determination result of the graph data to obtain a determination result of the graph data, and obtaining a trained dual-perspective graph neural network model on the basis of the determination result of the graph data and the labels of the graph data. According to the present application, the locations of the nodes and the edges in the graph data are exchanged, and node features and edge features are obtained at the same time, so that a trained model has good generalization and mobility.

IPC Classes

  • G06N 3/04 - Architecture, e.g. interconnection topology

10.

IMAGE CLASSIFICATION METHOD AND APPARATUS, AND DEVICE AND MEDIUM

      
Application Number CN2022090437
Publication Number 2023/178798
Status Granted - In Force
Filing Date 2022-04-29
Publication Date 2023-09-28
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Tang, Xiaochu
  • Zhang, Yidi
  • Shu, Chang
  • Chen, Youxin

Abstract

An image classification method and an image classification apparatus. The image classification method comprises: extracting an image feature and a text feature from an image to be classified (S1, S2); fusing the image feature and the text feature to obtain a fused feature (S3); calculating, by using a pre-trained activation function, probability values of the fused feature with respect to a plurality of preset classification tags (S4); and performing image classification analysis on said image according to the fused feature and the probability values by using a pre-trained integrated classification model, so as to obtain a classification result of said image (S5), thereby improving the accuracy and efficiency of image classification.

IPC Classes

  • G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking

11.

NAMED ENTITY RECOGNITION METHOD AND APPARATUS, DEVICE, AND COMPUTER READABLE STORAGE MEDIUM

      
Application Number CN2022090756
Publication Number 2023/178802
Status Granted - In Force
Filing Date 2022-04-29
Publication Date 2023-09-28
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the technical field of artificial intelligence. Provided are a named entity recognition method and apparatus, a device, and a computer readable storage medium. The named entity recognition method comprises acquiring a pre-trained named entity recognition model, acquiring a first sentence to be recognized, and inputting same into the named entity recognition model, so that the named entity recognition model executes the following named entity recognition processing: performing word tokenization processing on the first sentence so as to obtain a second sentence comprising a plurality of tokenization words; performing feature extraction on the plurality of tokenization words so as to obtain a plurality of word embedding feature vectors; according to the plurality of word embedding feature vectors, processing the second sentence so as to obtain a plurality of cross-domain information features; by means of an information bottleneck layer, processing the plurality of cross-domain information features so as to obtain a plurality of information bottleneck features; and, by using a classification function, performing classification recognition on the plurality of information bottleneck features so as to determine a corresponding named entity class. Unregistered words in named entities can be better recognized, and the named entity recognition accuracy is therefore improved.

IPC Classes

12.

CIPHERTEXT DATA STORAGE METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM

      
Application Number CN2022089999
Publication Number 2023/178792
Status Granted - In Force
Filing Date 2022-04-28
Publication Date 2023-09-28
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Liu, Ming
  • Yu, Huiqiang
  • Gao, Yong

Abstract

The present application is applicable to the technical field of data security. Provided are a ciphertext data storage method and apparatus, and a device and a storage medium. The method comprises: encrypting service data generated by a service system to obtain ciphertext data, and acquiring encryption information of the ciphertext data; generating, according to the encryption information of the ciphertext data and a preset data header generation rule, data header information corresponding to the ciphertext data, wherein the data header information is a character string whose characters identify the encryption information of the ciphertext data; and adding the data header information to the head of the ciphertext data to generate target data, and storing the target data in a ciphertext database of the service system. The method solves several technical problems: when a service system combines a plurality of cryptographic algorithms, it is difficult to distinguish which algorithm was applied to a given ciphertext; after the service system changes a key, the key corresponding to a ciphertext cannot be found; and relationships between ciphertext data in the service system cannot be traced.

IPC Classes

  • G06F 21/62 - Protecting access to data via a platform, e.g. using keys or rules to control access
  • G06F 21/60 - Protecting data
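
The core idea in the abstract above — prepending a self-describing character-string header to each ciphertext so the applied algorithm and key can later be identified — can be sketched like this. The header layout (base64-encoded JSON plus a separator) is an illustrative assumption, not the patent's preset generation rule:

```python
import base64
import json

HEADER_SEP = b"."  # base64 output never contains ".", so the split is unambiguous

def wrap_ciphertext(ciphertext, algorithm, key_id):
    """Prepend a data header identifying the encryption information."""
    header = json.dumps({"alg": algorithm, "kid": key_id}).encode()
    return base64.b64encode(header) + HEADER_SEP + ciphertext

def unwrap_ciphertext(data):
    """Split off the header and recover the encryption information."""
    header_b64, _, ciphertext = data.partition(HEADER_SEP)
    return json.loads(base64.b64decode(header_b64)), ciphertext
```

With such a header, a reader of the ciphertext database can recover which algorithm and key produced each record without consulting external state.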

13.

IMAGE DESCRIPTION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

      
Application Number CN2022090723
Publication Number 2023/178801
Status Granted - In Force
Filing Date 2022-04-29
Publication Date 2023-09-28
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and provide an image description method and apparatus, a computer device, and a storage medium. The method comprises: acquiring an original image, and performing feature extraction on the original image to obtain an image feature; performing region detection on the original image according to the image feature to obtain a target region image; performing feature extraction on the target region image to obtain a region feature vector; performing extraction on the region feature vector by means of a subject generation model to obtain subject data, the subject data comprising a subject word vector and moment state information; performing word prediction on the subject data by means of a word generation model to obtain description words; and splicing the description words according to the moment state information to obtain a target description text for describing the original image. By means of multiple feature extractions, a target description text can contain more image details, and a description text having coherent semantics is generated hierarchically by utilizing a subject generation model and a word generation model.

IPC Classes

  • G06V 10/40 - Extraction of image or video features

14.

INTENT RECOGNITION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application Number CN2022120942
Publication Number 2023/178965
Status Granted - In Force
Filing Date 2022-09-23
Publication Date 2023-09-28
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Dong, Yihua

Abstract

The embodiments of the present application relate to the technical field of artificial intelligence. Disclosed are an intent recognition method and apparatus, and an electronic device and a storage medium. The intent recognition method comprises: acquiring first target intent sample data according to original intent sample data, wherein the first target intent sample data comprises non-long-tail input sample data and intent matching result sorting data; performing entity abstraction on the non-long-tail input sample data, so as to obtain an abstract generalized entity word; performing logic combination on the abstract generalized entity word and the intent matching result sorting data, so as to generate an intent matching generalization dictionary; constructing a first intent recognition model according to the intent matching generalization dictionary; when it is determined that input data to be subjected to recognition is non-long-tail input data, inputting said input data into the first intent recognition model; and outputting an intent recognition result of said input data according to the first intent recognition model. The technical solution in the embodiments of the present application can increase the accuracy rate of intent understanding.

IPC Classes

15.

ALZHEIMER DISEASE EVALUATION METHOD AND SYSTEM, AND DEVICE AND STORAGE MEDIUM

      
Application Number CN2022089556
Publication Number 2023/173538
Status Granted - In Force
Filing Date 2022-04-27
Publication Date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zhao, Tingting
  • Sun, Xingzhi
  • Xu, Zhuoyang

Abstract

An Alzheimer disease evaluation method and system, and a device and a storage medium. The method comprises: acquiring multi-modal illness state description data of a target object (S210); on the basis of a multi-modal attention mechanism, acquiring fused features between illness state observation data of any two aspects in the multi-modal illness state description data, and combining all the fused features, so as to obtain a multi-modal feature (S220); and inputting the multi-modal feature into a neural network evaluation model, and evaluating whether the target object is in a high-risk stage of an early onset of Alzheimer disease (S230). In the method, by means of extracting illness state observation data of different aspects of a target object, and acquiring an internal relationship between the illness state observation data of different aspects by means of a multi-modal attention mechanism, the accuracy of the evaluation method is improved; and by means of determining whether the target object is in a high-risk stage of an early onset of Alzheimer disease, an early warning can be performed on the high probability of the onset of Alzheimer disease in advance without manual screening, such that labor costs are reduced, and screening efficiency is improved.

IPC Classes

  • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for individual health risk assessment
  • G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data, for patient-specific data, e.g. for electronic patient records
  • G16H 30/20 - ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times

16.

INTELLIGENT QUESTION ANSWERING OPTIMIZATION METHOD AND APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE

      
Application Number CN2022089824
Publication Number 2023/173540
Status Granted - In Force
Filing Date 2022-04-28
Publication Date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Qiao, Yixuan

Abstract

An intelligent question answering optimization method and apparatus, a storage medium, and a computer device, relating to the technical field of natural language processing. The method mainly aims to solve two problems with existing question answering systems: the accuracy of the answer output by an existing question answering model depends heavily on a preliminarily extracted article, and the model is difficult to deploy. The method comprises: receiving a question to be answered and calculating a question vector corresponding to said question (101); obtaining a plurality of phrase vectors in a preset phrase library (102); on the basis of the question vector and the phrase vectors, using a pre-trained question answering model to calculate a matching probability between a phrase and said question (103); and outputting an answer to said question according to the matching probability (104).

IPC Classes

  • G06F 16/332 - Query formulation
  • G06F 40/35 - Discourse or dialogue representation
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
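
Steps 102-104 of the abstract above reduce to scoring every pre-encoded phrase against the question vector and returning the best match. A minimal sketch using cosine similarity as the matching score; the patent's model computes a matching probability, so cosine similarity is substituted here purely for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def best_answer(question_vec, phrase_vecs, phrases):
    """Score each phrase vector against the question vector and return
    the highest-scoring phrase as the answer."""
    scores = [cosine(question_vec, p) for p in phrase_vecs]
    return phrases[max(range(len(scores)), key=scores.__getitem__)]
```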

17.

TEXT-BASED EMOTION RECOGNITION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

      
Application Number CN2022089998
Publication Number 2023/173541
Status Granted - In Force
Filing Date 2022-04-28
Publication Date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wang, Yan
  • Ma, Jun
  • Wang, Shaojun

Abstract

A text-based emotion recognition method and apparatus, a device, and a storage medium, applicable to the technical field of artificial intelligence. The method comprises: extracting, from a text to be recognized, a feature keyword for representing the text to be recognized, comparing the feature keyword with a preset pre-recognition rule, performing first-level emotion recognition processing on the text to be recognized, and determining whether the feature keyword satisfies the pre-recognition rule; if not, inputting the text to be recognized into a first emotion recognition model for second-level emotion recognition processing, and determining whether the text to be recognized is a negative emotion text; and if so, inputting the text to be recognized into a second emotion recognition model for third-level emotion recognition processing, recognizing an emotion classification category corresponding to the text to be recognized, and generating, according to the emotion classification category, an emotion recognition result of the text to be recognized. The method employs multi-level emotion recognition to perform hierarchical detection on the text to be recognized, so that the accuracy and efficiency of emotion recognition can be improved simultaneously.

18.

METHOD AND APPARATUS FOR TRAINING TEXT RECOGNITION MODEL, AND COMPUTER DEVICE AND STORAGE MEDIUM

      
Application number CN2022090160
Publication number 2023/173546
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY(SHENZHEN)CO.,LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhu, Yi
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the technical field of natural language processing of artificial intelligence technology. Provided in the present application are a method and apparatus for training a text recognition model, and a computer device and a storage medium. The method comprises: performing random augmentation processing on a first image, so as to obtain a plurality of second images; marking the first image and the plurality of second images as reference images; acquiring a text feature of text information in each reference image, and calculating the similarity of the text features of every two reference images; taking two reference images, the similarity between which is greater than a preset similarity threshold, as a reference image pair, and inputting the reference image pair into a neural network model for training; acquiring a training result after the neural network model is trained, and determining whether the training result meets a requirement; and if so, taking the trained neural network model as a text recognition model. In this way, the data volume of training data is increased by means of a data augmentation processing mode, such that the recognition accuracy of a text recognition model is improved.

IPC Classes

  • G06V 30/413 - Classification of content, e.g. text, photographs or tables
  • G06V 30/16 - Image preprocessing

19.

DATA EQUALIZATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022090170
Publication number 2023/173548
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wang, Yan
  • Xie, Lin
  • Ma, Jun
  • Wang, Shaojun

Abstract

Disclosed in the present application are a data equalization method and apparatus, and an electronic device and a storage medium. The method comprises: converting, into a numerical value, a variable value included in each piece of data in a table, wherein each piece of data comprises an independent variable value and a target variable value; dividing the data in the table into a majority class and a minority class according to the target variable value; clustering data in the majority class, and performing undersampling extraction on the data in the majority class according to the data proportion of each cluster, so as to obtain an undersampling result of the majority class; performing oversampling extraction on data in the minority class by using a preset random perturbation policy, so as to obtain an oversampling result of the minority class; and combining the undersampling result and the oversampling result, so as to obtain equalized data. Undersampling extraction is performed on clustered data in the majority class, such that the extracted data is strongly representative of the majority class. In addition, oversampling extraction is performed by means of a random perturbation policy, such that the overfitting of subsequent model training that would be caused by simple repetition of data in the minority class can be avoided.
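Assuming cluster labels for the majority class come from a separate clustering step (e.g. k-means), a minimal sketch of the sampling policy might look like this; the function name and the perturbation range are illustrative, not taken from the application.

```python
import random

def equalize(majority, clusters, minority, n_target, seed=0):
    # majority: list of numeric feature vectors; clusters: parallel list of
    # cluster ids for the majority class; n_target: desired size per class.
    rng = random.Random(seed)
    # Undersample the majority class in proportion to each cluster's size,
    # so the kept sample stays representative of the whole class.
    by_cluster = {}
    for row, cid in zip(majority, clusters):
        by_cluster.setdefault(cid, []).append(row)
    under = []
    for rows in by_cluster.values():
        k = round(n_target * len(rows) / len(majority))
        under += rng.sample(rows, min(k, len(rows)))
    # Oversample the minority class with small random perturbations rather
    # than plain duplication, to reduce overfitting downstream.
    over = list(minority)
    while len(over) < n_target:
        row = rng.choice(minority)
        over.append([v + rng.uniform(-0.05, 0.05) for v in row])
    return under + over
```

The perturbation step is why the oversampled minority rows are near-duplicates rather than exact copies, which is the overfitting safeguard the abstract describes.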

IPC Classes

  • G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
  • G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
  • G06Q 40/02 - Banking, e.g. interest calculation or account maintenance

20.

CROSS-DOMAIN DATA RECOMMENDATION METHOD AND APPARATUS, AND COMPUTER DEVICE AND MEDIUM

      
Application number CN2022090364
Publication number 2023/173550
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Hou, Changyu

Abstract

A cross-domain data recommendation method and apparatus, and a computer device and a medium. The method comprises: acquiring a plurality of pieces of source domain data and a plurality of pieces of target domain data (S101); inputting the plurality of pieces of source domain data and the plurality of pieces of target domain data into a pre-trained cross-domain data recommendation model, so as to determine, from the plurality of pieces of target domain data and according to the plurality of pieces of source domain data, data to be recommended, wherein the pre-trained cross-domain data recommendation model is generated by means of performing training on the basis of a knowledge graph and user data, and the knowledge graph is constructed according to a plurality of pieces of historical source domain data (S102); and outputting the data to be recommended, which corresponds to the plurality of pieces of source domain data, and pushing the data to be recommended to a corresponding client (S103). A topological structure formed by the relationship between users and products in different domains is represented by means of a knowledge graph; in addition, the influence of user data on results is combined, so that a more accurate source domain embedding vector is obtained, the precision of a model after training is higher, and the accuracy of data recommendation is improved.

IPC Classes

  • G06F 16/335 - Filtering based on additional data, e.g. user or group profiles

21.

SHORT ADDRESS GENERATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM

      
Application number CN2022090444
Publication number 2023/173551
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO.,LTD. (China)
Inventor(s) He, Zhonglin

Abstract

The present application relates to data processing technologies. Disclosed is a short address generation method, comprising: acquiring a number set, sequentially converting each number in the number set into a non-repeated character string of a preset length, and taking the character string obtained by means of conversion as the short address of the corresponding number; storing the short addresses in a preset memory queue; and when a long address conversion request is received, sequentially extracting a short address from the preset memory queue, and returning the extracted short address for the corresponding long address conversion request. Further provided in the present application are a short address generation apparatus, an electronic device, and a computer-readable storage medium. By means of the present application, long address conversion requests with large data volumes can be handled, and the overhead of concurrently generating short addresses is reduced, thereby improving user experience and system stability.
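One common way to turn distinct numbers into non-repeating fixed-length strings, and to pre-fill a memory queue with them, is base-62 encoding. The sketch below is an illustrative assumption, since the application does not specify the alphabet or string length.

```python
import string
from collections import deque

ALPHABET = string.digits + string.ascii_letters  # 62 distinct characters

def to_short(n, length=6):
    # Fixed-length base-62 representation of n; distinct numbers in
    # range(62**length) always yield distinct strings.
    chars = []
    for _ in range(length):
        n, r = divmod(n, 62)
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

# Pre-fill an in-memory queue from a number set, then pop one short
# address per incoming long address conversion request.
queue = deque(to_short(n) for n in range(1000, 2000))

def handle_request(long_url):
    return queue.popleft()
```

Because the addresses are generated ahead of time, a request only pays for a queue pop, which is what keeps concurrent conversion cheap.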

22.

CLUSTERING ALGORITHM-BASED PEDESTRIAN ABNORMAL BEHAVIOR DETECTION METHOD AND APPARATUS, AND STORAGE MEDIUM

      
Application number CN2022090716
Publication number 2023/173553
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhai, You
  • Zhou, Chenghao
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to artificial intelligence technology, and provides a clustering algorithm-based pedestrian abnormal behavior detection method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining a pedestrian image data set comprising a plurality of pieces of pedestrian image data; performing pose extraction on the pedestrian image data set on the basis of a pre-trained pose extraction network model to obtain a pedestrian pose vector corresponding to each piece of pedestrian image data; performing clustering iterative processing on the basis of the pedestrian pose vector corresponding to each piece of pedestrian image data until an iteration ending condition is met; and determining a pedestrian pose behavior corresponding to a pedestrian pose vector outside a second pedestrian pose vector group as an abnormal behavior. In embodiments of the present application, there is no need to mark which action types are abnormal behaviors, a relatively isolated pedestrian pose vector that is not clustered is determined as an abnormal pedestrian pose vector by means of a clustering algorithm, an abnormal behavior in a pedestrian image data set can be effectively detected, and the accuracy of pedestrian abnormal behavior detection is improved.
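The final flagging step can be sketched as a distance test against cluster centroids. The centroids are assumed to come from the iterative clustering described above; the threshold and names are illustrative.

```python
def detect_outliers(vectors, centroids, threshold):
    # A pose vector whose distance to every cluster centroid exceeds the
    # threshold stays un-clustered and is flagged as abnormal behaviour.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [i for i, v in enumerate(vectors)
            if min(dist(v, c) for c in centroids) > threshold]
```

No action labels are needed: a vector is abnormal simply because it is far from every dense cluster of normal poses.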

IPC Classes

  • G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

23.

IMAGE PROCESSING METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022090742
Publication number 2023/173557
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Wang, Yifei
  • Shu, Chang
  • Chen, Youxin

Abstract

The embodiments of the present application belong to the technical field of artificial intelligence. Provided are an image processing method and apparatus, and an electronic device and a storage medium. The method comprises: acquiring an original image; performing grayscale processing on the original image, so as to obtain a grayscale image; comparing a pixel value corresponding to the grayscale image with a preset initial threshold value, and then performing binarization processing on the grayscale image according to a comparison result, so as to obtain a binarized image; performing index calculation on the binarized image, so as to obtain index data; and adjusting the preset initial threshold value to update the index data until a preset convergence condition is met. By means of the embodiments of the present application, a better image effect can be ensured.

24.

TEXT ERROR CORRECTION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

      
Application number CN2022089175
Publication number 2023/173533
Status Granted - in force
Filing date 2022-04-26
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Jiang, Peng

Abstract

The present application relates to the technical field of language processing. Disclosed are a text error correction method and apparatus, a device, and a storage medium. The method comprises: preprocessing text data to be error-corrected to obtain text information; then inputting the text information into a pre-trained text error correction model for text error correction processing to obtain a text error correction result corresponding to the text information; calculating, according to a minimum edit distance algorithm, a minimum edit distance between characters comprised in the text information and characters comprised in the text error correction result corresponding to the text information; and performing mapping processing on the characters comprised in the text information and the characters comprised in the text error correction result corresponding to the text information according to the minimum edit distance to obtain a text error correction opinion. The text error correction opinion is obtained by calculating the minimum edit distance, so as to reflect a relationship between incorrect content and correct content, and the position of the incorrect content in a text is provided, so that a user can perform an adjustment in real time.
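The minimum edit distance referred to above is the standard Levenshtein distance, which a minimum edit distance algorithm computes by dynamic programming:

```python
def edit_distance(a, b):
    # dp[i][j] is the minimum number of insertions, deletions and
    # substitutions needed to turn a[:i] into b[:j].
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete a[i-1]
                           dp[i][j - 1] + 1,        # insert b[j-1]
                           dp[i - 1][j - 1] + cost) # substitute / keep
    return dp[len(a)][len(b)]
```

Tracing back through the same table yields the character-level alignment between the original text and the corrected text, which is what lets the method point at the position of each incorrect character.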

IPC Classes

  • G06F 40/232 - Orthographic correction, e.g. spell checking or vowelisation
  • G06F 16/31 - Indexing; Data structures therefor; Storage structures

25.

CHEMICAL FORMULA IDENTIFICATION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

      
Application number CN2022089509
Publication number 2023/173536
Status Granted - in force
Filing date 2022-04-27
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhu, Yi
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the field of artificial intelligence, and in particular to a chemical formula identification method and apparatus, a computer device, and a storage medium. The method comprises: obtaining an image to be detected comprising a chemical formula; inputting the image to be detected into a multi-target detection model to obtain a chemical formula region image; inputting the chemical formula region image into a chemical formula identification model to obtain a candidate chemical formula in the chemical formula region image; performing existence check on the candidate chemical formula according to a pre-established chemical formula database to obtain a check result; and when it is determined, according to the check result, that the candidate chemical formula exists, determining the candidate chemical formula as an identified chemical formula. In addition, the present application also relates to blockchain technology, and the image to be detected can be stored in a blockchain. The present application improves the efficiency of chemical formula identification.

IPC Classes

  • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
  • G06V 30/146 - Aligning or centring of the image pick-up or image-field
  • G06V 30/40 - Document-oriented image-based pattern recognition

26.

TEXT SENTIMENT ANALYSIS METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM

      
Application number CN2022089530
Publication number 2023/173537
Status Granted - in force
Filing date 2022-04-27
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Yuan, Chao
  • Li, Min
  • Xu, Jiefu

Abstract

A text sentiment analysis method and apparatus, a device and a storage medium, related to the technical field of semantic recognition. The method comprises: preprocessing data to be analyzed to obtain text information (101); determining whether the length of the text information is greater than a preset length threshold (102); if yes, calling a preset text abstract extraction algorithm to perform reduced processing on the text information to obtain abstract data of the text information (103); and inputting the abstract data into a pre-trained sentiment analysis model for sentiment analysis to obtain sentiment information in the data to be analyzed (104). Therefore, the problem in the prior art that sentiment recognition is affected due to incomplete information after truncation processing is performed on a long text is solved.

IPC Classes

  • G06F 16/31 - Indexing; Data structures therefor; Storage structures

27.

VIDEO CONTENT PROCESSING METHOD AND SYSTEM, AND TERMINAL AND STORAGE MEDIUM

      
Application number CN2022089559
Publication number 2023/173539
Status Granted - in force
Filing date 2022-04-27
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Pan, Yunqian
  • Ye, Jingxian
  • Xi, Yue
  • Bao, Xiaoxi
  • Chen, Youxin

Abstract

A video content processing method and system, and a terminal and a storage medium. The method comprises: extracting audio signals and text information from videos to be processed; extracting video images from said videos, performing image analysis on the video images, so as to determine the video types of said videos, wherein the video types involve a PPT video, a single-person video and a multi-person video; and on the basis of the audio signals and the text information, extracting, by using a multi-modal video processing model, highlight clips from said videos of the different types, extracting, by using a deep neural network model, titles, summaries and tag information, which correspond to the highlight clips, and generating a short-video clipping result of said videos. The present application can generate a plurality of finely clipped short videos in one click, thereby greatly improving the efficiency of clipping and shortening the period of short-video production.

IPC Classes

  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs

28.

VISUAL MONITORING METHOD AND SYSTEM FOR DATA LINK, AND DEVICE AND MEDIUM

      
Application number CN2022090085
Publication number 2023/173542
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Yang, Xin
  • Chen, Youxin

Abstract

A visual monitoring method and system for a data link, and a device (1) and a medium. The visual monitoring method comprises: tracking traffic data in a data link in real time, and constructing a topological relational graph, which includes traffic nodes, in the data link (S1); obtaining state information of each traffic node according to the tracked traffic data, and visually configuring the state information of each traffic node at the corresponding traffic node in the topological relational graph (S2); in view of the tracked traffic data and state information of the traffic nodes, obtaining overview data information of the data link and an overview data chart of same (S3); and pushing the overview data chart and the topological relational graph to a front end, and generating, at the front end, a coarse-grained image-text interface and a fine-grained image interface, which can be switched between each other (S4). The operation state and node congestion condition of a data link during the whole data processing process are displayed in a graphical manner, and both the overall information and partial information of the data link are displayed.

IPC Classes

  • H04L 45/02 - Topology update or discovery
  • H04L 47/10 - Flow control; Congestion control

29.

DATA CLASSIFICATION MODEL TRAINING METHOD AND APPARATUS, CLASSIFICATION METHOD AND APPARATUS, DEVICE, AND MEDIUM

      
Application number CN2022090105
Publication number 2023/173543
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wang, Yan
  • Ma, Jun
  • Wang, Shaojun

Abstract

The present application relates to a data classification model training method and apparatus, a classification method and apparatus, a device, and a medium. The training method comprises: dividing a plurality of data samples into a minority class sample set and a majority class sample set; undersampling from the majority class sample set to obtain an undersampling set; performing first iterative training on a classification model on the basis of a training set composed of the minority class sample set and the undersampling set to obtain a classification model meeting a first preset condition; if the model does not meet a second preset condition, performing oversampling on the minority class sample set on the basis of the model, and adding obtained samples into the training set; and performing second iterative training on the model on the basis of the updated training set to obtain a data classification model meeting the second preset condition. According to the training method in the present application, data obtained by means of undersampling and data obtained by means of oversampling are used for training the classification model, data used for training the classification model has good balance, a good training effect is achieved, and the classification accuracy of the trained classification model is high.

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

30.

PERSON RE-IDENTIFICATION METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE, AND DEVICE AND STORAGE MEDIUM

      
Application number CN2022090156
Publication number 2023/173544
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY(SHENZHEN)CO.,LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhu, Yi
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the technical field of artificial intelligence. Disclosed are a person re-identification method and apparatus based on artificial intelligence, and a device and a storage medium. The method comprises: inputting a target image into a preset feature extraction model, so as to obtain a feature vector to be analyzed that is output by each feature output module; inputting, into a preset classification prediction module, each feature vector to be analyzed, so as to perform classification probability prediction, and obtaining a classification probability prediction result; according to a target feature vector and a preset number of similar images, determining sets of similar human body images from a preset human body image library, wherein the target feature vector is any feature vector to be analyzed; for each human body image in each set of similar human body images, performing weighted summation on each classification probability prediction result and the weight of each classification prediction module, so as to obtain a soft voting score; and determining a person re-identification result according to the soft voting scores. Attention is paid to both low-level features, such as the color and texture of clothes, and high-level global semantic information, thereby improving the accuracy of person re-identification.
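The weighted soft-voting step (weighted summation of each module's classification probabilities) can be illustrated as below; the probability rows and weights in the test are placeholder values, not figures from the application.

```python
def soft_vote(prob_rows, weights):
    # prob_rows[k] is classification module k's class-probability vector
    # for one candidate image; weights[k] is that module's voting weight.
    n_classes = len(prob_rows[0])
    scores = [sum(w * row[c] for row, w in zip(prob_rows, weights))
              for c in range(n_classes)]
    # Return the per-class soft voting scores and the winning class index.
    return scores, max(range(n_classes), key=scores.__getitem__)
```

Weighting lets modules that capture high-level semantics count for more (or less) than those capturing low-level colour and texture cues.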

IPC Classes

  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition

31.

METHOD AND APPARATUS FOR GENERATING REFERENCE IMAGE OF ULTRASOUND IMAGE, AND DEVICE AND STORAGE MEDIUM

      
Application number CN2022090159
Publication number 2023/173545
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY(SHENZHEN)CO.,LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Hu, Haonan
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the field of image feature processing, and in particular to a method and apparatus for generating a reference image of an ultrasound image, and a computer device and a storage medium. The method comprises: acquiring pre-configured mask information; acquiring an original ultrasound image; performing product processing on the original ultrasound image and the mask information, so as to obtain a feature region ultrasound image; inputting the feature region ultrasound image into a deep learning network, so as to obtain a first feature variable; inputting the original ultrasound image into the deep learning network, so as to obtain a second feature variable; obtaining a shape feature of an image according to the first feature variable, and obtaining a fine-grained feature of the image according to the second feature variable; and generating a reference image according to the shape feature and the fine-grained feature. By means of the present application, an accurate reference image with reliable interpretability can be generated, thereby improving the reliability of a prediction result.

IPC Classes

  • G06T 5/00 - Image enhancement or restoration
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

32.

TEXT IMAGE MATCHING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090161
Publication number 2023/173547
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY(SHENZHEN)CO.,LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhai, You
  • Zhou, Chenghao
  • Shu, Chang
  • Chen, Youxin

Abstract

The present disclosure relates to the technical field of artificial intelligence. Disclosed are a text image matching method and apparatus, a device, and a storage medium. The method comprises: identifying the type of an object to be matched to obtain a type identification result; determining a candidate object set from a preset candidate object library according to the type identification result; extracting a fusion feature according to the object to be matched and each candidate object in the candidate object set; performing feature extraction on each candidate object in the candidate object set to obtain candidate object features; performing similarity calculation on the fusion feature and the candidate object feature corresponding to the same candidate object to obtain a single-object similarity; and determining, according to each single-object similarity and the candidate object set, a target matching result corresponding to the object to be matched. A direct matching operation between image features and text features is avoided, and the matching precision can be improved by adopting fusion features for text image matching.

IPC Classes

  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

33.

ESTABLISHMENT METHOD FOR TARGET DETECTION MODEL, APPLICATION METHOD FOR TARGET DETECTION MODEL, AND DEVICE, APPARATUS AND MEDIUM

      
Application number CN2022090664
Publication number 2023/173552
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Jia, Yunshu
  • Zhou, Chenghao
  • Shu, Chang
  • Chen, Youxin

Abstract

Disclosed in the present application are an establishment method for a target detection model, an application method for a target detection model, and a device, an apparatus and a medium, which can be used in the field of image recognition. The establishment method comprises: acquiring a basic target detection network, replacing an ordinary convolutional layer of the basic target detection network with a depthwise separable convolutional layer, and adding a multi-scale feature fusion mechanism into the basic target detection network so as to obtain an initial target detection model; acquiring a preset digital image, and inputting the preset digital image into the initial target detection model; performing feature extraction on the preset digital image by means of the depthwise separable convolutional layer of the initial target detection model, and outputting a feature map; performing target detection on the feature map by means of the multi-scale feature fusion mechanism of the initial target detection model so as to obtain an intermediate target detection model; and performing optimization processing on the intermediate target detection model by using a NetAdapt algorithm and a pruning algorithm so as to obtain a final target detection model. By means of the present application, the efficiency of target detection of an embedded device can be effectively improved.
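Replacing ordinary convolutional layers with depthwise separable ones shrinks the model sharply, which is the point of the substitution above. The arithmetic below shows parameter counts for an illustrative 3x3 layer with 64 input and 128 output channels (bias terms ignored); the layer sizes are examples, not values from the application.

```python
def conv_params(k, c_in, c_out):
    # Standard k x k convolution: one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel;
    # pointwise: a 1 x 1 convolution mixing c_in channels into c_out.
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 64, 128)                   # 73728 parameters
separable = depthwise_separable_params(3, 64, 128)   # 8768 parameters
```

For this layer the separable variant uses roughly an eighth of the parameters, which is why the substitution suits embedded devices.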

IPC Classes

  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
  • G06N 3/02 - Neural networks
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

34.

INAPPROPRIATE AGENT LANGUAGE IDENTIFICATION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022090717
Publication number 2023/173554
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Wang, Yan
  • Cheng, Yiji
  • Ma, Jun
  • Wang, Shaojun

Abstract

The present application belongs to the field of artificial intelligence, and provided are an inappropriate agent language identification method and apparatus, an electronic device and a storage medium. The method comprises: acquiring agent language information for training, splitting the agent language information for training to obtain agent side single sentences, and performing text preprocessing on the agent side single sentences; on the basis of the preprocessed agent side single sentences, training by means of a three-layer BERT model so as to obtain an inappropriate agent language identification model; inputting agent language information to be identified into the inappropriate agent language identification model for reasoning so as to obtain the probability distribution of a target classification, wherein the agent language information to be identified is agent language information in a credit card service and sales scenario; and determining inappropriate language in the agent language information to be identified according to the probability distribution of the target classification. The embodiments of the present application are optimized at both the model and data levels, are applied to inappropriate language identification, can improve the efficiency with which quality inspection personnel identify inappropriate language in a business scenario, and have a certain popularization value.

35.

MODEL TRAINING METHOD AND APPARATUS, TEXT CLASSIFICATION METHOD AND APPARATUS, DEVICE, AND MEDIUM

      
Application number CN2022090737
Publication number 2023/173555
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wang, Yan
  • Xie, Lin
  • Ma, Jun
  • Wang, Shaojun

Abstract

A model training method and apparatus, a text classification method and apparatus, a device, and a storage medium, relating to the technical field of artificial intelligence. The training method comprises: obtaining original training data, the original training data comprising first original data and second original data (S101); performing up-sampling processing on the second original data to obtain initial training data (S102); performing enhancement processing on the initial training data according to a preset enhancement parameter to obtain enhanced training data (S103); encoding the enhanced training data to obtain a target word embedding vector (S104); performing disturbance processing on the target word embedding vector to obtain target training data (S105); and training a preset neural network model according to the first original data and the target training data to obtain a target classification model, the target classification model being a text classification model and being used for classifying target text data (S106). The present method can improve the recognition accuracy of a model on sample text data and the training effect of the model.

IPC classes

  • G06F 16/35 - Clustering; Classification
  • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
  • G06F 40/295 - Named entity recognition
  • G06N 3/04 - Architecture, e.g. interconnection topology

36.

DEEP LEARNING-BASED NAMED ENTITY RECOGNITION METHOD AND APPARATUS, DEVICE, AND MEDIUM

      
Application number CN2022090740
Publication number 2023/173556
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-21
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Jiang, Peng

Abstract

The present application relates to the technical fields of artificial intelligence and natural language processing, and provides a deep learning-based named entity recognition method and apparatus, a device, and a medium. The method comprises: recognizing a plurality of candidate spans from a sentence to be processed, so as to recognize all possible candidate spans with the length not exceeding a preset recognition length threshold, and then forming a candidate span set, thereby solving the problem that a nested entity with a relatively long span cannot be recognized; screening the candidate spans in the candidate span set to remove low-quality candidate spans, so as to obtain at least one first forward span, thereby reducing the subsequent calculation overhead; predicting a boundary deviation value corresponding to the first forward span by means of a first neural network to obtain a target span; and predicting an entity classification corresponding to the target span by means of a second neural network. In this case, the span boundary can be finely adjusted on the basis of the predicted boundary deviation value, such that the final target span overlaps a real span as much as possible to reach or close to the ideal state of complete overlapping, thereby improving the entity recognition accuracy.
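
The span-enumeration and boundary-adjustment steps can be sketched as below; this is a minimal illustration in which the abstract's two neural networks (span scoring and offset prediction) are replaced by given inputs:

```python
def enumerate_spans(tokens, max_len):
    """All candidate spans of at most max_len tokens, as (start, end) pairs
    with an inclusive end index; enumerating every span is what lets nested
    entities of different lengths be considered."""
    return [(start, end)
            for start in range(len(tokens))
            for end in range(start, min(start + max_len, len(tokens)))]

def adjust_span(span, offsets):
    """Apply boundary deviation values to fine-tune a forward span (in the
    patent these offsets are predicted by the first neural network)."""
    (start, end), (d_start, d_end) = span, offsets
    return (start + d_start, end + d_end)
```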

IPC classes

37.

METHOD AND APPARATUS FOR PREDICTING PROPERTIES OF DRUG MOLECULE, STORAGE MEDIUM, AND COMPUTER DEVICE

      
Application number CN2022089687
Publication number 2023/168810
Status Granted - in force
Filing date 2022-04-27
Publication date 2023-09-14
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wang, Jun
  • Gao, Peng
  • Sun, Ning
  • Xie, Guotong

Abstract

The present application discloses a method and apparatus for predicting the properties of a drug molecule, a storage medium, and a computer device, and relates to the technical fields of artificial intelligence and digital medical treatment. The method comprises: acquiring a drug molecule to undergo prediction, and performing mode conversion on the molecular structure of the drug molecule, obtaining a multimodal drug molecule structure, said multimodal drug molecule structure comprising a drug molecule sequence, a drug molecule map, a drug molecule image, and a drug molecule fingerprint; by means of a pre-trained multimodal feature extraction model, performing feature extraction on the multimodal drug molecule structure, obtaining a multimodal drug molecule feature vector; converting the multimodal drug molecule feature vector into a multimodal high-dimensional feature vector, and performing feature fusion on the multimodal high-dimensional feature vector, obtaining a fused feature vector of the drug molecule; and inputting the fused feature vector of the drug molecule into a pre-trained drug molecule property prediction model, obtaining a property prediction result of the drug molecule.

IPC classes

  • G16C 20/20 - Identification of molecular entities, parts thereof or of chemical compositions
  • G16C 20/50 - Molecular design, e.g. of drugs
  • G16C 20/70 - Machine learning, data mining or chemometrics
  • G06N 3/04 - Architecture, e.g. interconnection topology

38.

PICTURE-TEXT MODEL GENERATION METHOD AND APPARATUS BASED ON MULTIPLE EXPERTS, AND DEVICE AND MEDIUM

      
Application number CN2022089730
Publication number 2023/168811
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-09-14
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Qiao, Yixuan

Abstract

The present application relates to the field of artificial intelligence. Disclosed are a picture-text model generation method and apparatus based on multiple experts, and a storage medium and a computer device. The method comprises: acquiring a training sample set; determining an initial picture vector on the basis of a sample picture in a training sample, and inputting the initial picture vector into an initial picture expert module, so as to obtain a first target vector; determining an initial text vector on the basis of sample text in the training sample, and inputting the initial text vector into an initial text expert module, so as to obtain a second target vector; determining a picture-text target vector according to the first target vector and the second target vector, inputting the picture-text target vector into an initial picture-text expert module, and obtaining a first predicted score on the basis of an output result and a fully-connected layer; and determining a model loss value of a preset picture-text model on the basis of the first predicted score and a real label, and training the preset picture-text model on the basis of the model loss value, so as to obtain a picture-text model based on multiple experts.

IPC classes

  • G06F 16/51 - Indexing; Data structures therefor; Storage structures
  • G06F 16/532 - Query formulation, e.g. graphical querying
  • G06F 16/332 - Query formulation

39.

SENTENCE VECTOR GENERATION METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM

      
Application number CN2022089817
Publication number 2023/168814
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-09-14
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Chen, Hao

Abstract

The present application discloses a sentence vector generation method and apparatus, a computer device, and a storage medium, which relate to the technical field of artificial intelligence, and can improve the accuracy of sentence vector generation. The method comprises: performing semantic segmentation on obtained initial sentence text to obtain segmented sentence text; and utilizing a pre-constructed sentence vector generation model to obtain a vector representation of the sentence text by means of encoding processing used to predict the context of the sentence text, the sentence vector generation model being an encoding layer of a trained sequence-to-sequence model. The present application is suitable for book recommendation on the basis of sentence vectors of book texts.

IPC classes

  • G06F 40/30 - Semantic analysis
  • G06F 40/289 - Phrasal analysis, e.g. finite-state techniques or chunking
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06F 16/335 - Filtering based on additional data, e.g. user or group profiles

40.

OPTIMIZATION METHOD AND APPARATUS FOR SEARCH SYSTEM, AND STORAGE MEDIUM AND COMPUTER DEVICE

      
Application number CN2022089732
Publication number 2023/168812
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-09-14
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Qiao, Yixuan

Abstract

The present application relates to the field of artificial intelligence. Disclosed are an optimization method and apparatus for a search system, and a storage medium and a computer device. The method comprises: on the basis of a preset recall module, respectively calculating a first score between each preset question in a preset question set and each preset article in a preset article database, and according to the first scores, determining a first number of target articles having relatively high similarities with each preset question; on the basis of a preset sorting module, respectively calculating a second score between any preset question and target articles corresponding to the preset question; determining a first KL divergence value according to the first scores and the second scores; and adjusting parameters of the preset recall module and the preset sorting module on the basis of the first KL divergence value, so as to obtain an optimized search system. By means of the present application, the accuracy of a recall module recalling articles, and the accuracy of a sorting module sorting the recalled articles can be improved.
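
The KL-divergence alignment between the recall module's first scores and the sorting module's second scores can be sketched as below; the softmax normalisation of raw scores is an assumption, since the abstract does not state how scores are turned into distributions:

```python
import math

def softmax(scores):
    """Turn raw scores into a probability distribution (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions; zero iff p equals q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def alignment_loss(rank_scores, recall_scores):
    """The KL term that aligns the recall module's score distribution with
    the (typically sharper) ranking module's distribution over the same
    candidate articles."""
    return kl_divergence(softmax(rank_scores), softmax(recall_scores))
```

Minimising this term with respect to both modules' parameters is what the abstract describes as adjusting the recall and sorting modules jointly.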

IPC classes

41.

TIMBRE MODEL CONSTRUCTION METHOD, TIMBRE CONVERSION METHOD, APPARATUS, DEVICE, AND MEDIUM

      
Application number CN2022089770
Publication number 2023/168813
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-09-14
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zhang, Jian
  • Jiang, Huijun
  • Xu, Wei
  • Chen, Youxin
  • Xiao, Jing

Abstract

A musical instrument timbre conversion model construction method and apparatus, a musical instrument timbre conversion method, a computer device, and a medium. The musical instrument timbre conversion model construction method comprises: converting first sample audio vector sequences into second sample audio vector sequences (S102), and then converting the second sample audio vector sequences into input sample audio vector sequences (S106); and updating, by means of first loss values and first scores obtained by means of calculation, model parameters of a model to be trained (S112), so as to perform training to obtain a musical instrument timbre conversion model. According to the method, the model is trained by means of the first loss values and the first scores, and thus training efficiency is high; and the accuracy of the musical instrument timbre conversion model is improved.

IPC classes

  • G10L 21/007 - Changing voice quality, e.g. pitch or formants, characterised by the process used
  • G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the analysis technique using neural networks
  • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use

42.

TRAINING METHOD AND APPARATUS FOR MONOCULAR DEPTH ESTIMATION MODEL, DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090166
Publication number 2023/168815
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-14
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Hu, Haonan
  • Shu, Chang
  • Chen, Youxin

Abstract

A training method and apparatus for a monocular depth estimation model, a device, and a storage medium. The method comprises: acquiring an image to be predicted (S1); and inputting the image into a preset target monocular depth estimation model for monocular depth estimation to obtain a target depth image corresponding to the image, a training method for the target monocular depth estimation model comprising: performing fine-tuning training on a preset monocular depth estimation interpretable model by employing a preset training sample set and a target loss function, wherein the target loss function is a loss function obtained on the basis of depth error loss; and using the monocular depth estimation interpretable model which has undergone fine-tuning training as the target monocular depth estimation model (S2). The monocular depth estimation interpretable model undergoes fine-tuning training by means of a loss function obtained on the basis of depth error loss, thereby improving the accuracy of a network on the basis of an interpretability method.

IPC classes

  • G06T 7/50 - Depth or shape recovery

43.

METHOD AND APPARATUS FOR DETERMINING SIMILARITY BETWEEN VIDEO AND TEXT, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090656
Publication number 2023/168818
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-14
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the field of artificial intelligence, and provides a method and apparatus for determining the similarity between a video and text, an electronic device, and a storage medium. The method comprises: obtaining a video and corresponding text information, and performing encoding on the video and the text information to obtain encoding feature information; inputting the encoding feature information into an improved T-Transformer model to obtain global information and local information; respectively inputting the global information and the local information into corresponding Attention-FA modules to obtain global features and local features; inputting the global features and the local features as common input into a Contextual Transformer model, and obtaining a video feature and a text feature by means of feature merging; and determining the similarity between the video and the text information according to the video feature and the text feature. By converting videos and text into a same comparison space, the similarity between two different things is calculated, so that a target video is obtained according to text matching.

IPC classes

  • G06V 20/40 - Image or video recognition or understanding: scene-specific elements in video content

44.

MULTI-LEAD ELECTROCARDIOGRAM SIGNAL PROCESSING METHOD, DEVICE, APPARATUS, AND STORAGE MEDIUM

      
Application number CN2022089174
Publication number 2023/165005
Status Granted - in force
Filing date 2022-04-26
Publication date 2023-09-07
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zhang, Nan
  • Wang, Jianzong
  • Qu, Xiaoyang

Abstract

A multi-lead electrocardiogram signal processing method, device, apparatus, and storage medium used for improving the accuracy and richness of electrocardiosignal extraction. The multi-lead electrocardiogram signal processing method comprises: acquiring a to-be-processed multi-lead electrocardiogram signal, the to-be-processed multi-lead electrocardiogram signal being used for indicating heart detection information for a target object; performing data preprocessing on the to-be-processed multi-lead electrocardiogram signal to obtain processed electrocardiogram data; performing data framing processing on the processed electrocardiogram data to obtain multi-dimensional lead electrocardiogram data; and performing feature extraction and feature aggregation processing on the multi-dimensional lead electrocardiogram data by means of a deep neural network model incorporating a dual attention mechanism to obtain target electrocardiogram feature data.
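
The data-framing step, which turns each preprocessed lead into multi-dimensional lead data, can be sketched as follows; the frame length and hop size are illustrative parameters not given in the abstract:

```python
def frame_signal(signal, frame_len, hop):
    """Split one lead's samples into (possibly overlapping) frames; trailing
    samples that do not fill a whole frame are dropped."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def frame_leads(leads, frame_len, hop):
    """Frame every lead, giving data indexed [lead][frame][sample] -- the
    multi-dimensional lead data fed to the feature extractor."""
    return [frame_signal(lead, frame_len, hop) for lead in leads]
```

For a standard 12-lead ECG, `frame_leads` would receive twelve sample sequences and hand the resulting tensor-shaped structure to the dual-attention network described in the abstract.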

IPC classes

  • A61B 5/318 - Heart-related electrical modalities, e.g. electrocardiography [ECG]
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

45.

CROP GROWTH MONITORING METHOD AND SYSTEM, DEVICE, AND MEDIUM

      
Application number CN2022089634
Publication number 2023/165007
Status Granted - in force
Filing date 2022-04-27
Publication date 2023-09-07
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Yang, Xin
  • Xu, Wei
  • Jiang, Kaiying

Abstract

A crop growth monitoring method and system, a device, and a medium, relating to the technical field of artificial intelligence. The crop growth monitoring method comprises: obtaining normalized difference vegetation indexes of at least two sub-regions of a crop in a current region (S1); determining the growth state of the crop according to the obtaining time of the normalized difference vegetation indexes, wherein the growth state at least comprises a sowing period, a growing period, and a maturation period (S2); comparing, according to the growth state, the normalized difference vegetation indexes of the sub-regions with at least one reference range comprised in the growth state of the crop, and determining and counting the number of corresponding sub-regions located in the reference ranges (S3); and performing grading early warning according to the counted number of sub-regions (S4). According to the method, a plurality of normalized difference vegetation indexes of the crop in a current region are compared with preset reference ranges, and the growth of the crop is monitored according to the number of corresponding sub-regions in the reference ranges. The visualization of alert information is realized, and the user can monitor the growth state of the crop more conveniently.
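
The counting and graded-warning logic of steps S3 and S4 can be sketched as below; the NDVI reference range and the grading thresholds are illustrative assumptions, since the abstract leaves them crop- and stage-specific:

```python
def count_in_range(ndvi_by_region, low, high):
    """Number of sub-regions whose NDVI lies inside the reference range for
    the current growth stage (step S3)."""
    return sum(1 for v in ndvi_by_region.values() if low <= v <= high)

def alert_level(n_in_range, n_total):
    """Graded early warning from the share of sub-regions in range (step S4);
    the 0.9 / 0.6 cut-offs are illustrative, not taken from the patent."""
    ratio = n_in_range / n_total
    if ratio >= 0.9:
        return "normal"
    if ratio >= 0.6:
        return "watch"
    return "alert"
```

For example, with a growing-period reference range of 0.6 to 0.9, two of three sub-regions in range would produce a "watch"-level warning.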

IPC classes

46.

CONSULTATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090732
Publication number 2023/165012
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-09-07
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wang, Shipeng
  • Yao, Haishen
  • Liu, Jiarui
  • Sun, Xingzhi

Abstract

The present application belongs to the technical fields of artificial intelligence and digital healthcare, and embodiments thereof provide a consultation method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining target consultation data to undergo identification, wherein the target consultation data comprises target text data and target image data; performing vectorization on the target text data to obtain a consultation text vector; performing feature extraction on the target image data by means of a pre-trained image processing model to obtain a consultation image vector; performing combination on the consultation text vector and the consultation image vector to obtain a standard dialogue vector; and performing dialogue prediction on the standard dialogue vector by means of a pre-trained dialogue model to generate target consultation response data. An embodiment of the present application can improve consultation efficiency.

IPC classes

  • G06F 16/332 - Query formulation
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology

47.

IMAGE MATTING METHOD AND APPARATUS BASED ON IMAGE SEGMENTATION, COMPUTER DEVICE, AND MEDIUM

      
Application number CN2022089507
Publication number 2023/159746
Status Granted - in force
Filing date 2022-04-27
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhang, Yidi
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and relate to an image matting method based on image segmentation. The method comprises: inputting an obtained training image set into a pre-constructed initial image matting model, wherein the image matting model comprises an image segmentation layer and an image matting layer; segmenting images in the training image set by means of the image segmentation layer to obtain a set of preliminarily segmented images; inputting the set of preliminarily segmented images into the image matting layer to obtain finely segmented images; determining a target loss function on the basis of the finely segmented images, performing iterative updating on the initial image matting model according to the target loss function, and outputting a trained image matting model; and inputting a target image into the image matting model to obtain a matting result. The present application further provides an image matting apparatus based on image segmentation, a computer device, and a medium. In addition, the present application further relates to a blockchain technology, and the target image can be stored in a blockchain. The present application can improve the precision and accuracy of image matting.

IPC classes

  • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation

48.

METHOD AND DEVICE FOR RECOGNIZING ONLINE STATE OF USER, SERVER, AND STORAGE MEDIUM

      
Application number CN2022089997
Publication number 2023/159750
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhou, Chenghao
  • Shu, Chang
  • Chen, Youxin
  • Lu, Jin
  • Xiao, Jing

Abstract

Embodiments of the present application are suitable for the technical field of computer vision, and provide a method and device for recognizing the online state of a user, a server, and a storage medium. The method comprises: receiving a video stream formed by a camera device on a first user side continuously performing image acquisition on a first user; extracting multiple image frames from the video stream; for each of the image frames, extracting from the image frame an image box containing a both-eye image of the first user, and classifying the online state of the first user according to the image box to obtain an online state classification result corresponding to the image frame; and determining the current online state of the first user according to the multiple online state classification results corresponding to the multiple image frames. By means of the method, the online state of a user can be accurately recognized, and the effect of an online activity is ensured.

IPC classes

  • G06V 20/40 - Image or video recognition or understanding: scene-specific elements in video content
  • G06V 40/18 - Eye characteristics, e.g. of the iris
  • G06V 10/764 - Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
  • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
  • G06T 3/40 - Scaling of a whole image or part thereof
  • G06Q 50/20 - Services; Education

49.

MODEL PRUNING METHOD AND APPARATUS, COMPUTING DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090010
Publication number 2023/159751
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wang, Xiaorui
  • Zheng, Qiang

Abstract

The present application relates to the technical field of artificial intelligence. Disclosed are a model pruning method and apparatus, a computing device, and a storage medium. The method comprises: first, obtaining a weight of each filter in a convolutional neural network model and a first scaling factor of each filter, wherein the first scaling factor of each filter is a scaling factor corresponding to each filter in a batch normalization layer, and a convolutional layer where each filter is located and the batch normalization layer are adjacent network layers; then determining an importance score of each filter according to the weight of each filter and the first scaling factor of each filter; and finally, pruning the convolutional neural network model according to the importance score of each filter. According to the embodiments of the present application, relatively unimportant filters in a convolutional neural network model can be accurately selected, and the precision loss after model pruning is very small.
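
The scoring-and-pruning idea can be sketched as follows; combining the two signals as |gamma| times the L1 norm of the filter weights, and the pruning ratio, are illustrative assumptions (the abstract does not disclose the exact importance formula):

```python
def filter_importance(filter_weights, gammas):
    """Score each filter as |gamma| * L1-norm(weights), one plausible way to
    combine a filter's weights with its batch-normalization scaling factor."""
    return [abs(g) * sum(abs(w) for w in ws)
            for ws, g in zip(filter_weights, gammas)]

def filters_to_prune(scores, ratio):
    """Indices of the lowest-scoring `ratio` fraction of filters -- the
    candidates for removal from the convolutional layer."""
    k = int(len(scores) * ratio)
    return sorted(sorted(range(len(scores)), key=lambda i: scores[i])[:k])
```

A filter with large weights but a near-zero BN scaling factor still gets a low score, which matches the intuition that its output is suppressed by the adjacent batch-normalization layer.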

IPC classes

  • G06N 3/04 - Architecture, e.g. interconnection topology

50.

ANSWER GUIDANCE-BASED QUESTION GENERATION METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM

      
Application number CN2022090421
Publication number 2023/159753
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

Disclosed in the present application are an answer guidance-based question generation method and apparatus, a device and a storage medium. The method comprises: according to a paragraph text corresponding to an answer text, performing word segmentation processing on the answer text to obtain at least one first word; according to the at least one first word, performing paragraph segmentation on the paragraph text to obtain a first sub-paragraph; according to the at least one first word, analyzing and processing the first sub-paragraph to obtain at least one second word; performing dependency analysis on the at least one second word to obtain a relational graph; according to the relational graph, performing graph convolutional coding on each second word to obtain at least one graph convolution vector; coding each second word to obtain at least one word vector and at least one coding vector; and according to the at least one graph convolution vector, the at least one word vector and the at least one coding vector, performing word generation processing multiple times, and splicing at least one generated third word according to the generation time of each third word, so as to obtain a question.

IPC classes

  • G06F 16/33 - Querying
  • G06F 40/289 - Phrasal analysis, e.g. finite-state techniques or chunking
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G06N 5/02 - Knowledge representation; Symbolic representation

51.

FAKE NEWS DETECTION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090433
Publication number 2023/159755
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wu, Yuemin
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to an artificial intelligence technology, and discloses a fake news detection method, comprising: constructing a fake news image training set and a news propagation graph set on the basis of a fake news text training set, and aggregating to obtain a multi-modal training data set; constructing a multi-modal fake news detection model by using a preset neural network; training the multi-modal fake news detection model by using the multi-modal training data set so as to obtain a standard fake news detection model; and outputting, by using the standard fake news detection model, a detection result of a piece of news to be detected. In addition, the present application further relates to a blockchain technology, and a planning result can be stored in a node of a blockchain. The present application further provides a fake news detection apparatus, an electronic device, and a computer readable storage medium. The present application can solve the problem of low detection accuracy of fake news in different fields.

IPC classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

52.

PRICE DATA PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090661
Publication number 2023/159756
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Liu, Xi
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and provide a price data processing method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining original data to be predicted, wherein the original data comprises target report data and target transaction data; constructing index factor features according to the target transaction data; constructing public opinion factor features according to the target report data; screening the plurality of index factor features and the plurality of public opinion factor features to obtain a plurality of quantitative transaction features; performing feature extraction on the plurality of quantitative transaction features by means of a preset first neural network model to obtain a plurality of distributed feature vectors; and inputting the plurality of distributed feature vectors into a preset second neural network model for price prediction processing to obtain target price data. According to the technical solutions of the embodiments of the present application, the accuracy of price data prediction can be improved.

IPC classes

  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising

53.

DISPARITY MAP GENERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090665
Publication number 2023/159757
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Tang, Xiaochu
  • Zhang, Yidi
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and provide a disparity map generation method and apparatus, an electronic device, and a storage medium. The method comprises: acquiring a target image, the target image comprising a left view and a right view; performing feature extraction on the left view to obtain a plurality of left view features, and performing feature extraction on the right view to obtain a plurality of right view features; performing image segmentation processing on the left view features to obtain a first image feature; performing combination processing on the left view features, the first image feature, and the right view features to obtain a target cost volume; performing disparity estimation on the target cost volume by means of a preset three-dimensional convolutional hourglass model to obtain an estimated disparity map; and performing semantic refinement processing on the estimated disparity map by means of a preset semantic refinement network and the first image feature to obtain a target disparity map. According to the embodiments of the present application, the accuracy of disparity estimation can be improved, and the error of the target disparity map is reduced.

IPC classes

54.

IMAGE INTERACTION METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM

      
Application number CN2022090724
Publication number 2023/159761
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Yang, Xin
  • Jiang, Kaiying

Abstract

Disclosed in the present application are an image interaction method and apparatus, and a device, a medium and a program product. The image interaction method comprises: in response to a first instruction for enabling an image processing tool box, displaying an enabled state, which is used for representing the image processing tool box, of the image processing tool box; in response to a second instruction for enabling the image processing tool box, displaying the enabled state of the image processing tool box, and a to-be-selected state of a first operation option of the image processing tool box; in response to a third instruction for performing a first operation on an image, displaying the enabled state, which is used for representing the first operation, of the image processing tool box, and a to-be-selected state of a second operation option of the image processing tool box; and in response to a fourth instruction for performing a second operation on the image, displaying an enabled state of the second operation option, wherein the first operation is used for realizing a cloud removal operation on the image, and the second operation is used for realizing an image enhancement operation. The method can be widely applied to the field of images.

IPC Classes

  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or with the aid of a cursor changing behaviour

55.

TEXT CLASSIFICATION METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE, DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090727
Publication number 2023/159762
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Chen, Hao

Abstract

The present application relates to artificial intelligence and provides a text classification method and apparatus based on artificial intelligence, a device, and a storage medium. The text classification method based on artificial intelligence comprises: obtaining to-be-classified text information and template text information, wherein the template text information comprises at least one piece of supplementary text information that is in one-to-one correspondence with a preset text class; for each piece of to-be-classified text information, generating reconstructed text information that is in one-to-one correspondence with the preset text class according to the supplementary text information; calculating a next sentence prediction probability value of the reconstructed text information, wherein the next sentence prediction probability value represents the matching degree between the to-be-classified text information and the supplementary text information; and determining a predicted text class of the to-be-classified text information according to the next sentence prediction probability value. According to the technical solution of an embodiment of the present application, a classification result for an input text can be quickly and accurately obtained on the basis of an NSP mechanism of a BERT model without needing to go through a fine-tuning stage, thereby reducing user time and computing resource waste.
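The class-selection step described above can be sketched as follows. The sketch assumes a pluggable `nsp_probability(first, second)` scorer standing in for the NSP head of a BERT model (which the patent uses); the function and variable names are illustrative, not from the patent.

```python
def classify_with_nsp(text, templates, nsp_probability):
    """Zero-shot classification via next-sentence prediction (NSP):
    pair the input text with each class's supplementary template
    sentence and pick the class whose pair the NSP scorer finds most
    coherent. `templates` maps class label -> template sentence;
    `nsp_probability(first, second)` is a stand-in for BERT's NSP head."""
    scores = {
        label: nsp_probability(text, template)
        for label, template in templates.items()
    }
    # The predicted class is the one with the highest NSP probability.
    return max(scores, key=scores.get), scores
```

With a real BERT NSP head plugged in, no fine-tuning stage is needed, which is the efficiency point the abstract makes.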

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

56.

VIDEO SEARCH METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022090736
Publication number 2023/159765
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Tang, Xiaochu
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and provide a video search method and apparatus, an electronic device and a storage medium. The method comprises: acquiring original search data, wherein the original search data comprises text data and original video data; performing frame extraction on the original video data to obtain candidate key frame data; performing, by means of a pre-trained data processing model, standardization processing on the candidate key frame data to obtain standard key frame data; coding, by means of a coding layer of the data processing model, the text data to obtain a text vector, and coding, by means of the coding layer, the standard key frame data to obtain a plurality of key frame image vectors; calculating a first similarity value between the text vector and each key frame image vector; and screening the standard key frame data according to the first similarity value to obtain a target video clip. According to the embodiments of the present application, the accuracy of video search can be improved.
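The similarity-and-screening step above can be sketched with cosine similarity between the text vector and each key-frame image vector; `top_frames` is an illustrative helper, and the real method obtains these vectors from the coding layer of the pre-trained data processing model.

```python
import numpy as np

def top_frames(text_vec, frame_vecs, k=2):
    """Rank key-frame vectors by cosine similarity to the text vector
    and return the indices of the k best-matching frames plus all
    similarity values."""
    t = text_vec / np.linalg.norm(text_vec)
    f = frame_vecs / np.linalg.norm(frame_vecs, axis=1, keepdims=True)
    sims = f @ t                      # cosine similarity per frame
    return np.argsort(sims)[::-1][:k], sims
```

The selected frame indices would then delimit the target video clip returned by the search.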

IPC Classes

  • G06F 16/735 - Filtering based on additional data, e.g. user or group profiles
  • G06F 16/783 - Retrieval of data characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content

57.

DIALOGUE PROCESS CONTROL METHOD AND APPARATUS OF CUSTOMER SERVICE ROBOT, SERVER AND MEDIUM

      
Application number CN2022089996
Publication number 2023/159749
Status Granted - In force
Filing date 2022-04-28
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Luo, Shengxi
  • Ma, Jun
  • Wang, Shaojun

Abstract

The embodiments of the present application are applicable to the technical field of artificial intelligence. Provided are a dialogue process control method and apparatus of a customer service robot, a server and a medium. The method comprises: during a broadcasting process of a customer service robot, if a first character fragment input by a user side is detected, calculating an interruption probability on the basis of the first character fragment and a pre-constructed trie tree; if the interruption probability is greater than a first preset threshold, controlling the customer service robot to stop broadcasting so as to receive complete voice information input by the user side; when the user side stops inputting the voice information, acquiring a second character fragment which has been input by the user side before stopping, and calculating a cut-off probability on the basis of the second character fragment and the trie tree; and according to the cut-off probability, controlling a dialogue process of the customer service robot. By using the method, the meaning of the voice information input by the user side can be accurately identified, so that situations of wrong dialogue interruption or cut-off of a robot are avoided.
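The trie lookup above can be sketched as follows. The abstract does not publish the probability formula, so the scoring rule here (fraction of the character fragment covered by a known phrase prefix) is an assumption, and the class and method names are illustrative.

```python
class Trie:
    """Character trie over known user phrases (e.g. interruption cues)."""

    def __init__(self, phrases=()):
        self.root = {}
        for p in phrases:
            self.insert(p)

    def insert(self, phrase):
        node = self.root
        for ch in phrase:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-phrase marker

    def match_score(self, fragment):
        """Hypothetical scoring: walk the trie along the fragment and
        return matched-prefix length / fragment length, so a fragment
        fully covered by a known phrase prefix scores 1.0."""
        node, depth = self.root, 0
        for ch in fragment:
            if ch not in node:
                break
            node = node[ch]
            depth += 1
        return depth / max(len(fragment), 1)
```

In the method, a score above the first preset threshold would stop the robot's broadcast so the user's full utterance can be received.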

IPC Classes

58.

ABILITY LEVEL ANALYSIS METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022090430
Publication number 2023/159754
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO.,LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to an intelligent decision technology, and discloses an ability level analysis method, which comprises: when monitoring that a test participant completes one question, obtaining an answer score of a tested question of the test participant, and updating the ability value of the test participant according to the response level and the answer score of each tested question; determining whether the absolute difference value between the ability values before and after the update meets a second preset condition; if the absolute difference value does not meet the second preset condition, selecting, according to the response level of each question and the ability value of the test participant, from a preset question bank a question that meets a first preset condition, and sending the question to the test participant; and if the absolute difference value meets the second preset condition, using the recently updated ability value of the test participant as the final ability level of the test participant. The present application further provides an ability level analysis apparatus, a device and a medium. The present application can improve the accuracy of ability level analysis.
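The stopping rule above (compare successive ability estimates against a threshold) can be sketched as a loop. `answer_fn` and `update_fn` are hypothetical hooks for the scoring and ability-update steps, which the abstract does not specify, so this is a structural sketch rather than the patented procedure.

```python
def run_adaptive_test(theta0, questions, answer_fn, update_fn, eps=0.05):
    """Adaptive-test loop: after each answered question the ability
    estimate is updated; when two successive estimates differ by less
    than `eps` (the 'second preset condition'), the latest estimate is
    taken as the final ability level. `questions` is an iterable of
    question descriptors; `answer_fn(q)` returns the answer score;
    `update_fn(theta, q, score)` returns the new ability estimate."""
    theta = theta0
    for q in questions:
        new_theta = update_fn(theta, q, answer_fn(q))
        if abs(new_theta - theta) < eps:
            return new_theta          # converged: final ability level
        theta = new_theta
    return theta                      # question bank exhausted
```

In the full method, each non-converged iteration would also select the next question from the bank according to the first preset condition.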

IPC Classes

  • G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

59.

DATA ENHANCEMENT METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090666
Publication number 2023/159758
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Tao, Qing
  • Wang, Yan
  • Ma, Jun
  • Wang, Shaojun

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and provide a data enhancement method and apparatus, an electronic device, and a storage medium. The data enhancement method comprises: acquiring an original text sample, inputting same into a pre-trained topic model, calculating a contribution value of each topic word in each sentence to a text sentence, then obtaining, according to the contribution value of the topic word to the text sentence, a set of words to be replaced, next selecting candidate words from a set of word vectors obtained by pre-training, and finally using the candidate words to replace the words to be replaced, so as to obtain a data enhancement text sample. Topic distribution probability information corresponding to each sentence in the original text sample is obtained by using the topic model, such that the contribution value of each word in the sentence to the topic of the text sentence is well measured, and data enhancement is completed under the condition that sentence topic distribution is not affected; and moreover, by means of pre-training the word vectors, words semantically similar to the words to be replaced are selected as replacement words, such that semantic information of the sentence is guaranteed to the maximum extent.
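The candidate-selection step, choosing a semantically similar replacement from pretrained word vectors, can be sketched as below. The tiny in-memory vector table used in the example stands in for real pretrained embeddings, and `replace_word` is an illustrative helper name.

```python
import numpy as np

def replace_word(word, vectors, keep=()):
    """Pick the replacement candidate most similar (cosine) to `word`
    from a word-vector table. `vectors` maps word -> np.ndarray and
    would, in practice, come from pre-trained embeddings; words in
    `keep` (e.g. topic words to preserve) are never chosen."""
    target = vectors[word]
    best, best_sim = None, -2.0
    for cand, vec in vectors.items():
        if cand == word or cand in keep:
            continue
        sim = float(vec @ target /
                    (np.linalg.norm(vec) * np.linalg.norm(target)))
        if sim > best_sim:
            best, best_sim = cand, sim
    return best
```

Replacing only low-contribution words with their nearest semantic neighbours is what lets the method augment the text without disturbing the sentence's topic distribution.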

IPC Classes

  • G06F 16/35 - Clustering; Classification
  • G06N 5/02 - Knowledge representation; Symbolic representation

60.

MODEL TRAINING METHOD AND APPARATUS, EMOTION MESSAGE GENERATION METHOD AND APPARATUS, DEVICE AND MEDIUM

      
Application number CN2022090670
Publication number 2023/159759
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

A model training method and apparatus, an emotion message generation method and apparatus, a device, and a medium, belonging to the technical field of artificial intelligence. The model training method comprises: acquiring a dialog data set (S110); preprocessing the dialog data set to obtain a preliminary data set (S120); performing emotion tagging on the preliminary data set to obtain a preliminary emotion tag by means of a preset emotion classifier (S130); performing preliminary coding on the preliminary data set and the preliminary emotion tag to obtain a preliminary coding vector by means of a first neural network (S150); decoding the preliminary coding vector to obtain a target emotion message by means of a second neural network (S160); and training, according to the target emotion message, a neural network model to obtain an emotion message generation model (S170). The preliminary emotion tag is trained by means of the data set on which the emotion tagging is performed, and the neural network model is preliminarily coded and decoded by means of diversity requirements of an emotion reply, such that an emotion generation model is obtained by training, and the emotion generation model can generate target emotion messages of different emotions for a problem message.

IPC Classes

  • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods

61.

CONVOLUTIONAL NEURAL NETWORK MODEL PRUNING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090721
Publication number 2023/159760
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s) Wang, Xiaorui

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and provide a convolutional neural network model pruning method and apparatus, an electronic device, and a storage medium. The convolutional neural network model pruning method comprises: acquiring information of convolutional layers in a model to be pruned; performing convolution calculation according to the information of the convolutional layers to obtain a filter similarity value corresponding to filters in the convolutional layers; according to the filter similarity value, calculating a pruning importance index corresponding to each convolutional layer; and according to a preset pruning rate and the pruning importance index corresponding to each convolutional layer, pruning said model to obtain a pruned model. According to the embodiments, the importance of filters in the convolutional layers is quantified by means of convolution operation, redundant information of the filters in the convolutional layers in the model is obtained according to a filter importance value, and then the redundant information is used for pruning, so that the pruning accuracy of a convolutional neural network model is improved, and the model compression precision and operation speed are improved.
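A minimal sketch of similarity-based filter pruning follows. The patent quantifies filter redundancy through a convolution operation between filters whose details are not published here, so cosine similarity over flattened filters stands in for the filter similarity value, and the function name is illustrative.

```python
import numpy as np

def prune_redundant_filters(filters, prune_rate):
    """Rank a layer's filters by redundancy (maximum cosine similarity
    to any other filter of the layer) and return the indices of the
    filters to prune, given a preset pruning rate.
    `filters` has shape (num_filters, in_channels, kH, kW)."""
    flat = filters.reshape(filters.shape[0], -1)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = flat @ flat.T               # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)    # ignore self-similarity
    redundancy = sim.max(axis=1)      # most-similar neighbour per filter
    n_prune = int(round(prune_rate * filters.shape[0]))
    return np.argsort(redundancy)[::-1][:n_prune]
```

Filters whose nearest neighbour is almost identical carry redundant information, so removing them compresses the model with minimal accuracy loss, which is the rationale the abstract gives.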

IPC Classes

62.

MODEL TRAINING METHOD AND APPARATUS, TEXT SUMMARY GENERATING METHOD AND APPARATUS, AND DEVICE

      
Application number CN2022090729
Publication number 2023/159763
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

A model training method and apparatus, a text summary generating method and apparatus, and a device. According to the model training method, multimodal encoding processing is performed on original image data to obtain original image vectors, and multimodal encoding processing is performed on original text data to obtain original text vectors; original summary data is obtained according to the original text data and the original image data; the original summary data is vectorized to obtain original summary vectors; first positive example pairs are constructed according to the original summary vectors and the corresponding original text vectors; second positive example pairs are constructed according to the original text vectors and the corresponding original image vectors; contrastive learning training is performed on an original summary generating model by means of the original summary data, a plurality of first positive example pairs, and a plurality of second positive example pairs to obtain a text summary generating model.

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

63.

CATERING DATA ANALYSIS METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022090739
Publication number 2023/159766
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Liu, Xi
  • Shu, Chang
  • Chen, Youxin

Abstract

Provided in the present application are a catering data analysis method and apparatus, and an electronic device and a storage medium. The catering data analysis method comprises: acquiring comment data, and executing analysis by means of a natural language analysis method; determining a visualization configuration, and obtaining a visual chart according to the visualization configuration; and visually displaying an analysis result by means of the visualization configuration and the visual chart. In the present application, by means of executing various types of natural language analysis on catering data which is generated when a catering merchant sells catering products, the preferences, degree of satisfaction, and feedback suggestions of customers in comment data are determined; by means of a visualization configuration and a visual display, a required visual-display mode is provided for the catering merchant; and by means of comprehensive natural language analysis, points of concern and demands of consumers, shortcomings of commodities themselves, and problems related to shop services can be learned about quickly, thereby providing more accurate guidance for the merchant regarding the demands and the degree of satisfaction of the customers.

IPC Classes

  • G06F 16/34 - Browsing; Visualisation therefor

64.

TARGET WORD DETECTION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022090743
Publication number 2023/159767
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wu, Yuemin
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the technical field of artificial intelligence, and provides a target word detection method and apparatus, an electronic device and a storage medium. The method comprises: acquiring original speech data to be detected; performing entity feature extraction on the original speech data by means of a preset feature extraction model to obtain text entity features; performing knowledge extraction on a preset knowledge graph according to the text entity features to obtain an entity triple; performing feature extraction on the original speech data, the text entity features and the entity triple by means of a preset target word detection model to obtain a target text feature vector, a target entity feature vector and a target attribute feature vector; performing weighted calculation on the target text feature vector, the target attribute feature vector and the target entity feature vector by means of the target word detection model to obtain a target speech representation vector; and performing target word detection on the target speech representation vector to obtain target word data. The present application can improve the accuracy of target word detection.

IPC Classes

65.

ANOMALY DETECTION METHOD AND SYSTEM FOR MOBILE DEVICE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090759
Publication number 2023/159768
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-31
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Zhou, Quan
  • Wang, Wenbin
  • Guo, Yuqiao
  • Chen, Lianghua

Abstract

The present application discloses an anomaly detection method and system for a mobile device, an electronic device, and a storage medium. The anomaly detection method for a mobile device comprises: obtaining a dynamic link library when a mobile device requests an application, and splitting up the dynamic link library to obtain a plurality of plug-in libraries; obtaining multiple pieces of feature information and first label data of each plug-in library, and inputting the feature information and the first label data into a support vector machine model to obtain risk scores of the plug-in libraries; obtaining weights for the plug-in libraries by means of Newton's law of cooling and according to the risk scores; performing weighting according to the weights for the plug-in libraries and the corresponding risk scores to obtain a dynamic link library score; and determining whether the corresponding mobile device is an anomalous device according to the dynamic link library score. It is not necessary to depend on a blacklist when performing detection on a mobile device, anomalous device detection coverage can be improved on the basis of identification and detection of a plug-in library, and anomaly detection efficiency is improved.
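Newton's law of cooling describes exponential decay toward an ambient value, T(t) = T_env + (T0 − T_env)·e^(−kt). The abstract does not publish how this curve is mapped to plug-in-library weights, so the rank-based exponential decay below is an assumption for illustration, with hypothetical function names.

```python
import math

def cooling_weights(risk_scores, k=0.5):
    """Cooling-law weighting sketch: plug-in libraries are ranked by
    risk score and rank r gets raw weight exp(-k * r), so the riskiest
    library dominates while weights decay smoothly; the weights are
    then normalised to sum to 1."""
    order = sorted(range(len(risk_scores)),
                   key=lambda i: risk_scores[i], reverse=True)
    raw = {i: math.exp(-k * rank) for rank, i in enumerate(order)}
    total = sum(raw.values())
    return [raw[i] / total for i in range(len(risk_scores))]

def dll_score(risk_scores, k=0.5):
    """Weighted aggregate score for the whole dynamic link library."""
    w = cooling_weights(risk_scores, k)
    return sum(wi * ri for wi, ri in zip(w, risk_scores))
```

The aggregate score would then be compared against a threshold to flag the mobile device as anomalous, without relying on a blacklist.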

IPC Classes

  • G06F 11/34 - Recording or statistical evaluation of computer activity, e.g. of interruptions or input/output operations
  • G06F 9/445 - Program loading or initiating

66.

TIME SERIES DATA DETECTION METHOD AND APPARATUS, DEVICE, AND COMPUTER STORAGE MEDIUM

      
Application number CN2022089438
Publication number 2023/155296
Status Granted - In force
Filing date 2022-04-27
Publication date 2023-08-24
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Li, Weijun
  • Liu, Sihao
  • Ba, Kun
  • Zhuang, Bojin

Abstract

The present application is suitable for the technical field of artificial intelligence. Provided are a time series data detection method and apparatus, a device, and a computer storage medium, the time series data detection method comprising the following steps: performing a conversion operation on one-dimensional time series data to be detected so as to obtain a two-dimensional image; performing image reconstruction on the basis of the two-dimensional image so as to obtain a training image sample set containing a reconstructed image; training an introspective variational autoencoder (IntroVAE) model by using the training image sample set so as to obtain a trained IntroVAE model; and calculating a loss value between the two-dimensional image and the reconstructed image by using the trained IntroVAE model so as to obtain a target loss value, the target loss value indicating whether the one-dimensional time series data is anomalous. The technical solution provided in the embodiments broadens the applicable range of time series anomaly detection to large-scale data while preserving detection efficiency.
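The abstract does not fix how the one-dimensional series becomes a two-dimensional image, so the following is a minimal sketch assuming a normalise, pad, and reshape conversion; `series_to_image` is a hypothetical helper, and the IntroVAE training and loss computation are omitted.

```python
import numpy as np

def series_to_image(series):
    """One plausible 1-D -> 2-D conversion: min-max normalise the
    series, zero-pad it to the nearest square length, and reshape it
    into an (n, n) single-channel image for the autoencoder."""
    x = np.asarray(series, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # scale to [0, 1]
    n = int(np.ceil(np.sqrt(x.size)))
    padded = np.zeros(n * n)
    padded[: x.size] = x
    return padded.reshape(n, n)
```

The resulting image and its IntroVAE reconstruction would then be compared; a large reconstruction loss marks the original series as anomalous.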

IPC Classes

67.

DATA AUGMENTATION PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090165
Publication number 2023/155298
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-24
Owner PING AN TECHNOLOGY(SHENZHEN)CO.,LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhai, You
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the technical field of neural networks of artificial intelligence technology, and provides a data augmentation processing method and apparatus, a computer device, and a storage medium. The method comprises: dividing a data set into a plurality of sub data sets; selecting different data augmentation policies for each sub data set; according to the first data augmentation policies of the sub data sets, respectively performing data augmentation processing on the corresponding sub data sets to obtain target sub data sets; respectively inputting the target sub data sets into a preset neural network model for training; comparing a plurality of training results with each other, and selecting a target training result having an optimal training effect; determining a target data augmentation policy corresponding to the target training result; and acquiring a total data set, and invoking the target data augmentation policy to perform data augmentation processing on all data in the total data set so as to obtain a target data set. An optimal data augmentation policy is selected for performing data augmentation processing on the total data set, so that the overall effect of data augmentation processing is improved.

IPC Classes

  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/776 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Performance evaluation
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/50 - Image or video feature extraction using the summation of image intensity values; Projection analysis

68.

ANSWER SEQUENCE PREDICTION METHOD BASED ON IMPROVED IRT STRUCTURE, AND CONTROLLER AND STORAGE MEDIUM

      
Application number CN2022090662
Publication number 2023/155301
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-24
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Liu, Xi
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to artificial intelligence technology. Provided in the embodiments of the present application is an answer sequence prediction method based on an improved IRT structure. The method comprises the following steps: vectorizing historical answer records to obtain direction-carrying answer sequence input vectors; inputting all the direction-carrying answer sequence input vectors into an LSTM for processing to obtain a user ability vector; vectorizing question difficulty data and question discrimination data to obtain a question difficulty vector and a question discrimination vector; computing, by means of IRT, the probability of correctly answering the current question from the user ability vector, the question difficulty vector, and the question discrimination vector; and determining a target recommended question according to that probability. Vectorizing user ability, question difficulty, and question discrimination allows the level of both the user and the questions to be depicted accurately, reducing information loss; moreover, the improved LSTM structure handles the case where a user answers a question incorrectly while the user's ability has nonetheless improved, thereby improving the accuracy of prediction.
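The IRT calculation step can be illustrated with the classical two-parameter logistic model, where ability θ, question difficulty b, and discrimination a yield a correct-answer probability. In the patent these quantities are vectors produced by the LSTM and embedding layers, so the scalar form below is a simplification, not the patented formulation.

```python
import math

def irt_2pl(theta, difficulty, discrimination):
    """Two-parameter logistic IRT model: the probability of a correct
    answer is P = 1 / (1 + exp(-a * (theta - b))), with user ability
    theta, question difficulty b, and question discrimination a."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))
```

A question whose predicted probability sits in a target band (neither trivially easy nor hopeless) would then be selected as the recommended question.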

IPC Classes

  • G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

69.

PDF LAYOUT SEGMENTATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090674
Publication number 2023/155302
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-24
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Tang, Xiaochu
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and provided are a PDF layout segmentation method and apparatus, an electronic device, and a storage medium. The PDF layout segmentation method comprises: obtaining a PDF document, and converting the PDF document into a PDF image; then inputting the PDF image into a pre-trained PDF layout segmentation model for layout segmentation to obtain a boundary point set corresponding to the PDF image, and generating a transverse line set and a longitudinal line set according to the boundary point set; obtaining reading blocks of the PDF image on the basis of a preset communication rule according to the transverse line set and the longitudinal line set; and finally obtaining a PDF layout segmentation result according to a reading sequence of the reading blocks. The present embodiments are not only suitable for common PDF documents, but also suitable for the PDF documents including operations such as column dividing or chart inserting, the reading sequence of the PDF documents is obtained to facilitate subsequent PDF content operation, and the efficiency and accuracy of PDF layout segmentation are improved.

IPC Classes

70.

WEBPAGE DATA EXTRACTION METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090719
Publication number 2023/155303
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-24
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Zhou, Xuan
  • Xu, Bing
  • Wang, Wei

Abstract

Embodiments relate to the technical field of artificial intelligence, and provide a webpage data extraction method and apparatus, a computer device, and a storage medium. The method comprises: obtaining source code data of a target webpage, and parsing the source code data to obtain a DOM tree; traversing the DOM tree to obtain a node sequence, the node sequence comprising a root node and a plurality of label nodes; obtaining a plurality of node paths of the node sequence, each node path being a path from each label node to the root node; obtaining first target paths from a preset sample set according to the plurality of node paths, and inputting the first target paths into a pre-trained model for screening to obtain a second target path; and extracting corresponding target webpage data from the source code data according to the second target path. A label node condition is analyzed by means of a pre-trained model, a second target path is screened out from first target paths according to webpages of the same type, and then target webpage data is extracted, such that there is no need to manually construct a special path template, and the efficiency of extraction of webpage data is improved.
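The node paths described above (each label node up to the root) can be sketched with Python's standard `html.parser`. The `NodePathCollector` name is illustrative, the sketch assumes well-nested tags with explicit closing tags, and the sample-set matching and model-screening steps are omitted.

```python
from html.parser import HTMLParser

class NodePathCollector(HTMLParser):
    """Collect, for every element of a page, its tag path from the
    node up to the root (e.g. 'span/div/body/html'), mirroring the
    node paths the abstract derives from the DOM tree."""

    def __init__(self):
        super().__init__()
        self.stack, self.paths = [], []

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        self.paths.append("/".join(reversed(self.stack)))

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

def node_paths(html):
    """Parse the page source and return one root path per label node."""
    p = NodePathCollector()
    p.feed(html)
    return p.paths
```

These paths are what the method matches against the sample set to find the first target paths before the trained model screens out the second target path.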

IPC Classes

  • G06F 16/955 - Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
  • G06F 16/906 - Clustering; Classification
  • G06F 16/9035 - Filtering based on additional data, e.g. user or group profiles
  • G06F 16/901 - Indexing; Data structures therefor; Storage structures
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods

71.

KEYWORD RECOMMENDATION MODEL TRAINING METHOD AND APPARATUS, KEYWORD RECOMMENDATION METHOD AND APPARATUS, DEVICE, AND MEDIUM

      
Application number CN2022090746
Publication number 2023/155304
Status Granted - In force
Filing date 2022-04-29
Publication date 2023-08-24
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Liu, Xi
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application relate to the technical field of artificial intelligence, and provide a keyword recommendation model training method and apparatus, a keyword recommendation method and apparatus, a device, and a medium. The keyword recommendation model training method comprises: obtaining an index sample, and constructing a keyword training data set according to the index sample; obtaining word vectors of words according to the index sample; obtaining primary environment word vectors according to the word vectors; then obtaining input environment word vectors according to the primary environment word vectors; then generating keyword vectors according to the input environment word vectors and the word vectors; inputting the keyword vectors into a prediction classification layer to obtain recommendation prediction values; and adjusting parameters according to detection errors between the recommendation prediction values and classification labels to obtain a keyword recommendation model. According to the embodiments of the present application, by obtaining input environment word vectors related to the word vectors, and combining the learning difficulty of keywords and environmental information corresponding to keywords, the selected keywords conform to learning requirements of a user, and the relevance and accuracy of keyword recommendation are improved.

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

72.

IMAGE RECONSTRUCTION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022090747
Publication number 2023/155305
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-24
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zhang, Nan
  • Wang, Jianzong
  • Qu, Xiaoyang

Abstract

The embodiments of the present application belong to the technical field of image processing. Provided are an image reconstruction method and apparatus, and an electronic device and a storage medium. The method comprises: acquiring an original image; performing degradation processing on the original image using a preset image degradation model, and determining a target blur kernel that matches the original image; performing dimensionality change processing on the target blur kernel, so as to generate a degraded image; and inputting the original image and the degraded image into a pre-trained image restoration model for processing, so as to obtain a reconstructed image, wherein the resolution of the reconstructed image is higher than the resolution of the original image. By means of the embodiments of the present application, the effect of an unknown blur kernel on image reconstruction can be reduced, and the reconstruction of an image having any blur kernel is realized, thereby improving the image quality of a reconstructed image.
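The degradation step described above — applying a blur kernel to the original image to produce the degraded input — can be sketched as below. The Gaussian kernel, edge padding, and direct 2-D convolution are illustrative assumptions; the patent's actual degradation model and blur-kernel matching procedure are not specified here.

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Build a normalized 2-D Gaussian blur kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def degrade(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Blur the image with the kernel (edge padding, 'same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((9, 9))
image[4, 4] = 1.0  # a single bright pixel (impulse)
blurred = degrade(image, gaussian_kernel(5, 1.0))
```

Blurring an impulse spreads its unit energy over the kernel footprint, which is why a super-resolution model needs the (estimated) kernel to invert the process.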

IPC classes

  • G06T 3/40 - Scaling of a whole image or part thereof

73.

IMAGE ENHANCEMENT PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM

      
Application number CN2022090167
Publication number 2023/155299
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-24
Owner PING AN TECHNOLOGY(SHENZHEN)CO.,LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Wang, Yingni
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the technical field of neural networks of artificial intelligence technologies, and provides an image enhancement processing method and apparatus, a computer device and a storage medium. The method comprises: using a data enhancement algorithm to amplify images of a dataset, using a digit recognition model to classify target images, and screening out first target images which are correctly classified and second target images which are erroneously classified; obtaining a first weight vector and a feature vector of each category of the first target images, and using the first target images to train a pre-constructed image recognition model and obtain a trained image recognition model; using the trained image recognition model to predict the second target images and obtain a prediction result, generating a second weight vector according to the prediction result, and multiplying the second weight vector by the second target images to obtain a training image with enhanced quality. Therefore, inherent noise that exists in the second target images can be suppressed, and the quality of the data-enhanced image is improved.

IPC classes

  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; bootstrap methods, e.g. bagging or boosting
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/08 - Learning methods

74.

DATA RECOMMENDATION METHOD AND APPARATUS BASED ON GRAPH NEURAL NETWORK AND ELECTRONIC DEVICE

      
Application number CN2022090754
Publication number 2023/155306
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-24
Owner PING AN TECHNOLOGY(SHENZHEN)CO., LTD. (China)
Inventor(s)
  • Wu, Yuemin
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application specifically relates to the technical field of artificial intelligence, and discloses a data recommendation method and apparatus based on a graph neural network and an electronic device. The method comprises: obtaining target search information of a new user, and generating a candidate data set according to the target search information; determining an associated user associated with the new user, and obtaining first search data of the associated user; generating a social relation graph of the new user according to the new user, the associated user, and the first search data; inputting the social relation graph into a preset graph neural network model for feature prediction to obtain target feature information of the new user; and according to the target feature information, sorting the candidate data set to obtain a recommended data set. In this way, the cold start problem on a user side can be solved, the accuracy of data recommendation for a new user is improved, and the use experience of the new user on a data search function is improved.
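A minimal sketch of the underlying idea: inferring a cold-start user's features by propagating the associated users' features over the social relation graph. The mean-aggregation update below stands in for the trained graph neural network, whose architecture the abstract does not specify; the toy graph and feature values are illustrative.

```python
import numpy as np

# Toy social graph: node 0 is the new user, nodes 1-3 are associated users
# whose search-behaviour features are known.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
features = np.array([
    [0.0, 0.0],   # new user: no history (cold start)
    [1.0, 0.2],
    [0.8, 0.4],
    [0.6, 0.0],
])

def propagate(features: np.ndarray, edges, rounds: int = 1) -> np.ndarray:
    """One or more rounds of mean neighbour aggregation (GCN-style)."""
    n = features.shape[0]
    neigh = {i: [] for i in range(n)}
    for a, b in edges:
        neigh[a].append(b)
        neigh[b].append(a)
    h = features.copy()
    for _ in range(rounds):
        h = np.stack([
            np.mean([h[j] for j in neigh[i]] + [h[i]], axis=0)
            for i in range(n)
        ])
    return h

target = propagate(features, edges)[0]  # inferred features for the new user
```

The inferred `target` vector can then score and sort the candidate data set, which is how the cold-start user still receives personalised rankings.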

IPC classes

  • G06F 16/9536 - Search personalisation based on social or collaborative filtering
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G06Q 50/00 - Systems or methods specially adapted for a specific business sector, e.g. utilities or tourism

75.

SYSTEM AND METHOD FOR UNSUPERVISED SUPERPIXEL-DRIVEN INSTANCE SEGMENTATION OF REMOTE SENSING IMAGE

      
Application number 17667523
Status Pending
Filing date 2022-02-08
First publication date 2023-08-10
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Yang, Zhicheng
  • Zhou, Hang
  • Lai, Jui-Hsin
  • Han, Mei

Abstract

A system and method for unsupervised superpixel-driven instance segmentation of a remote sensing image are provided. The remote sensing image is divided into one or more image patches. The one or more image patches are processed to generate one or more superpixel aggregation patches based on a graph-based aggregation model, respectively. The graph-based aggregation model is configured to learn at least one of a spatial affinity or a feature affinity of a plurality of superpixels from each image patch and aggregate the plurality of superpixels based on the at least one of the spatial affinity or the feature affinity of the plurality of superpixels. The one or more superpixel aggregation patches are combined into an instance segmentation image.
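The aggregation step can be sketched as a union-find merge of adjacent superpixels whose feature affinity exceeds a threshold. The cosine affinity and the fixed threshold below are illustrative stand-ins for the learned graph-based aggregation model described in the abstract.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity used here as a simple feature affinity."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def aggregate(features, adjacency, threshold=0.95):
    """Union-find merge of adjacent superpixels with high feature affinity."""
    parent = list(range(len(features)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in adjacency:
        if cosine(features[i], features[j]) >= threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(features))]

# Four superpixels: 0/1 are similar (e.g. same crop field), 2/3 are similar
# (e.g. water); 1 and 2 are adjacent but dissimilar, so they stay separate.
feats = np.array([[1.0, 0.1], [0.9, 0.12], [0.1, 1.0], [0.08, 0.9]])
adjacency = [(0, 1), (1, 2), (2, 3)]
labels = aggregate(feats, adjacency)
```

Each resulting label group corresponds to one instance in the final segmentation image.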

IPC classes

  • G06T 7/136 - Segmentation; Edge detection involving thresholding

76.

Structured landmark detection via topology-adapting deep graph learning

      
Application number 17116310
Patent number 11763468
Status Granted - in force
Filing date 2020-12-09
First publication date 2023-08-03
Grant date 2023-09-19
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Miao, Shun P
  • Li, Weijian
  • Lu, Yuhang
  • Zheng, Kang
  • Lu, Le

Abstract

The present disclosure describes a computer-implemented method for image landmark detection. The method includes receiving an input image for the image landmark detection, generating a feature map for the input image via a convolutional neural network, initializing an initial graph based on the generated feature map, the initial graph representing initial landmarks of the input image, performing a global graph convolution of the initial graph to generate a global graph, where landmarks in the global graph move closer to target locations associated with the input image, and iteratively performing a local graph convolution of the global graph to generate a series of local graphs, where landmarks in the series of local graphs iteratively move further towards the target locations associated with the input image.

IPC classes

  • G06T 7/00 - Image analysis
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
  • G06T 7/11 - Region-based segmentation
  • G06T 7/162 - Segmentation; Edge detection involving graph-based methods

77.

METHOD AND SYSTEM FOR OPTIMIZING TIME-VARIATION RESISTANT MODEL, AND DEVICE AND READABLE STORAGE MEDIUM

      
Application number CN2022090222
Publication number 2023/142288
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-08-03
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Xiao, Jing
  • Zhao, Mengmeng
  • Wang, Lei
  • Li, Na
  • Wang, Yuan
  • Tan, Tao
  • Chen, Youxin

Abstract

The present application belongs to the technical field of financial security. Disclosed are a method and system for optimizing a time-variation resistant model, and a device and a readable storage medium. The method comprises: constructing a frequency mixing model on the basis of a high frequency factor (S1); checking whether a factor distribution in the frequency mixing model has changed (S2); and updating the frequency mixing model online according to the changed factor distribution (S3). By means of the present application, a frequency mixing model based on a high frequency factor and an adaptive modeling framework are provided, effectively addressing the influence that the time variation of financial patterns exerts on a prediction model; moreover, the lag of a risk prediction model is avoided, and the model's ability to adapt to changes in financial patterns in a timely manner is improved. On the basis of the framework, economic and financial indicators, such as macroeconomic indexes, mesoscopic industry prosperity, and microscopic enterprise revenue fluctuations, can be effectively predicted, and dynamic risk early warning can be performed.
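Step S2 — checking whether a factor's distribution has changed — can be sketched with a population stability index (PSI), a common drift statistic in financial risk modelling. The quantile bucketing and the 0.2 alert threshold below are conventional assumptions, not details taken from the patent.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a reference and a new sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # open-ended outer buckets
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)   # factor values used to fit the model
stable = rng.normal(0.0, 1.0, 5000)      # new window, same regime
shifted = rng.normal(1.5, 1.0, 5000)     # new window, regime change

# PSI above ~0.2 is a conventional signal that the factor distribution moved,
# which would trigger the online model update (step S3).
needs_update = psi(reference, shifted) > 0.2
```

A factor crossing the threshold would be re-bucketed and the frequency mixing model refit online, rather than waiting for a scheduled retrain.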

IPC classes

  • G06F 17/18 - Complex mathematical operations for the evaluation of statistical data
  • G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
  • G06Q 10/06 - Resources, workflow, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

78.

REPLY STATEMENT DETERMINATION METHOD AND APPARATUS BASED ON ROUGH SEMANTICS, AND ELECTRONIC DEVICE

      
Application number CN2022090129
Publication number 2023/137903
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

Disclosed in the present application are a reply statement determination method and apparatus based on rough semantics, and an electronic device. The method comprises: according to an occurrence time of speech information of a user at the current moment, acquiring a previous round of speech information adjacent to the speech information; according to the previous round of speech information, performing rough semantic extraction on the speech information, so as to obtain a rough semantic feature corresponding to the speech information; performing word segmentation processing on the speech information, so as to obtain a keyword group; performing a plurality of instances of hidden feature extraction processing on the keyword group, so as to obtain an initial hidden layer state feature vector; according to the rough semantic feature and the initial hidden layer state feature vector, performing a plurality of instances of reply word generation processing, so as to obtain at least one reply word; and splicing the at least one reply word according to a generation sequence of each reply word from among the at least one reply word, so as to obtain a reply statement of the speech information.

IPC classes

  • G06F 40/35 - Discourse or dialogue representation
  • G06F 40/268 - Morphological analysis
  • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking
  • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates

79.

EYE FUNDUS IMAGE-BASED LESION DETECTION METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM

      
Application number CN2022090164
Publication number 2023/137904
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Wang, Tianyu
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the fields of image recognition and digital treatment. Disclosed are an eye fundus image-based lesion detection method and apparatus, and a computer device and a storage medium. The method comprises: acquiring an eye fundus screening image, wherein the eye fundus screening image comprises a scanning image and an angiography image; inputting the scanning image into a first network in a dual-channel network of a deep learning network model, and acquiring a first image feature obtained by the first network, wherein the first image feature comprises an eye fundus curvature and a reflectivity; inputting the angiography image into a second network in the dual-channel network of the deep learning network model, and acquiring a second image feature obtained by the second network, wherein the second image feature comprises a blood vessel density and an eye fundus tissue thickness; fusing the first image feature with the second image feature to obtain a fused feature; and according to the fused feature, matching a macular lesion level corresponding to the image. The present application can improve the accuracy of recognizing a macular lesion level of the eye fundus.

IPC classes

  • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
  • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 7/00 - Image analysis
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G16H 30/20 - ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

80.

INTENTION CLASSIFICATION METHOD AND APPARATUS BASED ON SMALL-SAMPLE CORPUS, AND COMPUTER DEVICE

      
Application number CN2022090659
Publication number 2023/137911
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wu, Yuemin
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the field of artificial intelligence. Provided in the present application are an intention classification method and apparatus based on a small-sample corpus, and a computer device. The method comprises: constructing a first sample data set, the first sample data set comprising a labeled sample data set and an unlabeled sample data set; acquiring a plurality of initial weak classification models; training the initial weak classification models on the basis of the labeled sample data set, so as to obtain target weak classification models; inputting the unlabeled sample data set into the target weak classification models, so as to obtain first predicted classification labels; constructing a second sample data set on the basis of the unlabeled sample data set and the first predicted classification labels; training initial intention classification models on the basis of the second sample data set, so as to obtain a target intention classification model; and performing intention classification by using the target intention classification model, so as to output a classification result. By means of the above steps, relatively accurate intention classification can be achieved on the basis of a small number of labeled samples together with unlabeled samples.
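The training loop above can be sketched as classic self-training: weak classifiers fitted on the small labelled set vote to pseudo-label the unlabelled set, and the pseudo-labelled data form the second sample set for the final model. The threshold-stump weak learners and majority vote below are illustrative assumptions, not the patent's specific classifiers.

```python
import numpy as np

def fit_stump(X, y, feature):
    """Weak classifier: pick the best threshold/polarity on one feature."""
    best_t, best_pol, best_acc = 0.0, 1, 0.0
    for t in np.unique(X[:, feature]):
        for pol in (1, -1):
            pred = (pol * (X[:, feature] - t) > 0).astype(int)
            acc = (pred == y).mean()
            if acc > best_acc:
                best_t, best_pol, best_acc = t, pol, acc
    return feature, best_t, best_pol

def predict_stump(stump, X):
    f, t, pol = stump
    return (pol * (X[:, f] - t) > 0).astype(int)

rng = np.random.default_rng(1)
# Two intents, well separated in feature space; only 10 samples are labelled.
X_lab = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(2, 0.3, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])

# Train one weak classifier per feature, then pseudo-label by majority vote.
stumps = [fit_stump(X_lab, y_lab, f) for f in (0, 1)]
votes = np.mean([predict_stump(s, X_unlab) for s in stumps], axis=0)
pseudo_labels = (votes >= 0.5).astype(int)  # the "second sample data set"
```

The final intention classifier is then trained on `X_unlab` with `pseudo_labels`, which is why the approach needs only a handful of hand-labelled examples.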

IPC classes

81.

VIDEO TEXT SUMMARIZATION METHOD BASED ON MULTI-MODAL MODEL, DEVICE AND STORAGE MEDIUM

      
Application number CN2022090712
Publication number 2023/137913
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

A video text summarization method based on a multi-modal model, a device and a storage medium, relating to artificial intelligence technology. The method comprises the following steps: performing feature extraction on video data to obtain video features, the video data being video data from which a text summary needs to be extracted (S100); vectorizing the video features to obtain a video feature vector (S200); performing speech extraction on the video data to obtain monologue speech information (S300); by means of automatic speech recognition (ASR), converting the monologue speech information into text information (S400); performing word segmentation processing on the text information to obtain a plurality of pieces of word information (S500); and inputting the video feature vector and the plurality of pieces of word information into a transformer model for training to obtain a text summarization result, a sub-layer used for fusing image-class features and text-class features being arranged in an encoder of the transformer model (S600). The accuracy of text summary content extracted from a video can be improved.

IPC classes

  • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking

82.

TEXT DIFFICULTY CLASSIFICATION METHOD AND DEVICE BASED ON CLASSIFICATION MODEL, AND STORAGE MEDIUM

      
Application number CN2022090733
Publication number 2023/137917
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Liu, Xi
  • Shu, Chang
  • Chen, Youxin

Abstract

A text difficulty classification method and device based on a classification model, and a storage medium. The method comprises: classifying a first text set to be trained and a second text set to be trained according to text difficulty, so as to obtain a fine first training set and a coarse second training set; inputting the first training set and the second training set into an initial classification model to obtain a predicted text difficulty value; setting an adaptive loss value for different training sets according to the predicted text difficulty value; adjusting the parameters of the initial classification model according to the loss value to obtain a target classification model; and inputting the text to be classified into the target classification model for classification, so as to obtain a text difficulty classification result. The data characteristics of the first data set and the second data set can be fully utilized, and the implicit information of the data is fully mined, so that the target classification model can complete a text difficulty classification task more accurately, and improve the accuracy of the text difficulty classification result.

IPC classes

83.

CONTROL METHOD AND SYSTEM FOR LIVE MIGRATION OF VIRTUAL MACHINE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090741
Publication number 2023/137919
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Mu, Jun

Abstract

Disclosed in embodiments of the present application are a control method and system for live migration of a virtual machine, an electronic device, and a storage medium. The control method for live migration of a virtual machine comprises: receiving a virtual machine migration request sent by a client; packaging the virtual machine migration request into a task object to obtain a virtual machine migration task, and configuring a corresponding preset memory for the virtual machine migration task in a data cache module; adding the virtual machine migration task into a task queue; a task processor obtaining the virtual machine migration task from the task queue; acquiring original data of virtual machine migration according to the virtual machine migration task; and storing the original data as task data of virtual machine migration in the preset memory. According to the present application, data in the live migration process of the virtual machine can be conveniently and quickly obtained, thereby conveniently controlling and managing the operation of live migration of the virtual machine.
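The queueing flow described above — package the migration request as a task object, attach a per-task memory, enqueue it, and let a task processor store the original migration data — can be sketched with the standard library. All class, field, and function names here are illustrative, not taken from the patent.

```python
import queue
from dataclasses import dataclass, field

@dataclass
class MigrationTask:
    """A client migration request packaged as a task object."""
    vm_id: str
    source_host: str
    target_host: str
    cache: dict = field(default_factory=dict)  # per-task "preset memory"

task_queue: "queue.Queue[MigrationTask]" = queue.Queue()

def submit(request: dict) -> MigrationTask:
    """Package a client request into a task and add it to the task queue."""
    task = MigrationTask(request["vm_id"], request["src"], request["dst"])
    task_queue.put(task)
    return task

def process_next() -> MigrationTask:
    """Task processor: take a task and stash the original migration data."""
    task = task_queue.get()
    task.cache["original"] = {"vm_id": task.vm_id, "state": "snapshotted"}
    task_queue.task_done()
    return task

submit({"vm_id": "vm-42", "src": "host-a", "dst": "host-b"})
done = process_next()
```

Because every task carries its own cache slot, monitoring code can inspect the live-migration data of any in-flight task without touching the others.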

IPC classes

  • G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

84.

SEMANTIC TRUNCATION DETECTION METHOD AND APPARATUS, AND DEVICE AND COMPUTER-READABLE STORAGE MEDIUM

      
Application number CN2022090745
Publication number 2023/137920
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zhao, Shihao
  • Ma, Jun
  • Wang, Shaojun

Abstract

Provided in the present application are a semantic truncation detection method and apparatus, and a device and a computer-readable storage medium. The semantic truncation detection method comprises: acquiring text data to be subjected to detection; acquiring first language material data, and obtaining a plurality of semantic truncation types; determining a semantic truncation type of said text data; and according to the semantic truncation type, performing detection on said text data by means of a preset rule and/or a BERT classification model, so as to obtain a detection result. The BERT classification model is obtained by means of the following steps: acquiring service language material data; selecting a random position for each piece of service text data to perform segmentation, and performing construction to obtain a positive example sentence pair; selecting any two pieces of service text data, and performing construction to obtain a negative example sentence pair; and according to the positive example sentence pair and the negative example sentence pair, constructing a training set, and inputting the training set into an initial BERT model for training, so as to obtain the BERT classification model. The intention of a user can be identified more accurately, and the extra interactions caused by identification failures are reduced, thereby improving the user experience.
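The training-pair construction can be sketched as follows: splitting each text at a random position yields a positive (prefix, continuation) pair, and pairing two unrelated texts yields a negative example. The toy corpus and split policy are illustrative assumptions.

```python
import random

def build_pairs(texts, seed=0):
    """Build (sentence_a, sentence_b, label) pairs: 1 = b continues a."""
    rng = random.Random(seed)
    positives, negatives = [], []
    for t in texts:
        cut = rng.randint(1, len(t) - 1)   # random segmentation position
        positives.append((t[:cut], t[cut:], 1))
    for _ in texts:
        a, b = rng.sample(texts, 2)        # two different service texts
        negatives.append((a, b, 0))
    return positives + negatives

corpus = [
    "please transfer money to my savings account",
    "what is the interest rate on this loan",
    "i would like to report a lost card",
]
pairs = build_pairs(corpus)
```

A sentence-pair classifier (a BERT model in the abstract) trained on such pairs learns to tell whether an utterance was cut off mid-thought.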

IPC classes

  • G06F 40/30 - Semantic analysis
  • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
  • G06F 40/211 - Syntactic parsing, e.g. based on context-free grammars or unification grammars
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

85.

MODEL TRAINING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090760
Publication number 2023/137924
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zhao, Yue
  • Xu, Zhuoyang

Abstract

Embodiments of the present invention relate to the technical field of artificial intelligence, and provides a model training method and apparatus, a computer device, and a storage medium. The training method comprises: obtaining a plurality of pieces of inquiry sample data and corresponding prescription sample data; performing prediction processing on patient sample data and dialogue sample data by means of a pre-training model to obtain prescription prediction data; calculating a first loss value according to a plurality of pieces of drug prediction data; constructing a drug co-occurrence matrix according to the plurality of pieces of drug sample data, and calculating a second loss value; and training the pre-training model according to the first loss value and the second loss value to obtain a prescription recommendation model. According to the embodiments of the present invention, the drug co-occurrence matrix is constructed by means of the prescription sample data, and a drug co-occurrence loss is added according to the drug co-occurrence matrix, so that the correlation among a plurality of drugs is considered, the problem that, in the prescription recommendation model, classifiers are increased along with the increase of the number of the drugs is avoided, and the training efficiency of the model is thus improved.
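The co-occurrence component can be sketched as follows: build a symmetric drug co-occurrence matrix from the prescription samples, then penalise predicted prescriptions that pair drugs rarely or never seen together. The exact loss form below is an illustrative assumption, not the patent's formula.

```python
import numpy as np

def cooccurrence_matrix(prescriptions, n_drugs):
    """Count how often each pair of drugs appears in the same prescription."""
    C = np.zeros((n_drugs, n_drugs))
    for drugs in prescriptions:
        for i in drugs:
            for j in drugs:
                if i != j:
                    C[i, j] += 1
    return C

def cooccurrence_loss(pred, C):
    """Penalise jointly predicted drug pairs in proportion to their rarity."""
    rarity = 1.0 / (1.0 + C)        # rarely co-prescribed pairs -> weight ~1
    np.fill_diagonal(rarity, 0.0)   # a drug co-occurring with itself is free
    joint = np.outer(pred, pred)    # predicted joint presence of each pair
    return float((joint * rarity).sum())

# Drugs 0 and 1 always co-occur in the samples; drug 2 only appears alone.
samples = [[0, 1], [0, 1], [2]]
C = cooccurrence_matrix(samples, 3)

compatible = np.array([1.0, 1.0, 0.0])  # predicts a frequently seen combination
unseen = np.array([1.0, 0.0, 1.0])      # pairs drugs never seen together
```

Adding this term to the classification loss steers the recommender toward drug combinations that actually occur in practice, without adding one classifier per drug.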

IPC classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06F 40/35 - Discourse or dialogue representation
  • G06F 40/289 - Phrasal analysis, e.g. finite state techniques or chunking
  • G06N 3/04 - Architecture, e.g. interconnection topology

86.

DOCUMENT TITLE GENERATION METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM

      
Application number CN2022090434
Publication number 2023/137906
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Tang, Xiaochu
  • Zhang, Yidi
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to artificial intelligence technology. Disclosed is a document title generation method, comprising: dividing original document information into blocks so as to obtain a plurality of pieces of text sub-information, a plurality of pieces of image sub-information and a plurality of pieces of position sub-information; inputting the plurality of pieces of text sub-information into a text encoding model for text encoding, so as to obtain a text feature vector; performing weighted addition on the text feature vector, image features in the plurality of pieces of image sub-information and a multi-dimensional position vector obtained by position encoding, so as to obtain a final input vector, and inputting the final input vector into a transformer encoder model for fusion encoding, so as to obtain a final output feature; and performing feature decoding on the final output feature by using a decoder module so as to obtain a document block containing a title. In addition, the present application further relates to blockchain technology. The image features can be stored in nodes of a blockchain. The present application further provides a document title generation apparatus, an electronic device and a storage medium. The present application may improve accuracy of document title generation.

IPC classes

  • G06V 30/40 - Document-oriented image-based pattern recognition
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G06V 30/148 - Segmentation of character regions
  • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; bootstrap methods, e.g. bagging or boosting

87.

IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM

      
Application number CN2022090713
Publication number 2023/137914
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhai, You
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application relate to the technical field of image processing, and provide an image processing method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining an original image to be processed; performing preliminary matting processing on the original image by means of a backbone network of a pre-trained matting model to obtain an initial foreground image; performing local refinement processing on an edge area of the initial foreground image by means of a fine-tuning network of the matting model to obtain a target foreground image; performing super-resolution reconstruction processing on the target foreground image by means of a pre-trained image reconstruction model to obtain a standard foreground image, wherein the resolution of the standard foreground image is higher than that of the target foreground image; and performing image fusion on the standard foreground image and a preset background image to obtain a target image. The embodiments of the present application can improve the image quality of the matted target image.

IPC classes

  • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction

88.

FEATURE FUSION-BASED BEHAVIOR RECOGNITION METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM

      
Application number CN2022090714
Publication number 2023/137915
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Su, Hang
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the field of artificial intelligence, and provides a feature fusion-based behavior recognition method and apparatus, a device and a storage medium. The method comprises: performing frame extraction on an input video; fusing color information and optical flow information to obtain a fused image; inputting the fused image into a feature extraction network to obtain target features; and classifying the target features to obtain a behavior recognition result. The color information is guided by the optical flow information, which facilitates feature extraction from the fused image; a fused result of the output of a previous first feature extraction module and the output of a previous second feature extraction module serves as the input of a next second feature extraction module; time-dimension and spatial-dimension information are fused to capture semantic information and motion information in the video; and an attention mechanism is introduced into the feature extraction model so that the model pays more attention to regions of interest, improving both the accuracy of behavior recognition and the training efficiency of the model.

IPC classes

  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; bootstrap methods, e.g. bagging or boosting
  • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

89.

GRAPH NEURAL NETWORK-BASED IMAGE SCENE CLASSIFICATION METHOD AND APPARATUS

      
Application number CN2022090725
Publication number 2023/137916
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Wang, Jun

Abstract

The present application relates to the technical field of artificial intelligence. Embodiments of the present application provide a graph neural network-based image scene classification method and apparatus, an electronic device, and a storage medium. The method comprises: performing superpixel segmentation on a target image to be classified so as to obtain a target superpixel segmented image; for each target superpixel unit, updating the state vector of the target superpixel unit according to its own state vector, the state vectors of adjacent target superpixel units, and the edge features between the target superpixel unit and the adjacent target superpixel units, so as to obtain an updated state vector for the target superpixel unit; and inputting the updated state vectors of all target superpixel units into a pre-trained image scene classification model so as to obtain a target scene label. According to the present application, global context information can be effectively obtained, improving the accuracy of the model in image comprehension tasks while avoiding the limitations of costly spatial information.
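The per-unit state update can be sketched as one round of message passing over the superpixel graph; the half-and-half blending rule and the toy two-node graph are assumptions of this sketch, not the patented update.

```python
def update_states(states, adj):
    """One update step: own state, neighbour states, edge features.

    states: {unit: [float, ...]}
    adj:    {unit: [(neighbour, edge_weight), ...]}
    """
    new = {}
    for u, s in states.items():
        agg = [0.0] * len(s)
        for v, w in adj.get(u, []):
            for k, x in enumerate(states[v]):
                agg[k] += w * x          # edge-weighted neighbour message
        # Blend the unit's own state with the aggregated messages (toy rule).
        new[u] = [0.5 * a + 0.5 * b for a, b in zip(s, agg)]
    return new

states = {"sky": [1.0], "sea": [3.0]}
adj = {"sky": [("sea", 1.0)], "sea": [("sky", 1.0)]}
updated = update_states(states, adj)     # each unit now reflects its neighbour
```

Running several such rounds lets each superpixel's state absorb increasingly global context before the classification readout.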

IPC classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

90.

TEXT DATA ANALYSIS METHOD AND APPARATUS, MODEL TRAINING METHOD, AND COMPUTER DEVICE

      
Application number CN2022090738
Publication number 2023/137918
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Jiang, Peng
  • Gao, Peng
  • Qiao, Yixuan

Abstract

A text data analysis method and apparatus, a model training method and apparatus, and a computer device. The analysis method comprises: acquiring text data to be processed and a first emotion label corresponding to the text data (110), wherein the text data comprises a plurality of words; inputting the text data and the first emotion label into a text analysis model, and by means of the text analysis model, extracting an emotion feature sentence from the text data to obtain a first output probability and a second output probability (120), wherein the first output probability is used for representing a prediction probability of each word in the text data being a start word of the emotion feature sentence, and the second output probability is used for representing a prediction probability of each word in the text data being an end word of the emotion feature sentence; and according to the first output probability and the second output probability, determining the emotion feature sentence from the text data (130). The analysis method can extract an emotion feature sentence from text data, has a relatively high extraction efficiency and accuracy, and can be widely applied to the technical field of artificial intelligence.
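The final step of the analysis method, choosing the emotion feature sentence from the two per-word probability sequences, can be sketched directly: pick the (start, end) pair with start ≤ end that maximises the joint score. The probabilities below are toy inputs, not model output.

```python
def extract_span(start_probs, end_probs):
    """Pick the span (i, j), i <= j, maximising start_probs[i] * end_probs[j]."""
    best, span = -1.0, (0, 0)
    for i, p_start in enumerate(start_probs):
        for j in range(i, len(end_probs)):
            score = p_start * end_probs[j]
            if score > best:
                best, span = score, (i, j)
    return span

words = ["the", "service", "was", "truly", "wonderful", "today"]
start = [0.05, 0.10, 0.10, 0.60, 0.10, 0.05]   # P(word is the start word)
end   = [0.05, 0.05, 0.10, 0.10, 0.65, 0.05]   # P(word is the end word)
i, j = extract_span(start, end)
feature_sentence = " ".join(words[i:j + 1])
```

The i ≤ j constraint is what keeps the decoded span well-formed even when the two probability heads disagree.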

IPC classes

  • G06F 40/44 - Statistical methods, e.g. probability models
  • G06F 40/284 - Lexical analysis, e.g. tokenisation or collocates
  • G06F 40/211 - Syntactic parsing, e.g. based on context-free grammar or unification grammars
  • G06N 3/04 - Architecture, e.g. interconnection topology

91.

ARTIFICIAL INTELLIGENCE-BASED INSTANCE SEGMENTATION MODEL TRAINING METHOD AND APPARATUS, AND STORAGE MEDIUM

      
Application number CN2022090748
Publication number 2023/137921
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Chen, Zhenhong
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the technical field of artificial intelligence, and provides an artificial intelligence-based instance segmentation model training method and apparatus, and a storage medium. The method comprises: obtaining a long-tail distribution image data set; obtaining a first sample and a second sample from the long-tail distribution image data set; cutting the first sample according to first position information to obtain a target tail category image; obtaining a first size and a second size; determining target application position information of the target tail category image in the second sample according to the first position information, the first size and the second size; applying the target tail category image to the second sample according to the target application position information to obtain training data; and obtaining a preset instance segmentation model, and training the instance segmentation model according to the training data and a preset loss function to obtain a target instance segmentation model. According to the technical solution of the present application, data category distribution of long-tail distribution image data can be effectively balanced, and the accuracy of the instance segmentation model is improved.
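The placement step, mapping the tail-category crop's position from the first sample into the second sample using the two image sizes, can be sketched as proportional rescaling; the exact mapping and the pixel-list representation are assumptions of this sketch, not the patented rule.

```python
def scale_position(box, first_size, second_size):
    """Map an (x, y, w, h) box from the first image into the second image
    by scaling proportionally between the two (width, height) sizes."""
    sx = second_size[0] / first_size[0]
    sy = second_size[1] / first_size[1]
    x, y, w, h = box
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

def paste(canvas, crop, pos):
    """Apply the tail-category crop to the second sample at pos."""
    x, y, w, h = pos
    for dy, row in enumerate(crop[:h]):
        for dx, p in enumerate(row[:w]):
            canvas[y + dy][x + dx] = p
    return canvas

pos = scale_position((10, 10, 4, 4), (100, 100), (50, 50))
sample2 = paste([[0] * 10 for _ in range(10)], [[1] * 4 for _ in range(4)], pos)
```

Repeating this for many tail-class crops is what rebalances the category distribution of the long-tail training set.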

IPC classes

  • G06T 7/10 - Segmentation; Edge detection

92.

VOICE MESSAGE GENERATION METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM

      
Application number CN2022090752
Publication number 2023/137922
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Jia, Yunshu
  • Shu, Chang
  • Chen, Youxin

Abstract

Embodiments of the present application provide a voice message generation method and apparatus based on facial expression recognition, a computer device and a storage medium, and relate to the technical field of artificial intelligence. The voice message generation method based on facial expression recognition comprises: obtaining voice data and a corresponding facial image, performing voice recognition on the voice data to obtain a text message, and performing facial expression recognition on the facial image to obtain a facial expression message; inputting the text message and the facial expression message into a first model, and obtaining an answer text message by means of the first model according to the text message and the facial expression message; and finally performing voice conversion on the answer text message to obtain a corresponding answer voice message. According to the embodiments of the present application, the facial image is added as an input to the chatbot; the current scene can be determined more accurately by recognizing the facial image, the first model obtains the answer text message according to the text message and the facial expression message, and the answer text message is converted into a voice reply message, such that the accuracy of the voice reply message is improved.

IPC classes

  • G10L 15/26 - Speech-to-text systems
  • G10L 15/25 - Speech recognition using non-acoustical features, such as position of the lips, movement of the lips or face analysis
  • G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of the groups, characterised by the analysis technique, using neural networks
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of the groups, specially adapted for particular use, for comparison or discrimination, for estimating an emotional state
  • G06F 40/30 - Semantic analysis
  • G06F 40/289 - Phrasal analysis, e.g. finite-state techniques or chunking
  • G06F 40/242 - Dictionaries
  • G06F 40/216 - Parsing using statistical methods

93.

PERSON RE-IDENTIFICATION METHOD AND APPARATUS BASED ON POSTURE GUIDANCE, AND DEVICE AND STORAGE MEDIUM

      
Application number CN2022090753
Publication number 2023/137923
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-27
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhai, You
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the field of artificial intelligence. Provided in the embodiments of the present application are a person re-identification method and apparatus based on posture guidance, and a device and a storage medium. The method comprises the following steps: performing preprocessing of posture identification on a target person image, so as to generate local features of a plurality of body parts; inputting the plurality of local features into a person body attention module for training, so as to generate an attention mask; processing the attention mask and a first three-dimensional feature, so as to obtain a second three-dimensional feature; inputting the second three-dimensional feature into a second-order information attention module for processing, so as to obtain a covariance matrix of a two-dimensional matrix; and performing attention calculation on the covariance matrix, so as to obtain an output result of a second-order matrix. In the technical solution of the embodiments, each body part is determined by means of the person body attention module, preventing an occluding object from affecting person identification; in addition, the degree of association between the parts is increased by means of the second-order information module, so that the information expression of the human body is highlighted while the information expression of the background and the occluding object is suppressed.
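The second-order information module centres on a covariance matrix over feature vectors, which can be written out directly; the toy 2-D features below stand in for the attention-weighted features of the real model.

```python
def covariance(feats):
    """Covariance matrix of a set of feature vectors (second-order statistics)."""
    n, d = len(feats), len(feats[0])
    mean = [sum(f[k] for f in feats) / n for k in range(d)]
    return [[sum((f[i] - mean[i]) * (f[j] - mean[j]) for f in feats) / n
             for j in range(d)] for i in range(d)]

# Two feature vectors that vary only along the first dimension.
cov = covariance([[1.0, 0.0], [3.0, 0.0]])
```

Off-diagonal entries of this matrix are what encode the pairwise association between parts that the abstract refers to.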

IPC classes

  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G06V 10/44 - Local feature extraction by analysing parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
  • G06V 10/62 - Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking

94.

AFFAIR INFORMATION QUERY METHOD AND APPARATUS, AND COMPUTER DEVICE AND STORAGE MEDIUM

      
Application number CN2022089316
Publication number 2023/134057
Status Granted - in force
Filing date 2022-04-26
Publication date 2023-07-20
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Shu, Chang
  • Chen, Youxin

Abstract

The embodiments of the present application belong to the field of artificial intelligence, are applied to the field of smart government affairs, and relate to an affair information query method. The method comprises: acquiring affair information text; identifying events from the affair information text and a logic relationship between the events, and identifying ontologies from the events and an ontology relationship between the ontologies; constructing a semantic network framework according to the events, the logic relationship, the ontologies and the ontology relationship; on the basis of the semantic network framework, generating an affair knowledge graph which corresponds to the affair information text; acquiring affair query text; and calculating a semantic similarity between the affair query text and each node in the affair knowledge graph, and determining an affair information query result according to the semantic similarities and the affair knowledge graph. The present application further provides an affair information query apparatus, a computer device and a storage medium. Moreover, the present application further relates to blockchain technology. The affair information text may be stored in a blockchain. The present application improves the affair information acquisition efficiency.
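The query step, scoring the affair query text against each node of the knowledge graph, can be sketched with token-set Jaccard similarity standing in for the semantic similarity model; the node texts are invented examples.

```python
def best_node(query, node_texts):
    """Return the knowledge-graph node text most similar to the query."""
    def jaccard(a, b):
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
    return max(node_texts, key=lambda n: jaccard(query, n))

nodes = ["apply for business licence", "renew passport", "register vehicle"]
hit = best_node("how do i renew my passport", nodes)
```

In the described method the similarity would come from a semantic model rather than token overlap, and the query result would then be assembled from the matched node and its graph neighbourhood.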

IPC classes

95.

IMAGE FEATURE EXTRACTION METHOD, APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE

      
Application number CN2022089692
Publication number 2023/134064
Status Granted - in force
Filing date 2022-04-27
Publication date 2023-07-20
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Qiao, Yixuan
  • Chen, Hao

Abstract

The present application relates to the field of information technology and discloses an image feature extraction method, an apparatus, a storage medium, and a computer device, with the primary aim of improving the precision of image feature extraction. The method comprises: obtaining an image to be processed in an actual service scenario; dividing the image to be processed into a plurality of sub-images, and determining a clear target sub-image from among the plurality of sub-images; inputting the target sub-image into a preset image feature extraction model and performing feature extraction to obtain a first image feature vector corresponding to the target sub-image; determining second image feature vectors corresponding to the remaining sub-images in the plurality of sub-images on the basis of the first image feature vector and the respective position information of the plurality of sub-images in the image to be processed; and determining a third image feature vector corresponding to the image to be processed on the basis of the first image feature vector and the second image feature vectors. The method is applicable to the extraction of image features.
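One hypothetical way to derive the second image feature vectors from the first vector plus position information is to condition a copy of the clear sub-image's vector on each sub-image's relative position; the offset encoding below is an assumption of this sketch, not the method the patent specifies.

```python
def propagate_features(first_vec, clear_pos, positions):
    """Second vectors for the remaining sub-images: the clear sub-image's
    vector extended with each sub-image's relative (row, col) offset."""
    out = {}
    for name, (row, col) in positions.items():
        offset = [float(row - clear_pos[0]), float(col - clear_pos[1])]
        out[name] = list(first_vec) + offset    # position-conditioned copy
    return out

first_vec = [0.5, 0.9]            # feature vector of the clear sub-image
second_vecs = propagate_features(first_vec, (0, 0),
                                 {"right": (0, 1), "below": (1, 0)})
```

The third, whole-image vector would then be some aggregation of the first vector and these second vectors.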

IPC classes

  • G06V 10/44 - Local feature extraction by analysing parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06T 7/10 - Segmentation; Edge detection
  • G06N 3/04 - Architecture, e.g. interconnection topology

96.

VIRTUAL PRIVATE CLOUD SERVICE ACCESS METHOD, APPARATUS AND DEVICE, AND STORAGE MEDIUM

      
Application number CN2022089868
Publication number 2023/134066
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-07-20
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Wang, Yan
  • Meng, Xianyu

Abstract

The present application relates to artificial intelligence, and provides a virtual private cloud service access method, apparatus and device, and a storage medium. The method comprises: accessing a target service by means of a virtual firewall component and a first address mapping in the virtual firewall component, and accessing an interface service on the basis of the target service, wherein the first address mapping is a mapping between the virtual private cloud IP of the accessed virtual private cloud service and the IP of the host to which the interface service belongs; and determining a service IP of the accessed virtual private cloud service by means of the interface service, and forwarding the access request to the accessed virtual private cloud service according to the service IP. On the basis of the address mapping in the virtual firewall component, mutual access is realized between objects inside and outside the virtual private cloud that are otherwise isolated from each other, avoiding access failures caused by the isolation of the virtual private cloud, thereby improving the access success rate of the virtual private cloud service and thus the user experience.
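The two-hop lookup the abstract describes can be sketched as two table lookups, one in the firewall's address mapping and one in the interface service's registry; the addresses, table names and registry contents below are invented examples, not the patented component.

```python
# First address mapping, held by the virtual firewall component:
# VPC IP of the accessed service -> IP of the interface service's host.
FIREWALL_MAP = {"10.0.1.5": "192.168.0.20"}

# The interface service's own registry: service name -> VPC service IP.
SERVICE_REGISTRY = {"orders": "10.0.1.5"}

def route(service_name):
    """Resolve the forwarding plan for one access request."""
    vpc_ip = SERVICE_REGISTRY[service_name]
    interface_host = FIREWALL_MAP[vpc_ip]   # hop 1: through the firewall
    return {"via": interface_host, "forward_to": vpc_ip}   # hop 2: forward

plan = route("orders")
```

Keeping the mapping inside the firewall is what lets traffic cross the VPC isolation boundary without exposing the VPC addresses directly.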

IPC classes

  • H04L 9/40 - Network security protocols
  • H04L 61/10 - Mapping addresses of different types
  • H04L 61/2503 - Translation of Internet protocol [IP] addresses
  • H04L 61/4511 - Network directories; Name-to-address mapping using standardised directory access protocols, using the domain name system [DNS]
  • H04L 67/10 - Protocols in which an application is distributed across nodes in the network

97.

DIGIT RECOGNITION MODEL TRAINING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM

      
Application number CN2022089871
Publication number 2023/134068
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-07-20
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Chen, Zhenhong
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the fields of artificial intelligence and image recognition, and specifically discloses a digit recognition model training method and apparatus, a device, and a storage medium. The method comprises: obtaining a sample image and a digit label corresponding to the sample image; performing image clipping on the sample image, and using the remainder of the image after image clipping as a first training image; performing data enhancement on the sample image to obtain a second training image; inputting the first training image and the second training image into a neural network respectively to obtain a first output value corresponding to the first training image and a second output value corresponding to the second training image, and according to the digit label, calculating a loss function value of the neural network and the similarity between the first output value and the second output value; and iteratively training the neural network according to the loss function value and the similarity, and when the neural network converges, using the neural network as a digit recognition model.
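The combined objective, a classification loss on both views plus a reward for the similarity of their outputs, can be sketched with plain lists; the cosine similarity and the weighting factor alpha are illustrative choices, and the output vectors below stand in for the neural network's outputs.

```python
import math

def cosine(u, v):
    """Cosine similarity between two output vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single sample."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[label] / sum(exps))

def combined_loss(out_clipped, out_augmented, label, alpha=0.5):
    """Classification loss on both views, minus a similarity reward."""
    ce = cross_entropy(out_clipped, label) + cross_entropy(out_augmented, label)
    return ce - alpha * cosine(out_clipped, out_augmented)

loss_agree = combined_loss([2.0, 0.0], [2.0, 0.0], label=0)
loss_differ = combined_loss([2.0, 0.0], [0.0, 2.0], label=0)
```

Training to a lower combined loss pushes the network to both classify each view correctly and produce consistent outputs for the clipped and augmented views of the same digit.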

IPC classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

98.

ENTITY RELATIONSHIP IDENTIFICATION METHOD, DEVICE, AND READABLE STORAGE MEDIUM

      
Application number CN2022089938
Publication number 2023/134069
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-07-20
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Yang, Kun
  • Wang, Yanmeng
  • Wang, Shaojun

Abstract

The present application relates to the technical field of artificial intelligence, and provides an entity relationship identification method, a device, and a readable storage medium. The method comprises: obtaining a word vector of each word in a training sample by using a pre-constructed entity relationship identification model; obtaining an entity vector of each entity in all entity pairs of the training sample according to the obtained word vectors; performing weighted summation processing on a term vector of each entity of the entity pairs by means of a preset weight matrix to obtain an additional feature vector of each entity of the entity pairs; obtaining a predicted probability for an entity pair relationship category by means of an activation function; completing iterative training of the constructed entity relationship identification model by means of a pre-constructed loss function; and performing, by means of the trained entity relationship identification model, entity relationship identification on text to be identified. The main objective of the present application is to cyclically train the entity relationship identification model by means of the constructed loss function, so as to solve the existing problem of a poor identification effect in an entity identification process.
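The weighted-summation step, turning an entity's term vectors into an additional feature vector via a row of the preset weight matrix, is a short computation; the vectors and weights below are toy values, not the model's.

```python
def additional_feature(term_vecs, weight_row):
    """Weighted summation of an entity's term vectors."""
    dim = len(term_vecs[0])
    return [sum(w * v[k] for w, v in zip(weight_row, term_vecs))
            for k in range(dim)]

# A two-term entity with 2-D term vectors, weighted 0.25 / 0.75.
extra = additional_feature([[1.0, 0.0], [0.0, 1.0]], [0.25, 0.75])
```

The resulting vector supplements the plain entity vector before the activation function produces the relation-category probability.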

IPC classes

99.

PERSON RE-IDENTIFICATION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

      
Application number CN2022089991
Publication number 2023/134071
Status Granted - in force
Filing date 2022-04-28
Publication date 2023-07-20
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s)
  • Zheng, Ximin
  • Zhai, You
  • Shu, Chang
  • Chen, Youxin

Abstract

The present application relates to the technical field of artificial intelligence, and provides a person re-identification method and apparatus, an electronic device and a storage medium. The method comprises: inputting a first image sequence into a preset posture identification network so as to obtain a first local feature of each body part of a person to be identified; inputting the first image sequence into a preset multi-layer convolutional neural network so as to obtain a first global feature of said person; performing first fusion on the first local features of the plurality of body parts and the first global feature so as to obtain a second local feature of each body part; and inputting the second local features of the plurality of body parts of said person into a pre-trained person re-identification model, and outputting a person re-identification result. According to the present application, the accuracy of person re-identification is improved. The present application further relates to blockchain technology, and the first image sequence is stored in a blockchain node.

IPC classes

  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

100.

TEXT TOPIC GENERATION METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE, DEVICE, AND MEDIUM

      
Application number CN2022090163
Publication number 2023/134075
Status Granted - in force
Filing date 2022-04-29
Publication date 2023-07-20
Owner PING AN TECHNOLOGY (SHENZHEN) CO., LTD. (China)
Inventor(s) Chen, Hao

Abstract

The present application relates to the technical field of artificial intelligence, and discloses a text topic generation method and apparatus based on artificial intelligence, a device, and a medium. The method comprises: obtaining a target text set; generating a sentence vector for each target text in the target text set; clustering the sentence vectors by using a K-Means clustering algorithm and a preset number of clusters to obtain multiple sentence vector cluster sets; and respectively performing TF-IDF weight value calculation and word extraction on the target texts corresponding to a specified sentence vector cluster set, so as to obtain a target text topic corresponding to the specified sentence vector cluster set, wherein the specified sentence vector cluster set is any one of the sentence vector cluster sets. Therefore, the combination of a statistical method and a semantic information-based method is implemented, the generalization is improved, and the accuracy of a determined text topic is improved.
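After clustering (omitted here), the topic of one cluster is the words with the highest TF-IDF weight across that cluster's texts; the whitespace tokenisation and top-k choice below are illustrative assumptions, not the patented extraction rule.

```python
import math
from collections import Counter

def tfidf_topic(cluster_texts, all_texts, top_k=2):
    """Top-k TF-IDF words of one cluster, with IDF computed over all texts."""
    docs = [t.lower().split() for t in all_texts]
    df = Counter(w for d in docs for w in set(d))       # document frequency
    tf = Counter(w for t in cluster_texts for w in t.lower().split())
    total = sum(tf.values())
    scores = {w: (c / total) * math.log(len(docs) / df[w]) for w, c in tf.items()}
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [w for w, _ in ranked[:top_k]]

all_texts = ["cats cats purr", "cats sleep", "stocks fell today", "stocks rose today"]
topic = tfidf_topic(all_texts[:2], all_texts)   # topic of the first cluster
```

Because IDF is computed over the whole target text set, words shared across clusters are down-weighted and each cluster's topic words stay distinctive.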

IPC classes

  • G06F 40/211 - Syntactic parsing, e.g. based on context-free grammar or unification grammars
  • G06F 16/35 - Clustering; Classification
  • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions