Methods, systems, and apparatus, including computer programs encoded on computer storage media, for recognizing speech using a spiking neural network acoustic model implemented on a neuromorphic processor are described. In one aspect, a method includes receiving, by a trained acoustic model implemented as a spiking neural network (SNN) on a neuromorphic processor of a client device, a set of feature coefficients that represent acoustic energy of input audio received from a microphone communicably coupled to the client device. The acoustic model is trained to predict speech sounds based on input feature coefficients. The acoustic model generates output data indicating predicted speech sounds corresponding to the set of feature coefficients that represent the input audio received from the microphone. The neuromorphic processor updates one or more parameters of the acoustic model using one or more learning rules and the predicted speech sounds of the output data.
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 15/197 - Probabilistic grammars, e.g. word n-grams
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
G10L 15/30 - Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being power information
2.
ROBOTIC ASSEMBLY INSTRUCTION GENERATION FROM A VIDEO
In some implementations, a robot host may receive a video associated with assembly using a plurality of sub-objects. The robot host may determine spatio-temporal features based on the video and may identify a plurality of actions represented in the video based on the spatio-temporal features. The robot host may map the plurality of actions to the plurality of sub-objects to generate an assembly plan and may combine output from a point cloud model and output from a color embedding model to generate a plurality of sets of coordinates corresponding to the plurality of sub-objects. The robot host may perform object segmentation to estimate a plurality of grip points and a plurality of widths corresponding to the plurality of sub-objects. Accordingly, the robot host may generate instructions, for robotic machines, based on the assembly plan, the plurality of sets of coordinates, the plurality of grip points, and the plurality of widths.
A method for detection of a corrupted sensor including providing a first sensor; identifying one or more correlating sensors to the first sensor; determining a correlation between the first sensor and the correlating sensors according to historical sensor values; obtaining a calculated value of the first sensor based on values of the correlating sensors and the correlation; obtaining a measured value of the first sensor; and determining whether the first sensor is corrupted according to a difference between the calculated value and the measured value of the first sensor.
G01D 9/12 - Producing one or more recordings of the values of a single variable the recording element, e.g. stylus, being controlled in accordance with the variable, and the recording medium, e.g. paper roll, being controlled in accordance with time recording occurring continuously
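The corrupted-sensor check described above can be sketched in a few lines. This is a minimal sketch under assumptions the abstract does not make: a single correlating sensor and a simple least-squares linear model standing in for the learned correlation; the function names are illustrative.

```python
def fit_correlation(history_first, history_other):
    """Fit first ~ a * other + b from historical sensor values (least squares)."""
    n = len(history_other)
    mean_x = sum(history_other) / n
    mean_y = sum(history_first) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(history_other, history_first))
    var = sum((x - mean_x) ** 2 for x in history_other)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b


def is_corrupted(measured, other_value, model, tolerance):
    """Flag the first sensor when its measured value diverges from the
    value calculated from the correlating sensor."""
    a, b = model
    calculated = a * other_value + b
    return abs(calculated - measured) > tolerance
```

For a first sensor that historically reads twice its neighbour, `fit_correlation([2, 4, 6], [1, 2, 3])` recovers a slope of 2, and a reading far from twice the neighbour's value is flagged as corrupted.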
4.
UPPER EXTREMITY PROSTHETIC DEVICE WITH ENHANCED SPRING DESIGNS
Springs can provide energy return and have a conductivity that changes in relation to an amount of strain or deformation of the spring. An upper-extremity prosthetic device includes a first coil spring coupled to a first member and a first cantilever spring extending from the first member to a surface adapted to engage with an object. The first coil spring is arranged to absorb energy and to provide energy return in response to movement of the first member. The first coil spring includes a first conductive surface and a second conductive surface separated from the first conductive surface by non-conductive surfaces. The first cantilever spring includes a conductive trace with a plurality of conductive segments arranged on the conductive trace.
A61F 5/01 - Orthopaedic devices, e.g. long-term immobilising or pressure directing devices for treating broken or deformed bones such as splints, casts or braces
In some implementations, a dataset evaluation system may receive a target dataset. The dataset evaluation system may process the target dataset to generate a normalized target dataset. The dataset evaluation system may process the normalized target dataset with an intruder dataset to identify whether any quasi-identifiers are present in the normalized target dataset. The dataset evaluation system may determine a Cartesian product of the normalized target dataset and the intruder dataset. The dataset evaluation system may compute, using a distance linkage disclosure technique, an inference risk score for the target dataset with the intruder dataset based on the Cartesian product and whether any quasi-identifiers are present in the normalized target dataset. The dataset evaluation system may output information associated with the inference risk score.
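A toy version of the Cartesian-product and distance-linkage steps might look as follows. The uniqueness-of-nearest-neighbour scoring is an illustrative stand-in, since the abstract does not define the exact distance linkage disclosure technique; record and dataset layouts are also assumptions.

```python
from itertools import product
import math


def pairwise_distances(target, intruder):
    """Cartesian product of the two datasets, with Euclidean distance per pair."""
    return {
        (i, j): math.dist(t, u)
        for (i, t), (j, u) in product(enumerate(target), enumerate(intruder))
    }


def inference_risk_score(target, intruder):
    """Toy linkage score: the share of intruder records whose nearest target
    record is strictly closer than every other target record."""
    dists = pairwise_distances(target, intruder)
    linked = 0
    for j in range(len(intruder)):
        col = sorted(dists[(i, j)] for i in range(len(target)))
        if len(col) < 2 or col[0] < col[1]:
            linked += 1
    return linked / len(intruder)
```

A score near 1.0 means most intruder records link unambiguously to a single target record, which in this toy model indicates a higher disclosure risk.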
In some examples, energy cost reduction of metaverse operations may include generating a unified model of What-IF scenarios. For a semantic association graph of organization avatar entities and for each logically independent IF scenario of a plurality of logically independent IF scenarios of the What-IF scenarios, a sub-metaverse of semantically connected organization avatar entities may be determined. State transitions of the semantically connected organization avatar entities may be iteratively performed until the sub-metaverse reaches a stationarily stable state or an operating limit. A determination may be made as to whether a goal condition is met in the sub-metaverse. For each of the logically independent IF scenarios for which the goal condition is met, an overall energy cost may be determined, and a logically independent IF scenario that includes a minimum energy cost may be identified and used to control an operation for an organization entity.
Methods, systems and apparatus, including computer programs encoded on computer storage medium, for machine unlearning. In one aspect, a method includes receiving a request to remove a client dataset from a machine learning model, the model being associated with noise sensitivities determined during training of the model on respective client datasets including the client dataset; and in response to receiving the request: identifying, from stored noise sensitivities of the client, a most recent training iteration that produced a noise sensitivity that is below a predetermined threshold that is based on a noise standard deviation and predefined target privacy parameters; updating parameters of the model, comprising adding noise to model parameters for the most recent training iteration; and performing subsequent iterations of training of the model, wherein the model is initialized with the updated parameters and the subsequent iterations train the model on datasets excluding the dataset owned by the client.
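The checkpoint-selection and noise-addition steps can be sketched as below. Retraining itself is omitted, and the list-of-floats parameter representation, the `unlearn` name, and the fixed seed are assumptions for illustration only.

```python
import random


def unlearn(checkpoints, sensitivities, threshold, noise_std, seed=0):
    """Pick the most recent iteration whose recorded noise sensitivity for the
    departing client is below the threshold, perturb its parameters with
    Gaussian noise, and return (iteration, parameters) as the restart point
    for retraining on the remaining datasets."""
    rng = random.Random(seed)
    # Latest iteration index with sensitivity below the threshold.
    start = max(i for i, s in enumerate(sensitivities) if s < threshold)
    params = checkpoints[start]
    return start, [p + rng.gauss(0.0, noise_std) for p in params]
```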
Example computer-implemented methods, media, and systems for improving experience and performance of applications over 5G networks are disclosed. One example computer-implemented method includes establishing multiple signaling message quality of service (QoS) flows of an application over a communications network. Multiple data message QoS flows of the application are established over the communications network. The multiple signaling message QoS flows are sent to a user device through an ultra-reliable low latency communication (URLLC) slice over the communications network. The multiple data message QoS flows are sent to the user device through an enhanced mobile broadband (eMBB) slice of the communications network. The multiple signaling message QoS flows are mapped to first multiple data radio bearers (DRBs). The multiple data message QoS flows are mapped to second multiple DRBs. One or more services associated with the application are provided to the user device based on the first and the second multiple DRBs.
Aspects of the present disclosure provide systems, methods, and computer-readable storage media that support automated source code review using sentiment analysis with magnitude of entities. Known compliant and non-compliant source code may be used to generate dictionaries for evaluating lines of code using AI and ML techniques, such as by clustering data entities (lines of software code) and performing sentiment analysis on the data entities (lines of software code) which accounts for a magnitude of the data entities in the software code. The dictionaries enable automated review and correction of non-compliant code, such as vulnerable or insecure code, during the coding process. For example, sentiment analysis may be performed using the dictionaries on in-development code to determine a polarity and magnitude score for each line of code. The scores for each line can be compared to one or more conditions to determine a remediation action for individual lines of code.
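A dictionary-based line score of the kind described might be sketched as follows. The substring matching and the polarity/magnitude arithmetic are illustrative simplifications of the clustering-and-sentiment pipeline; the dictionaries here are tiny placeholders for the ones generated from known compliant and non-compliant code.

```python
def score_line(line, insecure_terms, secure_terms):
    """Toy sentiment score for one line of code: terms from the non-compliant
    dictionary pull polarity down, compliant terms push it up; magnitude is
    the total number of matched terms."""
    neg = sum(term in line for term in insecure_terms)
    pos = sum(term in line for term in secure_terms)
    polarity = pos - neg
    magnitude = pos + neg
    return polarity, magnitude
```

A line with negative polarity would then be compared against remediation conditions, e.g. flagged for review once its magnitude exceeds some threshold.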
A method of producing a textile sensor includes: obtaining an organic fabric; carbonizing the organic fabric by applying heat to the organic fabric in an inert environment to form a conductive fabric; and attaching one or more electrical terminals to the conductive fabric. The method includes coating the conductive fabric with a polymeric encapsulating material. The method includes, for each of the one or more electrical terminals, connecting a first end of a flexible conductor to the electrical terminal and connecting a second end of each flexible conductor to a wireless interface printed circuit board. The textile sensor comprises at least one of a pressure sensor, a proximity sensor, a touch sensor, a strain sensor, a wind sensor, a temperature sensor, a heating element, a triboelectric sensor, and an energy harvester.
D06M 11/74 - Treating fibres, threads, yarns, fabrics or fibrous goods made from such materials, with inorganic substances or complexes thereof; Such treatment combined with mechanical treatment, e.g. mercerising with carbon or compounds thereof with graphitic acids or their salts
D06M 15/03 - Polysaccharides or derivatives thereof
D06M 15/643 - Macromolecular compounds obtained otherwise than by reactions only involving carbon-to-carbon unsaturated bonds containing silicon in the main chain
D06M 15/693 - Treating fibres, threads, yarns, fabrics or fibrous goods made from such materials with macromolecular compounds; Such treatment combined with mechanical treatment with natural or synthetic rubber, or derivatives thereof
D06M 23/10 - Processes in which the treating agent is dissolved or dispersed in organic solvents; Processes for the recovery of organic solvents thereof
H05K 3/32 - Assembling printed circuits with electric components, e.g. with resistor electrically connecting electric components or wires to printed circuits
11.
UTILIZING QUANTUM COMPUTING AND A POWER OPTIMIZER MODEL TO DETERMINE OPTIMIZED POWER INSIGHTS FOR A LOCATION
A device may receive input data that includes demographic data, power demand data, power source data, power route data, technology data, industry data, and problem data associated with a geographic location, and may identify a section of the geographic location from the demographic data. The device may identify power sources of the section, and may estimate power generation and power demand for the section. The device may determine whether the power demand is greater than the power generation for the section. The device may utilize a quantum computer and a power optimizer model with the input data associated with the section to determine optimized power insights for the section based on determining that the power demand is greater than the power generation for the section, and may perform actions based on the optimized power insights for the section.
Aspects of the present disclosure provide systems, methods, apparatus, and computer-readable storage media that support improved watermarking and fingerprinting of a shared dataset. To illustrate, clustering may be performed on the dataset using initial clustering parameters (e.g., a secret key) to assign each record (e.g., attribute) of the dataset to one of multiple clusters. The secret key may be selected by a user or determined automatically based on the clustering algorithm. After the clustering, the records of each cluster may be selected for embedding a portion of fingerprint data based on one or more security parameters (e.g., a hash function, priority values, even/odd selection, etc.). The selected records (or portions thereof) may be replaced with corresponding portions of the fingerprint data to embed the fingerprint data within different records as watermarking. Aspects also include analyzing a dataset to verify whether watermarking is present and to extract a fingerprint.
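The keyed record-selection step can be sketched with a keyed hash standing in for the secret-key-seeded clustering algorithm. The `select_records` name and the first-byte bucketing rule are illustrative assumptions; the essential property shown is that the same key always reproduces the same partition, so the fingerprint can later be located and extracted.

```python
import hashlib


def select_records(records, secret_key, n_clusters):
    """Deterministically bucket record indices into clusters using a keyed
    hash, standing in for secret-key-parameterised clustering."""
    clusters = {c: [] for c in range(n_clusters)}
    for idx, rec in enumerate(records):
        digest = hashlib.sha256(f"{secret_key}:{rec}".encode()).digest()
        clusters[digest[0] % n_clusters].append(idx)
    return clusters
```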
The present disclosure relates to electronic devices and methods of manufacturing electronic devices. A method of manufacturing a dissolvable electronic device includes forming a dissolvable sheet; applying a self-sintering agent to the dissolvable sheet to form a substrate; and depositing electrically conductive ink onto the substrate in a trace. A method of manufacturing a meltable electronic device includes mixing a conductive material with a melted wax to form a conductive wax mixture in liquid form; molding the conductive wax mixture; and solidifying the conductive wax mixture to obtain the meltable electronic device. A method of manufacturing an edible electronic device includes cutting a layer of conductive material to form a pattern that defines a circuit; applying the layer of conductive material to an edible medium, wherein the edible medium is in liquid or semi-solid form; and solidifying the edible medium to obtain the edible electronic device.
H05K 3/12 - Apparatus or processes for manufacturing printed circuits in which conductive material is applied to the insulating support in such a manner as to form the desired conductive pattern using printing techniques to apply the conductive material
A23G 1/00 - Cocoa; Cocoa products, e.g. chocolate; Substitutes therefor
A23G 1/32 - Cocoa products, e.g. chocolate; Substitutes therefor characterised by the composition
A23G 1/54 - Composite products, e.g. layered, coated, filled
A23G 7/00 - Other apparatus specially adapted for the chocolate or confectionery industry
B33Y 40/20 - Post-treatment, e.g. curing, coating or polishing
B33Y 70/10 - Composites of different types of material, e.g. mixtures of ceramics and polymers or mixtures of metals and biomaterials
G01D 5/24 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage by varying capacitance
H05K 3/00 - Apparatus or processes for manufacturing printed circuits
H05K 3/20 - Apparatus or processes for manufacturing printed circuits in which conductive material is applied to the insulating support in such a manner as to form the desired conductive pattern by affixing prefabricated conductor pattern
Methods, systems, and computer-readable storage media for receiving, by a self-healing platform within a container orchestration system, fault data that is representative of two or more error events occurring within a cluster provisioned within the container orchestration system, determining, by the self-healing platform, a set of actions to be executed in response to the two or more error events, providing, by the self-healing platform, a priority value for each error event of the two or more error events, and transmitting, by the self-healing platform, instructions to execute actions in the set of actions based on respective priority values of the two or more error events.
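The action-selection and priority-ordering steps above can be sketched as follows. The `playbook` and `priority` mappings are illustrative stand-ins for the platform's learned event-to-action and prioritization logic.

```python
def plan_remediation(error_events, playbook, priority):
    """Look up a remediation action for each error event and return the
    actions in descending priority order."""
    actions = [(priority[e], playbook[e]) for e in error_events]
    return [action for _, action in sorted(actions, key=lambda pa: -pa[0])]
```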
A device may receive source compound simplified molecular-input line-entry (SMILE) data, target compound SMILE data, and a latent space representing compounds, and may project the source compound SMILE data and the target compound SMILE data into the latent space to generate a source compound tensor and a target compound tensor, respectively. The device may process the source compound tensor, with one or more pretrained models, to determine a reward for the source compound tensor, and may determine, based on the reward, a direction and a magnitude to move in the latent space from the source compound tensor. The device may move the direction and the magnitude in the latent space to a new compound tensor, and may determine whether the new compound tensor matches the target compound tensor. The device may return a policy based on the new compound tensor matching the target compound tensor.
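Per step, the move-and-compare loop over the latent space reduces to vector arithmetic. The list representation of the compound tensors and the tolerance check are illustrative; the reward model that supplies the direction and magnitude is outside this sketch.

```python
def move_in_latent_space(source, direction, magnitude):
    """One move in the latent space: new = source + magnitude * direction."""
    return [s + magnitude * d for s, d in zip(source, direction)]


def matches(a, b, tol=1e-6):
    """Check whether two latent tensors coincide within a tolerance."""
    return all(abs(x - y) <= tol for x, y in zip(a, b))
```

The outer loop would repeat `move_in_latent_space` with rewards recomputed at each new tensor until `matches(new, target)` holds, then decode the path into a policy.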
The proposed systems and methods are directed to explainability-augmented AI systems. These systems are configured to automatically identify, based on one or more of metadata associated with labels assigned to sample data and responses to AI-system-related questionnaires, one or more reasons that support the decisions made by an AI model in response to user queries. The proposed systems apply natural language processing (NLP) to transform the explainability data (e.g., metadata and questionnaire data) to generate human reader-friendly output that summarizes the reasoning by which the AI system made a specific decision and offer transparency into the AI decision-making process.
Systems and methods to facilitate the identification of connections or relationships in a network that are high impact in order to generate recommendations for future network growth are disclosed. The embodiments convert network maps into graphs comprising nodes and edges. The system identifies the edge that, when removed, causes the greatest impact on the network as a whole. In one embodiment, the system can be used to identify locations for installation of electric vehicle charging stations.
This disclosure relates to cloud migration. In some aspects, a method includes receiving, by one or more computing devices, a plurality of parameters associated with an on-premises system to be migrated to a cloud architecture, the plurality of parameters including an identifier of the on-premises system, identifiers of components of the on-premises system, and migration requirements; extracting, from the plurality of parameters, a set of input parameters substantially affecting a migration of the on-premises system to the cloud architecture; identifying a target cloud architecture, selected from a plurality of cloud architectures, that i) is compliant with the set of input parameters, and ii) satisfies one or more threshold conditions associated with the migration; determining a set of output parameters representing features of the target cloud architecture; and training a neural network model using the set of input parameters and the set of output parameters.
This disclosure relates generally to automatic speech recognition (ASR) and is particularly directed to automatic, efficient, and intelligent detection of transcription bias in ASR models. Contrary to traditional approaches to the testing of ASR bias, the example implementations disclosed herein do not require actual test speeches and corresponding ground-truth texts. Instead, test speeches may be machine-generated from a pre-constructed reference textual passage according to short speech samples of speakers using a neural voice cloning technology. The reference passage may be constructed according to a particular target domain of the ASR model being tested. Bias of the ASR model in various aspects may be identified by analyzing transcribed text from the machine-generated speeches and the reference textual passage. The underlying principles for bias detection may be applied to evaluation of general transcription effectiveness and accuracy of the ASR model.
G10L 15/01 - Assessment or evaluation of speech recognition systems
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
G10L 21/0216 - Noise filtering characterised by the method used for estimating noise
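Comparing the transcripts of the machine-generated speeches against the reference passage typically reduces to a word-error-rate computation, which can be sketched with the standard Levenshtein recurrence over tokens (the abstract does not commit to this exact metric):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: token-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)
```

A bias signal would then be a systematic gap in this rate across speaker groups cloned from different speech samples.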
20.
INTELLIGENT API SERVICE FOR ENTERPRISE DATA IN THE CLOUD
The proposed systems and methods provide a fixed set of intelligent, general APIs to manage access to enterprise data stored in a cloud-based data lake. These systems and methods allow a fixed set of APIs to respond to all queries regarding the stored enterprise data by using a cached reference table that locates the container and document in which the requested data is held. The proposed systems and methods provide a framework for a minimal API service code with the capacity for responding to dynamic queries while maintaining stringent privacy control protections.
Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for system contextualization. For example, a method can include obtaining information pertaining to (i) parts of a manufacturing plant and (ii) steps of a process executing on the manufacturing plant; generating one or more virtual representations associated with each of the parts and the steps of the process; receiving a query from a user, the query conforming to a format of a hierarchical template model; and generating a response to the query based on the one or more virtual representations of the manufacturing plant.
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control (DNC), flexible manufacturing systems (FMS), integrated manufacturing systems (IMS), computer integrated manufacturing (CIM)
22.
TRANSFERRING INFORMATION THROUGH KNOWLEDGE GRAPH EMBEDDINGS
A device may receive a knowledge graph and SMILE data identifying compounds, and may train embeddings based on the knowledge graph. The device may generate graph embeddings for the SMILE data based on the embeddings, and may encode the SMILE data into a latent space. The device may combine the graph embeddings and the latent space to generate a combined latent-embedding space, and may decode the combined latent-embedding space to generate decoded SMILE data. The device may utilize the decoded SMILE data to train an encoder, and may process source SMILE data, with the trained encoder, to generate a source combined latent-embedding space. The device may search the source combined latent-embedding space to identify new SMILE data, and may decode the new SMILE data to generate decoded new SMILE data. The device may evaluate the decoded new SMILE data to identify particular SMILE data associated with a new compound.
The present disclosure describes a system and method for applying machine learning to analyze the sustainability of products and using scoring based on the analysis to curate product recommendations for customers and feedback for product producers. The system and method incorporate product feature (e.g., sustainability) data, including historical data, from multiple sources, and user preferences to generate customized feature (e.g., sustainability) scores.
Aspects of the present disclosure provide systems, methods, and computer-readable storage media that support application portfolio management. Input data may be received that is associated with a set of applications. The input data may indicate, for each application of the set of applications, a name of the application and a description of the application. For an application of the set of applications, a functional score, a cost, and a technical score may be determined based on the name of the application, the description of the application, or a combination thereof. A disposition recommendation for the application may be determined based on the functional score, the cost, and the technical score. In some implementations, an indication of the disposition recommendation of the application may be output via a graphical user interface.
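A disposition rule over the functional score, technical score, and cost might be sketched as below. The thresholds, budget, and category names are illustrative assumptions, not taken from the abstract.

```python
def disposition(functional, technical, cost,
                score_threshold=0.5, budget=100_000):
    """Toy disposition rule combining functional score, technical score,
    and cost. All cut-offs and labels are illustrative."""
    if functional >= score_threshold and technical >= score_threshold:
        return "invest" if cost <= budget else "tolerate"
    if functional >= score_threshold:  # valuable function on weak technology
        return "migrate"
    return "retire"
```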
Systems and methods for managing the supply chain of products and services are disclosed herein. A system generates supply chain data based on historical data received from data sources corresponding to the supply chain of a product or service. Further, the system extracts a data entity and a set of attributes from the supply chain data to determine semantically related data entities. Furthermore, the system determines a use case corresponding to management of the supply chain, based on the semantically related data entities. Additionally, the system predicts a risk or priority associated with the product or service in the supply chain, to generate risks and alerts based on the prediction. Further, the system assigns critical and high-priority use cases to one or more agents based on a performance score of the one or more agents. Furthermore, the system provides insights and suggestions for managing the supply chain of the product or service at the regional level and the global level of the supply chain.
Implementations are directed to receiving a plurality of data samples comprising a first set of data samples associated with respective labels and a second set of data samples to be labeled; generating a random forest structure comprising a set of decision trees, each decision tree including nodes corresponding to the first set of data samples; adding the second set of data samples into each decision tree as additional nodes of each decision tree; merging the set of decision trees to obtain a universal graph, wherein each node corresponds to a data sample; extracting, using a graph embedding algorithm, an embedding feature for each data sample that corresponds to each node included in the universal graph; determining a distance between any pair of two data samples using respective embedding features of the two data samples; and determining a label for each of the second set of data samples using the distance.
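Once embeddings and pairwise distances are available, the final labeling step can be sketched as plain nearest-labeled-neighbour assignment. The `(embedding, label)` data layout is an assumption; the upstream random-forest and graph-embedding stages are not reproduced here.

```python
import math


def propagate_labels(labeled, unlabeled):
    """Assign each unlabeled embedding the label of its nearest labeled
    embedding by Euclidean distance. `labeled` is a list of
    (embedding, label) pairs; `unlabeled` is a list of embeddings."""
    return [min(labeled, key=lambda item: math.dist(emb, item[0]))[1]
            for emb in unlabeled]
```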
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating mixed synthetic data. In one aspect, a method includes obtaining a plurality of mixed input data. At least some of the plurality of mixed input data include one or more categorical variables and one or more continuous variables. The method includes training a machine learning model using the plurality of mixed input data and generating a plurality of mixed synthetic data. The plurality of mixed synthetic data (i) includes one or more categorical variables and one or more continuous variables and (ii) shares statistical properties with the plurality of mixed input data.
Methods, systems, and apparatus, including computer programs encoded on computer-storage media, for dynamic user data filtering. In some implementations, a method includes determining one or more values using data representing a sequence of one or more types of interactions between a user and content; using the one or more values to determine whether to include each interaction of the one or more types of interactions in the sequence within a reduced user data set; generating the reduced user data set by removing one or more interactions from the sequence based on determining not to include the one or more interactions using the one or more values; and providing the reduced user data set to a processing server.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
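The filtering step in the abstract above reduces to scoring each interaction and keeping only those whose value clears a threshold. The scoring callable here is a stand-in for the values the method derives from the interaction sequence.

```python
def reduce_interactions(sequence, keep_score, threshold):
    """Build the reduced user data set: drop interactions whose derived
    score falls below the threshold before handing the rest to the
    processing server."""
    return [event for event in sequence if keep_score(event) >= threshold]
```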
Systems and methods for deep technology innovation management by cross-pollinating an innovations dataset are disclosed. A system extracts a context-based keyword from an innovation dataset by transforming the innovation dataset into a vector. Further, the system searches for semantically relevant keywords for the extracted context-based keyword by extracting an entity and a key phrase from the extracted context-based keyword. Furthermore, the system clusters the vector by identifying frequent keywords among the semantically relevant keywords to obtain cluster centroids of the frequent keywords. Thereafter, the system determines weighted keywords in each cluster using the obtained cluster centroids, and classifies the weighted keywords to identify emerging innovation trends relevant to the innovation in the innovation dataset. The system forms cohorts of innovators to explore the reuse of innovations, assets, and code, and to build a focused monetization model.
A containment assembly comprises a first enclosure comprising a cavity for receiving a hazardous object. The first enclosure is configured for containing an explosive event of the hazardous object. A second enclosure comprises a gas impermeable layer, an inner volume, and an air-tight closure. The second enclosure is configured for receiving and containing a gas byproduct of the explosive event from the first enclosure. A gas permeable barrier is disposed between the cavity of the first enclosure and the inner volume of the second enclosure. A smart insulation arrangement may be implemented on the lower side of the first enclosure to allow the event to happen and to cool down over a longer period of time without exceeding maximum allowable temperatures on the outside of the second enclosure. This permits the flight or journey to continue.
F42D 5/045 - Detonation-wave absorbing or damping means
F42B 39/20 - Packages or ammunition having valves for pressure-equalising; Packages or ammunition having plugs for pressure release, e.g. meltable
A62C 3/16 - Fire prevention, containment or extinguishing specially adapted for particular objects or places in electrical installations, e.g. cableways
B65D 85/68 - Containers, packaging elements or packages, specially adapted for particular articles or materials for machines, engines or vehicles in assembled or dismantled form
B65D 85/30 - Containers, packaging elements or packages, specially adapted for particular articles or materials for articles particularly sensitive to damage by shock or pressure
31.
UTILIZING A MACHINE LEARNING MODEL TO MIGRATE A SYSTEM TO A CLOUD COMPUTING ENVIRONMENT
A device may receive logs and files associated with a system to be migrated to a cloud computing environment, and may determine workload data associated with the system. The device may derive a data lineage for source data and target data, and may assess a utilization pattern of the system. The device may process the workload data, the data lineage, and data identifying utilization of a distributed computing feature of the system, with a model, to label utilization features and to recommend a cloud architecture. The device may process the workload data, the data lineage, and the data identifying utilization, with a natural language processing model, to determine a cost of migrating the system. The device may process the labelled utilization features, the cloud architecture, and the cost, with a Q-matrix model, to determine migration actions for migrating the system, and may perform actions based on the migration actions.
In some examples, scalable source code vulnerability remediation may include receiving source code that includes at least one vulnerability, and receiving remediated code that remediates the at least one vulnerability associated with the source code. At least one machine learning model may be trained to analyze a vulnerable code snippet of the source code. The vulnerable code snippet may correspond to the at least one vulnerability associated with the source code. The machine learning model may be trained to generate, for the vulnerable code snippet, a remediated code snippet to remediate the at least one vulnerability associated with the source code. The remediated code snippet may be validated based on an analysis of whether the remediated code snippet remediates the at least one vulnerability associated with the source code.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/56 - Computer malware detection or handling, e.g. anti-virus arrangements
33.
DYNAMIC SCHEDULING PLATFORM FOR AUTOMATED COMPUTING TASKS
In some implementations, a scheduling platform may receive task information regarding a set of tasks for execution using a set of computing resources, wherein the task information includes, for the set of tasks, at least one of: a run time parameter, a priority parameter, or a success rate parameter. The scheduling platform may communicate with a computing resource management device to obtain first computing resource information regarding the set of computing resources. The scheduling platform may generate a first assignment of the set of tasks to the set of computing resources. The scheduling platform may transmit assignment information identifying the first assignment. The scheduling platform may receive second computing resource information. The scheduling platform may generate a second assignment of the set of tasks to the set of computing resources. The scheduling platform may transmit second assignment information identifying the second assignment.
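The first assignment described above could be produced by a simple greedy policy. The following Python sketch is only illustrative and is not from the application; the `Task` fields mirror the run time, priority, and success rate parameters mentioned in the abstract, and the least-loaded-resource rule is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    run_time: float      # estimated run time (hypothetical units)
    priority: int        # higher value = more urgent
    success_rate: float  # historical success probability, in (0, 1]

def assign_tasks(tasks, resources):
    """Greedy first assignment: highest-priority tasks first, each placed
    on the currently least-loaded resource (load = expected run time)."""
    load = {r: 0.0 for r in resources}
    assignment = {}
    # Prefer high priority; break ties by expected cost (run time / success rate).
    for t in sorted(tasks, key=lambda t: (-t.priority, t.run_time / t.success_rate)):
        r = min(load, key=load.get)
        assignment[t.name] = r
        load[r] += t.run_time / t.success_rate
    return assignment
```

A second assignment, as in the abstract, would simply rerun the same policy against refreshed computing resource information.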
Aspects of the present disclosure provide methods, devices, and computer-readable storage media that support dynamic enforcement of access control policies in a standardized manner. An administrator console enables access control policies to be defined as classes that may be combined and leveraged to rapidly define access control policies for enforcement in a standardized manner. An interceptor operates to detect access requests and perform policy administration (e.g., determining to grant/deny access) for the access requests and, where access is granted, initiate policy resolution (e.g., determine any restrictions on the granted access request). An enforcer provides functionality for enforcing policy resolution outcomes, such as restricting access to information stored in a database or disabling interactive elements of a user interface. The enforcer may control enforcement of the policy resolution outcomes by modifying information in received access requests, such as to rewrite a query to incorporate restrictions on access to a data source.
A first device may provide a request to establish a secure communication with a second device, and may hide public keys based on a commutative legacy compatible encryption process sharing a modulus and based on quasi-Carmichael numbers larger than the modulus with quadratic residues. The first device may utilize variable extendable-output function hashing, based on the modulus, with Bloom filtering to generate an output that prevents creation of classical rainbow tables, and may utilize a key derivation function to generate a symmetric key based on the output. The first device may establish the secure communication with the second device based on the symmetric key.
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
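The two generic primitives named in the abstract, extendable-output hashing sized to a modulus and a key derivation function producing a symmetric key, can be sketched with standard-library tools. This is not the application's construction; SHAKE-256 and the HKDF-style derivation below are stand-ins, and the labels are hypothetical.

```python
import hashlib
import hmac

def xof_hash(data: bytes, modulus_bits: int) -> bytes:
    """Extendable-output hash: SHAKE-256 output sized to the shared modulus."""
    return hashlib.shake_256(data).digest(modulus_bits // 8)

def derive_symmetric_key(shared_output: bytes, salt: bytes = b"\x00" * 32,
                         info: bytes = b"handshake-v1") -> bytes:
    """Minimal HKDF-style extract-and-expand (RFC 5869 shape), 32-byte key."""
    prk = hmac.new(salt, shared_output, hashlib.sha256).digest()   # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand
```

Both parties running the same derivation over the same hashed output would arrive at the same symmetric key.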
36.
ARTIFICIAL INTELLIGENCE BASED SECURITY REQUIREMENTS IDENTIFICATION AND TESTING
The proposed systems and methods apply natural language processing to identify implicit security requirements flowing from input text narratively describing desired features for a software project. These systems and methods can identify hidden security requirements that may not be readily apparent from the features described in the input text. For example, a story may include a feature of a return URL (Uniform Resource Locator), which is the URL for the website to which a user will be redirected. A security vulnerability that would not be obvious from this feature is that a user might be directed to an attacker-controlled site instead of the originally intended site. A security requirement that could counteract this vulnerability would be to include the feature of verifying that all redirects go to whitelisted sites. The proposed systems and methods provide a framework for automated security requirements analysis capable of identifying unstated security requirements early in a software development lifecycle using artificial intelligence techniques.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
Systems and methods for smart incentivization for achieving collaborative machine learning are disclosed. A system receives local model parameters from a plurality of client devices in a network for a global model corresponding to collaborative machine learning. The system determines an optimum score for each client device using a pre-trained Conditional Variational Auto Encoder (CVAE), based on the local model parameters. The system computes a contribution score for each client device by determining a relative distance value between the optimum score corresponding to each client device, the optimum score corresponding to another client device from the plurality of client devices, and a global model optimum score of the global model. The system updates the global model with the local model parameters received from a selected set of client devices of the plurality of client devices corresponding to a good class, an average class, and a bad class. The system outputs a grading score, an incentive, an importance score for each of the selected client devices, and a performance of the global model.
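A distance-based contribution score followed by class assignment, as the abstract outlines, might look like the following. This is a simplified sketch, not the application's CVAE-based scoring: the inverse-distance formula and tercile split into good/average/bad classes are assumptions chosen for illustration.

```python
def contribution_scores(client_scores, global_score):
    """Score each client by how close its optimum score is to the global
    model's optimum score (smaller distance -> larger contribution)."""
    return {c: 1.0 / (1.0 + abs(s - global_score))
            for c, s in client_scores.items()}

def grade_clients(scores):
    """Split clients into good / average / bad classes by score terciles."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    k = max(1, len(ordered) // 3)
    return {"good": ordered[:k],
            "average": ordered[k:2 * k],
            "bad": ordered[2 * k:]}
```

The global model would then be updated preferentially with parameters from the better-graded clients, with incentives assigned per class.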
Implementations are directed to configuring a set of applications and one or more modules associated with each application, wherein the one or more modules of an application comprise functional components that are bundled into the application, wherein each application is associated with a site of an on-premise system where the application is to be deployed; creating a process flow that includes a plurality of nodes, each node corresponding to a process executed at the site; associating a collection of applications to each node included in the process flow, wherein the collection of applications are selected from the set of applications, wherein the set of applications are categorized based on a relevance score of each application; and deploying the process flow and the collection of applications associated with each node to corresponding on-premise edge devices of the on-premise system based on the site of each application.
Aspects of the present disclosure provide methods, devices, and computer-readable storage media that support adaptive scheduling of electric vehicles (EVs) of an EV fleet for order deliveries. In some implementations, one or more aspects of the adaptive EV scheduling may be customized for EVs. For example, the adaptive EV scheduling may include identifying an energy efficient route that also reduces stress on a battery of an EV and may be based at least in part on a charging parameter associated with the EV. In some examples, the charging parameter may include one or more of a state of charge (SOC) associated with the battery, a state of health (SOH) associated with the battery, a location of a charging station for the EV, an average charging duration associated with the EV, or an intelligent charging parameter associated with the EV.
B60L 58/12 - Methods or circuit arrangements for monitoring or controlling batteries or fuel cells, specially adapted for electric vehicles for monitoring or controlling batteries responding to state of charge [SoC]
In some implementations, a digital twin system may receive, from one or more sensors and at an interface associated with a digital twin, a first input associated with a first event. The digital twin system may determine that the first event is associated with one or more probable second events. Accordingly, the digital twin system may refrain from processing the first input for a period of time. The digital twin system may further update a prediction associated with the digital twin using the first input based on expiry of the period of time or may update a prediction associated with the digital twin using second input associated with the one or more probable second events based on receiving the second input.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
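The hold-then-decide behavior in the digital twin abstract resembles event debouncing. The sketch below is a hypothetical illustration (class and method names are invented, not from the application): a first input is parked for a window, a probable second event arriving in time supersedes it, and expiry falls back to the first input.

```python
class EventDebouncer:
    """Hold a first input for a window; if a probable second event arrives
    within the window, update with it instead of the first input."""

    def __init__(self, hold_seconds):
        self.hold = hold_seconds
        self.pending = None
        self.deadline = None

    def on_first_event(self, event, now):
        # Refrain from processing: park the input and start the window.
        self.pending = event
        self.deadline = now + self.hold

    def on_second_event(self, event, now):
        if self.pending is not None and now <= self.deadline:
            self.pending = None  # the second event supersedes the first input
            return ("update_with_second", event)
        return ("ignore", event)

    def poll(self, now):
        # Window expired with no second event: fall back to the first input.
        if self.pending is not None and now > self.deadline:
            event, self.pending = self.pending, None
            return ("update_with_first", event)
        return None
```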
41.
AUTOMATED ACTION RECOMMENDER FOR STRUCTURED PROCESSES
Aspects of the present disclosure provide systems, methods, apparatus, and computer-readable storage media that support automated action recommendation for structured processes. Aspects described herein leverage trained machine learning (ML) models to assign features extracted from historical event data into multiple clusters using unsupervised learning. In some implementations, current event data of a structured process is received, and extracted features are assigned to one of the multiple clusters by the ML models. Candidate event sequences are generated based on members of the assigned cluster and are filtered based on corresponding association rule scores. Multiple incremental candidate sub-sequences are generated from the remaining candidate event sequences, and these are filtered based on a current event level and corresponding association rule scores. The remaining candidate sub-sequences are ranked based on the scores, and at least one of the highest-ranking candidate sub-sequences is provided as a recommended event sequence.
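The filter-then-rank stage over association rule scores can be sketched as follows. This is a minimal illustration under assumed data shapes (score lookup keyed by event tuple), not the application's implementation.

```python
def filter_candidates(candidates, rule_scores, threshold):
    """Drop candidate event sequences whose association-rule score
    falls below the threshold."""
    return [c for c in candidates if rule_scores.get(tuple(c), 0.0) >= threshold]

def recommend(candidates, rule_scores, top_k=1):
    """Rank surviving candidates by rule score; the highest-ranked
    become the recommended event sequences."""
    ranked = sorted(candidates,
                    key=lambda c: rule_scores.get(tuple(c), 0.0),
                    reverse=True)
    return ranked[:top_k]
```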
Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for training a machine-learning model configured to generate a prediction and recommendation output from input data. The system obtains training data including a plurality of training examples, obtains context data, identifies one or more feature variables from the context data, constructs the machine-learning model based at least on the identified feature variables, generates feature variable training data by processing the training data based on the identified feature variables, and performs training and, if required, periodic updates of the machine-learning model to generate model parameter data for the machine-learning model based at least on the generated feature variable training data.
Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for allocating computation resources for a plurality of databases. For each database, the system identifies a respective initial computation capacity tier for the respective database based at least on the respective utilization of the respective database. For each of a set of optimization orders, the system determines a respective set of candidate resource pools for accommodating the plurality of databases. The system selects an optimization order and determines a final set of resource pools for the plurality of databases. The system outputs data specifying the final set of resource pools.
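The initial step of mapping each database's utilization to a capacity tier could be as simple as a threshold table. The tier names and ceilings below are hypothetical, purely to illustrate the shape of the mapping.

```python
# Hypothetical capacity tiers: (utilization ceiling, tier name).
TIERS = [(0.25, "small"), (0.50, "medium"), (0.75, "large"), (1.00, "xlarge")]

def initial_tier(utilization):
    """Map a database's observed utilization (0..1) to the smallest
    capacity tier that covers it."""
    for ceiling, name in TIERS:
        if utilization <= ceiling:
            return name
    return TIERS[-1][1]  # cap anything above 1.0 at the largest tier
```

Candidate resource pools would then be built per optimization order by grouping databases with compatible tiers.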
A device may receive an application for transforming legacy applications into low-code/no-code applications to be managed by a low-code/no-code platform, and may execute the application for a legacy application of the legacy applications. The device may process the legacy application, with a machine learning model, to identify one or more components of the legacy application to be managed by the low-code/no-code platform, and may transform the one or more components into one or more transformed components to be managed by the low-code/no-code platform. The device may implement the one or more transformed components in the legacy application to generate a transformed legacy application, and may perform one or more actions based on the transformed legacy application.
A device may identify unique segments within data objects, of an object corpus stored in a data structure, as elements, and may generate an embedding space based on unique elements and mappings of the data objects to embeddings. The device may estimate semantic proximities among the data objects based on the mappings, and may build a semantic cohesion network among the data objects based on the semantic proximities. The device may identify semantically cohesive data clusters in the semantic cohesion network, and may sort the data objects in the semantically cohesive data clusters. The device may determine, from the semantically cohesive and sorted data clusters, a home data cluster for a new data object, and may store bookkeeping details of the new data object in the data structure based on the new data object being semantically similar to the data object in the home data cluster.
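Finding a home data cluster for a new object reduces to a nearest-centroid search in the embedding space. The sketch below assumes cosine similarity as the semantic-proximity measure, which the abstract does not specify; it is an illustration, not the patented method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def centroid(vectors):
    """Component-wise mean of a cluster's member embeddings."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def home_cluster(new_embedding, clusters):
    """Pick the cluster whose centroid is most cosine-similar to the new object."""
    return max(clusters, key=lambda name: cosine(new_embedding, centroid(clusters[name])))
```

Bookkeeping details for the new object would then be stored alongside the members of the selected cluster.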
Systems and methods for inclusive product design are disclosed. The system obtains likeness scores for product attributes of a product from one or more users, using a survey conducted before the design phase of the product, and determines the impact and relative contribution of each product attribute, for each user, to an inclusivity score using multi-level machine learning models. The system segregates the product attributes and the inclusivity score at a persona level, and determines a feature importance score of each feature in the product attributes for each user. The system calculates a risk score for each user indicating sensibility towards product designer choices, provides what-if analysis capabilities to the product designer for analyzing, based on the risk score, the risk of each user with sensibility towards the product designer choices, and receives multisensory reviews from users. The system computes an overall score by combining the feature importance and inclusivity scores, facial coding, voice tonality, and haptics feedback, to a granular level, and outputs iteratively enriched survey data for inclusive design of products.
A code remediation system accesses a programming code including vulnerabilities such as potential secrets and remediates at least a subset of the potential secrets to generate modified programming code wherein the subset of potential secrets which are determined to be actual secrets are replaced with access mechanisms to storage locations on a vault wherein the actual secrets are secured. To identify the subset of potential secrets forming the actual secrets to be remediated, the code remediation system is configured to filter out false positives among the potential secrets and identify true positives. When an application executing the modified code encounters an access mechanism, it accesses the vault to retrieve the actual secrets.
A device may receive and process a change request, work items, and IT data, to generate processed data. The device may transform the processed data into vectorized data, and may select similarity analytics models, regression models, and a classification model. The device may process the vectorized data, with the similarity analytics models, to determine an estimated effort, a user story, and IT requirements, and may process the vectorized data, with the regression models, to determine a schedule overrun, a defect rate, and a sprint velocity. The device may process the vectorized data, with the classification model, to determine a story point, and may calculate a resource capacity. The device may generate an impact analysis based on the estimated effort, the user story, the IT requirements, the schedule overrun, the defect rate, the sprint velocity, the story point, or the resource capacity, and may perform actions based on the impact analysis.
In some examples, generative network-based floor plan generation may include receiving, for a floor plan that is to be classified, a layout graph for which user constraints are encoded as a plurality of room types. The user constraints may include spatial connections therebetween. Based on the layout graph, embedding vectors for each room type of the plurality of room types may be generated. Bounding boxes and segmentation masks may be determined for each room embedding from the layout graph, and based on an analysis of the embedding vectors. A space layout may be generated by combining the bounding boxes and the segmentation masks. The floor plan may be generated based on an analysis of the space layout, and synthesized based on the space layout, noise, and a contextual graph embedding to generate a synthesized floor plan. The synthesized floor plan may be classified as authentic or not-authentic.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
50.
SYSTEMS AND METHODS TO IMPROVE TRUST IN CONVERSATIONS WITH DEEP LEARNING MODELS
The present disclosure relates to a system, a method, and a product for using deep learning models to quantify and/or improve trust in conversations. The system includes a non-transitory memory storing instructions executable to construct a deep-learning network to quantify trust scores; and a processor in communication with the non-transitory memory. The processor executes the instructions to cause the system to: obtain a trust score for each voice sample in a plurality of audio samples, generate a predicted trust score by the deep-learning network based on each voice sample in the plurality of audio samples, wherein the deep-learning network comprises a plurality of branches and an aggregation network configured to aggregate results from the plurality of branches, and train the deep-learning network based on the predicted trust score and the trust score for each voice sample to obtain a training result.
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 25/24 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being the cepstrum
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
An index modeling system that generates index models that predict values of an attribute of a supply chain for a commodity is disclosed. The index models are generated from indicator data that includes data related to multiple indicators and a plurality of sub-indicators of the index arranged in a hierarchical structure. Accordingly, the index values can be predicted for different entities at different levels in the hierarchical structure. The predicted index values can be used to automatically generate a filtered list of suppliers who can be used for procurement based on comparisons of the predicted attribute values of the suppliers with a predetermined attribute threshold value.
A device may generate a knowledge model based on a knowledge model schema, data residency constraints, and a data classification ontology associated with a cloud application, and may perform a dynamic flow analysis of the cloud application data and the data source identifiers to generate a data flow graph. The device may process the data flow graph, with the knowledge model, to determine sensitive attributes in the data flow graph, and may identify sensitive data sources that include the sensitive attributes and sensitive assets based on the data flow graph and the sensitive data sources. The device may process the sensitive data sources and the sensitive assets, with a machine learning model, to determine methods for identifying misconfigurations, and may utilize the methods to identify misconfigurations and severities of the misconfigurations. The device may generate remediation actions for correcting the cloud application based on the severities of the misconfigurations.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A system and method provide a trained model that uses vectorized word embeddings that are averaged or summed to form representations for sentences and phrases. The representations are processed in a Siamese neural network including multiple LSTM stages to find semantically related matches in catalogs for non-catalog queries. The model is trained using catalog data and randomized data using a contrastive loss function to generate similarity metrics for catalog-non-catalog pairs.
G06N 3/0442 - Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
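Two pieces of the abstract above are easy to make concrete: forming a sentence representation by averaging word embeddings, and the contrastive loss over pair distances. The functions below sketch the standard forms of both (the margin value and variable names are assumptions, not taken from the patent).

```python
def sentence_embedding(word_vectors):
    """Average word embeddings into one sentence/phrase representation."""
    n = len(word_vectors)
    return [sum(col) / n for col in zip(*word_vectors)]

def contrastive_loss(distance, label, margin=1.0):
    """Contrastive loss on a pair distance: matched pairs (label=1) are
    pulled together; mismatched pairs (label=0) are pushed past the margin."""
    return label * distance ** 2 + (1 - label) * max(0.0, margin - distance) ** 2
```

In the Siamese setup, `distance` would be the distance between the two LSTM branch outputs for a catalog/non-catalog pair.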
54.
Dynamic decentralized hierarchical Holon network system
Systems and methods for data storage and data streaming in decentralized, self-organized networks are provided. A plurality of computing devices are disposed in a unidirectional communication ring having a plurality of serially-connected spikes. Each spike includes n computing devices, and n×p connections directly connecting each of the n computing devices to p downstream computing devices. Each computing device is configured to request and receive an inventory of the plurality of computing devices; select a computing device from the plurality of computing devices; transmit a join request comprising the inventory to the selected computing device; and request reorganizing the unidirectional communication ring in response to the receipt of the transmitted join request after propagation through each of the plurality of spikes of the unidirectional communication ring.
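The n×p downstream-connection pattern of the unidirectional ring can be enumerated directly. This sketch is illustrative only; it models the topology, not the join/reorganization protocol.

```python
def ring_connections(devices, p):
    """In a unidirectional ring, connect each of the n devices to its p
    downstream neighbors, yielding n*p directed connections in total."""
    n = len(devices)
    return {d: [devices[(i + k) % n] for k in range(1, p + 1)]
            for i, d in enumerate(devices)}
```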
Systems and methods supporting discovery and quantification of vulnerabilities in software code are disclosed. The systems and methods provide functionality for using software code analysis and other types of tools to analyze the software code and determine whether it can be trusted. The software code tools may be able to discover various hidden issues in the software code and the outputs of such tools may be normalized to quantify the risk associated with vulnerabilities identified by the different tools. A labeling strategy is provided to label the software code to enable users to identify the best software among various available software options based on the label(s) and a set of criteria.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
56.
SYSTEM AND METHODS FOR DYNAMIC WORKLOAD MIGRATION AND SERVICE UTILIZATION BASED ON MULTIPLE CONSTRAINTS
The present disclosure provides systems and methods supporting dynamic migration of jobs (e.g., workloads, containers, service requests, etc.) between execution environments. The disclosed systems and methods may utilize monitoring techniques to determine when a migration should occur and/or forecasting techniques to predict optimal times when a migration should occur. Upon determining a migration should occur, a target execution environment for a job may be identified and a migration process may be initiated. In some aspects, the migration may be performed partway through processing of the job and the migration may resume processing the job after the migration is completed in a manner that enables the processing to resume at the point where processing stopped prior to the migration.
A device may receive raw data and metadata associated with a customer, and may transform the raw data and the metadata into unified data. The device may process the unified data, with a first model, to generate journey record derived variables, customer record derived variables, clickstream derived variables, and analytical matrices, and may process the unified data, the journey record derived variables, the customer record derived variables, the clickstream derived variables, and the analytical matrices, with a second model, to generate a journey data store with journey derived signals. The device may process the unified data, the journey record derived variables, the customer record derived variables, the clickstream derived variables, the analytical matrices, and the journey data store, with a third model, to generate overall statistical data for the customer, and may perform one or more actions based on the overall statistical data.
Implementations include a computer-implemented method for reducing cyber-security risk, comprising: selecting one or more modules for inclusion in a knowledge mesh, wherein each module is associated with a respective aspect and maintains a knowledge graph specific to the respective aspect, wherein each knowledge graph is generated using data from one or more cyber-security repositories and includes nodes and connections between the nodes; receiving a query corresponding to a first node of a first knowledge graph included in the knowledge mesh; generating a response to the query by identifying connections between the first node of the first knowledge graph and at least one node of at least one other knowledge graph included in the knowledge mesh; and identifying, based on the response to the query, one or more actions to reduce cyber-security risk.
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
Systems and methods for detecting, classifying, and managing impediments are disclosed. For example, embodiments may be related to impediments in project management. The proposed systems and methods are configured to evaluate data harvested from multiple different sources (in different formats), identify potential impediments that may be described or present in the data, and classify said impediments based on whether the impediment is non-technical or technical. In addition, the proposed systems implement a technical solution of active learning combined with reinforcement learning to produce a feedback loop that, over each iteration, improves the accuracy of the impediment classification. The impediment management assistant is configured to identify impediments from various input sources across industries with an AI-based self-learning capability, providing a robust and accurate model even with only a limited training dataset.
Implementations include a computer-implemented method for reducing cyber-security risk, comprising: accessing a knowledge mesh including a plurality of modules, wherein each module is associated with a respective aspect and maintains a knowledge graph specific to the respective aspect, wherein each knowledge graph is generated using data from one or more cyber-security repositories and includes nodes and connections between the nodes; performing an information completion process to generate connections between nodes of knowledge graphs maintained by different modules of the knowledge mesh, including performing at least one of: inheritance-based inference; natural language processing classifier-based inference; or natural language processing-based object matching inference; and identifying, using the generated connections between the nodes of the knowledge graphs, one or more actions to reduce cyber-security risk.
A system and method for automating and improving tabular and list-based data extraction from a variety of document types is disclosed. The system and method detect and sort which documents include tables or lists, and perform row and column segmentation. In addition, the system and method apply Conditional Random Fields models to localize each table and semantic data understanding to map and export the extracted data to the desired format and arrangement.
This application relates generally to intelligent and explainable link prediction in knowledge graph systems that automatically incorporate user feedback. In one aspect, this application discloses an iterative process for predicting a link set as a group of links in a knowledge graph in an embedding space by expanding the knowledge graph with predicted and validated single links in each iteration such that a final set of links are predicted with each one being added to the set depending on previously added predicted links. In another aspect, this application also discloses automatically extracting rules from user feedback of link predictions and generating a user feedback knowledge graph from the extracted rules, which in combination with an original knowledge graph are used for the generation of the link predictions.
A device may receive and transform metric data and share of voice data, associated with digital marketing by an entity, into transformed data, may generate model data from the transformed data, and may divide the model data into training data, test data, and validation data. The device may train models, with the training data, to generate training results, and may process the test data, with the models, to generate test results. The device may process the validation data, with the models, to generate validation results, and may select a first model, a second model, and a third model based on the results. The device may utilize the first model to predict a share of voice, and may utilize the second model to predict a click through rate. The device may utilize the third model to predict a conversion rate, and may perform actions based on the predicted data.
A device may identify standard parameters and real-time parameters associated with content of a content type, and may process the content type, the standard parameters, and the real-time parameters, with a parameter unification model, to generate derived parameters for the content. The device may process the derived parameters and the content type, with a multi-level linear regression machine learning model, to calculate a content score for the content, and may process the derived parameters and the content score, with a linear regression machine learning model, to calculate a quantity of f-NFTs to generate for the content and a divestment ratio. The device may create a unique reference to the content, and may create an NFT for the content based on the unique reference. The device may generate the quantity of f-NFTs for the content based on the NFT, and may provide the quantity of f-NFTs to a content exchange.
G06Q 20/36 - Payment architectures, schemes or protocols characterised by the use of specific devices using electronic wallets or electronic money safes
66.
Intelligent Data Ranking System Based on Multi-Facet Intra and Inter-Data Correlation and Data Pattern Recognition
This disclosure is directed generally to an automatic intelligent electronic data processing system, platform, and method for computerized multi-facet data pattern recognition and ranking, and particularly to intelligently personalizing recommendation of data items for consumption by a particular entity based on past data consumption history of the entity and/or other entities via machine recognition of intra- and/or inter-entity data item selection correlations. Such personalized recommendation may be based on a multi-facet ranking of the data items by integrating various intra-entity and inter-entity correlations and patterns in data item consumption into a quantifiable entity-specific ranking score for each data item that may potentially be selected for consumption by a particular entity.
Aspects of the present disclosure provide systems, methods, apparatus, and computer-readable storage media that support creating and leveraging digital twins to model multiple physical systems of an enterprise as a monolithic computer system. A digital twin platform may create an abstracted virtual model of an enterprise's system, the model representing a digital twin of a distributed collection of systems that as a group serve a larger goal of the enterprise. Because the abstracted virtual model is logically organized as a monolithic system that maps to multiple physical systems, the abstracted virtual model may be leveraged to provide system health monitoring and scoring from data gathered from the physical systems. The health monitoring, in addition to generation of insights for improving system health, may be easier to understand and more familiar to a user, thereby enabling meaningful determination of actions to perform to maintain or improve system health.
In some implementations, an application programming interface (API) manager may receive, at a set of artificial intelligence (AI) APIs, a set of inputs from a set of on-site devices. Accordingly, the API manager may route the set of inputs to a corresponding set of remote servers and may receive, from at least one server of the corresponding set of remote servers, at least one response based on at least one input, from the set of inputs, routed to the at least one server. The API manager may transmit the at least one response to a corresponding device from the set of on-site devices. Further, the API manager may modify at least one API, of the set of AI APIs, based on a traffic pattern associated with the set of inputs and the at least one response.
H04L 45/00 - Routing or path finding of packets in data switching networks
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
Aspects of the present disclosure provide systems, methods, and computer-readable storage media that support ontology driven processes to generate digital twins having extended capabilities. To generate the digital twin, an ontology may be obtained and modified to define additional types of data, such as events and metrics, for incorporation into the digital twin. The ontology, once modified, may be instantiated as a knowledge graph having the additional types of data embedded therein. The embedded data may be used to convert the knowledge graph to a probabilistic graph model that may be queried to extract information from the digital twin in a probabilistic manner. Additionally, multiple ontologies may be utilized to create a digital twin-of-digital twins, which enables more complex digital twins to be generated (e.g., digital twins of entire ecosystems), and enables new insights and understanding of the various components and interactions between the components of the ecosystem.
G06F 30/12 - Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
70.
Heuristics-based processing of electronic document contents
A computer-implemented method for obtaining content of a document is provided. The method includes: receiving data in an unknown format obtained by an optical character recognition (OCR) application from the document, the data comprising a plurality of visual elements; for each of the plurality of visual elements, obtaining a position in the document; determining, from the plurality of visual elements, one or more graphic elements and one or more textual elements; determining a particular graphic element from the one or more graphic elements based on the position of the particular graphic element; determining, from the one or more textual elements, a key that is associated with the particular graphic element; determining, from the one or more textual elements, one or more attributes that are associated with the particular graphic element; generating an association between the key and each of the one or more attributes; and providing a structured representation of the association.
G06F 40/103 - Formatting, i.e. changing of presentation of documents
G06V 30/412 - Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
G06V 30/262 - Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
In some implementations, a device may receive a configuration associated with a machine learning model. The device may additionally receive a first hyperparameter set associated with the machine learning model. Accordingly, the device may estimate a first quantity of floating-point operations (FLOPs) associated with one or more epochs, for the machine learning model, based on the first hyperparameter set. The device may output, to a user, an indication of a first energy consumption associated with training the machine learning model based on the first quantity of FLOPs.
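The FLOP-to-energy estimation described in the abstract above can be sketched as follows. This is a toy illustration, not the disclosed method: the dense-layer FLOP formula (roughly 6 operations per weight per sample for forward and backward passes), the hyperparameter names, and the hardware-efficiency constant are all assumptions.

```python
def flops_per_epoch(n_samples, layer_sizes):
    """Rough FLOPs for one epoch of a dense network: forward plus backward
    passes cost about 6 operations per weight per training sample."""
    weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    return 6 * weights * n_samples

def training_energy_joules(total_flops, flops_per_joule=1e9):
    """Convert a FLOP count to energy using an assumed hardware efficiency."""
    return total_flops / flops_per_joule

# Hypothetical hyperparameter set received by the device.
hp = {"n_samples": 10_000, "layer_sizes": [128, 64, 10], "epochs": 20}
per_epoch = flops_per_epoch(hp["n_samples"], hp["layer_sizes"])
total = per_epoch * hp["epochs"]
energy = training_energy_joules(total)
```

A second hyperparameter set could be run through the same functions to compare energy consumption before committing to a training run.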
Implementations include methods, systems, computer-readable storage medium for mitigating cyber security risk of an enterprise network. A method includes: receiving an initial analytic attack graph (AAG) that is representative of paths within the enterprise network with respect to at least one target asset, the initial AAG comprising nodes and edges between the nodes; identifying, from the nodes of the initial AAG, a plurality of node groups, each node group including two or more nodes having at least one common attribute; generating an abstract AAG from the initial AAG, the abstract AAG including at least one abstract node, wherein each node group of the initial AAG is represented by a respective abstract node of the abstract AAG; determining a set of remedial actions at least partially based on the abstract AAG; and executing remedial actions in the set of remedial actions to reduce a cyber security risk to the enterprise network.
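The abstraction step above — collapsing nodes of the attack graph that share a common attribute into a single abstract node — can be illustrated with a minimal sketch. The node and attribute representations here are assumptions; a real analytic attack graph carries far richer structure.

```python
from collections import defaultdict

def abstract_nodes(nodes):
    """Group graph nodes by a shared attribute.

    nodes: {node_name: common_attribute}
    returns: {attribute: [grouped node names]}, one abstract node per group.
    """
    groups = defaultdict(list)
    for name, attr in nodes.items():
        groups[attr].append(name)
    return dict(groups)

# Two web servers collapse into one abstract node; the database stands alone.
groups = abstract_nodes({"vm1": "web", "vm2": "web", "db1": "db"})
```

Remedial actions can then be reasoned about per abstract node rather than per concrete node, shrinking the search space.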
Embodiments of the present disclosure provide systems, methods, and computer-readable storage media that leverage artificial intelligence and machine learning to identify, diagnose, and mitigate occurrences of network faults or incidents within a network. Historical network incidents may be used to generate a model that may be used to evaluate real-time occurring network incidents, such as to identify a cause of the network incident. Clustering algorithms may be used to identify portions of the model that share similarities with a network incident and then actions taken to resolve similar network incidents in the past may be identified and proposed as candidate actions that may be executed to resolve the cause of the network incident. Execution of the candidate actions may be performed under control of a user or automatically based on execution criteria and the configuration of the fault mitigation system.
Methods, systems and apparatus for implementing a secure quantum swap operation on a first and second qubit. In one aspect a method includes establishing, by a first party and with a second party, an agreement to use a secure swap protocol; performing the quantum swap operation, comprising, for each two-qubit gate included in the quantum swap operation: performing, by the first party and according to the secure swap protocol, a respective preceding quantum gate cipher on the first qubit; performing, by the first party and the second party, the two-qubit gate on the first qubit and the second qubit; and performing, by the first party and according to the secure swap protocol, a respective succeeding quantum gate cipher on the first qubit. The preceding and succeeding quantum gate ciphers comprise computational bases that anti-commute with a computational basis of the two-qubit gate across a second axis of the Bloch sphere.
Implementations are directed to obtaining a plurality of item profiles for a plurality of new items, each item profile comprising a set of attributes for a respective new item; for each new item: selecting one or more existing items that are similar to the new item based on item attributes of the existing items and the set of attributes of the new item, and executing a collaborative filtering model to determine a first score for the new item based on historical user interactions with the one or more existing items; determining a second score for each new item using an adaptive model; and outputting a first set of new items based on the first score, and a second set of new items based on the second score, wherein an initial ratio between the first set of new items and the second set of new items is a predetermined value.
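The first-score step above — scoring a cold-start item from historical interactions with attribute-similar existing items — can be sketched minimally. Jaccard similarity over attribute sets, the catalog shape, and the interaction scores are illustrative assumptions, not details from the disclosure.

```python
def jaccard(a, b):
    """Set similarity: size of intersection over size of union."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cold_start_score(new_attrs, catalog, interactions, k=2):
    """Average the historical scores of the k most similar existing items.

    catalog: {item: attribute set}; interactions: {item: historical score}.
    """
    ranked = sorted(catalog, key=lambda i: jaccard(new_attrs, catalog[i]),
                    reverse=True)[:k]
    return sum(interactions[i] for i in ranked) / k

catalog = {"a": {"red", "shoe"}, "b": {"blue", "shoe"}, "c": {"red", "hat"}}
interactions = {"a": 0.9, "b": 0.4, "c": 0.2}
score = cold_start_score({"red", "shoe", "leather"}, catalog, interactions)
```

A production collaborative filtering model would use learned embeddings rather than raw attribute overlap, but the neighbor-then-aggregate shape is the same.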
The present disclosure relates to systems, methods, and products for optimization of a chromatography purification process using a physics-informed neural network. The method includes inputting a plurality of process parameters into the physics-informed neural network to obtain a predicted output; calculating a loss function based on a set of governing equations, a set of constraints, and the predicted output; determining whether the physics-informed neural network is convergent based on the calculated loss function; in response to the physics-informed neural network being convergent, exporting the physics-informed neural network; and in response to the physics-informed neural network not being convergent: updating a plurality of weights in the physics-informed neural network, and inputting the plurality of process parameters to the physics-informed neural network for a next convergence iteration to calculate the loss function and determine whether the physics-informed neural network is convergent.
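The convergence loop described above can be sketched with a deliberately tiny stand-in: a one-parameter "network" and a single residual, so the loop runs standalone. The real governing equations, constraints, and network architecture are not specified here; everything below is an assumed illustration of the control flow only.

```python
class TinyModel:
    """One-weight stand-in for the physics-informed neural network."""
    def __init__(self):
        self.w = 0.0
    def forward(self, x):
        return self.w * x

def train_pinn(model, x, residual, lr=0.1, tol=1e-10, max_iters=10_000):
    for _ in range(max_iters):
        pred = model.forward(x)
        r = residual(x, pred)
        loss = r * r                  # physics-informed (residual) loss
        if loss < tol:                # convergence check
            return model              # "export" the converged network
        # Gradient step: dL/dw = 2*r * d(pred)/dw = 2*r*x by the chain rule.
        model.w -= lr * 2 * r * x
    raise RuntimeError("did not converge")

# Toy governing equation: the prediction should equal 3*x, so the
# residual is pred - 3*x and the loop drives it toward zero.
m = train_pinn(TinyModel(), x=1.0, residual=lambda x, p: p - 3.0 * x)
```

A real implementation would compute gradients with automatic differentiation and sum residuals over many collocation points, but the convergent/not-convergent branch structure matches the abstract.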
In some implementations, a device may receive sensor data from an electronics module associated with a face mask. The device may process a first set of measurements included in the sensor data to determine a user mask wearing pattern that indicates whether a user is wearing the face mask in compliance with guidelines related to reducing a risk of the user spreading a respiratory illness or a risk of the user contracting a respiratory illness. The device may process a second set of measurements included in the sensor data to determine a user breathing pattern that indicates whether the user is at risk of having a medical condition or at risk of experiencing respiratory fatigue. The device may generate one or more outputs that include information related to one or more of the user mask wearing pattern or the user breathing pattern.
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
A62B 9/00 - Component parts for respiratory or breathing apparatus
A62B 23/02 - Filters for breathing-protection purposes for respirators
78.
CLASSIFICATION AND SIMILARITY DETECTION OF MULTI-DIMENSIONAL OBJECTS
Implementations are directed to converting a product representation stored in a computer-readable file to a mesh representation, the product representation including a multi-dimensional model of an object, generating a graph representation from the mesh representation, the graph representation including a set of vertices, each vertex associated with a set of coordinates in multi-dimensional space, providing a compound vector representation as a data structure including a set of vectors, each vector in the set of vectors including an m-bit vector that encodes a respective vertex of the set of vertices, the m-bit vector including a set of bit groups, each bit group representing a respective coordinate in the set of coordinates of the respective vertex, and processing the compound vector representation through a ML system to generate a prediction associated with the object.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
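The m-bit vertex encoding described in the preceding abstract can be illustrated with a toy packer: each coordinate becomes a fixed-width bit group, and the groups are concatenated into a single integer. The 8-bit group width and the [0, 1] coordinate quantization are assumptions; the disclosure does not fix these values.

```python
def encode_vertex(coords, bits_per_coord=8):
    """Quantize each coordinate in [0, 1] to an integer bit group and
    pack the groups into one integer (the m-bit vector)."""
    scale = (1 << bits_per_coord) - 1
    packed = 0
    for c in coords:
        q = int(c * scale + 0.5)          # round to nearest integer level
        packed = (packed << bits_per_coord) | q
    return packed

# A 3D vertex becomes a 24-bit vector: groups 0x00, 0x80, 0xFF.
v = encode_vertex([0.0, 0.5, 1.0])
```

With n coordinates and g bits per group, m = n * g; here m = 24.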
79.
SYSTEMS AND METHODS TO IMPROVE TRUST IN CONVERSATIONS
The present disclosure relates to a system, a method, and a product for using machine learning models to quantify and/or improve trust in conversations. The system includes a non-transitory memory; and a processor in communication with the non-transitory memory. The processor executes the instructions to cause the system to: obtain a set of vocal features and a set of text features for each sample in audio samples; obtain a trust score for each sample; perform a preprocess to obtain a set of input features for each sample; determine a type of machine-learning algorithm for the machine-learning network; tune a set of hyperparameters for the machine-learning network; generate a predicted trust score by the machine-learning network with the sets of input features for each sample; and train the machine-learning network based on the predicted trust score and the trust score for each sample to obtain the training result.
Systems and methods for evaluating attributes in supply chain management are disclosed. The system may receive data from a set of data sources corresponding to a supply chain associated with at least a product, pre-process the data based on integration of the data from each of the set of data sources, generate supply chain data based on the integrated data, analyze, via an orchestration engine, the supply chain data to assess an impact of the supply chain data on the supply chain, predict, via the orchestration engine, a state associated with a purchase event of the product in the supply chain, and generate a resolution flow to be executed in the supply chain for managing the predicted state associated with the purchase event of the product.
The proposed systems and methods improve the generation of delivery plans by creating one or more predictive models that can evaluate the likelihood of success of delivery plans based on notable historic delivery plans. The predictive models can be applied on already clustered delivery objects, or the clustering can be applied to results of the predictive models. The output of this stage is a set of clusters of delivery objects that are the initial delivery plans. After this stage, refining processes can be applied to the initial delivery plans to determine a final set of delivery plan candidates. By learning from successful past outcomes, the predictive models can generate initial delivery plans that have a high likelihood of successful execution. By applying refining techniques, the initial delivery plans can be narrowed down to final delivery plans that are tailored to the delivery needs of the current situation. The refining techniques can include one or more of techniques for optimizing key performance indicators, rule-based techniques, and exploratory techniques.
An automated system and method of converting legacy decision logic to a target format. The legacy files are received by the decision logic translation system, which outputs the business rule content in a standard rule structure, according to the selected target format. The process involves decision logic-based rule extraction. In general, methods or processes for extracting business rules have been difficult to reproduce and do not clearly present the extracted rules in terms of the concepts of business rules, their composition, and their categorization. These drawbacks lead to incomplete extraction of rules and massive manual effort to achieve a complete extraction and verification. In contrast, the proposed system overcomes these drawbacks, and outputs files that can be easily used to migrate the business rules to a new platform.
The disclosure provides a non-opaque, abstract, unified query language that exposes the query as a first-class citizen of the underlying architecture. The present disclosure thus facilitates the creation of no-code or low-code applications by enabling a level of collaboration between components that may be difficult to achieve if the language employed were opaque to the architecture. The disclosed query language may be considered “SQL-like,” which may allow contributors familiar with structured query language (SQL) to effectively participate in the design of an application. The defined structures of the data objects of the non-opaque query language described herein are not hidden and are inspectable.
Implementations are directed to methods, systems, and apparatus for ontology-based risk propagation over digital twins. Actions include obtaining knowledge graph data defining a knowledge graph including nodes and edges between the nodes, the nodes including asset nodes representing assets and process nodes representing processes; each edge representing a relation between nodes; determining, from the knowledge graph, an aggregated risk for a first process represented by a first process node, including: identifying, for the first process node, a set of incoming nodes, each incoming node comprising an asset node or a process node and being connected to the first process node by a respective edge; determining a direct risk for the first process; and determining an indirect risk for the first process; and generating, based on the aggregated risk for the first process node, a mitigation recommendation including actions for reducing the aggregated risk for the first process node.
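The aggregation step in the abstract above — combining a process node's direct risk with indirect risk propagated from its incoming asset and process nodes — can be sketched as follows. The weighted-sum combination rule and the edge-weight representation are assumed illustrations; the disclosure does not fix a specific formula.

```python
def aggregated_risk(node, direct_risk, incoming, weight=0.5):
    """Combine direct and propagated risk for one node.

    direct_risk: {node: intrinsic risk score}
    incoming: {node: [(neighbor_risk, edge_weight), ...]} for the
              incoming asset/process nodes connected by edges.
    """
    indirect = sum(r * w for r, w in incoming.get(node, []))
    return direct_risk[node] + weight * indirect

direct = {"proc": 0.2}
incoming = {"proc": [(0.8, 0.5), (0.4, 0.25)]}   # two incoming asset nodes
risk = aggregated_risk("proc", direct, incoming)
```

Running this over every process node of the knowledge graph, in topological order of the edges, would propagate risk through the whole digital twin.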
An Artificial Intelligence (AI) driven multi-platform, multi-lingual translation system analyzes the speech context of each audio stream in the received audio input, selects one of a plurality of translation engines based on the speech context, and provides translated audio output. If audio input from multiple speakers is provided in a single channel, then it is diarized into multiple channels so that each speaker is transmitted on a corresponding channel to improve audio quality. A translated textual output received from the selected translation engine is modified with sentiment data and converted into an audio format to be provided as the audio output.
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
86.
MACHINE-LEARNING SYSTEM AND METHOD FOR PREDICTING EVENT TAGS
Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for training a machine-learning model for predicting event tags. The system obtains event data that specifies, for each of a plurality of events, a respective set of text fields characterizing the respective event. The system generates, from the event data, encoded language features for the plurality of events. The system also obtains knowledge data that specifies information about the event data. The system generates, from the event data and the knowledge data, tag data specifying a respective tag for each of the plurality of events. The system generates, from the tag data and the encoded language features, a respective encoded feature vector for each of the plurality of events. The system combines the tag data with the encoded feature vectors to generate a plurality of training examples.
A video translation system that generates an output video in a target language which includes a translated/output audio track that runs in synchrony with the video content of a received input video in a source language and further displays translated subtitles corresponding to the translated audio track is disclosed. Upon receiving the input video, the domain of the input video can be identified. A translation engine and a transcription engine are selected based on the domain and the pair of languages corresponding to the input video and the output video. The output audio track is generated using the translation engine and merged with a manipulated video, which runs in synchrony with the output audio track to generate the output video. The transcription engine generates subtitles translated from the source language to the target language for the output video.
A smart translation system that translates the input content received from an application based on translation metadata and the application is disclosed. It is initially determined if a translation of the input content exists in a user cache. If it is determined that the translation of the input content exists in the user cache, the translation is retrieved from the user cache. Otherwise, if it is determined that the translation of the input content does not exist in the user cache, the domain and language contexts of the input content are determined and an automatic translation engine is selected based on the contexts and the translation metadata. The translated content is presented to the user via the application while maintaining the look and feel of the application.
An automated, dynamic system and method of testing infrastructure-as-code (IaC). The system is configured to validate infrastructure provisioned in multi-cloud environments and is able to accommodate any cloud provider. Implementation of such a system can eliminate manual errors, as well as enable early detection of errors (i.e., before production deployment), thus empowering early ‘go live’. Furthermore, the proposed embodiments are configured to integrate with already existing devOps pipelines for rapid test execution and can run as many times as needed with minimal configuration, allowing for creation and execution of complex scenarios to test low-level validation upon cloud infrastructure setup.
A self-learning automated application testing system automatically generates test scripts during the execution of an application using an automatic test script generator plugged into the application. The test scripts are generated by capturing event data of events emitted during the execution of the application. The test scripts are compared to the test scripts stored in a test script repository and those test scripts that are determined to be duplicates of the existing test scripts are discarded while the remaining test scripts are stored as new test scripts in the test script repository. An application tester runs regression tests on the application per the new test scripts and logs the results to a test results repository. A dashboard is also provided that enables a user to view and edit the test scripts from the test scripts repository, change configuration settings from a configuration repository, and view test results from the test results repository.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
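The duplicate-filtering step in the self-learning testing abstract above — discarding captured test scripts that already exist in the repository — can be sketched by hashing each script and keeping only unseen hashes. The script format and repository shape are assumptions.

```python
import hashlib

def store_new_scripts(captured, repository):
    """Keep only scripts whose content hash is not yet in the repository.

    repository: a set of hex digests acting as the test script repository
    index; returns the list of scripts actually stored.
    """
    kept = []
    for script in captured:
        digest = hashlib.sha256(script.encode()).hexdigest()
        if digest not in repository:      # discard exact duplicates
            repository.add(digest)
            kept.append(script)
    return kept

repo = set()
first = store_new_scripts(["click(btn)", "type(x)"], repo)
second = store_new_scripts(["click(btn)", "scroll()"], repo)  # one duplicate
```

Content hashing catches only verbatim duplicates; a real system might normalize event ordering or parameters before hashing.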
91.
EXPLAINABLE ARTIFICIAL INTELLIGENCE (AI) BASED IMAGE ANALYTIC, AUTOMATIC DAMAGE DETECTION AND ESTIMATION SYSTEM
An Artificial Intelligence (AI) based automatic damage detection and estimation system receives images of a damaged object. The images are converted into monochrome versions if needed and analyzed by an ensemble machine learning (ML) cause prediction model that includes a plurality of sub-models that are each trained to identify a cause of damage to a corresponding portion of the damaged object from a plurality of causes. In addition, an explanation for the selection of the cause from the plurality of causes is also provided. The explanation includes image portions and pixels of images that enabled the cause prediction model to select the cause of damage. An ML parts identification model is also employed to identify and label parts of the damaged object which are repairable and parts that are damaged and need replacement. A cost estimate for the repair and restoration of the damaged object can also be generated.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06F 18/243 - Classification techniques relating to the number of classes
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
92.
DYNAMICALLY UPDATED ENSEMBLE-BASED MACHINE LEARNING FOR STREAMING DATA
Aspects of the present disclosure provide systems, methods, and computer-readable storage media that support dynamically updated ensemble-based machine learning (ML) classification. An ensemble of ML classifiers may be created from a plurality of trained ML classifiers. These initial ML classifiers may be trained using labeled data to generate predictions based on input data. When an unlabeled data stream is received, the unlabeled data stream may be provided as input to the ensemble to generate predictions. After obtaining labels for the received data, the labels and the unlabeled data stream may be used to train new ML classifiers. The new ML classifiers may replace older ML classifiers in the ensemble. In this manner, the ensemble of ML classifiers is used to perform predictions on high volume streaming data while being dynamically updated with ML classifiers that have learned changes in statistical distribution across more recent input data.
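The ensemble update cycle above — majority-vote prediction over current members, then replacing the oldest member with a classifier trained on newly labeled data — can be sketched minimally. The classifier interface (a callable returning a label) and the fixed ensemble size are assumptions.

```python
from collections import Counter, deque

class VotingEnsemble:
    def __init__(self, classifiers, max_size=5):
        # A deque with maxlen evicts the oldest member automatically
        # when a new classifier is appended at capacity.
        self.members = deque(classifiers, maxlen=max_size)

    def predict(self, sample):
        """Majority vote across the current ensemble members."""
        votes = Counter(clf(sample) for clf in self.members)
        return votes.most_common(1)[0][0]

    def refresh(self, new_classifier):
        """Add a classifier trained on freshly labeled stream data."""
        self.members.append(new_classifier)

# Three toy threshold classifiers voting on a streamed sample.
ens = VotingEnsemble([lambda x: x > 0, lambda x: x > 1, lambda x: x > -1])
label = ens.predict(0.5)   # two of three members vote True
```

Periodic calls to `refresh` keep the ensemble tracking drift in the statistical distribution of recent stream data, as the abstract describes.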
A device may receive state data, actions, and rewards associated with a network of RL agents monitoring a microgrid environment, and may model the network of RL agents as a spatiotemporal representation. The device may represent interactions of the RL agents as edge attributes in the spatiotemporal representation, and may determine edge attributes, transmissibility, connectedness, and communication delay for each of the RL agents in the spatiotemporal representation. The device may determine, based on the transmissibility, the connectedness, and the communication delay, localized clusters of the RL agents, and may process the localized clusters, with a first machine learning model, to identify consensus master RL agents. The device may process the consensus master RL agents, with a second machine learning model, to identify a final master RL agent for the network of RL agents, and cause the final master RL agent to control the microgrid environment.
Aspects of the present disclosure provide methods, devices, and computer-readable storage media that support detection, effect monitoring, and recovery from failure modes in cloud computing applications using a failure mode effect analysis (FMEA) engine. Historical metadata related to operation of a hierarchy of devices may be used as training data to train the FMEA engine to identify failure modes experienced by the hierarchy of devices. After training the FMEA engine, metadata from the hierarchy of devices may be input to the FMEA engine to identify a failure mode that may have occurred, and the FMEA engine may select a recovery process to recommend for addressing or mitigating the identified failure mode. In some implementations, the FMEA engine may output an indication of the recommended recovery process and/or initiate performance of one or more operations at the hierarchy of devices to recover from the failure event.
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
95.
Serverless environment-based provisioning and deployment system
In some implementations, a device may generate, based at least in part on a first set of inputs, a serverless software development environment associated with a set of cloud resources. The device may generate, based at least in part on a first machine learning model, a technology stack recommendation having a set of associated tools for performing a software development task. The device may instantiate the recommended technology stack in the serverless software development environment and generate a set of applications based at least in part on executing the set of tools. The device may deploy the set of applications in one or more serverless application environments. The device may use machine learning to observe deployed applications, detect hidden anomalies, and perform root-cause analysis, thereby providing a lean and sustainable serverless environment.
Systems and methods for generating an output representation are disclosed. A system may include a processor including a representation generator. The representation generator may receive input data comprising an input content and an instruction. The representation generator may include a parsing engine to parse the input data to obtain parsed information. The representation generator may include a mapping engine to map the parsed information with a pre-stored base template pertaining to a pre-defined module, to obtain a mapped template. The representation generator may generate, through a machine learning (ML) model, based on the mapped template, an output representation in a pre-defined format. The output representation may correspond to the expected representation of the input content.
Systems and methods for facilitating validation of datasets are disclosed. A system may include a processor. The system may include a data validator implemented via the processor to receive an input dataset including component metadata. The data validator may perform, using a validation model and through a rules engine, validation of information in the component metadata to obtain a validation dataset. The validation may enable prediction of at least one invalid feature in the component metadata. The system may include an insight generator implemented via the processor to generate, based on the validation dataset, automated insights pertaining to mitigation of the at least one invalid feature. In an embodiment, the automated insights may be stored in a distributed ledger to facilitate an authenticated storage of the automated insights. The authenticated storage may be facilitated by a network comprising a plurality of nodes.
An autonomous system and method of comprehensively testing serverless applications. The system is configured to automatically generate test scripts and test data based on deployed and modified function code and configuration files. The system is configured to work in multi-cloud environments and is able to accommodate any cloud provider. Implementation of such a system can eliminate manual errors. Furthermore, the proposed embodiments are configured to integrate with already existing devOps pipelines for rapid test execution, and can continue automated testing in real-time as modifications to the application are made.
The present disclosure describes a system and method for using image processing to check color application and to indicate to a user where color application is erroneous. The system and method may include capturing a first digital image of an object before color is applied and a second digital image after color is applied. Then, the first and second digital images may undergo editing processes to prepare the digital images for analysis, analysis to determine colors in the digital images, and the generation of heatmaps in a third digital image showing errors in the application of color.
Aspects of the present disclosure provide systems, methods, and computer-readable storage media that support security-aware compression of machine learning (ML) and/or artificial intelligence (AI) models, such as for use by edge computing systems. Aspects described herein leverage cybersecurity threat models, particularly models of ML/AI-based threats, during iterative pruning to improve security of compressed ML models. To illustrate, iterative pruning may be performed on a pre-trained ML model until stop criteria are satisfied. This iterative pruning may include pruning an input ML model based on pruning heuristic(s) to generate a candidate ML model, testing the candidate ML model based on attack model(s) to generate risk assessment metrics, and updating the heuristic(s) based on the risk assessment metrics. If the risk assessment metrics fail to satisfy the stop criteria, the candidate ML model may be provided as input to a next iteration of the iterative pruning.
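The prune-and-assess loop in the abstract above can be sketched as follows. Everything here is an assumed illustration: the pruning heuristic, the attack models, and the risk metric are placeholders, and for the toy run the "attack" simply scores model size so the loop terminates (a real risk metric would come from executing adversarial attack models against the candidate).

```python
def secure_compress(model, prune_step, attack_models, risk_threshold,
                    max_rounds=10):
    """Iteratively prune, then risk-test the candidate; stop when the
    worst-case risk metric satisfies the stop criteria."""
    for _ in range(max_rounds):
        candidate = prune_step(model)              # prune -> candidate model
        risk = max(attack(candidate) for attack in attack_models)
        if risk <= risk_threshold:                 # stop criteria satisfied
            return candidate
        model = candidate    # candidate feeds the next pruning iteration
    return model

weights = [0.9, 0.5, 0.1, 0.05]
compressed = secure_compress(
    weights,
    prune_step=lambda m: sorted(m, key=abs)[1:],   # drop smallest weight
    attack_models=[lambda m: len(m) / 10],         # toy risk metric
    risk_threshold=0.2,
)
```

In a full system the risk assessment metrics would also feed back into the pruning heuristics between iterations, as the abstract notes.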