The present disclosure provides techniques and solutions for defining, deploying, or using distributed neural networks. A distributed neural network includes a plurality of computing elements, which can include Internet of Things (IoT) devices, other types of computing devices, or a combination thereof. At least one neuron of a neural network is implemented, for a given data processing request using the distributed neural network, on a single computing element. Disclosed techniques can manage data processing requests in the event of an unreachable computing element, such as by processing the request without the participation of such computing element. Disclosed techniques also include redefining distributed neural networks to replace an unreachable computing element. Information to configure computing elements as neurons can include one or more of definitions of computing elements that will provide input, weights to be associated with inputs, definitions of computing elements to receive output, or an activation function.
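As a rough illustration (not the disclosed implementation), a single computing element configured as one neuron — with input sources, weights, an activation function, and tolerance of unreachable input elements — could be sketched as follows. All names and data shapes here are assumptions; a `None` input value stands in for an unreachable element.

```python
from dataclasses import dataclass

@dataclass
class NeuronElement:
    """One computing element configured as a single neuron."""
    weights: dict      # input element id -> weight (assumed config format)
    bias: float = 0.0

def evaluate(neuron, inputs):
    """Weighted sum over reachable inputs, then a ReLU activation.
    Inputs from unreachable elements (value None) are skipped, so the
    data processing request still completes without their contribution."""
    total = neuron.bias
    for src, value in inputs.items():
        if src in neuron.weights and value is not None:
            total += neuron.weights[src] * value
    return max(0.0, total)  # ReLU as an example activation function
```

ReLU is chosen only for concreteness; the abstract leaves the activation function open as part of the neuron's configuration.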
A system, method, and computer program product for managing distributed transactions of a database. A transaction manager is provided for each of a plurality of transactions of the database. Each transaction manager is configured to perform functions that include generating a transaction token that specifies data to be visible for a transaction on the database. The database contains both row and column storage engines, and the transaction token includes a transaction identifier (TID) for identifying committed transactions and uncommitted transactions. A last computed transaction is designated with a computed identifier (CID), record-level locking of records of the database is performed using the TID and CID to execute the transaction, and the plurality of transactions of the database are executed with each transaction manager.
Methods, systems, and computer-readable storage media for providing two or more map paths, each map path representing a set of models and relationships between models for data stored in a database system, combining the two or more map paths to provide a map path tree that at least partially defines a data structure for storing at least a portion of the data stored in the database system in the cache, querying the database system by recursively traversing the map path tree to retrieve data instances from the database system, and storing each data instance in the cache using the data structure.
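The recursive traversal described above might be sketched as follows, purely as an illustration: the map path tree is assumed to be a nested dict of models, and `query_fn` stands in for querying the database system for the data instances of one model scoped to a parent instance. None of these names come from the disclosure.

```python
def load_into_cache(node, query_fn, cache, parent_key=None):
    """Recursively walk a map path tree. At each node, fetch the data
    instances for that node's model (scoped to the parent instance),
    store each instance in the cache under a path-like key, and then
    descend into the child models.
    node: {"model": str, "children": [...]} -- an assumed tree shape."""
    for instance in query_fn(node["model"], parent_key):
        key = (parent_key, node["model"], instance)
        cache[key] = instance
        for child in node.get("children", []):
            load_into_cache(child, query_fn, cache, parent_key=key)
```

The path-like key mirrors the idea that the map path tree defines the cache's data structure: an instance's cache key embeds the chain of models and parent instances above it.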
Systems and methods provide identification of a code artifact, determination of logical entities of the code artifact, determination of references between the logical entities of the code artifact, determination, based on the determined references, of one or more methods of the code artifact that are referenced by no logical entities of the code artifact, and identification of ones of the one or more methods which were not executed by a production system by searching a code execution trace for each of the one or more methods.
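A minimal sketch of the two filtering steps — methods referenced by no logical entity, then narrowed to those also absent from the production execution trace — assuming simple sets and dicts as stand-ins for the analyzed code artifact:

```python
def unexecuted_candidates(methods, references, execution_trace):
    """methods: all method names found in the code artifact.
    references: dict mapping a method -> set of logical entities that
    reference it (missing or empty means unreferenced).
    execution_trace: set of method names observed in production.
    Returns methods referenced by no entity AND never executed."""
    unreferenced = {m for m in methods if not references.get(m)}
    return {m for m in unreferenced if m not in execution_trace}
```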
A method may include a collaboration controller receiving a collaboration request to share a data asset associated with a first customer onboarded at a first data center with a second customer. In response to the collaboration request, the collaboration controller may determine that the second customer is onboarded at a second data center but not the first customer. Moreover, the collaboration controller may replicate, at the second data center, the data asset associated with the first customer upon determining that a copy of the data asset is not already present at the second data center. The replicating of the data asset may provide the second customer access to the data asset by at least creating, at the second data center, the copy of the data asset. Related systems and computer program products are also provided.
Methods, systems, and computer-readable storage media for integrating skills into computer-executable applications using a low-code/no-code (LCNC) development platform.
The present disclosure involves systems, software, and computer implemented methods for an artificial intelligence work center for ERP data. One example method includes receiving scenario and model settings for an artificial intelligence model for a predictive scenario. A copy of the dataset is processed based on the settings to generate a prepared dataset that is provided with the settings to a predictive analytical library. A trained model and evaluation data for the trained model are received from the predictive analytical library. A request is received to generate a prediction for the predictive scenario for a target field for a record of the dataset. The record of the dataset is provided to the trained model and a prediction for the target field for the record of the dataset is received from the model. The prediction is included for presentation in a user interface that displays information from the dataset.
A method includes receiving, at a search toolbar, a search query from a machine in a network. The machine has an associated machine profile for participating in the network as an entity. The machine profile includes a machine identifier and machine metadata. A query type is determined from the search query. A search context for the machine is determined using a semantic graph of the network. From a set of services for the network, one or more relevant services to respond to the search query are identified based on the query type and the search context. The search query is applied to the one or more relevant services to obtain a set of responses. A set of relevant results for the search query is determined from the set of responses. The set of relevant results is transmitted to the machine.
Various examples are directed to systems and methods for administering web management software. A system may access alert data describing a plurality of alerts generated by a cloud-implemented web management software and apply a set of problem code conditions to the alert data. Based on the applying of the set of problem code conditions to the alert data, the system may assign a first problem code to a first alert of the plurality of alerts. The system may determine a risk score for the web management software based at least in part on a problem state for the web management software, where the problem state comprises at least the first problem code. The system may determine that the risk score for the web management software matches a threshold condition and send a problem message to a user computing device associated with a user, the problem message describing the problem state of the web management software.
A method includes receiving a message query from an entity identifier participating in a social network. The message query specifies one or more entities, one or more requirements, and one or more constraints. A set of message query parameters is generated based on the message query. A set of queries for a semantic graph of the social network is generated based on the set of message query parameters. The set of queries is applied to the semantic graph to obtain a set of query results. A message context of the entity identifier is determined based on the set of query results and the set of message query parameters. A set of messages from a message repository is determined based on the message context. The set of messages can be presented on a client computer associated with the entity identifier.
Given a set of port stations, a collection of forwarding orders to/from the port stations, and a set of available rail rakes along with their current schedules, a rake plan that maps order containers to rakes is generated. The optimization model is constrained by rake availability, rake capacity (the number of containers that can be accommodated by a rake), and the indivisibility of containers. The scheduling of the rail rakes can be represented as a sparse 3-dimensional binary matrix. Each element of the matrix indicates whether a particular rail rake is assigned to carry a particular container from a particular order in the next trip. The rail rake scheduling data can be used to generate communications from a rail rake planning server to multiple client devices, each client device associated with a different port station.
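The sparse 3-dimensional binary matrix can be represented by storing only its nonzero cells as (rake, order, container) triples. The greedy assignment below is a simplified stand-in for the disclosed optimization model — it respects rake capacity and container indivisibility but makes no optimality claim, and all names are illustrative:

```python
def build_assignment(orders, rake_capacity):
    """Assign each order's containers to the first rake with remaining
    capacity. Containers are indivisible: each occupies exactly one rake.
    orders: dict order_id -> number of containers in the order.
    rake_capacity: dict rake_id -> container capacity for the next trip.
    Returns the nonzero cells of the 3-D binary matrix as a set of
    (rake_id, order_id, container_index) triples."""
    remaining = dict(rake_capacity)
    triples = set()
    for order, count in orders.items():
        for c in range(count):
            for rake, cap in remaining.items():
                if cap > 0:
                    triples.add((rake, order, c))
                    remaining[rake] = cap - 1
                    break
    return triples
```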
Some embodiments are directed to obtaining computer network connection data of the client-side transaction entry program, the computer network connection data allowing a connection to be made with the client-side transaction entry program, receiving a begin transaction message from the client-side computer indicating a request to launch a transaction, the computer network connection data being obtained before receiving the begin transaction message, and setting up a connection between the transaction system and the client-side transaction entry program using the computer network connection data enabling the client-side transaction entry program and the transaction system to cooperate in performing the transaction.
The present disclosure involves systems, software, and computer implemented methods for evaluating machine learning on remote datasets using confidentiality-preserving evaluation data. In response to determining that data of the remote customer dataset is of sufficient quality and quantity, feature data corresponding to a machine learning pipeline is generated. The remote customer dataset into one or more data partitions and for each partition, one or more baseline models and one or more machine learning models are trained using a machine learning library included in the remote customer database. Aggregate evaluation data is generated for each baseline model and each machine learning model that includes model debrief data and customer data statistics. In response to determining that the customer has enabled sharing of the aggregate evaluation data with a software provider who provided the remote customer database, the aggregate evaluation data is provided to the software provider.
Described herein are systems and methods for providing data transfer in a computer-implemented database from a database extension layer. A data server associated with a database receives a request to transfer data stored in a database extension layer of the database. Input data chunks are collected from the database extension layer until a configured row count limit is reached. Row positions are determined from the input data chunks. Value identifiers corresponding to the row positions are determined. Values corresponding to the value identifiers are retrieved. Output data is generated based on the values corresponding to the value identifiers.
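The chunk-collection and two-step lookup (row position → value identifier → value) could be sketched as below. The dict-based structures are illustrative stand-ins for the extension layer's storage, not the actual format:

```python
def transfer_chunks(chunks, row_limit, value_id_by_row, dictionary):
    """Collect input chunks (lists of row positions) until the configured
    row count limit is reached, then map row positions to value
    identifiers and value identifiers to values."""
    rows = []
    for chunk in chunks:
        if len(rows) >= row_limit:
            break
        rows.extend(chunk[:row_limit - len(rows)])   # stop at the limit
    value_ids = [value_id_by_row[r] for r in rows]   # row position -> value id
    return [dictionary[v] for v in value_ids]        # value id -> value
```

The indirection through value identifiers mirrors dictionary-encoded column storage, where rows hold compact identifiers and the dictionary holds each distinct value once.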
Some embodiments may be associated with facilitating extensibility for an enterprise portal in a cloud computing environment. A computer processor of a multi-level extensibility framework server may provide to a user a graphical view of existing services of the enterprise portal using information from the business enterprise portal data store and a sample data model. The processor may also receive from the user extension information for at least one of the technical layers and, based on the received extension information, automatically generate and provide an intelligent extension proposal to the user. The processor may also display simulated results to the user based on the intelligent extension proposal and the sample data model. The processor may then receive from the user a confirmation of the intelligent extension proposal and automatically transfer extension fields, entities, and mapping to multiple technical layers of the enterprise portal.
In one aspect, a method may include receiving a query associated with a plurality of data sources, wherein the query includes a first attribute; identifying that a query operator, which is associated with execution of the query and the first attribute, includes a first input from a first data source of the plurality of data sources and a second input from a second data source of the plurality of data sources; determining that the first attribute at the second data source corresponds to null; pruning, based on the determined null, the second input from the second data source to inhibit a select from the second data source; and in response to the pruning, performing the query operator by selecting, from the first data source, a column corresponding to the first attribute. Related systems, methods, and articles of manufacture are also disclosed.
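A toy sketch of the pruning step, assuming a union-style operator over two sources (the disclosure does not fix the operator type): a source whose column for the attribute is entirely null contributes nothing, so its scan is skipped and only the remaining source is selected from.

```python
def execute_union(attribute, sources):
    """sources: list of (name, {attribute: column_values_or_None}) pairs,
    where None indicates the attribute is entirely NULL at that source.
    A None column is pruned (its select is inhibited); the remaining
    sources have their column for the attribute selected and combined."""
    selected = []
    for name, columns in sources:
        column = columns.get(attribute)
        if column is None:        # attribute is null at this source: prune
            continue
        selected.extend(column)   # otherwise select the column
    return selected
```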
The present disclosure involves systems, software, and computer implemented methods for data confidentiality-preserving machine learning on remote datasets. An example method includes receiving connection information for connecting to a remote customer database and storing the connection information in a machine learning runtime. Workload schedule information for allowable time windows for machine learning pipeline execution on remote customer data of the customer is received from the customer. A determination is made that an execution queue includes a machine learning pipeline during an allowed time window. The connection information is used to connect to the remote customer database during the allowed time window. Execution is triggered by the machine learning runtime of the machine learning pipeline on the remote customer database. Aggregate evaluation data corresponding to the execution of the machine learning pipeline on the remote customer database is received and provided to a user.
Systems and methods include presentation of a subset of a result set of items received from a remote system, reception of a command to perform an operation on all items of the result set while presenting the subset, and determination, in response to the command, of whether a total number of items in the result set exceeds a threshold value. If it is determined that the total number of items in the result set exceeds the threshold value, a first request is transmitted to the remote system to perform the operation on all items of the result set, where the first request includes filter values associated with the result set. If it is determined that the total number of items in the result set does not exceed the threshold value, a second request is transmitted to the remote system to perform the operation on all items of the result set, where the second request includes an identifier of each item of the result set.
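The threshold branch can be sketched as the choice of request payload: above the threshold, send the filter values and let the remote system re-derive the result set; at or below it, enumerate the item identifiers. The payload shapes below are assumptions, not a real API:

```python
def build_bulk_request(result_set_ids, filters, threshold):
    """Choose the request form for performing an operation on all items
    of a result set, based on the total number of items."""
    if len(result_set_ids) > threshold:
        # Large set: ship the filters that produced the result set.
        return {"operation": "apply_to_all", "filters": filters}
    # Small set: ship an explicit identifier for each item.
    return {"operation": "apply_to_all", "ids": list(result_set_ids)}
```

Sending filters for large sets avoids transmitting (and re-materializing) a huge identifier list; sending explicit identifiers for small sets pins the operation to exactly the items the user saw.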
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 16/25 - Integrating or interfacing systems involving database management systems
09 - Scientific and electric apparatus and instruments
35 - Advertising and business services
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer programs; computer software; manuals in electronic form in connection with computer software, hardware and peripherals. Business consultation. Design, development, programming, customization, integration, implementation, maintenance, trouble-shooting, updating, and rental of computer programs and software; computer software research and engineering; computer software consulting; cloud computing services; software as a service; information technology services provided on an outsourcing basis.
22.
TABLE USER-DEFINED FUNCTION FOR GRAPH DATA PROCESSING
A method may include receiving a definition of a table user-defined function (TUDF) in a graph query language. The table user-defined function may be created based on the definition. For example, the creation of the table user-defined function may include checking and compiling the definition to generate executable code associated with the table user-defined function. Upon receiving a query including a relational query language statement invoking the table user-defined function, such as a structured query language select statement, the query may be executed on at least a portion of graph data stored in a database. The executing of the query may include calling the executable code to execute the table user-defined function included in the relational query language statement. Related systems and computer program products are also provided.
Computer-readable media, methods, and systems are disclosed for performing a method for partial validation of a tree-like hierarchy structure. A method includes selecting a first portion of the hierarchy structure to edit, updating a status associated with the first portion to a draft mode status, modifying the first portion to create an edited first portion, and executing a first validation process on the edited first portion to determine if the edited first portion is consistent with a plurality of rules of the hierarchy structure. If the edited first portion is inconsistent with at least one of the plurality of rules, the method includes displaying an error message and/or a warning message to a user on a user interface. If the edited first portion is consistent with the plurality of rules, the method includes updating a status associated with the edited first portion to an active mode status.
Systems and methods include reception of an instruction to perform a consistency check on compressed column data. In response to the instruction, a compression algorithm applied to uncompressed column data to generate the compressed column data is determined, one or more consistency checks associated with the compression algorithm are determined, wherein a first one or more consistency checks associated with a first compression algorithm are different from a second one or more consistency checks associated with a second compression algorithm, the one or more consistency checks are executed on the compressed column data, and, if the one or more consistency checks are not satisfied, a notification is transmitted to a user.
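The core idea — each compression algorithm maps to its own set of consistency checks — is naturally a dispatch table. The checks below are illustrative stand-ins (the disclosure does not specify them), and the compressed-data shapes are assumptions:

```python
def check_run_length(blob):
    """Illustrative RLE check: every run must have a positive count.
    blob: list of (value, run_count) pairs."""
    return all(count > 0 for _, count in blob)

def check_dictionary(blob):
    """Illustrative dictionary-compression check: every value identifier
    must resolve in the dictionary. blob: (value_ids, dictionary)."""
    ids, dictionary = blob
    return all(i in dictionary for i in ids)

# Different algorithms get different (here: disjoint) check sets.
CHECKS = {
    "rle": [check_run_length],
    "dictionary": [check_dictionary],
}

def run_consistency_checks(algorithm, compressed):
    """True if all checks for this algorithm pass; a failure would
    trigger the user notification described above."""
    return all(check(compressed) for check in CHECKS[algorithm])
```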
A computer implemented method can receive a parameterized query written in a declarative language. The parameterized query comprises a parameter which can be assigned different values. The method can perform a first compilation session of the parameterized query in which the parameter has no assigned value. Performing the first compilation session can generate an intermediate representation of the parameterized query. The intermediate representation describes a relational algebra expression to implement the parameterized query. The method can perform a second compilation session of the parameterized query in which the parameter has an assigned value. Performing the second compilation session reuses the intermediate representation of the parameterized query.
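A minimal sketch of the reuse pattern: the first compilation session builds the parameter-independent intermediate representation once, and later sessions only bind the value. The tuple "IR" here is a trivial placeholder for a relational algebra expression:

```python
class ParameterizedQuery:
    """Compile the query shape once with the parameter unbound; reuse
    that intermediate representation for every subsequent execution."""

    def __init__(self, text):
        self.text = text
        self._ir = None
        self.compile_count = 0

    def compile_ir(self):
        if self._ir is None:                  # first compilation session
            self.compile_count += 1
            self._ir = ("select", self.text)  # placeholder relational-algebra IR
        return self._ir

    def execute(self, param_value):
        ir = self.compile_ir()                # reused on later sessions
        return (ir, param_value)              # bind the parameter value
```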
An in-memory computing system for conducting on-line transaction processing and on-line analytical processing includes system tables in main memory to store runtime information. A statistics service can access the runtime information using script procedures stored in the main memory to collect monitoring data, generate historical data, and produce other system performance metrics, while maintaining the runtime data and generated data in the main memory.
A method for executing a query may include generating a partition value identifier for a partitioned table. The partitioned table may include a main fragment including a main dictionary storing a first value and a main value identifier corresponding to the first value and a delta fragment including a delta dictionary storing a second value and a delta value identifier corresponding to the second value. The partition value identifier may be set based at least in part on the first value and the second value. The generated partition value identifier and a corresponding one of the main value identifier and the delta value identifier may be maintained as part of a mapping. A query to group data stored in the partitioned table may be received. The query may be executed by at least using the mapping.
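One way to picture the mapping, as a hedged sketch: every distinct value across the main and delta dictionaries receives one partition-wide identifier, and the mapping records which fragment-local identifier each corresponds to. Grouping can then compare rows from both fragments by partition value identifier alone, without materializing values. All structures here are assumptions:

```python
def build_partition_mapping(main_dict, delta_dict):
    """main_dict / delta_dict: fragment-local value id -> value.
    Returns a mapping (fragment, local_value_id) -> partition value id,
    where equal values in either fragment share one partition id."""
    mapping = {}
    partition_ids = {}   # value -> partition value identifier
    for fragment, d in (("main", main_dict), ("delta", delta_dict)):
        for local_id, value in d.items():
            pid = partition_ids.setdefault(value, len(partition_ids))
            mapping[(fragment, local_id)] = pid
    return mapping
```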
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
The present disclosure provides more efficient techniques for removing a host from a multi-host database system. An instruction to remove a host system may be received. In response, a determination of whether the host system does or does not store any source tables is made based on a host-type identifier for the host system. This determination may not require obtaining landscape information for each of the hosts in the database system. If the host system stores replica tables and does not store source tables, those replica tables may be dropped based on the determination that the host system does not store any source tables. As such, in cases where table redistribution is not needed the landscape information is not obtained, thereby making the host removal process more efficient.
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
29.
DYNAMICALLY MIGRATING SERVICES BASED ON SIMILARITY
Techniques for dynamically migrating services based on similarity are disclosed. In some embodiments, a computer system may, for each online service in a plurality of online services of a source cloud environment, compute a corresponding edit distance value based on a stream of transaction log data of the online service. The edit distance value may comprise a minimum number of edit operations required to change a first log entry in the stream of transaction log data to a second log entry in the stream of transaction log data. Next, the computer system may determine a migration plan based on a measure of similarity between the edit distance values of the online services, where the migration plan specifies a distribution of the online services amongst a plurality of destination cloud environments, and then migrate the online services from the source cloud environment to the destination cloud environments using the migration plan.
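The "minimum number of edit operations" described above is the classic Levenshtein distance. As an illustrative sketch (the abstract does not specify how log entries are tokenized or how the similarity measure combines the per-service values), the core computation looks like this:

```python
def edit_distance(a, b):
    """Levenshtein distance: the minimum number of insert, delete, and
    substitute operations needed to turn log entry a into log entry b.
    Uses a rolling single-row DP to keep memory at O(len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]
```

Services with numerically close edit-distance values over their transaction logs would then be treated as similar when the migration plan distributes them across destination cloud environments.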
In an example embodiment, machine learning models are trained and used to predict a growth classification of time fields and category fields of application tables of Enterprise Resource Planning (ERP) software databases. These predictions can then be used to forecast future technological needs or the future table size more precisely.
The present disclosure relates to computer-implemented methods, software, and systems for identifying trends in the behavior of execution of services in a cloud platform environment and supporting alert triggering for expected outages prior to their occurrence. Metrics data for performance of the cloud platform is continuously obtained. Based on evaluation of the obtained metrics data, the performance of the cloud platform is tracked over time to identify a trend in a performance of a first service on the cloud platform. The identified trend in the performance is compared with a current performance rate of the first service. Based on an evaluated difference between the current performance rate and the identified trend, the difference is classified into an issue-reporting level associated with a prediction for an outage at the first service. A notification for the trend is reported.
H04L 41/0631 - Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
H04L 41/149 - Network analysis or design for prediction of maintenance
H04L 43/0876 - Network utilisation, e.g. volume of load or congestion level
32.
INTELLIGENT MACHINE LEARNING-BASED MAPPING SERVICE FOR FOOTPRINT
Intelligent mapping from created item information to sustainability reference content from a variety of sources can be implemented to facilitate created item footprint management and other sustainability applications. The difficult task of finding appropriate emission factors across a portfolio can be automated. Assisted search can be implemented using enhanced search techniques. Fallback mappings can be implemented to accommodate different levels of granularity during search. A machine learning model can be trained based on a variety of input data, including confirmed mappings, mapping history, and rules. The process of mapping to emission datasets can thus be simplified, enabling footprint calculations to proceed.
A system of configuring a database which is distributed across multiple nodes according to a table distribution, e.g., by storing respective tables of the database at respective nodes. A graph partitioning procedure is applied to a graph of the distributed database, with vertices representing tables and edges representing cross-table operations. A distribution of the tables across the nodes is determined based on the partitioning. The storage of the tables is configured according to the determined distribution.
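The disclosure names only "a graph partitioning procedure" in general; as a hedged stand-in, the greedy affinity placement below shows the shape of the step. Tables are vertices, edge weights count cross-table operations, and each table is placed on the node where its already-placed neighbors carry the most edge weight (ties go to the least-loaded node), approximating a min-cut distribution:

```python
def partition_tables(edges, num_nodes):
    """edges: dict (table_a, table_b) -> cross-table operation weight.
    Returns a placement dict table -> node index in [0, num_nodes)."""
    tables = sorted({t for e in edges for t in e})
    placement = {}
    for table in tables:
        # Total edge weight to neighbors already placed on each node.
        affinity = [0.0] * num_nodes
        for (a, b), w in edges.items():
            other = b if a == table else a if b == table else None
            if other in placement:
                affinity[placement[other]] += w
        load = [sum(1 for n in placement.values() if n == i)
                for i in range(num_nodes)]
        # Prefer high affinity; break ties toward the least-loaded node.
        placement[table] = max(range(num_nodes),
                               key=lambda i: (affinity[i], -load[i]))
    return placement
```

A production system would likely use an established partitioner (e.g. multilevel schemes) rather than this one-pass greedy heuristic; the sketch only shows how the graph model drives the table distribution.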
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
The present disclosure involves systems, software, and computer implemented methods for performing human capital management. One example method includes receiving a set of experience data of a user as unstructured data, converting the unstructured experience data into structured experience data of the user, receiving a set of personality data of the user as unstructured data, converting the unstructured personality data into structured personality data of the user, receiving a set of motivational and preferences data of the user as unstructured data, and converting the unstructured motivational and preferences data into structured motivational and preferences data of the user. The structured experience data, the structured personality data, and the structured motivational and preferences data are combined into a user profile, which is stored in a database. An opportunity is recommended to the user based on the user profile.
Systems, methods, and computer media for determining compatible users through machine learning are provided herein. Previous interactions between some users in a group can be used to determine a first set of user-to-user compatibility scores. Both the first set of compatibility scores and attributes for the users in the group can be provided as inputs to a machine learning model that can be used to determine a second set of user-to-user compatibility scores for user pairs who do not have an interaction history. Along with input constraints, the first and second sets of user-to-user compatibility scores can be used to select compatible user groups.
According to some embodiments, systems and methods are provided, including a repository storing at least an Application Programming Interface (API) mapping table; a memory storing processor-executable program code; and a processing unit to execute the processor-executable program code to: receive an input of one or more legacy API identification elements for a legacy API; determine whether the received legacy API identification elements correspond to a standard legacy API; in a case the received legacy API identification elements do correspond to a standard legacy API, determine whether a corresponding updated API is available; in a case the corresponding updated API is available, determine whether the legacy API includes at least one extension; and in a case the legacy API does include at least one extension, generate an updated corresponding API extension, and transmit the corresponding updated API and the updated corresponding API extension to the user. Numerous other aspects are provided.
Systems and methods include reception of an indication of a first event associated with a first object instance. In response to the indication of the first event, a first process chain is determined, comprising a first process associated with an object instance of a first meta domain model object type, a second process associated with an object instance of a second meta domain model object type, and a first process step adapter to map a response to a request. The first process is executed based on a request associated with an object instance of the first meta domain model object type to generate a first response associated with an object instance of the first meta domain model object type. The first process step adapter is executed to map the first response to a first request associated with an object instance of the second meta domain model object type. The second process is executed based on the first request to generate a second response associated with an object instance of the second meta domain model object type.
09 - Scientific and electric apparatus and instruments
Goods & Services
Artificial intelligence software; Artificial intelligence software for analysis; Artificial intelligence and machine learning software; Interactive software based on artificial intelligence; Software for the integration of artificial intelligence and machine learning in the field of Big Data.
09 - Scientific and electric apparatus and instruments
Goods & Services
Artificial intelligence software; Artificial intelligence software for analysis; Artificial intelligence and machine learning software; Interactive software based on artificial intelligence; Software for the integration of artificial intelligence and machine learning in the field of Big Data.
40.
INTELLIGENT DOCUMENT PROCESSING IN ENTERPRISE RESOURCE PLANNING
The present disclosure involves systems, software, and computer implemented methods for intelligent document processing in enterprise resource planning. One example method includes automatically determining that a document file is ready to be processed in an ERP (Enterprise Resource Planning) system. The document file is automatically processed and a request is sent to the ERP system to automatically create or update ERP data in the ERP system based on the document file. Status information is received from the ERP system regarding the request to create or update ERP data in the ERP system. The status information received from the ERP system is logged and information indicating that the document file has been processed in the ERP system is automatically recorded.
Various embodiments for a triple integration and querying system are described herein. An embodiment operates by identifying a plurality of triples corresponding to a knowledge graph, and generating a table in a database into which to import the set of triples. The table includes a subject column, a predicate column, and multiple object columns across different datatypes. Values from the triples of the knowledge graph are loaded into the table and a query is executed on the table.
Various embodiments for a triple integration and querying system with dictionary compression are described herein. An embodiment operates by identifying a table of a database with four or more columns with triple formatted data including one subject column, one predicate column, and two or more object columns. It is determined that a master dictionary is to be generated for both the subject column and the predicate column based on an identical datatype being used for both columns. A subject data dictionary and a predicate data dictionary are generated. A unique value is assigned the same unique identifier in both the subject data dictionary and the predicate data dictionary. A master dictionary including both the unique values from the subject data dictionary and the predicate data dictionary is generated. Values in the subject column and the predicate column are replaced based on the unique values from the master dictionary.
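The merge into a master dictionary might look like the sketch below, with all structures assumed: the distinct values of the subject and predicate columns (same datatype) share one identifier space, so a value appearing in both columns is stored once, and the columns become lists of compact identifiers.

```python
def build_master_dictionary(subject_values, predicate_values):
    """Merge distinct subject-column and predicate-column values into one
    master dictionary mapping value -> unique identifier. A value shared
    by both columns receives a single identifier."""
    master = {}
    for value in list(subject_values) + list(predicate_values):
        if value not in master:
            master[value] = len(master)
    return master

def encode(column, master):
    """Replace column values with their master-dictionary identifiers."""
    return [master[v] for v in column]
```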
Disclosed herein are various embodiments for performing a delta merge with location data. An embodiment operates by receiving a command to merge a delta storage with an original main storage. A coordinate system corresponding to a plurality of data entries of data in the delta storage is identified. A coordinate system specification, corresponding to the identified coordinate system, is added to metadata of a new version of the main storage. A merge operation is performed between the delta storage and the original main storage, in which the plurality of data entries of the delta storage are copied to a container portion of the new version of the main storage, separate from the metadata. The plurality of data entries of the delta storage are deleted and the original main storage is replaced with the new version of the main storage.
Mechanisms are disclosed for modelling and quantitatively characterizing emissions inflows and outflows. Scoping inputs are received, including a scope of emission-producing physical inputs defined by a physical process. Modeling inputs are received, including footprints associated with physical manufacturing inputs. Modeled energy flows may be provided via a graphical modeling user interface, which supports allocation rule definitions for distributing emissions footprints. An estimated emission flow is calculated based on combined energy flows. The material flows may be derived from aggregated transaction data associated with emission-producing physical inputs. The calculated emission flow may be based on a calculated emission footprint at stages along a production process. Analytics user interfaces associated with the calculated emission flows may provide insight into the highest-emission drivers along the production chain in connection with a technical report.
G06F 30/28 - Design optimisation, verification or simulation using fluid dynamics, e.g. using Navier-Stokes equations or computational fluid dynamics [CFD]
Disclosed herein are various embodiments of a location data processing system. An embodiment operates by configuring a column of a table to store data across a plurality of different coordinate systems. The data to be stored in the configured column is received. The received data is divided into a plurality of fragments, including a first fragment comprising a plurality of data entries. A first data entry in the first fragment includes a coordinate specification including metadata indicating how to evaluate corresponding data of a first coordinate system represented by the first data entry. A query for data from the first fragment is received. The plurality of data entries of the first fragment are evaluated based on the coordinate specification to identify data that satisfies the query. The data is returned responsive to the query.
A61M 5/32 - Syringes - Details - Details of needles pertaining to their connection with syringe or hub; Accessories for bringing the needle into, or holding the needle on, the body; Devices for protection of needles
Techniques for implementing secure tenant-based chaos experiments using certificates are disclosed. In some embodiments, a computer system may receive an indication of a scope of execution for a chaos experiment from a tenant of a multitenancy environment, identify a public key from a certificate chain based on the received indication of the scope of execution, and transmit the identified public key to the tenant. Next, the computer system may then receive an encrypted version of the chaos experiment from the tenant, where the encrypted version of the chaos experiment has been encrypted with the identified public key, and then transmit the encrypted version of the chaos experiment to one or more software agents.
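The key-selection and encryption hand-off above can be sketched as follows (a toy stand-in: the certificate chain is a list of dicts and "encryption" merely tags the payload with its key; real public-key cryptography is assumed but not shown, and all names are illustrative):

```python
def identify_public_key(cert_chain, scope):
    # Pick the public key whose certificate matches the requested
    # scope of execution for the chaos experiment.
    for cert in cert_chain:
        if cert["scope"] == scope:
            return cert["public_key"]
    raise KeyError(f"no certificate for scope {scope!r}")

def encrypt_experiment(experiment, public_key):
    # Toy "encryption": tag the payload with the key it was encrypted under.
    return {"key_id": public_key, "payload": experiment}

chain = [{"scope": "tenant-a", "public_key": "pk-a"},
         {"scope": "tenant-b", "public_key": "pk-b"}]
pk = identify_public_key(chain, "tenant-a")
envelope = encrypt_experiment({"action": "terminate-instance"}, pk)
```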
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
47.
INDEPENDENTLY LOADING RELATED DATA INTO DATA STORAGE
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program receives a set of data for a record in a first table. The set of data comprises a set of values for a set of attributes. The first table comprises a first set of columns. A first column in the first set of columns in the first table is configured to refer to a second column in a second set of columns in a second table. The program further generates the record in the first table. The program also generates a value for the first column in the first set of columns in the first table based on a subset of the set of values for a subset of the set of attributes. The program further stores the value in the first column in the first set of columns of the record.
Some embodiments provide a program that receives a set of data for a first record in a first table. The set of data comprises a set of values for a set of attributes. In a data loading process configured to load a subset of the set of data into a subset of a first set of columns in the first table, the program determines that a first column in a first set of columns does not belong in the subset of the first set of columns. The program generates the first record in the first table. The program generates a value for the first column in the first set of columns that refers to a second record in a second table configured to represent a defined type of record. The program stores the value in the first column in the first set of columns of the first record.
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program receives an image of a document, the document comprising a set of text. The program further provides the set of text to a machine learning model configured to determine, based on the set of text, a plurality of probabilities for a plurality of defined types of documents. Based on the plurality of probabilities for the plurality of defined types of documents, the program also determines a type of the document from the plurality of defined types of documents.
Methods, systems, and computer-readable storage media for receiving a key and a value of a data object, determining a first identifier and a second identifier based on the key, defining an entry object including the first identifier, the second identifier, and the value, and storing the entry object in a hashmap by: determining a first value of a first index based on the first identifier, determining a second value of a second index to provide a first value and second value pair that defines a first location within the hashmap storing the first identifier, determining a third value of a third index for the first value and second value pair, where the first value, the second value, and the third value define a second location within the hashmap storing the second identifier, and storing the value at a third location within the hashmap.
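One way to read the multi-index scheme above (a loose sketch under assumed semantics; the exact index arithmetic is not specified by the abstract): derive two identifiers from the key, use the first to pick a top-level slot, and use the identifier pair to locate the value within that slot.

```python
def derive_ids(key):
    # Split one hash of the key into a first and a second identifier.
    h = hash(key) & 0xFFFFFFFF
    return h & 0xFFFF, h >> 16

def put(hashmap, key, value, buckets=16):
    id1, id2 = derive_ids(key)
    slot = id1 % buckets                   # first index value, from id1
    bucket = hashmap.setdefault(slot, {})  # location within the hashmap
    bucket[(id1, id2)] = value             # both identifiers address the value

def get(hashmap, key, buckets=16):
    id1, id2 = derive_ids(key)
    return hashmap[id1 % buckets][(id1, id2)]

hm = {}
put(hm, "customer:42", "Jane")
```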
In some implementations, there is provided a method including selecting, based on a usage of computing resources, a download speed for downloading of an available one or more upgrades to one or more computing systems, and downloading, using the selected download speed, the available one or more upgrades to the one or more computing systems; determining an installation priority for installation of the available one or more upgrades to the one or more computing systems, and installing the available one or more upgrades to the one or more computing systems in accordance with the determined installation priority; and determining a time for switching one or more software applications to the installed one or more upgrades, and switching, based on the determined time, the one or more software applications to the installed one or more upgrades. Related systems, methods, and articles of manufacture are also disclosed.
Various examples are directed to systems and methods for obscuring directional data to improve privacy. An example system may access a first unit of directional data. The example system may select a sampled value from an angular cumulative distribution function (CDF) of a random distribution. The example system may use the selected sampled value to generate a random sample from the random distribution and apply the random sample to the first unit of directional data to generate a first obscured unit of directional data.
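A toy illustration of the mechanism (the distribution is an assumption; the disclosure does not fix one): sample u uniformly, push it through the inverse of an angular CDF, here a uniform distribution on [-π/4, π/4) standing in for the random distribution, and rotate the original direction by the resulting noise angle.

```python
import math
import random

def inverse_angular_cdf(u, width=math.pi / 4):
    # Inverse CDF of a uniform angular distribution on [-width, width).
    return (2 * u - 1) * width

def obscure(angle, rng=random.random):
    # Rotate the directional datum by a noise angle sampled via the CDF.
    noise = inverse_angular_cdf(rng())
    return (angle + noise) % (2 * math.pi)

obscured = obscure(1.0, rng=lambda: 0.5)   # u = 0.5 maps to zero noise
```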
Explanation of an analytical result is afforded to a user by populating a template with the result of searching homogenous clusters. During a preliminary phase, configuration changes are asynchronously fetched from services of an analytic application, and then grouped into homogenous clusters. Then, during a synchronous phase, a request to explain a particular analytical result is received from the application. Based upon content of the explanation request, the clusters are traversed in order to create a final path. A template comprising an explanation note with blanks is selected from a template store and then populated with data from the final path. The populated template and the final path are stored together as an outcome. The outcome is then processed according to a challenge function, with the resulting challenged outcome communicated back to the application to provide the user with an explanation of the analytical result.
Various examples are directed to systems and methods for training a machine learning model. A computing system may access a bias-cleared model trained according to at least one fairness constraint. The computing system may execute at least a first training epoch for a bias-cleared private model. Executing the first training epoch may comprise applying an explainer model to first bias-cleared private model output data to generate first bias-cleared private model explanation data. Executing the first training epoch may also comprise accessing first bias-cleared model explanation data describing first bias-cleared model output data generated by the bias-cleared model and determining a first explanation loss using the first bias-cleared private model explanation data and the first bias-cleared model explanation data. Executing the first training epoch may further comprise determining first noise data to be added to the bias-cleared private model based at least in part on a privacy budget.
Data sources provide access to data. The data stored by the data source may be transformed before use by an application. Different data sources support different transformations. A data agent sidecar for the application accepts work orders from the application and submits work orders to data sources. A work order identifies a data source from which data is requested. The work order optionally includes one or more transformations to be applied to the data from the data source. The data agent sidecar determines, for the data source from which data is requested, which transformations can be performed by the data source and which transformations are not supported by the data source. The data transformations that can be performed by the data source are included in the work order to the data source. The remaining data transformations are performed by the data agent sidecar.
Provided are systems and methods for creating histograms with distinct value sketches integrated therein and for query processing based on the histograms with distinct value sketches. In one example, the method may include storing a histogram that comprises a representation of a bucket of data from a database and that includes a distinct value sketch with a distinct value attribute that identifies an estimated number of distinct values within the bucket of data, receiving a database query, generating a query execution plan for the database query based on the distinct value attribute of the bucket within the distinct value sketch embedded within the histogram, and executing the database query on the bucket of data from the database based on the generated query execution plan.
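The bucket-plus-sketch structure might look like the following sketch (illustrative; an exact `set` stands in for a probabilistic distinct-value sketch such as HyperLogLog, and the planner rule is invented for the example):

```python
class Bucket:
    """Histogram bucket with an embedded distinct-value sketch."""
    def __init__(self, values):
        self.count = len(values)
        self.sketch = set(values)          # exact set stands in for a sketch

    @property
    def distinct(self):
        return len(self.sketch)            # estimated number of distinct values

def choose_plan(bucket):
    # Invented planner rule: hash aggregation when values repeat heavily.
    return ("hash_aggregate" if bucket.distinct < bucket.count / 2
            else "sort_aggregate")

bucket = Bucket([1, 1, 2, 2, 2, 3])
```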
The present disclosure relates to computer-implemented methods, software, and systems for calculating an available balance amount for allocation to a user. A request from a user for an advance payment associated with a requested period of time is received by a platform service. The platform service is integrated with a plurality of systems storing data for employees of an enterprise. The requested period of time includes working days of the user. The user is identified as an employee of the enterprise in at least one of the plurality of systems. In response to the received request, an available balance amount that can be allocated to the user for the requested period of time is calculated. Calculating the available balance amount comprises determining data associated with the requested period of time to provide a base for calculating the available balance amount.
G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
G06Q 10/067 - Enterprise or organisation modelling
G06Q 40/02 - Banking, e.g. interest calculation or account maintenance
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
A method for exposing artificial intelligence content as a cloud service may include onboarding, by a service broker of a core platform hosting an artificial intelligence (AI) resource, a first service provider tenant providing the artificial intelligence resource. The onboarding of the first service provider tenant includes creating, at the core platform, a function specific service broker associated with the artificial intelligence resource. The function specific service broker may then onboard one or more service consumer tenants for accessing the artificial intelligence resource associated with the first provider tenant. Moreover, in response to the one or more service consumer tenants accessing the artificial intelligence resource, the function specific service broker may authenticate the one or more service consumer tenants and meter a usage of the artificial intelligence resource by the one or more service consumer tenants. Related methods and computer program products are also disclosed.
In some implementations, there is provided a method including receiving a request to provide a local database system with smart data access to a database table stored at a remote database system; executing, by the local database system, a series of one or more fetches, each of which obtains a chunk of the database table stored at the remote database system, such that a corresponding result set for each fetch causes the remote database system to fetch and materialize a corresponding chunk of the database table rather than the database table in its entirety; and reading, by the local database system, a first chunk obtained from the database table stored at the remote database system to form, at least in part, a local copy at the local database system.
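The chunked fetch loop reduces to something like this sketch (lists stand in for the remote table and the per-fetch result sets; no actual database federation is shown):

```python
def fetch_chunks(remote_rows, chunk_size):
    # Each yielded slice models one fetch whose result set the remote side
    # materializes, instead of materializing the whole table at once.
    for start in range(0, len(remote_rows), chunk_size):
        yield remote_rows[start:start + chunk_size]

local_copy = []
for chunk in fetch_chunks(list(range(10)), chunk_size=4):
    local_copy.extend(chunk)               # the local side reads chunk by chunk
```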
Computer-readable media, methods, and systems are disclosed for applying machine learning mechanisms to classify and validate documents based on expense rule sets and external data validation services. Document images associated with expenses are received in connection with a reimbursable event. For each received document image, data associated with the received document image is transmitted to an optical character recognition (OCR) image processor that can recognize contents and associated coordinates. OCR data is received and transmitted to a text tokenizer. Tokenized text is received corresponding to expense details, and the tokenized text and coordinates are sent to a text feature generator. Text feature vectors are received and transmitted to a document classifier, and a document classification is received. Document fields are extracted and, based thereon, a document is validated and a corresponding reimbursement instruction generated.
Computer-readable media, methods, and systems are disclosed for automatic generation of dynamic application trace logs associated with a running application. A log viewer presents application log entries associated with an application execution log having been generated in connection with a previous execution of the running application. The application execution log is analyzed to identify application execution log context descriptors. The application execution log context descriptors are extracted from the application execution log. The application execution log context descriptors are transmitted to the running application. Tracing templates that match each of the one or more application execution log context descriptors, each having an associated context relevance score, are received from the running application. Finally, the log viewer displays the tracing templates based on the associated context relevance score and starts a trace based on a selected tracing template.
A computer implemented method can receive an event object published by a source entity, parse the event object to retrieve an event message pertaining to an event awaiting processing and one or more target entities authorized to process the event, identify one or more receiving entities having subscribed to the event object from the one or more target entities, create a message queue connected with one or more message routes that directly link the source entity to the respective one or more receiving entities, and post the event message to the message queue.
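A minimal sketch of the parse-and-route flow above (field names such as `targets` and `type` are assumptions, not taken from the disclosure):

```python
def route_event(event, subscriptions):
    # Keep only authorized targets that actually subscribe to this event type.
    receivers = [t for t in event["targets"]
                 if event["type"] in subscriptions.get(t, set())]
    # The queue is connected to one direct route per receiving entity.
    return {"message": event["message"], "routes": receivers}

subscriptions = {"svc-a": {"order.created"}, "svc-b": {"order.deleted"}}
queue = route_event({"type": "order.created",
                     "message": "order 7 created",
                     "targets": ["svc-a", "svc-b"]}, subscriptions)
```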
In order to address the technical problems encountered with tenant-specific connection pools and global connection pools, in an example embodiment, an efficient connection pool is provided, which restricts the total number of connections per application runtime instance (as with the global connection pool) but at the same time groups and maintains the connections at the tenant level, using tenant-specific sub-pools.
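A compact sketch of the idea (the class and its names are illustrative; a real pool would also handle timeouts, eviction, and thread safety): a single counter enforces the per-instance global limit while idle connections are grouped into tenant-specific sub-pools.

```python
class TenantAwarePool:
    """Global cap per runtime instance; idle connections kept per tenant."""
    def __init__(self, global_limit):
        self.global_limit = global_limit
        self.open_count = 0
        self.sub_pools = {}                # tenant id -> idle connections

    def acquire(self, tenant):
        idle = self.sub_pools.setdefault(tenant, [])
        if idle:
            return idle.pop()              # reuse within the tenant sub-pool
        if self.open_count >= self.global_limit:
            raise RuntimeError("global connection limit reached")
        self.open_count += 1               # counted against the global cap
        return f"conn-{tenant}-{self.open_count}"

    def release(self, tenant, conn):
        self.sub_pools[tenant].append(conn)

pool = TenantAwarePool(global_limit=2)
c1 = pool.acquire("tenant-a")
c2 = pool.acquire("tenant-b")
```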
A framework provides a detailed explanation regarding specific aspects of a (complex) calculation produced by an application (e.g., an analytical application). An explainability engine receives a request for explanation of the calculation. The explainability engine traverses homogenous data clusters according to the request, in order to produce a final path. The final path is used to select and then populate a template comprising explanation note(s). The outcome (comprising the final path and the template) is processed with a ruleset according to a covariance (COV) function in order to provide a first intermediate outcome. The first intermediate outcome is then processed with a second input according to a correlation (COR) function to provide a second intermediate outcome. The second intermediate outcome is processed according to a challenge function to provide a challenged outcome, and feedback (e.g., reward or penalization) to the ruleset. The challenged outcome provides detailed explanation to the user.
The present disclosure involves systems, software, and computer implemented methods for intelligent shelfware prediction and system adoption assistance. One example method includes identifying historical shelfware information for software products for customers of a software provider. The historical shelfware information is used to train machine learning models to generate a prediction that indicates a likelihood that a particular product for a particular customer will turn into shelfware. A request is received to generate a shelfware prediction for a first software product for a first customer of the software provider. A first trained machine learning model corresponding to the first software product and the first customer is identified. A first shelfware risk prediction is received from the first trained machine learning model that indicates a likelihood that the first software product turns into shelfware for the first customer. The first shelfware risk prediction is provided in response to the request.
Efficient transport of content packages comprising multiple objects, is achieved utilizing lineage analysis. User selection of an object in a package, triggers a dependency request to a landscape containing the object. The landscape returns a dependencies result, which includes dependencies between the selected object and others present within the landscape. The dependencies result is used to create a dependents tree structure. Based upon the dependents tree and the originally selected object, a lineage view is created and afforded to the user. Example lineage views may comprise spider charts with the selected object at the center. The user may further explore object dependencies by interacting with the lineage view to create other lineage views. Providing intuitive visualization of object dependencies, aids in efficient package transport (e.g., by allowing a user to identify dependent object(s) missing from the package, and/or particular objects having many dependent objects that also require transport).
Disclosed herein are system, method, and computer program product embodiments for allowing a software application subject to restrictions on table or column names to work with a database management system (DBMS). An embodiment operates by determining that a table name or one or more column names of a table used in the DBMS violate one or more predefined rules for the software application. In response to the determining, the embodiment then creates a view for the table such that a view name of the view or the one or more column names of the view satisfy the one or more predefined rules for the software application.
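The view workaround can be sketched as below (the naming rule, the sanitizer, and the `_V` suffix are all invented for illustration; the actual predefined rules are application-specific):

```python
import re

RULE = re.compile(r"^[A-Z][A-Z0-9_]*$")    # assumed rule: upper snake case

def sanitize(name):
    # Replace disallowed characters and normalize case.
    return re.sub(r"[^A-Za-z0-9_]", "_", name).upper()

def compliant_view(table, columns):
    # Emit a view with compliant aliases only when a name violates the rule.
    if RULE.match(table) and all(RULE.match(c) for c in columns):
        return None                        # nothing to fix
    view = sanitize(table) + "_V"
    aliases = ", ".join(f'"{c}" AS {sanitize(c)}' for c in columns)
    return f'CREATE VIEW {view} AS SELECT {aliases} FROM "{table}"'

sql = compliant_view("order-items", ["item id", "QTY"])
```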
Systems and methods provide reception of an identifier of a machine learning classification model and a prediction generated by the machine learning classification model, identification of model configuration data associated with the machine learning classification model, modification of the prediction based on the model configuration data to generate an enhanced prediction comprising calibrated probabilities, and returning of the enhanced prediction.
Sustainability reference content from a variety of sources can be imported into a canonical format for a variety of uses. Product footprint analysis can be performed by accessing the imported sustainability reference content. The canonical format can support a variety of features relating to normalization of units, validity time window, geographical indications, and content quality. API access can be provided to facilitate content update. Automated data import from lifecycle assessment content providers can be supported along with manual input of data from arbitrary sources such as users, vendors, suppliers, or the like. Scope can go beyond footprint analysis to environmental health compliance and other use cases.
Computer-readable media, methods, and systems are disclosed for assisting users in gaining and granting authorization roles by automating some or all of the processes. A method can include creating an authorization request for a first user, retrieving contextual information from a repository, generating suggested authorization roles for the first user based on the contextual information using a peer-based machine learning recommendation system, presenting the suggested authorization roles to the first user in a user interface, selecting at least one authorization role, and submitting the authorization request to an access management system to provide the first user with targeted access to at least one requested system.
A two phase move technique for moving groups of tables may reduce cross-host communication and the length of table locks. A group including a first table and a second table may be moved to the destination host system. This is done by creating, on the destination host, a third table replicating the first table, a fourth table replicating the second table, and replicas of the other tables in the group. The tables in the group are not locked against modifications during the creation of the replica tables. After the creation of the replicas, roles of the original tables and the created tables are switched such that the original tables are set to the replica role and the created tables stored on the destination are set to the source role. The original tables are dropped after the switching of the roles.
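In outline, the two phases look like this sketch (dicts stand in for hosts and tables; real replication, locking, and atomic role switching are elided):

```python
def move_group(tables, source_host, dest_host):
    # Phase 1: create replicas on the destination; originals stay unlocked.
    for name in tables:
        dest_host[name] = {"rows": list(source_host[name]["rows"]),
                           "role": "replica"}
    # Phase 2: switch roles, then drop the now-replica originals.
    for name in tables:
        source_host[name]["role"] = "replica"
        dest_host[name]["role"] = "source"
    for name in tables:
        del source_host[name]

source = {"t1": {"rows": [1, 2], "role": "source"}}
destination = {}
move_group(["t1"], source, destination)
```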
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
72.
Tenant-specific extensions in distributed applications using an application model extension engine
The present disclosure involves systems, software, and computer implemented methods for tenant-specific extensions in distributed applications using an application model extension engine. One example method includes receiving a request from a customer of a distributed multitenant application to add an extension field to a document type used by the application. An activation command is posted to an asynchronous message topic that requests each microservice of the application to activate the extension field to support the extension field for the customer. Replies to the activation command are received from the microservices that indicate whether respective microservices have successfully activated the extension field. In response to determining that each microservice has successfully activated the extension field for the customer, an activation success command is sent to the asynchronous message topic that informs the microservices that the extension field can be used for the customer in the distributed multitenant application.
In some implementations, there is provided initiating an extract transform and load from a first system to a second system; in response to the initiating, performing the extract transform and load by extracting input data at the first system, transforming the input data using one or more rules to form generated data, and loading the generated data into the second system; and during at least a portion of the extract transform and load, storing a data object including a snapshot and a log, wherein the snapshot includes the input data and the log indicates which of the one or more rules was applied to a row in the input data, and storing an aggregation table including a row identifier identifying a row of the generated data and further including a source identifying which rows in the input data were used to form the row of the generated data.
G06F 16/25 - Integrating or interfacing systems involving database management systems
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
74.
SCALABLE ENTITY MATCHING WITH FILTERING USING LEARNED EMBEDDINGS AND APPROXIMATE NEAREST NEIGHBOURHOOD SEARCH
Methods, systems, and computer-readable storage media for a machine learning (ML) system for matching a query entity to one or more target entities, the ML system reducing a number of query-target entity pairs from consideration as potential matches during inference.
The present disclosure involves systems, software, and computer implemented methods for concurrent duplicated sub process control in a workflow engine. One example method includes executing a sub process of a workflow process using an instance of a node that represents the sub process. After executing the sub process, a determination is made as to whether the node is a join node that has multiple direct predecessor nodes in a graph of the workflow process. If the node is a join node, dependent nodes of the join node are identified for which traversal of the graph from a dependent node passes through the join node. A set of active dependent node instances of the dependent nodes are identified and a determination is made as to whether to wait for completion of any particular dependent node instances or if workflow execution can continue beyond the join node.
In an example embodiment a mechanism for consumer group versioning is introduced. Here, each application runtime provides a version for any consumer group during its deployment and keeps increasing the version whenever there is an enhancement or bug fix. Thus, both the application and the consumer group will have a version. Once it is recognized that a consumer group assigned to partitions in a topic has an outdated consumer group version number (i.e., a consumer group with the same name/application but a later consumer group version number has been registered with the message broker), the old consumer group is disconnected immediately. This allows the message broker to immediately assign partitions to the consumers in the newer consumer group, thus avoiding the aforementioned delays and associated technical problems.
A trained machine learning model can determine whether a portion of programming code contains a security event. The determination can be included in a security assessment. The category of security event can also be determined. During training, observed portions of programming code labeled according to whether they contain a security event and the category of security event can be tokenized. Vectors can be generated from the tokens. The machine learning model can generate a new vector for an incoming portion of programming code and compare against combined vectors for the observed portions of programming code. A security assessment can indicate whether the incoming portion of programming code contains a security event, the category of the event, or both. For training purposes, security logging statements can be removed from training code.
A method includes receiving by a chat server a client request from a client device communicating with the chat server. The chat server generates a server response that is transmitted to the client device. A nudge repository is searched for a nudge action based on a set of tokens generated from at least a portion of the client request. In response to finding the nudge action, a user cohort to receive the nudge action is determined. A nudge request including the nudge action and the user cohort is generated and transmitted to the chat server. The nudge action is deployed from the chat server to one or more client devices associated with one or more user identifiers in the user cohort. The nudge action is rendered as a nudge at the one or more client devices.
Systematic ordering of data and information in computer databases for the internet relating to the development, creating, programming, implementing, performance, production, distribution, sale, application, use, function, handling, modification, maintenance, rental, updating, design and outsourcing of computer programs and computer software; systemization of data and information in computer databases for the internet in relation to the creation, development and design of computer programs, for analysis and direct data processing applications, and apparatus therefor; conducting, arranging and organizing trade shows and trade fairs for commercial and advertising purposes; business consultation.
80.
DATA COMPRESSION FOR COLUMNAR DATABASES INTO ARBITRARILY-SIZED PERSISTENT PAGES
A method for compressing columnar data may include generating, for a data column included in a data chunk, a dictionary enumerating, in a sorted order, a first set of unique values included in the data column. A compression technique for generating a compressed representation of the data column having a fewest quantity of bytes may be identified based at least on the dictionary. The compression technique may include a dictionary compression applying the dictionary and/or another compression technique. A compressed data chunk may be generated by applying the compression technique to compress the data column included in the data chunk. The compressed data chunk may be stored at a database in a variable-size persistent page whose size is allocated based on the size of the compressed representation of the data column. Related systems and articles of manufacture are also provided.
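A rough cost comparison in the spirit of the described selection (the byte-cost formulas are invented approximations, not the patented encodings):

```python
import math

def dictionary_size_bytes(column):
    # Dictionary cost: bit-packed value IDs plus the dictionary entries.
    dictionary = sorted(set(column))
    bits = max(1, math.ceil(math.log2(len(dictionary))))
    id_bytes = math.ceil(len(column) * bits / 8)
    entry_bytes = sum(len(str(v)) for v in dictionary)
    return id_bytes + entry_bytes

def plain_size_bytes(column):
    return sum(len(str(v)) for v in column)

def choose_technique(column):
    # Pick whichever representation yields the fewest bytes.
    d, p = dictionary_size_bytes(column), plain_size_bytes(column)
    return ("dictionary", d) if d < p else ("plain", p)

technique, size = choose_technique(["red"] * 100 + ["blue"] * 100)
```

With many repeated values the dictionary wins easily, which is the case the disclosed selection is designed to detect.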
A specialized topic in a message broker that is devoted to infrastructure messages is used. This infrastructure topic is then subscribed to by various applications, such as application microservices, including instances of applications running on different versions of the same application. Subscribing message consumers can then write a message to the infrastructure topic, with the message including the application identification and version number. Other message consumers subscribed to the infrastructure topic will receive notifications of this posted message. When a message consumer receives such a message, it checks to see if the message comes from an application that has the same application identification as itself. Then it checks to see if the version number included in the message is greater than its own version number. If so, then the application version of the recipient message consumer has been superseded and the application disconnects itself from the message broker.
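The version-supersession check can be sketched in Python; the broker itself is elided, and the application names and message class below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InfrastructureMessage:
    """A message posted to the infrastructure topic."""
    app_id: str
    version: int

class MessageConsumer:
    """A consumer subscribed to the shared infrastructure topic."""
    def __init__(self, app_id: str, version: int):
        self.app_id = app_id
        self.version = version
        self.connected = True

    def announce(self) -> InfrastructureMessage:
        """Post this instance's application identification and version number."""
        return InfrastructureMessage(self.app_id, self.version)

    def on_message(self, msg: InfrastructureMessage) -> None:
        """Disconnect if a newer version of the same application has announced itself."""
        if msg.app_id == self.app_id and msg.version > self.version:
            self.connected = False  # superseded: leave the message broker
```

When a new version announces itself, only older instances of the same application disconnect; consumers for other applications ignore the message.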
An example method comprises forming a communication link between a software test orchestration tool and a testing dashboard; receiving from the software test orchestration tool an indication of software test results at the application level of granularity, wherein the results indicate reliability status for a plurality of software applications; and calculating a reliability metric based on the indication of software test results.
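One plausible form of the reliability metric is the fraction of applications whose tests pass; this sketch assumes a simple pass/fail status per application, which is an illustrative choice rather than the method's defined metric.

```python
def reliability_metric(results: dict) -> float:
    """Reliability metric over application-level test results.

    `results` maps application name -> "pass" or "fail", as might be
    received from a software test orchestration tool.
    """
    if not results:
        return 0.0
    passing = sum(1 for status in results.values() if status == "pass")
    return passing / len(results)
```

A dashboard receiving `{"app-a": "pass", "app-b": "fail", "app-c": "pass", "app-d": "pass"}` would report a reliability of 0.75.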
The source code of an HTML form can be analyzed to derive parameter rules that are subsequently enforced when apparent content of the HTML form is received. Such parameter rules can be drawn from client-side restrictions that are extracted from the HTML source, which are then enforced to prevent content violating the rules from reaching the backend. A proxy can sit between the application and the apparent browser. Dynamically generated HTML can be supported via a headless browser that mirrors HTML that would be present at a browser. The techniques are useful for preventing HTML form-based attacks and identifying clear cases of malicious HTML form requests.
The present disclosure involves systems, software, and computer implemented methods for identifying document generators by color footprints. An example method includes receiving a request to classify a first document. A document footprint is generated for the first document that includes a set of most frequently occurring color values in the first document. A classification for the first document is determined as either generated-by-the-document-generator or not-generated-by-the-document-generator based on comparing the document footprint for the first document to a document generator footprint. The document generator footprint includes a set of common color values that occur in a set of training documents for the document generator. The classification for the first document is provided in response to the request.
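The footprint comparison can be sketched as below; the footprint size, the overlap-ratio similarity measure, and the threshold are assumptions for illustration, not the disclosed comparison method.

```python
from collections import Counter
from typing import List, Set

def document_footprint(color_values: List[str], top_n: int = 3) -> Set[str]:
    """Set of the most frequently occurring color values in a document."""
    return {color for color, _ in Counter(color_values).most_common(top_n)}

def classify(doc_footprint: Set[str], generator_footprint: Set[str],
             threshold: float = 0.5) -> str:
    """Classify by overlap with the generator's common color values."""
    overlap = len(doc_footprint & generator_footprint) / max(1, len(generator_footprint))
    if overlap >= threshold:
        return "generated-by-the-document-generator"
    return "not-generated-by-the-document-generator"
```

The generator footprint would come from a set of training documents known to be produced by the generator; a new document whose dominant colors largely coincide with that set is classified as generated by it.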
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program receives, from a client device, a request for information associated with a category. In response to the request, the program further accesses a storage to retrieve a first value associated with the category. The program also determines a set of values associated with the category based on a plurality of transactions. The program further determines an optimization level value associated with the category. The program also determines a second value associated with the category based on the first value, the set of values, and the optimization level value. The program further provides, by an application operating on the client device, a graphical user interface (GUI) to the client device, the GUI comprising the second value.
Disclosed herein are system, method, and computer program product embodiments for performing deterministic execution of background jobs in a load-balanced system. An embodiment operates by receiving, at a work server in a load-balanced system, job submission code from a client connected to the work server, wherein the job submission code performs a background job for the client. The embodiment then executes, at the work server, the job submission code. The execution of the job submission code obtains a name of the work server executing the job submission code, maps the name of the work server to a logical server name, and submits the background job for background processing using a job processing function that executes the background job on the logical server name.
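The name-mapping step that makes execution deterministic can be sketched as follows; the server names, the mapping table, and the recording list are hypothetical placeholders for the real scheduler.

```python
# Hypothetical mapping from physical work-server names to stable logical names,
# so a background job lands on the same logical server no matter which
# load-balanced worker happened to run the submission code.
LOGICAL_SERVER_MAP = {
    "worker-a.internal": "batch-logical-1",
    "worker-b.internal": "batch-logical-1",
    "worker-c.internal": "batch-logical-2",
}

submitted_jobs = []  # stands in for the real job processing function

def job_submission_code(job_name: str, physical_server_name: str) -> str:
    """Map the executing server's name to a logical name and submit the job there."""
    logical_name = LOGICAL_SERVER_MAP.get(physical_server_name, physical_server_name)
    submitted_jobs.append((job_name, logical_name))
    return logical_name
```

Because `worker-a` and `worker-b` map to the same logical name, the background job is scheduled identically regardless of which physical worker the load balancer picked.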
Embodiments may facilitate event processing for an ABAP platform. A business object data store may include a RAP model, including a behavior definition, for a business object. A framework may automatically transform the behavior definition of the RAP model into a producer event via an event binding and a cloud event standardized format. Information about the producer event may then be passed to an ABAP application associated with a pre-configured destination at an enterprise business technology platform. In some embodiments, a standalone API enterprise hub data store may contain an event specification. An ABAP development tenant of a business technology platform may automatically parse the event specification and translate the parsed information into high-level programming language structures that reflect an event type at runtime. An event consumption model may then be generated based on the event type.
A method of implementing a scenario library includes generating a first sourcing event scenario using the scenario library and based on a first request received from a first client device. The method also includes storing the first sourcing event scenario as one of a plurality of sourcing event scenarios at the scenario library. The method further includes modifying a sourcing event template for generating a sourcing event by at least adding a reference to the first sourcing event scenario to the sourcing event template. The method further includes generating the sourcing event using the modified sourcing event template and based on a second request from one of a plurality of client devices. The method further includes displaying, in response to the second request, a possible combination of suppliers generated for the sourcing event. Related systems and articles of manufacture are provided.
Systems and methods provide identification of a first configuration specifying a first column of a first data source, acquisition, based on the first configuration, of a first sequence of values stored in consecutive rows of the first column, and storage of the first sequence of values in a storage device in association with an identifier of the first data source and the first column.
A bipartite graph is created that represents related metadata and datasets. The graph comprises disjoint and independent sets D (datasets) and M (metadata entries). For each dataset in the collection that has corresponding metadata, a relation is created in the graph between the node in D for the dataset and the node in M for the metadata. A vector embedding is generated for each dataset and metadata entry. New relations are added to the graph to relate datasets having similar embeddings. The graph is enriched by the new relations, enabling new kinds of search queries as well as recommendations and suggestions.
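The graph construction and embedding-based enrichment can be sketched in plain Python; the cosine-similarity measure and the threshold are assumptions standing in for whatever similarity the system actually uses, and the dataset names are hypothetical.

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two non-zero embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

class BipartiteCatalog:
    """Bipartite graph over disjoint sets D (datasets) and M (metadata entries)."""
    def __init__(self):
        self.edges = set()       # (dataset, metadata) relations
        self.similar = set()     # (dataset, dataset) enrichment relations
        self.embeddings = {}     # node -> vector embedding

    def relate(self, dataset: str, metadata: str):
        """Relate a dataset node in D to its metadata node in M."""
        self.edges.add((dataset, metadata))

    def embed(self, node: str, vector):
        self.embeddings[node] = vector

    def enrich(self, threshold: float = 0.9):
        """Add dataset-dataset relations for pairs with similar embeddings."""
        datasets = sorted({d for d, _ in self.edges})
        for a, b in combinations(datasets, 2):
            if cosine(self.embeddings[a], self.embeddings[b]) >= threshold:
                self.similar.add((a, b))
```

After enrichment, a search over the graph can follow the new dataset-dataset edges to surface recommendations ("datasets similar to this one") that the metadata relations alone would not reveal.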
Techniques and solutions are provided for updating or augmenting consolidated data that is produced using base data. The consolidated data can include data that is aggregated by various grouping criteria. After a set of consolidated data is determined, the base data may change, one or more rules used to calculate the consolidated data may change, or it may be desired to see data that is more granular than that included in the consolidated data. After consolidated data is provided to a user, the user issues a data augmentation request. The data augmentation request causes the base data, which may have been updated, to be processed to provide updated data, where the processing includes grouping operations used in producing the consolidated data. The updated data is provided to a client in response to the data augmentation request.
Computer-readable media, methods, and systems are disclosed for translating a user interface from a source language to a target language in near real-time. Specifically, disclosed is a method for determining a plurality of user interface (UI) elements in a user interface of a web application, sending a request to translate the plurality of UI elements and receiving first values in the target language from the translation component, sending a request to retrieve at least one translation scenario and receiving second values in the target language from a central server, allowing a user to provide a third value for each of the plurality of UI elements, allowing the user to select the values for the plurality of UI elements, and rendering the user interface with the selected values for the plurality of UI elements.
An application executing on a pod may generate a content request for a particular version of content. The application sends a request including an identifier of the version of the content to a repository in a cluster of pods. The repository determines, using the identifier, whether the version of the content is stored in the cluster or stored at a database in a different cluster. The application receives a response from the repository that may indicate that the version of the content is stored in the cluster and the response may include an identifier of a content pod storing the version of the content. The application sends the content request to the content pod storing the version of the content and may receive the version of the content without querying the database for the version of the content.
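The lookup flow can be sketched as below; the repository index, pod names, and version identifiers are hypothetical, and the remote database is reduced to a dictionary for illustration.

```python
class Repository:
    """Cluster-local index mapping content version identifiers to content pods."""
    def __init__(self, local_index: dict):
        self.local_index = local_index   # version id -> content pod name

    def locate(self, version_id: str) -> dict:
        """Report whether the version is stored in the cluster, and on which pod."""
        if version_id in self.local_index:
            return {"in_cluster": True, "content_pod": self.local_index[version_id]}
        return {"in_cluster": False, "content_pod": None}

def fetch_content(repo: Repository, pods: dict, db: dict, version_id: str):
    """Prefer the in-cluster content pod; fall back to the database on a miss."""
    location = repo.locate(version_id)
    if location["in_cluster"]:
        return pods[location["content_pod"]][version_id]
    return db[version_id]  # only reached when the version is not in the cluster
```

When the repository reports an in-cluster hit, the application retrieves the content directly from the named pod and never queries the database in the other cluster.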
Computer-readable media, methods, and systems are disclosed for validating data associated with schemas. A user defines the object model of at least one asset and a first schema is generated in accordance with the defined object model, and a unique fingerprint is generated. Data is collected from one or more devices in accordance with the object model. The collected data is serialized, and a second schema is generated. The second schema is ordered in accordance with the first schema and a unique fingerprint is generated. The fingerprint of the first schema is compared to the fingerprint of the second schema to provide an efficient review process for determining whether the schemas are equal, and the associated data may be validated. A fingerprint cache may be updated with fingerprints associated with a plurality of schemas, as well as version history of each schema, to provide an efficient review process.
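The fingerprint comparison can be sketched with a canonicalize-then-hash approach; SHA-256 over a key-sorted JSON serialization is an assumed fingerprint function chosen for illustration, not the disclosed one.

```python
import hashlib
import json

def fingerprint(schema: dict) -> str:
    """Unique fingerprint of a schema: hash of its canonical (key-sorted) form.

    Ordering the schema before hashing means two schemas that differ only in
    field order produce the same fingerprint, so equality can be decided by
    comparing two short digests instead of walking both schemas.
    """
    canonical = json.dumps(schema, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

A fingerprint cache keyed by schema name, with one digest per version, would then let the review process answer "are these schemas equal?" with a single string comparison.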
A system includes detection of a first allocation of a first memory size in an object store for storage of a first logical page, in response to detection of the first allocation, incrementing a count associated with the first memory size by a first data structure associating a respective count with each of a plurality of memory sizes, detection of a first deallocation of the first logical page, in response to detection of the first deallocation, decrementing a count associated with a second one of the plurality of memory sizes by the first data structure, and determination of a memory usage associated with the object store based on the counts associated with each of the plurality of memory sizes by the first data structure, wherein the second one of the plurality of memory sizes is different from the first memory size.
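The per-size counting can be sketched as below; the class and the page-to-size lookup are illustrative assumptions that show how a deallocation can decrement a count for a size different from the page's original allocation (e.g., after a reallocation updated the lookup).

```python
from collections import Counter

class ObjectStoreAccounting:
    """Track object-store memory usage by counting live allocations per size."""
    def __init__(self):
        self.counts = Counter()   # memory size -> count of live allocations
        self.page_sizes = {}      # logical page id -> currently allocated size

    def on_allocate(self, page_id: str, size: int):
        """Detection of an allocation: increment the count for that size."""
        self.counts[size] += 1
        self.page_sizes[page_id] = size

    def on_deallocate(self, page_id: str):
        """Detection of a deallocation: decrement the count for the page's size."""
        size = self.page_sizes.pop(page_id)
        self.counts[size] -= 1

    def memory_usage(self) -> int:
        """Memory usage derived solely from the per-size counts."""
        return sum(size * count for size, count in self.counts.items())

store = ObjectStoreAccounting()
store.on_allocate("page-1", 4096)
store.on_allocate("page-2", 16384)
store.on_deallocate("page-1")
print(store.memory_usage())  # 16384
```

Keeping only a count per size class, rather than a record per allocation, makes the usage computation a small fixed-cost sum over the size table.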
Techniques and solutions are provided for improving query execution. Data models can be complex, which is often reflected in queries against such data models. The present disclosure provides a query refactoring technique where a complex query, such as a query expressed as a single select statement, can be formulated as a series of less complex queries. The workload of a database can be reduced by combining results of the less complex queries outside of the database. The present disclosure provides a framework for implementing these techniques, where the framework includes a virtual cube, a calculation engine, and one or more operations, which can all be implemented as classes in a programming language, and where a generic class or interface can help guide users in developing subclasses that provide a reformulation or refactoring of a complex query.
According to some embodiments, systems and methods are provided, including at least one end-to-end (E2E) scenario including a sequence of process steps; a plurality of automates, wherein an automate is executable for each process step; a memory storing processor-executable code; and a processing unit to execute the processor-executable program code to: execute the plurality of automates in a sequential order that matches a sequential order of the process steps; for each executed automate, determine whether the executed automate failed; in a case it is determined the executed automate failed, identify dependent transactional data input to the failed automate, wherein the dependent transactional data includes one or more data objects; identify a validity state of each data object; and resume execution of the process steps based on the identified validity state. Numerous other aspects are provided.
G05B 19/4155 - Numerical control (NC), i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
100.
MACHINE LEARNING RECOMMENDATION FOR MAINTENANCE TARGETS IN PREVENTIVE MAINTENANCE PLANS
Automated management of tasks in a preventive maintenance context supports associating preventive maintenance targets with a preventive maintenance task. A trained machine learning model can predict which targets are most likely to be appropriate for a given header preventive maintenance target. A user interface can assist in target selection. Data integrity can be improved, and unnecessary expenditure of preventive maintenance resources can be avoided. A trained machine learning model can support features such as filtering and identifying outliers.