Example methods and systems are directed to determining topics of data objects. A machine learning model may be trained and used to determine topics of data objects. After topics for data objects are determined by the trained machine learning model, data objects having similar topics can be automatically related. A semantic web approach relies upon metadata being generated for the data objects along with metadata for the generated insights (such as topic groups). Such a semantic association between various objects (using metadata) forms a metadata-driven network of analytical representations of business entities/objects. A data stream comprising the semantic web, indicating the relationships between the metadata of the data objects and the metadata for the topics, may be pushed continuously into a central tool or repository to allow users to generate seamless analytical dashboards with minimal effort.
In an implementation, a request for one or more attachments stored in an application document store is received from a requestor and by an application agent associated with an application. For each attachment identified in the request, the application agent: 1) requests the attachment from a data privacy integration (DPI) kernel service; 2) receives a download link to the attachment in the application document store; 3) downloads, using the download link, the attachment from the application document store; 4) informs the DPI kernel service that a download of the attachment is complete; and 5) receives a message from the DPI kernel service that the download link has been deactivated. The application agent returns the one or more attachments to the requestor.
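The per-attachment flow described above can be sketched as follows. This is an illustrative toy model, not the patented implementation; the class and method names (`DPIKernelService`, `get_download_link`, `notify_download_complete`) are assumptions.

```python
class DPIKernelService:
    """Toy stand-in for the DPI kernel service that issues one-time download links."""

    def __init__(self, store):
        self.store = store          # attachment id -> content
        self.active_links = {}      # download link -> attachment id

    def get_download_link(self, attachment_id):
        # Steps 1-2: the agent requests the attachment; the service answers
        # with a download link into the application document store.
        link = f"https://store.example/dl/{attachment_id}"
        self.active_links[link] = attachment_id
        return link

    def download(self, link):
        # Step 3: the agent downloads the attachment via the link.
        return self.store[self.active_links[link]]

    def notify_download_complete(self, link):
        # Steps 4-5: the agent reports completion; the service deactivates
        # the link and confirms the deactivation.
        del self.active_links[link]
        return "link deactivated"


def fetch_attachments(kernel, attachment_ids):
    """Application-agent side: run steps 1-5 for each requested attachment."""
    results = []
    for att_id in attachment_ids:
        link = kernel.get_download_link(att_id)
        content = kernel.download(link)
        status = kernel.notify_download_complete(link)
        assert status == "link deactivated"
        results.append(content)
    return results
```

The one-time link plus explicit deactivation confirmation means the agent never holds a live link after the download completes.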
Provided is a system and method directed to a process of extending a software application using a semantic model as the logic of the extension. A parallel architecture is created by the extension which allows the software application to process logic from a semantic model (e.g., a graph) and process logic from source programming code. In one example, the method may include generating an extension comprising logic for a software application hosted on a host platform. The logic may include an entity-based semantic model. The method may further include deploying the extension within the software application on the host platform, wherein the deploying includes modifying programming logic of the software application to execute the entity-based semantic model, and activating the extension within the software application on the host platform.
Various systems and methods for selective revalidation of data objects are provided. In one example, a computer-implemented method includes updating a target data object of a database system according to a definition statement, and determining whether the definition statement changes one or more object properties of the target data object. In response to determining that the definition statement changes the one or more object properties of the target data object, the method includes revalidating data objects depending on the target data object. In response to determining that the definition statement does not change the one or more object properties of the target data object, the method includes not revalidating the data objects depending on the target data object. In this way, database management performance and speed may be improved while maintaining validity of data objects in a database.
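A minimal sketch of the selective-revalidation decision, assuming a dependency map and a simple property comparison; all names are illustrative, not from the abstract.

```python
def update_and_maybe_revalidate(objects, dependents, target, new_props):
    """Apply new_props to `target`; revalidate dependent objects only if
    the definition statement actually changes the target's object properties."""
    changed = objects[target] != new_props
    objects[target] = new_props
    revalidated = []
    if changed:
        # Only a property-changing definition statement triggers
        # revalidation of objects depending on the target.
        revalidated = list(dependents.get(target, []))
    return revalidated
```

Skipping revalidation when the properties are unchanged is what saves the database-management work the abstract alludes to.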
Techniques and solutions for defining clusters of data objects are provided. An anchor data object for the cluster is determined. The anchor data object is associated with a semantic concept. Other data objects included in the cluster are also associated with the semantic concept. One or more data objects that are related to the anchor data object are added to the cluster. Additional data objects, related to the one or more other data objects, or to other data objects of the additional data objects, are added to the cluster. The cluster is associated with a name, which can be used to identify data objects that are part of the cluster. The cluster can be used for a variety of purposes, including defining a replication task, for the creation of an application program interface, or for defining a deployment task that deploys at least a portion of cluster data objects.
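The anchor-based cluster expansion can be sketched as a breadth-first walk over object relationships, keeping only objects tagged with the cluster's semantic concept. This is a hedged illustration; the data structures are assumptions.

```python
from collections import deque

def build_cluster(anchor, related, concept_of, concept):
    """Expand a cluster outward from the anchor data object, adding related
    objects (and objects related to those) that share the semantic concept."""
    cluster = {anchor}
    queue = deque([anchor])
    while queue:
        obj = queue.popleft()
        for neighbor in related.get(obj, []):
            if neighbor not in cluster and concept_of.get(neighbor) == concept:
                cluster.add(neighbor)
                queue.append(neighbor)
    return cluster
```

The returned set could then be bound to a cluster name and used, for example, to define a replication or deployment task over its members.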
The present disclosure provides techniques and solutions for executing requests for database operations involving a remote data source in a system that includes an anchor node and one or more non-anchor nodes. A first request for one or more database operations is received, where at least a first database operation includes a data request for a remote data object. It is determined that the first database operation is not an insert, delete, or update operation, and therefore is assignable to the anchor node or one of the non-anchor nodes. The first database operation is assigned to a non-anchor node for execution. In a particular implementation, for a particular set of requests for a database operation, once an insert, delete, or update operation is received for the remote data object, subsequent operations for the remote data object in the set of requests are assigned to the anchor node for execution.
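The routing rule described above (reads may go anywhere until the first write, after which the remote object is pinned to the anchor node) can be sketched as follows; node names and the round-robin choice are assumptions for illustration.

```python
def assign_operations(operations, non_anchor_nodes):
    """Assign each (operation, remote_object) pair to a node.

    Non-write operations are spread round-robin over non-anchor nodes until an
    insert/delete/update is seen for an object; from then on, all operations
    for that object go to the anchor node."""
    WRITE_OPS = {"insert", "delete", "update"}
    pinned = set()      # remote objects that have seen a write
    assignments = []
    rr = 0
    for op, obj in operations:
        if op in WRITE_OPS or obj in pinned:
            if op in WRITE_OPS:
                pinned.add(obj)
            assignments.append((op, obj, "anchor"))
        else:
            assignments.append((op, obj, non_anchor_nodes[rr % len(non_anchor_nodes)]))
            rr += 1
    return assignments
```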
Computer-readable media, methods, and systems are disclosed for determining maintenance readiness of at least one system in a cloud environment including requesting performance of a maintenance event by a user via a user interface and analyzing data from the at least one system to determine a readiness for the performance of the maintenance event. Analyzing the data may comprise predicting an expected downtime for the maintenance event for the at least one system, determining an effort estimation variable for the at least one system, and determining a maintenance readiness rating (MRR) for the at least one system based on the effort estimation variable and the expected downtime.
In an implementation, one or more rules associated with a data object (DO) are read from a rules database by a rule user interface (UI) plug-in associated with a DO maintenance UI. The rule UI plug-in relates the one or more rules for the DO to fields associated with the DO on the DO maintenance UI. The rule UI plug-in, using the related one or more rules, auto-populates and validates received values for the fields associated with the DO on the DO maintenance UI. The rule UI plug-in determines that one or more violations of the one or more rules have occurred and displays an additional UI with mutually exclusive options for mitigating the determined one or more violations of the one or more rules. A new rule is saved into the rules database.
Various examples are directed to systems and methods for utilizing relationship data in a computing system. The computing system may extract first relationship data from a document and determine a first confidence value describing the first relationship data. The computing system may write the first relationship data to a knowledge graph data structure. The computing system may serve a first user interface page to a user computing device associated with a first user and receive feedback data describing an accuracy of the first relationship data. The computing system may modify a first confidence subunit of a triple data unit associated with the relationship to describe an updated confidence value based on the feedback data and a trust score of the first user.
Techniques and solutions are provided for providing federated data access to parameterized data objects. At a local system, a virtual parameterized data object is created. A remote computing system is contacted to obtain parameters used by a parameterized data object of the remote computing system to which the virtual parameterized data object corresponds. Parameter information received from the remote system is stored in a definition of the virtual parameterized data object at the local system. When a request for a database operation involving the virtual parameterized data object is received, the parameter information can be used to determine whether the request is correctly formed, or can be used in preparing a request to be sent to the remote system to be performed using the parameterized data object to obtain information specified in the request for a database operation.
Embodiments integrate with an authorization service (e.g., OAUTH) to implement document protection. In response to a document scheduling request, a protection engine reads a protection policy including a sensitivity label, from the authorization service. The protection engine encrypts content of the document, and stores the document including the encrypted content and a header, in a non-transitory computer readable storage medium (e.g., a database). At a conclusion of the document scheduling phase, the protection engine may send a status (e.g., successful; failed) of the document scheduling. Next, in response to receiving a subsequent document view request, the protection engine references the header to communicate with the authorization service. The protection engine decrypts the content based upon information received from the authorization service, and provides the document including decrypted content for viewing.
Various examples are directed to systems and methods for utilizing relationship data. A computing system may receive a time-dependent query against a knowledge graph data structure. The computing system may access confirmation data from the knowledge graph data structure, the confirmation data describing a first plurality of confirmation points-in-time at which a first test relationship of the time-dependent query is true. The computing system may determine that at least one of a beginning or an end of a first time period associated with the first test relationship is not defined by the knowledge graph data structure. The computing system may determine a response to the time-dependent query indicating a veracity of the first test relationship at a test point-in-time using the first plurality of confirmation points-in-time.
In some embodiments, there is provided a method including creating at least one reusable user interface metadata definition for at least one user interface object; storing the at least one reusable user interface metadata definition; creating at least a portion of a user interface page, which includes the at least one user interface object, using the at least one reusable user interface metadata definition; overriding the at least one reusable user interface metadata definition; bundling into a container the at least one reusable user interface metadata definition with other metadata definitions; and deploying the container of the at least one reusable user interface metadata definition and the other metadata definitions to a device where a metadata interpreter can generate at least one user interface object associated with the at least one reusable user interface metadata definition. Related systems and computer program products are also provided.
Provided are systems and methods for transforming an operation-centric API into a graph-based API. In one example, a method may include receiving a description of an application programming interface (API), translating the description into a proxy model that comprises a list of a plurality of operations performed by the API, executing one or more heuristic programs on the proxy model to determine a plurality of entities associated with the list of operations and relationships among the plurality of entities, generating a graph API based on the plurality of entities and the relationships among the plurality of entities, wherein the graph API comprises a plurality of nodes representing the plurality of entities and edges between the plurality of nodes representing the relationships between the plurality of entities, and storing the graph API in a storage.
According to some embodiments, systems and methods are provided including an n−1 Application Programming Interface (API) including n−1 API metadata; an API automate, wherein the API automate is generated for the n−1 API; a memory storing processor-executable program code; and a processing unit to execute the processor-executable program code to cause the system to: receive an n API including n API metadata; execute the API automate with the n API and output an API automate status; and in a case that the API automate status is failed: compare the n API metadata and the n−1 API metadata; identify at least one difference between the n API metadata and the n−1 API metadata; generate an alert based on the identified at least one difference; and render a user interface, wherein the rendered user interface includes the alert. Numerous other aspects are provided.
Execution of an operation in a cloud environment is performed by a controller receiving an event from an eventing framework. The controller determines a number of phases of the cloud operation, and checks a status of each phase. Where a status of a phase of a cloud operation is open (e.g., having not been completed owing to interruption due to a communications failure in the cloud), the controller executes the phase of the cloud operation, records the completed status in a storage medium, and reports the status. Where a status of a phase of the cloud operation is determined to be complete, the controller iterates to the next phase. The controller may configure the eventing framework to receive the event. The eventing framework may be external to the controller (e.g., KUBERNETES). Alternatively, the eventing framework may be internal to the controller (for example communicating events based upon a polling mechanism).
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Computer-readable media, methods, and systems are disclosed for proactively compiling in-memory database management system (DBMS) query plans upon startup of the in-memory DBMS. During normal operation of the in-memory DBMS, alternative query plans having associated execution statistics are collected and captured. Thereafter, the alternative query plans are selectively persisted and in response to detecting performance regressions, the regressed query plan is compared with prior query plans. In response to determining that a prior query plan performs better, the regressed query plan is replaced with the prior query plan. Upon a restart of the in-memory DBMS, a selected portion of the plurality of alternative query execution plans is loaded, and the plurality of alternative query execution plans are compiled. New queries are received and executed based on the proactively compiled query plans.
Methods, systems, and computer-readable storage media for data replication. Data records associated with business entities are obtained. A plurality of data fields is defined for each record. A first set of data records is determined as associated with a first identifier of a first business entity. Data from a first set of data fields is selected from each data record of the first set of the data records. The first set of data fields is a subset of the plurality of data fields and is defined for evaluation of the first set of data records associated with the first business entity to determine a first data record from the first set of data records to be replicated from the data management system into a database system.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
Various examples are directed to systems and methods for upgrading a cloud-implemented application. A cloud service may receive a request from a first user group to access the application. The cloud service may access consumer context data comprising a plurality of context properties of the first user group and may access a rollout strategy map comprising a first rollout record. The cloud service may compare the plurality of context properties of the first user group to first selector data indicated by the first rollout record. Based on the comparing, the cloud service may add a first version of the application to a list of permissible versions for the first user group.
Methods, systems, and computer-readable storage media for retrieving, by a smart setup system, a component configuration metadata file corresponding to an application, the component configuration metadata file including component metadata representing components that the application uses during runtime, parsing, by a parser of the smart setup system, the component configuration metadata file to provide a set of data objects, each corresponding to a respective component in the set of components, providing, by an emitter of the smart setup system, a set of checking scripts and a set of installation scripts by, for each component in the set of components, providing a checking script and an installation script using a respective data object, and executing, by the smart setup system, each checking script in the set of checking scripts, and in response, receiving a set of check results, each check result indicating whether pre-requisites of a respective component are met.
In a scenario involving a primary and secondary server, resource requests can be managed to avoid sending multiple requests to the secondary server. In particular, requests for data object attributes can be queued when another request has already been made. Hashkey and locking mechanisms can be used to support scenarios involving multiple users and multiple data object instances. Performance of the overall system landscape can thus be improved by effectively consolidating resource requests.
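The queueing-and-hashkey idea can be sketched as below. This is an assumed in-memory model: the hashkey is taken over (object id, attribute), and real deployments would add the locking the abstract mentions; the class and method names are illustrative.

```python
import hashlib

class RequestConsolidator:
    """Queues duplicate attribute requests so the secondary server sees at
    most one in-flight request per (data object, attribute) pair."""

    def __init__(self):
        self.pending = {}   # hashkey -> list of waiting requestors
        self.sent = []      # requests actually forwarded to the secondary

    def _hashkey(self, obj_id, attribute):
        return hashlib.sha1(f"{obj_id}:{attribute}".encode()).hexdigest()

    def request(self, requestor, obj_id, attribute):
        key = self._hashkey(obj_id, attribute)
        if key in self.pending:
            # A request for this attribute is already in flight: queue the
            # requestor instead of sending a second request.
            self.pending[key].append(requestor)
        else:
            self.pending[key] = [requestor]
            self.sent.append((obj_id, attribute))   # the one real request

    def complete(self, obj_id, attribute, value):
        """Deliver the secondary server's answer to every queued requestor."""
        key = self._hashkey(obj_id, attribute)
        return {r: value for r in self.pending.pop(key, [])}
```

Two users asking for the same attribute thus cost one round trip to the secondary server instead of two.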
Embodiments are described for a data processing tool configured to cease operations of a plurality of database readers when detecting a congestion condition in the data processing tool. In some embodiments, the data processing tool comprises a memory, one or more processors, and a plurality of database readers. The one or more processors, coupled to the memory and the plurality of database readers, are configured to determine a congestion condition in at least one data pipeline of a plurality of data pipelines of the data processing tool. Each data pipeline of the plurality of data pipelines connects a database reader and a transformer of the data processing tool, a transformer and a database writer of the data processing tool, or two transformers of the data processing tool. The one or more processors are further configured to refrain from reading data from one or more databases responsive to the congestion condition.
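The backpressure behavior can be sketched with pipeline queues modeled as depth counters; the high-watermark threshold and function names are assumptions, not from the abstract.

```python
def readers_should_pause(pipelines, high_watermark):
    """A congestion condition exists when any reader->transformer,
    transformer->transformer, or transformer->writer queue is too full."""
    return any(depth >= high_watermark for depth in pipelines.values())

def step(pipelines, high_watermark, reads_per_step):
    """One scheduling step: readers refrain from reading while any pipeline
    is congested; otherwise each reader enqueues new rows downstream."""
    if readers_should_pause(pipelines, high_watermark):
        return 0                      # cease database reads
    for name in pipelines:
        pipelines[name] += reads_per_step
    return reads_per_step
```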
Embodiments implement iterative quantum annealing to provide a solution of an optimization. An annealing engine is located upstream of a quantum annealer (or a digital annealer, simulated annealer, or classical solver). The annealing engine is configured to process an initial solution to an original Quadratic Unconstrained Binary Optimization (QUBO) model, and thereby construct a second QUBO model. The second model is then fed to the quantum annealer, which returns a computed solution. The annealing engine constructs an intermediate solution from the computed solution and the second QUBO model. If the annealing engine determines a stopping criterion is satisfied by the intermediate solution, a final solution is constructed therefrom. If the annealing engine determines the stopping criterion is not satisfied, the second QUBO model is overwritten with the intermediate solution to form the basis for another iteration of QUBO model creation, quantum annealing, and evaluation of satisfaction of the stopping criterion.
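The iterative loop can be sketched as below, with a greedy bit-flip routine standing in for the quantum annealer and "no further energy improvement" standing in for the stopping criterion; both substitutions are assumptions for illustration. The QUBO energy is the usual x^T Q x over binary variables.

```python
def qubo_energy(Q, x):
    """Energy of binary vector x under QUBO matrix Q (x^T Q x)."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal_once(Q, x):
    """Stand-in solver: greedily flip any bit that lowers the energy."""
    x = list(x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = list(x)
            y[i] ^= 1
            if qubo_energy(Q, y) < qubo_energy(Q, x):
                x, improved = y, True
    return x

def iterative_anneal(Q, x0, max_iters=10):
    """Feed each intermediate solution back into the annealer until the
    stopping criterion (no improvement) is satisfied."""
    x = x0
    for _ in range(max_iters):
        nxt = anneal_once(Q, x)
        if qubo_energy(Q, nxt) >= qubo_energy(Q, x):
            break                     # stopping criterion satisfied
        x = nxt
    return x
```

In the patented arrangement, the inner solver would be the quantum (or digital/simulated/classical) annealer, and the engine would rebuild a second QUBO model around each intermediate solution rather than reuse Q directly.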
A method may include detecting an occurrence of a logistic exception during a fulfilment cycle of an order. An impact of the logistic exception may be determined based on a tracking model having a plurality of interconnected tracking objects. The plurality of interconnected tracking objects may include a first tracking object corresponding to the order, a second tracking object corresponding to an order item included in the order, a third tracking object corresponding to a delivery order item corresponding to the order item, a fourth tracking object corresponding to a first delivery order including the delivery order item, and a fifth tracking object corresponding to a transport event including the delivery order. One or more of the plurality of interconnected tracking objects may be updated based at least on the impact of the logistic exception. Related systems and computer program products are also provided.
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
Systems and methods include reception of tabular data for display, determination of a display width, determination of a number of columns of the tabular data, determination of a column indicator width based on the display width and the number of columns, and simultaneous display of a subset of the columns of the tabular data and a column indicator corresponding to each column of the tabular data, wherein one or more of the displayed column indicators is of the determined width.
G06F 3/0485 - Scrolling or panning
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
G06F 16/22 - Indexing; Data structures therefor; Storage structures
Provided is a system and method for inverting custom rules of a software program to ensure compatibility with other possible rules of the software program. The inverted rules are always stored in a unified format. In one example, the method may include detecting a custom rule within a software program. The custom rule may include one or more input values, one or more output values, and a plurality of conditions for converting the one or more input values into the one or more output values. The method may also include generating an inverse of the custom rule, wherein the inverse of the custom rule comprises a plurality of inverse statements, and each inverse statement includes an output value mapped to an input value and a different condition among the plurality of conditions. The method may also include updating a rules repository associated with the software program with the inverse of the custom rule.
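A hedged sketch of the inversion: here a custom rule is modeled as (condition, input, output) triples, and the inverse statements map each output back to its input under the same condition, stored in one unified triple format. The representation is an assumption for illustration.

```python
def invert_rule(rule):
    """rule: list of (condition, input_value, output_value) triples.
    Returns the inverse statements as (condition, output_value, input_value)."""
    return [(cond, out, inp) for (cond, inp, out) in rule]

def update_repository(repo, rule_name, rule):
    """Store the inverted rule in the rules repository, always in the
    unified (condition, output, input) format."""
    repo[rule_name] = invert_rule(rule)
    return repo
```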
A method for text-image integration is provided. The method may include receiving a question related to pairable data comprising text data and image data. Text tokens are generated from the text data, and image encodings are generated from the image data. Embeddings, including text embeddings and image embeddings, are generated from the text tokens and image encodings. A spectral conversion of the text embeddings and the image embeddings is performed to generate spectral data. The spectral data is processed to extract text-image features. The text-image features are processed to generate inferred answers to the question.
Systems and methods provide reception of a plurality of data samples for training a machine learning model and a plurality of examples associated with each of a plurality of ground truth labels for training a machine learning model, identification of all examples of the plurality of examples within each of the data samples, determination, for each identified example, of an associated one of the plurality of labels and a location of the example in the data sample, annotation of the data sample with the associated one of the plurality of labels and the location, and training of a machine learning model using the annotated data sample.
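The annotation step can be sketched as a substring search over each sample: every occurrence of every example is tagged with its ground-truth label and its location. A minimal illustration under assumed data shapes (plain strings and character spans):

```python
def annotate_samples(samples, examples):
    """samples: list of text strings.
    examples: dict mapping ground-truth label -> list of example strings.
    Returns each sample annotated with (label, start, end) spans for every
    identified example occurrence."""
    annotated = []
    for text in samples:
        spans = []
        for label, exs in examples.items():
            for ex in exs:
                start = text.find(ex)
                while start != -1:
                    spans.append((label, start, start + len(ex)))
                    start = text.find(ex, start + 1)
        annotated.append({"text": text, "spans": sorted(spans, key=lambda s: s[1])})
    return annotated
```

The annotated samples would then serve as training data for the machine learning model.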
Methods, systems, and computer-readable storage media for receiving, by an operation guard system executed within a cloud platform, session information representative of a session of a user within the cloud platform, the session information including user information and operation information, determining, by the operation guard system, that the user is signed into a technical group for execution of an operation represented in the operation information, and in response, providing, by the operation guard system, a risk score associated with the operation, and determining, by the operation guard system and at least partially based on the risk score, that the operation is a risk-oriented operation, and in response, preventing execution of the operation and transmitting an alert.
Interactive dialog communication via callbacks is provided by a dialog execution environment executed in response to a selection of a first user interface element in a panel execution environment. The dialog execution environment receives a first set of information including a callback method. A dialog includes a second user interface element selectable to show a second set of information. To obtain this information, the dialog execution environment sends a request for the second set of information to the panel execution environment via the callback method. The panel execution environment sends a request for the second set of information to a server and receives a response including the second set of information. The dialog execution environment receives a response including the second set of information from the panel execution environment and provides the second set of information in a user interface of the dialog.
H04M 3/51 - Centralised call answering arrangements requiring operator intervention
H04M 3/523 - Centralised call answering arrangements requiring operator intervention with call distribution or queueing
Provided are systems and methods for simplifying a user interaction when inputting data into multiple pages/windows of a software application. In one example, the method may include executing a software application, displaying a plurality of rows of data values from columns of a database table via a user interface embedded in a page of the software application, detecting a request for a fast input submitted via the user interface, and in response to the detected request, displaying a plurality of interactive elements within the plurality of rows of data values on the user interface, and detecting a selection of an interactive element from among the plurality of interactive elements, and in response, displaying a fast input user interface with input fields extracted from one or more other pages of the software application via the user interface.
Techniques and solutions are provided for searching for documents upstream, downstream, or both upstream and downstream of a given document, and providing information about relationships between documents in the results. To help users understand relationships between documents, different “voices” can be used in a result display, such as an “active voice” being used for an upstream search and a “passive voice” being used for a downstream search. If desired, results can be limited or filtered, such as limiting a search to a particular relationship type or types, or providing a limit to an amount of indirection between documents. Disclosed techniques can provide more useful information about a document flow, and can reduce computing resources used in generating such displays.
Implementations include a schema stack management system that enables zero-downtime during execution of maintenance procedures on application systems having schema stacks including one or more customer-provided schema extensions.
Techniques and solutions are provided for generating allocation tasks for a plurality of tasks requesting one or more instances of an element, the element being associated with a plurality of allocation units. At least one allocation unit is an aggregation unit that comprises multiple instances of the element. Certain disclosed techniques allow for a combination of types of allocation tasks, such as an allocation task that directly allocates one or more instances of an allocation unit to a task, or an allocation task that has subtasks of withdrawing one or more instances of an aggregation unit and then distributing element instances of the aggregation unit or units among a plurality of tasks. Another technique determines whether a multiple of a given aggregation unit can exactly satisfy multiple tasks of the plurality of tasks. Another aspect provides for splitting tasks into groups to allow for more efficient allocation.
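The "exact multiple" determination can be sketched as a simple arithmetic check: can some whole number of aggregation units, each holding `unit_size` element instances, exactly cover the summed demand of a group of tasks? Names and the flat-demand model are assumptions for illustration.

```python
def exact_aggregation_multiple(task_demands, unit_size):
    """task_demands: requested element-instance counts for a group of tasks.
    Returns the number of aggregation units if some multiple of the unit
    exactly satisfies the group with nothing left to distribute, else None."""
    total = sum(task_demands)
    if total > 0 and total % unit_size == 0:
        return total // unit_size
    return None
```

When the check succeeds, the units can be withdrawn and their element instances distributed among the tasks with no remainder; when it fails, the mixed allocation-task types described above would be used instead.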
Various examples are directed to systems and methods for operating an application for use with an enterprise database system. A common format process may receive, from a user device, a first request directed to the enterprise database system, convert the first request into a common protocol, and send a first common protocol request to the application logic code. The application logic code may generate a second request in the common protocol and send the second request to the common format process. The common format process may convert the second request from the common protocol to a database query protocol to generate at least one database query and send the at least one database query to the enterprise database system.
In an example embodiment, an additional classifier is introduced to an autoencoder neural network. The additional classifier performs an additional classification task during the training and testing phases of the autoencoder neural network. More precisely, the autoencoder neural network learns to classify the domain (or origin) of each specific input sample. This leads to additional contextual awareness in the autoencoder neural network, which improves the reconstruction quality during both the training and testing phases. Thus, the technical problem of decreased autoencoder neural network reconstruction quality caused by high data variance is addressed.
Techniques and solutions are provided for determining changes to computing objects based on a change to a related computing object. A model of model objects is created, where a model object represents a computing object of a plurality of computing objects. The model stores information about relationships between the plurality of computing objects. A change to a computing object of the plurality of computing objects is received, and the model is used to determine one or more objects of the plurality of computing objects that are affected by the change, using the relationship information in the model. At least a portion of the plurality of the objects are of differing types.
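Determining the affected objects can be sketched as a walk over the model's relationship information; the dependency-map representation is an assumption for illustration.

```python
from collections import deque

def affected_objects(relationships, changed):
    """relationships: dict mapping a computing object -> objects that depend
    on it (the model's relationship information). Returns every object,
    possibly of differing types, affected by a change to `changed`."""
    affected = set()
    queue = deque([changed])
    while queue:
        obj = queue.popleft()
        for dependent in relationships.get(obj, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected
```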
Object-based transportation between tenants may provide advantages over persistence layer-based transportation on a cloud platform in situations where persistence layer storage space is limited. Object-based transportation involves obtaining, from the target tenant application, a selection of a set of objects from the plurality of objects and determining object identifiers for each of the selected set of objects. For each object in the selected set of objects, a request is sent to a source tenant. The request includes the corresponding object identifier for that object. Corresponding object data is received from the source tenant. At least a portion of the corresponding object data is stored in a target tenant database. An existing object may be updated or a new object may be created.
Provided are systems and methods for transporting a bot from a runtime of a first tenant into a runtime of a second tenant. The transport process can include transferring bot configurations, machine learning models, training data, and the like. In one example, a method may include exporting a file from a first tenant in a multi-tenant environment, wherein the file comprises a chatbot program, one or more machine learning models for generating a response from the chatbot program, and training data used to train the one or more machine learning models for generating the response, importing the file into a second tenant of the multi-tenant environment, and installing the chatbot program, the one or more machine learning models, and the training data within a directory of the second tenant of the multi-tenant environment.
Computer-readable media, methods, and systems are disclosed for facilitating source to target translation and synchronization of production details for manufacturing of one or more production articles. A plurality of versioned technical design inputs in structured and unstructured formats is received. A plurality of translation inputs corresponding to one or more parameters associated with the plurality of versioned technical design inputs is received. A mapping of the plurality of versioned technical design inputs to a plurality of manufacturing data elements associated with one or more data models of a manufacturing system is automatically generated, based on the plurality of translation inputs. Finally, a plurality of manufacturing routings for implementing production of the one or more production articles is generated.
G05B 19/4093 - Numerical control (NC), i.e. automatically operating machines, in particular machine tools, e.g. in an industrial manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form, characterised by part programming, e.g. entry of geometrical information as taken from a technical drawing, combining this information with machining and material information to obtain control information
41.
DATA SHARING BETWEEN TENANTS AT DIFFERENT NETWORK PLATFORMS
Methods, systems, and computer-readable storage media for sharing of data between tenants associated with network applications. A set of queues related to a set of tenants at a plurality of platforms is maintained. Each queue stores a set of messages related to events generated by a particular tenant from the set of tenants. Each queue is divided into a respective subset of sub-queues. Each sub-queue of the respective subset of sub-queues is associated with a particular topic. Access control permissions and network connections defined for each tenant of the set of tenants are evaluated. Data federation logic is executed to distribute data from a first sub-queue of a first queue associated with a first tenant of the set of tenants to at least one other sub-queue of a second queue associated with a second tenant based on at least one matching topic defined in the data federation logic.
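A minimal sketch of the topic-matched federation step, assuming hypothetical tenant names and a permission set keyed as (reader, source) pairs; none of these identifiers come from the patent:

```python
def federate(queues, federation_rules, permissions, source, topic, message):
    """Append `message` to the source tenant's topic sub-queue, then
    copy it to each target tenant whose federation rule matches the
    topic and whose access-control permissions allow reading from
    the source tenant."""
    queues.setdefault(source, {}).setdefault(topic, []).append(message)
    for target in federation_rules.get((source, topic), []):
        if (target, source) in permissions:  # target may read source's data
            queues.setdefault(target, {}).setdefault(topic, []).append(message)

queues = {}
rules = {("tenant_a", "orders"): ["tenant_b", "tenant_c"]}
perms = {("tenant_b", "tenant_a")}  # tenant_c lacks permission
federate(queues, rules, perms, "tenant_a", "orders", {"event": "order_created"})
```

Here `tenant_b` receives a copy of the event while `tenant_c` is filtered out by the permission check.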
Systems and methods include configuration of a first database view to read from common versions of rows of a database table or from rows of the database table associated with a first version of the application, the database table storing application content, configuration of a first application server to write to the common versions of rows of the database table, configuration of a second database view to read from common versions of rows of the database table or from rows of the database table associated with a second version of the application, configuration of a second application server to write to rows of the database table associated with the second version of the application, modification of the database table to include rows associated with the second version of the application while the first application server executes the first version of the application and incoming user requests are directed to the first application server, configuration of the second application server to write to common versions of the rows of the database table, and re-direction of incoming user requests to the second application server executing the second version of the application.
A user experience repository may contain base layouts and variant metadata for applications of an enterprise. An application design platform may receive, from a designer, an indication of a selected base layout for a selected application and interact with the designer to create a user experience variant (e.g., a page layout). The designer may then define an assignment rule for the user experience variant, the assignment rule including custom logic and multiple user parameters (e.g., a user role, country, language, etc.), and the system may store information about the user experience variant and assignment rule. An enterprise application service platform may determine that a user is accessing the selected application and evaluate the custom logic of the assignment rule based on user parameters of the user accessing the selected application. In accordance with the evaluation, the system may arrange to provide the appropriate user experience variant to the user.
Techniques for archiving data using an additional auxiliary database are disclosed. In some embodiments, a computer system may store data in a primary database and archive the data stored in the primary database, where the archiving of the data comprises storing a first copy of the data in an archive database and storing a second copy of the data in an auxiliary database. Next, the computer system may detect a change to the primary database, determine that the detected change satisfies a condition, and, based on the determining that the detected change satisfies the condition, prevent the detected change from being applied to the archive database, and update the auxiliary database by applying the detected change to the auxiliary database. The computer system may then use the archive database to service a first type of request and the updated auxiliary database to service a second type of request.
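The split between the frozen archive copy and the updatable auxiliary copy can be sketched as follows; the class and method names are illustrative assumptions, not the patented implementation:

```python
class ArchivingStore:
    """Sketch: archive_all() snapshots the primary database into both
    the archive and the auxiliary database; a later change that
    satisfies the condition updates only the auxiliary copy and is
    prevented from reaching the archive."""
    def __init__(self):
        self.primary, self.archive, self.auxiliary = {}, {}, {}

    def archive_all(self):
        self.archive = dict(self.primary)
        self.auxiliary = dict(self.primary)

    def apply_change(self, key, value, condition):
        self.primary[key] = value
        if condition(key, value):       # detected change satisfies the condition
            self.auxiliary[key] = value # ...so it never touches the archive
```

Requests of the first type would then be served from `archive`, requests of the second type from the updated `auxiliary`.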
Systems and methods include acquisition of a first image of a first activity record, determination of first text based on the first image, generation of a first embedding based on the first text, generation of a second embedding based on the first embedding using a first model, where a number of dimensions of the second embedding is less than a number of dimensions of the first embedding, determination of a first cluster based on the second embedding using a second trained model, the second trained model trained using unsupervised learning, and determination of a first activity type associated with the first activity record based on the first cluster, the second embedding and historical activity data associating the first cluster with a plurality of activity types and each of the plurality of activity types with a respective embedding metric.
Embodiments utilize distributed ledger technology (e.g., blockchain) to handle interaction flows for processes occurring between separate/distinct systems. A first system initiates an interaction flow, creating a first ledger entry (e.g., blockchain transaction) based upon an original timestamp and encryption keys. The interaction flow is then communicated across a boundary of the first system to a second system. The second system receives the interaction flow, and creates a second ledger entry (e.g., second blockchain transaction in the blockchain) based upon a subsequent timestamp and hash of the first ledger entry. Further cross-system interactions are similarly handled by creating additional immutable ledger entries (e.g., more blockchain transactions in the blockchain). Thus, the stored interaction flows serve as the single source of truth for communication across system boundaries. This allows verification of cross-system communications without resort to a central authority, and moreover can serve as the basis for analytical querying regarding cross-system interaction.
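A hedged sketch of the hash-chained ledger entries described above, using SHA-256 as a stand-in hash; the field names and helper functions are hypothetical:

```python
import hashlib
import json

def ledger_entry(payload, prev_hash, timestamp):
    """Create an immutable entry whose hash covers the payload, the
    timestamp, and the hash of the previous entry (blockchain-style)."""
    body = {"payload": payload, "prev": prev_hash, "ts": timestamp}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(entries):
    """Each entry must reference the hash of its predecessor."""
    for prev, entry in zip(entries, entries[1:]):
        if entry["prev"] != prev["hash"]:
            return False
    return True

# First system initiates; second system records receipt across the boundary.
first = ledger_entry({"system": "A", "step": "initiate"}, prev_hash=None, timestamp=1)
second = ledger_entry({"system": "B", "step": "receive"}, first["hash"], timestamp=2)
```

Because each entry embeds its predecessor's hash, tampering with any entry breaks verification for everything after it, which is what lets the chain act as a single source of truth without a central authority.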
Systems and methods are provided for analyzing a commit comprising an updated version of software code against a previous version of software code to determine a plurality of methods in the commit that have been changed, identifying a previous version and an updated version for each method that has been changed, and generating graphical representations of each previous version and each updated version of each method that has been changed. The systems and methods further provide for extracting path contexts from each graphical representation for each previous version and each updated version of each method, determining path contexts that are different by comparing each path context for each previous version with an associated updated version of each method, and encoding each path context that is different to generate at least one commit vector representation of the commit.
A method may include receiving a first transaction inserting a record into a database and a second transaction deleting the record from the database. A validity period for the record may be determined based on a first commit time at which the first transaction is committed and a second commit time at which the second transaction is committed. A current table and/or a history table of a system versioned table may be updated to include the record based on the validity period of the record. One or more temporal operations may be performed based on the system versioned table. For example, a time travel operation may be performed to retrieve, based on the system versioned table, one or more records that are valid at a given point in time. Related systems and computer program products are also provided.
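A small sketch of the validity-period and time-travel logic, assuming commit times are comparable integers and a half-open validity interval; the function names are illustrative:

```python
def validity(insert_commit, delete_commit):
    """A record inserted at `insert_commit` and deleted at
    `delete_commit` is valid over [insert_commit, delete_commit)."""
    return (insert_commit, delete_commit)

def time_travel(history, point_in_time):
    """Retrieve records from the history table that are valid at the
    given point in time."""
    return [rec for rec, (valid_from, valid_to) in history
            if valid_from <= point_in_time < valid_to]

history = [("row-1", validity(10, 20)),
           ("row-2", validity(5, 8))]
```

With this data, a time-travel query at commit time 12 returns only `row-1`, since `row-2` was already deleted at time 8.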
Connectivity between remote networks is managed by a central engine that collects and stores network data such as network addresses, URLs, hostnames, and/or other information. The engine creates a tunnel proxy, as well as separate respective tunnels with the remote networks. Based upon network data, the engine references the tunnel proxy to create a logical link joining the respective tunnels. Data can then flow between the remote networks through the logical link. The logical link may exist for only a limited time, e.g., as determined by a timer. Certain embodiments may be particularly suited to empower a customer network to manage connectivity with the remote network of a support provider. The customer can initiate connectivity changes without the manual involvement of the support provider. The customer can also authorize the support provider to manage connectivity and initiate changes under prescribed conditions.
A transaction processing protocol for serverless database management systems can use a transaction scheduler to guarantee consistent serializable execution through analysis of the access pattern of transaction types and appropriate ordering of the transactions' events at runtime. A transaction topology is determined for each type of transaction, and these topologies are combined and used to generate a serialization graph. Cycles in the serialization graph are identified, and the transaction types that can break those cycles are determined. When transaction requests are received, a transaction of a breaking type is scheduled as the last transaction in the current epoch, and later transactions not having the breaking transaction type are scheduled to execute in the next epoch.
A system, method and computer program product for managing distributed transactions of a database. A transaction manager is provided for each of a plurality of transactions of the database. Each transaction manager is configured to perform functions that include generating a transaction token that specifies data to be visible for a transaction on the database. The database contains both row and column storage engines, and the transaction token includes a transaction identifier (TID) for identifying committed transactions and uncommitted transactions. A last computed transaction is designated with a computed identifier (CID), record-level locking of records of the database is performed using the TID and CID to execute the transaction, and the plurality of transactions of the database are executed with each transaction manager.
Methods, systems, and computer-readable storage media for providing two or more map paths, each map path representing a set of models and relationships between models for data stored in a database system, combining the two or more map paths to provide a map path tree that at least partially defines a data structure for storing at least a portion of the data stored in the database system in the cache, querying the database system by recursively traversing the map path tree to retrieve data instances from the database system, and storing each data instance in the cache using the data structure.
Systems and methods provide identification of a code artifact, determination of logical entities of the code artifact, determination of references between the logical entities of the code artifact, determination, based on the determined references, of one or more methods of the code artifact that are referenced by no logical entities of the code artifact, and identification of ones of the one or more methods which were not executed by a production system by searching a code execution trace for each of the one or more methods.
The present disclosure provides techniques and solutions for defining, deploying, or using distributed neural networks. A distributed neural network includes a plurality of computing elements, which can include Internet of things (IOT) devices, other types of computing devices, or a combination thereof. At least one neuron of a neural network is implemented, for a given data processing request using the distributed neural network, on a single computing element. Disclosed techniques can manage data processing requests in the event of an unreachable computing element, such as by processing the request without the participation of such computing element. Disclosed techniques also include redefining distributed neural networks to replace an unreachable computing element. Information to configure computing elements as neurons can include one or more of definitions of computing elements that will provide input, weights to be associated with inputs, definitions of computing elements to receive output, or an activation function.
G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
56.
CROSS DATA CENTER DATA FEDERATION FOR ASSET COLLABORATION
A method may include a collaboration controller receiving a collaboration request to share a data asset associated with a first customer onboarded at a first data center with a second customer. In response to the collaboration request, the collaboration controller may determine that the second customer is onboarded at a second data center but not the first customer. Moreover, the collaboration controller may replicate, at the second data center, the data asset associated with the first customer upon determining that a copy of the data asset is not already present at the second data center. The replicating of the data asset may provide the second customer access to the data asset by at least creating, at the second data center, the copy of the data asset. Related systems and computer program products are also provided.
A method includes receiving, at a search toolbar, a search query from a machine in a network. The machine has an associated machine profile for participating in the network as an entity. The machine profile includes a machine identifier and machine metadata. A query type is determined from the search query. A search context for the machine is determined using a semantic graph of the network. From a set of services for the network, one or more relevant services to respond to the search query are identified based on the query type and the search context. The search query is applied to the one or more relevant services to obtain a set of responses. A set of relevant results for the search query is determined from the set of responses. The set of relevant results is transmitted to the machine.
Methods, systems, and computer-readable storage media for integrating skills into computer-executable applications using a low-code/no-code (LCNC) development platform.
The present disclosure involves systems, software, and computer implemented methods for an artificial intelligence work center for ERP data. One example method includes receiving scenario and model settings for an artificial intelligence model for a predictive scenario. A copy of the dataset is processed based on the settings to generate a prepared dataset that is provided with the settings to a predictive analytical library. A trained model and evaluation data for the trained model are received from the predictive analytical library. A request is received to generate a prediction for the predictive scenario for a target field for a record of the dataset. The record of the dataset is provided to the trained model, and a prediction for the target field for the record of the dataset is received from the model. The prediction is included for presentation in a user interface that displays information from the dataset.
Some embodiments are directed to obtaining computer network connection data of the client-side transaction entry program, the computer network connection data allowing a connection to be made with the client-side transaction entry program, receiving a begin transaction message from the client-side computer indicating a request to launch a transaction, the computer network connection data being obtained before receiving the begin transaction message, and setting up a connection between the transaction system and the client-side transaction entry program using the computer network connection data enabling the client-side transaction entry program and the transaction system to cooperate in performing the transaction.
Given a set of port stations, a collection of forwarding orders to/from the port stations, and a set of available rail rakes along with their current schedules, a rake plan that maps order containers to rakes is generated. The optimization model is constrained by rake availability, rake capacity (the number of containers that can be accommodated by a rake), and the indivisibility of containers. The scheduling of the rail rakes can be represented as a sparse 3-dimensional binary matrix. Each element of the matrix indicates whether a particular rail rake is assigned to carry a particular container from a particular order in the next trip. The rail rake scheduling data can be used to generate communications from a rail rake planning server to multiple client devices, each client device associated with a different port station.
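The sparse 3-dimensional binary matrix can be represented compactly as a set of (rake, order, container) keys, with presence of a key meaning a 1. This sketch adds the rake-capacity constraint; all identifiers are hypothetical and the full optimization model is not reproduced:

```python
def assign_container(plan, rake, order, container, capacity):
    """plan is a sparse 3-D binary matrix stored as a set of
    (rake, order, container) keys. An assignment succeeds only if the
    rake still has capacity; containers are indivisible, so each is
    either fully assigned to one rake or not at all."""
    load = sum(1 for (r, _, _) in plan if r == rake)
    if load >= capacity:
        return False
    plan.add((rake, order, container))
    return True

plan = set()
assign_container(plan, "RK01", "ORD7", "CONT-001", capacity=2)
```

Querying whether a given rake carries a given container in the next trip is then a constant-time set membership test.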
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
G06Q 10/06 - Resources, workflow, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
64.
MESSAGE QUERY SERVICE AND MESSAGE GENERATION FOR A SOCIAL NETWORK
A method includes receiving a message query from an entity identifier participating in a social network. The message query specifies one or more entities, one or more requirements, and one or more constraints. A set of message query parameters is generated based on the message query. A set of queries for a semantic graph of the social network is generated based on the set of message query parameters. The set of queries is applied to the semantic graph to obtain a set of query results. A message context of the entity identifier is determined based on the set of query results and the set of message query parameters. A set of messages from a message repository is determined based on the message context. The set of messages can be presented on a client computer associated with the entity identifier.
Various examples are directed to systems and methods for administering web management software. A system may access alert data describing a plurality of alerts generated by a cloud-implemented web management software and apply a set of problem code conditions to the alert data. Based on the applying of the set of problem code conditions to the alert data, the system may assign a first problem code to a first alert of the plurality of alerts. The system may determine a risk score for the web management software based at least in part on a problem state for the web management software, where the problem state comprises at least the first problem code. The system may determine that the risk score for the web management software matches a threshold condition and send a problem message to a user computing device associated with a user, the problem message describing a problem state of the web management software.
The present disclosure involves systems, software, and computer implemented methods for evaluating machine learning on remote datasets using confidentiality-preserving evaluation data. In response to determining that data of the remote customer dataset is of sufficient quality and quantity, feature data corresponding to a machine learning pipeline is generated. The remote customer dataset is divided into one or more data partitions, and for each partition, one or more baseline models and one or more machine learning models are trained using a machine learning library included in the remote customer database. Aggregate evaluation data is generated for each baseline model and each machine learning model that includes model debrief data and customer data statistics. In response to determining that the customer has enabled sharing of the aggregate evaluation data with a software provider who provided the remote customer database, the aggregate evaluation data is provided to the software provider.
The present disclosure involves systems, software, and computer implemented methods for data confidentiality-preserving machine learning on remote datasets. An example method includes receiving connection information for connecting to a remote customer database and storing the connection information in a machine learning runtime. Workload schedule information for allowable time windows for machine learning pipeline execution on remote customer data of the customer is received from the customer. A determination is made that an execution queue includes a machine learning pipeline during an allowed time window. The connection information is used to connect to the remote customer database during the allowed time window. Execution is triggered by the machine learning runtime of the machine learning pipeline on the remote customer database. Aggregate evaluation data corresponding to the execution of the machine learning pipeline on the remote customer database is received and provided to a user.
In one aspect, a method may include receiving a query associated with a plurality of data sources, wherein the query includes a first attribute; identifying that a query operator, which is associated with execution of the query and the first attribute, includes a first input from a first data source of the plurality of data sources and a second input from a second data source of the plurality of data sources; determining that the first attribute at the second data source corresponds to null; pruning, based on the determined null, the second input from the second data source to inhibit a select from the second data source; and in response to the pruning, performing the query operator by selecting, from the first data source, a column corresponding to the first attribute. Related systems, methods, and articles of manufacture are also disclosed.
Described herein are systems and methods for providing data transfer in a computer-implemented database from a database extension layer. A data server associated with a database receives a request to transfer data stored in a database extension layer of the database. Input data chunks are collected from the database extension layer until a configured row count limit is reached. Row positions are determined from the input data chunks. Value identifiers corresponding to the row positions are determined. Values corresponding to the value identifiers are retrieved. Output data is generated based on the values corresponding to the value identifiers.
Some embodiments may be associated with facilitating extensibility for an enterprise portal in a cloud computing environment. A computer processor of a multi-level extensibility framework server may provide to a user a graphical view of existing services of the enterprise portal using information from the business enterprise portal data store and a sample data model. The processor may also receive from the user extension information for at least one of the technical layers and, based on the received extension information, automatically generate and provide an intelligent extension proposal to the user. The processor may also display simulated results to the user based on the intelligent extension proposal and the sample data model. The processor may then receive from the user a confirmation of the intelligent extension proposal and automatically transfer extension fields, entities, and mapping to multiple technical layers of the enterprise portal.
Systems and methods include presentation of a subset of a result set of items received from a remote system, reception of a command to perform an operation on all items of the result set while presenting the subset, and determination, in response to the command, of whether a total number of items in the result set exceeds a threshold value. If it is determined that the total number of items in the result set exceeds the threshold value, a first request is transmitted to the remote system to perform the operation on all items of the result set, where the first request includes filter values associated with the result set. If it is determined that the total number of items in the result set does not exceed the threshold value, a second request is transmitted to the remote system to perform the operation on all items of the result set, where the second request includes an identifier of each item of the result set.
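The threshold decision above determines only the shape of the request sent to the remote system. A minimal sketch, with hypothetical field names for the two request variants:

```python
def build_bulk_request(operation, item_ids, filters, threshold):
    """Request that the remote system perform `operation` on the whole
    result set: above the threshold, send the filter values that
    produced the set; otherwise, enumerate the item identifiers."""
    if len(item_ids) > threshold:
        return {"op": operation, "filters": filters}
    return {"op": operation, "ids": list(item_ids)}
```

Sending filters for large result sets keeps the request small, while explicit identifiers for small sets pin the operation to exactly the items the user saw.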
G06F 16/248 - Presentation of query results
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range of values
G06F 16/25 - Integrating or interfacing systems involving database management systems
09 - Scientific and electric apparatus and instruments
35 - Advertising; Business affairs
42 - Scientific, technological and industrial services, research and design
Goods and Services
Computer programs; computer software; manuals in electronic form in connection with computer software, hardware and peripherals. Business consultation. Design, development, programming, customization, integration, implementation, maintenance, trouble-shooting, updating, and rental of computer programs and software; computer software research and engineering; computer software consulting; cloud computing services; software as a service; information technology services provided on an outsourcing basis.
73.
Table user-defined function for graph data processing
A method may include receiving a definition of a table user-defined function (TUDF) in a graph query language. The table user-defined function may be created based on the definition. For example, the creation of the table user-defined function may include checking and compiling the definition to generate executable code associated with the table user-defined function. Upon receiving a query including a relational query language statement invoking the table user-defined function, such as a structured query language select statement, the query may be executed on at least a portion of graph data stored in a database. The executing of the query may include calling the executable code to execute the table user-defined function included in the relational query language statement. Related systems and computer program products are also provided.
Computer-readable media, methods, and systems are disclosed for performing a method for partial validation of a tree-like hierarchy structure. A method includes selecting a first portion of the hierarchy structure to edit, updating a status associated with the first portion to a draft mode status, modifying the first portion to create an edited first portion, and executing a first validation process on the edited first portion to determine if the edited first portion is consistent with a plurality of rules of the hierarchy structure. If the edited first portion is inconsistent with at least one of the plurality of rules, the method includes displaying an error message and/or a warning message to a user on a user interface. If the edited first portion is consistent with the plurality of rules, the method includes updating a status associated with the edited first portion to an active mode status.
G06Q 10/06 - Resources, workflow, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
Systems and methods include reception of an instruction to perform a consistency check on compressed column data. In response to the instruction, a compression algorithm applied to uncompressed column data to generate the compressed column data is determined, one or more consistency checks associated with the compression algorithm are determined, wherein a first one or more consistency checks associated with a first compression algorithm are different from a second one or more consistency checks associated with a second compression algorithm, the one or more consistency checks are executed on the compressed column data, and, if the one or more consistency checks are not satisfied, a notification is transmitted to a user.
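The key point is that the checks are dispatched on the compression algorithm that produced the column. A hedged sketch with two invented algorithms and invented check functions, purely for illustration:

```python
def check_rle_runs(data):
    # Run-length encoded data: every (value, run_length) pair
    # must have a positive run length.
    return all(count > 0 for _, count in data)

def check_dict_ids(data):
    # Dictionary encoding: every value identifier must be a valid
    # index into the dictionary.
    dictionary, ids = data
    return all(0 <= i < len(dictionary) for i in ids)

# Different algorithms get different consistency checks.
CONSISTENCY_CHECKS = {"rle": [check_rle_runs],
                      "dictionary": [check_dict_ids]}

def check_column(algorithm, compressed):
    """Run the checks registered for the algorithm that compressed the
    column; any failure would trigger a notification to the user."""
    return all(check(compressed) for check in CONSISTENCY_CHECKS[algorithm])
```

The registry makes it straightforward to add a new compression scheme together with its own consistency checks.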
A computer implemented method can receive a parameterized query written in a declarative language. The parameterized query comprises a parameter which can be assigned different values. The method can perform a first compilation session of the parameterized query in which the parameter has no assigned value. Performing the first compilation session can generate an intermediate representation of the parameterized query. The intermediate representation describes a relational algebra expression to implement the parameterized query. The method can perform a second compilation session of the parameterized query in which the parameter has an assigned value. Performing the second compilation session reuses the intermediate representation of the parameterized query.
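A schematic sketch of the two-session flow, assuming a cache keyed by query text and a placeholder tuple standing in for the relational-algebra IR; class and attribute names are hypothetical:

```python
class ParameterizedCompiler:
    """Session 1 builds a parameter-free intermediate representation
    (IR); session 2 binds a concrete value and reuses the cached IR
    instead of recompiling."""
    def __init__(self):
        self.ir_cache = {}
        self.phase1_runs = 0  # counts full compilations, for illustration

    def _compile_ir(self, query):
        self.phase1_runs += 1
        # Stand-in for building a relational-algebra expression tree.
        return ("SELECT-WHERE", query)

    def execute(self, query, param):
        if query not in self.ir_cache:            # first session: no value bound
            self.ir_cache[query] = self._compile_ir(query)
        ir = self.ir_cache[query]                 # later sessions reuse the IR
        return (ir, param)
```

Running the same query with different parameter values then pays the compilation cost only once.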
The present disclosure provides more efficient techniques for removing a host from a multi-host database system. An instruction to remove a host system may be received. In response, a determination of whether the host system does or does not store any source tables is made based on a host-type identifier for the host system. This determination may not require obtaining landscape information for each of the hosts in the database system. If the host system stores replica tables and does not store source tables, those replica tables may be dropped based on the determination that the host system does not store any source tables. As such, in cases where table redistribution is not needed the landscape information is not obtained, thereby making the host removal process more efficient.
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
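The host-removal decision described above can be sketched as below. The host-type values, action names, and `fetch_landscape` callback are hypothetical; the point illustrated is that landscape information for all hosts is only fetched when redistribution is actually needed.

```python
# Sketch: host removal that avoids fetching landscape information
# when the host-type identifier shows the host holds no source tables.

def remove_host(host, fetch_landscape):
    """Return the actions taken for removing the given host.

    fetch_landscape (expensive: collects info for every host) is only
    called when the host stores source tables and redistribution is
    therefore required.
    """
    if host["host_type"] == "replica-only":
        # Replica tables can be dropped without landscape information.
        return ["drop-replica-tables", "remove-host"]
    landscape = fetch_landscape()
    return ["redistribute:" + ",".join(landscape), "remove-host"]
```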
78.
DYNAMICALLY MIGRATING SERVICES BASED ON SIMILARITY
Techniques for dynamically migrating services based on similarity are disclosed. In some embodiments, a computer system may, for each online service in a plurality of online services of a source cloud environment, compute a corresponding edit distance value based on a stream of transaction log data of the online service. The edit distance value may comprise a minimum number of edit operations required to change a first log entry in the stream of transaction log data to a second log entry in the stream of transaction log data. Next, the computer system may determine a migration plan based on a measure of similarity between the edit distance values of the online services, where the migration plan specifies a distribution of the online services amongst a plurality of destination cloud environments, and then migrate the online services from the source cloud environment to the destination cloud environments using the migration plan.
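The edit distance the abstract relies on is the classic Levenshtein distance: the minimum number of insertions, deletions, and substitutions needed to turn one log entry into another. A standard dynamic-programming sketch, with a simplified per-service helper whose two-entry stream is an illustrative assumption:

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b, computed with a
    rolling one-row dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def service_edit_value(log_stream):
    """Edit distance between two entries of a service's transaction log
    stream (simplified: a real system would aggregate over the stream)."""
    return edit_distance(log_stream[0], log_stream[1])
```

Services whose edit values are similar would then be grouped onto the same destination cloud environment by the migration plan.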
An in-memory computing system for conducting on-line transaction processing and on-line analytical processing includes system tables in main memory to store runtime information. A statistics service can access the runtime information using script procedures stored in the main memory to collect monitoring data, generate historical data, and other system performance metrics while maintaining the runtime data and generated data in the main memory.
A method for executing a query may include generating a partition value identifier for a partitioned table. The partitioned table may include a main fragment including a main dictionary storing a first value and a main value identifier corresponding to the first value and a delta fragment including a delta dictionary storing a second value and a delta value identifier corresponding to the second value. The partition value identifier may be set based at least in part on the first value and the second value. The generated partition value identifier and a corresponding one of the main value identifier and the delta value identifier may be maintained as part of a mapping. A query to group data stored in the partitioned table may be received. The query may be executed by at least using the mapping.
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
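The mapping from fragment-local value identifiers to partition value identifiers described above can be sketched as follows; the dictionary representation (a list indexed by local value ID) is an illustrative assumption.

```python
# Sketch: unify a main dictionary and a delta dictionary under
# partition-level value identifiers, keeping a mapping from each
# (fragment, local value ID) pair to its partition value ID.

def build_partition_mapping(main_dict, delta_dict):
    mapping = {}        # (fragment, local value ID) -> partition value ID
    partition_ids = {}  # value -> partition value ID
    for local_id, value in enumerate(main_dict):
        pid = partition_ids.setdefault(value, len(partition_ids))
        mapping[("main", local_id)] = pid
    for local_id, value in enumerate(delta_dict):
        pid = partition_ids.setdefault(value, len(partition_ids))
        mapping[("delta", local_id)] = pid
    return mapping
```

A grouping query can then compare partition value IDs directly, without materializing the underlying values from either fragment.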
81.
Machine-learned models for predicting database application table growth factor
In an example embodiment, machine learning models are trained and used to predict a growth classification of time fields and category fields of application tables of Enterprise Resource Planning (ERP) software databases. These predictions can then be used to forecast future technological needs or the future table size more precisely.
The present disclosure relates to computer-implemented methods, software, and systems for identifying trends in the behavior of execution of services in a cloud platform environment and supporting alert triggering for expected outages prior to their occurrence. Metrics data for performance of the cloud platform is continuously obtained. Based on evaluation of the obtained metrics data, the performance of the cloud platform is tracked over time to identify a trend in a performance of a first service on the cloud platform. The identified trend in the performance is compared with a current performance rate of the first service. Based on an evaluated difference between the current performance rate and the identified trend, the difference is classified into an issue-reporting level associated with a prediction for an outage at the first service. A notification for the trend is reported.
H04L 41/0631 - Management of faults, events, alarms or notifications using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy or temporal or tree analysis
H04L 41/149 - Network analysis or design for prediction of maintenance
H04L 43/0876 - Network utilisation, e.g. volume of load or congestion level
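The trend-versus-current-rate classification above can be sketched as follows. The least-squares trend fit is a standard technique chosen here for illustration, and the deviation thresholds for the issue-reporting levels are assumptions, not values from the disclosure.

```python
# Sketch: fit a linear trend to performance samples, extrapolate the
# expected next value, and classify the gap to the current rate.

def linear_trend(samples):
    """Least-squares expected next value for evenly spaced samples."""
    n = len(samples)
    mean_x, mean_y = (n - 1) / 2, sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in enumerate(samples)) / \
            sum((x - mean_x) ** 2 for x in range(n))
    return mean_y + slope * (n - mean_x)  # extrapolate one step ahead

def issue_level(expected, current):
    """Map the relative deviation from the trend to a reporting level
    (thresholds are illustrative)."""
    deviation = abs(current - expected) / max(abs(expected), 1e-9)
    if deviation < 0.10:
        return "ok"
    if deviation < 0.25:
        return "warning"
    return "outage-predicted"
```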
A system of configuring a database which is distributed across multiple nodes according to a table distribution, e.g., by storing respective tables of the database at respective nodes. A graph partitioning procedure is applied to a graph of the distributed database, with vertices representing tables and edges representing cross-table operations. A distribution of the tables across the nodes is determined based on the partitioning. The storage of the tables is configured according to the determined distribution.
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
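The graph-partitioning approach to table distribution described above can be sketched with a simple greedy pass. This stands in for a real graph partitioner (e.g. a METIS-style algorithm); the affinity and tie-breaking heuristics are assumptions for illustration only.

```python
# Sketch: distribute tables (vertices) across nodes using weighted
# edges that represent cross-table operation counts.

def partition_tables(edges, tables, num_nodes):
    """Greedily assign each table to the node holding the neighbors it
    interacts with most, breaking ties toward the least-loaded node."""
    assignment = {}
    loads = [0] * num_nodes
    for table in tables:
        scores = [0] * num_nodes
        for (a, b), weight in edges.items():
            other = b if a == table else a if b == table else None
            if other in assignment:
                scores[assignment[other]] += weight
        # Highest cross-table affinity first, then lightest load.
        node = max(range(num_nodes), key=lambda n: (scores[n], -loads[n]))
        assignment[table] = node
        loads[node] += 1
    return assignment
```

Tables joined by heavy edges end up co-located, reducing cross-node operations.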
The present disclosure involves systems, software, and computer implemented methods for performing human capital management. One example method includes receiving a set of experience data of a user as unstructured data, converting the unstructured experience data into structured experience data of the user, receiving a set of personality data of the user as unstructured data, converting the unstructured personality data into structured personality data of the user, receiving a set of motivational and preferences data of the user as unstructured data, and converting the unstructured motivational and preferences data into structured motivational and preferences data of the user. The structured experience data, the structured personality data, and the structured motivational and preferences data are combined into a user profile, which is stored in a database. An opportunity is recommended to the user based on the user profile.
Intelligent mapping from created item information to sustainability reference content from a variety of sources can be implemented to facilitate created item footprint management and other sustainability applications. The difficult task of finding appropriate emission factors across a portfolio can be automated. Assisted search can be implemented using enhanced search techniques. Fallback mappings can be implemented to accommodate different levels of granularity during search. A machine learning model can be trained based on a variety of input data, including confirmed mappings, mapping history, and rules. The process of mapping to emission datasets can thus be simplified, enabling footprint calculations to proceed.
Systems, methods, and computer media for determining compatible users through machine learning are provided herein. Previous interactions between some users in a group can be used to determine a first set of user-to-user compatibility scores. Both the first set of compatibility scores and attributes for the users in the group can be provided as inputs to a machine learning model that can be used to determine a second set of user-to-user compatibility scores for user pairs who do not have an interaction history. Along with input constraints, the first and second sets of user-to-user compatibility scores can be used to select compatible user groups.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
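The two sets of user-to-user compatibility scores described above (history-based for pairs that have interacted, model-predicted for the rest) can be sketched as follows. The attribute-overlap "model" is a deliberately simple stand-in for the trained machine learning model, purely for illustration.

```python
from itertools import combinations

def history_scores(interactions):
    """First score set: mean interaction outcome per user pair.
    interactions maps (u, v) -> list of outcomes in [0, 1]."""
    return {pair: sum(o) / len(o) for pair, o in interactions.items()}

def predicted_scores(users, attributes, known_pairs):
    """Second score set, only for pairs without interaction history:
    Jaccard similarity of user attributes (a toy stand-in model)."""
    scores = {}
    for u, v in combinations(users, 2):
        if (u, v) not in known_pairs and (v, u) not in known_pairs:
            union = attributes[u] | attributes[v]
            shared = attributes[u] & attributes[v]
            scores[(u, v)] = len(shared) / len(union) if union else 0.0
    return scores
```

Both score sets, plus input constraints, would then feed group selection.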
According to some embodiments, systems and methods are provided, including a repository storing at least an Application Programming Interface (API) mapping table; a memory storing processor-executable program code; and a processing unit to execute the processor-executable program code to: receive an input of one or more legacy API identification elements for a legacy API; determine whether the received legacy API identification elements correspond to a standard legacy API; in a case the received legacy API identification elements do correspond to a standard legacy API, determine whether a corresponding updated API is available; in a case the corresponding updated API is available, determine whether the legacy API includes at least one extension; and in a case the legacy API does include at least one extension, generate an updated corresponding API extension, and transmit the corresponding updated API and the updated corresponding API extension to the user. Numerous other aspects are provided.
Systems and methods include reception of an indication of a first event associated with a first object instance. In response to the indication of the first event, a first process chain comprising a first process associated with an object instance of a first meta domain model object type, a second process associated with an object instance of a second meta domain model object type, and a first process step adapter to map a response to a request are determined, the first process is executed based on a request associated with an object instance of the first meta domain model object type to generate a first response associated with an object instance of the first meta domain model object type, the first process step adapter is executed to map the first response associated with an object instance of the first meta domain model object type to a first request associated with an object instance of the second meta domain model object type, and the second process is executed based on the first request associated with an object instance of the second meta domain model object type to generate a second response associated with an object instance of the second meta domain model object type.
09 - Scientific and electrical apparatus and instruments
Goods and services
Artificial intelligence software; Artificial intelligence software for analysis; Artificial intelligence and machine learning software; Interactive software based on artificial intelligence; Software for the integration of artificial intelligence and machine learning in the field of Big Data.
Disclosed herein are various embodiments for performing a delta merge with location data. An embodiment operates by receiving a command to merge a delta storage with an original main storage. A coordinate system corresponding to a plurality of data entries of data in the delta storage is identified. A coordinate system specification, corresponding to the identified coordinate system, is added to a metadata of a new version of the main storage. A merge operation is performed between the delta storage and the original main storage, in which the plurality of data entries of the delta storage are copied to a container portion of the new version of the main storage, separate from the metadata. The plurality of data entries of the delta storage are deleted and the original main storage is replaced with the new version of the main storage.
The present disclosure involves systems, software, and computer implemented methods for intelligent document processing in enterprise resource planning. One example method includes automatically determining that a document file is ready to be processed in an ERP (Enterprise Resource Planning) system. The document file is automatically processed and a request is sent to the ERP system to automatically create or update ERP data in the ERP system based on the document file. Status information is received from the ERP system regarding the request to create or update ERP data in the ERP system. The status information received from the ERP system is logged and information indicating that the document file has been processed in the ERP system is automatically recorded.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
Various embodiments for a triple integration and querying system are described herein. An embodiment operates by identifying a plurality of triples corresponding to a knowledge graph, and generating a table in a database into which to import the set of triples. The table includes a subject column, a predicate column, and multiple object columns across different datatypes. Values from the triples of the knowledge graph are loaded into the table and a query is executed on the table.
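The triple-import layout above (one subject column, one predicate column, multiple object columns across different datatypes) can be sketched as follows; the column names and the toy query helper are illustrative assumptions.

```python
# Sketch: load knowledge-graph triples into rows with per-datatype
# object columns; each object lands in the column matching its type.

OBJECT_COLUMNS = {str: "obj_string", int: "obj_int", float: "obj_double"}

def load_triples(triples):
    rows = []
    for subject, predicate, obj in triples:
        row = {"subject": subject, "predicate": predicate,
               "obj_string": None, "obj_int": None, "obj_double": None}
        row[OBJECT_COLUMNS[type(obj)]] = obj
        rows.append(row)
    return rows

def query(rows, predicate):
    """Toy query: the non-null object value of each row matching the
    given predicate, regardless of which object column holds it."""
    return [next(v for c, v in r.items()
                 if c.startswith("obj_") and v is not None)
            for r in rows if r["predicate"] == predicate]
```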
Various embodiments for a triple integration and querying system with dictionary compression are described herein. An embodiment operates by identifying a table of a database with four or more columns with triple formatted data including one subject column, one predicate column, and two or more object columns. It is determined that a master dictionary is to be generated for both the subject column and the predicate column based on an identical datatype being used for both columns. A subject data dictionary and a predicate data dictionary are generated. A unique value is assigned the same unique identifier in both the subject data dictionary and the predicate data dictionary. A master dictionary including the unique values from both the subject data dictionary and the predicate data dictionary is generated. Values in the subject column and the predicate column are replaced based on the unique values from the master dictionary.
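The shared master dictionary described above can be sketched as below: because the subject and predicate columns use the same datatype, a value appearing in both columns receives a single identifier. The function names are illustrative.

```python
# Sketch: dictionary compression with one master dictionary shared by
# the subject and predicate columns of a triple table.

def build_master_dictionary(subjects, predicates):
    """Assign one identifier per distinct value across both columns."""
    master = {}
    for value in list(subjects) + list(predicates):
        master.setdefault(value, len(master))
    return master

def compress_columns(subjects, predicates, master):
    """Replace the values in both columns with master-dictionary IDs."""
    return [master[v] for v in subjects], [master[v] for v in predicates]
```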
Mechanisms are disclosed for modelling and quantitatively characterizing emissions inflows and outflows. Scoping inputs are received, including a physical process that defines the scope of emission-producing physical inputs. Modeling inputs are received, including footprints associated with physical manufacturing inputs. The modeled energy flows may be provided via a graphical modeling user interface and support allocation rule definitions for distributing emissions footprint definitions. An estimated emission flow is calculated based on combined energy flows. The material flows may be derived from aggregated transaction data associated with emission-producing physical inputs. The calculated emission flow may be based on a calculated emission footprint at stages along a production process. Analytics user interfaces associated with the calculated emission flows may provide insight into the drivers producing the highest emissions along the production chain in connection with a technical report.
G06F 30/28 - Design optimisation, verification or simulation using fluid dynamics, e.g. Navier-Stokes equations or computational fluid dynamics [CFD]
Disclosed herein are various embodiments of a location data processing system. An embodiment operates by configuring a column of a table to store data across a plurality of different coordinate systems. The data to be stored in the configured column is received. The received data is divided into a plurality of fragments, including a first fragment comprising a plurality of data entries. A first data entry in the first fragment includes a coordinate specification including metadata indicating how to evaluate corresponding data of a first coordinate system represented by the first data entry. A query for data from the first fragment is received. The plurality of data entries of the first fragment are evaluated based on the coordinate specification to identify data that satisfies the query. The data is returned responsive to the query.
A61M 5/142 - Pressure infusion, e.g. using pumps
A61M 5/32 - Syringes; Details; Needle details concerning the connection of the needle to the syringe or hub; Accessories for bringing the needle into, or holding the needle on, the body; Devices for protection of needles
Techniques for implementing secure tenant-based chaos experiments using certificates are disclosed. In some embodiments, a computer system may receive an indication of a scope of execution for a chaos experiment from a tenant of a multitenancy environment, identify a public key from a certificate chain based on the received indication of the scope of execution, and transmit the identified public key to the tenant. Next, the computer system may then receive an encrypted version of the chaos experiment from the tenant, where the encrypted version of the chaos experiment has been encrypted with the identified public key, and then transmit the encrypted version of the chaos experiment to one or more software agents.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
98.
INDEPENDENTLY LOADING RELATED DATA INTO DATA STORAGE
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program receives a set of data for a record in a first table. The set of data comprises a set of values for a set of attributes. The first table comprises a first set of columns. A first column in the first set of columns in the first table is configured to refer to a second column in a second set of columns in a second table. The program further generates the record in the first table. The program also generates a value for the first column in the first set of columns in the first table based on a subset of the set of values for a subset of the set of attributes. The program further stores the value in the first column in the first set of columns of the record.
Some embodiments provide a program that receives a set of data for a first record in a first table. The set of data comprises a set of values for a set of attributes. In a data loading process configured to load a subset of the set of data into a subset of a first set of columns in the first table, the program determines that a first column in a first set of columns does not belong in the subset of the first set of columns. The program generates the first record in the first table. The program generates a value for the first column in the first set of columns that refers to a second record in a second table configured to represent a defined type of record. The program stores the value in the first column in the first set of columns of the first record.
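The independent-load pattern in the two abstracts above (generating the value of a column that refers to another table from a subset of the record's attributes) can be sketched as follows. The in-memory table representation and the `ref_id` column name are illustrative assumptions.

```python
# Sketch: load a record into a first table while generating the value
# of a column that refers to a row in a second table, derived from a
# subset of the record's attributes.

def load_record(first_table, second_table, data, ref_attrs):
    """Insert a record and fill its reference column with the key of
    the matching row in second_table."""
    ref_key = tuple(data[a] for a in ref_attrs)
    # Create the referenced row on demand, so related data can be
    # loaded independently and in any order.
    second_table.setdefault(ref_key, len(second_table))
    record = dict(data)
    record["ref_id"] = second_table[ref_key]
    first_table.append(record)
    return record
```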
Some embodiments provide a non-transitory machine-readable medium that stores a program. The program receives an image of a document, the document comprising a set of text. The program further provides the set of text to a machine learning model configured to determine, based on the set of text, a plurality of probabilities for a plurality of defined types of documents. Based on the plurality of probabilities for the plurality of defined types of documents, the program also determines a type of the document from the plurality of defined types of documents.
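The final determination step above (choosing a document type from per-type probabilities) reduces to an argmax over the model's output. The keyword-overlap "model" below is a toy stand-in for the trained classifier, included only to make the selection step concrete; the keyword sets and type names are assumptions.

```python
# Sketch: score a document's text against defined document types,
# normalize to probabilities, and pick the highest-probability type.

KEYWORDS = {
    "invoice": {"invoice", "total", "due"},
    "resume": {"experience", "skills", "education"},
}

def classify(text):
    words = set(text.lower().split())
    raw = {doc_type: len(words & kws) for doc_type, kws in KEYWORDS.items()}
    total = sum(raw.values()) or 1
    probabilities = {t: s / total for t, s in raw.items()}
    # Determine the document type from the plurality of defined types.
    return max(probabilities, key=probabilities.get), probabilities
```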