Systems and methods for mapping interactive UI (user interface) elements to an RPA (robotic process automation) object repository are provided. User input selecting a window of an application displayed on a display device is received. In response to receiving the user input selecting the window of the application, interactive UI elements in the window of the application are automatically identified. User input selecting one or more of the identified interactive UI elements in the window of the application is received. The one or more selected interactive UI elements are stored in an RPA object repository of an RPA system.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 9/451 - Execution arrangements for user interfaces
2.
SYSTEM AND COMPUTER-IMPLEMENTED METHOD FOR SEAMLESS CONSUMPTION OF AUTOMATIONS
A system and a method for accessing at least one automation from an automation store are provided. The method comprises receiving a user input indicative of selection of at least one automation for accessing from a plurality of automations displayed in the automation store, and automatically uploading, in response to receiving the user input, the selected automation to a personal workspace of the user from the automation store. The automations are accessed via one or more Application Programming Interface (API) calls directed to an automation cloud server. Further, the method comprises generating a notification indicative of upload of the selected automation for accessing the automation. The uploaded automation is displayed in a software robot assistant associated with the user. Furthermore, the method comprises displaying the generated notification in an application interface associated with the automation store and displaying the selected automation in the personal workspace in the application interface.
Disclosed herein is a system. The system includes a memory and a processor. The memory stores processor executable instructions for a migration engine. The processor is coupled to the memory. The processor executes the migration engine to cause the system to implement an export operation for an on-premises system to mine for data corresponding to automations or user-specific arrangements. The processor also executes the migration engine to cause the system to implement an import operation of the data to a cloud environment to replicate the automations or user-specific arrangements.
Semantic matching between a source screen or source data and a target screen using semantic artificial intelligence (AI) for robotic process automation (RPA) workflows is disclosed. The source data or source screen and the target screen are selected on a matching interface, semantic matching is performed between the source data/screen and the target screen using an artificial intelligence / machine learning (AI/ML) model, and matching graphical elements and unmatched graphical elements are highlighted, allowing the developer to see which graphical elements match and which do not. The matching interface may also provide a confidence score of the individual matches, provide an overall mapping score, and allow the developer to hide/unhide the matched/unmatched graphical elements. Activities of an RPA workflow may be automatically created based on the semantic mapping that can be executed to perform the automation.
Systems and methods for performing process mining are provided. Data from one or more source systems is extracted by a data connector of a process app. The extracted data is transformed into a normalized data model by transforms of the process app. One or more process mining algorithms of the process app are applied to the normalized data. Results of the one or more process mining algorithms are presented to a user via a user interface of the process app.
A digital assistant may execute one or more tasks using robotic process automation (RPA). The digital assistant (or robot) assigns a workflow to a robot to monitor for one or more triggers. The one or more triggers comprise one or more events causing a robot to perform an automated task with or without user involvement. The robot also identifies the one or more triggers during the monitoring of the one or more triggers, and loads a workflow associated with the one or more identified triggers. The robot further executes the loaded workflow to perform one or more tasks associated with the one or more triggers.
Systems and methods for configuring an RPA (robotic process automation) platform to perform a candidate process automation are provided. Discovery data relating to a candidate process automation is generated. RPA platform design components for configuring an RPA platform to perform the candidate process automation are generated based on the discovery data. The RPA platform is configured based on the RPA platform design components.
Systems and methods for configuring an RPA (robotic process automation) platform to perform a candidate process automation are provided. Discovery data relating to a candidate process automation is generated. RPA platform design components for configuring an RPA platform to perform the candidate process automation are generated based on the discovery data. The RPA platform design components are presented to a user via a user interface.
Controlling and provisioning a robot of a virtual machine (VM) includes transmitting a connection request between a first service installed in a virtual machine and a second service. The robot is associated with at least one process running on the virtual machine. The virtual machine is authenticated based on a token associated with the second service and the virtual machine. A connection is established between the first service and the second service. A command associated with the controlling of the robot is transmitted from the second service to the first service based on the authentication of the virtual machine. The command is associated with a corresponding command identifier for identifying a type of the command. The command is then executed for controlling the robot.
A system and a method for performing a test of an application using an automation bot are provided. The method comprises accessing the application to be tested. The method comprises executing the test of the application using the automation bot. The automation bot is configured to interact with one or more other applications. The one or more other applications are different from the application. The method comprises determining one or more test results of the application based on the execution of the test. Further, the method comprises generating a notification indicative of the determined one or more test results.
A system and a method for verification of execution of an activity are provided. The method comprises receiving a user input indicative of enablement of the verification, and displaying, in response to the reception of the user input, a target element comprising a menu for selecting an edit action. The method further comprises receiving, in response to the selection of the edit action, a verification element, and determining a status of the activity, wherein the status of the activity comprises either of successful execution of the activity or non-successful execution of the activity. Further, the method comprises generating a verification response based on the status of the activity and the verification element.
Systems and methods for generating a process tree of a process are provided. An event log of execution of a process is received. User constraints on one or more activities of the process are received from a user. A process tree is generated from the event log based on the user constraints. The process tree is output.
Systems and methods for visually representing a process graph are provided. A process graph representing execution of a process is received. One or more gateway nodes in the process graph are folded into their from-nodes based on a number of incoming edges and a number of outgoing edges of the one or more gateway nodes. The process graph according to the folded one or more gateway nodes is output.
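The folding step above can be sketched in a few lines. This is a hypothetical illustration only: the abstract does not specify the graph data model or the exact in/out-edge criterion, so an edge-set representation and a single-incoming-edge rule are assumed here.

```python
# Hypothetical sketch: fold a gateway node with exactly one incoming edge
# into its single predecessor ("from-node"), rewiring the gateway's outgoing
# edges so they originate from that predecessor instead.
def fold_gateways(edges, gateways):
    """edges: set of (src, dst) pairs; gateways: set of gateway node names."""
    edges = set(edges)
    for g in gateways:
        incoming = [(s, d) for (s, d) in edges if d == g]
        outgoing = [(s, d) for (s, d) in edges if s == g]
        if len(incoming) == 1:  # simplified foldability criterion
            from_node = incoming[0][0]
            edges -= set(incoming) | set(outgoing)
            edges |= {(from_node, d) for (_, d) in outgoing}
    return edges

folded = fold_gateways({("A", "X"), ("X", "B"), ("X", "C")}, {"X"})
# gateway X is absorbed into its from-node A
```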
G06F 40/106 - Display of layout of documents; Previewing
G06F 40/183 - Tabulation, i.e. one-dimensional positioning
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 9/451 - Execution arrangements for user interfaces
Some embodiments address unique challenges of provisioning RPA software to airgapped hosts, and in particular, provisioning RPA machine learning components and training corpora of substantial size, and provisioning to multiple airgapped hosts having distinct hardware and/or software specifications. To reduce costs associated with data traffic and manipulation, some embodiments bundle together multiple RPA components and/or training corpora into an aggregate package comprising a deduplicated collection of software libraries. Individual RPA components are then automatically reconstructed from the aggregate package and distributed to airgapped hosts.
Systems and methods for operating an RPA (robotic process automation) services delivery platform for implementing a plurality of RPA services on premises of a customer are provided. An installer for installing a plurality of RPA services on one or more computing systems located on premises of a customer is generated using the RPA services delivery platform. One or more of the plurality of RPA services installed on the one or more computing systems using the installer are maintained using the RPA services delivery platform.
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
Disclosed herein is a computing system. The computing system includes a memory and a processor. The memory stores processor executable instructions for a workflow recommendation assistant engine. The processor is coupled to the memory. The processor executes the workflow recommendation assistant engine to cause the computing system to analyze images of a user interface corresponding to user activity, execute a pattern matching of the images with respect to existing automations, and provide a prompt indicating that an existing automation matches the user activity.
Web-based robotic process automation (RPA) designer systems that allow RPA developers to design and implement web serverless automations, user interface (UI) automations, and other automations are disclosed. Such web-based RPA designer systems may allow a developer to sign in through the cloud and obtain a list of template projects, developer-designed projects, services, activities, etc. Thus, RPA development may be centralized and cloud-based, reducing the local processing and memory requirements on a user's computing system and centralizing RPA designer functionality, enabling better compliance. Automations generated by the web-based RPA designer systems may be deployed and executed in virtual machines (VMs), containers, or operating system sessions.
In some embodiments, a robotic process automation (RPA) robot is configured to search for a target element within a first part of a document currently exposed within a user interface. When the search fails, the robot may automatically actuate a scroll control of the respective UI to cause it to bring another part of the respective document into view. The robot may then continue searching for the RPA target within the newly revealed part of the document. In some embodiments, the robot automatically determines whether the respective document is scrollable, and identifies the scroll control according to a type of target application (e.g., spreadsheet vs. web browser).
A method and/or apparatus for creating and/or editing a machine pool with bring your own machine (BYOM) includes creating and/or editing a machine pool with a static list of machines. A user input machine list and an existing machine list are retrieved, and the user input machine list and existing machine list are compared to identify one or more changes between the user input machine list and existing machine list. Next, a new machine specification is created when the one or more changes between the user input machine list and existing machine list are identified. The one or more machines are then moved to the new machine specification.
Systems and methods for allocating computing environments for completing an RPA (robotic process automation) workload are provided. A request for completing an RPA workload is received. A number of computing environments to allocate for completing the RPA workload is calculated based on a selected one of a plurality of RPA autoscaling strategies. The calculated number of computing environments is allocated for allocating one or more RPA robots to complete the RPA workload. The computing environments may be virtual machines.
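As one hedged illustration of the calculation step, consider a fixed-capacity autoscaling strategy. The actual strategies are not specified in the abstract; the ceiling-division rule, the per-environment robot capacity, and the pool cap below are all assumptions.

```python
import math

# Illustrative only: one possible autoscaling strategy that computes how many
# computing environments (e.g., virtual machines) to allocate for a workload.
def environments_needed(pending_jobs: int, robots_per_environment: int,
                        max_environments: int) -> int:
    if pending_jobs <= 0:
        return 0
    # Ceiling division: enough environments so every pending job has a robot.
    needed = math.ceil(pending_jobs / robots_per_environment)
    return min(needed, max_environments)  # cap at the allowed pool size

environments_needed(10, 4, 5)  # 10 jobs at 4 robots each -> 3 environments
```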
Disclosed herein is a method implemented by a task mining engine. The task mining engine is stored as processor executable code on a memory. The processor executable code is executed by a processor that is communicatively coupled to the memory. The method includes receiving recorded tasks identifying user activity with respect to a computing environment and clustering the recorded user tasks into steps by processing and scoring each recorded user task. The method also includes extracting step sequences that identify similar combinations or repeated combinations of the steps to mimic the user activity.
Disclosed herein is a computing system that includes a memory and a processor coupled to the memory. The memory storing processor executable instructions for an interface engine that integrates robotic processes into a graphic user interface of the computing system. The processor executes the interface engine to cause the computing system to receive inputs via a menu of the graphic user interface and to automatically determine the robotic processes for display in response to the inputs. The interface engine further generates a list including selectable links corresponding to the robotic processes and displays the list in association with the menu.
A system for managing one or more robots is provided. The system is configured to resolve issues or faults that lead to failure of execution of one or more automation processes executed by the one or more robots. The system is configured to receive information of an issue associated with at least one robot of the one or more robots and further configured to obtain job log data, associated with the at least one robot, for the issue. The system is further configured to determine, using a trained machine learning model, a corrective action and its associated confidence score for resolving the received issue, based on the job log data and an analysis performed by the trained machine learning model. Further, the system performs the corrective action based on the confidence score and the analysis, for managing the one or more robots.
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control (DNC), flexible manufacturing systems (FMS), integrated manufacturing systems (IMS), computer integrated manufacturing (CIM)
Systems and methods for generating an enterprise process graph are provided. Sets of process data relating to an implementation of RPA (robotic process automation) acquired using a plurality of discovery techniques are received. An enterprise process graph representing the implementation of RPA is generated based on the received sets of process data.
G05B 19/408 - Numerical control (NC), i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by data handling or data format, e.g. reading, buffering or conversion of data
G06F 16/901 - Indexing; Data structures therefor; Storage structures
A system and a computer-implemented method for validating label data include receiving the label data and segmenting it into one or more parts using a first machine learning model. Further, a first plurality of attributes, including text and images, is extracted from the segmented label data. The method further includes receiving ground truth data associated with the label data and extracting a second plurality of attributes from the ground truth data. The first and second pluralities of attributes are then compared using a second machine learning model, and the results of the comparison are displayed on a three-pane user interface. Further, the label data is validated based on the displayed results.
Systems and methods for generating a process tree of a process are provided. An event log of the process is received. It is determined whether a base case applies to the event log and, in response to determining that the base case applies to the event log, one or more nodes are added to the process tree. In response to determining that the base case does not apply to the event log, the event log is split into sub-event logs and one or more nodes are added to the process tree. The steps of determining whether a base case applies and splitting the event log are repeatedly performed for each respective sub-event log using the respective sub-event log as the event log until it is determined that the base case applies to the event log. The process tree is output. The process may be a robotic process automation process.
Systems and methods for splitting an event log into sub-event logs are provided. The event log of a process is received. An activity relation score for a parallel relationship operator is calculated for each respective pair of activities of a plurality of pairs of activities in the event log based on 1) a frequency of occurrence of a first activity of the respective pair of activities between occurrences of a second activity of the respective pair of activities and 2) a frequency of occurrence of the second activity between occurrences of the first activity. A cut location in the event log is determined based on the activity relation scores. The event log is split into the sub-event logs based on the cut location.
The present system and method relate generally to the field of robotic process automation, particularly to a form data extractor for document processing. The system and method relate to a form extractor for document processing using RPA workflows that can be easily configured for different document types. The form extractor includes a set of templates for identifying the document type (classification) and extracting data from the documents. The templates can be configured, e.g., by the user, by defining the fields to be extracted and the position of each field on the document. The form extractor is resilient to changes in the position of the template on a page, as well as to scan rotation, size, quality, skew angle variations and file formats, thus allowing RPA processes to extract data from documents that need ingestion, independent of how they are created.
Systems and methods for filtering a process graph are provided. Paths in a process graph representing execution of a process are identified. A measure of importance is calculated for each of the identified paths. The identified paths are sorted based on the calculated measures of importance. The process graph is filtered according to a level of complexity based on the sorted identified paths. The filtered process graph is output.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
33.
SECURITY AUTOMATION USING ROBOTIC PROCESS AUTOMATION
Security automation, such as penetration testing or security hardening, is performed using robotic process automation (RPA) by directly connecting one or more robots into an operating system of a platform. The one or more robots execute a workflow to simulate the penetration testing of the operating system to identify malicious activity or vulnerable configurations within the operating system. The one or more robots also generate a report for the user identifying the malicious activity, misconfigurations or vulnerabilities within the environment.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 21/55 - Detecting local intrusion or implementing counter-measures
G06F 21/51 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems at application loading time, e.g. accepting, rejecting, starting or inhibiting executable software based on integrity or source reliability
34.
SUPPLEMENTING ARTIFICIAL INTELLIGENCE (AI) / MACHINE LEARNING (ML) MODELS VIA ACTION CENTER, AI/ML MODEL RETRAINING HARDWARE CONTROL, AND AI/ML MODEL SETTINGS MANAGEMENT
Supplementing artificial intelligence (AI) / machine learning (ML) models via an action center, providing AI/ML model retraining hardware control, and providing AI/ML model settings management are disclosed. AI/ML models may be deployed on hosting infrastructure where the AI/ML models can be called by robotic process automation (RPA) robots. When the performance of an AI/ML model falls below a threshold, the result of the AI/ML model prediction and other data is sent to an action center where a human reviews the data using a suitable application and approves the prediction or provides a correction if the prediction is wrong. This action center-approved result is then sent to the RPA robot to be used instead of the prediction from the AI/ML model.
Systems and methods for performing process mining on a multi-instance process comprising one or more multi-instance subprocesses are provided. An event log of the multi-instance process is divided into a main log and one or more sublogs by collapsing events of each of the one or more multi-instance subprocesses into a single activity. A process graph is generated for the main log and for each of the one or more sublogs. The generated process graphs are combined into a combined process graph. The combined process graph is output.
A method is disclosed. The method is implemented by a robot engine that is stored as program code on a memory of a system. The program code is executed by a processor of the system, which enables robotic process automations of the robot engine. The processor is communicatively coupled to the memory within the system. The method includes initiating a guided operation of a platform presented by the system and monitoring the guided operation to observe an interaction with the platform or to receive a direct input by the robot engine. The method also includes executing a backend operation with respect to the interaction or the direct input.
Robotic process automation (RPA) architectures and processes for hosting, monitoring, and retraining machine learning (ML) models are disclosed. Retraining is an important part of the ML model lifecycle. The retraining may depend on the type of the ML model and the data on which the ML model will be trained. A secure storage layer may be used to store data from RPA robots for retraining. This retraining may be performed automatically, remotely, and without user involvement.
Automatic anchor determination for target graphical element identification in user interface (UI) automation is disclosed. A context-based mechanism assists in discriminating between duplicate target UI element candidates. More specifically, additional anchors may be determined and automatically added for a target UI element that provide context and are visible in an area surrounding the target. During design time, a target UI element may be indicated by a user of a designer application and a corresponding anchor may be determined. When a pair of UI elements is found having the same or similar characteristics and/or relationships to the target-anchor pair, an additional anchor is automatically identified without requesting user input. The additional anchor may be selected from the UI elements within a radius of the target UI element.
G06F 9/451 - Execution arrangements for user interfaces
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
40.
AUTOMATED REMEDIAL ACTION TO EXPOSE A MISSING TARGET AND/OR ANCHOR(S) FOR USER INTERFACE AUTOMATION
Automatic anchor determination for target graphical element identification in user interface (UI) automation is disclosed. A context-based mechanism assists in discriminating between duplicate target UI element candidates. More specifically, additional anchors may be determined and automatically added for a target UI element that provide context and are visible in an area surrounding the target. During design time, a target UI element may be indicated by a user of a designer application and a corresponding anchor may be determined. When a pair of UI elements is found having the same or similar characteristics and/or relationships to the target-anchor pair, an additional anchor is automatically identified without requesting user input. The additional anchor may be selected from the UI elements within a radius of the target UI element.
G06F 9/451 - Execution arrangements for user interfaces
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
41.
QUANTIFYING USAGE OF ROBOTIC PROCESS AUTOMATION RELATED RESOURCES
Systems and methods for consumption based billing for RPA (robotic process automation) are provided. Usage of RPA related resources by a user is quantified based on RPA execution data associated with the user. A bill for the user is generated based on the quantified usage of RPA related resources. The generated bill is output.
Techniques for training an artificial intelligence (AI) / machine learning (ML) model to recognize applications, screens, and UI elements using computer vision (CV), and to recognize user interactions with the applications, screens, and UI elements, are disclosed. Optical character recognition (OCR) may also be used to assist in training the AI/ML model. Training of the AI/ML model may be performed without other system inputs such as system-level information (e.g., key presses, mouse clicks, locations, operating system operations, etc.) or application-level information (e.g., information from an application programming interface (API) from a software application executing on a computing system), or the training of the AI/ML model may be supplemented by other information, such as browser history, heat maps, file information, currently running applications and locations, system level and/or application-level information, etc.
Task automation by support robots for robotic process automation (RPA) is disclosed. RPA robots may be located on the computing systems of two or more users and/or remotely. The RPA robots may use an artificial intelligence (AI) / machine learning (ML) model that is trained to use computer vision (CV) to recognize tasks that the respective user is performing with the computing system. The RPA robots may then determine that the respective user is performing certain tasks on a regular basis in response to a certain action, such as receiving a request via email or another application, determining that a certain task has been completed, noting that a time period has elapsed, etc., and automate the respective tasks.
n-grams of user interactions and/or a beneficial end state. Recorded real user interactions may be analyzed, and matching sequences may be implemented as corresponding activities in an RPA workflow.
nn-grams of user interactions and/or a beneficial end state. Recorded real user interactions may be analyzed, and matching sequences may be implemented as corresponding activities in an RPA workflow.
Anomaly detection and self-healing for robotic process automation (RPA) via artificial intelligence (AI) / machine learning (ML) is disclosed. RPA robots that utilize AI/ML models and computer vision (CV) may interpret and/or interact with most encountered graphical elements via normal learned interactions. However, such RPA robots may occasionally encounter new, unhandled anomalies where graphical elements cannot be identified and/or normal interactions will not work. Such anomalies may be processed by an anomaly handler. The RPA robots may have self-healing functionality that seeks to automatically find information that addresses anomalies.
Embedded and/or pooled robotic process automation (RPA) robots are disclosed. A master robot initiates one or more RPA robots in a deterministic and/or probabilistic manner. For instance, when a step in an RPA workflow of the master robot is encountered where an action is not clear, some data is missing, there are multiple possible branches, etc., one or more embedded and/or pooled minion robots may be called upon by the master robot to determine the next action to take, to retrieve missing data, to determine which branch is appropriate, etc. The master robot may perform orchestration functionality with respect to the minion robot(s).
G05B 19/04 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control (DNC), flexible manufacturing systems (FMS), integrated manufacturing systems (IMS), computer integrated manufacturing (CIM)
G06F 9/44 - Arrangements for executing specific programs
48.
ARTIFACTS REFERENCE CREATION AND DEPENDENCY TRACKING
A computing device includes a processor and a memory configured to create one or more forms for an application in an environment. The processor and the memory are further configured to create one or more environment variables related to the one or more forms. The processor is further configured to utilize one or more paths to track a dependency reference between the one or more environment variables, wherein a data model includes the one or more paths and the one or more environment variables. The processor is further configured to execute the data model to recreate the dependency reference, between the one or more environment variables, for the application in a target environment.
Systems and methods for instantiating a filter for a process graph are provided. A process graph of a workflow is received. Context data associated with the process graph is stored. A filter is instantiated to filter the process graph based on the stored context data. The filtered process graph is output.
Systems and methods for splitting an electronic file into sub-documents are provided. The electronic file is received. Portions of the electronic file are classified using a trained machine learning based model. The classifications represent relative positions of the portions within sub-documents of the electronic file. The electronic file is split into the sub-documents based on the relative positions of the portions. The sub-documents are output.
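The splitting step can be sketched as follows, assuming the trained model emits a relative-position label per portion; the labels "first" and "other" are hypothetical stand-ins for whatever classes the model actually uses.

```python
def split_into_subdocuments(portions, positions):
    """Group consecutive portions into sub-documents using predicted
    relative positions: "first" starts a new sub-document, "other"
    continues the current one."""
    sub_docs = []
    for portion, position in zip(portions, positions):
        if position == "first" or not sub_docs:
            sub_docs.append([portion])
        else:
            sub_docs[-1].append(portion)
    return sub_docs

pages = ["invoice p1", "invoice p2", "receipt p1", "contract p1", "contract p2"]
labels = ["first", "other", "first", "first", "other"]
docs = split_into_subdocuments(pages, labels)
```

Here three sub-documents are produced from five classified pages, mirroring the abstract's split-then-output flow.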
A system and a computer-implemented method for generating a test automation file for an application under test are disclosed herein. The computer-implemented method includes obtaining an image file associated with the application under test and identifying one or more control elements in the image file. The computer-implemented method further includes generating test automation recording data for the image file using a computer vision component, by recording one or more actions performed by a user on the one or more control elements of the image file. The computer-implemented method further includes using the test automation recording data to generate the test automation file at a design stage. The computer-implemented method further includes using the test automation file for testing a live application, at a development stage. The live application can be an RPA application.
G06F 11/36 - Preventing errors by testing or debugging of software
G06F 8/38 - Creation or generation of source code for implementing user interfaces
G06F 9/451 - Execution arrangements for user interfaces
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
52.
GRAPHICAL ELEMENT DETECTION USING A COMBINED SERIES AND DELAYED PARALLEL EXECUTION UNIFIED TARGET TECHNIQUE, A DEFAULT GRAPHICAL ELEMENT DETECTION TECHNIQUE, OR BOTH
Graphical element detection using a combined series and delayed parallel execution unified target technique that potentially uses a plurality of graphical element detection techniques, performs default user interface (UI) element detection technique configuration at the application and/or UI type level, or both, is disclosed. The unified target merges multiple techniques of identifying and automating UI elements into a single cohesive approach. A unified target descriptor chains together multiple types of UI descriptors in series, uses them in parallel, or uses at least one technique first for a period of time and then runs at least one other technique in parallel or alternatively if the first technique does not find a match within the time period.
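A minimal sketch of the series-then-delayed-parallel behavior described above, using Python threads; the technique callables, return convention (a match or `None`), and the time window are all hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError

def unified_target_find(techniques, timeout_first=0.5):
    """Try the first detection technique alone for a time window; if it
    produces no match in time, race the remaining techniques in parallel."""
    first, rest = techniques[0], techniques[1:]
    with ThreadPoolExecutor() as pool:
        head = pool.submit(first)
        try:
            match = head.result(timeout=timeout_first)
            if match is not None:
                return match
        except TimeoutError:
            pass  # first technique exceeded its window; fall through
        # Delayed parallel phase: remaining techniques run concurrently.
        futures = [pool.submit(t) for t in rest]
        for fut in as_completed(futures):
            match = fut.result()
            if match is not None:
                return match
    return None

by_selector = lambda: None          # simulated miss
by_image = lambda: "button#42"      # simulated hit
result = unified_target_find([by_selector, by_image], timeout_first=0.1)
```

The same scheme extends to chaining purely in series or running everything in parallel from the start, as the abstract permits.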
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
53.
LOCALIZED CONFIGURATIONS OF DISTRIBUTED-PACKAGED ROBOTIC PROCESSES
Disclosed herein is a computing device that includes a memory and a processor. The memory stores processor executable instructions for a robotic process engine. The robotic process engine accesses a distributed packaged robotic process to procure code and generate a local robotic process. The code includes parameters, while the local robotic process includes input fields in accordance with the parameters. The robotic process engine receives input arguments via the input fields of the local robotic process to generate a configuration and executes the local robotic process utilizing the configuration. The execution of the local robotic process mirrors an execution of the distributed packaged robotic process without changing the distributed packaged robotic process.
G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00; data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q; healthcare informatics G16H)
54.
GRAPHICAL ELEMENT DETECTION USING A COMBINED SERIES AND DELAYED PARALLEL EXECUTION UNIFIED TARGET TECHNIQUE, A DEFAULT GRAPHICAL ELEMENT DETECTION TECHNIQUE, OR BOTH
Graphical element detection using a combined series and delayed parallel execution unified target technique that potentially uses a plurality of graphical element detection techniques, performs default user interface (UI) element detection technique configuration at the application and/or UI type level, or both, is disclosed. The unified target merges multiple techniques of identifying and automating UI elements into a single cohesive approach. A unified target descriptor chains together multiple types of UI descriptors in series, uses them in parallel, or uses at least one technique first for a period of time and then runs at least one other technique in parallel or alternatively if the first technique does not find a match within the time period.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
Graphical element detection using a combined series and delayed parallel execution unified target technique that potentially uses a plurality of graphical element detection techniques, performs default user interface (UI) element detection technique configuration at the application and/or UI type level, or both, is disclosed. The unified target merges multiple techniques of identifying and automating UI elements into a single cohesive approach. A unified target descriptor chains together multiple types of UI descriptors in series, uses them in parallel, or uses at least one technique first for a period of time and then runs at least one other technique in parallel or alternatively if the first technique does not find a match within the time period.
A user interface (UI) mapper for robotic process automation (RPA) is disclosed. The UI mapper may initially capture UI elements to fetch UI elements faster for later use and allow an RPA developer to "map" the UI elements for automating an application. This may enable subsequent developers who potentially do not have programming knowledge to build RPA workflows using these predefined "target" UI elements.
G06F 8/38 - Creation or generation of source code for implementing user interfaces
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 9/451 - Execution arrangements for user interfaces
57.
GRAPHICAL ELEMENT DETECTION USING A COMBINED SERIES AND DELAYED PARALLEL EXECUTION UNIFIED TARGET TECHNIQUE, A DEFAULT GRAPHICAL ELEMENT DETECTION TECHNIQUE, OR BOTH
Graphical element detection using a combined series and delayed parallel execution unified target technique that potentially uses a plurality of graphical element detection techniques, performs default user interface (UI) element detection technique configuration at the application and/or UI type level, or both, is disclosed. The unified target merges multiple techniques of identifying and automating UI elements into a single cohesive approach. A unified target descriptor chains together multiple types of UI descriptors in series, uses them in parallel, or uses at least one technique first for a period of time and then runs at least one other technique in parallel or alternatively if the first technique does not find a match within the time period.
Disclosed herein is a computing device that includes a memory and a processor, which is coupled to the memory. The memory stores processor executable instructions for a robotic process engine. In operation, the robotic process engine generates a robot tray comprising a canvas and dynamically configures the canvas based on inputs. The dynamic configuring includes adding a widget onto the canvas.
Systems and methods for generating a process tree of a process are provided. An event log of the process is received. It is determined whether a base case applies to the event log and, in response to determining that the base case applies to the event log, one or more nodes are added to the process tree. In response to determining that the base case does not apply to the event log, the event log is split into sub-event logs based on a frequency of directly follows relations and a frequency of strictly indirectly follows relations for pairs of activities in the event log and one or more nodes are added to the process tree. The steps of determining whether a base case applies and splitting the event log are repeatedly performed for each respective sub-event log using the respective sub-event log as the event log until it is determined that the base case applies to the event log. The process tree is output. The process may be a robotic process automation process.
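The recursive scheme in this abstract (apply the base case if it holds, otherwise split the log and recurse on each sub-log) can be sketched generically; the callables below are hypothetical placeholders for the base-case test, the frequency-based split, and leaf construction.

```python
def build_process_tree(event_log, base_case, split_log, make_node):
    """Recursive construction: add a node when the base case applies,
    otherwise split the event log into sub-logs and recurse until the
    base case applies to every sub-log."""
    if base_case(event_log):
        return make_node(event_log)
    children = [build_process_tree(sub, base_case, split_log, make_node)
                for sub in split_log(event_log)]
    return ("seq", children)

# Toy instantiation: a log of two identical traces.
log = [["a", "b"], ["a", "b"]]
is_single = lambda lg: len({act for trace in lg for act in trace}) == 1
split_half = lambda lg: ([[t[0]] for t in lg], [[t[1]] for t in lg])
leaf = lambda lg: lg[0][0]
tree = build_process_tree(log, is_single, split_half, leaf)
```

A real implementation would choose the split operator (sequence, choice, loop, parallel) from the directly-follows and strictly-indirectly-follows frequencies the abstract mentions; the fixed `("seq", ...)` node here is a simplification.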
Systems and methods for evaluating robotic process automation (RPA) are provided. RPA data associated with a first RPA related data source is received. The RPA data associated with the first RPA related data source is converted to a format associated with a second RPA related data source. The converted RPA data associated with the first RPA related data source is related with RPA data associated with the second RPA related data source to generate combined RPA data. One or more measures of interest are computed based on the combined RPA data.
A system and a computer-implemented method for viewing at least one robotic process automation (RPA) workflow using a web based user interface are disclosed herein. The computer-implemented method includes accessing the web based user interface and identifying the at least one RPA workflow for viewing. The computer-implemented method further includes generating, using a workflow object model component, a workflow diagram for the identified at least one RPA workflow. The computer-implemented method further includes rendering, using a web based visualization engine component, the generated workflow diagram for the identified at least one RPA workflow and displaying the rendered workflow diagram for viewing of the at least one RPA workflow on the web based user interface.
Graphical element detection using a combination of user interface (UI) descriptor attributes from two or more graphical element detection techniques is disclosed. UI descriptors may be used to compare attributes for a given UI descriptor with attributes of UI elements found at runtime in the UI. At runtime, the attributes for the UI elements found in the UI can be searched for matches with attributes for a respective RPA workflow activity, and if an exact match or a match within a matching threshold is found, the UI element may be identified and interacted with accordingly.
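The runtime attribute comparison against a matching threshold might look like the following sketch; scoring by average string similarity over shared attribute keys is an assumption for illustration, not the patented method.

```python
from difflib import SequenceMatcher

def attribute_match_score(descriptor, candidate):
    """Average per-attribute string similarity between a stored UI
    descriptor and a UI element found at runtime."""
    keys = descriptor.keys() & candidate.keys()
    if not keys:
        return 0.0
    scores = [SequenceMatcher(None, str(descriptor[k]), str(candidate[k])).ratio()
              for k in keys]
    return sum(scores) / len(scores)

def find_element(descriptor, runtime_elements, threshold=0.8):
    """Return the best runtime element whose score is an exact match or
    within the matching threshold, else None."""
    best = max(runtime_elements, key=lambda el: attribute_match_score(descriptor, el))
    return best if attribute_match_score(descriptor, best) >= threshold else None

stored = {"name": "Submit", "role": "button"}
found = find_element(stored, [{"name": "Submit", "role": "button"},
                              {"name": "Cancel", "role": "button"}])
```

Combining attributes from two or more detection techniques would simply merge their key sets into one descriptor dictionary before scoring.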
Disclosed herein is a computing device that includes a memory and a processor. The memory stores processor executable instructions for an authentication system. The processor is coupled to the memory. The processor executes the authentication system to cause the computing device to generate a credential asset, which includes a unique name. The authentication system also fetches tokens for the credential asset using the unique name, calls a notification for each of the tokens, polls for a code of the credential asset, and utilizes the code for an authentication to run a job.
Artificial intelligence (AI) / machine learning (ML) model drift detection and correction for robotic process automation (RPA) is disclosed. Information is analyzed pertaining to input data for an AI/ML model to determine whether data drift has occurred, analyze information pertaining to results from execution of the AI/ML model to determine whether model drift has occurred, or both. When, based on the analysis of the information, a change condition is found, a change threshold is met or exceeded, or both, the AI/ML model is retrained. The retrained AI/ML model may then be deployed to provide better predictions on real world data.
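A toy illustration of the drift-then-retrain trigger; the relative-mean-shift test below is a simplified stand-in for whatever statistical drift detection an implementation would actually use, and the threshold is hypothetical.

```python
import statistics

def data_drift_detected(baseline, recent, threshold=0.2):
    """Flag drift when the mean of recent model inputs deviates from the
    training baseline mean by more than `threshold` (relative shift).
    Simplified stand-in for a proper statistical drift test."""
    base_mean = statistics.mean(baseline)
    shift = abs(statistics.mean(recent) - base_mean)
    return shift / (abs(base_mean) or 1.0) > threshold

baseline = [1.0, 1.1, 0.9, 1.0]   # feature values seen during training
recent = [1.6, 1.5, 1.7, 1.4]     # feature values seen in production
needs_retraining = data_drift_detected(baseline, recent)
```

When the flag fires, the abstract's pipeline would retrain the AI/ML model and deploy the retrained version; the same shape of check applies to model drift on execution results.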
Robot access control and governance for robotic process automation (RPA) is disclosed. A code analyzer of an RPA designer application, such as a workflow analyzer, may read access control and governance policy rules for an RPA designer application and analyze activities of an RPA workflow of the RPA designer application against the access control and governance policy rules. When one or more analyzed activities of the RPA workflow violate the access control and governance policy rules, the code analyzer prevents generation of an RPA robot or publication of the RPA workflow until the RPA workflow satisfies the access control and governance policy rules. When the analyzed activities of the RPA workflow comply with all required access control and governance policy rules, the RPA designer application may generate an RPA robot implementing the RPA workflow or publish the RPA workflow.
Robot access control and governance for robotic process automation (RPA) is disclosed. A code analyzer of an RPA designer application, such as a workflow analyzer, may read access control and governance policy rules for an RPA designer application and analyze activities of an RPA workflow of the RPA designer application against the access control and governance policy rules. When one or more analyzed activities of the RPA workflow violate the access control and governance policy rules, the code analyzer prevents generation of an RPA robot or publication of the RPA workflow until the RPA workflow satisfies the access control and governance policy rules. When the analyzed activities of the RPA workflow comply with all required access control and governance policy rules, the RPA designer application may generate an RPA robot implementing the RPA workflow or publish the RPA workflow.
User interface (UI) object descriptors, UI object libraries, UI object repositories, and UI object browsers for robotic process automation (RPA) are disclosed. A UI object browser may be used for managing, reusing, and increasing the reliability of UI descriptors in a project. UI descriptors may be added to UI object libraries and be published or republished as UI object libraries for global reuse in a UI object repository. The UI object browser, UI object libraries, and UI object repository may facilitate reusability of UI element identification frameworks and derivatives thereof.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
69.
AUTOMATION OF A PROCESS RUNNING IN A FIRST SESSION VIA A ROBOTIC PROCESS AUTOMATION ROBOT RUNNING IN A SECOND SESSION
Automation of a process running in a first session via robotic process automation (RPA) robot(s) running in a second session is disclosed. In some aspects, a form is displayed in a user session, but one or more attended RPA robots that retrieve and/or interact with data for an application in the first session run in one or more other sessions. In this manner, the operation of the RPA robot(s) may not prevent the user from using other applications or instances when the RPA robot(s) are running, but the data modifications made or facilitated by the RPA robot(s) may be visible to the user in the first session window.
G05B 19/4155 - Numerical control (NC), i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by programme execution, i.e. part programme or machine function execution, e.g. selection of a programme
Automation of a process running in a first session via robotic process automation (RPA) robot(s) running in a second session is disclosed. In some aspects, a form is displayed in a user session, but one or more attended RPA robots that retrieve and/or interact with data for an application in the first session run in one or more other sessions. In this manner, the operation of the RPA robot(s) may not prevent the user from using other applications or instances when the RPA robot(s) are running, but the data modifications made or facilitated by the RPA robot(s) may be visible to the user in the first session window.
User interface (UI) object descriptors, UI object libraries, UI object repositories, and UI object browsers for robotic process automation (RPA) are disclosed. A UI object browser may be used for managing, reusing, and increasing the reliability of UI descriptors in a project. UI descriptors may be added to UI object libraries and be published or republished as UI object libraries for global reuse in a UI object repository. The UI object browser, UI object libraries, and UI object repository may facilitate reusability of UI element identification frameworks and derivatives thereof.
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
72.
METHOD AND APPARATUS FOR VISUALIZING A PROCESS MAP
A method for visualizing a process map is executed by a process map server. The method includes receiving a flowchart and a step-by-step recording related to a process, generating a process map by combining the flowchart and the step-by-step recording, and displaying the process map. The process map displays a task, step, and action related to the process. A detail window shows information associated with the process, and portions of the process, in response to user input. The action is based on information from the step-by-step recording.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
73.
CONTEXT-AWARE UNDO-REDO SERVICE OF AN APPLICATION DEVELOPMENT PLATFORM
A computing device is disclosed herein. The computing device includes a memory that stores processor executable instructions for an application development platform and a context-aware undo-redo service of the application development platform. The computing device includes a processor that executes the processor executable instructions to cause the computing device to receive a first invocation of an undo operation with respect to environment variables on screens. The computing device further navigates, according to an active context, to a configuration screen of the screens to make the configuration screen visible in response to the first invocation. The configuration screen shows a portion of the environment variables. The computing device also receives a second invocation of the undo operation and executes the undo operation in response to the second invocation to reverse changes to the portion of the environment variables shown by the configuration screen while the configuration screen is visible.
Application integration for robotic process automation (RPA) using a development application configured for development of RPA-enabled applications is disclosed. The development application in some embodiments may be used for application integration with attended robots that execute locally on the same computing system as an instance of the RPA-enabled application, unattended robots that execute on a remote computing system, or both, thereby creating an RPA-enabled application. One or more user interface (UI) elements, variables, and/or events of an RPA-enabled application may be linked to one or more respective RPA processes, causing respective RPA robot(s) to carry out the associated functionality.
GRAPHICAL ELEMENT SEARCH TECHNIQUE SELECTION, FUZZY LOGIC SELECTION OF ANCHORS AND TARGETS, AND/OR HIERARCHICAL GRAPHICAL ELEMENT IDENTIFICATION FOR ROBOTIC PROCESS AUTOMATION
Graphical element search technique selection, fuzzy logic selection for anchors and targets, and hierarchical graphical element identification for robotic process automation (RPA) are disclosed. The fuzzy logic selection of anchors and targets may be part of a larger, tiered, or hierarchical process for identifying graphical elements in the UI. When a selector for a UI element is not found with at least a confidence threshold, similar elements potentially corresponding to the selector for a UI element target may be searched based on fuzzy matching of the target and corresponding anchor(s). Geometric matching may also be employed between the target UI element and its respective anchor(s). The combination of fuzzy matching and geometric matching may allow for more flexible and accurate identification of the exact selector with which an RPA robot is attempting to interact.
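Combining fuzzy text matching on the target with a geometric check against its anchor could be sketched as follows; the coordinate fields, recorded offset, and thresholds are all hypothetical.

```python
from difflib import SequenceMatcher

def fuzzy_score(a, b):
    """String similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def match_target_with_anchor(target_text, anchor, candidates,
                             text_threshold=0.7, max_offset_error=20):
    """Accept a candidate only if its text fuzzily matches the target AND
    its position agrees with the anchor's recorded offset (geometric match)."""
    expected = (anchor["x"] + anchor["dx"], anchor["y"] + anchor["dy"])
    for cand in candidates:
        text_ok = fuzzy_score(target_text, cand["text"]) >= text_threshold
        geom_ok = (abs(cand["x"] - expected[0]) <= max_offset_error and
                   abs(cand["y"] - expected[1]) <= max_offset_error)
        if text_ok and geom_ok:
            return cand
    return None

anchor = {"x": 100, "y": 50, "dx": 120, "dy": 0}   # label to the left of the field
candidates = [{"text": "Amount", "x": 400, "y": 300},   # right text, wrong place
              {"text": "Ammount", "x": 225, "y": 48}]   # typo, but geometry fits
field = match_target_with_anchor("Amount", anchor, candidates)
```

Note how the geometric constraint rejects the exact-text candidate in the wrong location while the fuzzy constraint tolerates the misspelled but correctly placed one, which is the flexibility the abstract describes.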
Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
77.
TEXT DETECTION, CARET TRACKING, AND ACTIVE ELEMENT DETECTION
Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
78.
MACHINE LEARNING MODEL RETRAINING PIPELINE FOR ROBOTIC PROCESS AUTOMATION
A machine learning (ML) model retraining pipeline for robotic process automation (RPA) is disclosed. When an ML model is deployed in a production or development environment, RPA robots send requests to the ML model when executing their workflows. When a confidence level of the ML model falls below a certain confidence, training data is collected, potentially from a large number of computing systems. The ML model is then trained using at least in part the collected training data, and a new version of the ML model is deployed.
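A compact sketch of the retraining loop described above; the confidence floor, batch size, trainer callable, and model interface (returning a label and a confidence) are all hypothetical.

```python
class RetrainingPipeline:
    """Collect low-confidence samples from robot requests, retrain once
    enough are gathered, and bump the deployed model version."""
    def __init__(self, model, trainer, confidence_floor=0.9, batch_size=3):
        self.model = model              # callable: sample -> (label, confidence)
        self.trainer = trainer          # callable: (model, samples) -> new model
        self.confidence_floor = confidence_floor
        self.batch_size = batch_size
        self.pending = []
        self.version = 1

    def handle_request(self, sample):
        label, confidence = self.model(sample)
        if confidence < self.confidence_floor:
            self.pending.append(sample)
            if len(self.pending) >= self.batch_size:
                self.model = self.trainer(self.model, self.pending)
                self.pending.clear()
                self.version += 1       # "deploy" the retrained model
        return label

weak_model = lambda s: ("unknown", 0.5)
trainer = lambda model, samples: (lambda s: ("doc", 0.95))
pipe = RetrainingPipeline(weak_model, trainer)
for s in ["a", "b", "c"]:
    pipe.handle_request(s)
```

After three low-confidence requests the pipeline retrains and increments the version, matching the abstract's trigger-collect-retrain-redeploy cycle.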
Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
G06F 5/10 - Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising having a sequence of storage locations each being individually accessible for both enqueue and dequeue operations, e.g. using random access memory
Detection of typed and/or pasted text, caret tracking, and active element detection for a computing system are disclosed. The location on the screen associated with a computing system where the user has been typing or pasting text, potentially including hot keys or other keys that do not cause visible characters to appear, can be identified and the physical position on the screen where typing or pasting occurred can be provided based on the current resolution of where one or more characters appeared, where the cursor was blinking, or both. This can be done by identifying locations on the screen where changes occurred and performing text recognition and/or caret detection on these locations. The physical position of the typing or pasting activity allows determination of an active or focused element in an application displayed on the screen.
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
81.
SCREEN RESPONSE VALIDATION OF ROBOT EXECUTION FOR ROBOTIC PROCESS AUTOMATION
Screen response validation of robot execution for robotic process automation (RPA) is disclosed. Whether text, screen changes, images, and/or other expected visual actions occur in an application executing on a computing system that an RPA robot is interacting with may be recognized. Where the robot has been typing may be determined and the physical position on the screen based on the current resolution of where one or more characters, images, windows, etc. appeared may be provided. The physical position of these elements, or the lack thereof, may allow determination of which field(s) the robot is typing in and what the associated application is for the purpose of validation that the application and computing system are responding as intended. When the expected screen changes do not occur, the robot can stop and throw an exception, go back and attempt the intended interaction again, restart the workflow, or take another suitable action.
G06F 3/023 - Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
Test cases for existing workflows (or workflows under test) may be created and executed. A test case may be created for a workflow in production or one or more parts of the workflow, and the created test case for the workflow, or the one or more parts of the workflow, may be executed to identify environmental and/or automation issues for the workflow. A failed workflow test may be reported when the environmental and/or automation issues are identified.
Systems and methods for analyzing the influence of one or more attribute values on undesirable behavior exhibited in a set of cases of a process are provided. An observed frequency of occurrence of cases that are associated with one or more attribute values and exhibit undesirable behavior is determined. An expected frequency of occurrence of cases that are associated with the one or more attribute values and exhibit the undesirable behavior is calculated. The observed frequency of occurrence is compared with the expected frequency of occurrence to determine the influence of the one or more attribute values on the undesirable behavior. An impact metric quantifying the influence of the one or more attribute values on the undesirable behavior is computed.
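The observed-versus-expected comparison described above can be sketched as follows. This is a hedged illustration, not the patented computation: the impact metric here is simply the excess of observed over expected undesirable cases, which is one plausible instantiation; the patent does not specify the metric, and the field names are assumptions.

```python
# Sketch of an observed-vs-expected influence analysis for one attribute value.

def impact(cases, attribute, value):
    """Compare observed vs. expected frequency of undesirable cases
    among the cases having attribute == value."""
    with_value = [c for c in cases if c[attribute] == value]
    observed = sum(1 for c in with_value if c["undesirable"])
    overall_rate = sum(1 for c in cases if c["undesirable"]) / len(cases)
    expected = overall_rate * len(with_value)  # frequency if the value had no influence
    return observed - expected                 # > 0: value is associated with more undesirable behavior

cases = [
    {"region": "EU", "undesirable": True},
    {"region": "EU", "undesirable": True},
    {"region": "US", "undesirable": False},
    {"region": "US", "undesirable": False},
]
print(impact(cases, "region", "EU"))  # 2 observed vs. 1 expected -> 1.0
```

A production analysis would typically use a statistically grounded metric (e.g., a chi-square or lift statistic) rather than a raw difference, but the observed/expected framing is identical.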
A system, method, and computing device for performing data augmentation allowing for document classification of a plurality of documents are disclosed. A processor is configured to convert the documents into images, which a memory is configured to store. The processor is further configured to obtain a vector representation for each page included in the documents, create clusters from the images based on similarity, where each cluster represents a distinct page format, select one image from each cluster, and compile the selected images to create a logically complete document. The memory is configured to store the logically complete document, and the processor is configured to train the classification based on the complete document.
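The cluster-and-select step can be illustrated with a minimal sketch, assuming page vectors are already available and using a simple greedy distance threshold in place of whatever clustering method the actual system uses; all names and the threshold value are illustrative assumptions.

```python
# Sketch: group page vectors into clusters of similar page formats and
# keep one representative page per cluster for the training document.

def _dist(a, b):
    """Euclidean distance between two page vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_pages(page_vectors, threshold=0.5):
    """Greedy clustering: a page starts a new cluster (and becomes its
    representative) only if it is far from every existing representative."""
    representatives = []   # one page index per distinct page format
    for idx, vec in enumerate(page_vectors):
        if all(_dist(vec, page_vectors[r]) > threshold for r in representatives):
            representatives.append(idx)
    return representatives

pages = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]   # two similar pages, one distinct
print(cluster_pages(pages))  # [0, 2] -> compile these into the training document
```

The representative pages would then be compiled into the "logically complete document" used to train the classifier.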
A computing device may execute a robot service that receives process requests to store in a process queue in memory. The robot service may utilize user-defined preferences to prioritize the process requests in the process queue. The process requests may be scheduled based on the user-defined preferences. The robot service may initiate the scheduled process requests for robotic automation of the application.
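A priority queue ordered by user-defined preferences, as described above, can be sketched with Python's standard `heapq` module. The preference scheme (a name-to-priority mapping with an arrival-order tie-breaker) is an assumption for illustration, not the patented design.

```python
import heapq

# Minimal sketch of a robot service queue ordered by user-defined preferences.

class ProcessQueue:
    def __init__(self, preferences):
        self._preferences = preferences   # process name -> priority (lower runs first)
        self._heap = []
        self._counter = 0                 # tie-breaker preserving arrival order

    def enqueue(self, request):
        priority = self._preferences.get(request, 99)  # unknown processes run last
        heapq.heappush(self._heap, (priority, self._counter, request))
        self._counter += 1

    def next_request(self):
        """Pop the highest-priority pending process request."""
        return heapq.heappop(self._heap)[2]

q = ProcessQueue({"invoice-processing": 0, "report-generation": 5})
q.enqueue("report-generation")
q.enqueue("invoice-processing")
print(q.next_request())  # invoice-processing runs first despite arriving later
```

The robot service would then initiate each popped request for robotic automation of the target application.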
A computing device may monitor, in relation to the robotic automation process, for an event or an activity associated with a trigger. The trigger may be defined by code, a definition file, or a configuration file. A match may be identified for the event or the activity associated with the trigger. The computing device may instruct, on a condition that the trigger is identified, a robot executor to initiate a process during the robotic automation process.
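Trigger matching against a definition or configuration file, as described above, can be sketched as follows. The trigger schema (event type plus optional filter fields) is a hypothetical illustration; a real configuration format is not specified in the abstract.

```python
# Sketch: configuration-defined triggers matched against observed events.

triggers = [
    {"event": "file_created", "path_suffix": ".csv", "process": "ingest-csv"},
    {"event": "email_received", "subject_contains": "invoice", "process": "invoice-bot"},
]

def match_trigger(event):
    """Return the process to initiate if the event matches a trigger, else None."""
    for t in triggers:
        if t["event"] != event["type"]:
            continue
        if "path_suffix" in t and not event.get("path", "").endswith(t["path_suffix"]):
            continue
        if "subject_contains" in t and t["subject_contains"] not in event.get("subject", ""):
            continue
        return t["process"]   # the robot executor would initiate this process
    return None

print(match_trigger({"type": "file_created", "path": "reports/q3.csv"}))
```

On a match, the computing device would instruct the robot executor to initiate the associated process within the robotic automation process.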
Systems and methods for analyzing an event log for a plurality of instances of execution of a process to identify a bottleneck are provided. An event log for a plurality of instances of execution of a process is received and segments executed during one or more of the plurality of instances of execution are identified from the event log. The segments represent a pair of activities of the process. For each particular segment of the identified segments, a measure of performance is calculated for each of the one or more instances of execution of the particular segment based on the event log, each of the one or more instances of execution of the particular segment is classified based on the calculated measures of performance, and one or more metrics are computed for the particular segment based on the classified one or more instances of execution of the particular segment. The identified segments are compared with each other based on the one or more metrics to identify one of the identified segments that is most likely to have a bottleneck.
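The segment-extraction and comparison steps above can be sketched with a toy event log. This is an illustrative simplification under stated assumptions: timestamps are plain numbers, the performance measure is segment duration, and the comparison metric is the mean duration per segment; the actual classification and metrics are not specified in the abstract.

```python
from collections import defaultdict

# Sketch: derive activity-pair segments from an event log and flag the
# segment with the largest mean duration as the likely bottleneck.

def segment_durations(event_log):
    """event_log: case id -> list of (activity, timestamp), sorted by time."""
    durations = defaultdict(list)
    for events in event_log.values():
        for (a, t1), (b, t2) in zip(events, events[1:]):
            durations[(a, b)].append(t2 - t1)   # one measure per segment instance
    return durations

def likely_bottleneck(event_log):
    durations = segment_durations(event_log)
    means = {seg: sum(d) / len(d) for seg, d in durations.items()}
    return max(means, key=means.get)

log = {
    "case1": [("receive", 0), ("review", 2), ("approve", 10)],
    "case2": [("receive", 0), ("review", 3), ("approve", 12)],
}
print(likely_bottleneck(log))  # ('review', 'approve') dominates waiting time
```

A fuller implementation would classify individual segment instances (e.g., as fast or slow) before aggregating, as the abstract describes, rather than comparing raw means.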
Automation windows for robotic process automation (RPA) for attended or unattended robots are disclosed. A child session is created and hosted as a window including the user interfaces (UIs) of applications of a window associated with a parent session. Running multiple sessions allows a robot to operate in this child session while the user interacts with the parent session. The user may thus be able to interact with applications that the robot is not using or the user and the robot may be able to interact with the same application if that application is capable of this functionality. The user and the robot are both interacting with the same application instances and file system. Changes made via the robot and the user in an application will be made as if a single user made them, rather than having the user and the robot each work with separate versions of the applications and file systems.
Automation windows for RPA for attended or unattended robots are disclosed. A child session is created and hosted as a window including the UIs of applications of a window associated with a parent session. Running multiple sessions allows a robot to operate in this child session while the user interacts with the parent session. The user may thus be able to interact with applications that the robot is not using or the user and the robot may be able to interact with the same application if that application is capable of this functionality. The user and the robot are both interacting with the same application instances and file system. Changes made via the robot and the user in an application will be made as if a single user made them, rather than having the user and the robot each work with separate versions of the applications and file systems.
Inter-session automation for robotic process automation (RPA) is disclosed. A robot or another application or process running in the user session may interact with an application, but one or more attended RPA robots in one or more child sessions perform operations and fetch data that the user session robot will then use to interact with the application in the user session. Attended RPA robots in client sessions may share data through an Inter-Process Communication (IPC) protocol, by storing data in a persistent data store, such as a spreadsheet, an object-oriented database, a plain text file, another data store or file, etc. The user session robot or another application or process running in the parent session can then read this information and respond accordingly.
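Sharing data through a persistent store, as described above, can be sketched with a plain JSON file standing in for the spreadsheet, database, or text file the abstract mentions. The file path, key names, and value are illustrative assumptions.

```python
import json
import os
import tempfile

# Sketch: robots in different sessions share data via a persistent store
# (a plain JSON file here).

def publish(path, key, value):
    """Write a key/value pair into the shared store, preserving existing data."""
    data = {}
    if os.path.exists(path):
        with open(path) as f:
            data = json.load(f)
    data[key] = value
    with open(path, "w") as f:
        json.dump(data, f)

def consume(path, key):
    """Read a value previously published to the shared store."""
    with open(path) as f:
        return json.load(f).get(key)

store = os.path.join(tempfile.mkdtemp(), "shared.json")
publish(store, "fetched_invoice_total", 129.95)   # child-session robot writes
print(consume(store, "fetched_invoice_total"))    # parent-session robot reads
```

An IPC protocol would replace the file for lower-latency exchange, but the publish/consume pattern between the child-session robots and the parent-session robot is the same.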
A method and system are provided in which predictions are generated, using one or more machine learning-based prediction models, for one or more process parameters associated with a running process. Explanation-oriented data elements are generated that correspond to the generated predictions and include confidence indicators associated with the generated predictions. The explanation-oriented data elements are presented in one or more dashboards of a visualization platform. The explanation-oriented data elements are representative of an explanation framework for explaining the predicted business process parameters generated by a machine learning-based prediction model in a manner that allows a user to understand and trust the basis for the predictions, facilitating effective and appropriate intervention in a running process.
A computing device for compatibility in robotic process automation (RPA) includes a memory that includes a plurality of RPA tool driver versions, and a processor communicatively coupled with the memory. Upon the processor receiving a request for a first RPA tool driver version of the plurality of RPA tool driver versions, the processor loads the first RPA tool driver version for processing.
G05B 19/04 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control (DNC), flexible manufacturing systems (FMS), integrated manufacturing systems (IMS), computer integrated manufacturing (CIM)
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Systems and methods for representing execution of a process in an edge table are provided. Process execution data for a process including a plurality of activities is received. An edge table is generated representing execution of the process based on the process execution data. Each row of the edge table identifies a transition from a source event to a destination event.
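The edge-table construction described above can be sketched directly: each row records one source-event to destination-event transition observed in the process execution data. The trace format and column names are illustrative assumptions.

```python
# Sketch: build an edge table from process execution data, one row per
# source-event -> destination-event transition.

def build_edge_table(traces):
    """traces: case id -> ordered list of event names."""
    rows = []
    for case_id, events in traces.items():
        for source, destination in zip(events, events[1:]):
            rows.append({"case": case_id, "source": source, "destination": destination})
    return rows

traces = {"c1": ["create", "approve", "pay"]}
for row in build_edge_table(traces):
    print(row["source"], "->", row["destination"])
```

The resulting table is a convenient flat representation for querying transitions, e.g., counting how often each activity pair occurs across cases.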
Systems and methods for implementing robotic process automation (RPA) in the cloud are provided. An instruction for managing an RPA robot is received at an orchestrator in a cloud computing environment from a user in a local computing environment. In response to receiving the instruction, the instruction for managing the RPA robot is effectuated.
A system and a computer-implemented method for analyzing a robotic process automation (RPA) workflow are disclosed herein. The computer-implemented method may include obtaining the RPA workflow and analyzing the obtained RPA workflow to provide an analyzed RPA workflow. The computer-implemented method may further include determining one or more metrics associated with the analyzed RPA workflow and performing one or more corrective activities for the analyzed RPA workflow based on the determined one or more metrics.
G05B 19/04 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control (DNC), flexible manufacturing systems (FMS), integrated manufacturing systems (IMS), computer integrated manufacturing (CIM)
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
A system and a computer-implemented method for analyzing workflow of test automation associated with a robotic process automation (RPA) application are disclosed herein. The computer-implemented method includes receiving the workflow of the test automation associated with the RPA application and analyzing, via an Artificial Intelligence (AI) model associated with a workflow analyzer module, the workflow of the test automation based on a set of pre-defined test automation rules. The computer-implemented method further includes determining one or more metrics associated with the analyzed workflow of the test automation and generating, via the AI model, corrective activity data based on the determined one or more metrics.
A system and method provide an automation solution for guiding a contact center agent during a communication session by providing contextual in-line assistance. Robotic process automation (RPA) is used for automating workflows and processes with robots that capture information from multiple applications of a contact center system and generate contextual guidance for the contact center agent via callout activities during the communication session.