A system includes a processor configured to perform operations, including determining, for each respective software defect of software defects identified in a software product, corresponding attribute values that represent a software development history of the respective software defect, and determining, for each respective defect, using a machine learning model, and based on the corresponding attribute values, a corresponding escalation value representing a likelihood of the respective defect being escalated for resolution after release of the software product. The machine learning model may have been trained using corresponding software development histories of training defects that were escalated for resolution after release of a prior version of the software product. The operations also include, based on the corresponding escalation value of each respective defect, selecting, for resolution prior to the release of the software product, a defect subset of the software defects, and storing a representation of the defect subset.
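The escalation scoring described above can be sketched as follows. This is a minimal illustration in which the machine learning model is assumed to be a k-nearest-neighbor classifier over numeric attribute tuples; the defect names, attribute values, and selection threshold are all hypothetical, not the claimed implementation:

```python
import math

def knn_escalation_value(defect_attrs, training, k=3):
    """Estimate escalation likelihood as the fraction of the k nearest
    training defects (by attribute distance) that were escalated after
    release of a prior version."""
    dists = sorted(
        (math.dist(defect_attrs, attrs), escalated)
        for attrs, escalated in training
    )
    nearest = dists[:k]
    return sum(1 for _, escalated in nearest if escalated) / len(nearest)

def select_for_resolution(defects, training, threshold=0.5, k=3):
    """Select the defect subset whose escalation value meets the threshold,
    for resolution prior to release."""
    return [
        (name, v)
        for name, attrs in defects
        if (v := knn_escalation_value(attrs, training, k)) >= threshold
    ]
```

A defect whose development history resembles previously escalated defects receives a high escalation value and lands in the selected subset.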
Metrics that characterize one or more computing devices are received. A time value associated with performance of the one or more computing devices is determined based on the received metrics. A first scheduling parameter is determined based on the time value, wherein the first scheduling parameter is associated with a first discovery process that is associated with at least a portion of the one or more computing devices. The first discovery process is executed according to the first scheduling parameter.
An embodiment may involve persistent storage containing one or more tables, wherein the tables include entries that specify automations, wherein the automations are software applications. One or more processors are configured to: receive a specification for a new automation, wherein the specification includes a frequency at which the new automation is to be executed, and expected time or resources saved per execution; generate an automation request within the tables, wherein the automation request includes the frequency and the expected time or resources saved; generate a reference from the automation request to an automation configuration item (CI) in the tables, wherein the automation CI represents a software application used to perform the new automation; cause the software application to execute at least part of the new automation in accordance with the frequency; and measure actual time or resources saved per execution of the new automation.
A new alert is received. Machine learning is used to identify a plurality of resolved alerts similar to the new alert. One or more processors are used to automatically identify, among properties of the identified similar resolved alerts, one or more common properties having one or more statistical metrics that meet one or more corresponding thresholds. The new alert is caused to inherit the identified one or more common properties.
H04L 41/06 - Management of faults, events, alarms or notifications
G06F 18/23213 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
H04L 41/12 - Discovery or management of network topologies
H04L 41/142 - Network analysis or design using statistical or mathematical methods
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
5.
Software Architecture and User Interface for Process Visualization
An embodiment may involve defining a process by first and second process design applications, wherein the process includes a set of stages; receiving, by a process visualization application, a reference to a parent entry and a definition of the process referencing the parent entry; based on the parent entry, identifying a first transformer class associated with the first process design application and a second transformer class associated with the second process design application, wherein the individual classes contain executable functions; converting an output of the first and second process design applications from a first configuration into a second configuration using the first and second transformer classes, wherein the second configuration is accessible to the process visualization application; and generating a display of the process in a hierarchical arrangement, wherein the hierarchical arrangement reflects the set of stages and the associated activities.
Persistent storage may contain definitions of: configuration items (CIs) each having attributes that characterize a respective hardware or software component, a list of data sources used to update at least some of the CIs, and an auto-attestation time period; and one or more processors configured to: identify a plurality of the CIs for auto-attestation; for each respective CI, determine a respective condition of whether: (i) a data-source attribute of the respective CI indicates that it was updated by a trusted data source, and (ii) a most-recent-update attribute of the respective CI indicates that it was updated within the auto-attestation time period; and mark each respective CI based on its respective condition, wherein the respective CI is marked as auto-attested when its respective condition is true, and wherein the respective CI is marked as not auto-attested when its respective condition is false.
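The two-part auto-attestation condition above lends itself to a short sketch. The trusted-source names, CI field names, and integer time representation are assumptions for illustration only:

```python
# Assumed set of trusted data sources; a real deployment would read this
# from configuration rather than hard-coding it.
TRUSTED_SOURCES = {"agent_discovery", "cloud_api"}

def is_auto_attested(ci, now, period):
    """True only when the CI was updated by a trusted data source AND was
    updated within the auto-attestation time period (both must hold)."""
    updated_recently = (now - ci["last_updated"]) <= period
    return ci["data_source"] in TRUSTED_SOURCES and updated_recently

def mark_cis(cis, now, period):
    """Mark each CI as auto-attested or not based on its condition."""
    return [
        {**ci, "auto_attested": is_auto_attested(ci, now, period)}
        for ci in cis
    ]
```

A CI updated by a trusted source but outside the time window, or within the window but by an untrusted source, is marked not auto-attested.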
H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
A natural language input is received. The natural language input is provided to a machine learning model to identify a content page associated with an intent of the natural language input. The content page is dynamically analyzed to determine interactive entities of the content page. Among the interactive entities of the content page, a matching interactive entity corresponding to the natural language input is identified, and an indication of the matching interactive entity is provided.
Persistent storage contains an original document and a plurality of summaries of the original document produced by summarization models. One or more processors: provide, to an entity extractor, the original document; receive, from the entity extractor, a list of entities within the original document; provide, to a query generator, the original document and the list of entities; receive, from the query generator, a set of queries answerable by the original document; provide, to a query answerer, the set of queries, the original document, and the plurality of summaries; receive, from the query answerer and for the set of queries, a set of document answers corresponding to the original document and sets of summary answers corresponding to the plurality of summaries; provide, to an answer matcher, the set of document answers and the sets of summary answers; and receive, from the answer matcher, scores for the plurality of summaries.
In an embodiment, persistent storage contains one or more cryptographic keys. One or more processors may be configured to perform operations comprising: receiving a request for an encrypted record stored within a computational instance, wherein the request includes a plaintext value related to the encrypted record; obtaining a hash value by applying a hash function to the plaintext value; transmitting, to the computational instance, the hash value; receiving, from the computational instance, the encrypted record, wherein the encrypted record includes one or more encrypted values; obtaining an unencrypted version of the encrypted record by applying a cryptographic function to the encrypted record, wherein applying the cryptographic function includes use of a cryptographic key of the one or more cryptographic keys; and transmitting at least part of the unencrypted version of the encrypted record.
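The hash-based lookup and decryption flow might look like the following sketch. The XOR keystream here stands in for a real authenticated cipher (such as AES-GCM) and is not secure; all function names and the key-derivation scheme are illustrative assumptions:

```python
import hashlib

def index_hash(plaintext_value: str) -> str:
    """Hash of the plaintext lookup value; the computational instance can
    index encrypted records by this hash, never by the plaintext itself."""
    return hashlib.sha256(plaintext_value.encode()).hexdigest()

def _keystream(key: bytes, n: int) -> bytes:
    # Illustrative keystream only -- not a secure construction.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a key-derived keystream."""
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

# XOR with the same keystream is its own inverse.
decrypt = encrypt
```

The client hashes the plaintext value, sends only the hash to the computational instance, receives the encrypted record, and decrypts it locally with its own key.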
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
Persistent storage contains a state machine for a request routing process and a plurality of requests related to the request routing process. One or more processors are configured to apply the state machine through operations including: obtaining a request from the plurality of requests; providing, to a criticality detection application and to an intent detection application, a representation of the request, wherein the criticality detection application is configured to detect a criticality of the request, and wherein the intent detection application is configured to determine a semantic intent of the request; receiving, from the criticality detection application and the intent detection application, respective indications of a detected criticality of the request and a detected intent of the request; determining whether to route the request to a channel that is one of a live agent, a virtual agent, or a search-based application; and routing the request to the channel as determined.
A software architecture within a public cloud network may include units of: (i) a plurality of computational instances respectively related to managed networks, (ii) a plurality of servers configurable as load simulators, (iii) administrative components configured to deploy and update the software architecture, and (iv) shared infrastructure services, wherein the units of the software architecture are implemented on virtual machines of the public cloud network and are connected to but logically isolated from one another by way of different access controls or policies. A provider network, coupled to the software architecture by way of network gateways within the shared infrastructure services, may be configured to deliver the configuration, software packages, and database schema to the infrastructure-as-code platform.
A computing system includes persistent storage configured to store representations of software applications installed on computing devices, and a software application configured to perform operations, including retrieving, from the persistent storage, a first plurality of representations of a first plurality of software applications installed on a particular computing device and a second plurality of representations of a second plurality of software applications installed on a reference computing device. The operations also include determining a device fingerprint of the particular computing device based on the first plurality of representations and a reference device fingerprint of the reference computing device based on the second plurality of representations, and comparing the device fingerprint to the reference device fingerprint. The operations further include, based on the comparing, determining a disparity between software applications installed on the particular computing device and the reference computing device, and storing, in the persistent storage, a representation of the disparity.
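The fingerprinting and disparity determination can be sketched as follows, representing each device's installed applications as a set; the application names and dictionary layout are hypothetical:

```python
import hashlib

def device_fingerprint(installed_apps):
    """Order-independent fingerprint over the set of installed applications."""
    canonical = "\n".join(sorted(installed_apps))
    return hashlib.sha256(canonical.encode()).hexdigest()

def disparity(device_apps, reference_apps):
    """Applications that differ between a particular device and the
    reference device: missing from the device, or extra on it."""
    device_apps, reference_apps = set(device_apps), set(reference_apps)
    return {
        "missing": sorted(reference_apps - device_apps),
        "extra": sorted(device_apps - reference_apps),
    }
```

Matching fingerprints imply identical installed-application sets; otherwise the disparity record captures exactly which applications diverge.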
A request associated with access to a restricted computer resource by a computer application of a device is received via a first communication medium. It is determined that the request is provided by the device with an IP address not included in a group of authorized IP addresses. A registration secret is generated. A representation associated with the registration secret is provided via a second communication medium. A token signed using the registration secret is received. In response to successfully validating the token, a communication secret is generated and associated with an identifier associated with the device. The communication secret is provided for use by the computer application of the device to access the restricted computer resource.
A system includes persistent storage containing predefined user interface (UI) component templates and a representation of a web page that includes a runtime UI component configured to reserve an empty portion of the web page to be populated by UI components generated at runtime. The system also includes a processor configured to perform operations, including receiving, from a client device, a request for the web page, and determining, based on the request, that the web page includes the runtime UI component. The operations also include determining runtime parameter values associated with the request, and determining, based on the runtime parameter values and the predefined UI component templates, context-specific UI components to populate the empty portion of the web page. The operations further include generating a context-specific representation of the web page based on the context-specific UI components, and transmitting, to the client device, the context-specific representation.
G06F 8/38 - Creation or generation of source code for implementing user interfaces
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 9/451 - Execution arrangements for user interfaces
15.
Automated preventative controls in digital workflow
A system includes a processor and memory storing instructions that cause the processor to receive, from a client device, inputs defining associations between one or more control objectives and one or more policies, wherein the one or more control objectives define one or more functions to be performed to comply with the one or more policies. The processor may map the one or more policies associated with the one or more control objectives to an application environment and receive, from the client device or a different client device, a change set to an application in the application environment, wherein the change set comprises one or more modifications to the application. The processor may then determine whether the change set adheres to the one or more policies and restrict implementation of the change set in response to determining that the change set does not adhere to the one or more policies.
H04L 41/0823 - Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
An online cloud application platform for global navigation of multiple online cloud-based applications is provided. The online cloud application platform is configured to provide a plurality of online cloud-based applications. While providing a current application among the plurality of online cloud-based applications, an event identifying a function request decoupled from a static navigation route is received via a cross-application routing handler. Based on a set of dynamically modifiable configuration data of the online cloud application platform, which application among the plurality of online cloud-based applications to handle the function request is dynamically determined. Based on the set of dynamically modifiable configuration data of the online cloud application platform, a corresponding dynamically determined cross-application navigation route to handle the event is determined. The dynamically determined cross-application navigation route is provided within a context of the current application among the plurality of online cloud-based applications.
An embodiment may involve persistent storage containing a server-side log collected by a server device, where the server-side log includes a set of entries indicating a unique identifier, and wherein the unique identifier is assigned to a work item of a server-based application executed by the server device. The embodiment may further involve one or more processors configured to: receive, from a client device disposed upon a network, a client-side log, wherein the client-side log includes operational data related to usage of a client-based application executed by the client device; identify, from the operational data, the client-based application and one or more activities performed by the client-based application; determine that the one or more activities are related to the unique identifier; based on the one or more activities, determine an action that can be taken to improve efficacy of the server-based application; and write, to the persistent storage, a representation of the action.
Persistent storage may contain a list of discovery commands, the discovery commands respectively associated with lists of network addresses. A discovery validation application, when executed by one or more processors, may be configured to: read, from the persistent storage, the list of discovery commands and the lists of network addresses; for each discovery command in the list of discovery commands, transmit, by way of one or more proxy servers deployed external to the system, the discovery command to each network address in the respectively associated list of network addresses; receive, by way of the one or more proxy servers, discovery results respectively corresponding to each of the discovery commands that were transmitted, wherein the discovery results either indicate success or failure of the discovery commands; and write, to the persistent storage, the discovery results.
A skills ontology includes classes of data that define skills possessed or demonstrated by employees of an enterprise. Specifically, sets of data may be received and segmented into strings of text referred to as utterances. The utterances may be provided to an NLU service/engine, which uses NLU techniques to process the utterances to extract intents and/or entities. Skills may be identified from within the extracted entities. New skill records may be added to the skills ontology for newly extracted skills. Employee profiles of the skills ontology may also be updated based on these actions. Further, the skills ontology may be utilized to identify employees having the skills associated with tasks to be performed and assign the task to an employee for completion. Once the task has been completed, the skills profile of the employee in the skills ontology may be updated to reflect performance of the task.
Persistent storage may contain a list of discovery commands, the discovery commands respectively associated with lists of network addresses. A discovery validation application, when executed by one or more processors, may be configured to: read, from the persistent storage, the list of discovery commands and the lists of network addresses; for each discovery command in the list of discovery commands, transmit, by way of one or more proxy servers deployed external to the system, the discovery command to each network address in the respectively associated list of network addresses; receive, by way of the one or more proxy servers, discovery results respectively corresponding to each of the discovery commands that were transmitted, wherein the discovery results either indicate success or failure of the discovery commands; and write, to the persistent storage, the discovery results.
H04L 43/0817 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking functioning
H04L 41/12 - Discovery or management of network topologies
A determination is made to obfuscate a protected dataset including data elements that are to remain comparable with one another after the obfuscation. An obfuscation function for the protected dataset is selected wherein the obfuscation function is a monotonic one-way function. One or more parameters for the obfuscation function are automatically determined based at least in part on a secret value. Using one or more processors, the protected dataset is automatically obfuscated to generate an obfuscated version using the obfuscation function with the determined one or more parameters. Computer access to the obfuscated version of the protected dataset is provided as a comparable alternative for the protected dataset.
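One way to picture a monotonic transform with secret-derived parameters is the linear sketch below. Note that a linear map with a positive scale preserves ordering but is trivially invertible, so this illustrates only the order-preserving (comparability) property, not the one-way property; the parameter-derivation scheme is an assumption:

```python
import hashlib

def obfuscation_params(secret: bytes):
    """Derive a strictly positive scale and an offset from a secret value,
    so the resulting transform is monotonic (order-preserving)."""
    digest = hashlib.sha256(secret).digest()
    scale = 1 + int.from_bytes(digest[:8], "big") % 1000
    offset = int.from_bytes(digest[8:16], "big") % 10**6
    return scale, offset

def obfuscate(values, scale, offset):
    # Because scale > 0, x < y implies f(x) < f(y): obfuscated data
    # elements remain comparable with one another.
    return [scale * v + offset for v in values]
```

Consumers of the obfuscated version can sort and compare elements exactly as they would the protected dataset, without access to the original values.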
An indication to predict one or more additional steps to be added to a partially specified computerized workflow based at least in part on the partially specified computerized workflow is received. Text descriptive of at least a portion of the partially specified computerized workflow is generated. Machine learning inputs based at least in part on the descriptive text are provided to a machine learning model to determine an output text descriptive of the one or more additional steps to be added. One or more processors are used to automatically implement the one or more additional steps to be added to the partially specified computerized workflow.
Present embodiments are directed to a virtual agent with improved natural language understanding (NLU) capabilities. The disclosed virtual agent enables topic selection and topic changes during natural language exchanges with a user. The virtual agent is designed to select suitable topic flows to execute based on intents identified in received user utterances, including selection of an initial topic flow in response to a topic identified in a first user utterance, as well as switching between topic flows mid-conversation based on identified topic changes. The virtual agent is also capable of considering all intents and entities conveyed during the conversation, which enables the virtual agent to avoid prompting the user to provide redundant information. Furthermore, the virtual agent is capable of executing topic flows as part of a global topic flow, which enables the virtual agent to perform a number of predefined activities as part of each interaction with the user.
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
An example embodiment may involve persistent storage containing definitions of (i) assignments of bots to endpoints, (ii) software packages for execution by the bots, (iii) schedules for the bots to execute the software packages, and (iv) processes that associate the bots with the software packages and the schedules. This embodiment may also involve processors configured to: receive, from a computing device, a request for bot deployment, wherein the computing device includes a bot runtime; identify, in the processes, a bot assigned to an endpoint that is associated with the computing device, wherein the bot is associated with a software package and a schedule; and transmit, to the computing device, data including a representation of the bot, a copy of the software package, and a copy of the schedule, wherein reception of the data causes the bot to execute, using the bot runtime, the software package in accordance with the schedule.
An example embodiment may involve identifying local traces of related events within a plurality of event data repositories, wherein each of the event data repositories is respectively associated with a software application; using a clustering model, assigning the local traces into clusters; determining positive rules that define when pairs of the local traces are linked to a common global trace, and negative rules that define when the pairs are linked to different global traces; linking the pairs into global traces; iteratively training a similarity model to project the local traces into a vector space such that the pairs that are linked to common global traces exhibit a greater similarity with one another than the pairs that are linked to different global traces; and based on the similarity model as trained, linking further local traces to the global traces.
A system includes a machine learning model configured to, based on textual representations of queries, classify the queries among query intents, which may be mapped to predetermined solutions to problems. The system also includes a software application configured to receive a query that includes a textual representation of a problem, and generate, by the machine learning model and based on the textual representation of the query, a query intent therefor. When the query intent is determined to be one of the query intents mapped to a predetermined solution, the predetermined solution for the query may be selected from the predetermined solutions based on the mapping. When the query intent is determined to be a no-solution query intent, the query may be added to a no-solution query set and, when this set accumulates a threshold number of queries, a solution to the problem may be requested from a technician.
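The intent-to-solution mapping with a no-solution query set can be sketched as follows; the class name, threshold value, escalation marker, and knowledge-base identifiers are assumptions for illustration:

```python
NO_SOLUTION = "no_solution"

class SolutionRouter:
    """Map classified query intents to predetermined solutions; queue
    queries with no mapped solution until enough accumulate to request
    a solution from a technician."""

    def __init__(self, intent_to_solution, threshold=3):
        self.intent_to_solution = intent_to_solution
        self.threshold = threshold
        self.no_solution_queries = []

    def handle(self, query, intent):
        if intent in self.intent_to_solution:
            return self.intent_to_solution[intent]
        self.no_solution_queries.append(query)
        if len(self.no_solution_queries) >= self.threshold:
            return "escalate_to_technician"
        return None
```

Queries with a recognized intent get an immediate predetermined solution; unrecognized ones accumulate until the threshold triggers escalation.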
A user provided text description of at least a portion of a desired workflow is received. Context information associated with the desired workflow is determined. Machine learning inputs based at least in part on the text description and the context information are provided to a machine learning model to determine an implementation prediction for the desired workflow. One or more processors are used to automatically implement the implementation prediction as a computerized workflow implementation of at least a portion of the desired workflow.
A datacenter that hosts a client instance may receive an input to perform discovery against a containerized application orchestration infrastructure that includes computing clusters associated with one or more resource providers. The datacenter retrieves cluster data associated with each computing cluster from the one or more resource providers, automatically creates respective discovery schedules for the computing clusters based on the cluster data, automatically executes the respective discovery schedules for the computing clusters, automatically retrieves respective authentication bearer tokens associated with the computing clusters, automatically performs respective discovery processes against the computing clusters using the respective authentication bearer tokens, and stores the resource data received from the computing clusters in a database.
Content of a dialog between at least two communication parties to resolve a task is received. A specification associated with at least a portion of eligible steps of a workflow is received. Machine learning input data is determined based on the received content of the dialog and the received specification. The determined machine learning input data is processed using a trained machine learning model executing on one or more hardware processors to automatically predict a sequence of workflow steps representing the dialog.
A system could include persistent storage containing application components. A plurality of software applications could be installed on the system. The software applications could be respectively associated with context records that include references to application components that provide some behavior or data for the software applications. The system could also include processors configured to perform operations. The operations could include receiving a request to generate a topology map for a software application and identifying, based on a context record for the software application, a subset of application components that provide some behavior or data for the software application. The operations could further include determining relationship types between pairs of application components and generating a topology map for the software application. The subset of application components may be represented as nodes in the topology map, and edges between the nodes may be defined by the relationship types between corresponding pairs of application components.
A web content page is provided, wherein the web content page is configured to dynamically provide a new web component streamed from a server after the web content page has been initially loaded by a client. An indication associated with a desired web component is received. The desired web component among a plurality of web components developed on a platform-as-a-service environment separately from the web content page is obtained. The desired web component is streamed to the web content page.
An identification of a specification that identifies one or more data sources is received. The one or more data sources are respectively associated with one or more database queries. Each of the one or more database queries is associated with a different embedded screen. An end-user application that is configured to generate selectable user interface elements for the one or more different embedded screens is generated. Generating the end-user application is based on the specification. In response to selection of a particular selectable user interface element of the selectable user interface elements, an embedded screen associated with the particular selectable user interface element is identified, and a user interface including the identified embedded screen is provided.
Origin text content to be analyzed using natural language processing is received. A two-dimensional item sequence representation for at least a portion of the received origin text content is generated. Using one or more processors, one or more evaluation metrics are determined based on an analysis of the two-dimensional item sequence representation. A reduced version of the origin text content is automatically generated based on the one or more evaluation metrics to assist in satisfying a constraint of a natural language processing model. The reduced version of the origin text content is used as an input to the natural language processing model.
Origin text content to be analyzed using natural language processing is received. The received origin text content is preprocessed using one or more processors including by vectorizing at least a portion of the received origin text content and identifying a closest matching centroid to automatically generate a reduced version of the origin text content to assist in satisfying a constraint of a natural language processing model. The reduced version of the origin text content is used as an input to the natural language processing model. A result of the natural language processing model is provided for use in managing a computerized workflow.
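A minimal version of the vectorize-and-match-centroid reduction might look like this, using a bag-of-words vector over a fixed vocabulary in place of learned embeddings; the vocabulary, centroid labels, and selection rule are all assumptions:

```python
import math

def vectorize(sentence, vocab):
    """Bag-of-words vector over a fixed vocabulary (a stand-in for the
    vectorization step; real systems would use learned embeddings)."""
    words = sentence.lower().split()
    return [words.count(term) for term in vocab]

def closest_centroid(vec, centroids):
    """Label of the centroid nearest to the vector."""
    return min(centroids, key=lambda label: math.dist(vec, centroids[label]))

def reduce_text(sentences, vocab, centroids, keep_label):
    """Keep only sentences whose closest centroid carries the label of
    interest, shrinking the origin text to help satisfy an NLP model's
    input-length constraint."""
    return [
        s for s in sentences
        if closest_centroid(vectorize(s, vocab), centroids) == keep_label
    ]
```

Sentences closer to the "noise" centroid are dropped, and the surviving reduced text becomes the input to the natural language processing model.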
Executing and managing flow plans by performing at least the following: receiving an indication to initiate a task flow including a plurality of discrete but related operations at a customer instance environment of a cloud-based computing platform; obtaining a definition of the task flow identifying run-time requirements for each of the plurality of operations; determining a first execution environment for the first of the plurality of operations; initiating execution of the first operation in the first execution environment; and determining the proper execution environment for subsequent operations of the task flow until all operations of the task flow are complete. Factors, such as look-ahead optimization, environmental operational capabilities, access and security requirements, current load, future load, etc. may be considered when determining the proper execution environment for a given operation. Operations may be executed in environments hosted in the public cloud or in environments present in a dedicated private network.
A computer-generated data entry is received. The computer-generated data entry is segmented into a set of tokens. A plurality of different token permutation groupings are determined. Each of the different token permutation groupings includes a different subset of tokens from the set of tokens of the computer-generated data entry. For the computer-generated data entry, a plurality of token permutation grouping identifiers associated with at least a portion of the plurality of different token permutation groupings is obtained. It is determined whether the computer-generated data entry belongs to any data entry cluster among a plurality of previously identified data entry clusters based on a search performed using the token permutation grouping identifiers of the computer-generated data entry.
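The token permutation groupings can be sketched with fixed-size token subsets hashed into identifiers; the subset size, hash choice, and cluster-matching rule are assumptions for illustration:

```python
import hashlib
from itertools import combinations

def grouping_ids(entry: str, size: int = 2):
    """Hash each size-k subset of the entry's tokens into a grouping
    identifier; entries sharing phrasing share identifiers."""
    tokens = entry.split()
    return {
        hashlib.sha256(" ".join(group).encode()).hexdigest()
        for group in combinations(tokens, size)
    }

def matching_cluster(entry: str, clusters, min_shared: int = 1):
    """Return the first previously identified cluster sharing at least
    min_shared grouping identifiers with the entry, if any."""
    ids = grouping_ids(entry)
    for cluster_id, cluster_ids in clusters.items():
        if len(ids & cluster_ids) >= min_shared:
            return cluster_id
    return None
```

Two log entries that differ only in a hostname still share most token-pair identifiers, so the new entry is assigned to the existing cluster.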
A cloud-based platform may receive an indication of one or more events associated with a service provider and modify a configuration associated with providing a schedule of respective service appointments offered by the service provider based on the indication of the one or more events. The cloud-based platform may then receive a request for a new service appointment after modifying the configuration and, in response to receiving the request for the new service appointment, determine a number of scheduled appointments associated with a first time window, determine whether the number of scheduled appointments is less than a maximum number of appointments associated with the first time window, and automatically schedule the new service appointment during the first time window in response to determining that the number of scheduled appointments is less than the maximum number of appointments associated with the first time window.
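The capacity check at the core of the scheduling decision might be sketched as follows; the window representation (a plain string label) and the function name are illustrative assumptions.

```python
def schedule_appointment(scheduled, window, max_per_window):
    # Count appointments already booked in the requested time window and
    # auto-schedule the new one only while capacity remains.
    count = sum(1 for w in scheduled if w == window)
    if count < max_per_window:
        scheduled.append(window)
        return True
    return False
```

In the described system, `max_per_window` would itself be part of the configuration modified in response to service-provider events.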
H04L 41/5006 - Creating or negotiating SLA contracts, guarantees or penalties
H04L 41/5061 - Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the interaction between service providers and their network customers, e.g. customer relationship management
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
39.
LOCATION-BASED PATTERN DETECTION FOR PASSWORD STRENGTH
A password is received for evaluation. For each of at least a portion of characters included in the password, a corresponding location coordinate of a corresponding physical key location on a physical input device key layout is determined. Using the determined location coordinates, an ordered series of data representing the password is generated. One or more processors are used to determine a strength of the password including by utilizing the generated ordered series of data to perform an analysis based on location pattern detection.
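The coordinate-series analysis could look like the following sketch, assuming a simplified three-row QWERTY layout and treating same-row runs of adjacent keys (e.g. "asdf") as the detected location pattern; this is only one of many possible pattern detectors and is not taken from the abstract.

```python
# Simplified QWERTY layout: each character maps to a (row, column) key location.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
COORDS = {ch: (r, c) for r, row in enumerate(ROWS) for c, ch in enumerate(row)}

def to_coordinates(password):
    # Ordered series of physical key-location coordinates for the password.
    return [COORDS[ch] for ch in password.lower() if ch in COORDS]

def has_adjacent_run(password, min_run=4):
    # Flag straight runs of physically adjacent keys -- a weakness that
    # character-class strength checks cannot see.
    coords = to_coordinates(password)
    run = 1
    for (r1, c1), (r2, c2) in zip(coords, coords[1:]):
        if r1 == r2 and abs(c1 - c2) == 1:
            run += 1
            if run >= min_run:
                return True
        else:
            run = 1
    return False
```

A full evaluator would also score diagonal and columnar runs and combine the result with conventional entropy estimates.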
A request is received from a client for data to render a modular contained widget component of an application user interface. Whether the requested data is cached at an intermediary server is determined at the intermediary server, wherein the requested data is based at least in part on one or more database records stored at a backend server. In response to a determination that the requested data is cached, the requested data is obtained from an identified cache instance that cached the requested data. The cached requested data is based at least in part on the one or more database records provided by the backend server to the intermediary server to maintain an updated version of the requested data at the identified cache instance. The requested data is provided to the client from the intermediary server.
A controller computing device may include one or more processors and memory containing controller data representing controller computing device capabilities. The one or more processors may be configured to transmit on a first instance of a request-response protocol, a controller request, including the controller data, to an agent computing device. The controller computing device may then receive, from the agent computing device, an agent request on a second instance of the request-response protocol. The agent request may include agent data representing agent computing device capabilities. The controller computing device may store the agent data in the memory, and transmit on the second instance of the request-response protocol, to the agent computing device, a controller response acknowledging receipt of the agent request. The controller computing device may then receive on the first instance of the request-response protocol from the agent computing device, an agent response acknowledging receipt of the controller request.
An example embodiment may involve identifying local traces of related events within a plurality of event data repositories, wherein each of the event data repositories is respectively associated with a software application; using a clustering model, assigning the local traces into clusters; determining positive rules that define when pairs of the local traces are linked to a common global trace, and negative rules that define when the pairs are linked to different global traces; linking the pairs into global traces; iteratively training a similarity model to project the local traces into a vector space such that the pairs that are linked to common global traces exhibit a greater similarity with one another than the pairs that are linked to different global traces; and based on the similarity model as trained, linking further local traces to the global traces.
Embodiments of the present disclosure are directed to systems and methods for managing a database and performing database operations. An exemplary method in accordance with embodiments of this disclosure comprises: receiving a request to perform one or more database operations on a dataset comprising one or more data items; inputting the dataset into a statistical model, wherein the statistical model is configured to identify one or more storage locations associated with the one or more data items based on a similarity between one or more properties of the one or more data items; receiving the one or more storage locations associated with the one or more data items; updating the one or more data items based on the received one or more storage locations; and performing the one or more database operations on the one or more updated data items based on the one or more storage locations.
G06F 16/22 - Indexing; Data structures therefor; Storage structures
G06F 40/44 - Statistical methods, e.g. probability models
G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
G06F 30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
An embodiment may include determining user interface (UI) screens of an application that have been navigated to by way of a UI of the application. The embodiment may also include receiving an interaction with a UI component of a current UI screen of the UI screens and, based on receiving the interaction, determining a next UI screen of the UI screens that is expected to be revisited after the current UI screen. The embodiment may additionally include, prior to receiving a request to navigate to the next UI screen, transmitting, to a server device, a query for an updated version of the next UI screen, receiving, from the server device, a response including the updated version of the next UI screen, and, based on receiving the request to navigate to the next UI screen, displaying, based on the response, the updated version of the next UI screen.
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
A quarantine system could be disposed between an outer firewall and an inner firewall. The quarantine system may include persistent storage containing mappings between computing devices disposed within the inner firewall and data sources disposed outside the outer firewall. The quarantine system may include one or more processors configured to perform operations that include requesting and receiving, based on the mappings, a software-related update from a data source, the software-related update being targeted for deployment on the computing devices. The operations may also include assigning the software-related update for review by a group of one or more agents authorized to approve or reject the software-related update. The operations may also include receiving an indication that the software-related update has been approved by the one or more agents and, responsive to receiving the indication, transmitting, based on the mappings, the software-related update to a recipient device within the inner firewall.
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 41/082 - Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
A first type of service map may be converted to a second type of a service map by adding a conversion tag to a set of configuration items (CIs) presented by the service map. The conversion tag includes data that may be used to link historical information associated with the service map of the first type, such as information related to incidents, alerts, change requests, and other events, to the second type. A second service map may be generated using the conversion tags and/or tag-based filtering processes such that the second service map displays different CIs as compared to the first service map.
A program is provided to automatically train, using a training dataset, a machine learning model for detecting anomalies. The machine learning model is automatically applied to a validation dataset to determine anomaly detection results. A histogram of the anomaly detection results of the machine learning model is automatically generated. The histogram is automatically analyzed, and a first peak and a second peak of the histogram are automatically identified. A threshold activation of the machine learning model is automatically determined based at least in part on the automatically identified second peak of the histogram.
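A sketch of the two-peak threshold selection, assuming a fixed-width histogram and defining a peak as a bin strictly above both neighbors; the abstract specifies neither choice, and placing the threshold at the second peak's left bin edge is likewise an illustrative assumption.

```python
def histogram(values, bins, lo, hi):
    # Fixed-width histogram over [lo, hi).
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    return counts, width

def find_peaks(counts):
    # Local maxima: bins strictly above both neighbors (edges compared
    # against their single neighbor).
    peaks = []
    for i, c in enumerate(counts):
        left = counts[i - 1] if i > 0 else -1
        right = counts[i + 1] if i < len(counts) - 1 else -1
        if c > left and c > right:
            peaks.append(i)
    return peaks

def activation_threshold(values, bins=10, lo=0.0, hi=1.0):
    # Place the activation threshold at the second peak's bin edge,
    # separating the bulk of normal scores from the anomalous mass.
    counts, width = histogram(values, bins, lo, hi)
    peaks = find_peaks(counts)
    if len(peaks) < 2:
        return None
    return lo + peaks[1] * width
```

Returning `None` when fewer than two peaks are found leaves the caller to fall back to a default threshold.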
A server receives a first hypertext transfer protocol (HTTP) request from a client device requesting content associated with a webpage. The server retrieves a plurality of cache keys associated with respective sets of application metadata identified in the first HTTP request from an application metadata database and transmits the cache keys to the client device. The server receives a second HTTP request from the client device identifying one or more cache keys that are not stored in a local HTTP cache of the client device. The server retrieves the sets of application metadata corresponding to the missing cache keys from the application metadata database and transmits the application metadata to the client device.
A server receives a first hypertext transfer protocol (HTTP) request from a client device that requests first and second items of content associated with a webpage and applies a defer directive to the second item. The server retrieves, from a database, via a single worker thread, first data associated with the first item and transmits a first message comprising the first data associated with the first item. The server then retrieves, from the database, via the worker thread, second data associated with the deferred second item and transmits a second message comprising the second data associated with the second item.
H04L 41/12 - Discovery or management of network topologies
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
A chat message is received from a user to a primary virtual agent service. A secondary virtual agent service to handle the chat message is automatically evaluated and selected. The secondary virtual agent service is selected from a plurality of candidate secondary virtual agent services that includes at least one virtual agent service provided by a third-party entity external to an entity providing the primary virtual agent service. The chat message is transformed from a first format of the primary virtual agent service to a second format of the selected secondary virtual agent service. The chat message is forwarded in the second format to the selected secondary virtual agent service.
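The evaluate-transform-forward step might be sketched as follows; the message schemas, field names, and the caller-supplied scoring function are all illustrative assumptions rather than details from the abstract.

```python
def to_secondary_format(message, service):
    # Transform the primary service's message shape into the (assumed)
    # schema expected by the selected secondary virtual agent service.
    return {
        "service": service,
        "payload": {"text": message["body"], "lang": message.get("locale", "en")},
    }

def route_message(message, candidates, score):
    # Evaluate the candidate secondary services with a scoring function
    # and forward the transformed message to the best-scoring one.
    best = max(candidates, key=score)
    return best, to_secondary_format(message, best)
```

In practice the scoring function would weigh intent classification results and each candidate's declared capabilities, including those of third-party services.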
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 40/103 - Formatting, i.e. changing of presentation of documents
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 3/0483 - Interaction with page-structured environments, e.g. book metaphor
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 9/451 - Execution arrangements for user interfaces
An asset, such as a module, may be used to provide access to a resource. The asset may include identifiers that indicate a type, version, or format of the asset. At least in some instances, an enterprise may desire to store and maintain assets of a particular type. Accordingly, an asset may be converted after a request is received for the resource to which the asset provides access. The converted assets may be output to a cache memory of a device attempting to access the resource and/or saved in a database to provide to additional computing devices attempting to access the resource.
H04L 67/5683 - Storage of data provided by user terminals, i.e. reverse caching
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
An embodiment may involve persistent storage containing a machine learning trainer application configured to apply one or more learning algorithms. One or more processors may be configured to: obtain alert data from one or more computing systems; generate training vectors from the alert data, wherein elements within each of the training vectors include: results of a set of statistics applied to the alert data for a particular computing system of the one or more computing systems, and an indication of whether the particular computing system is expected to fail given its alert data; train, using the machine learning trainer application and the training vectors, a machine learning model, wherein the machine learning model is configured to predict failure of a further computing system based on operational alert data obtained from the further computing system; and deploy the machine learning model for production use.
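Building one training vector from a system's alert data might look like this minimal sketch; the particular summary statistics chosen (alert volume, worst severity, mean severity) are assumptions for illustration, not the set named by the embodiment.

```python
import statistics

def training_vector(alerts, failed):
    # One vector per computing system: summary statistics over its alert
    # severities, plus the failure indication used as the training target.
    severities = [a["severity"] for a in alerts]
    features = [
        len(severities),              # alert volume
        max(severities),              # worst alert seen
        statistics.mean(severities),  # average severity
    ]
    return features, 1 if failed else 0
```

The resulting (features, label) pairs are what the machine learning trainer application would consume to fit the failure-prediction model.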
A system includes an application configured to: receive, from a client device, a query for a first web page of a plurality of web pages; generate a response including a shared content that is common to a plurality of web pages, a first page-specific content that defines the first web page, and a predefined token separating the shared content from the first page-specific content; and transmit, to the client device, the response. Reception of the response is configured to cause the client device to: write, to a cache memory, the shared content, render the first web page based on the response, and in response to reception of a subsequent event that references a second web page, read the shared content from the cache memory and begin rendering the shared content before receiving, from the server application, a second page-specific content that defines the second web page.
Persistent storage may contain: (i) a database table containing entries, (ii) a definition of a communication endpoint of a remote system, and (iii) outbound flow processing. One or more processors may be configured to: detect a state change associated with a local entry in the database table; read, from the database table, a set of data representing the local entry; transform, using the outbound flow processing, the set of data into a format receivable by the remote system; create, for the set of data, a correlation record that contains a local correlation identifier, wherein the correlation record specifies the local entry; transmit, to the remote system, the set of data as transformed and the local correlation identifier; receive, from the remote system and for the set of data, a remote correlation identifier; add, to the correlation record, the remote correlation identifier; and write, to a correlation table, the correlation record.
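The outbound correlation flow could be sketched as below; the record fields, identifier formats, and callable parameters are illustrative assumptions standing in for the database table, outbound flow processing, and remote system of the embodiment.

```python
def sync_entry(entry, transform, send_to_remote, correlation_table):
    # Outbound flow: transform the changed local entry, create a correlation
    # record keyed by a local identifier, transmit the transformed data, then
    # record the remote identifier assigned by the receiving system.
    local_id = f"local-{entry['sys_id']}"
    payload = transform(entry)                      # outbound flow processing
    remote_id = send_to_remote(payload, local_id)   # remote system round trip
    record = {"local_id": local_id, "remote_id": remote_id, "entry": entry["sys_id"]}
    correlation_table.append(record)
    return record
```

Keeping both identifiers in one correlation record is what lets later inbound changes from the remote system be matched back to the originating local entry.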
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/25 - Integrating or interfacing systems involving database management systems
G06F 16/22 - Indexing; Data structures therefor; Storage structures
An embodiment may involve storage containing incident logs and mappings between incident logs and vector representations generated by a machine learning (ML) model. The embodiment may further involve one or more processors configured to: receive, from a client device, a classification request corresponding to an additional incident log; transmit, to the ML model, additional values as appearing in the additional incident log, wherein reception of the additional values causes the ML model to generate an additional vector representation of the additional incident log; obtain confidence measurements respectively representing similarities between the additional vector representation and each of the vector representations corresponding to the incident logs; determine, based on the confidence measurements, a set of one or more incident logs that are semantically relevant to the additional incident log; and transmit, to the client device, representations of the one or more incident logs and their corresponding confidence measurements.
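The confidence measurement could, for example, be a cosine similarity between vector representations, as in this sketch; the abstract does not name the similarity measure, and the threshold is an assumed cut-off for "semantically relevant".

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def relevant_incidents(query_vec, stored, threshold=0.8):
    # Score every stored incident-log vector against the new log's vector;
    # return (log_id, confidence) pairs above the threshold, best first.
    scored = [(log_id, cosine(query_vec, vec)) for log_id, vec in stored.items()]
    return sorted(
        [(log_id, s) for log_id, s in scored if s >= threshold],
        key=lambda p: p[1],
        reverse=True,
    )
```

The vectors themselves would come from the ML model described above; only the comparison step is sketched here.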
A discovery application on a computing system is provided. The discovery application receives a user input, which is for discovery of resources associated with a cloud operating system of a cloud computing system. The user input includes an authentication credential and account information associated with the cloud operating system. Based on the received input, the discovery application executes a discovery pattern comprising operations for the discovery of resources. The cloud operating system includes a group of services to access such resources. At least one of the operations corresponds to an API call to an API endpoint associated with a service of the group of services. The discovery application receives a response to the API call from the cloud operating system. The response includes a payload of information associated with the resources. The discovery application updates, based on the received response, one or more configuration items in a configuration management database.
An embodiment may involve: transmitting, by a non-production computational instance and to a central computational system, a configuration for a service provided by the central computational system, wherein the non-production computational instance is arranged to test the configuration; appending, to the configuration at the non-production computational instance, a synchronization identifier to indicate that the configuration has been synchronized with the central computational system; receiving, by a production computational instance and from the non-production computational instance, a copy of the configuration; reading, by the production computational instance, the synchronization identifier from the copy of the configuration; determining that the synchronization identifier is not reflected as part of a synchronization history maintained at the production computational instance; and, in response to determining that the synchronization identifier is not reflected in the synchronization history, transmitting, by the production computational instance, the copy of the configuration to the central computational system.
An embodiment may involve web page metadata that defines a web page, first sub-page metadata that defines a first sub-page, and second sub-page metadata that defines a second sub-page, wherein the web page metadata includes specification of a viewport, and wherein the viewport is associated with an identifier. One or more processors may be configured to: receive a request for the web page; resolve the web page metadata into web content, wherein resolving the web page metadata includes: (i) determining, based on the identifier, a route associated with the viewport, (ii) determining, based on the route, a set of conditions associated with the viewport, (iii) determining that a particular condition is satisfied, wherein the particular condition is associated with the first sub-page, and (iv) placing, based on the particular condition being satisfied, the first sub-page metadata in the viewport; and transmit the web content.
An embodiment involves receiving a request specifying a particular process, wherein an event table associates event identifiers of events, process identifiers of processes that generated the events, timestamps of times when the events occurred, states of the processes at the times, and references to related processes; generating nodes of a graph, wherein the particular process and each of its related processes are represented by entity nodes annotated with respective process identifiers, and the events are represented by event nodes annotated with respective event identifiers; generating edges between the entity nodes and the event nodes for which the events of the event nodes either were generated by the processes represented by the entity nodes or refer to the processes represented by the entity nodes; and generating edges between pairs of the event nodes whose events were generated by a common process and occurred sequentially according to their timestamps.
An embodiment may involve a plurality of configuration items and an unmatched configuration item, wherein the unmatched configuration item is associated with a first set of attribute values and a first vulnerability, wherein the first vulnerability is associated with a first set of field values. The embodiment may further involve one or more processors configured to: (i) determine that the unmatched configuration item and a particular configuration item both represent a specific component, wherein the particular configuration item is associated with a second set of attribute values and a second vulnerability, wherein the second vulnerability is associated with a second set of field values; (ii) merge the unmatched configuration item into the particular configuration item; (iii) determine that the first vulnerability and the second vulnerability both represent a specific vulnerability; (iv) merge the first vulnerability into the second vulnerability; and (v) delete the unmatched configuration item and the first vulnerability.
A server configured to provide web-based services over a network may include one or more processors configured to receive a request from a user device for access to a web-based service. In response, the server may download, to the user device, information for rendering an initial web resource by a web client of the user device, and software instructions configured to cause the web client to: intercept a web request to the server; determine, based on the web request, a main web document and ancillary web documents designated to be downloaded for rendering a particular web resource; send, to the server, the web request for the main web document and, without waiting for reception of the main web document, send respective document requests for each of the ancillary web documents; receive the main web document and the ancillary web documents; and render the particular web resource using the received documents.
A system and method for generation of records for execution of process workflows is provided. The system receives a first input associated with a record creation activity and extracts, from one or more database tables, process execution data associated with the process workflow and configuration data associated with the record creation activity. The system controls an electronic device to render a second UI based on the configuration data and the process execution data. The second UI includes a record creation form and a set of UI elements representing activities associated with the process workflow in a pending state of execution. The system further receives a second input and triggers execution of the activities based on the received second input. The set of UI elements on the second UI is updated based on the triggered execution to represent at least one of the activities in an execution state.
An indication associated with a request to access a protected object by a subject is received. Using one or more processors, application level behavioral patterns of the subject, context of the request by the subject, usage patterns associated with the protected object, and a current system state are automatically analyzed using one or more machine learning models to determine an analysis result associated with whether to grant the subject access to the protected object. An access control mechanism for the protected object is automatically modified based on the analysis result.
A discovery application on a computing system is provided. The discovery application executes a discovery pattern comprising a sequence of operations for discovery of resources within a load balancing system, wherein execution of the discovery pattern corresponds to making one or more application programming interface (API) calls to an API associated with a network address of the load balancing system; receives a response to the one or more API calls from the load balancing system, wherein the response comprises a payload of information associated with the resources; and updates, based on the response and in a configuration management database (CMDB), one or more configuration items (CIs) associated with the resources.
A computer-implemented method of presenting a graphical user interface (GUI) includes receiving an indication of a data object related to an enterprise and identifying one or more data classifications related to the data object and one or more relationship types between the data object and the one or more data classifications. Additionally, the computer-implemented method includes generating and presenting the GUI via a client device. The GUI includes a central section indicating the data object and one or more sections disposed around the central section. The one or more sections indicate the one or more data classifications and the one or more relationship types between the data object and the one or more data classifications.
An embodiment may involve receiving an account identifier, wherein the account identifier is associated with a service account; transmitting a first API query to a remote computing system based on the account identifier; receiving first information associated with a first resource based on the first API query, wherein the first resource corresponds to a cloud orchestrator associated with a first service provided by the remote computing system; transmitting a first set of queries to the remote computing system based on the first information; receiving second information about a cluster of resources, associated with the first resource, based on the first set of queries, wherein a set of services related to the first service are deployed in one or more resources of the cluster of resources; generating a relationship map between the first resource and the cluster of resources based on the second information; and outputting the relationship map.
H04L 41/0853 - Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
H04L 41/0859 - Retrieval of network configuration; Tracking network configuration history by keeping history of different configuration generations or by rolling back to previous configuration versions
One or more associated identifiers are determined based on one or more associated tag types of an interactable element of web content. The determined one or more associated identifiers are associated with the interactable element. Based on the association of the determined one or more associated identifiers with the interactable element, the interactable element is matched to a received speech input. An action is performed with respect to the interactable element based on the matching.
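The matching step above can be sketched in a few lines. This is a minimal illustration, not the patented method: it assumes that identifiers are collected from a small set of tag attributes (aria-label, name, inner text are assumptions) and that matching uses simple string similarity between identifiers and the spoken phrase.

```python
from difflib import SequenceMatcher

def identifiers_for(element):
    """Collect identifiers from the element's associated tag types.

    `element` is a dict of tag attributes; which attributes serve as
    identifier sources (aria-label, name, text) is an assumption here.
    """
    tag_types = ("aria-label", "name", "text")
    return [element[t].lower() for t in tag_types if element.get(t)]

def match_element(elements, speech_input):
    """Return the interactable element whose identifiers best match the speech."""
    spoken = speech_input.lower().strip()
    best, best_score = None, 0.0
    for element in elements:
        for ident in identifiers_for(element):
            score = SequenceMatcher(None, ident, spoken).ratio()
            if score > best_score:
                best, best_score = element, score
    return best

elements = [
    {"text": "Submit order", "name": "submit_btn"},
    {"text": "Cancel", "aria-label": "Cancel order"},
]
matched = match_element(elements, "cancel the order")
# matched is the "Cancel" element, via its aria-label identifier
```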
Systems, methods, and media are used to identify phishing attacks. A notification of a phishing attempt with a parameter associated with a recipient of the phishing attempt is received at a security management node. In response, an indication of the phishing attempt is presented in a phishing attempt search interface. The phishing attempt search interface may be used to search for additional recipients, identify which recipients have been successfully targeted, and provide a summary of the recipients. Using this information, appropriate security measures in response to the phishing attempt for the recipients may be performed.
H04L 41/22 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
H04L 51/42 - Mailbox-related aspects, e.g. synchronisation of mailboxes
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
In accordance with the present approach, a library management system identifies third-party libraries that developers request to incorporate into a software release. The library management system may determine whether a master ticket or usage ticket for a new third-party library exists. If a master or usage ticket does not already exist and is not approved for the third-party library, the library management system may automatically analyze the third-party library to determine whether it corresponds to third-party libraries that are already approved and stored in a central repository. After approval of a master ticket, the third-party library may be incorporated into the central repository and referenced by subsequent usage tickets that are particular to an individual software release. If not approved, the library management system provides the third-party library to a manual approval system. Moreover, the library management system provides efficient reporting of and access to statuses of the requested third-party libraries.
A graphical interface is provided by a second-party service provider for specifying parameters for content of a third-party configured for a first-party, wherein the first-party, the second-party service provider, and the third-party are different parties. The parameters for the content of the third-party are received from the first-party. A code snippet that references a service resource of the second-party service provider is provided to be included in a source encoding of web content of the first-party. The code snippet provides, in an overlay on the web content of the first-party, the content of the third-party obtained using the received parameters of the first-party.
A computer-implemented method includes obtaining a plurality of textual records divided into clusters and a residual set of the textual records, where a machine learning (ML) clustering model has divided the plurality of textual records into the clusters based on a similarity metric. The method also includes receiving, from a client device, a particular textual record representing a query and determining, by way of the ML clustering model and based on the similarity metric, that the particular textual record does not fit into any of the clusters. The method additionally includes, in response to determining that the particular textual record does not fit into any of the clusters, adding the particular textual record to the residual set of the textual records. The method can additionally include identifying, by way of the ML clustering model, that the residual set of the textual records contains a further cluster.
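The residual-set behavior above can be illustrated with a toy assignment routine. This is a sketch, not the actual ML clustering model: token-overlap (Jaccard) similarity stands in for the model's similarity metric, and the 0.5 threshold and single-linkage cluster comparison are assumptions.

```python
def jaccard(a, b):
    """Token-overlap similarity; a stand-in for the model's similarity metric."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def assign(record, clusters, threshold=0.5):
    """Return the index of the best-fitting cluster, or None (residual)."""
    best_idx, best_sim = None, 0.0
    for idx, members in enumerate(clusters):
        # Similarity to a cluster = max similarity to any member (single-linkage).
        sim = max(jaccard(record, m) for m in members)
        if sim > best_sim:
            best_idx, best_sim = idx, sim
    return best_idx if best_sim >= threshold else None

clusters = [
    ["email login fails", "cannot log in to email"],
    ["vpn connection drops", "vpn keeps disconnecting"],
]
residual = []

query = "printer out of toner"
idx = assign(query, clusters)
if idx is None:
    residual.append(query)   # does not fit any cluster: add to the residual set
else:
    clusters[idx].append(query)
```

Once enough similar records accumulate in the residual set, re-running the clustering model over that set can surface a further cluster, as the method describes.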
A system includes a database containing database tables. The system also includes one or more processors configured to: (i) determine, for a software application, a set of the database tables containing information used by the software application; (ii) for an item associated with the software application, query the set of the database tables for entries related to the item, wherein the entries are in a first language; (iii) generate, for display, a representation of a first pane and a second pane, wherein the first pane contains the entries, and wherein the second pane contains data input elements for translations of the entries into a second language; (iv) transmit the representation; (v) receive data entered into the data input elements of the second pane; and (vi) store, in the set of the database tables, the data entered into the data input elements as a translation to the second language.
The present disclosure relates to a data augmentation system and method that uses a large pre-trained encoder language model to generate new, useful intent samples from existing intent samples without fine-tuning. In certain embodiments, for a given class (intent), a limited number of sample utterances of a seed intent classification dataset may be concatenated and provided as input to the encoder language model, which may generate new sample utterances for the given class (intent). Additionally, when the augmented dataset is used to fine-tune an encoder language model of an intent classifier, this technique improves the performance of the intent classifier.
A system for root cause analysis based on process optimization data is provided. The system receives log data associated with a first trace between a first activity and a second activity of a process. The system further determines a state of inefficiency between the first activity and the second activity based on the received log data. The system further applies a first machine learning (ML) model on the received log data. The system further determines a first label and a first value to be associated with the first trace of the process based on the application of the first ML model. The system further generates presentation data associated with the determined state of inefficiency of the first trace based on the determination of the first label and the first value and further transmits the generated presentation data to a user device.
Persistent storage may contain a plurality of user interface (UI) component definitions. One or more processors may be configured to: receive, by way of a platform UI builder, selection of a UI component definition from the plurality of UI component definitions; bind, by way of input entered into the platform UI builder, data to the UI component definition, wherein the data is from a data source, and wherein the input is a programmatic statement that references the data source or a set of values that references the data source; generate, by way of the platform UI builder, metadata representing the input; create, by a platform runtime, a UI component that incorporates the data into the UI component definition in accordance with the metadata; generate, by the platform runtime, a graphical user interface including the UI component; and provide, for display on a client device, a representation of the graphical user interface.
An example embodiment may involve a main database; a main memory; and one or more processors configured to: retrieve, by a data collector application, records from the main database, wherein the data collector application includes an embedded database; aggregate, by the data collector application, values in the records relating to a key performance indicator (KPI) to form partial KPI data stored in one or more blocks of the main memory; determine, by the data collector application, that utilization of the main memory exceeds a pre-defined threshold; in response to the utilization of the main memory exceeding the pre-defined threshold, write, by the data collector application, the partial KPI data to a row of the embedded database; and release, by the data collector application, the one or more blocks of the main memory used to store the partial KPI data.
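The aggregate-then-spill pattern above can be sketched compactly. This is an illustrative assumption-laden sketch, not the embodiment itself: sqlite3 stands in for the embedded database, a Python dict stands in for the main-memory blocks, and the threshold value and sum/count aggregation are assumptions.

```python
import sqlite3
import sys

MEMORY_THRESHOLD_BYTES = 4096   # assumed stand-in for the pre-defined threshold

class KPICollector:
    """Aggregates KPI values in memory, then flushes partial KPI data to an
    embedded database when in-memory utilization exceeds the threshold."""

    def __init__(self):
        self.embedded = sqlite3.connect(":memory:")
        self.embedded.execute(
            "CREATE TABLE partial_kpi (kpi TEXT, total REAL, n INTEGER)")
        self.partial = {}   # kpi name -> [sum, count]; the in-memory blocks

    def aggregate(self, record):
        entry = self.partial.setdefault(record["kpi"], [0.0, 0])
        entry[0] += record["value"]
        entry[1] += 1
        if sys.getsizeof(self.partial) > MEMORY_THRESHOLD_BYTES:
            self.flush()

    def flush(self):
        """Write partial KPI data to a row of the embedded DB, then release
        the in-memory storage that held it."""
        for kpi, (total, n) in self.partial.items():
            self.embedded.execute(
                "INSERT INTO partial_kpi VALUES (?, ?, ?)", (kpi, total, n))
        self.partial.clear()

collector = KPICollector()
for rec in [{"kpi": "latency_ms", "value": 120.0},
            {"kpi": "latency_ms", "value": 80.0}]:
    collector.aggregate(rec)
collector.flush()
row = collector.embedded.execute(
    "SELECT total, n FROM partial_kpi WHERE kpi = 'latency_ms'").fetchone()
# row == (200.0, 2): summed KPI value and record count
```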
Content to be summarized is received and analyzed using an extractive summarizer to determine a reference extractive summary of the content. The content is further analyzed using a plurality of different abstractive summarizers to determine candidate abstractive summaries of the content. Each of the candidate abstractive summaries is compared with the reference extractive summary to determine corresponding evaluation metrics. Based at least in part on the evaluation metrics, one of the candidate abstractive summaries is selected as a selected summary to be provided.
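The selection logic above reduces to scoring each abstractive candidate against the extractive reference and keeping the top scorer. The sketch below uses a simple unigram-overlap metric as an assumed stand-in for the evaluation metrics (e.g., something ROUGE-like); the toy summarizer lambdas are placeholders for real models.

```python
def overlap_score(candidate, reference):
    """Unigram-overlap metric: fraction of reference tokens found in the
    candidate. An assumed stand-in for a metric such as ROUGE-1 recall."""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / len(ref) if ref else 0.0

def select_summary(content, extractive_summarizer, abstractive_summarizers):
    reference = extractive_summarizer(content)
    candidates = [summarize(content) for summarize in abstractive_summarizers]
    scores = [overlap_score(c, reference) for c in candidates]
    # Pick the abstractive candidate that best agrees with the extractive reference.
    return candidates[scores.index(max(scores))]

# Toy summarizers standing in for real models (assumptions, for illustration).
extractive = lambda text: "service outage caused by expired certificate"
abstractive_a = lambda text: "an expired certificate caused the service outage"
abstractive_b = lambda text: "users reported problems on tuesday"

best = select_summary("...", extractive, [abstractive_a, abstractive_b])
```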
An embodiment may involve a database containing a first user profile that specifies a first preferred language of a first user and a second user profile that specifies a second preferred language of a second user. The embodiment may also involve one or more processors configured to: receive, from the first user and within a chat session, a first set of messages in the first preferred language; cause the first set of messages to be translated into the second preferred language; provide, to the second user and within the chat session, the first set of messages as translated; receive, from the second user and within the chat session, a second set of messages in the second preferred language; cause the second set of messages to be translated into the first preferred language; and provide, to the first user and within the chat session, the second set of messages as translated.
A request is received from a client for data to render a modular contained widget component of an application user interface. Whether the requested data is cached at an intermediary server is determined at the intermediary server, wherein the requested data is based at least in part on one or more database records stored at a backend server. In response to a determination that the requested data is cached, the requested data is obtained from an identified cache instance that cached the requested data. The cached requested data is based at least in part on the one or more database records provided by the backend server to the intermediary server to maintain an updated version of the requested data at the identified cache instance. The requested data is provided to the client from the intermediary server.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
Persistent storage contains a training dataset and a test dataset, each with units of text labelled from a plurality of categories. A machine learning model has been trained with the training dataset to classify input units of text into the plurality of categories. One or more processors are configured to: read the training dataset or the test dataset; determine distributional properties of the training dataset or the test dataset; determine, using the machine learning model, saliency maps for tokens in the training dataset or the test dataset; perturb, by way of token insertion, token deletion, or token replacement, the training dataset or the test dataset into an expanded dataset; obtain, using the machine learning model, classifications into the plurality of categories for the expanded dataset; and based on the distributional properties, the saliency maps, and the classifications, identify causes of failure for the machine learning model.
A system and method for rendering of an engagement UI for pages accessed using web clients is provided. The system detects a page of a website or a web application as active or loaded within a web client of a user device. The system further determines a set of attributes associated with the detected page and searches a catalog of action items based on the determined set of attributes to determine a set of action items. Thereafter, the system controls the web client of the user device to render an engagement UI on the page and presents the determined set of action items as UI elements of the engagement UI.
An embodiment may involve receiving a contact tracing request for a first user identifier that corresponds to a first portable device identifier of a first portable device. The embodiment may also involve requesting and receiving, from a first computing device associated with the first user identifier, device adjacency data, wherein the device adjacency data contains a plurality of contact entries, wherein one of the contact entries identifies a second portable device identifier of a second portable device that was wirelessly detected by the first portable device and a timestamp of when the wireless detection of the second portable device occurred. The embodiment may involve determining, from mappings between user identifiers and portable device identifiers, a second user identifier that corresponds to the second portable device identifier. The embodiment may further involve transmitting, to a second computing device associated with the second user identifier, a contact tracing notification.
G16H 50/80 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for detecting, monitoring or modelling epidemics or pandemics, e.g. flu
G16H 10/60 - ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
G16H 70/60 - ICT specially adapted for the handling or processing of medical references relating to pathologies
H04W 4/02 - Services making use of location information
H04W 4/20 - Services signalling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel
H04W 12/088 - Access security using filters or firewalls
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
89.
Common Interface for Supporting Virtualized Architectures
Persistent storage contains a plurality of configuration items characterizing attributes of a virtualized architecture and containing representations of relationships between the plurality of configuration items. One or more processors may be configured to: obtain, by way of a common interface, specifications of respective locations in the persistent storage that maintain sets of configuration items representing clusters, hosts, and virtual machines of the virtualized architecture; obtain, by way of the common interface, one or more scripts that are executable to retrieve the sets of configuration items from the persistent storage; apply, by a client application, the specifications of the respective locations to the scripts; and retrieve, by way of the client application executing the scripts, the sets of configuration items representing the clusters, the hosts, and the virtual machines of the virtualized architecture from the respective locations and a subset of the relationships between the sets of the configuration items.
Systems and methods for improving query performance by querying an appropriate database engine based on the operations of the query request are provided. In one aspect, this approach involves querying a row-oriented database, querying a column-oriented database, or blacklisting the query request. In particular, updating the column-oriented database involves delete and insert operations. By maintaining updated databases and querying appropriate database engines, the response time of a query request may be improved.
G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
G06F 16/21 - Design, administration or maintenance of databases
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/22 - Indexing; Data structures therefor; Storage structures
Request definitions associated with respective actions of a workflow identify characteristics of objects utilized in performance of the respective actions. Hypothetical in-flight changes that modify the characteristics are anticipated and implemented into the workflow. The actions within the workflow are subscribed to the hypothetical in-flight changes based upon the characteristics identified in the request definitions and modified by the in-flight changes, by identifying which in-flight changes affect which workflow actions. Accordingly, when an in-flight change is received, the workflow is automatically updated to account for the modifications to the characteristics made by the in-flight change. Specifically, actions that should be undone and/or redone in response to the modification to the characteristics are automatically identified, and new tasks are created to undo and/or redo the identified actions.
A computing system includes a processor and memory. The memory includes instruction code that causes the processor to generate first and second parser instances and associate the first parser and the second parser with respective first and second search queries. The processor controls the first parser to repeatedly obtain data from the data stream in blocks until the first parser finishes identifying elements in the data stream associated with its search path. The processor controls the second parser to repeatedly obtain blocks from the first parser when the blocks obtained by the first parser have not been searched by the second parser, and controls the second parser to obtain blocks from the data stream when the blocks obtained by the first parser have been searched by the second parser and the first parser has finished searching.
A database query is received at a primary database in a query language of the primary database. A determination is made whether the database query is to be handled by a secondary database different from the primary database but storing synchronized records of at least a portion of the primary database. In response to determining that the database query is to be handled by the secondary database, the database query is translated to a query language of the secondary database, including by determining a tree data structure representation of the database query, translating one or more elements of the tree data structure representation, and synthesizing the tree data structure representation to automatically generate the database query in the query language of the secondary database. The automatically generated database query is provided in the query language of the secondary database to the secondary database.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/22 - Indexing; Data structures therefor; Storage structures
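The parse/translate/synthesize pipeline in the translation abstract above can be illustrated with a tiny query tree. This is a sketch under assumptions: the `^`-separated conjunction syntax for the primary database's query language and the SQL-style target syntax are illustrative stand-ins, not the actual languages involved.

```python
# A query is represented as a tree: leaves are field/comparator/value
# conditions, internal nodes combine their children with a boolean operator.

def parse(encoded_query):
    """Parse a primary-style encoded query like 'active=true^priority=1'
    into a tree (the '^' conjunction syntax is an assumption)."""
    conditions = [term.partition("=") for term in encoded_query.split("^")]
    return {"op": "AND", "children": [
        {"field": f, "cmp": "=", "value": v} for f, _, v in conditions]}

def translate(node):
    """Walk the tree, translating each element into the secondary
    database's syntax (here, a SQL-style WHERE clause) and synthesizing
    the result bottom-up."""
    if "children" in node:
        joined = f" {node['op']} ".join(translate(c) for c in node["children"])
        return f"({joined})"
    return f"{node['field']} {node['cmp']} '{node['value']}'"

tree = parse("active=true^priority=1")
where_clause = translate(tree)
# where_clause == "(active = 'true' AND priority = '1')"
```

Working on a tree rather than on the raw query string means each element (comparators, boolean operators, value quoting) is translated in isolation, which is what makes the synthesis step mechanical.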
A determination is made whether a received database query is to be processed by either a first database, a second database, or at least in part by both the first and second databases including by determining whether the query meets criteria to split the query for processing across the first and second databases. The first and second databases store shared synchronized records, the first database configured to store the records in a column-oriented format and the second database configured to store the records in a row-oriented format. In response to a determination that the query meets the criteria to split the query, a first and second component query of the database query are generated for the first and second databases, respectively, the second component query based at least in part on a result of the first component query. The execution of the first and second component queries is pipelined.
An embodiment may involve, based on a pre-defined trigger associated with a particular application, reading error data from a resource that is used by the particular application, wherein persistent storage contains definitions of a plurality of error scenarios, a plurality of fix scripts, and associations between each of the plurality of error scenarios and one or more of the plurality of fix scripts; applying one or more rules to the error data, wherein the rules involve pattern matching or parsing; based on applying the one or more rules, determining a particular error scenario represented in the error data, wherein the particular error scenario is one of the plurality of error scenarios; determining, based on the associations, a particular fix script associated with the particular error scenario, wherein the particular fix script is one of the plurality of fix scripts; and causing execution of the particular fix script.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
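The rule-matching flow in the error-remediation abstract above maps naturally onto a table of patterns and associated fix scripts. The sketch below is illustrative only: the scenario names, regex rules, and script filenames are invented for the example, and a real system would draw them from persistent storage.

```python
import re

# Assumed stand-ins for the stored error scenarios, fix scripts, and
# the associations between them.
ERROR_SCENARIOS = {
    "disk_full": re.compile(r"No space left on device"),
    "connection_refused": re.compile(r"Connection refused.*port (\d+)"),
}
FIX_SCRIPTS = {
    "disk_full": "purge_temp_files.sh",
    "connection_refused": "restart_listener.sh",
}

def diagnose(error_data):
    """Apply pattern-matching rules to the error data; return the matched
    error scenario and its associated fix script, or (None, None)."""
    for scenario, pattern in ERROR_SCENARIOS.items():
        if pattern.search(error_data):
            return scenario, FIX_SCRIPTS[scenario]
    return None, None

scenario, script = diagnose("ERROR: Connection refused while binding port 8080")
# scenario == "connection_refused", script == "restart_listener.sh"
```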
A test query of a database is performed in response to determining that a performance associated with a user database query of the database does not satisfy a first performance threshold. In response to a determination that the performance of the test query satisfies a second performance threshold, a database buffer cache of the database is resized. Resizing the database buffer cache includes: determining a metric based at least in part on a storage size of the database and an index size of the database, and resizing the database buffer cache of the database based on the metric and a size of the database buffer cache.
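The resizing step above combines a metric derived from the database's storage and index sizes with the current cache size. The formula below (a fraction of the data size plus the full index size, grow-only, capped) is purely an illustrative assumption, since the abstract does not specify one.

```python
def resized_buffer_cache(db_size_gb, index_size_gb, current_cache_gb,
                         target_fraction=0.1, max_cache_gb=64):
    """Compute a new buffer cache size from a metric based on the
    database storage size and index size, and on the current cache size.
    The specific formula and cap are assumptions for illustration."""
    metric = target_fraction * db_size_gb + index_size_gb
    # Grow-only policy: never shrink below the current cache size here.
    return min(max(metric, current_cache_gb), max_cache_gb)

new_size = resized_buffer_cache(db_size_gb=500, index_size_gb=20,
                                current_cache_gb=32)
# metric = 0.1 * 500 + 20 = 70, capped at 64 -> new_size == 64
```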
Each instance environment of a plurality of computing instance environments is associated with its corresponding set of users belonging to one or more user groups, its corresponding processes, and its corresponding data access privileges. For at least one of the computing instance environments, database tables accessible by the corresponding computing instance environment are analyzed to determine whether each of the database tables includes data belonging to one or more sensitive data categories. Based at least in part on a result of the analysis determining whether each of the database tables includes data belonging to the one or more sensitive data categories, a data risk metric is determined for the corresponding computing instance environment.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
98.
SUPPORT FOR MULTI-TYPE USERS IN A SINGLE-TYPE COMPUTING SYSTEM
Persistent storage contains a parent table and one or more child tables, the parent table containing: a class field specifying types, and one or more filter fields. One or more processors may: receive a first request to read first information of a first type for a first entity; determine that, in a first entry of the parent table for the first entity, the first type is specified in the class field; obtain the first information from a child table associated with the first type; receive a second request to read second information of a second type for a second entity; determine that, in a second entry of the parent table for the second entity, the second type is indicated as present by a filter field that is associated with the second type; and obtain the second information from a set of additional fields in the second entry.
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06F 16/22 - Indexing; Data structures therefor; Storage structures
H04L 41/40 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
99.
Support for Multi-Type Users in a Single-Type Computing System
Persistent storage contains a parent table and one or more child tables, the parent table containing: a class field specifying types, and one or more filter fields. One or more processors may: receive a first request to read first information of a first type for a first entity; determine that, in a first entry of the parent table for the first entity, the first type is specified in the class field; obtain the first information from a child table associated with the first type; receive a second request to read second information of a second type for a second entity; determine that, in a second entry of the parent table for the second entity, the second type is indicated as present by a filter field that is associated with the second type; and obtain the second information from a set of additional fields in the second entry.
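The two lookup paths described above (class field pointing to a child table, versus a filter field pointing to additional fields in the parent entry itself) can be sketched with in-memory dicts. The table layouts, field names, and entity types below are assumptions for illustration only.

```python
# Parent table: a class field, a filter field per secondary type, and
# additional fields for filter-indicated types.
PARENT_TABLE = {
    "alice": {"class": "employee", "has_contractor_info": False, "extra": {}},
    "bob":   {"class": "employee", "has_contractor_info": True,
              "extra": {"agency": "Acme Staffing"}},
}
# Child tables, one per class-field type.
CHILD_TABLES = {
    "employee": {"alice": {"dept": "IT"}, "bob": {"dept": "HR"}},
}

def read_info(entity, requested_type):
    entry = PARENT_TABLE[entity]
    if entry["class"] == requested_type:
        # Type is specified in the class field: read from the child table.
        return CHILD_TABLES[requested_type][entity]
    if entry.get(f"has_{requested_type}_info"):
        # Type is indicated by a filter field: read the additional fields
        # stored directly in the parent entry.
        return entry["extra"]
    raise KeyError(f"{entity} has no {requested_type} information")

info1 = read_info("alice", "employee")    # served from the child table
info2 = read_info("bob", "contractor")    # served from the parent's extra fields
```

This is how a single-type schema (one parent table keyed by class) can still answer requests for entities that carry a second type, without promoting that second type to its own class.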
A machine learning model is trained based at least on previous change requests, wherein each of the previous change requests is associated with a controlled management of a lifecycle of a change to an information technology environment. A security vulnerability of the information technology environment is identified. Using the trained machine learning model, a corresponding match score for each of a plurality of pending change requests is determined for the security vulnerability. An indication of whether a resolution specification for the security vulnerability is to be linked with one of the plurality of pending change requests, selected based on a factor associated with its corresponding match score, is received.