Techniques are provided for intrusion detection on a computer system. In an example, a computer host device is configured to access data storage of the computer system via a communications network. It can be determined that the computer host device is behaving anomalously because a first current access by the computer host device to the data storage deviates from a second expected access by the computer host device to the data storage by more than a predefined amount. Then, in response to determining that the computer host device is behaving anomalously, the computer system can mitigate against the computer host device behaving anomalously.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
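The intrusion-detection technique summarized above reduces to a threshold comparison plus a mitigation step. A minimal sketch, assuming a scalar access metric, a fixed deviation threshold, and ACL revocation as the mitigation (all illustrative assumptions, not details from the abstract):

```python
def is_anomalous(current_access, expected_access, max_deviation):
    """Flag anomalous behavior: the current access to data storage deviates
    from the expected access by more than a predefined amount."""
    return abs(current_access - expected_access) > max_deviation

def mitigate(host, storage_acl):
    """One possible mitigation: revoke the host's access to data storage."""
    storage_acl.discard(host)
    return storage_acl

acl = {"host-a", "host-b"}
if is_anomalous(current_access=950, expected_access=100, max_deviation=500):
    acl = mitigate("host-a", acl)
```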
Methods, systems, and computer readable mediums for generating a curated user interface (UI) marker are disclosed. According to one exemplary embodiment, a method includes receiving information for generating a curated UI marker associated with a converged infrastructure management application, wherein the curated UI marker includes a hyperlink to locally stored information associated with the converged infrastructure management application. The method also includes generating, using the information, the curated UI marker associated with the converged infrastructure management application.
A platform is provided for uniform parsing of configuration files for multiple product types. One method comprises obtaining, by a parser of a given product type, a given request from a message queue based on a metadata message of an incoming configuration file from a remote product of the given product type, wherein the message queue stores metadata messages for a plurality of product types; extracting information from the incoming configuration file based on product-specific business logic obtained from a table store comprising tables for the plurality of product types, wherein the business logic provides a mapping between information extracted from the incoming configuration file and destination database tables; and storing the extracted information in the destination database tables of a product-specific predefined database schema.
Processing of continuously generated data using a rolling transaction procedure is described. For instance, a system can process a data stream comprising a first segment and a second segment. A transaction associated with the data stream can be initiated and in response to the transaction being initiated, a first transaction segment for the first segment and a second transaction segment for the second segment are generated. Further, a scaling event that modifies the second segment into a third segment and a fourth segment can be detected, and a data stream transaction procedure is executed to end the transaction.
Described is a system for detecting corruption in a deduplicated object storage system accessible by one or more microservices while minimizing costly read operations on objects. A similarity group verification path is selected by a controller module based upon detection of an object storage memory size condition. The similarity group verification path includes controller phases to verify whether objects have been corrupted without having to incur costly read operations.
A method, computer program product, and computing system for receiving a plurality of physical layer blocks (PLBs). A subset of PLBs may be selected from the plurality of PLBs for combining into a combined PLB based upon, at least in part, a utilization of each PLB of the plurality of PLBs, an average compression per active virtual, and a number of free PLBs generated when combining into the combined PLB. One or more PLBs of the subset of PLBs may be compressed based upon, at least in part, the average compression per active virtual. The one or more PLBs of the subset of PLBs may be combined into the combined PLB.
A method, computer program product, and computing system for defining a normal IO write mode for writing data to a storage system, the normal IO write mode including: writing the data to a cache memory system, writing the data to a journal, in response to writing the data to the journal, sending an acknowledgment signal to a host device, and writing the data from the cache memory system to a storage array. A request may be received to enter a testing IO write mode. In response to receiving the request, the data may be written to the cache memory system. The writing of the data to the journal may be bypassed. The acknowledgment signal may be sent to the host device in response to writing the data to the cache memory system. The data may be written from the cache memory system to the storage array.
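The two write modes described above differ only in whether the journal write sits between the cache write and the acknowledgment. A toy single-node model, with all class and attribute names assumed for illustration:

```python
class StorageNode:
    """Toy model of the normal vs. testing IO write modes; names illustrative."""

    def __init__(self):
        self.cache, self.journal, self.array, self.acks = [], [], [], []
        self.testing_mode = False

    def write(self, data):
        self.cache.append(data)
        if not self.testing_mode:
            self.journal.append(data)   # normal mode: journal before the ack
        self.acks.append(data)          # ack follows journal (or cache, in test mode)
        self.array.append(data)         # destage from cache to the storage array

node = StorageNode()
node.write("block-1")                   # normal IO write mode
node.testing_mode = True
node.write("block-2")                   # testing mode: journal write bypassed
```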
Embodiments for retention locking a deduplicated file stored in cloud storage by defining object metadata for each object of the file, comprising a lock count and a retention time based on an expiry date of the lock, with each object having segments, the object metadata further having a respective expiry date and lock count for each segment, where at least some segments are shared among two or more files. Also updating the lock count and retention time for all segments of the file being locked; and if the object is not already locked, locking the object using a retention lock defining a retention time and updating the object metadata with a new lock count and the retention time, otherwise incrementing the lock count and updating the retention time for the expiry date if the expiry date of a previous lock is older than the current expiry date.
G06F 16/176 - Support for shared access to files; File sharing support
G06F 16/17 - File systems; File servers - Details of further file system functions
G06F 16/174 - Redundancy elimination performed by the file system
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/16 - File or folder operations, e.g. details of user interfaces specifically adapted to file systems
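The lock-count and expiry-date update rule in the retention-locking abstract above can be sketched directly; the metadata dictionary layout and the ISO date strings are assumptions for illustration:

```python
def lock_object(meta, expiry):
    """First lock sets the count and retention expiry; subsequent locks
    increment the count and only extend (never shorten) the expiry."""
    if meta.get("lock_count", 0) == 0:
        meta["lock_count"] = 1
        meta["expiry"] = expiry
    else:
        meta["lock_count"] += 1
        if meta["expiry"] < expiry:     # previous expiry older than the new one
            meta["expiry"] = expiry
    return meta

meta = lock_object({}, "2030-01-01")
meta = lock_object(meta, "2031-01-01")  # later expiry extends the retention time
meta = lock_object(meta, "2029-01-01")  # earlier expiry leaves it unchanged
```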
9.
Method, computer program product, and computing system for defining a normal IO write mode and handling requests to enter a testing IO write mode
A method, computer program product, and computing system for defining a normal IO write mode for writing data to a storage system, the normal IO write mode including: writing the data to a cache memory system of a first storage node, writing the data to a journal of the first storage node, sending a notification concerning the data to a second storage node, writing one or more metadata entries concerning the data to a journal of the second storage node, sending an acknowledgment signal to a host device, and writing the data to a storage array. A request may be received to enter a testing IO write mode. In response to receiving the request, the data may be written to the cache memory system. The writing of the data to the journal may be bypassed. The acknowledgment signal may be sent to the host device. The data may be written to the storage array.
A request is received from a user at a client to access a file of a set of files backed up to a backup server. Upon verifying a password provided by the user, the client is issued another request for authentication. A first data structure is received responsive to the request. The first data structure is generated using identifiers corresponding to a set of files at the client of which at least some presumably have been backed up to the server. A second data structure is generated. The second data structure is generated using identifiers corresponding to the set of files backed up to the server. The first and second data structures are compared to assess a degree of similarity between the files at the client and the files backed up to the backup server. The user is denied access when the degree of similarity is below a threshold.
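The abstract above leaves the similarity measure unspecified; one plausible choice is Jaccard similarity over the two sets of file identifiers, with access denied when it falls below a threshold:

```python
def similarity(client_ids, server_ids):
    """Jaccard similarity between the client's file identifiers and the
    identifiers of files backed up to the server (an assumed metric)."""
    if not client_ids and not server_ids:
        return 1.0
    return len(client_ids & server_ids) / len(client_ids | server_ids)

def grant_access(client_ids, server_ids, threshold=0.8):
    """Deny access when the degree of similarity is below the threshold."""
    return similarity(client_ids, server_ids) >= threshold
```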
The described technology is generally directed towards managing data retention policy for stream data stored in a streaming storage system. When a request to truncate a data stream from a certain position (e.g., from a request-specified stream cut) is received, an evaluation is made to determine whether the requested position is within a data retention period as specified by data retention policy. If any data prior to the stream cut position (corresponding to a stream cut time) is within the data retention period, the truncation request is blocked. Otherwise truncation from the stream cut point is allowed to proceed/is performed. Also described is handling automated (e.g., sized based) stream truncation requests with respect to data retention.
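The retention check above amounts to comparing the stream-cut time against the retention horizon. A minimal sketch, assuming epoch-seconds timestamps:

```python
def may_truncate(stream_cut_time, retention_period, now):
    """Allow truncation only if every event before the stream cut is
    already outside the data retention period; otherwise block it."""
    return stream_cut_time <= now - retention_period
```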
Techniques for handling data with different lifetime characteristics in stream-aware data storage systems. The data storage systems can include a file system that has a log-based architecture design, and can employ one or more solid state drives (SSDs) that provide log-based data storage, which can include a data log divided into a series of storage segments. The techniques can be employed in the data storage systems to control the placement of data in the respective segments of the data log based at least on the lifetime of the data, significantly reducing the processing overhead associated with performing garbage collection functions within the SSDs.
One example method includes telemetry based state transition and prediction. Telemetry data is used to generate a transition matrix. The transition matrix is used to predict a state transition for a system or an application. A log level is predictively adjusted based on the transition matrix. The telemetry data is thus adaptively collected based on predicted transitions.
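A first-order transition matrix can be estimated from the telemetry state sequence and used to pick the most likely next state; the state names and the log-level policy below are illustrative assumptions:

```python
from collections import Counter, defaultdict

def transition_matrix(states):
    """Row-normalized first-order transition probabilities estimated from
    an observed telemetry state sequence."""
    counts = defaultdict(Counter)
    for src, dst in zip(states, states[1:]):
        counts[src][dst] += 1
    return {src: {dst: n / sum(row.values()) for dst, n in row.items()}
            for src, row in counts.items()}

def predict_next(matrix, state):
    return max(matrix[state], key=matrix[state].get)

def log_level_for(predicted_state):
    """Illustrative policy: log verbosely when trouble is predicted."""
    return "DEBUG" if predicted_state == "error" else "INFO"

m = transition_matrix(["ok", "ok", "error", "ok", "error"])
```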
Embodiments of the present disclosure relate to a method for storage management, an electronic device, and a computer program product. According to an example implementation of the present disclosure, a method for storage management is provided, which comprises receiving an access request for target metadata from a user at a node among a plurality of nodes included in a data protection system, wherein the access request includes an identification of the target metadata; based on the identification, acquiring target access information corresponding to the identification from a set of access information for the user, wherein the target access information records information related to access to the target metadata; and if the target access information is acquired, determining the target metadata based on the target access information.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
Generating customized documentation is disclosed, including: receiving a set of meta information describing an aspect of an application; and generating a document to provide guidance specific to the application based at least in part on at least a subset of the set of meta information.
In general, embodiments relate to a method for generating synthetic full backups, the method comprising: performing a verification that a previous backup of source data stored in a data domain is a failed synthetic full backup; obtaining, based on the verification, a latest snapshot of the source data; obtaining, based on the verification, a prior snapshot of the source data; making a determination, using a copy list, that a first portion of the data items in the copy list exists in the previous backup and a second portion of the data items does not exist in the previous backup; and performing, based on the determination, a copy operation to copy the second portion of the data items to the data domain to obtain a synthetic full backup.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
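The determination step in the synthetic-full-backup abstract above is essentially a set partition of the copy list against the contents of the failed previous backup; the names below are assumed:

```python
def plan_synthetic_full(copy_list, previous_backup):
    """Split the copy list into items already present in the failed
    previous backup and items that still need the copy operation."""
    existing = [item for item in copy_list if item in previous_backup]
    missing = [item for item in copy_list if item not in previous_backup]
    return existing, missing
```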
17.
Garbage collection integrated with physical file verification
System generates data structure based on unique identifiers of objects in storages and sets indicators in positions corresponding to hashes of unique identifiers of objects. The system copies active objects from one storage to another, if number of active objects in storage does not satisfy threshold, and resets indicators in positions in data structure corresponding to hashes of unique identifiers of active objects copied to the other storage. The system generates another data structure based on unique identifiers created while generating data structure, positions in other data structure corresponding to hashes of the unique identifiers. System sets indicators in positions in the other data structure corresponding to hashes of unique identifiers of data objects in active storages while generating data structure. System resets indicators in positions in data structure corresponding to hashes of the unique identifiers corresponding to indicators set in positions of the other data structure.
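The data structures described above read like hash-indexed bit arrays: positions are derived from hashes of unique object identifiers, set for live objects, and reset once an active object has been copied to another storage. A minimal sketch under that reading (hash choice and array size are assumptions):

```python
import hashlib

SIZE = 1024  # bit-array length; a real system would size this to the object count

def bit_position(object_id):
    """Map a unique object identifier to a position via a hash."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % SIZE

def build_bitmap(object_ids):
    """Set indicators in positions corresponding to hashes of object ids."""
    bits = [0] * SIZE
    for oid in object_ids:
        bits[bit_position(oid)] = 1
    return bits

def reset_copied(bits, copied_ids):
    """Reset indicators for active objects copied to another storage."""
    for oid in copied_ids:
        bits[bit_position(oid)] = 0
    return bits
```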
A system can determine timeseries telemetry data of resource utilization of respective data centers of a group of data centers maintained by the system. The system can predict respective hardware requests based on future resource utilization based on the timeseries telemetry data, the hardware requests comprising respective hardware requests at respective data centers of the group of data centers. The system can predict respective future times at which the respective hardware requests will occur. The system can determine respective physical location sources of hardware, respective physical location destinations of hardware, and respective amounts of hardware based on the respective hardware requests and the respective future times. The system can store an indication of the respective physical location sources of hardware, respective physical location destinations of hardware, and respective amounts of hardware.
Embodiments of the present disclosure include receiving one or more input/output (IO) requests at a storage array from a host device. Furthermore, the IO requests can include at least one data replication and recovery operation. In addition, the host device's connectivity to a recovery storage array can be determined. Data replication and recovery operations can be performed based on the host device's connectivity to the recovery storage array.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 1/12 - Synchronisation of different clock signals
20.
FILE LIFETIME TRACKING FOR CLOUD-BASED OBJECT STORES
Tracking changes to a document by defining a document record having a unique document record ID and comprising an index and a file name of the document, and defining a backup record for the document in a series of backups, which includes a timestamp for each backup and a bitmask for the document. The bitmask has a single bit position for each document in the container, which is set to a first binary value to indicate that the corresponding document is unchanged and to a second binary value to indicate that the document has been changed or deleted. A primary query for the document is received and resolved by analyzing the document record to find the file name. A secondary query using the document record ID is resolved to find all tracked versions of the document, and the results are returned to the user in the form of a version history list.
G06F 21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
G06F 16/11 - File system administration, e.g. details of archiving or snapshots
G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
G06F 9/30 - Arrangements for executing machine instructions, e.g. instruction decode
G06F 16/38 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
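The per-backup bitmask and the secondary version-history query from the file-lifetime-tracking abstract above can be sketched with an integer bitmask per backup record; the record layout is an assumption:

```python
def make_backup_record(timestamp, changed_indices):
    """Backup record whose bitmask holds one bit per document in the
    container: 0 = unchanged, 1 = changed or deleted (the 'second
    binary value' of the abstract)."""
    mask = 0
    for i in changed_indices:
        mask |= 1 << i
    return {"timestamp": timestamp, "bitmask": mask}

def version_history(doc_index, backup_records):
    """Secondary query: timestamps of every backup in which the document
    changed, i.e. its tracked versions."""
    return [rec["timestamp"] for rec in backup_records
            if rec["bitmask"] >> doc_index & 1]

records = [make_backup_record("2024-01-01", [0]),
           make_backup_record("2024-01-02", [1]),
           make_backup_record("2024-01-03", [0, 2])]
```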
Destination namespace and file copying: a namespace service receives communication of namespace update for file from file's source, and communicates namespace update for file to an access object service identified for file. The access object service receives communication of fingerprints stream, corresponding to file's segments, from file's source, and identifies sequential fingerprints in fingerprints stream as fingerprints group. The access object service identifies group identifier for fingerprints group, and communicates fingerprints group to a deduplication service associated with group identifier range including group identifier. The deduplication service identifies fingerprints in fingerprints group which are missing from fingerprint storage, and communicates identified fingerprints to the access object service, which communicates request for file's segments, corresponding to identified fingerprints, to file's source. The deduplication service receives communication of requested segments from file's source, and stores requested segments. The access object service stores namespace update for file in distributed namespace data structure.
One example method includes performing, at a central node operable to communicate with edge nodes of an edge computing environment, operations that include signaling the edge nodes to share their respective data distributions to the central node, collecting the data distributions, performing a Bayesian clustering operation with respect to the edge nodes to define clusters that group some of the edge nodes, and one of the edge nodes in each cluster is a representative edge node of that cluster, and sampling data from the representative edge nodes.
One example method includes determining representation bias in a data set. A bias detection engine is trained using a data set that is sufficiently diversified and/or unbiased. Once trained, test data sets can be evaluated by the bias detection engine to determine an amount of representation bias in the test data sets. The representation bias can be visually conveyed to a user and suggestions on how to reduce the representation bias may be provided and/or implemented to reduce the representation bias in the test data set. Suggestions can be implemented by adding or removing data from the test data that will reduce the representation bias.
One example method includes scanning, at a cloud storage site, metadata associated with an object stored at the cloud storage site, fetching, from the metadata, an object creation time for the object, and determining whether the object is out of a minimum storage duration. When the object is out of the minimum storage duration, it is copy-forwarded and then marked for deletion, and when the object is not out of the minimum storage duration, the object is deselected from a list of objects to be copied forward.
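The minimum-storage-duration decision above partitions objects by age; a sketch assuming epoch-seconds creation times fetched from object metadata:

```python
def select_for_copy_forward(objects, min_storage_duration, now):
    """Objects out of the minimum storage duration are copy-forwarded and
    then marked for deletion; the rest are deselected from the copy list."""
    copy_forward, deselected = [], []
    for name, creation_time in objects.items():
        if now - creation_time >= min_storage_duration:
            copy_forward.append(name)
        else:
            deselected.append(name)
    return copy_forward, deselected
```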
One example method includes performing delta operations to protect data. During a delta operation, a primary bitmap and a secondary bitmap are processed using bit logic. The delta generated by the delta operation is transmitted to a receiver. The receiver enqueues the delta into a delta queue configured to allow the replica volume at the target site to be moved to any point in time represented by the deltas in the delta queue.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 3/06 - Digital input from, or digital output to, record carriers
26.
NEAR CONTINUOUS DATA PROTECTION WITHOUT USING SNAPSHOTS
One example method includes performing delta operations to protect data. A delta queue is provided that allows a replica volume to be rolled forwards and backwards in time. When rolling the replica volume forward, an undo delta is created such that the replica volume can be moved backwards after being moved forward. When rolling the replica volume backwards, a forward delta is created such that the replica volume can be moved forwards after being moved backwards.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
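The forward/undo symmetry described in the abstract above falls out naturally if applying a delta returns the blocks it overwrote; modeling a replica volume as an offset-to-block dictionary is an assumption for illustration:

```python
def apply_delta(volume, delta):
    """Apply a delta (offset -> block) to a replica volume and return the
    undo delta holding the blocks that were overwritten."""
    undo = {offset: volume.get(offset) for offset in delta}
    volume.update(delta)
    return undo

volume = {0: "A", 1: "B"}
undo = apply_delta(volume, {1: "B2"})   # roll forward; undo delta captured
forward = apply_delta(volume, undo)     # roll back; a forward delta falls out
```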
27.
System and method for lockless aborting of input/output (IO) commands
A method, computer program product, and computing system for receiving an input/output (IO) command for processing data within a storage system. An IO command-specific entry may be generated in a register based upon, at least in part, the IO command. A compare-and-swap operation may be performed on the IO command-specific entry to determine an IO command state associated with the IO command. The IO command may be processed based upon, at least in part, the IO command state associated with the IO command.
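The lockless abort hinges on a single compare-and-swap deciding each command's fate. A single-threaded toy model (a real implementation would use an atomic CPU instruction; all names are assumed):

```python
class Register:
    """Toy register of IO-command-specific entries with a compare-and-swap
    primitive (single-threaded sketch, not a real atomic)."""

    def __init__(self):
        self.entries = {}

    def compare_and_swap(self, cmd_id, expected, new):
        if self.entries.get(cmd_id) == expected:
            self.entries[cmd_id] = new
            return True
        return False

reg = Register()
reg.entries["io-1"] = "PENDING"
# Abort and processing paths both CAS from PENDING; exactly one wins,
# so the command's state is decided without taking a lock.
aborted = reg.compare_and_swap("io-1", "PENDING", "ABORTED")
processed = reg.compare_and_swap("io-1", "PENDING", "PROCESSING")
```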
A method, computer program product, and computer system for implementing a backend service for blocking-free processing of physical entity events, including add, remove, update, and query. Physical entity blocking delays may be delegated to maintenance tasks, which may run under a single thread with a scheduler and may merge successive pending events.
Systems and methods for generating a unified metadata model, that includes selecting a first source metadata model, copying a first class, from the first source metadata model, to a first modified metadata model using a unified metadata mapping, and after copying the first class, selecting a second source metadata model, copying a second class, from the second source metadata model, to a second modified metadata model using the unified metadata mapping, and creating the unified metadata model using the first modified metadata model and the second modified metadata model.
A system can generate a neural network, wherein an output of the neural network indicates whether a first test of a computer code will pass given an input of respective results of whether respective tests, of a group of tests of the computer code, pass, and wherein respective weights of the neural network indicate a correlation from a group of correlations comprising a positive correlation between a respective output of a respective node of the neural network and the output of the neural network, a negative correlation between the respective output and the output, and no correlation between the respective output and the output. The system can apply sets of inputs to the neural network, respective inputs of the sets of inputs identifying whether the respective tests pass or fail. The system can, in response to determining that a first set of inputs of the sets of inputs to the neural network results in a failure output, store an indication that the first test is dependent on a subset of the respective tests indicated as failing by the first set of inputs.
A system can determine to restore a datacenter that comprises a group of virtualized workloads. The system can determine respective associations between respective virtualized workloads and respective datastores. The system can determine to restore a first virtualized workload of the group of virtualized workloads first. The system can restore a first portion of infrastructure that corresponds to the first virtualized workload first among a group of infrastructure. The system can, after restoring the first portion of infrastructure, restore a first portion of data that corresponds to the first virtualized workload first among a group of data. The system can, after restoring the first portion of data, restore a first portion of a virtualization layer that corresponds to the first virtualized workload first among a group of virtualization layers. The system can, after restoring the first portion of the virtualization layer, restore the first virtualized workload.
A system can maintain a first data center that comprises a virtualized overlay network and virtualized volume identifiers. The system can determine to perform a restore of data of the first data center to a second data center, the data comprising first instances of virtualized workloads. The system can transfer the data to the second data center. The system can configure the second data center with a second instance of the virtualized overlay network and a second instance of the virtualized volume identifiers. The system can operate second instances of the virtualized workloads on the second data center, the second instances of the virtualized workloads invoking the second instance of the virtualized overlay network and the second instance of the virtualized volume identifiers.
G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 9/455 - Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
A system can maintain a first data center that comprises a virtualized overlay network and virtualized volume identifiers, and store data comprising virtualized workloads. The system can determine a service level agreement associated with providing a second data center as a backup to the first data center. The system can, based on the service level agreement, divide, into a first portion of tasks and a second portion of tasks, the tasks of deploying the data to a secondary storage of the second data center, deploying the data to a primary storage of the second data center, and configuring the second data center with the virtualized overlay network and the virtualized volume identifiers. The system can perform the first portion of tasks before determining to restore the first data center to the second data center. The system can perform the second portion of tasks in response to determining to restore the first data center.
A system can maintain a first data center in a first physical location that comprises first compute hardware, and a second data center in a second physical location that comprises second compute hardware. The system can establish an overlay network that spans the first data center and the second data center. The system can establish a group of virtualized volume identifiers that spans the first data center and the second data center, and that virtualizes physical storage volumes. The system can determine whether to process a customer virtualized workload on the first data center or on the second data center to produce a selected location, wherein the customer virtualized workload is configured to be processed on the first data center and to be processed on the second data center. The system can process the customer virtualized workload at the selected location.
A method, computer program product, and computing system for copying a storage protection configuration for one or more storage resources from a first storage array to at least a second storage array in a storage cluster. A communication failure between at least a pair of storage arrays may be detected, thus defining a surviving storage array and at least one failed storage array. The communication failure between the surviving storage array and the at least one failed storage array may be resolved. The storage protection configuration may be synchronized from the surviving storage array to the at least one failed storage array. The storage protection configuration for the one or more storage resources of each storage array of the at least a pair of storage arrays may be arbitrated.
A method, computer program product, and computing system for receiving a selection of one or more secure snapshots to remove from a storage system. A snapshot deletion key may be received from the storage system. The selection of the one or more secure snapshots and the snapshot deletion key may be provided to a storage system support service. A snapshot deletion response may be received from the storage system support service. The snapshot deletion response and the selection of the one or more secure snapshots may be authenticated via the storage system. In response to authenticating the snapshot deletion response and the selection of the one or more secure snapshots, the one or more secure snapshots may be unlocked for deletion.
A method, computer program product, and computer system for identifying, by a computing device, a number of extents needed for a create snapshot operation to create a snapshot. The number of extents may be added to an in-memory cache. The number of extents needed for the create snapshot operation may be allocated from the in-memory cache to execute the create snapshot operation. Freed extents may be added to the in-memory cache based upon, at least in part, executing a delete snapshot operation to delete the snapshot.
A method, computer program product, and computer system for receiving, by a computing device, a snapshot create operation of a volume to create a first snapshot. Existing dirty data of the volume for the first snapshot may be flushed from an in-memory cache. New writes to the volume for the first snapshot may be maintained in the in-memory cache as dirty. A snapshot create operation to the volume may be received to create a second snapshot. The new writes to the volume for the first snapshot may be combined as part of the second snapshot.
A method, computer program product, and computing system for allocating a first number of tokens from a plurality of tokens for processing read IO requests from a read IO queue, thus defining a number of allocated read tokens. A second number of tokens may be allocated from the plurality of tokens for processing write IO requests from a write IO queue, thus defining a number of allocated write tokens. It may be determined that the processing of the write IO requests is throttled. In response to determining that the processing of the write IO requests from the write IO queue is throttled, a maximum allowable number of write tokens may be defined. Additional tokens may be allocated for processing the read IO requests from the read IO queue based upon, at least in part, the maximum allowable number of write tokens and the number of allocated write tokens.
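The token reallocation described above can be sketched as a single function: when write IO is throttled, write tokens above the cap are handed to the read queue. The policy details below are assumptions, not the patent's exact scheme:

```python
def allocate_tokens(total, read_tokens, write_tokens, write_throttled, max_write):
    """When write IO is throttled, cap write tokens at max_write and move
    the freed tokens to the read queue."""
    if write_throttled and write_tokens > max_write:
        freed = write_tokens - max_write
        write_tokens = max_write
        read_tokens += freed
    assert read_tokens + write_tokens <= total
    return read_tokens, write_tokens
```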
A method, computer program product, and computing system for defining a first flow for one or more processing threads with access to shared data within the storage system. The one or more processing threads may be executed using the first flow. A processing thread reference count may be determined for the one or more processing threads being executed using the first flow. One or more management threads may be executed on the shared data within the storage system based upon, at least in part, the processing thread reference count.
In general, embodiments relate to a method for provisioning a plurality of client application nodes in a distributed system using a management node, the method comprising: creating a file system in a namespace; associating the file system with a scale out volume; mounting the file system on a metadata node in the distributed system, wherein mounting the file system comprises storing a scale out volume record of the scale out volume; and storing file system information for the file system in a second file system on the management node, wherein the file system information specifies the file system and the metadata node on which the file system is mounted, and wherein storing the file system information triggers distribution of the file system information to at least a portion of the plurality of client application nodes.
A method for storing data, the method comprising receiving, by an offload component in a client application node, a request originating from an application executing in an application container on the client application node, wherein the request is associated with data and wherein the offload component is located in a hardware layer of the client application node, and processing, by the offload component using an advanced data services pipeline, the request by a file system (FS) client and a memory hypervisor module executing in a modified client FS container on the offload component, wherein processing the request results in at least a portion of the data being stored in a location in a storage pool.
A method for storing data, the method comprising receiving, by an offload component in a client application node, an augmented write request originating from an application executing in an application container on the client application node, wherein the augmented write request is associated with data and wherein the offload component is located in a hardware layer of the client application node, and processing, by the offload component, the augmented write request by a file system (FS) client and a memory hypervisor module executing in a modified client FS container on the offload component, wherein processing the request results in at least a portion of the data being written to a location in a storage pool.
In general, embodiments relate to a method for storing data, the method comprising generating, by a memory hypervisor module executing on a client application node, at least one input/output (I/O) request, wherein the at least one I/O request specifies a location in a storage pool and a physical address of the data in a graphics processing unit (GPU) memory in a GPU on the client application node, wherein the location is determined using a data layout, and wherein the physical address is determined using a GPU module and issuing, by the memory hypervisor module, the at least one I/O request to the storage pool, wherein processing the at least one I/O request results in at least a portion of the data being stored at the location.
Embodiments of the present disclosure relate to a method, a system, and a computer program product for streaming. The method includes: acquiring, during transmission of a stream, information indicating resources of a receiver of the stream available for compensating for degradation of a transmission quality of the stream; and determining at least a target transmission quality of the stream based at least on the resources of the receiver and network resources available for transmitting the stream. This solution provides a more flexible adaptive balance mechanism for streaming, and further optimizes utilization of various resources and user experience in streaming.
Methods, devices, and computer program products for authenticating a peripheral device are provided in embodiments of the present disclosure. In one method, a peripheral device sends, to an edge device, a first authentication request for at least the peripheral device to use resources of the edge device, the first authentication request comprising at least a first identifier associated with the peripheral device and location information of the peripheral device. Then, the peripheral device receives an authentication success or failure indication from the edge device. In this way, effective authentication of a peripheral device can be realized with a less complicated authentication process, so that the security of access of the peripheral device to a virtual desktop can be improved while ensuring good user experience.
Embodiments of the present disclosure relate to a computer-implemented method, a device, and a computer program product. The method includes extracting respective themes of a set of documents with release times within a first period; determining respective semantic information of the themes and frequencies of the themes appearing in the set of documents; and determining the number of documents associated with the themes within a second period according to a prediction model and based on the semantic information and frequencies of the themes. The second period is after the first period. Embodiments of the present disclosure can better predict the tendency of the themes appearing in the future based on the semantic information and frequencies of the themes.
Embodiments of the present disclosure provide a method and an apparatus for training a model, an electronic device, and a medium. This method includes: generating a first group of features and a second group of features respectively from a first sample set and a second sample set based on the model, wherein the first sample set is of a first category, and the second sample set is of a second category different from the first category; generating a first similarity matrix for the first sample set and the second sample set based on the first group of features and the second group of features; determining a first loss for the first sample set and the second sample set based on the first similarity matrix; and updating the model based on the first loss.
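The similarity-matrix step above can be illustrated with a small sketch. This is a hedged illustration, not the patent's actual method: the cosine measure and the squared cross-category loss are assumptions, since the abstract states only that a similarity matrix and a loss are computed from the two groups of features.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def similarity_matrix(feats_a, feats_b):
    # Pairwise similarities between features of the two sample categories.
    return [[cosine(fa, fb) for fb in feats_b] for fa in feats_a]

def cross_category_loss(sim):
    # Assumed loss: penalize high similarity across the different categories.
    n = sum(len(row) for row in sim)
    return sum(s * s for row in sim for s in row) / n
```

A training step would compute this loss on batches drawn from the two sample sets and update the model's parameters to reduce it.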
Implementations of the present disclosure relate to a method, an electronic device, and a computer program product for managing an inference process. Here, the inference process is implemented based on a machine learning model. A method includes: determining, based on a computational graph defining the machine learning model, dependency relationships between a set of functions for implementing the inference process; acquiring, in at least one edge device located in an edge computing network, a set of computing units available to execute the inference process; selecting at least one computing unit for executing the set of functions from the set of computing units; and causing the at least one computing unit to execute the set of functions based on the dependency relationships. With example implementations of the present disclosure, the inference process is implemented by making use of a variety of computing units in the edge computing network, thereby improving performance.
A method includes: acquiring a plurality of historical routes of movement of a plurality of historical users in a geographic area, wherein each historical route includes at least a portion of a plurality of locations; determining, based on the plurality of historical routes and a plurality of current positions of a plurality of users in the geographic area, a set of predicted locations among the plurality of locations that the plurality of users will visit in the future, respectively, wherein the plurality of users use a tour guiding service associated with the plurality of locations that is provided by a mobile network; selecting a set of popular locations from the set of predicted locations based on the number of users among the plurality of users who will visit each predicted location in the set of predicted locations; and scheduling tour guiding resources associated with the set of popular locations.
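The popular-location selection step above amounts to counting, per predicted location, how many users are expected to visit it. A minimal sketch, with the user-to-locations mapping shape assumed for illustration:

```python
from collections import Counter

def select_popular_locations(predicted_visits, top_n):
    # predicted_visits: user -> set of locations that user is predicted to visit.
    counts = Counter()
    for locations in predicted_visits.values():
        counts.update(locations)
    # The most-visited predicted locations become the "popular" set
    # for which tour guiding resources are scheduled.
    return [loc for loc, _ in counts.most_common(top_n)]
```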
Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for model training and duration prediction. The method includes acquiring a first set of parameter values related to a first snapshot of a data object, the first snapshot being deleted from a storage system through a first deletion operation. The method further includes acquiring a first duration during which the first deletion operation is performed. The method further includes generating a prediction model based on at least the first set of parameter values and the first duration, the prediction model being used for determining a predicted duration required for deleting the snapshot from the storage system.
Embodiments of the present disclosure relate to a data read method, a data storage method, an electronic device, and a computer program product. The data read method includes: receiving a data read request, the data read request comprising a data identifier associated with target data; determining a storage device of the target data based on the data identifier; and acquiring the target data from the storage device based on the data identifier. The data storage method includes: receiving a data storage request, the data storage request comprising a data identifier associated with data to be stored; determining, based on the data identifier, a target storage device for the data to be stored; and storing, based on the data identifier, the data to be stored to the target storage device. With the technical solutions of the present disclosure, a named data network with good performance and efficient operation can be achieved.
Embodiments of the present disclosure provide a method, an electronic device, and a computer program product for data processing. The method disclosed herein includes receiving, at an edge device, new data for training a model, the edge device having stored distilled data used to represent historical data to train the model, the historical data being stored in a remote device, and the amount of the historical data being greater than the amount of the distilled data. The method further includes training the model based on the new data and the distilled data. With the data processing solution of the present disclosure, the model can be trained at the edge device with fewer storage resources based on the distilled data, thereby achieving higher model accuracy.
Embodiments of the present disclosure provide a method, an electronic device, and a computer program product for code defect detection. The method described here includes determining log information associated with a defect based on the defect reported during testing of a software product. The method further includes determining a nature of the defect based on the log information. The method further includes determining, based on the nature, the log information, and a memory image file generated when the defect is reported, target code in code of the software product that causes the defect, in response to the nature indicating that the defect is caused by the code of the software product and needs to be repaired. By using the solution of the present application, different analysis strategies for defects may be adopted based on natures of the defects, thereby improving the efficiency of detecting code defects.
A method, computer program product, and computing system for determining whether a storage awareness service provider node of a storage system has failed. In response to determining that the storage awareness service provider node has failed, an intermediate storage awareness service may be deployed within the storage system. At least one request may be processed on the storage system via the intermediate storage awareness service.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
H04L 67/146 - Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
56.
METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR APPLICATION TESTING
Techniques for application testing involve: acquiring a character string in an application interface of a target application; determining a current language corresponding to the character string based at least on a comparison between an encoding representation of the character string and a set of predetermined encoding segments, each encoding segment in the set of predetermined encoding segments indicating a corresponding language; and determining a language test result for the character string based on a comparison between the current language and a target language to be presented in the target application, the language test result being used for indicating whether the character string is adapted to the target language. Accordingly, efficient detection of whether text in the target application is displayed abnormally can be guaranteed.
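The code-point comparison above can be sketched as follows. The Unicode ranges here are illustrative assumptions; the abstract does not specify the actual predetermined encoding segments.

```python
from collections import Counter

# Illustrative encoding segments (Unicode code-point ranges); these are
# assumed for the sketch, not the patent's predetermined segments.
SEGMENTS = {
    "latin": (0x0041, 0x024F),
    "cyrillic": (0x0400, 0x04FF),
    "cjk": (0x4E00, 0x9FFF),
}

def detect_language(text):
    # Count characters falling into each segment; the majority segment wins.
    hits = Counter()
    for ch in text:
        for lang, (lo, hi) in SEGMENTS.items():
            if lo <= ord(ch) <= hi:
                hits[lang] += 1
                break
    return hits.most_common(1)[0][0] if hits else "unknown"

def language_test_result(text, target_language):
    # True if the string appears adapted to the target language.
    return detect_language(text) == target_language
```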
Techniques manage extents in a storage system having storage devices supporting a redundant storage strategy. A reserved area of the storage system is generated based on a set of first-type reserved extents respectively located in the storage devices, and the set of first-type reserved extents supports a reconstruction operation for a failed storage device when the failed storage device appears in the storage devices. A data area is generated based on a set of data extents respectively located outside the reserved area in the storage devices, and the data area provides data storage for a user. Here, a reserved extent size of the set of first-type reserved extents is smaller than a data extent size of the set of data extents in the data area. The quantity of extents can thereby be reduced, reducing the storage and computing overhead incurred by the associated metadata.
One example method includes performing sound quality operations. Microphone arrays are used to cancel, reduce, or suppress background noise and to enhance speech. Subjective user input is received by an orchestration engine. The orchestration engine generates an output that includes at least adjustments to a microphone array. Controlling the microphone array based, in part, on subjective user feedback allows desired speech or desired sound to be heard more clearly by the user.
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
59.
LATENCY-, CAPACITY-, AND ENERGY-AWARE VNF PLACEMENT IN EDGE COMPUTING ENVIRONMENTS
One example method includes creating an integer linear programming (ILP) model that includes a delay model, an energy model, and a QoS model, and modeling, using the ILP model, a virtual network function (VNF) placement problem as an ILP problem. The modeling includes: using the delay model to identify propagation, transmission, processing, and queuing delays implied by enabling an instance of the VNF at an edge node to accept a user VNF call; using the energy model to identify energy consumption implied by enabling an instance of the VNF at an edge node to accept a user VNF call; and using the QoS model to identify end-to-end delay, bandwidth consumption, and jitter implied by enabling an instance of the VNF at an edge node to accept a user VNF call. The problem modeled by the ILP model may be resolved by a heuristic method.
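The heuristic resolution step can be sketched as a weighted cost over the three models. The weights and the greedy per-call node choice are assumptions for illustration; the abstract says only that a heuristic resolves the ILP problem.

```python
def placement_cost(delay, energy, qos_penalty, weights=(1.0, 1.0, 1.0)):
    # Combined objective over the delay, energy, and QoS model outputs.
    w_d, w_e, w_q = weights
    return w_d * delay + w_e * energy + w_q * qos_penalty

def best_edge_node(candidates):
    # Greedy heuristic sketch: enable the VNF instance on the edge node
    # with the lowest combined cost for this user VNF call.
    return min(candidates, key=lambda n: placement_cost(n["delay"], n["energy"], n["qos"]))
```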
One example method includes generating sound information regarding sound sources in an environment. The sound information is generated by separating and localizing the sound sources. The sound information is then presented as guidance to a user. The guidance may be presented graphically in a user interface, haptically, or in another manner.
One example method includes gathering, by a domain adversarial neural network model deployed in an autonomous vehicle operating in a domain, a dataset that comprises unsegmented and unlabeled image data about the domain, sampling the dataset to create an adapted domain dataset, detaching a domain classifier from the domain adversarial neural network, using the domain classifier as a domain change detector model to predict a class of the unsegmented and unlabeled image data in the adapted domain dataset, and based on the class, either: determining that the domain is changed or is unknown; or, determining that the domain has not changed.
One example method includes identifying an IO event comprising an IO made by a customer against a service hosted at an XaaS platform, where the IO event is associated with IO event metadata generated by the service; associating the IO event metadata with a billable customer operation; analyzing the IO event metadata and, based on the analyzing, associating the IO event with the customer; and generating a customer bill based on the associating of the IO event metadata with the billable customer operation and on the associating of the IO event with the customer.
One example method includes deploying a discriminator, where the discriminator is trained to recognize an adversarial image received by the discriminator as adversarial, and the adversarial image is generated based upon an original image, the adversarial image including a perturbation that cannot be detected by a human eye but which is effective to deceive an image segmentation model to misclassify the original image, receiving, by the discriminator, an image captured by an autonomous vehicle, and determining, by the discriminator, whether the image received from the autonomous vehicle is adversarial.
One example includes a digital twin of a microphone array. The digital twin acts as a digital copy of a physical microphone array. The digital twin allows the microphone array to be analyzed, simulated, and optimized. Further, the microphone array can be optimized for performing sound quality operations such as noise suppression and speech intelligibility.
A collaborative distributed microphone array is configured to perform or be used in sound quality operations. A distributed microphone array can be operated to provide sound quality operations including sound suppression operations and speech intelligibility operations for multiple users in the same environment.
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
66.
INTELLIGENT AND ADAPTIVE MEASUREMENT SYSTEM FOR REMOTE EDUCATION
One example method includes performing learning management. A learning management system receives student-related input including sensor data, profile data, and learning history. The learning management system measures student interest levels, student engagement levels, and learning effectiveness. Educators view the measurements in real time and are able to adapt to the real-time student statuses and measurements.
G09B 5/14 - Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations with provision for individual teacher-student communication
G09B 5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
67.
Method to efficiently transfer support and system logs from air-gapped vault systems to replication data sources by re-utilizing the existing replication streams
One example method includes, at a replication data source, initiating a replication process that includes transmitting a replication stream to a replication destination vault, and data in the replication stream is transmitted by way of a closed airgap between the replication data source and the replication destination vault, switching, by the replication data source, from a transmit mode to a receive mode, receiving, at the replication data source, a first checksum of a file, and the first checksum and file were created at the replication destination vault, receiving, at the replication data source, the file, calculating, at the replication data source, a second checksum of the file, and when the second checksum matches the first checksum, ending the replication process.
One example method includes performing delta operations to protect data. During a delta operation, a primary map and a secondary map are processed using bit logic. The bit logic determines how to handle data stored at a location on the volume that is associated with an entry in the primary map and included in the current delta operation, when a new write for the same location is received while the corresponding entry in the primary map is being processed.
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
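The primary/secondary-map bit logic described in the entry above can be sketched minimally. Representing the maps as sets of volume locations, with one entry per location, is an assumption for illustration; the abstract does not give the concrete bitmap layout.

```python
def handle_new_write(location, primary, secondary, in_flight):
    # primary/secondary: sets of locations whose data changed.
    # in_flight: locations whose primary-map entries are currently being
    # processed as part of the current delta operation.
    if location in in_flight:
        # The old data at this location is still being shipped; record the
        # new write in the secondary map so it lands in the next delta.
        secondary.add(location)
    else:
        # Otherwise the write simply joins the current delta.
        primary.add(location)
```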
69.
NEAR CONTINUOUS DATA PROTECTION WITHOUT USING SNAPSHOTS
One example method includes performing delta operations to protect data. Each delta generated by a data protection operation includes data. The deltas are stored in a delta queue. When moving a current replica to another point in time represented by a selected delta in the delta queue, the deltas are processed so that all relevant data can be applied in a batch. This ensures that when the same extents are represented in multiple deltas, only the oldest version is applied to the replica volume to move the current replica to the selected point in time.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
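The delta-queue batching described in the entry above can be sketched as follows, assuming each delta maps extents to data and the queue is ordered oldest-first (an assumed representation for illustration):

```python
def batch_deltas(delta_queue):
    # delta_queue: list of {extent: data} dicts, oldest delta first.
    batch = {}
    for delta in delta_queue:
        for extent, data in delta.items():
            # setdefault keeps the first (oldest) version of each extent,
            # so overlapping extents are written only once to the replica.
            batch.setdefault(extent, data)
    return batch
```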
70.
System and Method for Distributed Data Consolidation
A method, computer program product, and computing system for deploying an agent configured to communicate with a centralized database and a plurality of remote databases. The plurality of remote databases may be polled, via the agent, for data for storage in the centralized database. The data may be consolidated from the plurality of remote databases to the centralized database.
A method, computer program product, and computing system for deploying a storage processor of a storage system as a target of a non-volatile memory express (NVMe) over fabric (NVMe-oF) network. One or more NVMe storage devices coupled to the storage processor may be identified, thus defining one or more local NVMe storage devices. A smart network interface card may be coupled to the NVMe-oF network. The smart network interface card may be provided with access to the one or more local NVMe storage devices via the NVMe-oF network.
A method, computer program product, and computing system for defining a plurality of dependency groups for one or more objects of an application, wherein at least two dependency groups of the plurality of dependency groups include one or more common objects. One or more injectors associated with the one or more common objects may be identified. A first dependency group with at least one common object of the one or more common objects may be processed. For each common object of the first dependency group, a reference to an injector associated with the respective common object from a different dependency group may be generated for deferred processing of the respective common object.
Optimizing file system resource reservation is presented herein. The method comprises dividing a virtual file system address space into subspaces; initializing the subspaces with volume slices of a group of volume slices comprising a first volume slice, a second volume slice, and a collection of reserved volume slices allocated based on an allocation pattern that allocates volume slices as a function of a quantitative relationship between a first value associated with the first volume slice and a second value associated with the second volume slice; determining that a data block count is insufficient to service a write operation of user data to the second volume slice; and provisioning a second subspace with a free volume slice obtained from the collection of reserved volume slices, wherein the provisioning of the second subspace with the free volume slice is performed without invoking a memory exclusion mechanism.
A chassis node coupling system includes a chassis node configured to be received at a first end of a chassis assembly, wherein the chassis node size exceeds the chassis assembly size. A latch assembly with one or more coupling assemblies may be configured to releasably couple the chassis node to the chassis assembly.
A method, computer program product, and computer system for receiving, by a computing device, a plurality of IO requests. A portion of the plurality of IO requests may be aggregated based upon a block size. The portion of the plurality of IO requests may be committed to persistent storage in a batch based upon, at least in part, aggregating the portion of the plurality of IO requests based upon the block size.
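The aggregation-and-batch-commit flow above can be sketched as follows. Representing each IO request as a dict with a `data` payload is an assumption for illustration:

```python
def aggregate_and_commit(io_requests, block_size, commit):
    # Accumulate requests until their combined payload reaches block_size,
    # then commit the whole batch to persistent storage in one call.
    batch, batch_bytes = [], 0
    for req in io_requests:
        batch.append(req)
        batch_bytes += len(req["data"])
        if batch_bytes >= block_size:
            commit(batch)
            batch, batch_bytes = [], 0
    if batch:
        commit(batch)  # flush any partial batch at the end
```

Batching aligned to the block size trades a little latency on individual requests for far fewer persistent-storage commits.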
A system can determine timeseries telemetry data of a first resource utilization of a data center maintained by the system. The system can predict, from the timeseries telemetry data, a second resource utilization of the data center will occur at a future time, the second resource utilization exceeding a threshold amount of resource utilization of the data center. The system can determine, based on an amount of time available until the future time, a selected location indicative of whether to install additional hardware at a first physical location of the data center, or a second physical location of the data center, wherein an amount of time associated with installing the additional hardware at the first physical location is less than an amount of time associated with installing the additional hardware at the second physical location. The system can install the additional hardware at the selected location.
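The lead-time comparison above can be sketched as below. Preferring the slower second location when the forecast window allows it is an assumption (e.g. because it is the otherwise-preferred site); the abstract states only that the choice depends on the time available until the predicted over-utilization.

```python
def choose_install_location(time_until_needed, first_lead_time, second_lead_time):
    # Per the abstract, installing at the first physical location is
    # faster: first_lead_time < second_lead_time.
    if second_lead_time <= time_until_needed:
        return "second_location"   # enough time for the slower site
    if first_lead_time <= time_until_needed:
        return "first_location"    # only the faster site can be ready
    return None                    # neither site can be ready in time
```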
Methods, systems, and non-transitory processor-readable storage media for higher-level service health are provided herein. An example method includes monitoring, by a monitoring tool, status metrics associated with at least one of a plurality of physical devices and a plurality of component services, where alerts generated from the monitoring tool are stored in a database associated with the monitoring tool. At least one of the component services executes on the at least one of the plurality of physical devices. A microservice identifies at least one logical service. The logical service is comprised of at least one of the plurality of physical devices and/or at least one of the plurality of component services. The microservice determines a health metric associated with the at least one logical service based on the generated alerts. The microservice transmits the health metric associated with the logical service from the microservice to the monitoring tool.
A method of performing synchronous replication from a primary storage system apparatus (PSSA) to a secondary storage system apparatus (SSSA) is provided. The method includes (a) in response to write requests received by the PSSA, (i) calculating metadata changes by the PSSA for accommodating the write requests, (ii) generating, by the PSSA, metadata journal log entries that describe the metadata changes, and (iii) mirroring the metadata journal log entries from the PSSA to the SSSA; (b) regenerating the metadata changes by the SSSA based on the metadata journal log entries mirrored from the PSSA to the SSSA; and (c) writing the regenerated metadata changes to persistent storage of the SSSA. A method performed by the SSSA is also provided. An apparatus, system, and computer program product for performing similar methods are also provided.
In general, embodiments relate to a method for configuring client application nodes in a distributed system, the method comprising: detecting, by a client application node, a file system, wherein the file system is not mounted on the client application node; in response to the detecting, determining a metadata node on which the file system is mounted; sending a request to the metadata node to obtain a scale out volume record associated with the file system; generating a mapping between a plurality of storage devices and the scale out volume using the scale out volume record received from the metadata node and a topology file; and completing, after the mapping, mounting of the file system, wherein after the mounting is completed an application in an application container executing on the client application node may interact with the file system.
In general, embodiments relate to a method for distributing topology information to client application nodes in a distributed system, the method comprising: creating a file system on a management node, enabling a plurality of client application nodes to access the file system on the management node, obtaining a topology file, wherein the topology file comprises information about a plurality of storage devices to enable the plurality of client application nodes to issue input/output (IO) requests directly to the plurality of storage devices, and storing, by the management node, the topology file in the file system, wherein the topology file is accessible to the plurality of client application nodes once the topology file is stored in the file system.
Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for task processing. The method includes: processing, in response to receiving a target task, the target task by a first device using a deployed first model; acquiring a first result determined by the first model, the first result having a first confidence; processing, in response to determining that the first confidence is lower than a first threshold, the target task by a second device using a deployed second model; and acquiring a second result determined by the second model, the first model being constructed by compressing the second model. In this way, the accuracy of task processing can be ensured.
Embodiments of the present disclosure provide a method, an electronic device, and a computer program product for managing an operating system. The method includes receiving a version upgrade request for the system. The method further includes using a target system image to upgrade the system from a first version to a second version corresponding to the target system image. The method further includes storing, in response to determining that the system operates normally within a first time period, the target system image to a first storage device for the system without updating a historical system image stored in a second storage device for the system, wherein the historical system image corresponds to the first version. In this way, by storing image files of different versions for selectively resetting the operating system in case of a failure, stability of the system after an upgrade is improved.
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
In techniques for flushing data, a storage segment is inserted, based on a maturity level of the storage segment, into a corresponding to-be-flushed list among a plurality of to-be-flushed lists, where the plurality of to-be-flushed lists respectively correspond to different maturity levels, and the maturity level at least indicates a proportion of the number of data-written blocks to the total number of blocks of the storage segment. The to-be-flushed lists are then flushed to a disk array in descending order of maturity level. In this way, the bandwidth utilization of the disk array can be improved.
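The maturity bucketing and flush ordering above can be sketched as follows, modeling a segment as a list of blocks with `None` for not-yet-written blocks (an assumed representation for illustration):

```python
def maturity_level(segment, levels=4):
    # Maturity ~ fraction of blocks already written, bucketed into levels.
    written = sum(1 for block in segment if block is not None)
    return min(int(written / len(segment) * levels), levels - 1)

def flush_order(segments, levels=4):
    # Group segments into per-maturity-level lists, then flush the most
    # mature lists first to make better use of disk-array bandwidth.
    buckets = {lvl: [] for lvl in range(levels)}
    for seg in segments:
        buckets[maturity_level(seg, levels)].append(seg)
    ordered = []
    for lvl in sorted(buckets, reverse=True):
        ordered.extend(buckets[lvl])
    return ordered
```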
Embodiments of the present disclosure include a method, an electronic device, and a computer program product for training a failure analysis model. In a method for training a failure analysis model in an illustrative embodiment, at least one set of log files including multiple preprocessed log files is obtained, the at least one set of log files including a marked failure cause of a storage system, and preprocessed log files in the multiple preprocessed log files including one or more potential failure causes of the storage system and scores associated with the potential failure causes; a failure cause of the storage system is predicted according to a failure analysis model and based on the potential failure causes and the scores in the multiple preprocessed log files; and parameters of the failure analysis model are updated based on a probability that the predicted failure cause is the marked failure cause.
One example method includes intercepting an IO issued by an application, writing the IO and IO metadata to a splitter journal in NVM, forwarding the IO to storage, and, asynchronously with operations occurring along an IO path between the application and storage, evacuating the splitter journal by sending the IO and IO metadata from the splitter journal to a replication site. In this example, sending the IO and IO metadata from the journal to the replication site does not increase a latency associated with the operations on the IO path.
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 11/14 - Error detection or correction of the data by redundancy in operation, e.g. by using different operation sequences leading to the same result
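The journal-then-evacuate flow above can be sketched as follows; this toy uses plain in-memory lists in place of NVM, storage, and the replication link, so every name is illustrative:

```python
from collections import deque

class SplitterJournal:
    """Writes are journaled and forwarded on the IO path; sends to the
    replication site happen later, off the IO path."""

    def __init__(self):
        self.journal = deque()   # stand-in for the NVM-backed splitter journal
        self.storage = []        # stand-in for production storage
        self.replica = []        # stand-in for the replication site

    def write_io(self, io, metadata):
        self.journal.append((io, metadata))  # persist IO + metadata first
        self.storage.append(io)              # forward to storage; ack here

    def evacuate(self):
        """Drain the journal to the replication site asynchronously."""
        while self.journal:
            self.replica.append(self.journal.popleft())
```

Because `write_io` returns without touching `replica`, replication adds no latency to the IO path in this model.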
One example method includes detecting grey market orders with a detection model. Data from historical orders, which include confirmed grey market orders, can be clustered based on engineered features of the data such that similar orders are clustered together. A new order can be assigned to one of the clusters and a similarity score of the new order to the orders in the assigned cluster can be generated. The score reflects the likelihood that the new order is a grey market order. Action can be taken on the new order based on the score.
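One way the assignment and scoring steps might look, assuming precomputed clusters of engineered feature vectors (the distance-based score is an illustrative choice, not the disclosure's model):

```python
import math

def centroid(points):
    """Mean point of a cluster's feature vectors."""
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def assign_cluster(order, clusters):
    """Assign the new order's feature vector to the nearest cluster centroid."""
    dists = {cid: math.dist(order, centroid(pts)) for cid, pts in clusters.items()}
    return min(dists, key=dists.get)

def similarity_score(order, cluster_points):
    """Map mean distance to cluster members into a (0, 1] likelihood-style score."""
    mean_d = sum(math.dist(order, p) for p in cluster_points) / len(cluster_points)
    return 1.0 / (1.0 + mean_d)
```

A new order assigned to a cluster of confirmed grey market orders with a high similarity score would then trigger action on the order.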
One example method includes mapping a set of environment constraints to various elements of a dataset distillation process and then performing the dataset distillation process based on the mapping, wherein the dataset distillation process is performed in a distributed manner by a group of edge nodes and a central node with which the edge nodes communicate. The environment constraints may include computing resource availability, as well as privacy constraints concerning the data of the dataset.
A method, comprising: receiving a first search query that is associated with a video file; retrieving one or more search results in response to the first search query, each of the search results corresponding to a different section in the video file; and displaying the search results on a display device, wherein displaying any of the search results includes displaying a link that points to the section of the video file that corresponds to the search result.
G06F 16/735 - Filtering based on additional data, e.g. user or group profiles
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
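A minimal sketch of per-section search results with section links, assuming sections are indexed by start timestamp; the file name is hypothetical, and the `#t=` suffix follows the standard Media Fragments URI temporal syntax:

```python
def search_video(sections, query):
    """Return one result per matching section, each with a link to that section.
    `sections` maps a start timestamp (seconds) to that section's text."""
    results = []
    for start, text in sorted(sections.items()):
        if query.lower() in text.lower():
            results.append({"start": start,
                            "link": f"video.mp4#t={start}",
                            "snippet": text})
    return results
```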
89.
METHOD AND SYSTEM TO MANAGE TECHNICAL SUPPORT SESSIONS USING HISTORICAL TECHNICAL SUPPORT SESSIONS
In general, embodiments relate to a method for managing a technical support session, comprising: obtaining customer identification information for a technical support session, extracting at least one keyword for the technical support session, identifying a plurality of historical technical support sessions using the at least one keyword and the customer identification information, and displaying at least one of the plurality of historical technical support sessions to a technical support person (TSP) during the technical support session.
In general, embodiments relate to a method for managing a technical support session, comprising in response to satisfying a duplicate technical support question threshold for a technical support session: extracting at least one keyword for the technical support session, identifying a plurality of historical technical support sessions using the at least one keyword, and displaying at least one of the plurality of historical technical support sessions to a technical support person (TSP) during the technical support session.
In general, embodiments relate to a method for managing a technical support session, comprising: determining a technical support issue (TSI) for a technical support session; identifying a question path graph (QPG) associated with the TSI; and displaying at least a portion of the QPG to a technical support person (TSP) during the technical support session.
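The keyword-and-customer retrieval common to the embodiments above might be sketched as follows; the session fields and the overlap-based ranking are illustrative assumptions:

```python
def find_historical_sessions(history, keywords, customer_id=None, limit=3):
    """Rank past sessions by keyword overlap; optionally restrict to a customer,
    matching the variant that also uses customer identification information."""
    def score(session):
        return len(set(session["keywords"]) & set(keywords))
    pool = [s for s in history
            if customer_id is None or s["customer"] == customer_id]
    ranked = sorted((s for s in pool if score(s) > 0), key=score, reverse=True)
    return ranked[:limit]   # sessions to display to the TSP
```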
A system can train a neural network model at a first edge device regarding respective amounts of time to process data at the first edge device compared to corresponding amounts of time to process the data at cloud computing equipment that is connected to the first edge device via a communications network, wherein the data is generated at the first edge device. The system can update the neural network model to produce an updated neural network model based on information received from a second edge device regarding a performance of the cloud computing equipment in processing the data, wherein the first edge device and the second edge device have respectively different processing capabilities. The system can determine whether to process first data, generated at the first edge device, locally at the first edge device.
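As a toy stand-in for the neural network model described here, the following tracks exponential moving averages of observed edge and cloud processing times, folds in peer-reported cloud observations, and decides where to process; all names and the EMA choice are assumptions:

```python
class LatencyModel:
    """EMA stand-in for the trained model of edge vs. cloud processing times."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.edge_ms = None
        self.cloud_ms = None

    def observe(self, edge_ms=None, cloud_ms=None):
        """Update with local measurements or a peer edge device's report
        of the cloud equipment's performance."""
        if edge_ms is not None:
            self.edge_ms = (edge_ms if self.edge_ms is None else
                            self.alpha * edge_ms + (1 - self.alpha) * self.edge_ms)
        if cloud_ms is not None:
            self.cloud_ms = (cloud_ms if self.cloud_ms is None else
                             self.alpha * cloud_ms + (1 - self.alpha) * self.cloud_ms)

    def process_locally(self, network_ms):
        """Keep work on the edge when it beats cloud time plus network transfer."""
        return self.edge_ms <= self.cloud_ms + network_ms
```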
Name caching can be used external to a distributed file system (DFS), such as a network file system (NFS), to reduce the complexity of facilitating a file operation at the DFS. A system that can use the name caching and request the file operation of the DFS can comprise a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise, in response to a lookup operation or a file operation performed by a DFS, storing, using the processor, a name cache entry for each directory element accessed during the lookup operation or the file operation. The operations further can comprise sending, to the DFS, a single request that comprises a compound command comprising requests for a set of lookup operations and for a file operation.
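A sketch of the external name cache and the single compound request, assuming a simple (parent inode, name) to child inode map; the request shape is hypothetical, though it mirrors how NFSv4 COMPOUND batches LOOKUPs with a file operation:

```python
class NameCache:
    """Cache of directory-entry lookups kept outside the DFS."""

    def __init__(self):
        self.entries = {}   # (parent_inode, name) -> child_inode

    def populate(self, parent, name, inode):
        """Store an entry for a directory element accessed during a lookup."""
        self.entries[(parent, name)] = inode

    def resolve(self, root, path):
        """Walk path components through the cache.
        Returns (deepest cached inode, components still needing DFS lookups)."""
        inode, parts = root, path.strip("/").split("/")
        for i, name in enumerate(parts):
            nxt = self.entries.get((inode, name))
            if nxt is None:
                return inode, parts[i:]
            inode = nxt
        return inode, []

def compound_request(start_inode, missing, op):
    """Bundle the remaining lookups and the file operation into one request."""
    return {"start": start_inode, "lookups": missing, "op": op}
```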
Technology described herein can perform deletion of a snapshot or portion thereof. In an embodiment, a system can comprise a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise, to delete a snapshot, or a portion of a snapshot, of a real filesystem, reading an inode mapping file (IMF) of the snapshot that indexes a virtual inode number (VIN) corresponding to a real inode. The operations further can comprise identifying the real inode of the snapshot referenced by the VIN, identifying a file object corresponding to the real inode, and deleting the file object from the snapshot.
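The IMF-driven deletion chain (VIN to real inode to file object) reduces, in this toy, to three dictionary hops; the snapshot layout used here is an illustrative assumption:

```python
def delete_from_snapshot(snapshot, vin):
    """Delete the file object a virtual inode number refers to in a snapshot.
    `snapshot` holds an inode mapping file (imf), an inode table, and objects."""
    real_inode = snapshot["imf"][vin]          # IMF: virtual -> real inode
    file_obj = snapshot["inodes"][real_inode]  # real inode -> file object
    del snapshot["objects"][file_obj]          # drop the object's data
    del snapshot["inodes"][real_inode]
    del snapshot["imf"][vin]
    return file_obj
```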
One example method includes at an infrastructure computing system, receiving a request for infrastructure resources from a user of the infrastructure computing system. Sustainability values for at least one entity are accessed. Based on the accessed sustainability values for the at least one entity, one or more sets of resources are identified that can satisfy the request for infrastructure resources. At least one set of resources from the one or more sets of resources are deployed in the infrastructure computing system to satisfy the request for infrastructure resources.
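One possible reading of the selection step, assuming per-entity sustainability values where lower is better; the data shapes are hypothetical:

```python
def choose_resources(request_units, resource_sets, sustainability):
    """Among resource sets that can satisfy the request, pick the one whose
    entities have the lowest summed sustainability cost (lower = greener)."""
    feasible = [s for s in resource_sets
                if sum(r["units"] for r in s) >= request_units]
    return min(feasible,
               key=lambda s: sum(sustainability[r["entity"]] for r in s),
               default=None)
```

The chosen set would then be deployed in the infrastructure computing system to satisfy the request.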
Facilitating the embedding of block references for reducing and/or mitigating file access latency in file systems is provided herein. A system includes a processor and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations. The operations include populating a data structure of the system with information indicative of a block pointer that identifies a first location of a first data block of an object. The first location is a location within a storage system. The operations also can include, based on a receipt of a read request for the object, enabling access to the first data block of the object based on the block pointer. Enabling access can include bypassing a reading of a block map for access to the first data block.
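The block-map bypass described here can be modeled as a direct pointer dereference; the two-dictionary layout is a toy assumption:

```python
class ObjectIndex:
    """Embedded block pointers: the object's own entry records where its first
    data block lives, so a read skips the block-map traversal entirely."""

    def __init__(self):
        self.pointers = {}   # object id -> block location in the storage system
        self.blocks = {}     # block location -> block data

    def populate(self, obj_id, location, data):
        """Populate the data structure with the block pointer for the object."""
        self.pointers[obj_id] = location
        self.blocks[location] = data

    def read(self, obj_id):
        # direct dereference; no block map is consulted on the read path
        return self.blocks[self.pointers[obj_id]]
```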
Facilitation of reclaiming of storage space is enabled relative to one or more data streams employing the storage space. A system can comprise a processor, and a memory that stores computer executable instructions that, when executed by the processor, facilitate performance of operations. The operations can comprise determining posting or non-posting of one or more specified cut positions from readers of events thus far appended to a stream of a stream storage system. The operations can further comprise, in response to the one or more specified cut positions being posted, truncating the stream based on the specified cut positions of the stream, and, in response to no specified cut positions being posted, truncating the stream based on a time limit or a space limit relative to a respective time quantity or a respective space quantity of the stream.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 3/06 - Digital input from, or digital output to, record carriers
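A compact sketch of the two truncation branches above, representing the stream as an append-only list and cut positions as indexes into it (both representational choices are assumptions):

```python
def truncate_stream(events, cuts=None, max_events=None):
    """Truncate per posted reader cut positions; otherwise fall back to a
    space-style limit on the number of retained events."""
    if cuts:
        # keep only events at or after the smallest cut any reader still needs
        safe = min(cuts)
        return events[safe:]
    if max_events is not None and len(events) > max_events:
        return events[-max_events:]
    return events
```

A time limit would work the same way, with the fallback keyed on event timestamps rather than a count.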
98.
GLOBAL TECHNICAL SUPPORT INFRASTRUCTURE WITH REGIONAL DATA COMPLIANCE
In general, embodiments relate to a method for managing technical support sessions, the method comprising generating a first plurality of local technical support sessions, applying a sharing compliance rule to at least a portion of the first plurality of local technical support sessions to generate a second plurality of modified technical support sessions, transmitting the second plurality of modified technical support sessions to a technical support hub, and receiving a local technical support session originating from a second technical support system, wherein the local technical support session is presented to a technical support person (TSP) during a technical support session performed on a first technical support system.
In general, in one aspect, embodiments relate to a method for managing technical support sessions, the method comprising: generating a first plurality of local technical support sessions, transmitting at least a portion of the first plurality of local technical support sessions to a technical support hub, and receiving a local technical support session from a second technical support system, wherein the local technical support session is presented to a technical support person (TSP) during a technical support session performed on a first technical support system.
In general, embodiments relate to a method for managing a technical support session, comprising: obtaining a technical support question from a technical support person (TSP) that is conducting the technical support session; determining that the technical support question is a duplicate of a prior technical support question; in response to the determination, obtaining a quality score for the technical support question; and displaying the quality score to the TSP in a user interface on a technical support system that the TSP is using to conduct the technical support session.