The present disclosure provides an information interaction method, an apparatus, a device, a storage medium and a program product. In the method, a reply instruction acting on target information in an information display interface is acquired, a target video used to reply to the target information is input in a shooting interface, and a video reply to the target information is thereby made in the information display interface. Because a video carries a large amount of information, the intention of the reply content is more evident, which improves the reply efficiency of the responder while also making the reply easier for a user to understand.
Disclosed in the present application are a method and apparatus for displaying friend activity information, an electronic device, and a storage medium. The method for displaying friend activity information comprises: receiving an activity information viewing instruction, the activity information viewing instruction being generated when a user triggers an entry control in a message page; and displaying an active friend list, the active friend list displaying activity information of an active friend of the user, the active friend being a friend who went online within a most recent preset time period. The present application can simplify the process of viewing friend activity information and shorten the time spent viewing it.
An information display method, a device, and a storage medium. The method includes: acquiring a first image including a first object in a video; determining whether a second object is present in the first image; and, when it is determined that the second object is present in the first image and satisfies a preset positional relationship with the first object, superimposing a first material on an area where the second object is located in the first image. With this method, when the second object is detected in the image, a material can be superimposed on the area where the second object is located, avoiding the problem of some special effects or information being unusable when the second object satisfies the preset positional relationship with the first object.
The present disclosure provides a video dubbing method, a device, an apparatus, and a storage medium. The method includes: upon receiving an audio recording start trigger operation for a first time point of a target video, playing, starting from a video frame corresponding to the first time point, the target video on the basis of a timeline, and receiving audio data; and upon receiving an audio recording end trigger operation for a second time point, generating an audio recording file, wherein the audio recording file has a linked relationship with a timeline of a video segment having a video frame corresponding to the first time point as a start frame and a video frame corresponding to the second time point as an end frame. The present disclosure enables timeline-based audio recording to be performed while playing a target video, and in turn generates an audio recording file having a linked relationship with the timeline of the corresponding video segment, so that the timelines of the audio recording file and the video segment do not need to be re-aligned for future operations, thereby facilitating an accurate video dubbing result and avoiding the inaccurate dubbing that tedious manual timeline alignment can cause.
An interaction method and apparatus, and an electronic device and a computer-readable storage medium, which relate to the technical field of image processing. The method comprises: displaying a background image (S101); displaying an initial picture having a target special effect at a preset position of the background image (S102); in response to a special effect change instruction triggered by a user, controlling the target special effect to gradually change from the initial picture to a target picture (S103); and during the change process of the target special effect, adjusting a filter effect of the background image, such that the filter effect of the background image gradually changes from a first filter effect to a second filter effect (S104). By combining a change in a special effect with a change in a filter effect, the method presents a rich special effect in a video, thereby increasing the user's enthusiasm for participation and interaction.
Disclosed in embodiments of the present invention are an information indication method and apparatus, an electronic device, and a storage medium. The method comprises: obtaining a first position parameter of first target information in the current page, wherein the current page is a page in a shared file shared by a sharing client; obtaining a second position parameter of second target information in the displayed page of the sharing client; and determining an indication identifier according to the first position parameter and the second position parameter, and indicating the second target information according to the indication identifier. When displaying the same shared file at the same time as the sharing client, a shared client can obtain the first position parameter of the first target information in the current page, receive the second position parameter of the second target information in the displayed page of the sharing client, and determine the relative positions of the two pieces of target information according to the two position parameters; the second target information of the sharing client can then be indicated, thereby improving the sharing experience of the user.
A method and apparatus for browsing a table in a document, and an electronic device and a storage medium. The method comprises: during the process of editing a table in an online document, receiving a table page jump instruction for a first table, and jumping from a document page, corresponding to the online document, to a first table page for display (S110); in the first table page, receiving a table switching instruction (S120); and according to the table switching instruction, switching from the first table page to a second table page for display (S130), wherein the first table page is used for displaying the first table, and the second table page is used for displaying a second table. Tables can thus be switched quickly, improving the table browsing speed.
Disclosed in the embodiments of the present disclosure are a document sharing processing method and apparatus, a device, a medium, and a system. The method comprises: displaying shared document hyperlink information and a permission control entry of a shared document in a mail interface; and obtaining permission data that is determined by a first user for the shared document according to the permission control entry, wherein the permission data is used for determining the operation permission data of a second user for the shared document. The embodiments of the present disclosure can display the shared document hyperlink information and the permission control entry of the shared document in the mail interface, and by means of the permission control entry, the first user can determine the permission data for the shared document. This solves the prior-art problem that a shared document in a mail offers only a single controllable function, enriches the controllable functions of the shared document in the mail, and meets the document sharing requirements of users.
Disclosed in the embodiments of the present disclosure are an information sharing method and apparatus, an electronic device, and a storage medium, the method comprising: in response to detecting an email sharing operation triggered by a sharer for a first email, determining a target email to be shared currently; and acquiring sharee information, and sharing, according to the sharee information, the target email with a sharee corresponding to the sharee information so as to display the target email on a client interface corresponding to the sharee.
Disclosed are a live broadcast stream pushing method and apparatus, and an electronic device. One specific embodiment of the method comprises: receiving viewing permission setting information, wherein the viewing permission setting information represents viewing permission for a live broadcast stream of a multimedia conference; on the basis of the live broadcast starting state of the multimedia conference and the viewing permission setting information, determining whether to push the live broadcast stream of the multimedia conference to a requester who requests the live broadcast stream; and in response to determining to push the live broadcast stream of the multimedia conference to the requester, pushing the live broadcast stream of the multimedia conference to the requester. According to this embodiment, the flexibility of pushing a live broadcast stream of a multimedia conference is improved.
Disclosed in the embodiments of the present disclosure are an interaction method and apparatus, and an electronic device. A specific implementation of the method comprises: in response to detecting a trigger operation for a multimedia conference control, sending a multimedia conference start request to a server, wherein the server starts a multimedia conference in response to determining, on the basis of the multimedia conference start request, to start the multimedia conference; and in response to detecting a confirmation operation for a live broadcast conference confirmation control, sending live broadcast conference confirmation information to the server, wherein, in response to receiving the live broadcast conference confirmation information and the multimedia conference stream, the server generates a live stream on the basis of the received multimedia conference stream, the multimedia conference stream being sent to the server by a participating subject of the multimedia conference. A method for generating a live stream on the basis of a multimedia conference stream is thus provided.
A table information display method and apparatus, and a device and a storage medium. The table information display method comprises: obtaining a row information display instruction for a target table, the row information display instruction comprising information of a target content row; and vertically displaying first information of a table header row of the target table and second information of the target content row according to the row information display instruction, wherein the cell of the table header row in which the first information is located and the cell of the target content row in which the second information is located have the same column identifier.
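The pairing rule in the abstract above, where the header cell and the content cell share a column identifier, can be sketched as follows; the function and data names are illustrative, not drawn from the disclosure.

```python
# Hypothetical sketch: pair header-row cells with target-row cells by
# column identifier and lay them out vertically, as the abstract describes.

def vertical_row_view(header_row, content_row):
    """header_row / content_row map column identifiers to cell text."""
    view = []
    for col_id, header_text in header_row.items():
        # Cells share a column identifier, so the header labels the value.
        view.append((header_text, content_row.get(col_id, "")))
    return view

header = {"c1": "Name", "c2": "Owner", "c3": "Status"}
row = {"c1": "Q3 plan", "c2": "Lee", "c3": "Done"}
for label, value in vertical_row_view(header, row):
    print(f"{label}: {value}")
```

Displaying each header/value pair on its own line is what makes a wide row readable vertically, e.g. on a narrow screen.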
An image processing method and apparatus, an electronic device and a computer-readable storage medium. The image processing method comprises: recognizing a first object in a first video frame and a second object located in the first object (S101); according to the position of the first object in the first video frame, covering a third object, as a foreground image, onto the first video frame so as to obtain a second video frame, wherein in the second video frame, the third object covers the first object (S102); and according to the position of the second object in the first video frame, superimposing the second object, as a foreground image, onto the third object in the second video frame, so as to obtain a third video frame (S103). By means of the described method, the prior-art technical problem of a lack of realism, caused by a special effect covering an object being unable to reflect an original feature of that object, is solved.
Provided are a session message display method and apparatus, an electronic device and a storage medium. The session message display method includes: receiving, on a session page, a first session message input by a first user for at least one second user, and sending the first session message to an instant messaging application client of each of the at least one second user; and generating message display content in a message entry of a session list page of the first user according to at least one of a reading action or a reply action of the at least one second user on the first session message, wherein the message display content includes the first session message and message state information of the first session message, and the message entry is associated with the at least one second user.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
15.
METHOD AND APPARATUS FOR DISPLAYING ONLINE DOCUMENT, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Disclosed are a method and an apparatus for displaying an online document, an electronic device, and a storage medium. The method for displaying an online document comprises: when an interactive command for a first user is received by means of an online document, generating an online document notification message in an instant messaging session list of the first user, the interactive command comprising a second user mentioning the first user or mentioning content edited by the first user in the online document; when a command triggering the online document notification message is received, acquiring a target link address; and jumping, by means of a document container, to the target link address in the instant messaging window and displaying the online document in the instant messaging window, the document container being integrated into the instant messaging framework.
Methods, systems, and devices for coding or decoding video wherein the picture partition mode is based on block size are described. An example method for video processing includes using a dimension of a virtual pipeline data unit (VPDU) used for a conversion between a video comprising one or more video regions comprising one or more video blocks and a bitstream representation of the video to perform a determination of whether a ternary-tree (TT) or a binary tree (BT) partitioning of a video block of the one or more video blocks is enabled, and performing, based on the determination, the conversion, wherein the dimension is equal to VSize in luma samples, wherein dimensions of the video block are CtbSizeY in luma samples, wherein VSize = min (M, CtbSizeY), and wherein M is a positive integer.
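The stated relationship VSize = min(M, CtbSizeY) and the resulting TT/BT enablement determination can be sketched as follows; the exact condition under which a split is disabled is an assumption for illustration, not the normative rule of the abstract.

```python
# Sketch of a VPDU-based partition-enablement check. The abstract states
# VSize = min(M, CtbSizeY); the disabling condition below (a split is
# disallowed for a block exceeding the VPDU dimension) is illustrative.

def vpdu_size(m: int, ctb_size_y: int) -> int:
    """Dimension of the virtual pipeline data unit, in luma samples."""
    return min(m, ctb_size_y)

def tt_allowed(block_w: int, block_h: int, m: int, ctb_size_y: int) -> bool:
    """Illustrative rule: a ternary-tree split of a block larger than the
    VPDU would produce partitions crossing VPDU boundaries, so forbid it."""
    vsize = vpdu_size(m, ctb_size_y)
    return block_w <= vsize and block_h <= vsize

print(vpdu_size(64, 128))           # VSize in luma samples
print(tt_allowed(128, 64, 64, 128))
print(tt_allowed(64, 64, 64, 128))
```

Keeping every partition inside one VPDU is what allows a hardware pipeline stage to process units of bounded size.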
Devices, systems and methods for video processing are described. An exemplary method for video processing includes determining, for a block of a video, a quantization parameter associated with the block, coding the block of the video into a bitstream representation of the video as a palette coded block in part based on a modified value of the quantization parameter, and signaling coded information related to the quantization parameter in the bitstream representation.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
18.
MAPPING RESTRICTION FOR INTRA-BLOCK COPY VIRTUAL BUFFER
A method for video processing is described. The method includes determining, for a conversion between a current video block of a video picture of a video and a coded representation of the video, a number of reference samples in a reference region of the video picture used for predicting the current video block, based on a rule, wherein the rule specifies that the number of reference samples is limited to a certain range; and performing, based on the determining, the conversion.
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
19.
SAMPLE IDENTIFICATION FOR INTRA BLOCK COPY IN VIDEO CODING
A method of video processing includes maintaining, for a conversion between a current video block of a current picture of a visual media data and a bitstream representation of the visual media data, a buffer comprising reference samples from the current picture for a derivation of a prediction block of the current video block. One or more reference samples in the buffer that are marked unavailable for the derivation have values outside of a pixel value range.
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
20.
VIRTUAL PREDICTION BUFFER FOR INTRA BLOCK COPY IN VIDEO CODING
A method of visual media processing includes performing a conversion between a current video block of a current picture of a visual media data and a bitstream representation of the visual media data. The conversion is based on a reference region from the current picture comprising reference samples used for deriving a prediction block of the current video block. A virtual buffer of a defined size is used for tracking availability of the reference samples for deriving the prediction block.
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
A method of video processing includes determining, for a conversion between a block that is in a video picture of a video and a bitstream representation of the video, a manner of padding a first set of samples located across boundaries of multiple video regions of the video picture for a current sample in an adaptive loop filter process. The method also includes performing the conversion according to the determining.
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/117 - Filters, e.g. for pre-processing or post-processing
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
22.
STICKER GENERATING METHOD AND APPARATUS, AND MEDIUM AND ELECTRONIC DEVICE
A sticker generating method, comprising: obtaining a background image, the background image comprising a target object (S101); displaying a display area of a sticker and anchors of the sticker in the background image (S102); receiving an import instruction for the sticker (S103); importing a resource of the sticker according to the import instruction and displaying the resource of the sticker in the display area of the sticker (S104); dynamically selecting a tracking area according to the positions of the anchors of the sticker, wherein the tracking area is an image area of the target object (S105); and generating the sticker according to the display area, the tracking area, and the resource of the sticker (S106). By means of the sticker generating method, when a sticker moves on a target object, an image area of the target object can be selected in real time, so that the relative positional relationship between the sticker and the target object is determined more accurately and the target object is tracked more precisely. The position of the sticker is thus set accurately and can be dynamically adjusted according to a target object detection result, achieving a fast-responding, real-time dynamic sticker effect.
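The S101–S106 flow can be sketched loosely as follows; every name is illustrative, and the target-object detection that produces the tracking area is stubbed out as a caller-supplied function.

```python
# Loose sketch of the sticker-generation pipeline (S101-S106).
# All names are hypothetical; detect_target_area stands in for the
# dynamic tracking-area selection of step S105.

def generate_sticker(background, display_area, anchors, resource,
                     detect_target_area):
    """display_area / anchors come from the user (S102); resource is the
    imported sticker content (S103-S104); detect_target_area maps anchor
    positions to an image area of the target object (S105)."""
    tracking_area = detect_target_area(anchors)          # S105
    return {                                             # S106
        "display_area": display_area,
        "tracking_area": tracking_area,
        "resource": resource,
    }

# Usage with a stub detector that just pads the first anchor.
sticker = generate_sticker(
    background="bg.png",
    display_area=(0, 0, 32, 32),
    anchors=[(4, 4)],
    resource="sticker.png",
    detect_target_area=lambda a: (a[0][0] - 2, a[0][1] - 2, 30, 30),
)
print(sticker["tracking_area"])
```

Recomputing the tracking area from the anchors on every frame is what lets the sticker position follow the target-object detection result dynamically.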
The present disclosure provides a special effect synchronization method, a device, and a storage medium. The method comprises: receiving a synchronization request, sent by a special effect preview end, for synchronizing a special effect file; on the basis of the synchronization request, when it is determined that a direct connection condition is satisfied, establishing a communication link with the special effect preview end in a direct connection mode; receiving an acquisition request for a target special effect file through the communication link; and synchronizing the target special effect file to the special effect preview end on the basis of the acquisition request.
H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
H04W 4/80 - Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
A method for visual media processing, comprising: computing, during a conversion between a current video block of visual media data and a bitstream representation of the current video block, a cross-component linear model (CCLM) and/or a chroma residual scaling (CRS) factor for the current video block based, at least in part, on neighboring samples of a corresponding luma block which covers a top-left sample of a collocated luma block associated with the current video block, wherein one or more characteristics of the current video block are used for identifying the corresponding luma block.
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
25.
RESTRICTION ON APPLICABILITY OF CROSS COMPONENT MODE
A method for visual media processing, comprising performing a conversion between a current chroma video block of visual media data and a bitstream representation of the current chroma video block, wherein, during the conversion, a chroma residual of the current chroma video block is scaled based on a scaling coefficient, wherein the scaling coefficient is derived at least based on luma samples located in predefined positions.
Devices, systems and methods for digital video coding, which includes matrix-based intra prediction methods for video coding, are described. In a representative aspect, a method for video processing includes performing a first determination that a luma video block of a video is coded using a matrix based intra prediction (MIP) mode in which a prediction block of the luma video block is determined by performing, on previously coded samples of the video, a boundary downsampling operation, followed by a matrix vector multiplication operation, and selectively followed by an upsampling operation, performing, based on the first determination, a second determination about a chroma intra mode to be used for a chroma video block associated with the luma video block, and performing, based on the second determination, a conversion between the chroma video block and a bitstream representation of the chroma video block.
H04N 19/593 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
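The MIP pipeline named in the abstract above — boundary downsampling, then a matrix-vector multiplication, optionally followed by upsampling — can be sketched schematically; the matrix, sizes, and rounding below are placeholders, not the standardized VVC tables.

```python
# Schematic sketch of matrix-based intra prediction (MIP) steps.
# The toy weight matrix and the averaging-based downsampling are
# illustrative assumptions, not values from the disclosure or the spec.

def downsample_boundary(samples, reduced_len):
    """Average groups of boundary samples down to reduced_len values."""
    group = len(samples) // reduced_len
    return [sum(samples[i * group:(i + 1) * group]) // group
            for i in range(reduced_len)]

def matrix_vector_predict(matrix, boundary):
    """Each predicted sample is a dot product of one matrix row with
    the reduced boundary vector."""
    return [sum(m * b for m, b in zip(row, boundary)) for row in matrix]

top = [100, 102, 104, 106]                   # previously coded boundary
reduced = downsample_boundary(top, 2)        # boundary downsampling
matrix = [[1, 0], [0, 1], [1, 1]]            # toy weight matrix
pred = matrix_vector_predict(matrix, reduced)
print(reduced, pred)
```

A final bilinear upsampling step (omitted here) would expand the reduced prediction back to the full block size when needed.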
28.
MOST PROBABLE MODE LIST CONSTRUCTION FOR MATRIX-BASED INTRA PREDICTION
Devices, systems and methods for digital video coding, which includes matrix-based intra prediction methods for video coding, are described. In a representative aspect, a method for video processing includes generating, for a conversion between a current video block of a video and a coded representation of the current video block, a most probable mode (MPM) list based on a rule, where the rule is based on whether a neighboring video block of the current video block is coded with a matrix based intra prediction (MIP) mode, and performing the conversion between the current video block and the coded representation of the current video block using the MPM list, where the conversion applies a non-MIP mode to the current video block, and where the non-MIP mode is different from the MIP mode.
A method of video processing comprises: maintaining, prior to a conversion between a current video block of a video region and a coded representation of the video, at least one history-based motion vector prediction (HMVP) table, wherein the HMVP table includes one or more entries corresponding to motion information of one or more previously processed blocks; and performing the conversion using the at least one HMVP table, wherein the motion information of each entry is configured to include interpolation filter information for the one or more previously processed blocks, the interpolation filter information indicating interpolation filters used for interpolating prediction blocks of the one or more previously processed blocks.
Devices, systems and methods for video processing are described. In an exemplary aspect, a method for video processing includes encoding a video unit of a video as an encoded video unit; generating reconstruction samples from the encoded video unit; performing a clipping operation on the reconstruction samples, wherein a clipping parameter used in the clipping operation is a function of a clipping index and a bit-depth of the reconstruction samples or a bit-depth of samples of the video unit; applying a non-linear adaptive loop filter to an output of the clipping operation; and generating a coded representation of the video using the encoded video unit.
H04N 19/80 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
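The clipping described in the abstract above, with the clip bound a function of a clipping index and the sample bit-depth, can be illustrated as follows; the particular schedule below is an assumption, not the normative formula.

```python
# Illustrative nonlinear-ALF-style clipping. The abstract states only
# that the clipping parameter is a function of a clipping index and the
# bit-depth; the exact schedule here (halving the exponent by 2 per
# index step) is a made-up example of such a function.

def clip_value(clip_idx: int, bit_depth: int) -> int:
    """Larger indices give tighter clipping bounds."""
    return 1 << max(bit_depth - 2 * clip_idx, 0)

def nonlinear_clip(diff: int, clip_idx: int, bit_depth: int) -> int:
    """Clamp a neighbor-minus-center sample difference before it is
    weighted by a filter coefficient."""
    c = clip_value(clip_idx, bit_depth)
    return max(-c, min(c, diff))

print(clip_value(0, 10), clip_value(3, 10))
print(nonlinear_clip(900, 3, 10))
```

Clipping the differences, rather than the samples, keeps a single outlier neighbor from dominating the filtered output.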
31.
IMPROVED WEIGHTING PROCESSING OF COMBINED INTRA-INTER PREDICTION
The present application relates to improved weighting processing of combined intra-inter prediction. A method for processing video includes: determining, during a conversion between a current video block, which is coded in a combined intra and inter prediction (CIIP) mode, of a video and a bitstream representation of the current video block, a weight pair comprising a first weight for a first prediction result of the current video block and a second weight for a second prediction result of the current video block, based on one or more neighboring video blocks to the current video block, wherein the first prediction result is generated by an intra prediction mode and the second prediction result is generated by an inter prediction mode; and determining a prediction result of the current block based on a weighted sum of the first prediction result and the second prediction result.
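The neighbor-driven weight pair described above can be sketched as follows; the specific weight table is the common VVC-style scheme and is used here as an assumption, not quoted from this disclosure.

```python
# Sketch of CIIP weighting driven by neighboring blocks. The weight
# table (w_intra grows with the number of intra-coded neighbors) is a
# VVC-style convention assumed for illustration.

def ciip_weights(left_is_intra: bool, above_is_intra: bool):
    """Return (w_intra, w_inter); the two weights sum to 4."""
    n = int(left_is_intra) + int(above_is_intra)
    w_intra = 1 + n          # 1, 2 or 3
    return w_intra, 4 - w_intra

def ciip_predict(p_intra: int, p_inter: int,
                 left_is_intra: bool, above_is_intra: bool) -> int:
    """Weighted sum of the intra and inter prediction samples,
    with rounding, normalized by the total weight of 4."""
    w1, w2 = ciip_weights(left_is_intra, above_is_intra)
    return (w1 * p_intra + w2 * p_inter + 2) >> 2

print(ciip_weights(True, True))
print(ciip_predict(120, 80, True, False))
```

Biasing toward the intra prediction when the neighbors are intra-coded reflects the expectation that the local area is better modeled by spatial prediction.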
A video processing method comprises: performing a conversion between a coded representation of a video comprising one or more video regions and the video, wherein the coded representation includes reshaping model information applicable for in-loop reshaping (ILR) of some of the one or more video regions, wherein the reshaping model information provides information for a reconstruction of a video unit of a video region based on a representation in a first domain and a second domain and/or scaling a chroma residue of a chroma video unit, and wherein the reshaping model information comprises a parameter set that comprises a syntax element specifying a difference between an allowed maximum bin index and a maximum bin index to be used in the reconstruction, the syntax element being within a specified range.
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
A method for video processing is provided. The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on two or four chroma samples and/or corresponding luma samples; and performing the conversion based on the determining.
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
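Deriving CCLM parameters from the two or four sample pairs mentioned above can be sketched with a simple min/max fit; the integer and shift arithmetic of an actual codec is omitted, and the function names are illustrative.

```python
# Hedged sketch of CCLM parameter derivation from a small set of
# (luma, chroma) pairs: fit a line through the pairs with the smallest
# and largest luma values, then predict chroma from reconstructed luma.

def cclm_params(pairs):
    """pairs: [(luma, chroma), ...] with 2 or 4 entries."""
    lo = min(pairs, key=lambda p: p[0])
    hi = max(pairs, key=lambda p: p[0])
    alpha = (hi[1] - lo[1]) / (hi[0] - lo[0])   # slope of the linear model
    beta = lo[1] - alpha * lo[0]                 # offset
    return alpha, beta

def predict_chroma(luma, alpha, beta):
    """Linear cross-component prediction: chroma = alpha * luma + beta."""
    return alpha * luma + beta

a, b = cclm_params([(100, 50), (200, 90)])
print(a, b)                        # 0.4 10.0
print(predict_chroma(150, a, b))   # 70.0
```

Using only a handful of neighboring sample pairs keeps the derivation cheap while still capturing the local luma-to-chroma relationship.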
34.
INDEPENDENT CODING OF PALETTE MODE USAGE INDICATION
Devices, systems and methods for palette mode coding are described. An exemplary method for video processing includes performing a conversion between a block of a video region of a video and a bitstream representation of the video. The bitstream representation is processed according to a first format rule that specifies whether a first indication of usage of a palette mode is signaled for the block and a second format rule that specifies a position of the first indication relative to a second indication of usage of a prediction mode for the block.
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
Devices, systems and methods for palette mode coding are described. An exemplary method for video processing includes determining, for a conversion between a block of a video region in a video and a bitstream representation of the video, a prediction mode based on one or more allowed prediction modes that include at least a palette mode of the block. An indication of usage of the palette mode is determined according to the prediction mode. The method also includes performing the conversion based on the one or more allowed prediction modes.
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
36.
NEIGHBOURING SAMPLE SELECTION FOR INTRA PREDICTION
A method for video processing is provided. The method includes determining, for a conversion between a current video block of a video that is a chroma block and a coded representation of the video, parameters of a cross-component linear model (CCLM) prediction mode based on chroma samples that are selected according to W available above-neighbouring samples, W being an integer; and performing the conversion based on the determining.
A method for video processing is provided to include performing a conversion between a current video block of a video region of a video and a coded representation of the video, wherein the conversion uses a coding mode in which the current video block is constructed based on a first domain and a second domain and/or chroma residue is scaled in a luma-dependent manner, and wherein a parameter set in the coded representation comprises parameter information for the coding mode.
H04N 19/82 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
38.
BUFFER MANAGEMENT FOR INTRA BLOCK COPY IN VIDEO CODING
A method of visual media processing includes determining a size of a buffer to store reference samples for prediction in an intra block copy mode; and performing a conversion between a current video block of visual media data and a bitstream representation of the current video block, using the reference samples stored in the buffer, wherein the conversion is performed in the intra block copy mode, which is based on motion information related to a reconstructed block located in the same video region as the current video block, without referring to a reference picture.
H04N 19/15 - Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/122 - Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
H04N 19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
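The buffer-based prediction described in entry 38 above can be sketched as a fixed-size array of previously reconstructed samples that block vectors index into, with no reference picture involved. The buffer dimensions, wrap-around addressing, and class interface below are illustrative assumptions, not the claimed buffer-management scheme.

```python
class IbcBuffer:
    """Minimal reference-sample buffer for intra block copy: a fixed-size
    2-D array of reconstructed samples within the same video region."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.samples = [[0] * width for _ in range(height)]

    def store(self, x, y, block):
        """Write a reconstructed block whose top-left corner is (x, y)."""
        for dy, row in enumerate(block):
            for dx, s in enumerate(row):
                self.samples[(y + dy) % self.height][(x + dx) % self.width] = s

    def predict(self, x, y, bvx, bvy, w, h):
        """Fetch a w*h prediction block at (x, y) displaced by the block
        vector (bvx, bvy), i.e. motion information that points inside the
        buffer rather than at a reference picture."""
        return [[self.samples[(y + bvy + dy) % self.height]
                             [(x + bvx + dx) % self.width]
                 for dx in range(w)] for dy in range(h)]
```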
A method for video processing is provided to include performing, for a conversion between a current video block of a video and a coded representation of the video, a motion information refinement process based on samples in a first domain or a second domain; and performing the conversion based on a result of the motion information refinement process, wherein, during the conversion, the samples are obtained for the current video block from a first prediction block in the first domain using unrefined motion information, at least a second prediction block is generated in the second domain using refined motion information used for determining a reconstruction block, and reconstructed samples of the current video block are generated based on at least the second prediction block.
A method for video processing is provided. The method includes performing downsampling on chroma and luma samples of a neighboring block of a current video block of a video that is a chroma block; determining, for a conversion between the current video block and a coded representation of the video, parameters of a cross-component linear model (CCLM) based on the downsampled chroma and luma samples obtained from the downsampling; applying the CCLM to luma samples located in a luma block corresponding to the current video block to derive prediction values of the current video block; and performing the conversion based on the prediction values.
H04N 19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
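The downsample-then-predict pipeline in the abstract above reduces to two small steps: align luma samples with the chroma resolution, then apply the linear model pred_C = alpha * rec_L + beta. The 2:1 averaging filter and floating-point model below are simplifications chosen for illustration, not the particular downsampling filter the claim covers.

```python
def downsample_2x(samples):
    """Average each horizontal pair of samples -- a stand-in for the 2:1
    luma downsampling that aligns luma with 4:2:0 chroma resolution."""
    return [(samples[i] + samples[i + 1]) / 2 for i in range(0, len(samples), 2)]

def cclm_predict(luma_block, alpha, beta):
    """Apply pred_C = alpha * rec_L + beta to each (downsampled) luma
    sample of the luma block collocated with the chroma block."""
    return [[alpha * s + beta for s in row] for row in luma_block]
```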
41.
PULSE CODE MODULATION TECHNIQUE IN VIDEO PROCESSING
Devices, systems and methods for digital video coding, which include pulse code modulation techniques, are described. An exemplary method for video processing includes determining, for a current block of video, that at least one of a first coding mode in which pulse code modulation is used or a second coding mode in which multiple reference line based intra prediction is used is enabled, and performing, based on the first coding mode or the second coding mode, a conversion between the current block and a bitstream representation of the video, wherein a first indication indicative of use of the first coding mode and/or a second indication indicative of use of the second coding mode are included in the bitstream representation according to an ordering rule.
A method of video processing is provided to include: maintaining a set of tables, wherein each table includes motion candidates and each motion candidate is associated with corresponding motion information; updating a motion candidate list based on motion candidates in one or more tables using a pruning operation on the motion candidates; and performing a conversion between a first video block and a bitstream representation of a video including the first video block using the updated motion candidate list.
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
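The table-based candidate update with pruning described above can be sketched as follows. A FIFO history table, exact-match duplicate comparison, and a most-recent-first insertion order are all assumptions made for illustration; the abstract does not fix these choices.

```python
def update_candidate_list(candidate_list, table, max_len):
    """Append motion candidates from a history table into a candidate list,
    pruning entries whose motion information duplicates one already in the
    list.  Candidates are modelled as hashable tuples of motion info."""
    for cand in reversed(table):              # most recent table entry first
        if len(candidate_list) >= max_len:    # stop once the list is full
            break
        if cand not in candidate_list:        # pruning: skip identical motion
            candidate_list.append(cand)
    return candidate_list
```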
A method of video decoding is provided to include maintaining tables, wherein each table includes a set of motion candidates and each motion candidate is associated with corresponding motion information; and performing a conversion between a first video block and a bitstream representation of a video including the first video block, the performing of the conversion including using at least some of the set of motion candidates as a predictor to process motion information of the first video block.
H04N 19/52 - Processing of motion vectors by encoding by predictive encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
44.
METHOD AND DEVICE FOR SOCIAL PLATFORM-BASED DATA MINING
Provided are a method and a device for social platform-based data mining. The method includes: acquiring interest label dictionaries of registered users on an information client and first objects having a followed relationship with the registered users on the information client in a social platform; determining, according to the first objects, first followed sets corresponding to the registered users; constructing an interest model according to the interest label dictionaries of the registered users and the first followed sets; acquiring second objects having a followed relationship with newly registered users on the information client in the social platform, and reading relationship information between the newly registered users and the second objects; determining, according to the second objects, a second followed set corresponding to the newly registered users; and matching the second followed set with the interest model to determine recommended interest labels for the newly registered users.
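The final matching step can be sketched as scoring each interest label by the overlap between the new user's followed set and the followed sets behind the interest model. The overlap-count scoring and the model's list-of-pairs shape are assumptions; the abstract specifies neither.

```python
def recommend_labels(interest_model, new_followed, top_k=3):
    """Match a newly registered user's followed set against an interest
    model mapping registered users' followed sets to interest labels.
    Labels are scored by raw followed-set overlap (an assumption)."""
    scores = {}
    for followed_set, labels in interest_model:
        overlap = len(set(followed_set) & set(new_followed))
        for label in labels:
            scores[label] = scores.get(label, 0) + overlap
    # rank by descending score, breaking ties alphabetically
    ranked = sorted(scores, key=lambda l: (-scores[l], l))
    return [l for l in ranked if scores[l] > 0][:top_k]
```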
Provided are a method and a device for refreshing a news list. The method includes: receiving a refreshing signal; reading a refreshing start time according to the received refreshing signal; reading at least one preset time threshold; acquiring a recommended news list according to the refreshing start time and the time threshold; allocating a recommending time to each piece of news to be recommended in the recommended news list; and refreshing the news to be recommended in the recommended news list according to the recommending times, so as to generate a new recommended news list. The method and device solve the problem in the related art that a single refresh operation cannot surface more pieces of news because a news client sorts news by publication time.
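The recommending-time allocation described above can be sketched by spreading the recommended items across the window defined by the refresh start time and the preset threshold, so that one refresh surfaces items gradually rather than strictly by publication order. Even spacing is an illustrative choice; the abstract does not prescribe the allocation rule.

```python
def allocate_recommending_times(news_list, refresh_start, threshold):
    """Assign each item in the recommended list a recommending time spread
    evenly over [refresh_start, refresh_start + threshold].  Returns a list
    of (item, recommending_time) pairs in recommendation order."""
    n = len(news_list)
    step = threshold / n if n else 0
    return [(item, refresh_start + i * step) for i, item in enumerate(news_list)]
```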