A distribution system includes a device control circuit that receives a first sound signal and a second sound signal that are related to a performance sound to be distributed. The device control circuit also receives meta-data indicating a type of the first sound signal and a type of the second sound signal. The device control circuit also receives sound environment data indicating a sound characteristic of a sound appliance. Based on a combination of the type of the first sound signal and the sound characteristic or a combination of the type of the second sound signal and the sound characteristic, the device control circuit controls the first sound signal or the second sound signal to be output to the sound appliance.
The present invention relates to a PGC-1α expression promoting agent, a muscle building agent, or a mitochondria activating agent, which comprises at least one pyrimidine nucleotide or a precursor thereof as an active ingredient.
A61K 31/7072 - Compounds having saccharide radicals and heterocyclic rings having nitrogen as a ring hetero atom, e.g. nucleosides, nucleotides containing six-membered rings with nitrogen as a ring hetero atom containing condensed or non-condensed pyrimidines having oxo groups directly attached to the pyrimidine ring, e.g. cytidine, cytidylic acid having two oxo groups directly attached to the pyrimidine ring, e.g. uridine, uridylic acid, thymidine, zidovudine
A61K 31/7068 - Compounds having saccharide radicals and heterocyclic rings having nitrogen as a ring hetero atom, e.g. nucleosides, nucleotides containing six-membered rings with nitrogen as a ring hetero atom containing condensed or non-condensed pyrimidines having oxo groups directly attached to the pyrimidine ring, e.g. cytidine, cytidylic acid
A speaker unit includes a yoke, a magnet, a top plate, a voice coil, and a partitioner. The yoke includes a base portion and a protruding portion protruding from the base portion. The magnet is disposed on the base portion. The top plate is disposed on the magnet. A magnetic gap is formed between the top plate and the protruding portion. The voice coil is disposed in the magnetic gap. The partitioner is disposed in a space surrounded by the yoke, the magnet, and the top plate. The partitioner is spaced apart from the base portion. The partitioner includes at least one opening.
A headphone includes an outer arc-shaped headband including two strip-shaped guide portions respectively provided at both ends of the headband. The headphone further includes two speakers respectively connected to ends of the two guide portions. The headphone further includes an inner arc-shaped head pad including two sliders respectively provided at both ends of the head pad. Each slider includes an insertion hole through which the corresponding guide portion is inserted, a wall surface along which the guide portion slides, and a pressing portion that presses the guide portion toward the wall surface.
A sound generation method includes receiving a control value at each of a plurality of time points on a time axis, accepting a mandatory instruction, generating an acoustic feature value of a specific time point by using a trained model to process the control value and an acoustic feature value sequence, and updating the acoustic feature value sequence. When the mandatory instruction has not been received for the specific time point, the acoustic feature value sequence is updated by using the generated acoustic feature value. When the mandatory instruction has been received for the specific time point, one or more alternative acoustic feature values of one or more time points, including at least the specific time point, are generated in accordance with the control value for the specific time point, and the acoustic feature value sequence is updated by using the one or more alternative acoustic feature values.
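The per-time-point branching described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `toy_model` is a stand-in for the trained model, and the "alternative" path is simplified to enforcing the control value at the current time point only.

```python
def generate_step(model, control_value, history, mandatory_received=False):
    """One time step of the generation loop over the acoustic feature sequence."""
    if not mandatory_received:
        # Normal path: generate one feature value from the control value
        # and the sequence so far, then append it.
        value = model(control_value, history)
        return history + [value]
    # Mandatory path: produce alternative feature values (here, only the
    # current time point) that directly reflect the control value.
    alternatives = [control_value]
    return history + alternatives

# Toy "trained model": next value is the mean of the control value and
# the average of the history (an assumption for illustration only).
toy_model = lambda c, h: (c + sum(h) / len(h)) / 2 if h else c

seq = []
seq = generate_step(toy_model, 1.0, seq)        # first value
seq = generate_step(toy_model, 3.0, seq)        # normal update
seq = generate_step(toy_model, 9.0, seq, True)  # mandatory: enforce 9.0
```

Applying the three steps yields `[1.0, 2.0, 9.0]`: the first two values come from the toy model, while the mandatory instruction forces the third to match the control value exactly.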
A musical score creation device includes at least one processor configured to execute a receiving unit configured to receive a note sequence that includes a plurality of musical notes, and an estimation unit configured to, by using a trained model, estimate each note and attribute information for creating a musical score. The trained model is a machine-learning model that has learned an input-output relationship between a reference note sequence including a plurality of reference notes, and each reference note and reference attribute information for creating a reference musical score.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
A cytosine-type bridged nucleoside amidite crystal represented by the following structural formula:
in which R1 and R2 each represents a substituent, and R3 represents a protecting group.
A hollow structure has a hollow body with an annular first space and at least one resonator. In the resonator there is formed an elongated second space with a pair of openings at two ends thereof. Each of the pair of openings opens inwardly toward the interior of the first space. The ratio of the length of a centerline of the second space to the length of a centerline of the first space is within a range of 0.45 to 0.55. The second space is disposed in a curve along a circumferential surface of the first space and is shaped such that one of the two ends of the second space is folded back toward the other end.
A detection system includes a detectable member disposed on a movable member that is configured to be displaced in response to a user operation, and signal generating circuitry including a first coil configured to generate a magnetic field, the signal generating circuitry being configured to generate a detection signal dependent on a distance between the detectable member and the first coil, in which the detectable member includes a flexible base fixed to the movable member, and a second coil disposed on the flexible base.
A musical instrument includes: a fixed member; a movable member displaceable relative to the fixed member within a movable range in response to a playing operation of the musical instrument; a detectable circuit including a magnetic or conductive body and disposed on the movable member; a detector circuit including a coil disposed on the fixed member and configured to output a detection signal corresponding to a voltage that is dependent on a distance between the detectable circuit and the coil; at least one memory storing instructions; and at least one processor configured to implement the instructions to perform a plurality of tasks, including: a generating task that generates, based on correspondences between voltages of detection signals and positions of the movable member in the movable range, position data indicating a position of the movable member that depends on the voltage of the detection signal output from the detector circuit; and a calibrating task that calibrates the correspondences based on the voltage of the detection signal obtained while the movable member is at a predetermined position within the movable range.
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
An earphone includes an inserter configured to be inserted into an outer ear hole of an ear, a main body configured to be coupled to the inserter, and an earpiece covering the inserter. The main body includes a bottom surface and a side surface that are configured to contact a surface of a cavum concha. The side surface has a first region configured to contact a side wall of an antihelix. The first region has a protrusion configured to contact an inner side of the antihelix.
A sound collection setting method detects a specific object from an image captured by a camera, obtains position information of the specific object in the image, and sets a sound collection target range of a microphone by changing directivity of the microphone, based on the position information.
A sound collection control method recognizes a speaker from an image, detects a position of the recognized speaker, sets a first collection beam based on the position of the recognized speaker, recognizes a specific object other than the recognized speaker from an image, detects a position of the recognized specific object, and sets a second collection beam based on the detected position of the recognized specific object.
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
G06T 7/70 - Determining position or orientation of objects or cameras
A musical instrument includes a fixed member; a movable member displaceable in response to a playing operation of the musical instrument, the movable member being displaceable relative to the fixed member from a first state where the movable member is in an initial position to a second state where the movable member is displaced from the initial position; a detectable circuit including a magnetic or conductive body and disposed on the movable member; and a detector circuit including a coil disposed on the fixed member and configured to output a detection signal corresponding to a voltage that is dependent on a distance between the detectable circuit and the coil. A distance between the detectable circuit and the coil in the first state is smaller than a distance between the detectable circuit and the coil in the second state.
An image generation device is for a live distribution system that distributes, in real time, a piece of music performed by a performer to terminal devices of a plurality of viewers through a communication network. The image generation device includes an obtaining circuit and an image generation circuit. The obtaining circuit obtains motion information regarding a motion of a first viewer of the distributed piece of music. The image generation circuit generates an image showing at least one avatar group that is caused to move based on the motion information in an imaginary space that corresponds to a performance of the piece of music. Each of the at least one avatar group includes avatars of a plurality of viewers.
A microphone state display method includes receiving a mute-on or mute-off operation for each of a plurality of microphones; displaying on a display, as a first state, the state of a microphone that has received the mute-off operation; when receiving the mute-on operation in a case in which at least one microphone among the plurality of microphones is in a mute-off state, displaying on the display, as a second state, the state of the microphone that has received the mute-on operation; and, when receiving the mute-on operation in a case in which all of the plurality of microphones are in a mute-on state, displaying on the display, as a third state, the state of the microphone that has received the mute-on operation.
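The three-state display logic can be sketched as follows. This is a hypothetical reading of the abstract: it assumes the "all muted" check is evaluated after the operation is applied, which the abstract does not state explicitly.

```python
FIRST, SECOND, THIRD = "first", "second", "third"

def display_state(states, mic, operation):
    """Apply a mute operation and return (new states, display state).

    states: dict of mic id -> "on" (muted) or "off" (live), before the op.
    """
    states = dict(states)
    if operation == "mute_off":
        states[mic] = "off"
        return states, FIRST          # unmuted mics always show the first state
    states[mic] = "on"                # a mute-on operation
    if any(s == "off" for s in states.values()):
        return states, SECOND         # at least one other mic is still live
    return states, THIRD              # every microphone is now muted

states = {"a": "off", "b": "off"}
states, s1 = display_state(states, "a", "mute_on")   # "b" still live
states, s2 = display_state(states, "b", "mute_on")   # last mic muted
states, s3 = display_state(states, "a", "mute_off")  # unmute "a"
```

Distinguishing the second and third states lets the display warn operators when the entire room has gone silent, rather than when a single microphone is muted.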
A power amplifier includes a first amplifier configured to amplify an input signal and output, from a first output, a first signal in which the input signal is amplified, a second amplifier configured to amplify the first signal and output, from a second output, a second signal in which the first signal is amplified, a third amplifier configured to amplify the second signal and output, from a third output, a third signal in which the second signal is amplified, a capacitor connected between the first output and a mixing node, a first resistor connected between the second output and the mixing node, a first inductor connected between the third output and the mixing node, a second inductor connected between the mixing node and a load, and a feedback circuit configured to negatively feed back a mixed signal of the mixing node to an input of the first amplifier.
The present invention is a muscular atrophy inhibitor comprising at least one pyrimidine nucleotide or a precursor thereof as an active ingredient. The present invention is also a method for inhibiting muscular atrophy by administering at least one pyrimidine nucleotide or a precursor thereof.
A61K 31/7072 - Compounds having saccharide radicals and heterocyclic rings having nitrogen as a ring hetero atom, e.g. nucleosides, nucleotides containing six-membered rings with nitrogen as a ring hetero atom containing condensed or non-condensed pyrimidines having oxo groups directly attached to the pyrimidine ring, e.g. cytidine, cytidylic acid having two oxo groups directly attached to the pyrimidine ring, e.g. uridine, uridylic acid, thymidine, zidovudine
A61P 21/00 - Drugs for disorders of the muscular or neuromuscular system
A61K 31/7068 - Compounds having saccharide radicals and heterocyclic rings having nitrogen as a ring hetero atom, e.g. nucleosides, nucleotides containing six-membered rings with nitrogen as a ring hetero atom containing condensed or non-condensed pyrimidines having oxo groups directly attached to the pyrimidine ring, e.g. cytidine, cytidylic acid
A keyboard apparatus that is miniaturized in the depth direction includes a frame, a first member, a key, and one or more hammer assemblies configured to rotate in accordance with movement of the key. Each of the one or more hammer assemblies includes a rotation member rotatably connected to the frame with a rotation axis as a center of rotation, and a weight member attached to the rotation member and having a first portion and a second portion. The second portion faces the first member in a first direction extending along the rotation axis, a thickness of the second portion is smaller than a thickness of the first portion in the first direction, and a length of the second portion is larger than a length of the first portion in a rotation direction of the weight member.
A sound bar has a striking surface. The sound bar includes: a surface layer having a first surface constituting at least a part of the striking surface and a second surface opposite across a thickness of the surface layer from the first surface; and a base fixed to the second surface of the surface layer. A cutout surface is provided on a peripheral edge portion of the striking surface. The first surface of the surface layer is smaller than the base in a plan view.
A live distribution device includes an obtaining circuit, a data processing circuit, and a distribution circuit. The obtaining circuit is configured to obtain a piece of music and/or a user reaction to the piece of music. The user reaction is obtained from a viewing user, among a plurality of users, who is viewing a performance. The data processing circuit is configured to generate processed data based on the piece of music and/or the user reaction obtained by the obtaining circuit. The processed data indicates how the performance is viewed by the viewing user. The distribution circuit is configured to distribute the generated processed data to a terminal device of a non-viewing user, among the plurality of users, who is not viewing the performance.
A signal processing method, which is realized by a computer, includes receiving a control value representing a musical feature, receiving a selection signal for selecting either a first degree of enforcement or a second degree of enforcement that is lower than the first degree of enforcement, and generating, by using a trained model, in accordance with the selection signal, either an acoustic feature amount sequence that reflects the control value in accordance with the first degree of enforcement, or an acoustic feature amount sequence that reflects the control value in accordance with the second degree of enforcement.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
A sound output system according to an embodiment includes a speaker configured to output a sound according to sound data supplied to the speaker, and an operation device comprising one or more operators, a drive unit configured to drive the one or more operators based on performance data supplied to the operation device in synchronization with the sound data supplied to the speaker, and a drive control unit configured to control the drive unit. According to the sound output system, it is possible to faithfully reproduce a performance sound at the time of performance and to accurately reproduce the performance by driving the one or more operators based on the performance sound.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
G10H 1/04 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation
A pedal unit 10 in one embodiment includes a case 190, a foot lever 100, and an elastic member 155. The foot lever 100 includes a first portion 100r located inside the case 190 and a second portion 100f located outside the case 190. The foot lever 100 is rotatably arranged with respect to the case 190. A center of rotation is located between the first portion 100r and the second portion 100f. The elastic member 155 is arranged within the case 190 and provides a force against the first portion 100r.
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
A singing sound output system includes at least one processor configured to execute a teaching unit configured to indicate to a user a progression position in singing data that are temporally associated with accompaniment data and include a plurality of syllables, an acquisition unit configured to acquire at least one piece of sound information input by a performance, a syllable identification unit configured to identify, from the syllables in the singing data, a syllable corresponding to the sound information, a timing identification unit configured to associate, with the sound information, relative information indicating a relative timing with respect to an identified syllable identified by the syllable identification unit, a synthesizing unit configured to synthesize a singing sound based on the identified syllable, and an output unit configured to, based on the relative information, synchronize and output the singing sound and an accompaniment sound based on the accompaniment data.
In an embodiment, a text providing method includes providing chord input data, in which chords are aligned in chronological order, to a trained model in which a relationship between chord sequence data, in which chords are aligned in chronological order, and explanatory text related to the chords included in the chord sequence data is learned, and obtaining text corresponding to the chord input data from the trained model.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
30.
PERFORMANCE ANALYSIS METHOD, PERFORMANCE ANALYSIS SYSTEM AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
A performance analysis method is implemented by a computer system. The performance analysis method includes: obtaining performance data representing performance by a performer on a musical instrument; obtaining a performance image of performer's fingers playing the musical instrument; generating finger position data representing a position of each of the performer's fingers from the performance image; generating fingering data representing fingering in a performance by the performer based on the performance data and the finger position data; and displaying a performance image based on the generated fingering data.
An information processing method is implemented by a computer system. The information processing method includes: generating operation data representing one or more fingers, of a plurality of fingers of a left hand and a right hand of a user, that operate a musical instrument, by analyzing a performance image indicating the plurality of fingers of the user who plays the musical instrument; and executing first processing in a case where the operation data represents the musical instrument being operated with a finger of the left hand, and executing second processing different from the first processing in a case where the operation data represents the musical instrument being operated with a finger of the right hand.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
A sound bar includes an elongated member having a striking surface having an elongated shape. A weight of a striking surface side area of the elongated member, per unit volume of the striking surface side area of the elongated member, changes along a longitudinal direction of the striking surface. The striking surface side area is defined in a range of a uniform thickness from the striking surface.
A distribution system distributes content to a plurality of terminal devices and includes an obtaining circuit, a determination circuit, a generation circuit, and a distribution circuit. The obtaining circuit obtains reaction information from a terminal device of a viewer of the content. The reaction information indicates a reaction of the viewer to the content. The determination circuit determines whether the reaction information is first reaction information or second reaction information. The first reaction information indicates a first reaction made by equal to or more than a predetermined number of viewers. The second reaction information is different from the first reaction information and indicates a second reaction made by less than the predetermined number of viewers. The generation circuit generates an audience sound in a case of the first reaction information. The distribution circuit transmits the audience sound to the plurality of terminal devices.
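The threshold-based determination can be sketched in a few lines. The counting approach, the default threshold, and the `crowd:` sound-cue naming are illustrative assumptions, not details from the patent:

```python
from collections import Counter

def classify_reactions(reactions, threshold):
    # Count identical reactions across viewers; a reaction made by at
    # least `threshold` viewers is "first reaction information", the
    # rest is "second reaction information".
    counts = Counter(reactions)
    first = [r for r, n in counts.items() if n >= threshold]
    second = [r for r, n in counts.items() if n < threshold]
    return first, second

def audience_sound(reactions, threshold=3):
    # Generate an audience-sound cue only for mass (first) reactions.
    first, _ = classify_reactions(reactions, threshold)
    return [f"crowd:{r}" for r in sorted(first)]

cues = audience_sound(["clap", "clap", "clap", "cheer"])
```

With three claps and one cheer against a threshold of three, only the clap crosses the threshold and produces a crowd sound; the lone cheer is second reaction information and generates nothing.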
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
A sound editing device includes at least one processor that is configured to execute a first receiving unit configured to receive a first audio signal, a second receiving unit configured to receive a second audio signal, and an estimation unit configured to estimate effect information that reflects an effect to be applied to the first audio signal, from the first and second audio signals, by using a trained model indicating an input-output relationship between first and second input audio signals and output effect information that reflects an effect to be applied to the first input audio signal.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
36.
METHOD, SYSTEM, AND STORAGE MEDIUM FOR CONTROLLING LOUDSPEAKER GROUP DELAY
A method includes acquiring a latency value defining delay of sound through a filter, acquiring a first group delay indicating delay for each frequency of sound of a first loudspeaker, acquiring a second group delay indicating delay for each frequency of sound of a second loudspeaker, calculating an adjustment amount for adjusting a first audio signal supplied to the first loudspeaker and/or a second audio signal supplied to the second loudspeaker, such that a difference in the sounds of the first and second loudspeakers in a target band is reduced, and generating, in accordance with the adjustment amount, a frequency response of a first filter that controls characteristics of the first audio signal and/or a frequency response of a second filter that controls characteristics of the second audio signal, while controlling a latency of the first filter and/or the second filter in accordance with the latency value.
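One way to read the adjustment-amount step is as the average group-delay difference between the two loudspeakers within the target band: the sign indicates which signal to delay. This is a hypothetical simplification (the patent computes an adjustment amount but does not specify band averaging):

```python
def delay_adjustment(gd1, gd2, band):
    """Average group-delay difference (gd2 - gd1) inside the target band.

    gd1, gd2: dicts mapping frequency (Hz) -> group delay (ms) for the
    first and second loudspeakers; band: (low, high) edges in Hz.
    """
    lo, hi = band
    diffs = [gd2[f] - gd1[f] for f in gd1 if lo <= f <= hi]
    # Positive result: delay the first audio signal by this amount;
    # negative: delay the second. Frequencies outside the band are ignored.
    return sum(diffs) / len(diffs)

gd_first = {100: 2.0, 200: 2.0, 5000: 1.0}
gd_second = {100: 3.0, 200: 4.0, 5000: 1.0}
amount = delay_adjustment(gd_first, gd_second, (50, 300))
```

Here the second loudspeaker lags by 1 ms at 100 Hz and 2 ms at 200 Hz, so the sketch reports 1.5 ms; the matched delays at 5 kHz fall outside the target band and do not affect the result.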
An audio signal processing method for a mixing apparatus including a plurality of channels includes selecting at least a first channel among the plurality of channels, inputting an audio signal of the selected first channel, specifying setting data to be set to the mixing apparatus based on time-series sound volume data for the first channel or data on a second channel different from the first channel among the plurality of channels, and outputting the specified setting data.
An object of the present invention is to provide a tone plate that can be increased in strength and that can produce the original sound of the material. A tone plate 10 according to an aspect of the present invention includes: a surface layer 1 having a striking surface 1a; and a base layer 2 that is laminated directly or indirectly onto a face of the surface layer 1 on the side opposite to the striking surface 1a, wherein the surface layer 1 is a layer of wood impregnated with resin, and the base layer 2 is not impregnated with resin.
An information processing device includes at least one processor configured to execute a plurality of modules including an input module into which natural language that includes an adjective is configured to be input by a user, and a timbre estimation module configured to output timbre data based on the natural language input by the user, by using a trained model configured to output the timbre data from the adjective.
G10H 1/14 - Circuits for establishing the harmonic content of tones during execution
G10H 5/00 - Instruments in which the tones are generated by means of electronic generators
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
40.
SIGNAL PROCESSING SYSTEM, SIGNAL PROCESSING METHOD, AND PROGRAM
A signal processing system causes a reproduction device to reproduce a time series signal that follows reproduction of a musical piece. The signal processing system includes an electronic controller including at least one processor. The electronic controller is configured to execute a plurality of units including an acquisition unit configured to acquire an indicated position indicated by a user in the reproduction of the musical piece, and a control unit configured to execute time stretching of the time series signal in accordance with the indicated position.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
41.
SOUND GENERATION DEVICE AND CONTROL METHOD THEREOF, PROGRAM, AND ELECTRONIC MUSICAL INSTRUMENT
A sound generation device includes an electronic controller including at least one processor. The electronic controller is configured to execute a first acquisition module configured to acquire first lyrics data in which a plurality of characters to be vocalized are arranged in a time series and that include a first character and a second character that follows the first character, a second acquisition module configured to acquire a vocalization start instruction, and a control module configured to, in response to the acquiring of the vocalization start instruction, output an instruction to generate an audio signal based on a first vocalization corresponding to the first character, in response to the vocalization start instruction satisfying a first condition, and output an instruction to generate the audio signal based on a second vocalization corresponding to the second character, in response to the vocalization start instruction not satisfying the first condition.
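The first-condition branching can be sketched as follows. The abstract does not define the first condition, so this sketch assumes, purely for illustration, that the condition is whether the vocalization start instruction arrives before a deadline associated with the first character:

```python
def character_for_instruction(t, first_char_deadline, chars=("ha", "ru")):
    """Pick which lyric character to vocalize for a start instruction.

    t: arrival time of the vocalization start instruction.
    first_char_deadline: hypothetical cutoff for the first character.
    chars: (first character, second character) from the lyrics data.
    """
    first, second = chars
    # Instruction satisfies the (assumed) first condition: vocalize the
    # first character; otherwise skip ahead to the second character.
    return first if t <= first_char_deadline else second

early = character_for_instruction(0.4, 0.5)  # in time for the first syllable
late = character_for_instruction(0.6, 0.5)   # too late: second syllable
```

The point of the mechanism is that a late keypress advances through the lyrics rather than replaying a syllable whose moment has passed, keeping the vocal line aligned with the performance.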
G10L 13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
A data processing method is performed by a processor that receives data signals from a plurality of devices utilizing different protocols. The method receives a data signal from each of the plurality of devices and generates multitrack audio data in which audio data of a plurality of channels is stored, by storing a first data string of a digital audio signal, received from a first device of the plurality of devices that utilizes a first protocol, in a first channel of the plurality of channels, and storing a data string of a digital signal, related to the digital audio signal and received from a second device of the plurality of devices that utilizes a second protocol different from the first protocol, as a second data string of the digital audio signal in a second channel of the plurality of channels.
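The channel-assignment step can be sketched minimally. The function name and the list-of-lists track layout are illustrative assumptions; real multitrack audio formats carry sample formats and timing metadata that this sketch omits:

```python
def build_multitrack(first_audio, second_data, num_channels=4):
    """Lay two protocol streams into a multitrack channel structure.

    first_audio: digital audio samples from the first-protocol device.
    second_data: a related digital signal from the second-protocol device,
    stored as if it were an audio data string.
    """
    tracks = [[] for _ in range(num_channels)]
    tracks[0] = list(first_audio)   # first protocol -> first channel
    tracks[1] = list(second_data)   # second protocol -> second channel
    return tracks                   # remaining channels stay empty

tracks = build_multitrack([0.1, 0.2], [7, 8])
```

Packing the second protocol's signal into an ordinary audio channel means a standard multitrack container can carry both streams without any protocol-specific extension.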
An object of the present invention is to provide a magnetic circuit for an acoustic transducer, the magnetic circuit enabling peeling of the surface of a member formed by compression-molding a soft magnetic composite material to be suppressed. The present invention is a magnetic circuit for an acoustic transducer, the magnetic circuit including: a yoke 151; a magnet 152; and a top plate 153, wherein the yoke 151 includes a bottom surface portion 151a and a pole piece 151b provided perpendicular to the bottom surface portion 151a, the magnet 152 and the top plate 153 are provided in that order on the bottom surface portion 151a, a magnetic gap 141 in which a voice coil 14 is disposed is formed between the pole piece 151b and the top plate 153, at least one of the pole piece 151b and the top plate 153 includes: a composite material part 154 in a position facing the magnetic gap 141; and a protective layer 155 covering at least a surface of the composite material part 154, the composite material part 154 is formed from a soft magnetic composite material, and the material of the protective layer 155 differs from the soft magnetic composite material.
A content data processing method generates a first time code indicating an elapsed time from a start of a live event, generates first content data from first data, which includes one of audio or video data received from the live event, by adding to the first data the first time code with a first start time and a first end time associated with the first data, and generates a second time code indicating a duration associated with the first data reflecting the first start time and the first end time.
A pedal unit according to an embodiment includes a first foot lever, a shaft serving as a center of rotation of the first foot lever, and a bearing paired with the shaft. The shaft or the bearing includes a first member arranged on at least a portion of surfaces in contact with each other, and a second member formed of a material different from the first member and supporting the first member from a side opposite the surfaces. The surfaces are included in an inner area of a width of the first foot lever when the first foot lever is viewed perpendicular to the shaft. The first member and the second member are fixed in a sliding direction of the shaft and the bearing.
A pedal device includes a case, a foot lever including a first portion located inside the case and a second portion located outside the case, the foot lever being arranged to be rotatable with respect to the case, a center of rotation of the foot lever being located between the first portion and the second portion, a first sensor detecting a position of the foot lever between a rest position and an end position, and a reaction force member contacting with the foot lever and applying a reaction force while the foot lever rotates from the rest position to the end position, the reaction force member being located corresponding to a portion opposite to the center of rotation of the first portion.
An information processing system (a) acquires user playing data indicative of playing of a piece of music by a user, (b) generates habit data indicative of a playing habit of the user in playing the piece of music on a musical instrument, by inputting the acquired user playing data into at least one first trained model that learns a relationship between (i) player playing training data indicative of playing of a piece of reference music by a player, and (ii) corresponding training habit data indicative of a playing habit of the player in playing the piece of reference music on a musical instrument, the playing habit being indicated by the player playing training data; and (c) identifies a practice phrase based on the generated habit data.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
A body structure of an electric guitar includes a body including a first chamber and a second chamber formed spaced apart from each other, and a slit that connects the first chamber and the second chamber.
An audio analysis method that is realized by a computer system includes estimating a plurality of beat points of a musical piece by analyzing an audio signal representing a performance sound of the musical piece, receiving an instruction from a user to change a location of at least one beat point of the plurality of beat points, and updating a plurality of locations of the plurality of beat points in response to the instruction from the user.
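One plausible reading of the beat-update step above, sketched in Python: moving one beat point shifts the later beat points by the same offset. The shift policy is an assumption for illustration only, not the patented update rule.

```python
def update_beats(beats, index, new_location):
    """Move the beat at `index` to `new_location` and shift every later
    beat point by the same offset (illustrative policy)."""
    offset = new_location - beats[index]
    return beats[:index] + [b + offset for b in beats[index:]]

# User drags the second beat from 0.5 s to 0.6 s; later beats follow.
updated = update_beats([0.0, 0.5, 1.0, 1.5], 1, 0.6)
```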
SOUND GENERATION METHOD USING MACHINE LEARNING MODEL, TRAINING METHOD FOR MACHINE LEARNING MODEL, SOUND GENERATION DEVICE, TRAINING DEVICE, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING SOUND GENERATION PROGRAM, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING TRAINING PROGRAM
A sound generation method that is realized by a computer includes receiving a representative value of a musical feature amount for each of a plurality of sections of a musical note, and using a trained model to process a first feature amount sequence in accordance with the representative value for each section, thereby generating a sound data sequence corresponding to a second feature amount sequence in which the musical feature amount changes continuously.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
51.
AUDIO ANALYSIS METHOD, AUDIO ANALYSIS SYSTEM AND PROGRAM
An audio analysis method that is realized by a computer system includes setting a maximum tempo curve representing a temporal change of a maximum tempo value and a minimum tempo curve representing a temporal change of a minimum tempo value in accordance with an instruction from a user, and analyzing an audio signal representing a performance sound of a musical piece, thereby estimating a tempo of the musical piece within a restricted range between a maximum value represented by the maximum tempo curve and a minimum value represented by the minimum tempo curve.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
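The restricted-range estimation in the entry above reduces, in the simplest case, to clamping a raw per-frame tempo estimate between the two user-set curves. Modeling the curves as plain per-frame lists is an illustrative simplification, not the patented estimator.

```python
def clamp_tempo(raw_tempi, max_curve, min_curve):
    """Restrict each raw tempo value to the band [min_curve, max_curve]
    defined by the user-set maximum and minimum tempo curves."""
    return [min(max(t, lo), hi)
            for t, hi, lo in zip(raw_tempi, max_curve, min_curve)]

tempo = clamp_tempo([200.0, 90.0, 118.0],
                    [140.0, 140.0, 140.0],
                    [100.0, 100.0, 100.0])
```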
52.
SOUND PROCESSING METHOD, SOUND PROCESSING APPARATUS AND SOUND PROCESSING SYSTEM
A sound processing method obtains first audio data representing first sound, obtains second audio data representing second sound created in advance, analyzes the first audio data, compares the second audio data with an analysis result of the analyzing, reproduces third audio data from the second audio data by omitting a type of sound from the second sound that matches a type of sound in the first sound, based on a comparison result of the comparing, and outputs an audio signal representing the reproduced third audio data.
G10L 21/028 - Voice signal separating using properties of sound source
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
H04R 5/033 - Headphones for stereophonic communication
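The comparison-and-omission step in the entry above can be sketched as a set difference: sound types found in the first (e.g. live) audio are dropped from the pre-created second audio to form the third audio. Track names and data are illustrative assumptions.

```python
def omit_matching(second_tracks: dict, first_types: set) -> dict:
    """Return the second audio minus every sound type that also appears
    in the first audio."""
    return {t: d for t, d in second_tracks.items() if t not in first_types}

# Live sound already contains vocals, so the pre-created vocals are omitted.
third = omit_matching({"vocals": b"v", "drums": b"d", "bass": b"b"},
                      {"vocals"})
```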
A data processing device includes: a digital signal processor; at least one processor; and at least one memory device configured to store a plurality of instructions, which when executed by the at least one processor, cause the at least one processor to operate to: output a first determination result relating to a scene of content through use of sound data; select processing for the sound data by a first selection method based on the first determination result; determine an attribute of the content from among a plurality of attribute candidates; and select the processing by a second selection method, which is different from the first selection method, based on a determination result of the attribute, wherein the digital signal processor is configured to execute the processing selected by the at least one processor on the sound data.
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
54.
INFORMATION PROCESSING SYSTEM, ELECTRONIC MUSICAL INSTRUMENT, INFORMATION PROCESSING METHOD, AND TRAINING MODEL GENERATING METHOD
An electronic musical instrument is configured to: (a) acquire input data that includes habit data indicative of a playing habit of a user in playing a musical instrument; (b) generate correction data by inputting the acquired input data into at least one trained model that learns a relationship between training input data and training correction data; and (c) correct, using the generated correction data, at least one first intensity characteristic representative of a relationship between: (i) a playing intensity in playing the musical instrument by the user; and (ii) a sound intensity of a musical sound output in response to playing of the musical instrument.
G10H 1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
A detection system for a keyboard instrument includes a movable member that is displaceable in response to a user playing operation of the keyboard instrument. The detection system includes: (i) a detectable portion of a magnetic or conductive body that is disposed on the movable member; (ii) a detection board including: a detection coil disposed facing the detectable portion; and a detection circuit configured to generate a detection signal depending on a distance between the detection coil and the detectable portion; (iii) a control board, which is discrete from the detection board, including a control integrated circuit configured to generate, based on the detection signal, displacement data indicating a position of the movable member; and (iv) a wiring portion including a wiring configured to transmit the detection signal from the detection board to the control board.
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
56.
SOUND GENERATION METHOD USING MACHINE LEARNING MODEL, TRAINING METHOD FOR MACHINE LEARNING MODEL, SOUND GENERATION DEVICE, TRAINING DEVICE, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING SOUND GENERATION PROGRAM, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING TRAINING PROGRAM
A sound generation method that is realized by a computer includes receiving a first feature amount sequence in which a musical feature amount changes over time, and using a trained model that has learned an input-output relationship between an input feature amount sequence in which the musical feature amount changes over time at a first fineness and a reference sound data sequence corresponding to an output feature amount sequence in which the musical feature amount changes over time at a second fineness that is higher than the first fineness, to process the first feature amount sequence, thereby generating a sound data sequence corresponding to a second feature amount sequence in which the musical feature amount changes at the second fineness.
G10H 1/12 - Circuits for establishing the harmonic content of tones by filtering complex waveforms
G10H 3/12 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument
A single printed-circuit board of a class-D amplifier includes an input ground, an output ground, an input amplifying circuit, a modulation circuit, an output amplifying circuit, an output filter, a solid pattern, a first feedback circuit, and a second feedback circuit. The solid pattern of the output ground extends into all regions of the input amplifying circuit, the modulation circuit, the output amplifying circuit, and the output filter. The first feedback circuit executes feedback in which a voltage at a first connecting point is negatively fed back to an inverting input of the input amplifying circuit. The second feedback circuit executes feedback in which a voltage at a second connecting point is negatively fed back to a non-inverting input of the input amplifying circuit.
A sound signal processor and a control method therefor that enable appropriate processing to be applied to each sound signal and the sound signals to be outputted to an appropriate output destination, while avoiding equipment connection complexity. A first processing unit performs first processing on a first sound signal to generate a second sound signal, and outputs the second sound signal to a mix bus, a second processing unit performs second processing on a third sound signal to generate a fourth sound signal, and outputs the fourth sound signal to the mix bus and a first output destination, and the mix bus mixes the second sound signal with the fourth sound signal to generate a fifth sound signal, and outputs the fifth sound signal to a second output destination.
An audio signal distribution method includes assigning in advance a reproduction role to each of a plurality of speakers, distributing an audio signal to the plurality of speakers according to the reproduction role, and, in response to one of the plurality of speakers being released and the number of speakers thereby being decreased, causing a speaker paired with the released speaker in the reproduction role to further reproduce the sound of the released speaker.
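The redistribution rule above can be sketched minimally: when a speaker is released, its paired speaker additionally takes over the released speaker's role. Speaker names, role labels, and the pairing table are illustrative assumptions.

```python
def redistribute(roles: dict, pairs: dict, released: str) -> dict:
    """Reassign the released speaker's reproduction role to its pair,
    in addition to the pair's own role."""
    partner = pairs[released]
    new_roles = {s: list(r) for s, r in roles.items() if s != released}
    new_roles[partner] += roles[released]
    return new_roles

# Right speaker drops out; the left speaker reproduces both roles.
after = redistribute({"L": ["front-left"], "R": ["front-right"]},
                     {"L": "R", "R": "L"},
                     "R")
```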
A communication method according to an embodiment includes determining, by a first communication device, whether timing information, which is repeatedly generated at a predetermined interval and is used for a notification process executed by the first communication device, has been generated; and transmitting, as packet data, notification data and sequentially acquired sound data to a second communication device connected to the first communication device via a network, the notification data being based on a presence or an absence of the timing information during a period in which the sound data, contained in the packet data, was sequentially acquired.
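The packet described in the entry above might be laid out as below: sound data acquired during a period is bundled with notification data recording whether the timing information was generated in that period. The field names are assumptions for illustration only.

```python
def make_packet(sound_chunks, timing_generated: bool) -> dict:
    """Bundle sequentially acquired sound data with notification data
    based on the presence or absence of the timing information."""
    return {"sound": b"".join(sound_chunks),
            "notify": bool(timing_generated)}

pkt = make_packet([b"\x01", b"\x02"], True)
```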
An audio analysis system includes at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: receive an instruction indicative of a target timbre; acquire a first audio signal containing a plurality of audio components corresponding to different timbres; and select at least one reference signal from among a plurality of reference signals respectively representative of different pieces of audio based on the target timbre and the first audio signal, in which: the at least one reference signal has an intensity with a temporal change, the temporal change in the intensity of the at least one reference signal is represented by a reference rhythm pattern, the plurality of audio components include audio components corresponding to the target timbre, the audio components corresponding to the target timbre have an intensity with a temporal change, the temporal change in the intensity of the audio components corresponding to the target timbre is represented by an analysis rhythm pattern, and the reference rhythm pattern is similar to the analysis rhythm pattern.
The present disclosure provides a generation device having a plurality of operators that receive a user operation that causes generation of a sound. The generation device includes: at least one first operator arranged in a first region and configured to receive a first user operation that causes generation of a rhythm sound signal; at least one second operator arranged in a second region and configured to receive a second user operation that causes generation of a melody sound signal; and at least one third operator arranged in a third region and configured to receive a third user operation that causes a sound effect to be applied to a synthesized sound signal of the generated rhythm sound signal and the generated melody sound signal. The first region, the second region, and the third region are different regions of the generation device from each other.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
G10H 7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
A fader device includes a plurality of shafts disposed parallel to each other, and a moving body attached to the plurality of shafts and movable in a longitudinal direction of the plurality of shafts. At least one of the plurality of shafts is a screw shaft with a male thread on an outer periphery thereof and is configured to be rotatable about an axis extending along the longitudinal direction. At least one of the plurality of shafts other than the screw shaft is a guide shaft that guides the moving body in the longitudinal direction. The moving body meshes with the male thread to move in the longitudinal direction as the screw shaft rotates.
An information processing system includes at least one memory configured to store instructions and at least one processor configured to implement the instructions to acquire first audio data indicative of audio of a target piece of music, and cause a trained model to output first timbre data indicative of a timbre appropriate for the target piece of music by inputting input data into the trained model, the input data including the first audio data, in which the trained model is trained to learn a relationship between second audio data indicative of audio and second timbre data indicative of a timbre for each reference piece of a plurality of reference pieces of music.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
An audio apparatus includes a network interface, a receiver, at least one storage, and at least one processor. The processor is configured to determine that the audio apparatus is in a state capable of communicating with another audio apparatus via the network interface. The processor is also configured to receive, via the receiver, audio data transmitted from an external apparatus different from the other audio apparatus. The processor is also configured to output a sound based on the received audio data. The processor is also configured to transmit sound emission control information stored in the at least one storage to the other audio apparatus. The sound emission control information includes one or more of a sound volume and a frequency band.
An information processing system includes an image obtaining circuit and a display control circuit. The image obtaining circuit is configured to obtain observation images of a first keyboard of a first keyboard instrument. The display control circuit is configured to display, on a display device, the observation images and reference images. The reference images include moving images of at least one hand and one or more fingers of a reference performer who is playing a second keyboard of a second keyboard instrument. The at least one hand and the one or more fingers of the reference performer are displayed overlapping the first keyboard included in the observation images.
G09B 5/02 - Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
G10G 1/02 - Chord or note indicators, fixed or adjustable, for keyboards or fingerboards
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
A fastening structure fastens a to-be-fastened component to a wooden component by using a screw. The wooden component is made of wood having a specific gravity of 0.08 g/cm³ to 0.85 g/cm³ inclusive, and the screw has a nominal diameter of 0.8 mm to 3.5 mm inclusive. The distance between a neutral position in a longitudinal direction of an effective screw part, which is the portion where the wooden component and a male screw part formed in a shaft part of the screw mesh together, and the surface of the wooden component that comes into contact with the to-be-fastened component is 1 mm to 15 mm inclusive.
An audio mixer includes an operation panel that includes a plurality of physical controllers that control a value of a parameter. The audio mixer has a signal processor that processes an audio signal to be outputted from a plurality of input channels to a plurality of mixing buses according to the value of the parameter, and a processor that controls an operation of the signal processor. The plurality of physical controllers include a first physical controller that controls a value of a first parameter and a second physical controller that controls a value of a second parameter. In a first mode, the processor divides the plurality of input channels into a first input channel of a first signal processing system and a second input channel of a second signal processing system different from the first signal processing system, divides the plurality of mixing buses into a first mixing bus of the first signal processing system and a second mixing bus of the second signal processing system, controls the signal processor to perform a first operation to process an audio signal to be outputted from the first input channel to the first mixing bus according to the value of the first parameter in the first signal processing system, and controls the signal processor to perform a second operation to process an audio signal to be outputted from the second input channel to the second mixing bus according to the value of the second parameter in the second signal processing system. In a second mode different from the first mode, the processor controls, by using the plurality of input channels and the plurality of mixing buses as part of the same signal processing system, the signal processor to perform a third operation to process an audio signal to be outputted from the plurality of input channels to the plurality of mixing buses according to the value of the first parameter and the value of the second parameter in the same signal processing system.
An audio device simulation method includes acquiring a standard model that models input-output characteristics of an audio device, setting at least one parameter of a target audio device of the same type as the audio device, measuring input-output characteristics of the target audio device, and generating an individual model that models the input-output characteristics of the target audio device by correcting the standard model using the measured input-output characteristics.
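One simple way to realize the correction step above is to blend a standard per-band gain model toward the gains measured on the individual unit. The linear blend and the weight parameter are illustrative assumptions, not the patented correction.

```python
def correct_model(standard, measured, weight=1.0):
    """Move each band of the standard model toward the measured value;
    weight=1.0 fully adopts the measurement."""
    return [s + weight * (m - s) for s, m in zip(standard, measured)]

# Individual model halfway between the standard model and the measurement.
individual = correct_model([1.0, 0.8, 0.5], [1.1, 0.7, 0.5], weight=0.5)
```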
A speaker system includes: a portable speaker including a first locking member; and a charging base including a second locking member. The first locking member is configured to slide in: a first direction relative to the second locking member to lock the first and second locking members together; and a second direction, which is opposite to the first direction, relative to the second locking member to release the first locking member from the second locking member.
A method of outputting a parameter of a sound processing device receives an audio signal, obtains information of the parameter of the sound processing device, which corresponds to the received audio signal, by using a trained model obtained by performing training of a relationship among a training output sound of the sound processing device, a training input sound of the sound processing device, and a parameter of sound processing performed by the sound processing device, the parameter of the sound processing device being receivable by a user of the sound processing device, and outputs obtained information of the parameter of the sound processing device corresponding to the received audio signal.
G10H 3/18 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument, using mechanically actuated vibrators with pick-up means using strings, e.g. electric guitars
G10H 1/16 - Circuits for establishing the harmonic content of tones by non-linear elements
73.
PROCESSING METHOD OF CONFERENCE SYSTEM, AND CONFERENCE SYSTEM
A processing method is provided for a conference system that includes a microphone, a camera, and a first object. The processing method obtains image data of an image taken by the camera, the image data including a plurality of objects, detects kinds and positions of the plurality of objects included in the obtained image data, identifies, based on the detected kinds of the plurality of objects, the first object and one or more second objects different from the first object from among the plurality of objects included in the obtained image data, calculates (i) a position of the first object and (ii) positions of the one or more second objects relative to the first object, selects a second object, from among the one or more second objects, whose position relative to the first object satisfies a specified condition, generates focused image data that focuses on the selected second object, and generates output data based on the generated focused image data, which focuses on the selected second object, and audio data picked up by the microphone.
The audio signal processing method in accordance with one embodiment receives an audio signal, obtains a first image, estimates room information based on the obtained first image, sets an acoustic parameter according to the estimated room information, applies sound processing to the audio signal according to the set acoustic parameter, and outputs the audio signal subjected to the sound processing.
An information processing method obtains first position information that indicates a position of at least one of a ceiling surface, a wall surface, or a floor surface in a predetermined space, obtains second position information that indicates a position of an acoustic device that outputs a sound beam in the predetermined space, and obtains direction information that indicates a direction of the sound beam to be outputted from the acoustic device, calculates a locus of the sound beam to be outputted from the acoustic device, based on the first position information, the second position information, and the direction information that have been obtained, and generates a sound beam image that shows the locus of the sound beam, based on a result of calculation.
H04R 1/38 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means in which sound waves act upon both sides of a diaphragm and incorporating acoustic phase-shifting means, e.g. pressure-gradient microphone
H04R 1/34 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by using a single transducer with sound reflecting, diffracting, directing or guiding means
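The locus calculation in the entry above reduces, in the simplest case, to straight-line propagation from the device position along the beam direction; the 2-D geometry and single-wall model below are illustrative assumptions, not the patented computation.

```python
def beam_hit(device_pos, direction, wall_x):
    """Point where a straight beam from device_pos, travelling along
    direction, crosses a vertical wall at x = wall_x."""
    (x0, y0), (dx, dy) = device_pos, direction
    t = (wall_x - x0) / dx      # parameter along the beam locus
    return (wall_x, y0 + t * dy)

# Device at (0, 1) aims slightly upward; beam hits the wall at x = 4.
hit = beam_hit((0.0, 1.0), (1.0, 0.5), 4.0)
```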
76.
MUSICAL ELEMENT GENERATION SUPPORT DEVICE, MUSICAL ELEMENT LEARNING DEVICE, MUSICAL ELEMENT GENERATION SUPPORT METHOD, MUSICAL ELEMENT LEARNING METHOD, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING MUSICAL ELEMENT GENERATION SUPPORT PROGRAM, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING MUSICAL ELEMENT LEARNING PROGRAM
A musical element generation support device includes at least one processor configured to receive a musical element sequence including a plurality of musical elements and a blank portion that are arranged in a time series, and generate, by using a learning model, at least one suitable musical element for the blank portion based on a part of the musical elements that is positioned after the blank portion on a time axis in the musical element sequence. The learning model is configured to generate, from a musical element of one part, a musical element of another part.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
77.
AUDIO DEVICE, METHOD OF CONTROLLING AUDIO DEVICE, AND SOUND PROCESSING SYSTEM
A method of controlling an audio device identifies an external user of the audio device by communicating with an information processing apparatus, receives, by the audio device, setting information of the audio device corresponding to the identified external user, and controls the audio device based on the received setting information of the audio device.
A sound processing system includes an electronic musical instrument and a sound processing apparatus communicable with the instrument. The instrument includes an audio signal generator that generates an audio signal according to a user performance on the electronic musical instrument, a first signal processor that performs first effect processing on the audio signal to generate a first processed audio signal, a first sound emitter that emits a first performance sound component based on at least one of the first processed audio signal or a second processed audio signal, and a first audio signal output that outputs the audio signal. The apparatus includes a first audio signal receiver that receives the audio signal output from the first audio signal output, in a state where the instrument is communicating with the apparatus, a second signal processor that performs second effect processing on the received audio signal, including removing a direct sound component from the received audio signal, to generate the second processed audio signal, and a second sound emitter that emits a second performance sound component based on the second processed audio signal. In the state where the instrument is communicating with the apparatus, the first signal processor changes the amount of the first effect processing applied to the generated audio signal in generating the first processed audio signal that is emitted by the first sound emitter, in a state where the audio signal is not output to the apparatus, or the first sound emitter emits the second performance sound component based on the second processed audio signal.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
79.
SOUND PROCESSING METHOD, SOUND PROCESSING SYSTEM, ELECTRONIC MUSICAL INSTRUMENT, AND RECORDING MEDIUM
A computer-implemented sound processing method includes: outputting singing sound data based on a sound signal representing singing sound; and outputting sound data representing musical instrument sound that correlates with musical elements of the singing sound, by inputting input data that includes the singing sound data to a trained model that has learned, by machine learning, a relationship between singing sound for training and musical instrument sound for training.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
A signal processing device includes an electronic controller including at least one processor. The electronic controller is configured to function as a reception unit, a generation unit, and a processing unit. The reception unit is configured to receive first time-series data that include sound data, and second time-series data that are generated based on the first time-series data and that include at least data indicating a timing of a human action. The generation unit is configured to generate, based on the second time-series data, third time-series data notifying of the timing of the human action. The processing unit is configured to synchronize and output an output signal based on the first time-series data and an output signal based on the third time-series data, such that the timing of the human action for the first time-series data and the timing of the human action for the third time-series data match.
G10G 7/00 - Other auxiliary devices or accessories, e.g. conductors' batons or separate holders for resin or strings
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
An input device includes a touch bar having an operation area for receiving a contact operation performed by a user, a switcher that switches between a first mode (continuous mode) in which a continuously changing value is receivable in response to an operation performed on the operation area and a second mode (grid mode) different from the first mode, and a section presenter that presents section information representing a plurality of sections of the operation area during the second mode.
G10H 1/34 - Switch arrangements, e.g. keyboards or mechanical switches specially adapted for electrophonic musical instruments
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
83.
SIGNAL GENERATION DEVICE, ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC KEYBOARD DEVICE, ELECTRONIC APPARATUS, AND SIGNAL GENERATION METHOD
According to an embodiment, a signal generation device includes a memory storing a program and a processor communicatively connected to the memory and executing the program to function as a signal generating unit and a sound generation control unit. The signal generating unit generates a sound signal in response to operation of a plurality of operators. The plurality of operators includes a first operator and a second operator. The sound generation control unit controls a sound generation form of a second sound signal generated in response to a second operation of the second operator following a first operation, based on a duration of the first operation of the first operator.
G10H 1/053 - Means for controlling the tone frequencies, e.g. attack or decay; Means for producing special musical effects, e.g. vibratos or glissandos by additional modulation during execution only
A keyboard apparatus according to an embodiment includes a keyboard and a mutual capacitance proximity sensor. The keyboard includes a first key and a second key arranged in an array direction with respect to the first key. The proximity sensor includes a first electrode having a portion extending from at least a first area below the first key to a second area below the second key, a second electrode arranged in the first area, and a third electrode arranged in the second area. The proximity sensor is configured to use a change in capacitance between the first electrode and the second electrode and a change in capacitance between the first electrode and the third electrode.
A sound processing method includes: receiving, using the communication device, from the remote apparatus, a first sound signal representing first sound generated by a user of the remote apparatus; emitting, using the sound emitting apparatus, the first sound represented by the first sound signal; receiving, using the sound receiving apparatus, sound that includes second sound generated by a user of the sound processing system; generating a second sound signal by sound processing, using processing parameters, a reception sound signal generated by the sound receiving apparatus; transmitting, using the communication device, the second sound signal to the remote apparatus; updating the processing parameters based on the first sound signal or the reception sound signal; and stopping the updating of the processing parameters in a state where musical sound is included in at least one of the first sound or the second sound.
H04R 3/02 - Circuits for transducers for preventing acoustic reaction
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
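The stopping condition in the method above can be pictured as a gate on the parameter-update step: adaptation runs continuously and is frozen whenever musical sound is detected. The LMS-style update rule and the detector flag below are assumptions for illustration, not details from the patent:

```python
# Hedged sketch of the update gating: one adaptation step of the processing
# parameters, skipped entirely while musical sound is present.

def update_parameters(params, error_signal, rate, music_detected):
    """Return the updated processing parameters; frozen when music is detected."""
    if music_detected:
        return list(params)              # stop updating the parameters
    # illustrative gradient-style correction per parameter
    return [p - rate * e for p, e in zip(params, error_signal)]
```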
A percussion instrument motion reproduction device includes a percussion instrument having a vibration portion, and a first driving portion configured to visibly displace the vibration portion in response to an electrical signal.
G10H 3/14 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means
A class-D amplifier that amplifies an input signal comprises a control circuit configured to generate a control signal that varies in accordance with a level of the input signal, a first generating circuit configured to generate a first pulse, and a second generating circuit configured to generate a second pulse. A pulse width of the first pulse becomes narrower as the signal level of the input signal becomes smaller, and the pulse width of the first pulse becomes wider as an instantaneous magnitude of the input signal becomes larger. A pulse width of the second pulse becomes narrower as the signal level of the input signal becomes smaller, and the pulse width of the second pulse becomes wider as an instantaneous magnitude of the input signal becomes smaller.
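The two width relationships above can be pictured with a toy mapping from the overall signal level and the instantaneous sample value to a pair of pulse widths. The abstract speaks of instantaneous magnitude; this sketch uses the signed instantaneous value so that one pulse widens while the other narrows, which is one plausible reading, and the scaling constants are arbitrary:

```python
def pulse_widths(level, x, base_scale=0.5, swing=0.25):
    """Toy model of the stated relationships (illustrative scaling only).
    level: overall signal level (0..1); x: instantaneous value (-1..1).
    Returns (first_width, second_width) as fractions of the PWM period."""
    base = base_scale * level           # both pulses narrow at low level
    first = base + swing * level * x    # first pulse widens as x grows
    second = base - swing * level * x   # second pulse widens as x shrinks
    return max(first, 0.0), max(second, 0.0)
```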
A sound synthesizing method according to one aspect of the present disclosure is a computer-implemented method including: receiving musical score data and acoustic data via a user interface; and generating, based on a respective one of the musical score data and the acoustic data, acoustic features of a sound waveform having a desired timbre.
G10H 7/00 - Instruments in which the tones are synthesised from a data store, e.g. computer organs
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
G10L 13/033 - Voice editing, e.g. manipulating the voice of the synthesiser
G10L 13/04 - Methods for producing synthetic speech; Speech synthesisers - Details of speech synthesis systems, e.g. synthesiser structure or memory management
G10L 13/06 - Elementary speech units used in speech synthesisers; Concatenation rules
89.
IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS
An image processing method obtains a first input image from a camera, generates a background image based on the first input image, determines whether the first input image includes a specific object, determines whether the specific object satisfies a predetermined position condition in a case where the first input image includes the specific object, and replaces the specific object that satisfies the predetermined position condition with the background image.
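The flow above is conditional pixel substitution: detect a specific object, check its position, and splice in the background only when both conditions hold. A schematic sketch using nested lists as stand-in images, with detection and the position condition reduced to precomputed inputs (the mask and flag are illustrative assumptions, not patent details):

```python
# Schematic sketch of the described replacement step.

def replace_object(image, background, mask, position_ok):
    """Replace masked pixels with the background image only if a specific
    object was detected (non-empty mask) and the position condition holds."""
    if not any(any(row) for row in mask):      # no specific object detected
        return image
    if not position_ok:                        # position condition not met
        return image
    return [
        [background[y][x] if mask[y][x] else image[y][x]
         for x in range(len(image[0]))]
        for y in range(len(image))
    ]
```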
A percussion instrument driving device includes an actuator and an actuator mount. The actuator includes an actuator surface configured to vibrate in response to an electrical signal to vibrate a vibration surface of a percussion instrument from outside of the percussion instrument. The actuator mount is disposed outside the percussion instrument having a vibration surface. The actuator mount is configured to mount the actuator to the percussion instrument from outside of the percussion instrument, with the actuator surface facing the vibration surface and spaced from the vibration surface.
G10H 3/14 - Instruments in which the tones are generated by electromechanical means using mechanical resonant generators, e.g. strings or percussion instruments, the tones of which are picked up by electromechanical transducers, the electrical signals being further manipulated or amplified and subsequently converted to sound by a loudspeaker or equivalent instrument using mechanically actuated vibrators with pick-up means
G10D 13/10 - STRINGED MUSICAL INSTRUMENTS; WIND MUSICAL INSTRUMENTS; ACCORDIONS OR CONCERTINAS; PERCUSSION MUSICAL INSTRUMENTS; AEOLIAN HARPS; SINGING-FLAME MUSICAL INSTRUMENTS; MUSICAL INSTRUMENTS NOT OTHERWISE PROVIDED FOR - Details or accessories therefor - Details of, or accessories for, percussion musical instruments
A keyboard device includes a plurality of keys, and a keyboard driver configured to drive at least a part of the plurality of keys. The keyboard device is configured such that, in accordance with a determination of whether a key corresponding to a pitch of event data is drivable by the keyboard driver, sound is generated based on a first sound-generating process in which the keyboard driver is configured to drive the key corresponding to the pitch or a second sound-generating process that is different from the first sound-generating process.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
92.
INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM
First time-series data is edited according to a first user instruction, and second time-series data representing a series of features and generated based on the edited first time-series data is edited according to a second user instruction. In response to editing of the first time-series data, the edited first time-series data is saved as a new version in first history data. In response to editing of the second time-series data, the edited second time-series data is saved as a new version in second history data. A first version number and a second version number are designated according to a third user instruction. Third time-series data representing content corresponding to the first time-series data is then generated by using the version of the first time-series data designated by the first version number in the first history data, and the version of the second time-series data designated by the second version number in the second history data.
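The versioning scheme above amounts to two append-only histories plus regeneration from any designated pair of version numbers. A hypothetical sketch of that data structure (the sample note sequence and feature strings are invented for illustration):

```python
# Hypothetical sketch of the versioned-editing flow: each edit appends a new
# version, and output is regenerated from a designated pair of versions.

class History:
    def __init__(self, initial):
        self.versions = [initial]
    def edit(self, data):
        self.versions.append(data)       # save edited data as a new version
        return len(self.versions) - 1    # its version number
    def get(self, number):
        return self.versions[number]

first = History([60, 62, 64])            # first time-series data (e.g. notes)
second = History(["f0:low"])             # second time-series data (features)

v1 = first.edit([60, 62, 65])            # first user instruction
v2 = second.edit(["f0:high"])            # second user instruction

# third user instruction designates the version numbers for regeneration
third = (first.get(v1), second.get(v2))
```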
A keyboard unit includes a frame; a first member connected to the frame, a rigidity of the first member in a first direction being lower than a rigidity of the first member in a second direction intersecting with the first direction; and a mounted member. The mounted member includes a first pressing portion configured to suppress a movement of the first member in the first direction by pressing a part of the first member in the second direction, and a positioning portion configured to suppress a movement of the mounted member with respect to the frame in the first direction.
A sound signal processing device supplies an output sound signal to an amplification device configured to supply a first sound signal to a high-frequency speaker and a low-frequency speaker. The sound signal processing device includes a high-pass filter, an amplitude limitation circuit, a low-pass filter, and a synthesis circuit. The high-pass filter removes a low-frequency component from an input sound signal to generate a high-frequency sound signal. The amplitude limitation circuit limits an amplitude of the input sound signal at or below a reference value to generate a second sound signal. The reference value corresponds to a clipping voltage of the amplification device. The low-pass filter removes a high-frequency component from the second sound signal to generate a low-frequency sound signal. The synthesis circuit synthesizes the high-frequency sound signal and the low-frequency sound signal to generate the output sound signal.
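The signal path above is a crossover with a limiter on the low branch only: the high band passes unmodified, while the band sent to the low-frequency path is first clamped to the amplifier's clipping level. A minimal sketch assuming simple first-order IIR filters and a hard clamp (the filter forms and coefficients are illustrative, not from the patent):

```python
# Minimal sketch of the described signal path.

def high_pass(signal, alpha=0.9):
    """First-order high-pass: removes the low-frequency component."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in signal:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def low_pass(signal, alpha=0.1):
    """First-order low-pass: removes the high-frequency component."""
    out, prev_y = [], 0.0
    for x in signal:
        prev_y = prev_y + alpha * (x - prev_y)
        out.append(prev_y)
    return out

def limit(signal, reference):
    """Clamp the amplitude at or below the reference value
    (corresponding to the amplifier's clipping voltage)."""
    return [max(-reference, min(reference, x)) for x in signal]

def process(input_signal, reference=1.0):
    high = high_pass(input_signal)              # high-frequency sound signal
    second = limit(input_signal, reference)     # amplitude-limited second signal
    low = low_pass(second)                      # low-frequency sound signal
    return [h + l for h, l in zip(high, low)]   # synthesized output signal
```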
A signal processing system is a system in which a plurality of devices including at least a first terminal device and a second terminal device that receive streaming data are connected to a communication system capable of communicating with the plurality of devices. The signal processing system includes: a receiving unit that receives a designation of first sound data from the first terminal device that received the streaming data and a designation of second sound data from the second terminal device that received the streaming data; and a signal processing unit that obtains first sound data corresponding to the received designation of first sound data and second sound data corresponding to the received designation of second sound data, and generates a third sound signal in which a first sound signal corresponding to the first sound data and a second sound signal corresponding to the second sound data are mixed.
A first terminal transmits first event data instructing generation of a first sound to a server. A second terminal transmits second event data instructing generation of a second sound to the server. The server transmits data including the first event data and the second event data to the first terminal. The first terminal controls generation of the first sound and the second sound, based on the data including the first event data and the second event data.
A sound processing apparatus includes sound collection circuitry that collects a sound and generates a first sound signal, and processing circuitry that estimates noise, controls a gain of the first sound signal based on the estimated noise to output a second sound signal, and performs filter processing to reduce a component of a predetermined frequency band of the second sound signal based at least in part on the estimated noise.
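The processing chain above has three stages: estimate noise, scale the gain from that estimate, then filter a band. A rough sketch with deliberately crude stand-ins for each stage (the mean-absolute noise estimate, the gain law, and the moving-average filter are all assumptions for illustration, not the patented processing):

```python
# Illustrative sketch of the three-stage chain, not the patented implementation.

def estimate_noise(frame):
    """Crude noise estimate: mean absolute level of the frame."""
    return sum(abs(x) for x in frame) / len(frame)

def apply_gain(frame, noise, max_gain=1.0):
    """Reduce the gain as the estimated noise grows (assumed gain law)."""
    gain = max_gain / (1.0 + noise)
    return [gain * x for x in frame]

def reduce_band(frame, width=3):
    """Moving-average smoothing as a stand-in for the band-reducing filter
    (attenuates the high-frequency band of the second sound signal)."""
    out = []
    for i in range(len(frame)):
        window = frame[max(0, i - width + 1): i + 1]
        out.append(sum(window) / len(window))
    return out
```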
A computer-implemented information processing method includes: determining, based on musical instrument information indicative of a musical instrument, a target part of a body of a first player, the first player playing the musical instrument indicated by the musical instrument information; and acquiring image information indicative of imagery of the determined target part.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
99.
Information Processing Method, Information Processing System, and Recording Medium
A computer-implemented information processing method includes receiving image information indicative of imagery of a first player using a musical instrument, and determining comment information indicative of a comment for the first player based on the image information.
A recording apparatus receives image data and time information that are obtained by capturing a conference from a start time of the conference, sets a specific object from the image data as a target image, detects an event during the conference, and records the detection timing of the event, time information from before the detection timing, and the image data in association with the time information.