SiGe photodiode for crosstalk reduction. In one embodiment, an image sensor includes a plurality of pixels arranged in rows and columns of a pixel array disposed in a semiconductor material. Each pixel includes a plurality of photodiodes. The plurality of pixels are configured to receive incoming light through an illuminated surface of the semiconductor material. Each pixel includes a first photodiode comprising a silicon (Si) material, and a second photodiode having the Si material and a silicon germanium (SiGe) material.
COLUMN ARITHMETIC LOGIC UNIT DESIGN FOR DUAL CONVERSION GAIN SENSOR SUPPORTING CORRELATED MULTIPLE SAMPLING AND THREE READOUT PHASE DETECTION AUTOFOCUS
An arithmetic logic unit (ALU) includes a front end latch stage coupled to a signal latch stage coupled to a Gray code (GC) to binary stage. First inputs of an adder stage are coupled to receive outputs of the GC to binary stage. Outputs of the adder stage are generated in response to the first inputs and second inputs of the adder stage. A pre-latch stage is coupled to latch outputs of the adder stage. A feedback latch stage is coupled to latch outputs of the pre-latch stage. The second inputs of the adder stage are coupled to receive outputs of the feedback latch stage. The feedback latch stage includes first conversion gain feedback latches configured to latch outputs of the pre-latch stage having a first conversion gain and second conversion gain feedback latches configured to latch outputs of the pre-latch stage having a second conversion gain.
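The GC-to-binary stage described above implements a standard transformation from a Gray-coded counter value to its binary equivalent. A minimal software sketch of that conversion (the function name and 10-bit `width` are illustrative, not taken from the patent):

```python
def gray_to_binary(gray: int, width: int = 10) -> int:
    """Convert a Gray-code value to binary with a cascaded-XOR sweep,
    mirroring the role of the GC-to-binary stage in the abstract."""
    binary = gray
    shift = 1
    while shift < width:
        # Each pass folds higher-order bits into lower ones, computing
        # the running prefix XOR that undoes Gray encoding.
        binary ^= binary >> shift
        shift <<= 1
    return binary
```

Since Gray encoding of `b` is `b ^ (b >> 1)`, the conversion can be verified by a round trip over all 10-bit codes.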
A global shutter image sensor with improved pixel failure coverage detects failures in the pixel chip of the image sensor. The global shutter image sensor includes a pixel chip including an array of photodiodes and associated logic, and a logic chip, bonded to the pixel chip, including an array of logic blocks for processing the images detected by the photodiodes. A failure detection circuit coupled to a reference voltage node of the image sensor detects a failure in the pixel chip by capturing a first level of pixel bias current and a second level of pixel bias current, wherein a difference between the first level and the second level drives an output of the failure detection circuit either as logic high or as logic low.
H04N 5/367 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response applied to defects, e.g. non-responsive pixels
H04N 5/378 - Readout circuits, e.g. correlated double sampling [CDS] circuits, output amplifiers or A/D converters
4.
FIXED PATTERN NOISE REDUCTION IN IMAGE SENSORS OPERATED WITH PULSED ILLUMINATION
Fixed pattern noise (FPN) reduction techniques in image sensors operated with pulsed illumination are disclosed herein. In one embodiment, a method includes, during a first sub-exposure period of a frame, (a) operating a first tap of a pixel to capture a first signal corresponding to first charge at a first floating diffusion, the first charge corresponding to first light incident on a photosensor, and (b) operating a second tap of the pixel to capture a first parasitic signal corresponding to FPN at a second floating diffusion. The method further includes, during a second sub-exposure period of the frame, (a) operating the second tap to capture a second signal corresponding to second charge at the second floating diffusion, the second charge corresponding to second light incident on the photosensor, and (b) operating the first tap to capture a second parasitic signal corresponding to FPN at the first floating diffusion.
H04N 25/67 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
H04N 25/616 - Noise processing, e.g. detecting, correcting, reducing or removing noise involving a correlated sampling function, e.g. correlated double sampling [CDS] or triple sampling
H04N 25/706 - Pixels for exposure or ambient light measuring
H04N 25/709 - Circuitry for control of the power supply
H04N 25/76 - Addressed sensors, e.g. MOS or CMOS sensors
H04N 25/78 - Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
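The tap-swapping scheme in the entry above pairs each tap's light sample with a parasitic (FPN-only) sample from the other sub-exposure. Under a simplified additive-offset model (an illustrative assumption; all names are hypothetical), the correction reduces to a per-tap subtraction:

```python
def two_tap_fpn_correct(tap1_sub1, tap2_sub1, tap1_sub2, tap2_sub2):
    """Recover the light-induced signals from a two-tap pixel, assuming each
    tap's parasitic sample captures only that tap's fixed-pattern offset."""
    signal_sub1 = tap1_sub1 - tap1_sub2  # tap 1: (signal + FPN) minus FPN-only
    signal_sub2 = tap2_sub2 - tap2_sub1  # tap 2: (signal + FPN) minus FPN-only
    return signal_sub1, signal_sub2
```

For example, with tap offsets of 3 and 5 counts and light signals of 10 and 12 counts, the raw samples (13, 5, 3, 17) recover (10, 12).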
5.
FIXED PATTERN NOISE REDUCTION IN IMAGE SENSORS OPERATED WITH PULSED ILLUMINATION
Fixed pattern noise (FPN) reduction techniques in image sensors operated with pulsed illumination are disclosed herein. In one embodiment, a method includes, during a first sub-exposure period of a frame, (a) operating a first tap of a pixel to capture a first signal corresponding to first charge at a first floating diffusion, the first charge corresponding to first light incident on a photosensor, and (b) operating a second tap of the pixel to capture a first parasitic signal corresponding to FPN at a second floating diffusion. The method further includes, during a second sub-exposure period of the frame, (a) operating the second tap to capture a second signal corresponding to second charge at the second floating diffusion, the second charge corresponding to second light incident on the photosensor, and (b) operating the first tap to capture a second parasitic signal corresponding to FPN at the first floating diffusion.
H04N 25/671 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
H04N 25/616 - Noise processing, e.g. detecting, correcting, reducing or removing noise involving a correlated sampling function, e.g. correlated double sampling [CDS] or triple sampling
Electrical Phase Detection Auto Focus. In one embodiment, an image sensor includes a plurality of pixels arranged in rows and columns of a pixel array disposed in a semiconductor material. Each pixel includes a plurality of photodiodes configured to receive incoming light through an illuminated surface of the semiconductor material. The plurality of pixels includes at least one phase detection autofocus (PDAF) pixel having: a first subpixel without a light shielding, and a second subpixel without the light shielding. Autofocusing of the image sensor is at least in part determined based on different electrical outputs of the first subpixel and the second subpixel.
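Autofocus from the differing electrical outputs of the two subpixels is commonly derived from a disparity estimate between their line profiles. A hedged sketch using a sum-of-absolute-differences search (a standard PDAF metric, not necessarily the patent's exact method; names are illustrative):

```python
def phase_disparity(left, right, max_shift=4):
    """Estimate the shift between left- and right-subpixel line profiles by
    minimizing the sum of absolute differences over candidate shifts."""
    best_shift, best_sad = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        # Only compare indices where both profiles are defined at shift s.
        sad = sum(abs(left[i] - right[i + s])
                  for i in range(max(0, -s), min(n, n - s)))
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift
```

A disparity of zero indicates the scene is in focus; a nonzero disparity gives the direction and rough magnitude of the lens adjustment.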
A pixel cell for an image sensor including a first semiconductor substrate, a photodiode, and a transfer gate is described. The first semiconductor substrate includes a first side and a second side. The first side is opposite the second side. The photodiode is disposed within the first semiconductor substrate between the first and the second side. The transfer gate is disposed proximate to the first side of the first semiconductor substrate. The transfer gate includes a planar region. The first side of the semiconductor substrate is disposed between the planar region and the photodiode. A lateral area of the photodiode is less than or equal to a lateral area of the planar region of the transfer gate.
An image sensor comprising a semiconductor substrate and pixel cell circuitry is described. The semiconductor substrate includes a first side and a second side opposite the first side. The pixel cell circuitry is disposed proximate to the first side of the semiconductor substrate. The pixel cell circuitry includes an arrangement of individual groups of components, each including a reset gate, a source-follower gate, and a row select gate. The individual groups of components included in the pixel cell circuitry include a first group and a second group adjacent to the first group, wherein the source-follower gate of the first group is disposed adjacent to the source-follower gate of the second group.
A method of detecting an object in an image includes (i) processing, with a machine-learned model, pixel intensities of a pixel pair in a first region of the image, to determine a first confidence score representing a likelihood of the object being present within the first region, and (ii) determining, based on the first confidence score, presence of the object in the first region.
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/24 - Aligning, centring, orientation detection or correction of the image
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
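The two-step flow in the entry above (score a region with a machine-learned model, then decide from the confidence) can be sketched as follows; `model` is a stand-in for the machine-learned model and the threshold value is illustrative:

```python
def detect_in_region(model, pixel_pair_intensities, threshold=0.5):
    """Score a candidate region from its pixel-pair intensities and report
    whether the object is deemed present (confidence >= threshold)."""
    confidence = model(pixel_pair_intensities)  # assumed to return a score in [0, 1]
    return confidence >= threshold, confidence

# Hypothetical stand-in model for demonstration: mean intensity as confidence.
mean_model = lambda pairs: sum(pairs) / len(pairs)
```

In practice the model would be trained on labeled regions; the mean-intensity stand-in only exercises the control flow.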
10.
LOW POWER EVENT DRIVEN PIXELS WITH PASSIVE, DIFFERENTIAL DIFFERENCE DETECTION CIRCUITRY, AND RESET CONTROL CIRCUITS FOR THE SAME
Low power event driven pixels with passive, differential difference detection circuitry (and reset control circuits for the same) are disclosed herein. In one embodiment, an event driven pixel comprises a photosensor, a photocurrent-to-voltage converter, and a difference circuit. The difference circuit includes (a) a first circuit branch configured to sample a reference light level based on a voltage output by the photocurrent-to-voltage converter, and to output a first analog light level onto a first column line that is based on the reference light level; and (b) a second circuit branch configured to sample a light level based on the voltage, and to output a second analog light level onto a second column line that is based on the light level. A difference between the second analog light level and the first analog light level indicates whether the event driven pixel has detected an event in an external scene.
H04N 5/3745 - Addressed sensors, e.g. MOS or CMOS sensors having additional components embedded within a pixel or connected to a group of pixels within a sensor matrix, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
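Per the entry above, the difference between the two analog light levels on the column lines indicates whether an event occurred. A minimal sketch of that comparison, assuming a symmetric threshold and a signed event polarity (both illustrative choices):

```python
def detect_event(ref_level: float, cur_level: float, threshold: float) -> int:
    """Compare the reference light level sampled at the last reset with the
    current light level; return +1 (increase event), -1 (decrease event),
    or 0 (no event)."""
    difference = cur_level - ref_level
    if difference > threshold:
        return 1
    if difference < -threshold:
        return -1
    return 0
```

A detected event would typically also trigger resampling of the reference level, as the reset control circuits in these entries describe.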
11.
LOW POWER EVENT DRIVEN PIXELS WITH ACTIVE DIFFERENCE DETECTION CIRCUITRY, AND RESET CONTROL CIRCUITS FOR THE SAME
Low power event driven pixels and reset control circuits for the same are disclosed herein. In one embodiment, an event driven pixel comprises a photosensor; a photocurrent-to-voltage converter coupled to the photosensor; and a difference circuit coupled to the photocurrent-to-voltage converter. The difference circuit includes a source follower transistor and is configured to generate a signal at a gate of the source follower transistor that is based on a voltage output from the photocurrent-to-voltage converter. The difference circuit is further configured to output a difference signal in response to assertion of a row select signal. The event driven pixel can further include a reset control circuit coupled to the difference circuit and configured to initialize the difference circuit, and to reset the difference circuit when the difference signal output from the event driven pixel indicates a change in the voltage greater than a threshold amount.
H04N 5/3745 - Addressed sensors, e.g. MOS or CMOS sensors having additional components embedded within a pixel or connected to a group of pixels within a sensor matrix, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
H03F 3/16 - Amplifiers with only discharge tubes or only semiconductor devices as amplifying elements with semiconductor devices only with field-effect devices
H04N 5/343 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels having been sampled or to be sampled by switching between different modes of operation using different resolutions or aspect ratios, e.g. between still and video mode or between interlaced and non-interlaced mode
A pixel circuit includes a first photodiode and a second photodiode. The first and second photodiodes photogenerate charge in response to incident light. A first transfer transistor is coupled to the first photodiode. A first floating diffusion is coupled to the first transfer transistor. A second transfer transistor is coupled to the second photodiode. A second floating diffusion is coupled to the second transfer transistor. A dual floating diffusion transistor is coupled between the first and second floating diffusions. An overflow transistor is coupled to the second photodiode. A capacitor is coupled between a voltage source and the overflow transistor. A capacitor readout transistor is coupled between the capacitor and the second floating diffusion. An anti-blooming transistor is coupled between the first photodiode and a power line.
H04N 5/359 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
H04N 5/378 - Readout circuits, e.g. correlated double sampling [CDS] circuits, output amplifiers or A/D converters
H04N 5/347 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels having been sampled or to be sampled by combining or binning pixels in SSIS
A pixel circuit includes a photodiode configured to photogenerate image charge in response to incident light. A floating diffusion is coupled to receive the image charge from the photodiode. A transfer transistor is coupled between the photodiode and the floating diffusion to transfer the image charge from the photodiode to the floating diffusion. A reset transistor is coupled between a bias voltage source and the floating diffusion. The reset transistor is configured to be switched in response to a reset control signal. The pixel circuit also includes a lateral overflow integration capacitor (LOFIC) having an insulating region disposed between a first metal electrode and a second metal electrode. The first metal electrode is coupled to a bias voltage source. The second metal electrode is coupled to the reset transistor and selectively coupled to the floating diffusion.
Pixel designs with reduced LOFIC reset and settling times are disclosed herein. In one embodiment, a pixel cell includes a photosensor configured to photogenerate image charge in response to incident light, a floating diffusion to receive the image charge from the photosensor, a transfer transistor coupled between the floating diffusion and the photosensor to transfer the image charge to the floating diffusion, and a first reset transistor coupled between the floating diffusion and a voltage supply. The pixel cell further includes a capacitor having two ends, and a second reset transistor. A first end of the capacitor is coupled to the floating diffusion. The second reset transistor is coupled between a second end of the capacitor and the voltage supply.
A compact camera includes an image sensor, a transparent layer, and a microlens (ML) layer, between the image sensor and the transparent layer. The ML layer forms (a) a first ML array having a plurality of first MLs, and (b) a second ML array with a plurality of second MLs interleaved with the plurality of first MLs. The compact camera also includes a baffle layer, between the ML layer and the image sensor, that forms a plurality of first aperture stops each aligned with a different one of the first MLs and a plurality of second aperture stops each aligned with a different one of the second MLs. The first MLs each have a first set of optical characteristics and the second MLs each have a second set of optical characteristics that are different from the first set of optical characteristics.
THIN, MULTI-LENS, OPTICAL FINGERPRINT SENSOR ADAPTED TO IMAGE THROUGH CELL PHONE DISPLAYS AND WITH MULTIPLE PHOTODIODE GROUPS EACH HAVING SEPARATE FIELDS OF VIEW FOR EACH MICROLENS
An image sensor for imaging fingerprints has multiple photodiode groups, each with a field of view through a microlens determined by optical characteristics of the microlens and the locations of the microlens and the openings of upper and lower mask layers. Many photodiode groups have fields of view outwardly splayed from a center-direct field of view. A diameter of openings of the upper mask layer distant from the group having a center-direct field of view is larger than a diameter of openings of the photodiode group having a center-direct field of view. A method of matching illumination of a group of photodiodes with a center-direct field of view to illumination of photodiode groups having outwardly splayed fields of view includes sizing openings in the upper mask layer of photodiode groups with outwardly splayed fields of view larger than openings in the upper mask layer associated with photodiode groups having a center-direct field of view.
An imaging system includes a pixel array with pixel circuits, each including a photodiode, a floating diffusion, a source follower transistor, and a row select transistor. The imaging system further includes rolling clamp (RC) drivers, each coupled to a gate terminal of a row select transistor of one of the pixel circuits and each including first and second PMOS transistors coupled between a clamp voltage and the gate terminal of the row select transistor of the one of the pixel circuits, and first, second, and third NMOS transistors coupled between the clamp voltage and the gate terminal of the row select transistor of the one of the pixel circuits. The PMOS transistors and the NMOS transistors are coupled in parallel. The PMOS transistors are configured to provide an upper clamp voltage range, and the NMOS transistors are configured to provide a lower clamp voltage range.
Image sensors for Phase-Detection Auto Focus (PDAF) are provided. An image sensor includes a pixel including a plurality of photodiodes disposed in a semiconductor material according to an arrangement. The arrangement defines a first image subpixel comprising a plurality of first photodiodes, a second image subpixel comprising a plurality of second photodiodes, a third image subpixel including a plurality of third photodiodes, and a phase detection subpixel comprising a first photodiode, a second photodiode, or a third photodiode. The pixel can include a plurality of first micro-lenses disposed individually overlying at least a subset of the plurality of photodiodes of the first, second, and third image subpixels. The pixel can also include a second micro-lens disposed overlying the phase detection subpixel, a first micro-lens of the first micro-lenses having a first radius less than a second radius of the second micro-lens.
Quad photodiode microlens arrangements, and associated systems and methods. In one embodiment, a plurality of pixels are arranged in rows and columns of a pixel array disposed in a semiconductor material. The plurality of pixels includes green (G) pixels, red (R) pixels, blue (B) pixels and clear (C) pixels. Each pixel comprises a plurality of photodiodes that are configured to receive incoming light through an illuminated surface of the semiconductor material. A plurality of small microlenses are distributed over individual photodiodes of clear (C) pixels. A plurality of large microlenses are distributed over individual green (G) pixels. A diameter of the small microlenses is smaller than a diameter of the large microlenses.
Half Quad Photodiode (QPD) for improving QPD channel imbalance. In one embodiment, an image sensor includes a plurality of pixels arranged in rows and columns of a pixel array that is disposed in a semiconductor material. Each pixel includes a plurality of subpixels. Each subpixel comprises a plurality of first photodiodes, a plurality of second photodiodes and a plurality of third photodiodes. The plurality of pixels are configured to receive incoming light through an illuminated surface of the semiconductor material. A plurality of small microlenses are individually distributed over individual first photodiodes and individual second photodiodes of each subpixel. A plurality of large microlenses are each distributed over a plurality of third photodiodes of each subpixel. A diameter of the small microlenses is smaller than a diameter of the large microlenses.
A novel method for driving a digital display having a plurality of pixel rows includes receiving a frame of data to be written to the display in a predetermined frame time, dividing the frame time into a plurality of time intervals, defining a plurality of row groups, and defining a unique sequence for each row group to display corresponding multi-bit data words and a number of off states on the pixels of the rows of the row groups. The drive sequence of each row is offset with respect to an initial row by a unique predetermined number of off states. The predetermined numbers of off states ensure that the pixel updates of the groups do not overlap.
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 3/32 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix using controlled light sources using electroluminescent panels semiconductive, e.g. using light-emitting diodes [LED]
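The drive method in the entry above staggers each row group's sequence by a unique number of off states so that group updates never coincide. A sketch of one such scheduling rule (the one-idle-slot spacing is an illustrative choice, not the patent's exact sequence; all names are hypothetical):

```python
def schedule_row_groups(num_groups: int, word_slots: int, frame_slots: int):
    """Assign each row group the time slots in which it writes its multi-bit
    data word, offset by a unique number of off states per group."""
    stagger = word_slots + 1  # word length plus one idle slot between groups
    schedule = {}
    for g in range(num_groups):
        start = g * stagger  # unique off-state offset for this group
        if start + word_slots > frame_slots:
            raise ValueError("frame time too short for all row groups")
        schedule[g] = list(range(start, start + word_slots))
    return schedule
```

Because consecutive groups start more than a word length apart, the slot lists are pairwise disjoint, which is the non-overlap property the abstract requires.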
An image sensor for imaging fingerprints has multiple photodiode groups, each having a field of view determined by pinholes of upper and lower mask layers and a metal layer, each field of view being through a microlens. Many photodiode groups have fields of view outwardly splayed from a group having a center-direct field of view. Pinholes of the metal layer distant from the group having a center-direct field of view have a larger diameter than a pinhole of the photodiode group having a center-direct field of view. A method of matching illumination of a group of photodiodes with a center-direct field of view to illumination of photodiode groups having outwardly splayed fields of view includes sizing a pinhole in the metal layer of photodiode groups with outwardly splayed fields of view larger than a pinhole associated with photodiode groups having a center-direct field of view.
H01L 27/32 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including components using organic materials as the active part, or using a combination of organic materials with other materials as the active part with components specially adapted for light emission, e.g. flat-panel displays using organic light-emitting diodes
23.
FINGERPRINT SENSOR WITH WAFER-BONDED MICROLENS ARRAY
A fingerprint sensor has an array of microlenses formed on an upper surface of a transparent substrate, with a lower surface of the transparent substrate bonded to an upper surface of a fingerprint image sensor integrated circuit. In embodiments, the sensor includes one or two filter layers on the lower surface of the transparent substrate, and may also include masked black baffle layers on one or both of the upper and lower surfaces of the transparent substrate. The sensor is made by forming the microlenses and black baffle layers on the transparent substrate, aligning the transparent substrate to a wafer of fingerprint sensor integrated circuits, bonding the transparent substrate to the wafer, and then dicing the wafer into individual fingerprint sensors.
A ramp generator includes an operational amplifier having an output to generate a ramp signal. An integration current source is coupled to a first input and a reference voltage is coupled to a second input of the operational amplifier. A feedback capacitor and a reset switch are coupled between the first input and the output of the operational amplifier. The reset switch is turned on to reset the ramp generator. A ramp event is configured to be generated in the ramp signal at the output of the operational amplifier in response to the reset switch being turned off. An assist current source is coupled between the output of the operational amplifier and ground. The assist current source is configured to conduct an assist current from the output of the operational amplifier to ground in response to the reset switch being turned off.
H03K 4/501 - Generating pulses having essentially a finite slope or stepped portions having triangular shape having sawtooth shape using as active elements semiconductor devices in which a sawtooth voltage is produced across a capacitor the starting point of the flyback period being determined by the amplitude of the voltage across the capacitor, e.g. by a comparator
H04N 5/378 - Readout circuits, e.g. correlated double sampling [CDS] circuits, output amplifiers or A/D converters
H03K 4/56 - Generating pulses having essentially a finite slope or stepped portions having triangular shape having sawtooth shape using as active elements semiconductor devices in which a sawtooth voltage is produced across a capacitor using a semiconductor device with negative feedback through a capacitor, e.g. Miller integrator
25.
CALIBRATION CIRCUIT FOR RAMP SETTLING ASSIST CIRCUIT IN LOCAL RAMP BUFFER CIRCUIT
A ramp buffer circuit includes a ramp buffer input device having an input coupled to receive a ramp signal. A current monitor circuit is coupled to a power line and the ramp buffer input device to generate a current monitor signal in response to an input current conducted through the ramp buffer input device. A corner bias circuit is coupled to the current monitor circuit to generate an assist bias voltage in response to the current monitor signal. A bias current source is coupled to an output of the ramp buffer input device. An assist current source is coupled to the corner bias circuit and coupled between the output of the ramp buffer input device and ground to conduct an assist current from the output of the ramp buffer input device to ground in response to the assist bias voltage.
H04N 5/3745 - Addressed sensors, e.g. MOS or CMOS sensors having additional components embedded within a pixel or connected to a group of pixels within a sensor matrix, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
H03M 1/56 - Input signal compared with linear ramp
H03K 5/24 - Circuits having more than one input and one output for comparing pulses or pulse trains with each other according to input signal characteristics, e.g. slope, integral the characteristic being amplitude
H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
H04N 5/378 - Readout circuits, e.g. correlated double sampling [CDS] circuits, output amplifiers or A/D converters
A sample and hold readout system and method for ramp analog to digital conversion is presented in which an optical array is read out using a sample and hold circuit such that each sample is used to charge a sample and hold capacitor and is read out during a hold phase using an amplifier that drives a ramp analog to digital converter. The sample and hold circuit transitions to a tracking phase, wherein the optical array input drives an amplifier that drives the sample and hold capacitor, and then transitions to a sample phase, wherein the sample and hold capacitor is connected directly to the optical array output.
H04N 5/3745 - Addressed sensors, e.g. MOS or CMOS sensors having additional components embedded within a pixel or connected to a group of pixels within a sensor matrix, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
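The ramp conversion at the heart of the entry above digitizes each held sample by counting clock cycles until a linear ramp crosses the sample level. A behavioral sketch of that single-slope conversion (integer millivolts keep the model exact; the step size and 12-bit depth are illustrative):

```python
def ramp_adc(sample_mv: int, step_mv: int = 1, max_counts: int = 4096) -> int:
    """Single-slope A/D conversion: count clock cycles until the rising
    ramp reaches the held sample voltage."""
    ramp_mv = 0
    for count in range(max_counts):
        if ramp_mv >= sample_mv:
            return count  # digital code for this sample
        ramp_mv += step_mv
    return max_counts - 1  # ramp ended before crossing: clip to full scale
```

In hardware, the "count" is a Gray-code counter latched when a comparator fires; the loop above models the same crossing event in software.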
An imaging device includes a pixel array of pixel circuits arranged in rows and columns. Bitlines are coupled to the pixel circuits. Clamp circuits are coupled to the bitlines. Each of the clamp circuits includes a clamp short transistor coupled between a power line and a respective one of the bitlines. The clamp short transistor is configured to be switched in response to a clamp short enable signal. A first diode drop device is coupled to the power line. A clamp idle transistor is coupled to the first diode drop device such that the first diode drop device and the clamp idle transistor are coupled between the power line and the respective one of the bitlines. The clamp idle transistor is configured to be switched in response to a clamp idle enable signal.
A failure detection circuit for an image sensor includes a first input node, an array of second input nodes, and an output stage. The first input node is coupled to a reference voltage. The array of second input nodes has each input node coupled to receive a signal from a bitline of a bitline array in an image sensor that includes an array of pixels, with each pixel coupled to at least one bitline of the bitline array. The output stage is coupled to generate an output voltage indicative of any of the second input nodes being lower than the reference voltage.
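The output stage described above asserts when any bitline input falls below the reference, which behaves like a wired-OR of per-bitline comparators. A minimal behavioral model (a simplified view; names are illustrative):

```python
def failure_flag(reference_v: float, bitline_voltages) -> bool:
    """Model of the output stage: the flag asserts when any bitline signal
    is lower than the reference voltage."""
    return any(v < reference_v for v in bitline_voltages)
```

For example, a single sagging bitline among otherwise healthy ones is enough to drive the flag high.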
A polarization-sensitive, infrared-sensitive image sensor includes a plurality of pixels in a semiconductor substrate forming a pixel array, each pixel including: at least one microlens; at least one photodiode; and at least one light absorbing patch above a corresponding photodiode. Each light absorbing patch is oriented at a predetermined angle with respect to each other light absorbing patch, and absorbs a portion of incident light dependent on the polarization of the incoming light relative to the predetermined angle of the patch.
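The polarization-dependent absorption described above can be approximated by the familiar Malus-law response of an ideal linear absorber; the cos² profile below is an illustrative assumption, not the patent's measured absorption curve:

```python
import math

def transmitted_fraction(pol_angle_deg: float, patch_angle_deg: float) -> float:
    """Idealized Malus-law model of the absorbing patch: the transmitted
    fraction depends on the angle between the incoming light's polarization
    and the patch orientation."""
    theta = math.radians(pol_angle_deg - patch_angle_deg)
    return math.cos(theta) ** 2
```

Comparing photodiode responses under patches at different predetermined angles then lets the sensor infer the polarization state of the incident light.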
An image sensor has an array of a tiling pattern of cells, each cell having at least one spiral nanowire circular polarizer formed of nanowires less than 80 nanometers in width, and photodiodes to receive incoming light from the circular polarizer. In embodiments, the polarizer is a descending spiral circular polarizer including at least four nanowires, each about fifty nanometers wide, at successive levels in the polarizer. In other embodiments, the circular polarizer comprises a flat spiral nanowire about seventy nanometers in width, the flat spiral nanowire interrupted by cuts and disposed over multiple photodiodes to analyze a diffraction pattern from the polarizer.
Image sensors and devices for phase-detection auto focus processes are provided. A symmetric polarization filter includes a first polarizer defining a first plurality of apertures and a second polarizer, adjacent to the first polarizer, defining a second plurality of apertures. The first plurality of apertures can be mirror symmetrical with the second plurality of apertures about a lateral axis of the symmetric polarization filter between the first polarizer and the second polarizer. The lateral axis can be defined as an axis of symmetry of the symmetric polarization filter in plane with the first polarizer and the second polarizer.
Transistors, electronic devices, and methods are provided. Transistors include a gate trench formed in a semiconductor substrate and extending to a gate trench depth, and a source and a drain formed as doped regions in the semiconductor substrate and having a first conductive type. The source and the drain are formed along a channel length direction of the transistor at a first end and a second end of the gate trench, respectively, and the source and the drain each includes a first doped region and a second doped region extending away from the first doped region. The second doped region extends to a depth in the semiconductor substrate deeper than the first doped region relative to a surface of the semiconductor substrate.
A pixel cell is formed on a semiconductor substrate having a front surface. The pixel cell includes a photodiode, a floating diffusion region, and a transfer gate. The photodiode is disposed in the semiconductor substrate. The floating diffusion region includes a first doped region disposed in the semiconductor substrate, wherein the first doped region extends from the front surface to a first junction depth in the semiconductor substrate. The transfer gate is configured to selectively couple the photodiode to the floating diffusion region, controlling charge transfer between the photodiode and the floating diffusion region. The transfer gate includes a planar gate disposed on the front surface of the semiconductor substrate and a pair of vertical gate electrodes. Each vertical gate electrode extends a gate depth from the planar gate into the semiconductor substrate. The first junction depth is greater than the gate depth.
An under-display optical fingerprint sensor employing microlens arrays (MLAs) and an opaque aperture layer includes high aspect-ratio metal aperture structures for efficient angular signal filtering and stray light control. Instead of relying on one or more opaque aperture baffle layers, embodiments disclosed herein utilize an image sensor's inherent metal layers to filter signals originating from unwanted angular ranges and to block undesired stray light, achieving similar or better performance with a simplified process flow and lower cost. Layers from the sensor's inherent metal stack are deliberately brought into the sensing area to form the high aspect-ratio metal aperture structure. The metal layers in the sensing area may include apertures aligned to apertures in the opaque layer, and may also be grounded.
Image sensors, isolation structures, and techniques of fabrication are provided. An image sensor includes a source of electromagnetic radiation disposed on a substrate, a pixel array disposed on the substrate and thermally coupled with the source of electromagnetic radiation, and an isolation structure disposed on the substrate between the source of electromagnetic radiation and the pixel array. The isolation structure can define a first reflective surface oriented on a first bias relative to a lateral axis of the pixel array and a second reflective surface oriented on a second bias relative to the lateral axis. The isolation structure can be configured to attenuate residual electromagnetic radiation reaching a proximal region of the pixel array by pairing a first reflection and a second reflection of the electromagnetic radiation by the first reflective surface and the second reflective surface.
A ramp generator includes an operational amplifier having an output to generate a ramp signal. An integration current source is coupled to a first input and a reference voltage is coupled to a second input of the operational amplifier. A feedback capacitor is coupled between the first input and the output of the operational amplifier. A monitor circuit is coupled to the first and second inputs of the operational amplifier to generate an output flag in response to a comparison of the first and second inputs. A trimming control circuit is configured to generate a trimming signal in response to the output flag. An assist current source is configured to conduct an assist current from the output of the operational amplifier to ground in response to the trimming signal generated by the trimming control circuit.
A pixel circuit includes a transfer transistor coupled between a photodiode and a floating diffusion to transfer image charge from the photodiode to the floating diffusion. A lateral overflow integration capacitor (LOFIC) includes an insulating region between a first metal electrode and a second metal electrode that is coupled to a first reset transistor and selectively coupled to the floating diffusion. A second reset transistor and a bias voltage source are coupled to the first metal electrode. During an idle period, the first reset transistor is configured to be on, the second reset transistor is configured to be off, and the bias voltage source is configured to provide a first bias voltage to the first metal electrode to reverse bias the LOFIC. The first bias voltage is less than a reset voltage provided from a reset voltage source.
H04N 25/77 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
H04N 25/59 - Control of the dynamic range by controlling the amount of charge storable in the pixel, e.g. modification of the charge conversion ratio of the floating node capacitance
H04N 25/75 - Circuitry for providing, modifying or processing image signals from the pixel array
H04N 25/709 - Circuitry for control of the power supply
38.
OPTICAL SENSOR WITH SIMULTANEOUS IMAGE/VIDEO AND EVENT DRIVEN SENSING CAPABILITIES
An optical sensor includes a pixel array of pixel cells. Each pixel cell includes photodiodes to photogenerate charge in response to incident light and a source follower to generate an image data signal in response to the charge photogenerated from the photodiodes. An image readout circuit is coupled to the pixel cells to read out the image data signal generated from the source follower of at least one of the pixel cells of a row of the pixel array. An event driven circuit is coupled to the pixel cells to read out the event data signals generated in response to the charge from the photodiodes of another row of the pixel cells of the pixel array. The image readout circuit is coupled to read out the image data signal and the event driven circuit is coupled to read out the event data signals from the pixel array simultaneously.
H04N 25/75 - Circuitry for providing, modifying or processing image signals from the pixel array
H04N 25/79 - Arrangements of circuitry being divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
H04N 25/443 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by reading pixels from selected 2D regions of the array, e.g. for windowing or digital zooming
H04N 25/445 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by skipping some contiguous pixels within the read portion of the array
39.
HIGH DYNAMIC RANGE, BACKSIDE-ILLUMINATED, LOW CROSSTALK IMAGE SENSOR WITH WALLS ON BACKSIDE SURFACE TO ISOLATE PHOTODIODES
A backside-illuminated image sensor includes photodiodes in photodiode regions electrically isolated by filled trenches with openings in a dielectric layer over the photodiodes. The image sensor has a metal grid aligned over the trenches, the metal grid within 80 nanometers of the trenches. The image sensor is formed by: fabricating photodiodes in photodiode regions of a frontside of a silicon substrate with source-drain regions of transistors, the photodiodes electrically isolated by deep trenches, each photodiode within a photodiode region of the substrate; forming the filled trenches in a backside of the semiconductor substrate; forming protective oxide and process stop layers over the backside of the semiconductor substrate; depositing a metal grid over the deep trenches; removing the process stop layer from over the photodiode regions; and depositing color filters over the photodiode regions.
A global shutter readout circuit includes a pixel enable signal and a first sample and hold (SH) signal that are configured to turn ON a pixel enable transistor and a first storage transistor at a first time during a global transfer period. The pixel enable signal is configured to begin a transition to an OFF level at a second time and complete the transition to the OFF level at a third time to turn OFF the pixel enable transistor. The first SH signal is configured to begin a transition to the OFF level at a fourth time, which occurs after the second and third times, and complete the transition to the OFF level at a fifth time to turn OFF the first storage transistor. An OFF transition duration between the fourth and fifth times is greater than an ON transition duration of the first SH signal at the first time.
A video encoding method includes adding, to a bitstream, an indicator of whether a frame resolution is to be reduced. The method also includes, when said indicator indicates that the frame resolution is to be reduced, encoding a block of video data on a current video frame without motion compensation. A video decoding method includes receiving, as a syntax element from a bitstream, an indicator of whether a frame resolution is to be reduced. The method also includes, when said indicator indicates that the frame resolution is to be reduced, decoding a block of video data on a video frame without motion compensation.
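As a rough illustrative sketch (not the claimed codec), the decode-side branching on the resolution-reduction indicator can be pictured in Python, where `frame_resolution_reduced`, `decode_block`, and the trivial "motion compensation" step are all hypothetical stand-ins:

```python
def decode_block(bitstream_flags, block, reference_block):
    """Toy decoder path: when the reduced-resolution indicator is set,
    the block is decoded without motion compensation (from its own data)."""
    if bitstream_flags.get("frame_resolution_reduced", False):
        # No motion compensation: reconstruct solely from the block's data.
        return list(block)
    # Otherwise apply a (trivial, stand-in) motion-compensated prediction.
    return [v + r for v, r in zip(block, reference_block)]
```

The point of the sketch is only the control flow: the syntax element received from the bitstream selects whether the motion-compensation path is taken at all.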
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/146 - Data rate or code amount at the encoder output
Image sensors include a pixel array arranged about an array center, each pixel of the pixel array having a photodiode formed in a semiconductor substrate, and a central deep trench isolation structure disposed in the semiconductor substrate relative to a pixel center between the photodiode and an illuminated surface of the semiconductor substrate. If the pixel center is not coincident with the array center, then the central deep trench isolation structure is disposed at a CDTI shift distance away from the pixel center.
An image processing method and a device configured to implement the same are disclosed. The method comprises: from an imaging device, obtaining image data that comprises temporally consecutive image frames; performing feature extraction on each of the obtained image frames; dynamically retaining extracted feature data of the obtained image frames in a feature accumulation database by regulating data retention in the feature accumulation database to a selective subset of the extracted feature data from the obtained image frames; performing Random Sample Consensus (RANSAC) operation on the selective subset of the extracted feature data from the feature accumulation database; and generating an estimation model from output of the RANSAC operation based on at least one of an extracted feature data of a current image frame or extracted feature data of one or more temporally preceding image frames of the obtained image frames.
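A minimal sketch of the accumulate-then-RANSAC flow described above, assuming a toy 2-D line model and a simple "keep the last few frames" retention policy — both are illustrative stand-ins, not the claimed regulation scheme:

```python
import random

def ransac_line(points, iters=200, tol=1.0, seed=0):
    """Minimal RANSAC fit of a line y = a*x + b over accumulated feature points."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair; skip this hypothesis
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(a * x + b - y) <= tol)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

def accumulate(db, frame_features, max_frames=3):
    """Retain only the most recent frames' features (a stand-in retention policy)."""
    db.append(frame_features)
    return db[-max_frames:]
```

The estimation model is then fit over the retained subset rather than over every feature ever extracted, which is the gist of regulating the feature accumulation database.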
A pixel circuit includes a photodiode configured to photogenerate image charge in response to incident light. A transfer transistor is configured to transfer the image charge from the photodiode to a floating diffusion. A reset transistor is coupled between a reset voltage source and the floating diffusion. A lateral overflow integration capacitor (LOFIC) includes an insulating region disposed between a first metal electrode and a second metal electrode. The first metal electrode is coupled to a bias voltage source, the second metal electrode is selectively coupled to the floating diffusion, and excess image charge photogenerated by the photodiode during an idle period is configured to overflow from the photodiode through the transfer transistor into the floating diffusion.
H04N 23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
45.
Electronic camera module with integral LED and light-pipe illuminator
An electronic camera assembly includes a camera chip cube bonded to camera bondpads of an interposer; at least one light-emitting diode (LED) bonded to LED bondpads of the interposer at the same height as the camera bondpads; and a housing extending from the interposer and LEDs to the height of the camera chip cube, with light guides extending from the LEDs through the housing to a top of the housing. In embodiments, the electronic camera assembly includes a cable coupled to the interposer. In typical embodiments the camera chip cube has footprint dimensions of less than three and a half millimeters square.
A pixel circuit includes a transfer transistor coupled between a photodiode and a floating diffusion. A lateral overflow integration capacitor (LOFIC) includes an insulating region disposed between a first metal electrode coupled to a bias voltage source, and a second metal electrode selectively coupled to the floating diffusion. A multifunction reset transistor includes a gate, a drain, a first source, and a second source. The drain, the first source, and the second source are coupled to each other in response to a multifunction reset control signal turning the multifunction reset transistor on. The drain, the first source, and the second source are decoupled from one another in response to the multifunction reset control signal turning the multifunction reset transistor off. The drain is coupled to a reset voltage source, the first source is coupled to the first metal electrode, and the second source is coupled to the second metal electrode.
H04N 25/59 - Control of the dynamic range by controlling the amount of charge storable in the pixel, e.g. modification of the charge conversion ratio of the floating node capacitance
H04N 25/75 - Circuitry for providing, modifying or processing image signals from the pixel array
H04N 25/709 - Circuitry for control of the power supply
A global shutter readout circuit includes a reset transistor coupled between a reset voltage and a bitline. A pixel enable transistor is coupled between the reset transistor and a source follower transistor. First and second terminals of the pixel enable transistor are coupled together in response to a pixel enable signal coupled to a third terminal of the pixel enable transistor. A first storage transistor is coupled to the second terminal of the pixel enable transistor and the gate of the source follower transistor. A first storage capacitor is coupled to the first storage transistor. A second storage transistor is coupled to the second terminal of the pixel enable transistor and the gate of the source follower transistor. A second storage capacitor is coupled to the second storage transistor. A row select transistor is coupled to the source follower transistor to generate an output signal from the global shutter readout circuit.
H04N 25/62 - Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
H04N 25/65 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to reset noise, e.g. KTC noise related to CMOS structures by techniques other than CDS
H04N 25/771 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising storage means other than floating diffusion
H04N 25/532 - Control of the integration time by controlling global shutters in CMOS SSIS
48.
Low power event driven pixels with passive difference detection circuitry, and reset control circuits for the same
Low power event driven pixels with passive difference detection circuits (and reset control circuits for the same) are disclosed herein. In one embodiment, an event driven pixel comprises a photosensor, a photocurrent-to-voltage converter, and a difference circuit. The difference circuit includes a source follower transistor and a switched-capacitor filter having an input coupled to the photocurrent-to-voltage converter and an output coupled to a gate of the source follower transistor. The switched-capacitor filter includes a first capacitor coupled between the input and the output of the switched-capacitor filter, a second capacitor having a first plate coupled to the output of the switched-capacitor filter, and a reset transistor coupled between a reference voltage and the output of the switched-capacitor filter. The difference circuit is configured to generate a difference signal that is indicative of whether the event driven pixel has detected an event in an external scene.
H04N 25/772 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising A/D, V/T, V/F, I/T or I/F converters
H04N 25/42 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by switching between different modes of operation using different resolutions or aspect ratios, e.g. switching between interlaced and non-interlaced mode
H04N 25/709 - Circuitry for control of the power supply
H03K 19/00 - Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits
H03K 19/20 - Logic circuits, i.e. having at least two inputs acting on one output; Inverting circuits characterised by logic function, e.g. AND, OR, NOR, NOT circuits
H03M 1/40 - Analogue value compared with reference values sequentially only, e.g. successive approximation type recirculation type
H03M 1/44 - Sequential comparisons in series-connected stages with change in value of analogue signal
49.
SEMICONDUCTOR SUBSTRATE WITH PASSIVATED FULL DEEP-TRENCH ISOLATION
An image sensor with passivated full deep-trench isolation includes a semiconductor substrate having a plurality of sidewalls that form a plurality of trenches separating pixels of a pixel array, each of the plurality of trenches extending from a first surface of the semiconductor substrate to a second surface opposite the first surface, and a passivation layer disposed on the second surface, the passivation layer lining the plurality of sidewall surfaces and the second surface of the semiconductor substrate. Each of the plurality of trenches extends from the first surface into the semiconductor substrate, forming a first opening proximate to the first surface and a second opening proximate to the second surface, wherein the first opening has a width greater than that of the second opening.
A ramp buffer circuit includes an input device having an input coupled to receive a ramp signal. A bias current source is coupled to an output of the input device. The input device and the bias current source are coupled between a power line and ground. An assist current source is coupled between the output of the input device and ground. The assist current source is configured to conduct an assist current from the output of the input device to ground only during a ramp event generated in the ramp signal.
The present disclosure provides an alignment method for image sensor fabrication that involves forming sets of alignment marks using key process mask layers to improve alignment registration between process mask layers, reducing the number of alignment transfers and improving alignment accuracy between pixel elements. The present disclosure further provides a semiconductor device that includes such alignment mark structures.
In an embodiment, a method of reducing resistance-capacitance delay along photodiode transfer lines of an image sensor includes forking a plurality of photodiode transfer lines each into a plurality of sublines coupled together and to a first decoder-driver at a first end of each subline; and distributing selection transistors of a plurality of multiple-photodiode cells among the plurality of sublines. In embodiments, the sublines may be recombined at a second end of the sublines and driven by a second decoder-driver at the second end.
H04N 25/621 - Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels for the control of blooming
H04N 25/76 - Addressed sensors, e.g. MOS or CMOS sensors
H04N 25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
H04N 25/704 - Pixels specially adapted for focusing, e.g. phase difference pixel sets
H04N 25/42 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by switching between different modes of operation using different resolutions or aspect ratios, e.g. switching between interlaced and non-interlaced mode
H04N 25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
53.
IMAGE PROCESSING METHOD AND APPARATUS IMPLEMENTING THE SAME
An image processing method and a device configured to implement the same are disclosed. The method comprises: obtaining optical input from a hybrid imaging device, wherein an obtained optical input comprises a first component and a second component that temporally corresponds to the first component; wherein the first component of the obtained optical input corresponds to a first temporal resolution, while the second component of the obtained optical input corresponds to a second temporal resolution higher than that of the first component; performing image restoration operation on a first subset of the first component of the obtained optical input in accordance with data from the second component of the obtained optical input; and performing image fusion operation to generate fused image data from an output of the image restoration operation and a second subset of the first component of the obtained optical input.
A reduced cross-talk pixel-array substrate includes a semiconductor substrate, a buffer layer, a metal annulus, and an attenuation layer. The semiconductor substrate includes a first photodiode region. A back surface of the semiconductor substrate forms a trench surrounding the first photodiode region in a cross-sectional plane parallel to a first back-surface region of the back surface above the first photodiode region. The buffer layer is on the back surface and has a feature located above the first photodiode region with the feature being one of a recess and an aperture. The metal annulus is on the buffer layer and covers the trench. The attenuation layer is above the first photodiode region.
Transistors include trenches formed in a semiconductor substrate having a first conductive type. The trenches define, in a channel width plane of the transistor, at least one nonplanar substrate structure having a plurality of sidewall portions and a tip portion disposed between the plurality of sidewall portions. An epitaxial overlayer is epitaxially grown on the sidewall portions and the tip portion. A channel doping layer having a doped portion of the semiconductor substrate is formed in the nonplanar substrate structure and enclosed by the epitaxial overlayer. An isolation layer is disposed in the trenches and over the epitaxial overlayer. A gate is disposed on the isolation layer and extends into the trenches.
A method for estimating a signal charge collected by a pixel of an image sensor includes determining an average bias that depends on the pixel's floating-diffusion dark current and pixel-sampling period. The method also includes determining a signal-charge estimate as the average bias subtracted from a weighted sum of a plurality of N multiple-sampling values, each multiplied by a respective one of a plurality of N sample-weights.
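Under the assumption that the average bias scales as dark current times sampling period (the abstract only states that the bias depends on both quantities), the estimate can be sketched as follows; the function name and bias model are illustrative:

```python
def signal_charge_estimate(samples, weights, dark_current, sampling_period):
    """Hypothetical form of the multiple-sampling estimate: a weighted sum
    of the N samples minus an average bias that scales with the
    floating-diffusion dark current and the pixel-sampling period."""
    assert len(samples) == len(weights)
    average_bias = dark_current * sampling_period  # assumed bias model
    weighted_sum = sum(w * s for w, s in zip(weights, samples))
    return weighted_sum - average_bias
```

For instance, two samples of 10 and 12 with equal weights 0.5 give a weighted sum of 11; with a bias of 1 the estimate is 10.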
H04N 25/67 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
H04N 25/671 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
A pixel in an image sensor capable of detecting infrared light, and an associated fabrication method, are disclosed. The image sensor includes a semiconductor substrate having a first photodiode and a second photodiode adjacent to the first photodiode. A planarized dielectric layer having a recessed region is disposed on a first side of the semiconductor substrate. A first color filter is disposed on the planarized dielectric layer, aligned with the first photodiode, and configured to transmit light of a first wavelength range. A second color filter is disposed in the recessed region and on the planarized dielectric layer. The second color filter is aligned with the second photodiode and configured to transmit light of a second wavelength range that is different from the first wavelength range. A first depth-wise thickness of the first color filter is less than a second depth-wise thickness of the second color filter.
Pixels, such as for image sensors and electronic devices, include a photodiode formed in a semiconductor substrate, a floating diffusion, and a transfer structure selectively coupling the photodiode to the floating diffusion. The transfer structure includes a transfer gate formed on the semiconductor substrate, and a vertical channel structure including spaced apart first doped regions formed in the semiconductor substrate between the transfer gate and the photodiode. Each spaced apart first doped region is doped at a first dopant concentration with a first-type dopant. The spaced apart first doped regions are formed in a second doped region doped at a second dopant concentration with a second-type dopant of a different conductive type.
A backside-illuminated image sensor includes arrayed photodiodes separated by isolation structures, and an interlayer dielectric between a first layer of metal interconnect and the substrate. The image sensor has barrier metal walls in the interlayer dielectric between the isolation structures and the first-layer interconnect, the barrier metal walls aligned with the isolation structures and disposed between the isolation structures and the first-layer interconnect. The barrier metal walls deflect light passing through photodiodes of the sensor that would otherwise be reflected by the interconnect into different photodiodes. The sensor is formed by: providing a partially fabricated semiconductor substrate with photodiodes and source-drain regions formed; forming gate electrodes on a frontside surface of the semiconductor substrate; depositing an etch-stop layer over the gate electrodes; depositing interlayer dielectric on the etch-stop layer; forming trenches extending to the etch-stop layer through the interlayer dielectric, the trenches being between photodiodes; and filling the trenches with metal to form the barrier metal walls.
H04N 5/3745 - Addressed sensors, e.g. MOS or CMOS sensors having additional components embedded within a pixel or connected to a group of pixels within a sensor matrix, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
60.
MULTI-LAYER METAL STACK FOR ACTIVE PIXEL REGION AND BLACK PIXEL REGION OF IMAGE SENSOR AND METHODS THEREOF
An image sensor includes an active pixel photodiode, a black pixel photodiode, a metal grid structure, and a light shield. Each of the active pixel photodiode and the black pixel photodiode is disposed in a semiconductor material having a first side and a second side opposite the first side. The first side of the semiconductor material is disposed between the light shield and the black pixel photodiode. The metal grid structure includes a first multi-layer metal stack including a first metal and a second metal different from the first metal. The light shield includes a second multi-layer stack including the first metal and the second metal. A first thickness of the first multi-layer metal stack is less than a second thickness of the second multi-layer metal stack.
An image sensor element includes a transfer transistor TX, a LOFIC select transistor LF, a photodiode PD, and a first overflow path OFP. The transfer transistor TX outputs a readout signal from a first end. The LOFIC select transistor LF includes a first end connected to a second end of the transfer transistor TX, and a second end connected to a capacitor. The photodiode PD is connected in common to a third end of the transfer transistor and a third end of the LOFIC select transistor LF. The first overflow path OFP is formed between the photodiode PD and a second end of the LOFIC select transistor LF. Each of the transfer transistor TX and the LOFIC select transistor LF is configured with a vertical gate transistor.
H04N 25/59 - Control of the dynamic range by controlling the amount of charge storable in the pixel, e.g. modification of the charge conversion ratio of the floating node capacitance
H04N 25/75 - Circuitry for providing, modifying or processing image signals from the pixel array
62.
TRANSISTORS HAVING INCREASED EFFECTIVE CHANNEL WIDTH
An image sensor includes a photodiode disposed in a semiconductor substrate having a first surface and a second surface opposite to the first surface. A floating diffusion is disposed in the semiconductor substrate. A transfer transistor is configured for coupling the photodiode to the floating diffusion. The transfer transistor includes a vertical transfer gate extending a first depth in a depthwise direction from the first surface into the semiconductor substrate. A transistor is coupled to the floating diffusion. The transistor includes: a planar gate disposed proximate to the first surface of the semiconductor substrate; and a plurality of vertical gate electrodes, each extending a respective depth into the semiconductor substrate from the planar gate in the depthwise direction. The respective depth of at least one of the plurality of vertical gate electrodes is the same as the first depth of the vertical transfer gate.
An imaging system includes a pixel array configured to generate image charge voltage signals in response to incident light received from an external scene. An infrared illumination source is deactivated during the capture of a first image of the external scene and activated during the capture of a second image of the external scene. An array of sample and hold circuits is coupled to the pixel array. Each sample and hold circuit is coupled to a respective pixel of the pixel array and includes first and second capacitors to store first and second image charge voltage signals of the captured first and second images, respectively. A column voltage domain differential amplifier is coupled to the first and second capacitors to determine a difference between the first and second image charge voltage signals to identify an object in a foreground of the external scene.
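A crude software analogue of the underlying idea, assuming foreground pixels are those that brighten markedly between the IR-off and IR-on captures (the described circuit performs this differencing in the column voltage domain, not in software, and the function name and threshold are illustrative):

```python
def foreground_mask(frame_no_ir, frame_ir, threshold):
    """Mark pixels strongly brightened by the IR illumination as foreground:
    near-field objects reflect much more of the IR source than the background."""
    return [
        [1 if (b - a) > threshold else 0 for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_no_ir, frame_ir)
    ]
```

A pixel that jumps from 10 to 50 between captures is flagged as foreground, while one that barely changes is not.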
C08J 3/215 - Compounding polymers with additives, e.g. colouring in the presence of a liquid phase the polymer being premixed with a liquid phase at least one additive being also premixed with a liquid phase
An image sensor includes a semiconductor substrate and a multilayer film. The semiconductor substrate includes a photodiode and a back surface having a recessed region that surrounds the photodiode. The multilayer film is on, and conformal to, the recessed region, and includes N layer-groups of adjacent high-κ material layers. Each pair of adjacent high-κ material layers of a same layer-group of the N layer-groups includes (i) an outer-layer having an outer fixed-charge density and (ii) an inner-layer, located between the outer-layer and the recessed region, that has an inner fixed-charge density. Each of the outer and inner fixed-charge density is negative. The inner fixed-charge density is more negative than the outer fixed-charge density.
An image sensor includes a plurality of pixels arranged in a matrix, each of which outputs a signal in response to incident light. Data can be read out from the plurality of pixels, with simultaneous readout of data from a plurality of columns of pixels, and at least one pixel of the plurality of columns of pixels to be read simultaneously can be read for phase detection with respect to each of its divided sub-pixels. With n rows as a readout unit, where n is an integer of 2 or more, the image sensor is configured to: perform readout for at least one sub-pixel of at least one pixel in one readout cycle within the readout unit; perform readout for each pixel, including phase detection readout for the other sub-pixel of the at least one pixel in which the at least one sub-pixel has been read in the one readout cycle, in another readout cycle within the readout unit; and complete the readout for the readout unit within n+1 readout cycles.
H04N 25/704 - Pixels specially adapted for focusing, e.g. phase difference pixel sets
H04N 25/77 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
H04N 25/44 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
H04N 25/78 - Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
H04N 25/46 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
66.
Image sensor with on-chip occlusion detection and methods thereof
An imaging system including an image sensor coupled to a controller to image an external scene is described. The controller includes logic storing instructions that when executed causes the imaging system to perform operations including capturing images, including a first image and a second image, of an external scene, and generating reduced representations of the images including a first reduced representation associated with the first image and a second reduced representation associated with the second image. The operations further include comparing the first reduced representation with the second reduced representation to determine a difference between the first image and the second image and identifying an occurrence of an occlusion affecting the image sensor imaging the external scene when the difference is greater than a threshold value.
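The compare-reduced-representations scheme described above can be sketched briefly. This is an assumption-laden illustration: the block-mean downscaling, the 2×2 block size, the absolute-difference metric, and the threshold are stand-ins for whatever reduced representation and comparison the actual controller logic uses.

```python
# Sketch: build reduced representations (block means) of two consecutive
# frames and flag an occlusion when their difference exceeds a threshold.

def reduce_frame(frame, block=2):
    """Downscale by averaging non-overlapping block x block tiles."""
    h, w = len(frame), len(frame[0])
    return [
        [sum(frame[y + dy][x + dx] for dy in range(block) for dx in range(block)) / block**2
         for x in range(0, w, block)]
        for y in range(0, h, block)
    ]

def occlusion_detected(frame_a, frame_b, threshold=100.0):
    ra, rb = reduce_frame(frame_a), reduce_frame(frame_b)
    diff = sum(abs(a - b) for row_a, row_b in zip(ra, rb) for a, b in zip(row_a, row_b))
    return diff > threshold

clear   = [[100] * 4 for _ in range(4)]
blocked = [[0] * 4 for _ in range(4)]      # e.g. lens suddenly covered
print(occlusion_detected(clear, blocked))  # True
print(occlusion_detected(clear, clear))    # False
```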
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
H04N 5/222 - Studio circuitry; Studio devices; Studio equipment
H04N 23/63 - Control of cameras or camera modules by using electronic viewfinders
H04N 23/67 - Focus control based on electronic image sensor signals
67.
Nine cell pixel image sensor with phase detection autofocus
An imaging device includes a pixel array of 1×3 pixel circuits that include 3 photodiodes in a column. Bitlines are coupled to the 1×3 pixel circuits. The bitlines are divided into groupings of 3 bitlines per column of the 1×3 pixel circuits. Each column of the 1×3 pixel circuits includes a plurality of first banks coupled to a first bitline, a plurality of second banks coupled to a second bitline, and a plurality of third banks coupled to a third bitline of a respective grouping of the 3 bitlines. The 1×3 pixel circuits are arranged into groupings of 3 1×3 pixel circuits per nine cell pixel structures that form a plurality of 3×3 pixel structures of the pixel array.
H04N 25/615 - Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4" involving a transfer function modelling the optical system, e.g. optical transfer function [OTF], phase transfer function [PhTF] or modulation transfer function [MTF]
H04N 25/133 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
H04N 25/13 - Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
H04N 25/447 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by preserving the colour pattern with or without loss of information
68.
Circuit and method for image artifact reduction in high-density, high-pixel-count image sensor with phase detection autofocus
An image sensor includes an array of multiple-photodiode cells, each photodiode coupled through a selection transistor to a floating diffusion of the cell, the selection transistors controlled by respective transfer lines, a reset, a sense source follower, and a read transistor coupled from the source follower to a data line. The array includes phase detection rows with phase detection cells and normal cells; and a compensation row of more cells. In embodiments, each phase detection row has cells with at least one photodiode coupled to the floating diffusion by selection transistors controlled by a transfer line separate from transfer lines of selection transistors of adjacent normal cells of the row. In embodiments, the compensation row has cells with photodiodes coupled to the floating diffusion by selection transistors controlled by a transfer line separate from transfer lines of selection transistors of adjacent normal cells of the compensation row.
A pixel readout circuit includes an analog to digital converter coupled to the bitline output of the pixel circuit. A switch is coupled between the bitline output of the pixel circuit and a reference voltage. The switch is pulsed on and off a first time to settle the bitline to the reference voltage prior to an autozero operation of the analog to digital converter. The switch is pulsed on and off a second time to settle the bitline to the reference voltage after the autozero operation and prior to a first analog to digital conversion. The switch is configured to be pulsed on and off a third time to settle the bitline to the reference voltage after the first analog to digital conversion operation and prior to a second analog to digital conversion operation.
An image sensor processor implemented method for retaining pixel intensity, comprising: receiving, by the image processor, a numerical value indicative of a corresponding pixel intensity; determining, by the image processor, whether a least significant portion of the received numerical value is equal to a predetermined numerical value; and responsive to determining the least significant portion of the received numerical value is equal to the predetermined numerical value, rounding, by the image processor, the received numerical value of the corresponding pixel intensity to a higher or lower value depending on a bit sequence, and if the least significant portion of the received numerical value is not equal to the predetermined value, rounding the received numerical value to the higher or lower value based on the received numerical value; and binning the rounded value.
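The rounding rule claimed above can be sketched as follows. This is a loose interpretation under stated assumptions: the "predetermined numerical value" is taken to be the exact-half tie pattern of the dropped bits, and the "bit sequence" is modeled as a single externally supplied tie-breaking bit; the patent does not specify either choice.

```python
# Minimal sketch: when the dropped LSBs equal the tie pattern (exactly
# half), the rounding direction comes from an external bit sequence;
# otherwise ordinary round-to-nearest applies, then binning follows.

def round_drop_bits(value, drop_bits, tie_bit):
    """Round `value` while dropping its `drop_bits` least significant bits."""
    half = 1 << (drop_bits - 1)
    frac = value & ((1 << drop_bits) - 1)    # least significant portion
    base = value >> drop_bits
    if frac == half:                          # predetermined "tie" value
        return base + tie_bit                 # direction from bit sequence
    return base + (1 if frac > half else 0)   # round to nearest otherwise

# Dropping 2 bits: 0b1010 (10) has frac 0b10 == half, so the tie bit decides.
print(round_drop_bits(10, 2, 0))  # 2
print(round_drop_bits(10, 2, 1))  # 3
print(round_drop_bits(11, 2, 0))  # frac 0b11 > half -> 3
```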
H04N 25/46 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
H04N 25/75 - Circuitry for providing, modifying or processing image signals from the pixel array
G06T 3/40 - Scaling of a whole image or part thereof
H04N 23/80 - Camera processing pipelines; Components thereof
A differential subrange analog-to-digital converter (ADC) converts differential analog image signals received from sample and hold circuits to a digital signal through an ADC comparator. The comparator of the differential subrange ADC is shared by a successive approximation register (SAR) ADC coupled to provide M upper output bits (UOB) and a ramp ADC coupled to provide N lower output bits (LOB). Digital-to-analog converters (DACs) of the differential subrange SAR ADC comprise 2^M buffered bit capacitor fingers connected to the comparator. Each buffered bit capacitor finger comprises a bit capacitor, a bit buffer, and a bit switch controlled by the UOB. Both DACs are initialized to preset values and finalized based on the value of the least significant bit of the UOB. The subsequent ramp ADC operation is thereby ensured to have its first ramp signal ramp in a monotonic direction and its second ramp signal ramp in the opposite direction.
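The subrange result assembly can be shown with simple bit arithmetic. A hedged sketch: the SAR phase's M upper bits select one of 2^M coarse subranges and the N-bit ramp phase resolves the residue within it, so the combined code is the upper bits shifted left by N, OR-ed with the lower bits. Function and parameter names are illustrative.

```python
# Sketch of combining SAR upper bits and ramp lower bits into one code.

def subrange_code(uob, lob, n_lower_bits):
    """Assemble the final conversion result from the two ADC phases."""
    return (uob << n_lower_bits) | lob

# M = 4 upper bits, N = 8 lower bits -> a 12-bit result.
print(subrange_code(0b1010, 0b00110101, 8))  # 0b101000110101 = 2613
```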
H04N 25/772 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising A/D, V/T, V/F, I/T or I/F converters
H03M 1/46 - Analogue value compared with reference values sequentially only, e.g. successive approximation type with digital/analogue converter for supplying reference values to converter
H03M 1/56 - Input signal compared with linear ramp
H04N 25/75 - Circuitry for providing, modifying or processing image signals from the pixel array
A dark-current-inhibiting image sensor includes a semiconductor substrate and a thin junction. The semiconductor substrate includes a front surface, a back surface opposite the front surface, a photodiode, and a concave surface between the front surface and the back surface. The concave surface extends from the back surface toward the front surface, and defines a trench that surrounds the photodiode in a cross-sectional plane parallel to the back surface. The thin junction extends from the concave surface into the semiconductor substrate, and is a region of the semiconductor substrate. The semiconductor substrate includes a first substrate region, located between the thin junction and the photodiode, that has a first conductive type. The photodiode and the thin junction have a second conductive type opposite the first conductive type.
A time-of-flight sensor includes an integrated circuit chip in which a voltage regulator and a load are disposed. The load includes a grouping of pixel circuits and a modulation driver that is supplied power from the voltage regulator. The grouping of pixel circuits is included in a pixel array disposed in the integrated circuit chip. Each one of the pixel circuits includes a photodiode configured to photogenerate charge in response to reflected modulated light, a floating diffusion configured to store a portion of charge photogenerated in the photodiode, and a transfer transistor to transfer the portion of charge from the photodiode to the floating diffusion in response to a phase modulation signal generated by the modulation driver. A feedback circuit is coupled between the load and the voltage regulator, and the voltage regulator is coupled to receive a feedback signal from the feedback circuit in response to the load.
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
H03K 5/24 - Circuits having more than one input and one output for comparing pulses or pulse trains with each other according to input signal characteristics, e.g. slope, integral the characteristic being amplitude
H03G 3/30 - Automatic control in amplifiers having semiconductor devices
H04N 13/296 - Synchronisation thereof; Control thereof
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
A stacked image sensor includes a signal-processing circuitry layer, a pixel-array substrate, a heat-transport layer, and a thermal via. The signal-processing circuitry layer includes a conductive pad exposed on a circuitry-layer bottom surface of the signal-processing circuitry layer. The pixel-array substrate includes a pixel array and is disposed on a circuitry-layer top surface of the signal-processing circuitry layer. The circuitry-layer top surface is between the circuitry-layer bottom surface and the pixel-array substrate. The heat-transport layer is located between the signal-processing circuitry layer and the pixel-array substrate. The thermal via thermally couples the heat-transport layer to the conductive pad.
A time-of-flight pixel circuit includes a photodiode configured to generate charge in response to modulated light reflected from an object. First and second transfer transistors are coupled to the photodiode. The first transfer transistor transfers a first portion of charge from the photodiode in response to a first modulation signal and the second transfer transistor transfers a second portion of charge from the photodiode in response to a second modulation signal. The second modulation signal is an inverted first modulation signal. A first floating diffusion is coupled to the first transfer transistor to receive the first portion of charge in response to a first modulation signal. Each one of a first plurality of sample and hold transistors is coupled between a respective one of a first plurality of memory nodes and the first transfer transistor.
A time-of-flight pixel array includes photodiodes that generate charge in response to incident reflected modulated light. First transfer transistors transfer a first portion of the charge from the photodiodes in response to a first modulation signal and second transfer transistors transfer a second portion of the charge from the photodiodes in response to a second modulation signal, which is an inverted first modulation signal. First floating diffusions are coupled to the first transfer transistors. A binning transistor is coupled between one of the first floating diffusions and another one of the first floating diffusions. A first memory node is coupled to one of the first floating diffusions through a first sample and hold transistor and a second memory node is coupled to another one of the first floating diffusions through a second sample and hold transistor.
H04N 5/347 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels having been sampled or to be sampled by combining or binning pixels in SSIS
H04N 5/378 - Readout circuits, e.g. correlated double sampling [CDS] circuits, output amplifiers or A/D converters
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
77.
Readout architecture for indirect time-of-flight sensing
A time-of-flight sensor includes a pixel array of pixel circuits. A first subset of the pixel circuits is illuminated by reflected modulated light from a portion of an object. A second subset of the pixel circuits is non-illuminated by the reflected modulated light. Each pixel circuit includes a floating diffusion that stores a portion of charge photogenerated in a photodiode in response to the reflected modulated light. A transfer transistor transfers the portion of charge from the photodiode to the floating diffusion in response to modulation by a phase modulation signal. A modulation driver block generates the phase modulation signal and is coupled to a light source that emits the modulated light to the portion of the object. The modulation driver block synchronizes scanning the modulated light emitted by the light source across the object with scanning of the first subset of the pixel circuits across the pixel array.
A curved-surface image-sensor assembly has a porous carrier having a concave surface, with a thinned image sensor bonded by an adhesive to the concave surface of the porous carrier; the porous carrier is mounted into a water-resistant package. The sensor assembly is made by fabricating a thinned, flexible, image-sensor integrated circuit (IC) and applying adhesive to a non-illuminated side of the IC; positioning the IC over a concave surface of a porous carrier; applying vacuum through the porous carrier to draw the IC onto the concave surface of the porous carrier; and curing the adhesive to bond the IC to the concave surface of the porous carrier.
A multiple-lens optical fingerprint reader for reading fingerprints through a display has a spacer and multiple microlenses with concave and convex surfaces in a microlens array; each microlens of the multiple lenses focuses light arriving at that microlens from a finger adjacent the display through the spacer, forming an image on associated photosensors of a photosensor array of an image sensor integrated circuit. A method of verifying identity of a user includes illuminating a finger of the user with an OLED display; focusing light from the finger through arrayed microlenses onto a photosensor array; reading the array into overlapping electronic fingerprint images; extracting features from the overlapping fingerprint images or from a stitched fingerprint image; and comparing the features to features of at least one user in a library of features associated with one or more fingers of one or more authorized users.
A method for preventing defects in a thin film deposited on a semiconductor substrate includes forming a plurality of trenches on a periphery-region of the semiconductor substrate to yield a trenched surface. The semiconductor substrate includes a pixel array; the periphery-region surrounds the pixel array. The trenched surface includes (i) a plurality of trench regions each forming a respective one of the plurality of trenches and (ii) between each pair of adjacent trenches, a respective one of a plurality of inter-trench surfaces. The method also includes depositing the thin film on the trenched surface such that the thin film covers each inter-trench surface and conformally covers each trench region.
An image sensor configured to resolve intensity and polarization has multiple pixels each having a single microlens adapted to focus light on a central photodiode surrounded by at least first, second, third, and fourth peripheral photodiodes, where a first polarizer at a first angle is disposed upon the first peripheral photodiode, a second polarizer at a second angle is disposed upon the second peripheral photodiode, a third polarizer at a third angle is disposed upon the third peripheral photodiode, and a fourth polarizer at a fourth angle is disposed upon the fourth peripheral photodiode, the first, second, third, and fourth angles being different. In embodiments, 4 or 8 peripheral photodiodes are provided, and in an embodiment the polarizers are parts of an octagonal polarizer.
A cavity interposer has a cavity; first bondpads, adapted to couple to a chip-type camera cube, disposed within a base of the cavity at a first level, the first bondpads coupled through feedthroughs to second bondpads at a base of the interposer at a second level; and third bondpads, adapted to couple to a light-emitting diode (LED), at a third level. The third bondpads are coupled to fourth bondpads at the base of the interposer at the second level, and the second and fourth bondpads couple to conductors of a cable, with the first, second, and third levels all different. An endoscope tip assembly includes the cavity interposer, an LED, and a chip-type camera cube electrically bonded to the first bondpads; the LED is bonded to the third bondpads; and a top of the chip-type camera cube and a top of the LED are at a same level.
A61B 1/05 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion
A61B 1/06 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
83.
Endoscope Tip Assembly Using Truncated Trapezoid Cavity Interposer To Allow Coplanar Camera And LEDs in Small-Diameter Endoscopes
A cavity interposer has a cavity; first bondpads, adapted to couple to a chip-type camera cube, disposed within a base of the cavity at a first level, the first bondpads coupled through feedthroughs to second bondpads at a base of the interposer at a second level; and third bondpads, adapted to couple to a light-emitting diode (LED), at a third level. The third bondpads are coupled to fourth bondpads at the base of the interposer at the second level, and the second and fourth bondpads couple to conductors of a cable, with the first, second, and third levels all different. An endoscope tip assembly includes the cavity interposer, an LED, and a chip-type camera cube electrically bonded to the first bondpads; the LED is bonded to the third bondpads; and a top of the chip-type camera cube and a top of the LED are at a same level.
A61B 1/05 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion
A61B 1/06 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
A method and apparatus embed a digital watermark, not visible to the human eye, in image content of single-sensor digital camera images (often called 'raw' images) from a pixel-array. The raw image is transformed to generate preprocessed image coefficients; a watermark message is encrypted using a first key; the encrypted watermark message is randomized using a second key to form a watermark; and the watermark is embedded in randomly selected preprocessed image coefficients.
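The encrypt-randomize-embed pipeline can be sketched loosely in code. Everything here is an assumption made for illustration: a seeded XOR keystream stands in for real encryption, a seeded shuffle stands in for the randomization, and additive embedding into coefficient positions stands in for whatever embedding the patent actually uses.

```python
# Loose illustration of the described pipeline, not the patented scheme.
import random

def embed_watermark(coeffs, message_bits, key1, key2, strength=1):
    # "Encrypt": XOR each bit with a key1-seeded keystream.
    rng1 = random.Random(key1)
    encrypted = [b ^ rng1.randint(0, 1) for b in message_bits]
    # "Randomize": shuffle the bit order with key2.
    rng2 = random.Random(key2)
    order = list(range(len(encrypted)))
    rng2.shuffle(order)
    randomized = [encrypted[i] for i in order]
    # Embed into key2-selected coefficient positions.
    positions = rng2.sample(range(len(coeffs)), len(randomized))
    out = list(coeffs)
    for pos, bit in zip(positions, randomized):
        out[pos] += strength if bit else -strength
    return out

coeffs = [100.0] * 16
marked = embed_watermark(coeffs, [1, 0, 1, 1], key1=7, key2=13)
print(sum(1 for a, b in zip(coeffs, marked) if a != b))  # 4 coefficients touched
```

Because both stages are seeded, a holder of the two keys can regenerate the keystream, the permutation, and the positions to recover the message.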
A pixel includes a semiconductor substrate that includes a floating diffusion region and a photodiode region. The pixel also includes, between a front surface of the semiconductor substrate and a back surface opposing the front surface: a first trench and a second trench adjacent to the first trench in a separation direction that is both (a) parallel to the front surface and (b) in a plane that is perpendicular to the front surface. Each of the first and second trenches (a) is between the floating diffusion region and the photodiode region and (b) extends into the semiconductor substrate from the front surface. In the separation direction, a top average-separation between the first and second trenches, at depths between the front surface and a first depth in the semiconductor substrate, exceeds a bottom average-separation between the first and second trenches, at depths exceeding the first depth.
An optical fingerprint sensor with spoof detection includes a plurality of lenses, an image sensor including a pixel array that includes a plurality of first photodiodes and a plurality of second photodiodes, and at least one apertured baffle-layer having a plurality of aperture stops, wherein each second photodiode is configured to detect light having passed through a lens and at least one aperture stop not aligned with the lens along an optical axis. A method for detecting spoof fingerprints detected using an optical fingerprint sensor includes detecting large-angle light incident on a plurality of anti-spoof photodiodes, wherein the plurality of anti-spoof photodiodes is interleaved with a plurality of imaging photodiodes, determining an angular distribution of light based at least in part on the large-angle light, and detecting spoof fingerprints based at least in part on the angular distribution of light.
A method for detecting spoof fingerprints with an under-display fingerprint sensor includes illuminating, with incident light emitted from a display, a target region of a fingerprint sample disposed on a top surface of the display; detecting a first scattered signal from the fingerprint sample with a first image sensor region of an image sensor located beneath the display, the first image sensor region not directly beneath the target region, the first scattered signal including a first portion of the incident light scattered by the target region; determining a scattered light distribution based at least in part on the first scattered signal; and identifying spoof fingerprints based at least in part on the scattered light distribution.
A method for detecting spoof fingerprints detected using an optical fingerprint sensor and polarization includes controlling a display of an electronic device to output a pattern of light to illuminate a fingerprint sample touching the display; blocking smaller-angle light from impinging a plurality of anti-spoof photodiodes of the pixel array; filtering larger-angle light incident on the plurality of anti-spoof photodiodes to at least one polarization direction; detecting the larger-angle light using the plurality of anti-spoof photodiodes; correlating the larger-angle light with the pattern of light; determining the fingerprint spoofing based at least in part on the correlation of the larger-angle light and the pattern of light; and wherein the plurality of anti-spoof photodiodes is interleaved with a plurality of imaging photodiodes such that each anti-spoof photodiode of the plurality of anti-spoof photodiodes is between adjacent imaging photodiodes of the plurality of imaging photodiodes.
A video encoding method includes (i) determining a current bit rate of a communication channel between a destination device and a source device that stores an input video frame, and (ii) generating a current reconstructed frame and an encoded bitstream at least in part via inter-frame coding of a current input video frame of a sequence of input video frames using a previously-generated reconstructed frame generated at least in part via inter-frame coding of a previous input video frame. The current reconstructed frame is a compressed version of the current input video frame. When both (i) a subsequent bit rate, determined after said inter-frame coding, is less than a threshold and (ii) the current bit rate exceeds the threshold, the method includes: (a) generating a downscaled reconstructed frame at least in part by downscaling the current reconstructed frame; and (b) appending the encoded bitstream with a bit sequence representing the downscaled reconstructed frame.
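The rate-adaptation rule in this method reduces to a simple condition on the channel rate before and after inter-coding. The following is a control-flow sketch under stated assumptions: `inter_code` and `downscale` are trivial stand-ins for a real codec's reconstruction and downscaling, and the bitstream is modeled as a list of tagged segments.

```python
# Sketch: if the channel rate drops below the threshold after inter-coding
# a frame (while it was above the threshold before), append a downscaled
# copy of the reconstruction so decoding can continue at lower resolution.

def inter_code(frame, prev):   # stand-in for real inter-frame coding
    return frame

def downscale(frame):          # stand-in: keep every other sample
    return frame[::2]

def encode_frame(frame, prev_recon, bitrate_before, bitrate_after, threshold):
    recon = inter_code(frame, prev_recon)   # compressed reconstruction
    bitstream = [("inter", recon)]
    if bitrate_after < threshold <= bitrate_before:
        bitstream.append(("downscaled", downscale(recon)))
    return recon, bitstream

_, bs = encode_frame([1, 2, 3, 4], None,
                     bitrate_before=5_000, bitrate_after=1_000, threshold=2_000)
print([tag for tag, _ in bs])  # ['inter', 'downscaled']
```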
H04N 19/164 - Feedback from the receiver or from the transmission channel
H04N 11/02 - Colour television systems with bandwidth reduction
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/139 - Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
H04N 19/146 - Data rate or code amount at the encoder output
90.
DAM OF IMAGE SENSOR MODULE HAVING SAWTOOTH PATTERN AND INCLINED SURFACE ON ITS INNER WALL AND METHOD OF MAKING SAME
An image sensor module comprises an image sensor having a light sensing area, a cover glass for covering the light sensing area, and a dam between the image sensor and the cover glass, which surrounds the light sensing area and has an outer wall and an inner wall, where a cross-section of the inner wall parallel to the surface of the light sensing area of the image sensor forms a sawtooth pattern, and/or where a cross-section of the inner wall orthogonal to the surface of the light sensing area of the image sensor forms an inclined surface.
A flare-reducing image sensor includes a plurality of pixels, NP in number, and a plurality of microlenses, NML in number, where each of the plurality of microlenses is aligned to a respective one of the plurality of pixels, such that NP = NML. The flare-reducing image sensor further includes a plurality of phase-shifting layers, NL in number, where each phase-shifting layer is aligned with a respective one of the plurality of microlenses, and where NL is less than or equal to NML.
An image sensor comprises a first photodiode, a second photodiode, and a deep trench isolation structure. The first photodiode and the second photodiode are each disposed within a semiconductor substrate. The first photodiode is adjacent to the second photodiode. The deep trench isolation structure has a varying depth disposed within the semiconductor substrate between the first photodiode and the second photodiode. The DTI structure extends the varying depth from a first side of the semiconductor substrate towards a second side of the semiconductor substrate. The first side of the semiconductor substrate is opposite of the second side of the semiconductor substrate.
Reference clock CMOS input buffer with self-calibration and improved ESD performance. In one embodiment, a reference clock input buffer of an image sensor includes a Schmitt trigger configured to generate a clock signal having a falling edge and a rising edge. The falling edge and the rising edge are separated by a hysteresis voltage. The Schmitt trigger includes a plurality of output switches and a plurality of voltage control switches that are individually coupled to individual output switches [M2-i] of the plurality of output switches. Voltage of the falling edge signal or the rising edge signal of the Schmitt trigger is adjustable by selectively switching at least one voltage control switch of the plurality of voltage control switches.
H03L 7/08 - Automatic control of frequency or phase; Synchronisation using a reference signal applied to a frequency- or phase-locked loop - Details of the phase-locked loop
H01L 27/02 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including integrated passive circuit elements with at least one potential-jump barrier or surface barrier
H04N 25/772 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising A/D, V/T, V/F, I/T or I/F converters
A pixel of an image sensor includes a semiconductor substrate having a front surface and a back surface opposing the front surface, a photodiode and floating diffusion (FD) region formed in the substrate along a first pixel axis parallel to the front surface and a transfer gate formed in the front surface of the substrate between the photodiode and the FD region. The transfer gate includes a planar gate on the front surface of the substrate, a vertical transfer gate extending into the substrate from the planar gate, the vertical transfer gate further including a trench and a layer of doped semiconductor material epitaxially grown on the sides and bottom of the trench. The semiconductor substrate and the epitaxial layer comprise a first conductive type, and the photodiode and the FD region comprise a second conductive type. An image sensor and method of forming the vertical transfer gate are disclosed.
An optical fingerprint sensor with spoof detection includes a plurality of lenses; a pixel array including a plurality of first photodiodes, where a line between a center of each first photodiode and an optical center of each lens forms an optical axis; at least one apertured baffle-layer positioned between the pixel array and the plurality of lenses, each having a respective plurality of aperture stops, each aperture stop being center-aligned with the optical axis; a plurality of second photodiodes intercalated with the plurality of first photodiodes; and a color filter layer between the pixel array and the plurality of lenses, the color filter layer including a plurality of color filters positioned such that each second photodiode is configured to detect electromagnetic energy having passed through a lens, a color filter, and at least one aperture stop not aligned along the optical axis.
A method for forming a contact pad of a semiconductor device is disclosed. The method includes providing a semiconductor substrate including a first side and a second side. The semiconductor device includes a shallow trench isolation structure, disposed between the first side and the second side, and an intermetal dielectric stack coupled to the second side. The intermetal dielectric stack includes a first metal interconnect. The method further includes etching a first trench into the semiconductor substrate, depositing a dielectric material into the first trench to form a dielectric spacer extending along side walls of the first trench, etching a second trench aligned with the first trench, and depositing a metal material into the second trench to form the contact pad that contacts the first metal interconnect.
The present application discloses an imaging system for detecting human-object interaction and a method for detecting human-object interaction thereof. The imaging system includes an event sensor, an image sensor, and a controller. The event sensor is configured to obtain an event data set of the targeted scene according to variations of light intensity sensed by pixels of the event sensor when an event occurs in the targeted scene. The image sensor is configured to capture a visual image of the targeted scene. The controller is configured to detect a human according to the event data set, trigger the image sensor to capture the visual image when the human is detected, and detect the human-object interaction in the targeted scene according to the visual image and a series of event data sets obtained by the event sensor during the event.
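The control flow above (detect a human from event data, trigger an image capture, then gather event data for the duration of the event) can be sketched in a few lines. This is a minimal illustration only; the detector stub, the function names, and the event representation are hypothetical, not taken from the disclosure.

```python
def detect_human(event_data):
    # Stub detector for illustration: flag a "human" when enough event
    # pixels report an intensity change in this frame interval.
    return sum(event_data) > 100

def run_controller(event_stream, capture_image):
    """Trigger one image capture on the first event frame in which a human
    is detected, then collect the remaining event frames for later
    human-object interaction analysis against the captured visual image."""
    captured = None
    collected = []
    for events in event_stream:
        if captured is None and detect_human(events):
            captured = capture_image()   # trigger the image sensor once
        if captured is not None:
            collected.append(events)     # event data gathered during the event
    return captured, collected
```

In this sketch the event sensor runs continuously at low cost, and the (presumably higher-power) image sensor is only activated once the event-based detector fires, which matches the trigger-on-detection behavior the abstract describes.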
G06N 3/04 - Architecture, e.g. interconnection topology
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
H04N 23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
H04N 23/951 - Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
A flare-blocking image sensor includes large pixels and small pixels, a microlens, and an opaque element. The large pixels and small pixels form a first and a second pixel array respectively, having pixel pitches Px and Py. The second pixel array is offset from the first pixel array by ½Px and ½Py. A first large pixel of the large pixels is between and collinear with a first and a second small pixel separated by √(Px² + Py²) in a first direction, each small pixel having a width W less than both pixel pitches Px and Py. The microlens is aligned with the first large pixel. The opaque element is between the first large pixel and the microlens and extends, in the first direction, less than ½(√(Px² + Py²) − W) from the first small pixel toward the second small pixel. The opaque element has a width perpendicular to the first direction not exceeding width W.
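The geometric bound on the opaque element's extent, half the small-pixel separation √(Px² + Py²) minus half the width W, is straightforward to evaluate; the pitch and width values below are illustrative only, not from the disclosure.

```python
import math

def max_opaque_extent(px: float, py: float, w: float) -> float:
    """Upper bound on the opaque element's extent along the first direction:
    ½(√(Px² + Py²) − W), where √(Px² + Py²) is the diagonal separation
    between the two small pixels and W is the small-pixel width."""
    return 0.5 * (math.sqrt(px**2 + py**2) - w)

# Example with hypothetical pitches Px = Py = 2.0 um and width W = 1.0 um:
bound = max_opaque_extent(2.0, 2.0, 1.0)
```

With these example values the opaque element may extend a little under 0.92 um from the first small pixel toward the second, so it shadows the flare-prone region without fully covering either small pixel.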
An image processing method and a device configured to implement the same are disclosed. The device comprises a hybrid imaging device configured to obtain optical input, and a processing device in signal communication with the hybrid imaging device. The processing device comprises: a motion detection circuit that performs feature tracking based on a first component of the obtained optical input; a motion estimation circuit that performs motion compensation based on output of the motion detection circuit; a frame reconstruction circuit that reconstructs an image frame based on both the output of the motion estimation circuit and a second component of the optical input; and an output unit that outputs image frames at a predetermined global frame rate.
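The staged pipeline above (motion detection on one input component, motion estimation, then frame reconstruction from the second component at a fixed output rate) can be sketched as follows. The pure-translation motion model, the function names, and the event representation are simplifying assumptions for illustration, not the disclosed method.

```python
import numpy as np

def estimate_translation(events):
    # Toy motion estimation: mean displacement vector over the event batch
    # (each event row is a (dy, dx) displacement for illustration).
    return events.mean(axis=0)

def reconstruct_frame(key_frame, shift):
    # Toy frame reconstruction: integer-pixel roll of the intensity key
    # frame by the estimated (dy, dx) shift.
    dy, dx = int(round(shift[0])), int(round(shift[1]))
    return np.roll(np.roll(key_frame, dy, axis=0), dx, axis=1)

def hybrid_pipeline(key_frame, event_batches):
    """Emit one reconstructed frame per event batch, so the output rate is
    set by the event batching (the 'predetermined global frame rate')."""
    return [reconstruct_frame(key_frame, estimate_translation(ev))
            for ev in event_batches]
```

The point of the split is that the high-rate first component (e.g. event data) carries the motion, while the lower-rate second component (e.g. intensity frames) carries the appearance; the reconstruction stage combines them so output frames can be produced faster than the intensity frames alone would allow.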
A pixel circuit includes a photodiode configured to photogenerate charge in response to reflected modulated light incident upon the photodiode. A first floating diffusion is configured to store a first portion of charge photogenerated in the photodiode. A first transfer transistor is configured to transfer the first portion of charge from the photodiode to the first floating diffusion in response to a first phase signal. A first storage node is configured to store the first portion of charge from the first floating diffusion. A first decoupling circuit has a first output responsive to a first input. The first input is coupled to the first floating diffusion and the first output is coupled to the first storage node. A voltage swing at the first output is greater than a voltage swing at the first input.
H04N 5/3745 - Addressed sensors, e.g. MOS or CMOS sensors having additional components embedded within a pixel or connected to a group of pixels within a sensor matrix, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components