Wednesday, May 24, 2017

CCM Market in China

China Securities Research publishes a report on Q Technology, the smallest of the top-three CCM suppliers in China, said to be "on the fast track in catching up with the market leaders and has become an early mover in FPM and dual-camera modules (DCM)." A few interesting figures from the report:

Teledyne DALSA Opens its Line Scan Sensors for Sale

Marketwired: For more than 35 years, Teledyne DALSA has designed and manufactured what it calls the machine vision industry's best-in-class line-scan image sensors. Used in the company's line scan cameras, these sensors were previously not available as stand-alone products. Now, DALSA has decided to offer them for sale, available immediately in resolutions from 2k to 16k.

SSD for Vision Data Storage

Nikkei: Prof. Ken Takeuchi's group at Chuo University, Japan, proposes a "Value-Aware SSD" that evaluates the value of image data and stores important and not-so-important data in high- and low-reliability flash memory cells, respectively. With that, it became possible to implement high-accuracy face recognition even with an error rate of 10%, which is 12 times higher than in existing SSDs. The data retention time of the SSD was improved 300 times. In addition, the read speed was improved by 26% by minimizing the time it takes to correct memory errors.
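
The report does not describe the implementation, but the general idea of routing image data to flash cells of different reliability according to its value can be sketched roughly as follows. This is a minimal illustration only; the block-contrast importance metric and the two storage pools are assumptions made for the example, not the Takeuchi group's actual design.

```python
# Toy sketch of "value-aware" data placement: image tiles judged important
# (here by a crude local-contrast score, standing in for e.g. face regions)
# go to a high-reliability flash pool, the rest to a low-reliability pool.

def tiles(image, size=16):
    """Yield (y, x, tile) blocks of a 2D list-of-lists image."""
    h, w = len(image), len(image[0])
    for y in range(0, h, size):
        for x in range(0, w, size):
            yield y, x, [row[x:x + size] for row in image[y:y + size]]

def importance(tile):
    """Hypothetical value metric: local contrast of the tile."""
    flat = [p for row in tile for p in row]
    return max(flat) - min(flat)

def value_aware_placement(image, threshold=64):
    high_reliability, low_reliability = [], []   # stand-ins for the two pools
    for y, x, tile in tiles(image):
        pool = high_reliability if importance(tile) >= threshold else low_reliability
        pool.append((y, x, tile))
    return high_reliability, low_reliability
```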

Tuesday, May 23, 2017

Sony Image Sensor Growth Strategy

Sony has held a Corporate Strategy Meeting as part of its IR Day, discussing its business growth strategy and fiscal targets. Regarding the image sensor business, the company expects "the image sensor for mobile use business to recover."

Sony CEO Kazuo Hirai says:


The IR Day slide deck gives an update on the Sony Semiconductor Segment (SSS) business and plans:

Omnivision Announces HDR Sensors with LED Flicker Mitigation, Surround View ISP

PRNewswire: OmniVision introduces the 1.3MP OX1A10 and 1.7MP OX2A10 for side- and rear-view camera monitoring systems (CMS), respectively. Built on 4.2um BSI split-pixel technology for HDR, the new sensors offer LED flicker mitigation.

"In regular HDR cameras, the short exposure time causes the image sensor to miss the LED 'on' pulse, giving the appearance of 'flicker' in the video stream on a display. Merely increasing the exposure time of normal pixel technology to capture the LED pulse does not solve the problem, but rather causes saturation and loss of dynamic range," said Marius Evensen, product marketing manager at OmniVision. "We designed the OX1A10 and OX2A10 image sensors with LED flicker–reduction technology to specifically mitigate this problem and enable mass adoption of e-mirrors in the automotive market. These sensors join our growing portfolio of automotive specific digital imaging solutions targeted at both machine and vision display systems."

The OX1A10 and OX2A10 achieve 110dB HDR while guaranteeing LED pulse capture. The OX1A10 supports 1280 × 1080 resolution in a 1:1.2 aspect ratio for side-view cameras. Targeting rear-view cameras, the OX2A10 supports 1840 × 940 resolution in a 2:1 aspect ratio. The sensors' on-chip combination algorithm reduces the output data rate for easier data transmission and back-end processing.
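
A back-of-the-envelope sketch of the pulse-capture problem described above: for an LED driven by PWM, an exposure window shorter than the PWM period overlaps the 'on' pulse only with a probability well below one, while an exposure of at least one full period always captures it. The 90Hz frequency and 10% duty cycle below are assumed example values, not OmniVision figures.

```python
# Probability that a randomly timed exposure window overlaps an LED 'on' pulse.
# For exposure t_exp, PWM period T and on-time t_on, the window overlaps the
# pulse with probability min(1, (t_on + t_exp) / T).

def pulse_capture_probability(t_exp_us, pwm_hz=90.0, duty=0.10):
    period_us = 1e6 / pwm_hz
    t_on_us = duty * period_us
    return min(1.0, (t_on_us + t_exp_us) / period_us)

for t_exp_us in (50, 500, 5000, 11200):   # from a short HDR exposure to >= one PWM period
    p = pulse_capture_probability(t_exp_us)
    print(f"{t_exp_us:>6} us exposure -> P(LED pulse captured) = {p:.2f}")
```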

The OX1A10 and OX2A10 are currently in volume production.


PRNewswire: OmniVision announces the OV493, a companion chip with surround-video image-processing capabilities for automotive applications. Each OV493 can process two video streams simultaneously, and two ISP companion chips can process four camera inputs for surround-view applications.

"As advanced automotive driver-assistance features, such as 360-degree surround-view systems, become more popular, automotive manufacturers seek imaging solutions that are suitable for multiple vehicle platforms and can meet stringent industry standards," said Andy Hanvey, senior automotive marketing manager at OmniVision. "The OV493 gives Tier-1 OEMs an opportunity to reduce system cost, maintain high performance, and design distributed architectures for multiple driver-assistance systems."

Monday, May 22, 2017

Omnivision Introduces 2MP Automotive Sensor

PRNewswire: OmniVision introduced the OV2311, an automotive 2MP, 3um global shutter IR-enhanced image sensor for driver monitoring systems.

To combat distracted driving, the automotive industry is ramping up its development of driver monitoring systems and vehicle co-pilot applications, which in combination can allow the on-board computer to seize or relinquish control of the vehicle, based on the driver's state. NHTSA defines this setup as level 3 autonomy. Currently only available for luxury vehicles, these systems are expected to become a standard safety feature in the near future.

"The demand for driver monitoring systems is expected to increase significantly as more affordable technologies allow advanced semi-automated features to transition from high-end to mainstream vehicles," said Jeff Morin, automotive product marketing manager at OmniVision. "Possessing the same capability customarily found in much larger and more expensive sensors, the OV2311 aims to bring advanced driver monitoring systems to the masses by delivering high-level, cost-effective performance in a compact form factor."

Vision-based driver monitoring systems in semi-autonomous vehicles require highly sophisticated eye-tracking technology and imaging capabilities. The OV2311 achieves high NIR QE to minimize active illumination power.

The OV2311 is available for sampling, with volume production expected in Q4 2017.

Softkinetic ToF Gesture Control Powers BMW Series 5 and 7

PRNewswire: Softkinetic announces that BMW has extended the use of its ToF camera for gesture control to its Series 5 cars, in addition to last year's Series 7.

"SoftKinetic is proud to expand our technology partnership with BMW Group to include both the BMW 7 and BMW 5 series cars," said Eric Krzeslo, CMO of SoftKinetic. "The infotainment gesture control we see in the BMW cars is just the beginning of the innovation we are bringing to the automotive market. Our technology can improve driver safety through driver assistance and monitoring and 3D vision cameras that ascertain the environment in and out of the vehicle at all times paving the way towards the fully autonomous vehicle."

Sunday, May 21, 2017

Intel Euclid Vision Computer

Intel keeps investing in its vision-based solutions and capabilities. The recently announced Euclid Development Kit is a fully stand-alone computer integrating a RealSense IR stereo depth camera, a fisheye camera, an RGB camera, an Atom x7-Z8700 quad-core CPU, a microphone, GPS, WiFi, and Bluetooth into a compact all-in-one computer and depth camera the size of a candy bar. It comes with a 2000mAh battery, so it can run completely untethered.


Thanks to AM for the link!

CrucialTec In-Display Fingerprint Sensor Patent

KoreaHerald Investor reports that CrucialTec has been granted a US patent for its in-display fingerprint solution which the company calls DFS:


"The company is in talks with some global clients to commercialize the fingerprint tech in the whole area of a smartwatch screen and a certain part of a smartphone display," a CrucialTec official said. The newly patented technology is said to feature three thin-film transistors for each electrode to pick up high-resolution images, compared to one for each electrode in the existing fingerprint scanner. That configuration is said to maximize the sensing capability while maintaining a high transparency level of the components.

In spite of mentioning 3T technology, some of the company's recent patent applications keep talking about 1T pixels:


MobileIDWorld, BiometricUpdate: In-display fingerprint sensing solutions have been gathering quite a lot of attention recently. Goodix presented its solution at MWC in Barcelona this year. Synaptics and OXi Technology were reported to be developing similar technology some time ago. Apple is rumored to integrate a similar sensor in its future iPhone displays.

Saturday, May 20, 2017

Great Minds Think Alike

ST patent application US20170134683 "Global-shutter image sensor" by François Guyader and François Roy proposes a light screening layer to protect the charge storage node in a BSI GS pixel, fairly similar to a TSMC proposal published two weeks ago:


General Electric patent application US20170135179 "Image sensor controlled lighting fixture" by Laszlo Balazs, Tamas Both, and Jean-marc Naud proposes the integration of an image sensor into a lightbulb: "The controller receives detection signal data from the image sensor and wide-angle lens component when a user is within a detection area associated with a view angle of the wide angle lens, and then determines the position of the user. The controller then controls the illuminance of the light source based on the position of the user." This is quite similar to the Cree proposal published a week ago:

Friday, May 19, 2017

Low-Cost 20,000 fps Film Camera from 1980s

DexterLab2013 publishes a nice educational video explaining the operation of the 20,000fps Photec IV 16mm film camera. The camera is said to represent the state of the art in low-cost high-speed imaging in the 1980s:

ON Semi Industrial Imaging Presentation

ON Semi publishes a video presentation on Python image sensors for industrial applications:

Thursday, May 18, 2017

Market Shares, View from China

Beijing, China-based Chlue Research publishes its "Global CMOS Image Sensor Market Report 2016" filled with a lot of interesting data from 2015. Here is a part about the market shares ("Piart" is probably meant to be Pixart):


The 2017 version of the report is still behind the paywall.

Wednesday, May 17, 2017

SK Telecom Image Sensor Noise Generator Wins First Customers

IoTNow reports that Aeris becomes one of the first customers of SK Telecom's image sensor-based quantum noise generator:

"SK Telecom’s chip operates on a process called quantum shot noise that generates mathematically proven random numbers.

Quantum shot noise is more than a buzzword; it’s a scientifically tested principle that relies on the ricochet of light waves that produce patterns that always are random and unique. SK Telecom’s security chip generates quantum shot noise with two LEDs inside the chip itself.

The LEDs generate photons that bounce around inside the chip and are detected by a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor within the chip. The shot noise is the final image detected by the CMOS sensor, and it is this shot noise that truly is random.

The new security chip is estimated to cost a few dollars, making it a very attractive option for robust, future-proof security.
"

Tuesday, May 16, 2017

Imec Eliminates Image Sensor from Eye Tracker

Imec and Holst Centre (set up by imec and TNO) announce a technology to detect eye movement in real time based on electrical sensing, aimed at virtual and augmented reality applications.

Today’s eye movement detection technology makes use of high-resolution cameras embedded in eye-tracking screens or glasses, already commercialized for numerous applications, including healthcare, research and gaming. While camera-based solutions can accurately determine where users are looking, most cameras’ frame rates are not fast enough to match the eye’s most rapid movements, such as saccades – a typical movement during reading. Using a more sophisticated camera that matches the eyes’ speed is said to significantly increase the cost of these devices and could have implications for their commercial use. Imec’s solution, based on electrical sensing, offers a much less expensive alternative, while also avoiding the image-processing delay.

Imec’s sensors were integrated into a set of glasses, with four built-in electrodes around each lens, two to pick up the eye’s vertical movement and two for horizontal movements. Parallel to that, an advanced algorithm was developed to translate the signals into a concrete position, based on the angle the eye is making with its central point of vision. Imec’s solution also offers insights on the eye’s behavior, like the speed of movement or the frequency and duration of blinks.
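
A minimal sketch of how four such electrode signals per eye could be mapped to a gaze angle follows. The linear model and the microvolts-per-degree calibration constant are illustrative assumptions, not imec's algorithm.

```python
# Electrooculography-style gaze estimate from four electrodes around a lens:
# two horizontal (left/right) and two vertical (above/below). The corneo-retinal
# potential makes the differential voltage roughly proportional to the rotation
# angle over a limited range.

def gaze_angles(v_left, v_right, v_up, v_down, uv_per_degree=15.0):
    """Return (horizontal, vertical) gaze angles in degrees.

    uv_per_degree is a per-user calibration constant (microvolts per degree),
    typically obtained by having the user fixate known targets.
    """
    horizontal = (v_right - v_left) / uv_per_degree
    vertical = (v_up - v_down) / uv_per_degree
    return horizontal, vertical

# Example: a +150 uV left-right imbalance maps to ~10 degrees to the right.
print(gaze_angles(v_left=-75.0, v_right=75.0, v_up=0.0, v_down=0.0))
```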

“Human eyes have a natural electrical potential”, stated Gabriel Squillace, researcher in the Biomedical Applications & Systems group at imec. “At imec, we are leveraging this feature to develop the next generation of eye-movement detection devices that can detect the eye’s position in real time at a five times lower cost and up to four times faster than what is currently available on the market. Imec’s ultimate goal is to develop a solution that can track the eye’s most rapid movements, such as saccades, enabling seamless real-time tracking for AR and VR applications.”

Sony Announces 1000fps Sensor Stacked on Top of Vision Processor

Sony announces the IMX382 high-speed vision sensor, which enables detection and tracking of objects at 1,000 fps. Sony begins sampling it in October 2017.

This vision sensor features a stacked configuration with a BI pixel array and signal processing circuit layer. The circuit layer is equipped with image processing circuits and a programmable column-parallel processor, delivering high-speed target detection and tracking. The new sensor uses information such as color and brightness obtained from pixels to detect objects, then extracts the object's centroid, moment and motion vector, and finally outputs the information from the vision sensor in each frame.
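
The per-frame quantities listed (centroid, moment, motion vector) correspond to standard image moments; the NumPy sketch below shows how they can be computed on a thresholded target mask. The threshold-based segmentation is an assumption for illustration, not Sony's on-chip detection algorithm.

```python
import numpy as np

def target_features(frame, prev_centroid=None, threshold=128):
    """Centroid, 0th/2nd moments and a frame-to-frame motion vector for the
    bright target in `frame` -- a toy stand-in for the on-chip processing."""
    mask = (frame > threshold).astype(np.float64)
    m00 = mask.sum()                          # 0th moment: target area
    if m00 == 0:
        return None                           # no target detected this frame
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    cx, cy = (mask * xs).sum() / m00, (mask * ys).sum() / m00   # centroid
    mu20 = (mask * (xs - cx) ** 2).sum()      # central 2nd moments (spread)
    mu02 = (mask * (ys - cy) ** 2).sum()
    motion = None if prev_centroid is None else (cx - prev_centroid[0],
                                                 cy - prev_centroid[1])
    return (cx, cy), (m00, mu20, mu02), motion
```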


A Sony YouTube video demos the new 1.27MP vision sensor's capabilities:

Monday, May 15, 2017

Pixart Sales Rise

Digitimes reports that Pixart's net profit rose 22.4% QoQ and 230% YoY in Q1 2017. The company expects its revenues to grow 10-15% sequentially in Q2 on increased sales for gaming notebooks and laser mice. The company's gross margin is expected to range between 53% and 54% in Q2, compared to 52.8% in Q1. Pixart's product mix is slowly shifting away from the optical mouse business:

Analysis of RGB + Mono Dual Camera for Low Light Photography

OSA Optics Express paper "Enhancement of low light level images using color-plus-mono dual camera" by Yong Ju Jung, Gachon University, Seongnam, Korea, notes that "a color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images."
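
The paper's fusion algorithm is considerably more involved, but the basic premise can be illustrated with a naive fusion that takes the luminance from the low-noise mono sensor and the chrominance from the color sensor. This sketch assumes the two images are already perfectly registered, which is precisely the hard part (and the source of the artifacts) the paper deals with.

```python
import numpy as np

def naive_color_plus_mono_fusion(color_rgb, mono):
    """Replace the luminance of the (noisy) color image with the cleaner
    mono image, keeping the color image's chrominance. Toy sketch only:
    real fusion must handle parallax, occlusion and registration errors."""
    r, g, b = (color_rgb[..., i].astype(np.float64) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b            # original luminance (BT.601)
    cb, cr = 128 + (b - y) * 0.564, 128 + (r - y) * 0.713
    y2 = mono.astype(np.float64)                     # low-noise luminance
    r2 = y2 + 1.403 * (cr - 128)
    b2 = y2 + 1.773 * (cb - 128)
    g2 = (y2 - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0, 255).astype(np.uint8)
```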

Espros ToF Sensors Lineup

Espros Photonics presents its ToF image sensors lineup ranging from 8 x 8 pixel to QVGA resolution:

Sunday, May 14, 2017

Pseudorandom Pixels and Jaggies

IEICE Electronics Express, Vol.14, No.9 publishes the paper "CMOS image sensor with pseudorandom pixel placement for jaggy elimination" by Junichi Akita, Kanazawa University, Japan. This rather old idea has now been implemented all the way to silicon, and its advantages have been demonstrated:
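
The effect is easy to reproduce numerically: sampling a slightly tilted edge on a regular grid yields a periodic staircase pattern (the "jaggy"), while pseudorandomly jittering the sampling positions breaks the staircase into unstructured noise. The toy simulation below only illustrates the principle and is not the pixel layout from the paper.

```python
import numpy as np

def sample_tilted_edge(jitter_amplitude, n=64, slope=0.05, seed=1):
    """Binary image of an almost-horizontal edge, sampled either on a regular
    grid (jitter_amplitude=0) or with pseudorandom pixel-centre offsets."""
    rng = np.random.default_rng(seed)
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="xy")
    dx = rng.uniform(-jitter_amplitude, jitter_amplitude, (n, n))
    dy = rng.uniform(-jitter_amplitude, jitter_amplitude, (n, n))
    # A pixel is "above" the edge if its (possibly jittered) centre lies above
    # the line y = slope * x + n/2.
    return ((ys + dy) > slope * (xs + dx) + n / 2).astype(np.uint8)

regular = sample_tilted_edge(0.0)    # shows a periodic staircase along the edge
jittered = sample_tilted_edge(0.4)   # staircase broken up into irregular steps
```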

Image Sensor in Every Lightbulb

Cree, one of the world's largest illumination LED manufacturers, files a patent application US20170127492 "Lighting fixture with image sensor module" by Robert D. Underwood and John Roberts, proposing the integration of an image sensor into an LED lightbulb:

"the derived image data is used to determine an ambient light level in an area surrounding the lighting fixture. In particular, a mean light intensity for a number of zones (i.e., a zoned mean light intensity) within a frame (or a number of frames) of the image data may be obtained from the image sensor module and used to determine the ambient light level. In one embodiment, zones within the frame of image data for which the mean light intensity is a predetermined threshold above the mean light intensity for one or more other zones may be ignored (not included in the average) to increase the accuracy of the determined ambient light level.

In one embodiment, the derived image data is used to determine an occupancy event within the area surrounding the lighting fixture. In particular, a mean light intensity for a number of zones within the image data may be obtained from the image sensor module and used to determine whether an occupancy event has occurred.
"

Saturday, May 13, 2017

Qualcomm Introduces its Iris Authentication Solution

Snapdragon 835 features Iris Authentication software solution:



Qualcomm explains:

"Due to the nature of information that needs to be captured for iris authentication (the iris has 225 points of comparison), an infrared (IR) LED illuminates the iris while the IR camera captures the iris image. What’s displayed, however, is a color preview captured by a front-facing camera. It’s a far less spooky-looking image, allowing for a better user experience.

Our iris authentication solution uses the Snapdragon 835’s camera security framework. Camera security is engineered to help provide hardware-based authentication and prevent the iris image from being captured by malware. This means the malware cannot fake authentication and replay the captured image into the Trusted Execution Environment. Both the IR image capture and RGB preview take advantage of camera security capabilities; each time the user scans her iris to open the phone or pay for something, the path is then hardened, so it becomes increasingly difficult for the image to be captured and used for nefarious reasons.

The iris authentication solution takes advantage of the Snapdragon 835’s processing power, which is engineered to deliver an unlock speed of less than 100ms. The solution makes quick work of the verification even if you’re wearing glasses, sunglasses, or even contact lenses. That’s because the iris authentication solution is capable of recognizing a variety of eyewear and extracting only the necessary information while ignoring the rest. And it’s designed to work well in indoor and outdoor environments so you can unlock your phone just as easily in bright and dark lighting conditions.

All iris authentication systems are vulnerable to attack by spoofing. For example, hackers might present fake iris photographs or videos of an authorized user and attempt to gain access to a device. Qualcomm Technologies’ iris authentication solution is designed to help withstand such attacks through an anti-spoofing mechanism known as liveness detection. Liveness detection solutions work to ensure that the iris being presented to the device is that of an authorized (and living) user.
"

Friday, May 12, 2017

Image Sensor Papers at VLSI Symposia 2017

The VLSI Symposia, to be held on June 5-8, 2017 in Kyoto, Japan, has published its program, which features significant image sensor content:

An All Pixel PDAF CMOS Image Sensor with 0.64μm×1.28μm Photodiode Separated by Self-Aligned In-Pixel Deep Trench Isolation for High AF Performance,
S. Choi, K. Lee, J. Yun, S. Choi, S. Lee, J. Park, E. S. Shim, J. Pyo, B. Kim, M. Jung, Y. Lee, K. Son, S. Jung, T.-S. Wang, Y. Choi, D.-K. Min, J. Im, C.-R. Moon, D. Lee and D. Chang, Samsung, Korea
We present a CMOS image sensor (CIS) with phase detection auto-focus (PDAF) in all pixels. The size of photodiode (PD) is 0.64μm by 1.28μm, the smallest ever reported and two PDs compose a single pixel. Inter PD isolation was fabricated by deep trench isolation (DTI) process in order to obtain an accurate AF performance. The layout and depth of DTI was optimized in order to eliminate side effects and maximize the performance even at extremely low light condition up to 1lux. In particular the AF performance remains comparable to that of 0.70μm dual PD CIS. By using our unique technology, it seems plausible to scale further down the size of pixels in dual PD CIS without sacrificing AF performance.

A Shutter-Less Micro-Bolometer Thermal Imaging System Using Multiple Digital Correlated Double Sampling for Mobile Applications,
S. Park, T. Cho, M. Kim, H. Park and K. Lee, KAIST and Seoul National Univ. of Science and Technology, Korea
A micro-bolometer focal plane array (MBFPA)-based long wavelength Infra-red thermal imaging sensor is presented. The proposed multiple digital correlated double sampling (MD-CDS) readout method employing newly designed reference-cell greatly reduces PVT variation-induced fixed pattern noise (FPN) and as a result features much relaxed calibration process, easier TEC-less operation and Shutter-less operation. The readout IC and MBFPA was fabricated in 0.35um CMOS and amorphous silicon MEMS process respectively. The fabricated MBFPA thermal imaging sensor has NETD performance of 0.1 kelvin even though the mechanical shutter is not used.

Trantenna: Monolithic Transistor-Antenna Device for Real-Time THz Imaging System,
M. W. Ryu, R. Patel, S. H. Ahn, H. J. Jeon, M. S. Choe, E. Choi, K. J. Han and K. R. Kim, UNIST, Korea
We report a circular-shape monolithic transistor-antenna (trantenna) for high-performance plasmonic terahertz (THz) detector. By designing an asymmetric transistor on a ring-type metal-gate structure, more enhanced (45 times) channel charge asymmetry has been obtained in comparison with a bar-type asymmetric transistor of our previous work. In addition, by exploiting ring-type transistor itself as a monolithic circular patch antenna, which is designed for a 0.12-THz resonance frequency, we demonstrated the highly-enhanced responsivity (Rv) > 1 kV/W (x 5) and reduced noise-equivalent power (NEP) < 10 pW/Hz0.5 (x 1/10).

Chip-Scale Fluorescence Imager for In Vivo Microscopic Cancer Detection,
E. P. Papageorgiou, B. E. Boser and M. Anwar, Univ. of California, Berkeley and Univ. of California, San Francisco, USA
Modern cancer treatment faces the pervasive challenge of identifying microscopic cancer foci in vivo, but no imaging device exists with the ability to identify these cells intraoperatively, where they can be removed. We introduce a novel CMOS sensor that identifies foci of less than 200 cancer cells labeled with fluorescent biomarkers in 50ms. The sensor’s miniature size enables manipulation within a small, morphologically complex, tumor cavity. Recognizing that focusing optics traditionally used in fluorescence imagers present a barrier to miniaturization, we integrate stacked CMOS metal layers above each photodiode to form angle-selective gratings, rejecting background light and deblurring the image. A high-gain capacitive transimpedance amplifier based pixel with 8.2V/s per pW sensitivity and a dark current minimization circuit enables rapid detection of microscopic clusters of 100s of tumor cells with minimal error.

A 4.1Mpix 280fps Stacked CMOS Image Sensor with Array-Parallel ADC Architecture for Region Control,
T. Takahashi, Y. Kaji, Y. Tsukuda, S. Futami, K. Hanzawa, T. Yamauchi, P. W. Wong, F. Brady, P. Holden, T. Ayers, K. Mizuta, S. Ohki, K. Tatani, T. Nagano, H. Wakabayashi, and Y. Nitta, Sony, Japan and Sony, USA
A 4.1Mpix 280fps stacked CMOS image sensor with array-parallel ADC architecture is developed for region control applications. The combination of an active reset scheme and frame correlated double sampling (CDS) operation cancels Vth variation of pixel amplifier transistors and kTC noise. The sensor utilizes a floating diffusion (FD) based back-illuminated (BI) global shutter (GS) pixel with 4.2e-rms readout noise. An intelligent sensor system with face detection and high resolution region-of-interest (ROI) output is demonstrated with significantly low data bandwidth and low ADC power dissipation by utilizing a flexible area access function.

A 256 Energy Bin Spectrum X-Ray Photon-Counting Image Sensor Providing 8Mcounts/s/pixel and On-Chip Charge Sharing, Charge Induction and Pile-Up Corrections,
A. Peizerat, J.-P. Rostaing, P. Ouvrier-Buffet, S. Stanchina, P. Radisson, and E. Marché, CEA-LETI and Multix, France
To achieve better and faster material discrimination in applications like security inspection, X-Ray image sensors giving a highly resolved energy spectrum per pixel are required. In this paper, a new pixel architecture for spectral imaging is presented, exhibiting a 256 bin spectrum per pixel in a single image duration, up to two orders of magnitude higher than previous works. A prototype circuit, composed of 4x8 pixels of 756μmx800μm and hybridized to a CdTe crystal, was fabricated in a 0.13μm process. Our pixel architecture has been measured at 8 Mcounts/s/pixel while embedding on-chip charge sharing, charge induction and pile-up corrections.

A 0.61 E- Noise Global Shutter CMOS Image Sensor with Two-Stage Charge Transfer Pixels,
K. Yasutomi, M. W. Seo, M. Kamoto, N. Teranishi and S. Kawahito, Shizuoka Univ., Japan
A low-noise global shutter (GS) CMOS image sensor (CIS) with two-stage charge transfer (2-CT) structure is presented. The low-noise wide dynamic range performance of the proposed pixel has been demonstrated by using column-parallel folding integration (FI)/cyclic ADCs. The GS image sensor with 5.6μm-pitch 1200 x 900 pixels is implemented with a 0.11μm CIS technology. The noise and dynamic range are measured to be 0.61 erms and 81 dB, respectively.

224-ke Saturation Signal Global Shutter CMOS Image Sensor with In-Pixel Pinned Storage and Lateral Overflow Integration Capacitor,
Y. Sakano, S. Sakai, Y. Tashiro, Y. Kato, K. Akiyama, K. Honda, M. Sato, M. Sakakibara, T. Taura, K. Azami, T. Hirano, Y. Oike, Y. Sogo, T. Ezaki, T. Narabu, T. Hirayama, and S. Sugawa, Sony and Tohoku Univ., Japan
The required incorporation of an additional in-pixel retention node for global shutter complementary metal-oxide semiconductor (CMOS) image sensors means that achieving a large saturation signal presents a challenge. This paper reports a 3.875-μm pixel single exposure global shutter CMOS image sensor with an in-pixel pinned storage (PST) and a lateral-overflow integration capacitor (LOFIC), which extends the saturation signal to 224 ke, thereby enabling the saturation signal per unit area to reach 14.9 ke/μm2. This pixel can assure a large saturation signal by using a LOFIC for accumulation without degrading the image quality under dark and low illuminance conditions owing to the PST.

320x240 Back-Illuminated 10µm CAPD Pixels for High Speed Modulation Time-of-Flight CMOS Image Sensor,
Y. Kato, T. Sano, Y. Moriyama, S. Maeda, T. Yamazaki, A. Nose, K. Shina, Y. Yasu, W. van der Tempel, A. Ercan and Y. Ebiko, Sony, Japan and SoftKinetic, Belgium
A 320x240 back-illuminated Time-of-Flight CMOS image sensor with 10µm CAPD pixels has been developed. The backilluminated (BI) pixel structure maximizes the fill factor, allows for flexible transistor position and makes the light path independent of the metal layer. In addition, the CAPD pixel, which is optimized for high speed modulation, results in 80% modulation contrast at 100MHz modulation frequency.

An Imager Using 2-D Single-Photon Avalanche Diode Array in 0.18-μm CMOS for Automotive LIDAR Application,
H. Akita*, I. Takai, K. Azuma, T. Hata and N. Ozaki, DENSO and Toyota, Japan
A feasibility imager chip of a 32 x 4-pixel array was developed in a 0.18-μm CMOS process for a small size automotive laser imaging detection and ranging. Each pixel consists of 8 single-photon avalanche diodes as a world-first 2-D pixel array with digital output macro pixel architecture which enables laser signal sensing under sunlight noise. Distance measurement results show less than 2.1% nonlinearity and 0.11-m standard deviation up to 20-m distance with 10%-reflective target under the ambient light of 75 klux.

A 16.5 Giga Events/s 1024 × 8 SPAD Line Sensor with Per-Pixel Zoomable 50ps-6.4ns/bin Histogramming TDC,
A. T. Erdogan, R. Walker, N. Finlayson, N. Krstajić, G. O. S. Williams and R. K. Henderson, Univ. of Edinburgh, UK
A 1024 × 8 single photon avalanche diode (SPAD) based line sensor for time resolved spectroscopy is implemented in 0.13μm imaging CMOS with 23.78 μm pixel pitch at 49.31% fill factor. The line sensor can operate in single photon counting (SPC) mode (65 giga-events/s), time-correlated single photon counting (TCSPC) mode (194 million events/s) or histogramming mode (16.5 giga-events/s), increasing the count rate up to 85 times compared to TCSPC operation. This performance is enabled by a 512 channel histogramming TDC with 50ps-6.4ns/bin zoomable time resolution.

A 272.49 pJ/pixel CMOS Image Sensor with Embedded Object Detection and Bio-Inspired 2D Optic Flow Generation for Nano-Air-Vehicle Navigation,
K. Lee, S. Park, S.-Y. Park, J. Cho and E. Yoon, Univ. of Michigan, USA
We report a CMOS imager embedded with energy-efficient object detection and bio-inspired 2D optic flow generation cores for navigation of nano-air-vehicles (NAVs). The proposed vision-based navigation system employs spatial difference imaging and gradient orientation using mixed-signal circuits to achieve both energy-efficient and area-efficient implementation. The system achieved 272.49 pJ/pixel with 75% reduction in memory size for integrated operation of object detection and 2D optic flow generation.

Demo Sessions:
  • A Shutter-Less Micro-Bolometer Thermal Imaging System Using Multiple Digital Correlated Double Sampling for Mobile Applications,
    S. Park, T. Cho, M. Kim, H. Park, and K. Lee, KAIST and Seoul National Univ. of Science and Technology, Korea
  • A 4.1Mpix 280fps Stacked CMOS Image Sensor with Array-Parallel ADC Architecture for Region Control,
    T. Takahashi, Y. Kaji, Y. Tsukuda, S. Futami, K. Hanzawa, T. Yamauchi, P. W. Wong, F. Brady, P. Holden, T. Ayers, K. Mizuta, S. Ohki, K. Tatani, T. Nagano, H. Wakabayashi, and Y. Nitta, Sony, Japan and Sony, USA
  • 320x240 Back-Illuminated 10µm CAPD Pixels for High Speed Modulation Time-of-Flight CMOS Image Sensor,
    Y. Kato, T. Sano, Y. Moriyama, S. Maeda, T. Yamazaki, A. Nose, K. Shina, Y. Yasu, W. van der Tempel, A. Ercan and Y. Ebiko, Sony, Japan and SoftKinetic, Belgium

Himax Keeps Investing in 3D Sensing

Himax announces its Q1 2017 results. The company updates on its image sensing business:

"With respect to the non-driver business, particularly in the WLO and CMOS Image Sensor products of 3D scanning solutions, we believe it is one of the most significant new applications for the next generation smartphone. Himax is well recognized to be the front runner and world leader in this important technology. Our SLiMTM product line is the state of the art total solutions for 3D sensing and scanning based on structured light technology, of which we can also provide individual technologies separately to selected customers to accommodate their specific needs. We are seeing strong demand for 3D scanning products from multiple top name customers who are either collaborating with us or engaging us for advanced stage discussions. In light of the promising new business opportunities around the corner, we will continue to invest heavily in R&D and customer engineering regardless of the prevailing unfavorable business conditions. We are aware that this will hit our short-term bottom line, but we believe such investment is extremely important and will bring in very handsome return in the next few years.

Sales of CMOS image sensors will deliver double-digit growth in the second quarter...

On the CMOS image sensor business update, the Company continues to make great progress with its two machine vision sensor product lines, namely near infrared (“NIR”) sensor and Always-on-Sensor (“AoS”). The Company’s NIR sensor is a critical part in the structured light 3D scanning total solution. Similar to WLO, the Company can supply NIR sensor as an individual component for both mobile and non-mobile applications. The Company’s NIR sensors’ overall performance is far ahead of those of its peers. Himax currently can offer low noise HD and 5.5 megapixel NIR sensors with superior quantum efficiency in NIR band while operating at excellent power consumption.

Himax’s AoS solutions provide super low power computer vision to enable new applications across a very wide variety of industries. The ultra-low power, always-on vision sensor is a powerful solution capable of detecting, tracking and recognizing its environment in an extremely efficient manner using just a few milliwatts of power. In April, Himax announced a strategic investment in Emza, an Israeli software company dedicated to developing extremely efficient machine vision algorithms. The investment enables the Company to provide turn-key solutions to meet customers’ increasing appetite for ultra-low power. With Emza’s machine-vision algorithms, the Company can transform AoS sensor from a pure image capturing component to an information analytics device that can be easily integrated into smart home and security applications as well as smartphone, AR/VR, AI and IoT devices.

For the traditional human vision segments, the Company expects mass production of several earlier design wins for notebooks and increased shipments for multimedia applications such as car recorder, surveillance, drone, home appliances, and consumer electronics, among others, during the second quarter.
"

Embedded Vision Summit Presentations

Embedded Vision Alliance opens access to the presentations of the Embedded Vision Summit held on May 1-3, 2017 in Santa Clara, CA (free registration and 207MB of hard disk space for the presentations are required). A few interesting bits from various presentations:

Intel previews its 300m long-range RealSense depth camera:


By now, Intel's depth camera lineup is probably the broadest in the industry:


Fotonation shows that the camera and image processing are by far the most power-consuming parts of a smartphone:


Fotonation proposes its HW accelerators to reduce power:


Microsoft's keynote talks about the HoloLens sensor architecture:


Sony plans to leverage its future image sensors' speed and lower power: