Transcript of 2015-05-13 CV_appendix
Supplement Materials
I1. Summary
Three strategies for delivering the best image quality to customers:
Adaptive module assembly
  Focus alignment with MTF or SFR
  OIS module calibration
  Active alignment (multiple regions, e.g., 9-region)
Image pipeline
  Auto focus
  Auto exposure
  Color image pipeline, e.g. edge-aware color interpolation
  Digital zooming-in
  Color aliasing removal
  Adaptive tone enhancement
  Lens shading correction
  Bi-model image modeling
  Shading parameter estimation and correction
Quality control
  IQ testing
  Frequency component analysis
  Tilt and field curvature estimation
  Optical center estimation
  Automatic optical inspection
  Simulation of lens shading and through-focus curve
  Assessment, e.g. Nokia VUP
In summary, the techniques that have been used include:
RANSAC
Watershed
Mean-shift
PCA
Cubic spline
Newton's divided differences
Dynamic time warping
Camera calibration and homography transformation
Kalman filter
Fourier transform
Sigmoid function
Hybrid gamma curve
Encoded finder pattern
Circle and ellipse fitting by LSE
Image processing techniques
Machine and I/O control
EE (Arduino and Raspberry Pi)
SW project management: code documentation tool, version control with
remote backup, trouble-shooting manual (for the production line), and
knowledge base website (for internal use)
I2. Optical Image Stabilization (OIS) Module Calibration:
Find the optimal gain that gives the best compensation for hand shake.
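One simple way to pick such a gain can be sketched as follows, assuming a repeatable
shake profile and a hypothetical callback measure_residual_shake(gain) that drives the
shaker, captures frames, and returns the residual image displacement for that gain
setting (this is an illustrative sketch, not the actual calibration routine):

    import numpy as np

    def pick_ois_gain(gains, measure_residual_shake):
        """Sweep candidate OIS gains and keep the one with the smallest residual
        image displacement under a known, repeatable shake profile.
        measure_residual_shake(gain) is a hypothetical callback that returns
        e.g. the RMS displacement in pixels for that gain."""
        residuals = np.array([measure_residual_shake(g) for g in gains])
        best = int(np.argmin(residuals))
        return gains[best], residuals[best]

    # best_gain, rms = pick_ois_gain(np.linspace(0.8, 1.2, 21), measure_residual_shake)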
I3. 6-Axis Active Alignment (AA):
Eliminate the tilt angle between the sensor and the lens when assembling the camera
module. In the current process, the tilt angle can be reduced to 0.03° using a
15 degree-of-freedom machine (9 motors: buffer tray + dispensing + 6-axis AA;
5 I/Os: dispensing + vacuum + de-vacuum + gripper + UV; machine
states: standby + PnP + dispensing + component loading + AA).
(Figure: Loop0, Loop1, and Loop2.)
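One building block of such an alignment loop can be sketched as follows: assuming the
per-region best-focus positions (e.g. the peaks of the 9 through-focus MTF/SFR curves)
are already available, the residual tilt can be estimated by fitting a plane to them.
This is an illustrative sketch, not the machine's actual control code:

    import numpy as np

    def tilt_from_best_focus(xy_mm, best_focus_um):
        """Fit a plane z = a*x + b*y + c to per-region best-focus positions and
        return the tilt about the x and y axes in degrees. xy_mm is an (N, 2)
        array of region centers on the sensor; best_focus_um holds the
        corresponding focus peaks."""
        A = np.column_stack([xy_mm[:, 0], xy_mm[:, 1], np.ones(len(xy_mm))])
        (a, b, c), *_ = np.linalg.lstsq(A, best_focus_um, rcond=None)
        # Slopes are um per mm; 1 mm = 1000 um.
        tilt_x = np.degrees(np.arctan(b / 1000.0))   # rotation about the x axis
        tilt_y = np.degrees(np.arctan(a / 1000.0))   # rotation about the y axis
        return tilt_x, tilt_y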
I4. Color Image Pipeline – Color Interpolation:
An interpolation method optimized for edge-like content.
(Figure: results of the reference algorithm vs. the proposed (YC) algorithm.)
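As an illustration of the idea (not the actual algorithm), a minimal edge-directed
interpolation of the green channel of an RGGB Bayer mosaic picks the interpolation
direction with the smaller gradient, so edges are followed rather than averaged across:

    import numpy as np

    def green_edge_directed(bayer):
        """Interpolate the green channel of an RGGB Bayer mosaic, choosing the
        interpolation direction with the smaller gradient. Only the green plane
        is shown; red/blue interpolation follows similar logic."""
        h, w = bayer.shape
        g = bayer.astype(np.float64)
        for r in range(2, h - 2):
            for c in range(2, w - 2):
                # Green is missing at R sites (even, even) and B sites (odd, odd).
                if (r % 2) != (c % 2):
                    continue
                dh = abs(g[r, c - 1] - g[r, c + 1])
                dv = abs(g[r - 1, c] - g[r + 1, c])
                if dh < dv:       # variation is small horizontally: interpolate along it
                    g[r, c] = (g[r, c - 1] + g[r, c + 1]) / 2
                elif dv < dh:     # variation is small vertically
                    g[r, c] = (g[r - 1, c] + g[r + 1, c]) / 2
                else:             # flat region: average all four neighbors
                    g[r, c] = (g[r, c - 1] + g[r, c + 1] +
                               g[r - 1, c] + g[r + 1, c]) / 4
        return g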
I5. Color Image Pipeline – Color-aliasing Removal:
Remove the false colors along edges.
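A simple stand-in for this step (the original algorithm may differ): median-filter only
the chroma channels in YCrCb so that luma, and therefore sharpness, is preserved:

    import cv2

    def remove_color_aliasing(bgr, ksize=5):
        """Suppress false colors along edges by median-filtering only the chroma
        channels in YCrCb; the luma channel is left untouched."""
        y, cr, cb = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb))
        cr = cv2.medianBlur(cr, ksize)
        cb = cv2.medianBlur(cb, ksize)
        return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)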
I6. Color Image Pipeline – Digital Zoom (4x):
Increase the image resolution by a frequency-domain approach.
(Figure: before/after comparison of the source image and the 4x-resolution result.)
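A minimal sketch of a frequency-domain zoom, assuming an 8-bit grayscale input: zero-pad
the centered 2D spectrum, which corresponds to ideal sinc interpolation in the spatial
domain (the original method may differ in detail):

    import numpy as np

    def fft_zoom(gray, factor=4):
        """Upscale an 8-bit grayscale image by zero-padding its centered 2D
        spectrum, i.e. ideal sinc interpolation in the spatial domain."""
        h, w = gray.shape
        F = np.fft.fftshift(np.fft.fft2(gray))
        H, W = h * factor, w * factor
        Fp = np.zeros((H, W), dtype=complex)
        top, left = (H - h) // 2, (W - w) // 2
        Fp[top:top + h, left:left + w] = F
        # The factor**2 scaling keeps the mean intensity of the output unchanged.
        out = np.fft.ifft2(np.fft.ifftshift(Fp)).real * factor * factor
        return np.clip(out, 0, 255)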
I7. Color Image Pipeline – Quick Auto Exposure (AE with 2 input frames):
An AE algorithm for an uncalibrated camera module under stationary
illumination.
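A minimal sketch of the two-frame idea, assuming an approximately linear sensor
response: fit mean = a*t + b to the two frames and solve for the exposure time that
reaches a target mean level (the target of 118 DN is illustrative, not the actual spec):

    import numpy as np

    def quick_ae(frame0, t0, frame1, t1, target=118.0):
        """Estimate the exposure time that brings the mean luminance to `target`,
        given two frames captured with exposure times t0 and t1 under the same
        stationary illumination."""
        m0, m1 = float(np.mean(frame0)), float(np.mean(frame1))
        a = (m1 - m0) / (t1 - t0)        # response slope (DN per unit exposure time)
        b = m0 - a * t0                  # offset (black level, stray light, ...)
        if abs(a) < 1e-9:
            raise ValueError("frames do not differ; cannot estimate the slope")
        return (target - b) / a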
I8. Color Image Pipeline – Lens Shading Correction:
Left: the input image (std. dev. 13.17 DN); right: the corrected result (std. dev. 0.59 DN).
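For illustration only, a generic flat-field style correction (the pipeline above
estimates shading parameters from a model rather than from a smoothed flat field):
build a per-pixel gain map from a flat-field capture and multiply it into the raw image:

    import cv2
    import numpy as np

    def shading_gain_map(flat_field, ksize=51):
        """Build a per-pixel gain map from a flat-field capture: smooth the frame
        to model the shading profile, then normalize it to its center value.
        Multiplying a raw image by this map flattens the shading."""
        flat = cv2.GaussianBlur(flat_field.astype(np.float32), (ksize, ksize), 0)
        h, w = flat.shape[:2]
        return flat[h // 2, w // 2] / np.maximum(flat, 1e-6)

    # corrected = raw.astype(np.float32) * shading_gain_map(flat_field)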
I9. Color Image Pipeline – Color Correction:
In each color patch, the upper half, the lower half, and the small patch within the
lower half show the target color, the sampled color, and the corrected color,
respectively.
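The standard way to obtain such a correction is a least-squares fit of a 3x3 color
correction matrix from the sampled patch colors to the target patch colors; a minimal
sketch (constraints such as white-point preservation are omitted):

    import numpy as np

    def fit_ccm(sampled_rgb, target_rgb):
        """Least-squares fit of a 3x3 color correction matrix M such that
        sampled_rgb @ M.T approximates target_rgb. Both inputs are (N, 3)
        arrays of linear-RGB patch colors."""
        M, *_ = np.linalg.lstsq(sampled_rgb, target_rgb, rcond=None)
        return M.T

    def apply_ccm(image_rgb, M):
        """Apply the CCM to an (H, W, 3) linear-RGB image."""
        return np.clip(image_rgb @ M.T, 0.0, 1.0)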
I10. Summary of Image Pipeline:
I11. Stereo Camera Calibration and Online Depth Estimation:
Use two webcams to build a stereo camera and perform online depth estimation.
Camera calibration is used to extract the intrinsic and extrinsic parameters; the
image pairs are then rectified and converted into a 3D point cloud.
Stereo camera and the camera calibration:
Depth Space Image:
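A minimal sketch of the online stage with OpenCV, assuming the offline calibration
(cv2.stereoCalibrate) and rectification (cv2.stereoRectify / cv2.initUndistortRectifyMap)
have already produced rectified image pairs and the 4x4 reprojection matrix Q:

    import cv2
    import numpy as np

    def depth_from_pair(rect_left_gray, rect_right_gray, Q):
        """Compute a disparity map with block matching on a rectified pair and
        reproject it to a 3D point cloud with the reprojection matrix Q."""
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = matcher.compute(rect_left_gray, rect_right_gray)
        disparity = disparity.astype(np.float32) / 16.0   # StereoBM output is fixed-point
        points_3d = cv2.reprojectImageTo3D(disparity, Q)  # (H, W, 3), calibration units
        return disparity, points_3d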
I12. RGBD Camera Module Calibration:
Finding the correspondence between the RGB image and the depth map is essential
to depth-related applications such as refocusing and 3D point-cloud generation. To
estimate the correspondence, the general idea is to find the intrinsic parameters and
the relative orientation between the two sensors; the correspondence is then
obtained by projecting the points captured by the depth sensor onto the RGB
sensor.
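That projection step can be sketched as follows (K_d, K_rgb, R, t denote the depth
intrinsics, the RGB intrinsics, and the relative rotation/translation from the
calibration; the depth values and t must share the same metric unit):

    import numpy as np

    def project_depth_to_rgb(depth, K_d, K_rgb, R, t):
        """Back-project each valid depth pixel with the depth intrinsics K_d,
        transform it with the relative pose (R, t) between the sensors, and
        re-project it with the RGB intrinsics K_rgb. Returns the RGB pixel
        coordinates and the 3D points in the RGB frame."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth > 0
        z = depth[valid]
        pix = np.stack([u[valid] * z, v[valid] * z, z])   # homogeneous pixels * depth
        pts_d = np.linalg.inv(K_d) @ pix                  # 3D points in the depth frame
        pts_rgb = R @ pts_d + t.reshape(3, 1)             # 3D points in the RGB frame
        proj = K_rgb @ pts_rgb
        uv_rgb = proj[:2] / proj[2]                       # pixel coordinates on the RGB sensor
        return uv_rgb, pts_rgb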
I13. Color Transfer between Images:
The method is based on a color space transformation. The image on the left
contains the target colors; the upper-right and lower-right images are the original
image and the result, respectively.
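A minimal sketch in the spirit of this approach, matching per-channel mean and
standard deviation in the Lab color space (the original work may use a different
decorrelated space):

    import cv2
    import numpy as np

    def color_transfer(source_bgr, target_bgr):
        """Transfer the color statistics of target_bgr onto source_bgr by
        matching per-channel mean and standard deviation in Lab."""
        src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        for ch in range(3):
            s_mean, s_std = src[..., ch].mean(), src[..., ch].std() + 1e-6
            t_mean, t_std = tgt[..., ch].mean(), tgt[..., ch].std()
            src[..., ch] = (src[..., ch] - s_mean) * (t_std / s_std) + t_mean
        src = np.clip(src, 0, 255).astype(np.uint8)
        return cv2.cvtColor(src, cv2.COLOR_LAB2BGR)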
I14. Unsupervised Learning – Feature Selection:
Use sparse coding to obtain better features. Sparse coding is an iterative method
that finds the sparse feature vectors and the dictionary using matching pursuit and
K-SVD, respectively.
The Dictionary:
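A small sketch of the same idea using scikit-learn's DictionaryLearning as a stand-in
for the matching-pursuit + K-SVD loop (OMP for the sparse step, alternating dictionary
updates); patches is an (n_samples, patch_dim) array of vectorized image patches:

    from sklearn.decomposition import DictionaryLearning

    def learn_sparse_features(patches, n_atoms=64, sparsity=5):
        """Learn an overcomplete dictionary and sparse codes for image patches."""
        model = DictionaryLearning(
            n_components=n_atoms,
            transform_algorithm="omp",            # sparse coding of the final codes
            transform_n_nonzero_coefs=sparsity,
            fit_algorithm="lars",                 # sparse step used while fitting
            max_iter=20,
        )
        codes = model.fit_transform(patches)      # sparse feature vectors
        dictionary = model.components_            # (n_atoms, patch_dim) atoms
        return dictionary, codes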
I15. AOI – Adaptive Golden Image:
Find the defects on the LED cup and LED die.
Image samples and defects:
Good image samples:
Results:
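A minimal sketch of the comparison step, assuming the adaptive golden image has already
been built from the good samples and registered to the inspected image; defect
candidates are blobs in the thresholded absolute difference (threshold and area values
are placeholders):

    import cv2

    def defects_vs_golden(sample_gray, golden_gray, thresh=30, min_area=20):
        """Compare an inspected image with a pre-registered golden image:
        absolute difference, threshold, and keep connected blobs larger than
        min_area as defect candidates (returned as (x, y, w, h) boxes)."""
        diff = cv2.absdiff(sample_gray, golden_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        return [tuple(stats[i, :4]) for i in range(1, n)
                if stats[i, cv2.CC_STAT_AREA] >= min_area]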
I16. EE Project – Bluetooth Level Meter:
Use the InvenSense MPU6050 and a Kalman filter to estimate the tilt angle, and
then send the measurement to an Android phone over Bluetooth.
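A one-axis sketch of the usual MPU6050-style filter: the gyro rate drives the
prediction, the accelerometer-derived angle is the measurement, and the gyro bias is a
second state. The noise values are illustrative tuning numbers, not the project's
actual settings:

    class AngleKalman:
        """One-axis angle/bias Kalman filter for gyro + accelerometer fusion."""

        def __init__(self, q_angle=0.001, q_bias=0.003, r_angle=0.03):
            self.q_angle, self.q_bias, self.r_angle = q_angle, q_bias, r_angle
            self.angle, self.bias = 0.0, 0.0
            self.P = [[0.0, 0.0], [0.0, 0.0]]

        def update(self, accel_angle, gyro_rate, dt):
            # Predict with the bias-corrected gyro rate.
            rate = gyro_rate - self.bias
            self.angle += dt * rate
            P = self.P
            P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
            P[0][1] -= dt * P[1][1]
            P[1][0] -= dt * P[1][1]
            P[1][1] += self.q_bias * dt
            # Correct with the accelerometer-derived angle.
            S = P[0][0] + self.r_angle
            K0, K1 = P[0][0] / S, P[1][0] / S
            y = accel_angle - self.angle
            self.angle += K0 * y
            self.bias += K1 * y
            P00, P01 = P[0][0], P[0][1]
            P[0][0] -= K0 * P00
            P[0][1] -= K0 * P01
            P[1][0] -= K1 * P00
            P[1][1] -= K1 * P01
            return self.angle

    # accel_angle can be computed as atan2(acc_y, acc_z) from the raw readings.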
I17. EE Project – Self Balancing Robot:
Use a gyro, a Kalman filter, PID control, I2C, and PWM to balance the two-wheel
robot (the code is ready; the PID parameters are now being tuned).
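The control loop boils down to a textbook PID on the filtered tilt angle; a minimal
sketch with placeholder gains (the real firmware runs on the microcontroller):

    class PID:
        """Textbook PID controller: the filtered tilt angle is the process
        variable, 0 degrees is the setpoint, and the output drives the wheel
        motors through PWM."""

        def __init__(self, kp, ki, kd, out_limit=255.0):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.out_limit = out_limit
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, setpoint, measured, dt):
            err = setpoint - measured
            self.integral += err * dt
            deriv = (err - self.prev_err) / dt
            self.prev_err = err
            out = self.kp * err + self.ki * self.integral + self.kd * deriv
            return max(-self.out_limit, min(self.out_limit, out))

    # pid = PID(kp=15.0, ki=1.5, kd=0.5)          # placeholder gains
    # pwm = pid.step(setpoint=0.0, measured=filtered_angle, dt=0.01)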
I18. EE Project – Online Face Detection on Raspberry Pi:
Detect faces and overlay a glasses image on each detected face.
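A minimal sketch using OpenCV's bundled Haar cascade; the file name glasses.png and the
eye-level placement heuristic are hypothetical, not the project's actual values:

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    glasses = cv2.imread("glasses.png", cv2.IMREAD_UNCHANGED)   # assumed RGBA overlay

    def overlay_glasses(frame_bgr):
        """Detect faces and alpha-blend a resized glasses image near eye level."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.2, 5):
            g = cv2.resize(glasses, (w, h // 3))
            y0 = y + h // 4                          # rough eye-level placement
            if y0 + g.shape[0] > frame_bgr.shape[0]:
                continue
            roi = frame_bgr[y0:y0 + g.shape[0], x:x + w]
            alpha = g[..., 3:4] / 255.0
            roi[:] = (1.0 - alpha) * roi + alpha * g[..., :3]
        return frame_bgr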
I20. Master’s dissertation:
Use a camera to estimate the pose of the subject's head.
The goal of the system is to estimate the coordinate transformation T(HEAD→MEG)
between the subject's head coordinate system C_HEAD and the machine coordinate
system C_MEG. The conventional way to do this is to use positioning coils attached to
the subject's head, but these can only be used before the experiment; otherwise they
would disturb the Magnetoencephalography (MEG) measurement. The proposed system
can track the 3D coordinates of the subject's head during the experiment; this can be
used to estimate T(HEAD→MEG) and to compensate for the artifacts caused by head
movement during the experiment. (a) The MEG machine and the camera calibration
setup. (b) The pattern for T(CAM→MEG) estimation. (c) The pattern for T(HEAD→CAM)
estimation.
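The final transform is the composition of the two calibrated transforms,
T(HEAD→MEG) = T(CAM→MEG) · T(HEAD→CAM); with 4x4 homogeneous matrices this is a single
matrix product (variable names below are illustrative):

    import numpy as np

    def compose(T_ab, T_bc):
        """Chain 4x4 homogeneous transforms: a point expressed in frame C is
        mapped into frame A by T_ab @ T_bc."""
        return T_ab @ T_bc

    # T_head_meg = compose(T_cam_meg, T_head_cam)
    # p_meg = (T_head_meg @ np.append(p_head, 1.0))[:3]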
I21. Patent – I3 (Integrated, Interactive, and Immersive) Surveillance System
http://www.youtube.com/watch?v=LAcAkLDRIY0
I22. Paper list:
Journal Papers:
1. "Integration of multiple views for a 3-D indoor surveillance system," Information,
vol. 13, no. 6, pp. 2039-2057, 2010. Yi-Yuan Chen, Yung-Huang Huang, Yung-Cheng
Cheng, and Yong-Sheng Chen
2. "Camera-guided coordinate system alignment for neuromagnetic source
estimation," International Journal of Bioelectromagnetism, vol. 7, no. 2, pp. 86-89,
2005. Yung-Cheng Cheng, Yong-Sheng Chen, Jen-Chuen Hsieh, and Li-Fen Chen
International Conference Papers:
1. "A 3-D surveillance system using multiple integrated cameras," Proceedings of the
IEEE International Conference on Information and Automation, Harbin, China, Jun.
2010. Yi-Yuan Chen, Yung-Huang Huang, Yung-Cheng Cheng, and Yong-Sheng Chen
2. "Accurate planar image registration for an integrated video surveillance system,"
Proceedings of the IEEE Workshop on Computational Intelligence for Visual
Intelligence, Nashville, Tennessee, USA, Mar. 2009. Yung-Cheng Cheng, Kai-Ying Lin,
Yong-Sheng Chen, Jenn-Hwan Tarng, Chii-Yah Yuan, and Chen-Ying Kao
3. "Camera-guided coordinate system alignment for neuromagnetic source
estimation," Proceedings of the Joint Meeting of 5th International Conference on
Bioelectromagnetism and 5th International Symposium on Noninvasive Functional
Source Imaging within the Human Brain and Heart, Minneapolis, Minnesota, May
2005. Yung-Cheng Cheng, Yong-Sheng Chen, Jen-Chuen Hsieh, and Li-Fen Chen
Taiwan Conference Papers:
1. "3-D environment model construction and adaptive foreground detection for multi-
camera surveillance system," Proceedings of 2010 IPPR Conference on Computer
Vision, Graphics, and Image Processing, Kaohsiung, Taiwan, Aug. 2010. Yi-Yuan
Chen, Hung-I Pai, Yung-Huang Huang, Yung-Cheng Cheng, Yong-Sheng Chen, Jian-
Ren Chen, Shang-Chih Hung, Yueh-Hsun Hsieh, Shen-Zheng Wang, and San-Lung
Zhao
2. "I3: Integrated, interactive, and immersive surveillance system," Proceedings of
2008 IPPR Conference on Computer Vision, Graphics, and Image Processing, Yilan,
Taiwan, Aug. 2008. Yung-Cheng Cheng and Yong-Sheng Chen
3. "Real-time adaptive functional brain imaging," Proceedings of the International
Symposium on Biomedical Engineering, Taipei, Taiwan, Dec. 2006. Li-Fen Chen,
Yong-Sheng Chen, Jen-Chuen Hsieh, and Yung-Cheng Cheng
4. "Coordinate system alignment using single camera for functional brain imaging,"
Proceedings of 2005 IPPR Conference on Computer Vision, Graphics, and Image
Processing, Taipei, Taiwan, Aug. 2005. Yung-Cheng Cheng, Yong-Sheng Chen, Jen-
Chuen Hsieh, and Li-Fen Chen