New Liquid Lens Digital Camera Tech

How to disassemble a Canon Lens

If you want to reproduce the results of coded aperture or color-filtered aperture imaging, you need to disassemble a lens to insert the masks. There are some documents you should have on hand before you start.

Canon lens parts catalog:
http://www.f20c.com/stuff/canon/partslist/

If you want to disassemble a simple lens, such as the EF 50mm f/1.8 II, check out this tutorial:
http://www.ejarm.com/photo/ef5018iidis/

base separation rules for stereo capture

From John’s 3D Guide
http://www.crystalcanyons.net/pages/3dguidebook/BasicMethods.htm

HYPO AND HYPERSTEREO

From this we learn that the stereo base is one element in defining the perceived depth. Another factor is image magnification. In hypostereo the stereo base is less than the normal eye separation of 65mm (or about 2.4 inches). In hyperstereo the stereo base is more than normal eye separation. In ortho-stereoscopic viewing the base separation and the focal lengths of the lenses (43mm = eye equivalent) are set up to reproduce what the viewer would see in reality. In an “orthostereo display”, the taking and viewing lenses are usually the same. The two 35mm cameras mounted as shown at the bottom of figure 1.1 have a typical separation of about 3 inches. This is slightly hyperstereoscopic and may be just noticeable in normal viewing. With a large hyperstereo base distance a person will appear taller or more leggy than in actual fact. Sometimes hyperstereo is a good thing because it exaggerates the stereo effect! On the other hand it can render near objects oddly. For most scenes and action shots where the subject is not too close, the slightly hyper configuration at the bottom of figure 1.1 is fine.

Hyperstereo (extended stereo base) is particularly useful in shots where the nearest object is far off. Humans do not notice depth on objects more than a couple hundred feet away because the rays coming to the eyes from such a distant object are essentially parallel. However, by moving the “eyes” (i.e. the cameras) further apart, such rays now come in at differing angles and depth perception is restored. For example, if the nearest object is 100 feet away, a hyperstereo base with the cameras separated by 1 to 2 feet will render significant depth to the image. Cameras three feet apart will yield stereo pairs that show some depth even at subject distances of 1000 feet or more.

IMPORTANT: When viewing objects as illustrated in figure 1.3, if the separation between L and R increases without limit, the left and right eyes will be looking at such disparate scenes that the brain won’t be able to “fuse” the image into a rational scene. Trying to fuse an image with too much near vs. far separation produces severe strain, nausea, even migraine headaches. Thus there is a fundamental guideline of 3D photography:

THE BASE SEPARATION FOR A NORMAL LENS (i.e. 50mm on a 35mm camera) SHOULD NOT EXCEED 1/30 THE DISTANCE TO THE NEAREST OBJECT IN THE FIELD OF VIEW.

Examples:

a) Nearest object 20 feet. Maximum separation 20/30 feet ~ 8 inches.

b) Nearest object 100 feet. Maximum separation 100/30 ~ 3 feet.

c) Nearest object 20 feet but a wide angle 24mm lens is used. Since the wide angle produces an image where the angular displacements are about half what they are with the 50mm lens (i.e. it is “twice as wide”), the maximum separation is 2*20/30 ~ 1.3 feet. Alternatively for a 24mm lens the guideline is 1/15.

d) Nearest object 100 feet, but we want to use a 135mm telephoto to “bring the object closer”. In doing this, image displacements are magnified by 135/50 = 2.7. Thus the maximum separation is about 100/(2.7*30) feet, or approximately 1.2 feet ~ 15 inches.

e) You don’t always want to push the limit on the 1/30 rule. Experimentation will show the way. Telephoto lenses seem to magnify all the potential errors (parallelism in pointing for example), while wide angles generally are more forgiving and better for 3D because they don’t compress distance and do not exaggerate pointing errors.
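
The examples above all follow the same arithmetic: take 1/30 of the nearest-object distance, then scale by 50mm divided by the focal length. A minimal sketch of that calculation (the function name and the C framing are mine, not from the guide):

#include <stdio.h>

/* 1/30 rule: max stereo base = (nearest distance / 30) * (50 / focal length).
   Distances in feet, focal length in mm; a longer lens shrinks the base. */
double max_base(double nearest_ft, double focal_mm) {
    return (nearest_ft / 30.0) * (50.0 / focal_mm);
}

int main(void) {
    printf("a) %.2f ft\n", max_base(20.0, 50.0));    /* ~0.67 ft ~ 8 inches        */
    printf("b) %.2f ft\n", max_base(100.0, 50.0));   /* ~3.3 ft                    */
    printf("c) %.2f ft\n", max_base(20.0, 24.0));    /* ~1.4 ft (ex. c rounds
                                                        50/24 to 2, giving ~1.3)   */
    printf("d) %.2f ft\n", max_base(100.0, 135.0));  /* ~1.2 ft ~ 15 inches        */
    return 0;
}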

NOTES ON THE 1/30 RULE:

Some experienced 3D’ers consider the 1/30 rule to be more of a myth than a rule. In the art of photography there are few canonical rules; 1/30 is a good guideline to start with, however. Exceptions abound in reality. For example, if the near point and the far point in the scene are close in distance (say the far point is not at infinity, but only 10 feet behind a 10-foot near point), then 1/30 can sometimes be exceeded (maybe to something like 1/15). In macro work the near points and far points are usually very close (because there is little depth of field when the magnification is high). Nonetheless I prefer not to exceed 1/30 in such situations.

The ability to fuse stereo images varies widely from person to person. This ability also varies with the method of presentation (viewer magnification, slide projection beam angle or projection lens magnification, distance from the screen, etc.). If you are going to expose a wide range of folks to your 3D art, it may be best not to go wild with the separation. Two migraines in a large audience are still two too many. Experimentation is a good approach, at least initially. Bracket (i.e. try the same scene) with a few different stereo bases and test it out (writing down what you did!).

FFmpeg Change Video Speed

One possible solution is to use FFmpeg in conjunction with yuvfps:

ffmpeg -i input.dv -f yuv4mpegpipe - | yuvfps -s 50:1 -r 50:1 | ffmpeg -f yuv4mpegpipe -i - -b 28800k -y output.avi

http://ubuntuforums.org/showthread.php?t=688679

It can also be done using a pipe: decode to raw video, then re-encode while declaring a new input frame rate,

framerate2 = framerate1 / stretch_factor

Script (fish syntax), assuming stretch_factor has been set first (e.g. set stretch_factor 2 to stretch the video to twice its length):
set width (ffprobe -show_streams movie.mpeg 2>/dev/null | grep width= | cut -f2 -d=)
set height (ffprobe -show_streams movie.mpeg 2>/dev/null | grep height= | cut -f2 -d=)
set framerate1 (ffprobe -show_streams movie.mpeg 2>/dev/null | grep r_frame_rate= | cut -f2 -d=)
set framerate2 (echo "$framerate1 / $stretch_factor" | bc -l)

ffmpeg -i movie.mpeg -pix_fmt yuv420p -an -f rawvideo - | \
ffmpeg -f rawvideo -pix_fmt yuv420p -s {$width}x{$height} -r $framerate2 -i - -y movie-stretched.mpg

http://thread.gmane.org/474C122B.9090307@signal7.de

Or go through a raw intermediate file. Assume starting from a 25fps avi and doubling the speed (UNTESTED! the -s frame size must be filled in, since raw video carries no header):

ffmpeg -i 25.avi -pix_fmt yuv420p -f rawvideo raw.dat
ffmpeg -f rawvideo -pix_fmt yuv420p -s <width>x<height> -r 50 -i raw.dat -r 25 doublespeed.mpg

Optimization Library

I have been spending weeks writing a conjugate gradient program for multi-dimensional minimization, but the results didn’t seem right. I’m also reluctant to use the Matlab Optimization Toolbox, although it is very powerful. So I searched again and again, and re-found GSL. GSL is a numerical library for C/C++ programmers, and of course it includes minimization functions. OPT++ (http://csmr.ca.sandia.gov/opt++/) also seems decent.
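
For reference, here is a minimal sketch of GSL’s Fletcher–Reeves conjugate gradient minimizer on a toy quadratic (the objective and all constants are made up for illustration; the API calls follow the GSL multimin manual):

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_multimin.h>

/* Toy objective: f(x, y) = (x - 1)^2 + 10 * (y - 2)^2, minimum at (1, 2). */
double my_f(const gsl_vector *v, void *params) {
    double x = gsl_vector_get(v, 0), y = gsl_vector_get(v, 1);
    return (x - 1.0) * (x - 1.0) + 10.0 * (y - 2.0) * (y - 2.0);
}

void my_df(const gsl_vector *v, void *params, gsl_vector *df) {
    double x = gsl_vector_get(v, 0), y = gsl_vector_get(v, 1);
    gsl_vector_set(df, 0, 2.0 * (x - 1.0));
    gsl_vector_set(df, 1, 20.0 * (y - 2.0));
}

void my_fdf(const gsl_vector *v, void *params, double *f, gsl_vector *df) {
    *f = my_f(v, params);
    my_df(v, params, df);
}

int main(void) {
    gsl_multimin_function_fdf func;
    func.f = &my_f;
    func.df = &my_df;
    func.fdf = &my_fdf;
    func.n = 2;
    func.params = NULL;

    gsl_vector *x = gsl_vector_alloc(2);          /* starting point */
    gsl_vector_set(x, 0, 5.0);
    gsl_vector_set(x, 1, 7.0);

    gsl_multimin_fdfminimizer *s =
        gsl_multimin_fdfminimizer_alloc(gsl_multimin_fdfminimizer_conjugate_fr, 2);
    gsl_multimin_fdfminimizer_set(s, &func, x, 0.01, 1e-4);

    int status, iter = 0;
    do {
        status = gsl_multimin_fdfminimizer_iterate(s);
        if (status) break;                         /* stuck or error */
        status = gsl_multimin_test_gradient(s->gradient, 1e-6);
    } while (status == GSL_CONTINUE && ++iter < 100);

    printf("minimum near (%g, %g)\n",
           gsl_vector_get(s->x, 0), gsl_vector_get(s->x, 1));
    gsl_multimin_fdfminimizer_free(s);
    gsl_vector_free(x);
    return 0;
}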

stack a set of images

I used to write my own OpenCV program to stack a set of images for viewing, e.g. rectification results of multiple images. But today I wanted to give ImageMagick a try. I searched online and found many extremely useful features of ImageMagick. To stack a set of images, I can use the ‘convert’ command with the ‘+append’ (horizontal) or ‘-append’ (vertical) option. I can also use ‘montage’. Some examples follow:

montage -label Balloon balloon.gif \
  -label Medical medical.gif \
  \( present.gif -set label Present \) \
  \( shading.gif -set label Shading \) \
  -tile x1 -frame 5 -geometry '60x60+2+2>' \
  -title 'My Images' titled.jpg

or

montage -font Times-New-Roman -pointsize 24 \
  -label '(a)' F28_IMG_0001.ppm \
  -label '(b)' F28_IMG_0002.ppm \
  -label '(c)' F28_x.ppm \
  -tile x1 -geometry +10+10 titled.jpg

These commands are really cool. I don’t need Illustrator anymore for preparing my paper figures. http://www.imagemagick.org/Usage/montage/
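
For comparison, the OpenCV program this replaces boils down to ROI copies onto a shared canvas. A minimal sketch using the OpenCV 1.x C API (file names are hypothetical):

#include <cv.h>
#include <highgui.h>

int main(void) {
    /* Two same-height images to place side by side (hypothetical names). */
    IplImage *a = cvLoadImage("rect_left.png", 1);   /* 1 = force color */
    IplImage *b = cvLoadImage("rect_right.png", 1);
    if (!a || !b || a->height != b->height) return 1;

    /* Canvas wide enough for both; copy each image into its own ROI. */
    IplImage *canvas = cvCreateImage(cvSize(a->width + b->width, a->height),
                                     a->depth, a->nChannels);
    cvSetImageROI(canvas, cvRect(0, 0, a->width, a->height));
    cvCopy(a, canvas, NULL);
    cvSetImageROI(canvas, cvRect(a->width, 0, b->width, b->height));
    cvCopy(b, canvas, NULL);
    cvResetImageROI(canvas);

    cvSaveImage("stacked.png", canvas);
    cvReleaseImage(&canvas); cvReleaseImage(&a); cvReleaseImage(&b);
    return 0;
}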

T Slot Aluminum

I have always tried to find good materials to build the camera array at low cost. However, the only thing I could find was slotted metal at Lowe’s and Home Depot. These metal bars are very heavy, and they don’t make it easy to align the camera array either. The slide bars on the market (http://www.stereoscopy.com/jasper/slide-bars.html) are way too expensive.

Today I found T-slot aluminum: http://www.faztek.net/downloads.html. This is very good for building stuff. With a linear bearing I can build a slider on the rail very quickly, and I can also add a fastener to the side of the roller to fix the position of the camera.

Sensor Crop Factor

From http://en.wikipedia.org/wiki/Image_sensor_format#Table_of_sensor_sizes

Table of sensor sizes

Since inch-based sensor formats are not standardized, exact dimensions may vary, but those listed are typical.[3]

Type           Diagonal (mm)   Width (mm)   Height (mm)   Area (mm²)   Crop factor[4]
1/4″           4.50            3.60         2.70          9.72         9.62
1/3.6″         5.00            4.00         3.00          12.0         8.65
1/3.2″         5.68            4.54         3.42          15.5         7.61
1/3″           6.00            4.80         3.60          17.3         7.21
1/2.7″         6.72            5.37         4.04          21.7         6.44
1/2.5″         7.18            5.76         4.29          24.7         6.02
1/2″           8.00            6.40         4.80          30.7         5.41
1/1.8″         8.93            7.18         5.32          38.2         4.84
1/1.7″         9.50            7.60         5.70          43.3         4.55
2/3″           11.0            8.80         6.60          58.1         3.93
1″             16.0            12.8         9.6           123          2.70
4/3″           21.6            17.3         13.0          225          2.00
Canon APS-C    26.7            22.2         14.8          329          1.62
Nikon DX       28.4            23.6–23.7    15.5–15.8     366–374      1.52
Canon APS-H    34.5            28.7         19.1          548          1.26
35mm           43.3            36           24            864          1.0
Leica S2       54              45           30            1350         0.8
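
The crop factor column is just the 35mm full-frame diagonal (43.3mm) divided by the sensor’s diagonal. A quick check (sensor dimensions taken from the table above):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Canon APS-C from the table: 22.2 x 14.8 mm */
    double w = 22.2, h = 14.8;
    double diag = sqrt(w * w + h * h);   /* sensor diagonal in mm */
    double crop = 43.3 / diag;           /* 43.3 mm = 35mm full-frame diagonal */
    printf("diagonal = %.1f mm, crop factor = %.2f\n", diag, crop);  /* 26.7, 1.62 */
    return 0;
}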

DIGITAL CAMERA SENSOR SIZES

http://www.cambridgeincolour.com/tutorials/digital-camera-sensor-size.htm

New release of OpenCV

I just found out that the new release (1.1pre1) of OpenCV has already integrated many state-of-the-art algorithms, such as the graph-cut stereo matching algorithm [kolmogorov03], SURF (a new feature detection method reported to be superior to SIFT), stereo image rectification, etc. Check out the documentation to find more.
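
If I read the new API correctly, the graph-cut stereo matcher is exposed through cvCreateStereoGCState / cvFindStereoCorrespondenceGC. A minimal sketch (untested against 1.1pre1; file names and parameter values are mine):

#include <cv.h>
#include <cvaux.h>
#include <highgui.h>

int main(void) {
    /* Rectified grayscale stereo pair (hypothetical file names). */
    IplImage *left  = cvLoadImage("left.png", 0);   /* 0 = grayscale */
    IplImage *right = cvLoadImage("right.png", 0);
    if (!left || !right) return 1;

    CvMat *disp_left  = cvCreateMat(left->height, left->width, CV_16SC1);
    CvMat *disp_right = cvCreateMat(left->height, left->width, CV_16SC1);

    /* 16 disparity levels, 2 graph-cut iterations (values picked arbitrarily). */
    CvStereoGCState *state = cvCreateStereoGCState(16, 2);
    cvFindStereoCorrespondenceGC(left, right, disp_left, disp_right, state, 0);
    cvReleaseStereoGCState(&state);

    /* Left disparities come out negative; scale for an 8-bit visualization. */
    IplImage *vis = cvCreateImage(cvGetSize(left), IPL_DEPTH_8U, 1);
    cvConvertScale(disp_left, vis, -16, 0);
    cvSaveImage("disparity.png", vis);
    return 0;
}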

PTZ camera calibration

For a PTZ camera, the principal point is very unstable; it may vary by more than 200 pixels at 640×480 resolution(???). So one cannot assume the principal point stays the same when the zoom changes.

We also probably need to model the camera focal length as a function of the zoom setting, and assume the skewness and pixel aspect ratio are known before doing the calibration. Several published papers deal with this.
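
One simple way to model focal length vs. zoom is to calibrate at a handful of zoom steps and fit a low-order polynomial through them, e.g. with GSL’s linear least squares (the measurements below are made-up placeholders, not real calibration data):

#include <stdio.h>
#include <gsl/gsl_multifit.h>

int main(void) {
    /* Hypothetical (zoom step, focal length in pixels) calibration samples. */
    double zoom[]  = { 0.0, 25.0, 50.0, 75.0, 100.0 };
    double focal[] = { 800.0, 1400.0, 2300.0, 3500.0, 5100.0 };
    const size_t n = 5, p = 3;               /* fit f(z) = c0 + c1*z + c2*z^2 */

    gsl_matrix *X = gsl_matrix_alloc(n, p);
    gsl_vector *y = gsl_vector_alloc(n);
    gsl_vector *c = gsl_vector_alloc(p);
    gsl_matrix *cov = gsl_matrix_alloc(p, p);
    double chisq;

    for (size_t i = 0; i < n; i++) {         /* Vandermonde design matrix */
        gsl_matrix_set(X, i, 0, 1.0);
        gsl_matrix_set(X, i, 1, zoom[i]);
        gsl_matrix_set(X, i, 2, zoom[i] * zoom[i]);
        gsl_vector_set(y, i, focal[i]);
    }

    gsl_multifit_linear_workspace *w = gsl_multifit_linear_alloc(n, p);
    gsl_multifit_linear(X, y, c, cov, &chisq, w);
    printf("f(z) ~ %g + %g z + %g z^2\n",
           gsl_vector_get(c, 0), gsl_vector_get(c, 1), gsl_vector_get(c, 2));

    gsl_multifit_linear_free(w);
    gsl_matrix_free(X); gsl_vector_free(y); gsl_vector_free(c); gsl_matrix_free(cov);
    return 0;
}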