3D Object Scanning
View Dissertation
View Java Applet

  Intro
I'd been interested in getting 3D objects into my PC for quite a while, and had sort of given up after quite a few failed attempts at detection from stereo images. However, in summer 2003 I was working as an intern at Cambridge University on 2nd-year hardware practicals. Some guys from MIT (Jim Paris and Mariano Alvira) came over and showed us a simple 2D tomography machine that had been made there for use as a hardware practical. Some pics of it are below.

We spent a few days fiddling with it, and between us got it running from an Altera EPXA1 board, displaying the 16x16-pixel output. It worked pretty smoothly, even if the image quality wasn't great. The idea was quite simple: there were 16 LEDs and 16 detectors opposite them. You would light up each LED in turn and see whether light could get past the object to the opposite sensor. By rotating the object to be scanned in roughly 15-degree increments and repeating the scan many times, an image could be built up using back-projection - effectively inverting the Radon transform. The back-projection just extrudes the 16 readings into columns on a 16x16 array, rotates them by the right amount, and adds them to an accumulator. After all the steps, you get a top-down image.
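As a rough illustration of that back-projection step, here is a minimal Java sketch. It is not the code from the practical - the array names, sizes and angle step are all assumed for the example:

// Minimal sketch of accumulating 16-sensor readings into a 16x16 image
// by back-projection. All names and sizes here are assumptions.
public class BackProjection {
    static final int SIZE = 16;

    // Add one set of sensor readings into the image at a given rotation.
    static void accumulate(double[][] image, double[] readings, double angleRad) {
        double c = (SIZE - 1) / 2.0;
        for (int y = 0; y < SIZE; y++) {
            for (int x = 0; x < SIZE; x++) {
                // Rotate the pixel position about the centre; the rotated x
                // coordinate picks which sensor column covers this pixel.
                double rx = (x - c) * Math.cos(angleRad) - (y - c) * Math.sin(angleRad) + c;
                int column = (int) Math.round(rx);
                if (column >= 0 && column < SIZE)
                    image[y][x] += readings[column];  // extrude reading into a column
            }
        }
    }

    public static void main(String[] args) {
        double[][] image = new double[SIZE][SIZE];
        int steps = 24;                                  // e.g. 15-degree increments
        double[][] readings = new double[steps][SIZE];   // would come from the scanner
        for (int r = 0; r < steps; r++)
            accumulate(image, readings[r], Math.toRadians(r * 360.0 / steps));
        // 'image' now holds the accumulated top-down view.
    }
}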

The 3D version of this turned into the final-year dissertation for my Cambridge University Computer Science degree. You can view it HERE.
  Idea
The plan was pretty simple: instead of using an array of LEDs and sensors, use a digital camera as the sensor and take pictures of the object, with the background made easy to differentiate from the object itself (by using a different colour). The camera takes pictures of the object at different rotations, and a modified version of the back-projection idea builds up a 3D array of values.

The transform used is remarkably simple. I made a function that takes x, y and z coordinates in a cube, plus a rotation, and outputs projected x and y values. This is exactly the transform used to display 3D graphics, so you can find it anywhere: it just rotates around the y axis and then adds perspective from z. You could also use a transform matrix, but it would be a bit slower. So... for every point in your 3D array, and for every rotation, put it through this transform, then use the resulting coordinates to look up a pixel in the image for that rotation. If the luminance of this pixel is less than what you currently have in the array, do nothing; otherwise update the array to the new value.
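For the curious, a Java sketch of that pass might look like the following. The cube size, image size, perspective constant and luminance lookup are all assumptions for illustration, not the dissertation's actual code:

// Sketch of the voxel pass: project every voxel into every camera image and
// keep the largest luminance seen. Voxels inside the object stay dark in every
// view; voxels outside it hit the bright background in at least one view.
public class VoxelCarve {
    static final int N = 64;               // voxel cube is N x N x N (assumed)
    static final int W = 320, H = 240;     // camera image size (assumed)
    static final double FOCAL = 2.0;       // perspective strength (assumed)

    // Project a voxel (x, y, z), each in -1..1, at a rotation about the y axis,
    // to pixel coordinates - the same transform used to display 3D graphics.
    static int[] project(double x, double y, double z, double angle) {
        double rx = x * Math.cos(angle) + z * Math.sin(angle);
        double rz = -x * Math.sin(angle) + z * Math.cos(angle);
        double persp = FOCAL / (FOCAL + rz);              // perspective from z
        int px = (int) (W / 2 + rx * persp * W / 2);
        int py = (int) (H / 2 - y * persp * H / 2);
        return new int[] { px, py };
    }

    static void carve(double[][][] voxels, double[][][] images, double[] angles) {
        for (int r = 0; r < angles.length; r++)
            for (int zi = 0; zi < N; zi++)
                for (int yi = 0; yi < N; yi++)
                    for (int xi = 0; xi < N; xi++) {
                        double x = 2.0 * xi / (N - 1) - 1;
                        double y = 2.0 * yi / (N - 1) - 1;
                        double z = 2.0 * zi / (N - 1) - 1;
                        int[] p = project(x, y, z, angles[r]);
                        if (p[0] < 0 || p[0] >= W || p[1] < 0 || p[1] >= H) continue;
                        double lum = images[r][p[1]][p[0]];   // luminance 0..1
                        if (lum > voxels[zi][yi][xi])
                            voxels[zi][yi][xi] = lum;         // keep the maximum
                    }
    }
}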

What you end up with after this is an array where each point is large if it is outside the object and small if it is inside. The Marching Cubes algorithm (the same one used for metaballs - search Google for 'polygonization of a scalar field') can then be used on this array to draw polygons at a certain threshold level. You need to move this level up and down a bit to get the skin drawn in the right place. The shape you get from this is often a bit jagged, so I perform a 3-dimensional Gaussian blur on the array.
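One straightforward way to do a 3D Gaussian blur is three separable 1D passes, one along each axis. Here is a Java sketch of that idea - the kernel radius and sigma are assumed values, not necessarily what the dissertation used:

// 3D Gaussian blur as three separable 1D passes over the voxel array.
public class Blur3D {
    static double[] gaussianKernel(int radius, double sigma) {
        double[] k = new double[2 * radius + 1];
        double sum = 0;
        for (int i = -radius; i <= radius; i++) {
            k[i + radius] = Math.exp(-(i * i) / (2 * sigma * sigma));
            sum += k[i + radius];
        }
        for (int i = 0; i < k.length; i++) k[i] /= sum;   // normalise
        return k;
    }

    // Blur along one axis (0 = x, 1 = y, 2 = z); called three times in total.
    static double[][][] blurAxis(double[][][] v, double[] kernel, int axis) {
        int n = v.length, radius = kernel.length / 2;
        double[][][] out = new double[n][n][n];
        for (int z = 0; z < n; z++)
            for (int y = 0; y < n; y++)
                for (int x = 0; x < n; x++) {
                    double acc = 0;
                    for (int i = -radius; i <= radius; i++) {
                        int xi = x + (axis == 0 ? i : 0);
                        int yi = y + (axis == 1 ? i : 0);
                        int zi = z + (axis == 2 ? i : 0);
                        if (xi < 0 || xi >= n || yi < 0 || yi >= n || zi < 0 || zi >= n)
                            continue;                     // treat outside as zero
                        acc += v[zi][yi][xi] * kernel[i + radius];
                    }
                    out[z][y][x] = acc;
                }
        return out;
    }

    static double[][][] blur(double[][][] voxels, int radius, double sigma) {
        double[] k = gaussianKernel(radius, sigma);
        for (int axis = 0; axis < 3; axis++)
            voxels = blurAxis(voxels, k, axis);
        return voxels;
    }
}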

After all this, the polygon count is reduced a bit, and textures are added to each polygon using the original images from the camera.
  The Setup
There was a small turntable mounted on a stepper motor from a 5.25" floppy disc drive (200 steps/rev). I used a PIC-controlled stepper driver to rotate it a few steps at a time. I used both an Olympus 2MP digital camera and a webcam - the webcam was easier to set up, so it got used for most things.
  Images

[Thumbnails: parrot_2, parrot, pilchard_exported, scanned_toomuch, scanned_ok, parrot_3, spindude, controller, spindude_2d, scanned_toolittle, camera, parrot_1, tomography_2d, pilchard_hires, stepper, ultra_high, scanned_hires, pilchard, sample, pilchard_lores, pilchard_3d, platform]

Created by and Copyright (c) Gordon Williams 2003