LulzBot 3D Scanner

A forum dedicated to the development of the LulzBot 3D Scanner

LulzBot 3D Scanner

Postby aeva » Wed Jan 15, 2014 10:31 am

Hi Everyone!

For the last year or so, I've been researching what it would take to produce, for 3D scanning, something comparable to what we already have in the RepRap project. On the hardware side, there are a lot of great one-off projects to learn from, though they aren't designed with easy repeatability in mind. The corresponding state of scanning software is not so good - people are forced to use proprietary software due to the lack of quality free software or open source alternatives.

Aleph Objects, Inc. has recently committed to developing a libre [hard/soft]ware 3D scanner, a project I've been put in charge of. What follows below are my notes on the project, primarily on the software that will be developed for it.

I'd like to lay down some constraints for this project:

    1) The hardware must be libre hardware (the acceptable allowance here being standardized components that are easily sourced or replaced, e.g. CMOS cameras).

    2) The software must be free software.

    3) The product needs to cost less than $1000.

And, because LulzBot has a reputation to live up to for producing totally awesome stuff that is actually usable by normal people, I'm going to add:

    4) The software's output must be a printable STL file in the vast majority of cases. E.g. generating a point cloud and telling the user to get cracking with MeshLab is not ok.

With those four constraints, there is literally nothing out there in terms of existing projects that we can just manufacture and sell.


The Hardware
Early in the development process, the hardware is going to be simulated (likely using Blender), so it is less important to describe now. To be brief, it is going to be a modification of the laser-line scanner. What is different about this design is that the scan area is contained in an enclosure.

The gist of the hardware is this:
    - a box
    - a laser + clear dowel
    - a turn table
    - two cameras along the same radius from the turntable - one looking down slightly from above, one looking up slightly from below.
    - a microcontroller, maybe a BeagleBone Black (BBB) or an Arduino
Nothing hard to build here.


The Software
For about two or three years now I've been working on a voxel-based slicing engine. The algorithm was originally conceived as a means of converting solid-appearing (yet totally non-manifold) 3D meshes into manifold ones. While not originally intended for it, the method adapts well to scanning, because a sufficiently dense point cloud is itself an example of a "solid appearing, non-manifold object".

The algorithm for scanning assumes there is a model on a turntable, with one or two cameras and a laser line in fixed locations with known orientations.

Step 0:
    Calibration is done with the scan chamber empty. For each camera, take a picture and save for future reference. This will be referred to as the [bg reference].

Step 1:
    An object is in the scan chamber. A solid voxel model representing the scannable area is instanced. This will be referred to as the [scan positive].

Step 2:
    For each step on the turn table, for each camera:

    A: Take a picture, with the laser line off but the object lit - this will be referred to as the [scan sample].

    B: The [bg reference] and the [scan sample] are used to make the [contour mask]. In a perfect world, this would be achieved by subtracting the [bg reference] from the [scan sample] and thresholding the result to produce a black-and-white bitmask, the [contour mask]. In reality, OpenCV might be useful here. Remove noise from the [contour mask] if necessary.

    C: For each black pixel in the [contour mask]: assuming the [contour mask] is the back of the camera's viewing frustum, cast a ray from the camera location through the pixel in the mask, and delete all voxels in the [scan positive] that intersect with the ray. This captures all convex details of the object. Scan resolution is directly determined by the camera's resolution. (See the first sketch below.)

    D: Take a picture, with the laser line on but other lighting off - this will be referred to as the [line sample].

    E: For every white pixel in the [contour mask] whose corresponding pixel in the [line sample] is close to the expected color of the laser line: use parallax to calculate the pixel's 3D coordinates relative to the camera and laser line. Project a line from the camera through the pixel's calculated 3D coordinates to the first voxel along that vector past the pixel, and draw voxels along the line segment between the collision and the pixel. (It may be necessary to generate a frustum or cone instead of a line.) These new voxels are added to the [scan positive]. This captures all concave details of the object, at a somewhat lower resolution than the convex details. (See the second sketch below.)
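
To make step C concrete, below is a minimal sketch of the voxel carving in Python. The grid layout, coordinate conventions, and the carve_ray helper are placeholders of mine rather than a settled design - a real implementation would unproject each mask pixel through the camera's calibrated frustum to get the ray direction.

Code:
import numpy as np

def carve_ray(scan_positive, origin, direction, step=0.5):
    """Delete every voxel a ray passes through.

    scan_positive is a 3D boolean array (True = solid); origin and
    direction are expressed in voxel-grid coordinates.  The fixed step
    size is a simplification for illustration.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    max_t = np.linalg.norm(scan_positive.shape)  # enough to cross the grid
    t = 0.0
    while t < max_t:
        x, y, z = (origin + t * direction).astype(int)
        if (0 <= x < scan_positive.shape[0]
                and 0 <= y < scan_positive.shape[1]
                and 0 <= z < scan_positive.shape[2]):
            scan_positive[x, y, z] = False  # carve this voxel away
        t += step

# Usage: for each black pixel in the [contour mask], unproject the pixel
# to get a world-space ray, then:
#     carve_ray(scan_positive, camera_position, ray_direction)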
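
And for step E, the parallax calculation can be treated as intersecting the camera's ray through a lit pixel with the plane swept by the laser line, which is fixed and known from the scanner geometry. Another hypothetical sketch, with all vectors in world-space coordinates:

Code:
import numpy as np

def laser_point_3d(cam_origin, ray_dir, plane_point, plane_normal):
    """Intersect the camera ray through a lit pixel with the laser plane.

    The laser line illuminates a fixed, known plane; a lit pixel's 3D
    position is where the camera's ray through that pixel crosses it.
    Returns None when the ray is parallel to the plane.
    """
    cam_origin = np.asarray(cam_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, np.asarray(plane_point) - cam_origin) / denom
    return cam_origin + t * ray_dir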

Step 3:
    At this point, the [scan positive] is a voxel model that closely resembles the topology of the object being scanned. However, interior voxels need to be removed. Thinking in terms of a cubic grid, the outermost shell of the grid needs to be completely empty. This can be done either by deleting the outermost grid "shell" from the [scan positive] (not to be confused with the outermost layer of the actual voxel data), or by doing the 3D equivalent of increasing the "canvas size" by 2 on all axes in GIMP and centering the image.
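
Assuming the voxel data ends up in a numpy boolean array (my assumption, not a settled design decision), the "canvas size" variant of this step is basically a one-liner:

Code:
import numpy as np

scan_positive = np.ones((64, 64, 64), dtype=bool)  # stand-in voxel grid

# Grow the grid by one voxel on every face, filled with empty space, so
# the outermost shell is guaranteed empty before the fill in step 4.
scan_positive = np.pad(scan_positive, 1, constant_values=False)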

Step 4:
    Define a new, empty voxel model - called the [scan negative] - and do a flood fill on the [scan positive] starting from one of the corners to determine the volume of space around the scanned object. This data is added to the [scan negative].
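
One possible implementation of this fill (not necessarily what will ship) uses scipy's connected-component labelling: the region of empty space connected to a corner voxel is exactly the volume around the object.

Code:
import numpy as np
from scipy import ndimage

# Stand-in for the padded boolean grid produced in step 3.
scan_positive = np.pad(np.ones((64, 64, 64), dtype=bool), 1,
                       constant_values=False)

# Label the connected regions of empty space; the region containing a
# corner voxel is the space around the object, i.e. the [scan negative].
labels, _ = ndimage.label(~scan_positive)
scan_negative = labels == labels[0, 0, 0]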

Step 5:
    Delete the [scan positive] (or perhaps keep it as a reference for color values). The inversion of the [scan negative] is the [scan result]. From this point, the exterior voxels can be easily identified (by their adjacent neighbors or lack thereof), and a high-poly STL can be generated via the marching cubes algorithm.
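
Continuing the numpy sketch, scikit-image's marching cubes implementation is one option for the mesh extraction step:

Code:
import numpy as np
from skimage import measure

# Stand-in for the [scan negative] from step 4: an empty shell
# surrounding a solid cube.
scan_negative = np.pad(np.zeros((64, 64, 64), dtype=bool), 1,
                       constant_values=True)

scan_result = ~scan_negative  # the inversion is the [scan result]

# Extract the 0.5 level surface of the 0/1 voxel field; it sits between
# solid and empty voxels and yields a high-poly triangle mesh.
verts, faces, _, _ = measure.marching_cubes(
    scan_result.astype(np.float32), level=0.5)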

Step 6:
    Save the result as an STL file.

So, that's the basic algorithm for scanning. The "casting" that happens in steps 3 and 4 might not be necessary - it mostly depends on how noisy the laser-line-generated voxel data is.

I mentioned before that the hardware of the scanner is going to be simulated at first, in Blender of all things. What is meant by that is that the basic geometry of the scanner (as seen from the cameras' perspectives) will be built out in Blender. The object to be scanned will be parented to the turntable, and Python scripting will be used to turn lights on and off, rotate the object, and capture data from the cameras. Noisy rendering settings may also be used to make things interesting.
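
As a rough illustration, the control script might look something like the following (the object names and step count here are made up - whatever the simulated scanner scene ends up using):

Code:
import math
import bpy

scene = bpy.context.scene
turntable = bpy.data.objects["turntable"]
laser = bpy.data.objects["laser_line"]
cameras = [bpy.data.objects["camera_above"], bpy.data.objects["camera_below"]]

STEPS = 200  # turntable positions per revolution, as an example

for step in range(STEPS):
    # The scanned object is parented to the turntable, so rotating the
    # turntable rotates the object with it.
    turntable.rotation_euler.z = step * 2.0 * math.pi / STEPS
    for cam_index, cam in enumerate(cameras):
        scene.camera = cam  # pick which camera to pull data from
        for laser_on in (False, True):
            laser.hide_render = not laser_on  # toggle the laser line
            scene.render.filepath = "//scan_%03d_cam%d_%s.png" % (
                step, cam_index, "line" if laser_on else "lit")
            bpy.ops.render.render(write_still=True)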

Simulating it this way will allow me to tweak the design of the scanner (e.g. what backdrop works best for a variety of objects). It also would be fun software to release for people to play with once the algorithm is implemented, and it could be useful for repairing models.

Re: LulzBot 3D Scanner

Postby piercet » Wed Jan 15, 2014 12:20 pm

If you decide to start investigating the projector-based reflection scanner approach as well, let me know. I may be able to assist in testing, etc., as I have access to various microprojectors, including some autofocus-equipped units. Anyways, sounds neat - can't wait to see what you come up with!

Re: LulzBot 3D Scanner

Postby cabbage_breath » Tue Jan 21, 2014 6:50 pm

Will this project have a repository in the devel.lulzbot site?

I would be interested in building one if the cost of prototyping isn't too prohibitive.

Re: LulzBot 3D Scanner

Postby spotrh » Wed Jan 22, 2014 9:37 am

This is fantastic news! I look forward to being able to test this out.

Re: LulzBot 3D Scanner

Postby aeva » Thu Jan 23, 2014 11:53 am

cabbage_breath wrote:Will this project have a repository in the devel.lulzbot site?

I would be interested in building one if the cost of prototyping isn't too prohibitive.

Yes. The software that will be developed for this will be available on GitHub as I'm developing it. The hardware will be on dev.lulzbot like all of our other products when development on it starts.

By design, this should be a DIY-friendly device - the overall BOM cost is relatively low as far as these things go, and it won't rely on anything too difficult to source. You'll be able to test out the product as we develop it.

Re: LulzBot 3D Scanner

Postby aeva » Thu Jan 23, 2014 9:33 pm

I put together a basic simulation of a laser line scanner using Blender. I've uploaded a rendering of the scan data to YouTube.

A Python script will be able to control the simulated scanner - for example, picking which camera to pull data from, and activating/deactivating light sources. The fake scanner will provide fake scan data which I can use to start putting together the software process described above. It will also be useful for tinkering with the conceptual layout of the hardware to see what effect it has on scan quality.

Re: LulzBot 3D Scanner

Postby cabbage_breath » Mon Jan 27, 2014 8:25 pm

Step 2, B: "in reality opencv might be useful here"

Could you please explain what that means? What is opencv and how will it be implemented?

Thanks

Re: LulzBot 3D Scanner

Postby gannon » Tue Jan 28, 2014 8:40 am

OpenCV is an open source computer vision library. It can be quite useful for various applications involving image processing, such as this :)

Re: LulzBot 3D Scanner

Postby aeva » Thu Jan 30, 2014 12:30 am

cabbage_breath wrote:Step 2, B: "in reality opencv might be useful here"

Could you please explain what that means? What is opencv and how will it be implemented?

Thanks

As gannon said, OpenCV is a library that implements a ton of handy computer vision stuff. There is a nice Python wrapper for it, with tutorials here http://docs.opencv.org/trunk/doc/py_tut ... rials.html which illustrate how to do common tasks with it.

So far, I'm getting enough mileage out of just using Pillow ( http://pillow.readthedocs.org/en/latest/ , a fork of "PIL", the Python Imaging Library) to put together proofs of concept for what I'm after. OpenCV will undoubtedly be useful for things like de-noising and otherwise normalizing for "real world" conditions. Right now, I'm working on putting together the basic scanning pipeline.

By the way, I threw together a quick script that takes two rendered "scan" images (one with the object to be scanned, and one with just the background) and creates a mask image of where it thinks the object is.

This image is the "scan":
spotlight.png
Simple "scan" data from the virtual scanner.


This image is without the object to be scanned:
empty_plate.png
Background without object to be used for background subtraction.


This is the difference map of the two images:
pre_clamp_2.png
Difference filter.


This is the difference map when clamped, which will be used for the ray casting and voxel stuff described in the algorithm in the first post of this thread:
mask.png
Mask for ray casting etc.


Here is the source code for this example:
Code:
from PIL import Image, ImageOps, ImageChops, ImageEnhance
THRESHOLD = 2

def clamp(val):
    global THRESHOLD
    if val < THRESHOLD:
        return 0
    else:
        return 255


if __name__ == "__main__":
    scan = Image.open("spotlight.png")
    bg = Image.open("empty_plate.png")

    result = ImageOps.autocontrast(ImageChops.difference(scan, bg))

    contrast = ImageEnhance.Contrast(result)
    result = contrast.enhance(2)

    result.save("pre_clamp.png")

    hack = Image.eval(result, clamp)
    hack = hack.convert("L")
    contrast = ImageEnhance.Contrast(hack)
    hack = contrast.enhance(1000)
   
    hack.save("mask.png")


Note that the "autocontrast" method determins the lightest and darkests colors in the image and adjusts the color curve so that those colors are black and white respectively. The global variable "THRESHOLD" is to determine the color distance from (0,0,0) that counts as "black", and thereby where the mask is clamped. The part with the image map "hack" is where Pillow provides an easy and FAST way to do per pixel per channel map method, but not just a per pixel map channel. Ideally at this point, I'd just select everything that isn't black and make it white, but there wasn't a clear method for that which was also fast. Converting the image to grayscale and boosting the contrast a ton accomplished the same effect quickly.

Note that the above method does NOT account for noise; in reality, the difference map won't conveniently be perfectly black where the background is. This is fine for now as a base to build the rest of the pipeline on, but the virtual scanner will need to be changed to produce noisier images (specifically, using indirect lighting etc. so that some of the object's color bleeds onto the turntable, plus an overlaid noise field that differs from scan to scan to make things interesting). OpenCV will come in handy here for de-noising at the very least. Pillow will still be useful, but some statistical analysis of the images will be needed to determine the correct thresholds for the filters.
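
For the threshold-picking part, Otsu's method (available in OpenCV) is one obvious candidate: it picks the cut that best separates the two intensity populations in the difference image, rather than relying on a hand-tuned constant. A sketch, using the difference map saved by the script above:

Code:
import cv2

# Let Otsu's method pick the threshold from the image's histogram
# instead of hard-coding THRESHOLD.
gray = cv2.imread("pre_clamp.png", cv2.IMREAD_GRAYSCALE)
otsu_t, mask = cv2.threshold(gray, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("otsu threshold:", otsu_t)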

Also, as I'm sure some are wondering, the code example runs pretty much instantly, because Pillow is super awesome.


Re: LulzBot 3D Scanner

Postby aeva » Thu Jan 30, 2014 7:49 pm

Out of curiosity, I ran a test to see what would happen when the source data was really dirty. In Blender, I added ambient occlusion to the render and, via the node editor, mixed in a different noise map for the scan and the bg, so theoretically none of the pixels are perfectly identical between the two images, despite them looking similar. To make things interesting, the images were also given a slight Gaussian blur (prior to application of the noise map) and saved as 90%-quality JPEGs. I also changed the scanner's materials so that the inside is matte black, and made the lights much brighter.

I tried a couple of PIL-only methods for dealing with the noise, but the results were unusable for my purposes.

Here is what I've come up with for the code:
Code:
from glob import glob
from os.path import join
from tempfile import mkstemp
from PIL import Image, ImageOps, ImageChops, ImageEnhance, ImageFilter
import cv2


THRESHOLD = 30
DENOISE = 15


def cv_helper(img):
    """
    Helper function for using PIL and OpenCV together.  Pass either a
    pil img object or a path.  Returns an opencv image object.
    """
    try:
        ext = ".png"
        img_path = mkstemp(ext)[1]
        img.save(img_path)
    except AttributeError:
        ext = "." + img.split(".")[-1]
        img_path = img
    return cv2.imread(img_path)


def pil_helper(cv_img):
    """
    Takes a OpenCV image object and converts it to a pil image object.
    """
    stash_path = mkstemp(".png")[1]
    cv2.imwrite(stash_path, cv_img)
    return Image.open(stash_path)


def denoise(img):
    """
    Uses opencv to remove noise from the specified image.  See
    cv_helper for argument information.  Returns a PIL image object.
    """
    global DENOISE
    cv_img = cv_helper(img)
    cv_out = cv2.fastNlMeansDenoisingColored(cv_img, None, DENOISE, DENOISE, 7, 21)
    return pil_helper(cv_out)


def threshold(img, t=127):
    """
    Your standard threshold function.
    """
    def clamp(p):
        return 0 if p < t else 255

    gray = img.convert("L")
    return gray.point(clamp, 'L')


if __name__ == "__main__":
    search_dir = "fuzzy"
    scan_path = glob(join(search_dir, "scan.*???"))[0]
    bg_path = glob(join(search_dir, "bg.*???"))[0]

    scan, bg = map(denoise, [scan_path, bg_path])
    result = ImageChops.difference(bg, scan)
    result.save("just_diff.png")
   
    # Create the image mask.  The blur is used to remove small
    # features, mostly to clean up "dirt" / noise left over from the
    # first threshold.  Slight loss of detail, but effective.
    clamped = threshold(result, THRESHOLD)
    clamped = clamped.filter(ImageFilter.GaussianBlur(4))
    clamped = threshold(clamped)
    clamped.save("mask.png")


Here is the new "scan" data and background reference:
scan.jpg
Noisy version of the "scan" data.

bg.jpg
Noisy version of the background reference image.


The resulting difference map:
just_diff.png
Difference map between the scan and the bg. Note that the de-noising happens before the diff.


The resulting mask image:
mask.png
The resulting mask file to be used for raycasting stuff and voxel model creation.
