How is everyone doing with their Ciclop?

I think I’m a couple of months behind many people in working with Horus and the Ciclop, but I haven’t seen many scans show up in the forums. How are your results?

Should there be a gallery thread so we can share results?

I’ve been kind of obsessed with trying to scan a Tux toy (seemed fitting) but have run into a few challenges and haven’t quite managed to get anything useful. The results I have from a few scans can probably be combined for a complete model but I’ve been trying to eke out better results from my Ciclop on a mostly black and somewhat shiny toy.

There are screenshots of a clay pot I’ve been able to reproduce in another thread. A Starbucks cup went pretty well, but I’m not sure that’s very helpful or shareable. :slight_smile:

I’m still working my way through the software tool chain to convert the point cloud to an STL. Not trivial! I’ve only had my Ciclop for 2 weeks and feel like I’ve made some good progress.

I use Blender and MeshLab along with some custom experimental tools based on numpy, etc. The easiest way to get results is probably MeshLab.

Import PLY
Filters -> Sampling -> Poisson Disk Sampling, Base Mesh Subsampling selected [1]
Filters -> Normals, Curvatures and Orientation -> Compute Normals for Point Sets [2]
Filters -> Remeshing, Simplification and Reconstruction -> Surface Reconstruction: Poisson [3]
** Bonus round
Render -> Render Mode -> Flat [4]
Export

1 This step reduces stray points. Put really simply, imagine a 3D grid of cubes placed over the model. The sampling keeps only one point within each cube (grid cell) and removes the rest. In short, this reduces noise.
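The noise-reduction intuition in that note (keep one point per grid cell) can be sketched in a few lines of numpy. To be clear, this is a plain voxel-grid subsample, not MeshLab's actual Poisson-disk algorithm, which spaces samples more evenly; the `voxel_downsample` name and the example cloud are my own, just to illustrate the idea.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one point per occupied cell of a 3D grid laid over the cloud."""
    # Map each point to the integer index of the voxel it falls in.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # np.unique on rows returns the first point seen in each voxel.
    _, keep = np.unique(idx, axis=0, return_index=True)
    return points[np.sort(keep)]

# Two nearly coincident points collapse into one; the distant point survives.
cloud = np.array([[0.0,  0.0,  0.0],
                  [0.01, 0.01, 0.0],   # same 0.1-unit voxel as the first point
                  [1.0,  1.0,  1.0]])
thinned = voxel_downsample(cloud, voxel_size=0.1)
```

Shrinking `voxel_size` keeps more detail; growing it removes more noise (and more real points), which is the same trade-off you tune in the MeshLab dialog.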

2 Helps in building geometry. Put really simply, this tells the next step which direction the generated faces should face. Without it, the reconstruction doesn’t know whether to build triangles (faces) pointing up, down, backwards, etc., or which points to build them from.
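The standard way to estimate a normal for a raw point is to fit a plane to its nearest neighbours and take the direction perpendicular to that plane (the smallest-eigenvalue direction of the neighbourhood covariance). This is a minimal numpy sketch of that idea, not MeshLab's implementation; note it leaves normal signs inconsistent, and flipping them to agree with their neighbours is a separate step that MeshLab's filter handles for you.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate a unit normal per point from the plane best fitting its
    k nearest neighbours (smallest-eigenvalue PCA direction)."""
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        # Brute-force k nearest neighbours (fine for small clouds).
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        # Covariance of the neighbourhood; its smallest eigenvector is
        # perpendicular to the local best-fit plane.
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
        normals[i] = v[:, 0]
    return normals

# Points scattered on the z = 0 plane should all get normals along +/- z.
rng = np.random.default_rng(0)
flat = np.column_stack([rng.random(30), rng.random(30), np.zeros(30)])
n = estimate_normals(flat)
```

The neighbourhood size `k` plays the same role as the neighbour count in MeshLab's dialog: too small and the normals are noisy, too large and fine detail gets smoothed away.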

3 Again leveraging Poisson, this step “guesses” what the geometry should approximately look like and then builds faces using the normal data computed in step 2.

4 This step lets you see what your final mesh will look like. If geometry is missing, repeat the process with different values. At this point you can use other MeshLab tools to correct the mesh, fill in missing faces, etc.

Export!

** Bonus round steps are for when you feel comfortable with the basic workflow. The next step is to import multiple point clouds, including clouds captured from different angles (i.e., the object lying on its side), and align them. Then go through roughly the same basic workflow to get even better geometry. Even with the bonus round steps, my Tux toy is still roughly 30% modeled in Blender, 10% corrected and tweaked in MeshLab, and 60% data from Horus/Ciclop. :frowning:
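For the alignment part of the bonus round: MeshLab's align tool iterates (ICP-style) between matching points across clouds and solving for the best rigid motion. The inner solve, given corresponding point pairs, is the classic Kabsch/Procrustes step, which is compact enough to sketch in numpy. This is an illustration of the math, assuming known correspondences, not what MeshLab literally runs.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (corresponding rows), minimising least-squares error."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred clouds.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so the fit is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover a known 90-degree rotation about z plus a shift.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
a = np.random.default_rng(1).random((20, 3))
b = a @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(a, b)
aligned = a @ R.T + t
```

Real scans don't come with correspondences, of course; that's what the ICP loop (or MeshLab's point-based gluing, where you click matching points by hand) is for.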

If anyone knows an easier way, let me know! I have a collection of automated tools, but they either produce blobs or kind of busted geometry.

I’ve been using MeshLab with essentially the same workflow. This write-up will help others, thanks! I’m just starting to work on merging multiple point clouds to “close the top”. Haven’t quite got there yet.

Man, this all sounds extremely tedious. Under what circumstances would scanning be better than simply taking measurements and modeling the object from scratch? It seems like you'd go through a similar amount of work and likely get a more accurate result (plus you could replicate internal geometry as well).

With organically shaped objects. If you have an object that is simple 2.5D, then measuring and redrawing is going to be much more accurate and faster.

After months of trying, calibrating, uninstalls, re-installs, re-triangulation, disassembly, rebuilds, alternative cameras, and every possible lighting configuration you can think of, all I get is point clouds that could never be usable in Blender, MeshLab, or any other program to make a printable .stl file. Support for the product is completely nonexistent on their site, and everything directs you back to GitHub, the Google group, or the wiki, in which only one moderator (Jesus, or Jessie, I've forgotten because it's been a while) has been even remotely helpful in any way. I know this product has been turned over to the open-source community, but I have yet to see anyone document a quality scan, or show examples of lighting setups, calibration settings, or anything positive about the out-of-box product. I want to make the thing function how it is described to. I am also disappointed that we jumped the gun on our purchase of the unit when it was in the ballpark of $450, and the .stl files to build your own showed up on Thingiverse the day after our unit arrived at our shop. #bummed

If anyone has theirs up and running and can give me any helpful hints other than “they just played around with the ambient light and threshold settings until one day it worked,” that would be awesome, because I've done everything to get just a simple scan and a texture scan, and have invested about $250 in laser and camera upgrades, a photographer's diffuser light box with black background, and about 150 man-hours trying to scan everything from a vacuum tube socket (a simple cylinder) to action figures and taxidermied jackalopes. NOTHING works.

I even had the head of the mathematics department at our local university, and a student getting double master's degrees in advanced mathematics (trig and calculus), review the measurements, the calibration settings, and the .pdf specifying the optimal triangulation of the unit, and they both told me that the geometry of the build was nowhere near where it should be in order to achieve a 3D scan of a rotating object.

I would gladly hand out Reddit gold for life to anyone who could guide me through getting a workable scan using the out-of-box unit and board, Horus on a Mac running El Capitan, the stock camera and lasers, and a photographer's diffuser box.

For what it’s worth, I’m pretty much in the same boat, but with a Windows box. The “manually adjust the lens focus of the camera” trick helps some, but something just isn't working correctly in the capture-to-3D-point conversion on mine. I fully suspect it is something user-error related, but I’ve not been able to get it working to the point where I can get an approximation of the items I’ve tried to scan. Even with simple geometric shapes in a variety of coating surfaces and textures that I bought specifically as test objects (cubes, spheres, pyramids, white matte finish, black gloss finish, and every color and texture in between), I end up with a scan that looks like a potato. If I scan a potato, it looks like a different, more potato-like potato.

It should work. The hardware is sound. The theory is sound. The David scanner has been operating the same way for years. It almost has to be the software, the position of the lasers, or environmental considerations.

I got it to work the first time, when I built it up for the local makerspace to try out and report how well it worked. But I know they had issues getting it to work for them (background movement/light issues). It has been way too many months now, so I do not remember what I did to get it working. I do know I had to tweak a number of settings, and point-cloud settings in MeshLab, to get a usable file. :blush:

Just seeing this now and it wins the internet for the day

I’m late to the Ciclop party, and wish I hadn’t shown up. I read Farny3D’s entry and am in total agreement. How is this company allowed to keep selling this product? I have only recently been able to produce a barely usable scan, and then I find out that I cannot modify it in any program but Zephyr, and Zephyr won’t let me save it or export it! I can open the PLY files in MeshLab, but it won’t let me edit cleanly or export to STL or OBJ to use in any other program either. Just venting… If anyone knows how I can get my file to be usable, please let me know.