scanning and software - thoughts for the devs

The thing that Horus calls “texture” is not really what I would consider texture. I would expect that option to be called “photo data” or “color”, especially if it doesn’t affect the geometry of the point cloud. I’m not sure yet whether it does.

Currently, the basic approach to turning scan data into a printable model involves many steps. The best results come when each step is properly configured, which usually takes trial and error and fine-tuning. I think writing the software so that many of those steps are combined into one operation, with settings pulled from a configuration, would make the process easier for users. For example, alignment, merging, subsampling, computing the normals, and reconstructing the mesh could be combined into a single operation (with a big button labeled “make printable” or something), with different recommended configs producing a range of outputs the user can choose from. The configs could be vetted and shared, removing a lot of guesswork for new users.
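To make that concrete, here’s a rough sketch of what “vetted presets driving a one-button pipeline” could look like. It uses Open3D rather than anything Horus or MeshLab actually ship, it assumes the scans are already aligned and merged into a single .ply, and the preset values (voxel size, normal neighborhood, Poisson depth) are placeholders, not tuned recommendations:

```python
# Sketch: one "make printable" call that runs subsample -> normals -> Poisson
# reconstruction once per vetted preset, so the user just picks the result
# they like. Preset numbers are illustrative placeholders.
import open3d as o3d

PRESETS = {
    "draft":    {"voxel": 2.0, "normal_knn": 20, "poisson_depth": 7},
    "standard": {"voxel": 1.0, "normal_knn": 30, "poisson_depth": 8},
    "fine":     {"voxel": 0.5, "normal_knn": 30, "poisson_depth": 9},
}

def make_printable(cloud_path, out_prefix):
    pcd = o3d.io.read_point_cloud(cloud_path)  # already aligned and merged
    for name, cfg in PRESETS.items():
        down = pcd.voxel_down_sample(voxel_size=cfg["voxel"])
        down.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamKNN(knn=cfg["normal_knn"]))
        down.orient_normals_consistent_tangent_plane(30)  # Poisson needs consistent normals
        mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            down, depth=cfg["poisson_depth"])
        mesh.compute_triangle_normals()
        o3d.io.write_triangle_mesh(f"{out_prefix}_{name}.stl", mesh)

make_printable("scan_merged.ply", "bust")
```

The point isn’t this particular library; it’s that one call produces one candidate mesh per shared, vetted config, instead of the user hand-tuning five separate filters.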

Trimming the data before the reconstruction and flattening the base afterwards are less automatable, but they could be made easier too, with something like a “remove isolated patches” feature with a threshold slider that automatically removes groups of points below a relative size, and a “cut off base” feature similar to the one in Cura.
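As a rough illustration of the “remove isolated patches” idea, here’s a sketch that clusters the point cloud and drops clusters smaller than a fraction of the largest one, plus a trivial base cut at a chosen z height. The DBSCAN radius, the relative-size threshold, and the use of scikit-learn are my assumptions for the sketch, not how Horus or MeshLab actually do it:

```python
# Sketch: drop isolated clusters of points below a relative size, then cut
# everything under a base plane. Parameters (eps, relative_size, base_z) are
# the kind of values that would sit behind sliders in a real UI.
import numpy as np
from sklearn.cluster import DBSCAN

def remove_isolated_patches(points, eps=2.0, relative_size=0.05):
    """Keep only clusters at least `relative_size` times the largest cluster."""
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(points)
    sizes = {lbl: np.sum(labels == lbl) for lbl in set(labels) if lbl != -1}
    if not sizes:
        return points
    largest = max(sizes.values())
    keep = np.zeros(len(points), dtype=bool)
    for lbl, size in sizes.items():
        if size >= relative_size * largest:
            keep |= labels == lbl
    return points[keep]

def cut_off_base(points, base_z):
    """Discard points below the chosen base plane (z = base_z)."""
    return points[points[:, 2] >= base_z]

# Example: `points` would be an (N, 3) array loaded from the scan's .ply
points = np.random.rand(1000, 3) * 100          # stand-in for real scan data
cleaned = cut_off_base(remove_isolated_patches(points), base_z=1.0)
```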

There is a lot of functionality in the monolith that is MeshLab that isn’t really required for this process, and it’s distracting in the workflow. I think it would be easier for users if the tool were more purpose-built and lean.

Link:

http://it-bqcom15-media.s3.amazonaws.com/prod/resources/manual/Horus_Guide_to_post-processing_of_the_point_cloud-1429180787.pdf

I’m still trying to figure it out on mine too. I even attached a higher-resolution camera to mine. I’m certain it’s user error and settings, but finding out what they are has been troublesome. A better starting point would be awesome.

Nice, I’m still in the “armchair philosophy” mode of my scanner endeavors. I need to rustle up one of those tiny Phillips-head screwdrivers to take apart the webcam, and then I’ll find some time to do some actual testing. Hopefully I’ll uncover some methods that help, and I’ll be sure to post them here if/when I do.

Do you have any .ply data lying around from your testing that I could play with?

You can save it as an .stl if the forum doesn’t take .ply.

I’ll see if I can get you some tomorrow. I haven’t been saving them because they haven’t been great yet.

Ok, cool thanks.

What I’m wondering is whether they are using some sort of displacement modifier with the image data, like this, which would be awesome if they are:

https://www.youtube.com/watch?v=_owcNpxp8h4
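For reference, the core mechanic of that kind of displacement is simple: sample a grayscale image at each vertex and push the vertex along its normal by the intensity. Here’s a tiny self-contained numpy sketch of the idea (synthetic heightmap, flat grid, displacement straight up); whether Horus actually does anything like this with its photo data is exactly the open question:

```python
# Sketch: displace a flat grid of vertices along +Z by the brightness of a
# heightmap image, i.e. the basic mechanic behind a displacement modifier.
import numpy as np

H, W = 64, 64
heightmap = np.random.rand(H, W)             # stand-in for real photo/intensity data

# Flat grid of vertices, one per pixel, lying in the z = 0 plane.
ys, xs = np.mgrid[0:H, 0:W]
verts = np.stack([xs, ys, np.zeros_like(xs)], axis=-1).astype(float).reshape(-1, 3)

# Displace each vertex along its normal (here simply +Z) by image intensity.
strength = 5.0                               # "modifier strength" placeholder
verts[:, 2] += strength * heightmap.reshape(-1)
```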