Meet Noot: DIY Rangefinder for Photogrammetry

General / 13 May 2020

While on scan trips, I've found it's paramount to keep a consistent distance from objects to maintain consistent texel density across surfaces and get the most out of each texture. I'd like to say I've been pretty good about keeping things consistent, and I'm definitely pleased with the results I get freehand, but we can improve!

Enter Noot, an Arduino Micro-based rechargeable rangefinder. (Thanks to Luan for the name!)

The concept is really simple: almost any distance sensor could work (some offer much better accuracy and less light interference for more $$), giving a live readout in centimeters. Wrap the hardware components in a 3D-printed box with a hot-shoe fitting and you're good to go!

Parts used:

Files available here: https://www.thingiverse.com/thing:4337829 

Huge shout out to George Takahashi for helping me with design and construction! Always great to have an engineering friend to help with understanding hardware. Especially to answer my silly Fusion360 questions :)


Looking to the future, this rangefinder will be most effective paired with a ring flash for mobile cross-polarization (check this out). With that in mind, here's Noot v2 with an adaptive system for locking the flash to the front edge of my AR400.


Photogrammetry: Likeness for Realtime Faces

General / 25 October 2019

Saw some exciting face rigs recently and noticed the buzz around using photogrammetry face poses to drive blendshapes for face rigs. I don't have access to an awesome multi-cam setup to capture different poses, but I thought about using a single camera to get the 'likeness' and possibly the texture. Then you can wrap good topology to that rough scan to match the likeness of the person's face!

Similar to my decal post, I used a Canon T6i and Reality Capture for scanning. The quality is about as expected for single-camera head scanning: noisy, bumpy, and incorrect micro details all over. Even if you have your subject sit perfectly still, there is going to be some movement just from breathing and blood flowing through the face. You can combat this to a certain extent; check the end of the post for some resources I found on single-camera head scanning.

Initially, I took 15 photos with no flash under standard room lighting, trying to move quickly. I had to do a lot of tweaking in Lightroom to balance things out to decent quality before going into RC. Regardless, those 15 photos solved to make this:

After getting such a convincing result from a 10-minute test, I grabbed my dirt-cheap Neewer Speedlite flash and a CPL filter. The Speedlite has linear polarizing film hastily scotch-taped over the end, letting me filter out specular highlights with the circular polarizing filter on the lens. This method isn't as accurate as I'd like (ring lights are better suited for this unless you can get a ~$1000 strobe setup mounted on the camera lens), but it cancels out at least the majority of the oily specular highlights on the skin. I forgot to capture a before-and-after of this effect, so here's an example from Andrew Wilkoff:

With the strobe setup, and taking a little more time to dial in my camera settings, I took 30 photos. Still aiming just for semi-circles in front of the face, my pattern was basically half-rings at different heights and angles. I did make sure to get better coverage of the ears this time, though. All 30 photos aligned, and here's the result from the second scan:


If you haven't figured it out by now, I'm completely ignoring attempting to scan the back of the head. That's a talk for another day when I want to try 3D-printing bobbleheads of my coworkers...

With this level of result, I can do some auto-smoothing in Reality Capture and simplify it down. What I ended up exporting looks like this:

Now comes the magic. There's a super cool piece of specialized software called Wrap3D. Designed around shrink-wrapping good topology to scans, this tool lets you iteratively fit whatever topology you have to specific poses. There are some sweet examples, like wrapping a hand model to a scan of a hand holding a ball. Generally, you guide the software with point pairs that tell it which regions of the geometry should match up.

Apart from masking out the polygons I actually wanted to wrap (Wrap3D supports polygroups, so I could quickly mask out the eyes, jaw, body, etc.), the process was really painless and fast. Then it was time to compute!

Ta-da! Wrap your awesome topology to your scan and you have likeness! This was an interesting experiment, and I have some ideas to take it further, especially with cross-polarization and blending reprojected textures.


+ Some valuable resources on head scanning:
- https://vimeo.com/188877674 (Jeffrey Ian Wilson also has a class covering this workflow)

- https://adamspring.co.uk/2018/10/25/single-camera-facs-scanning-photogrammetry/ 

- https://adamspring.co.uk/single-post/2017/08/30/Single-Camera-Head-Scanning-Photogrammetry 

If you made it this far, thanks for reading! Feel free to hit me up with any questions, we're having lots of fun with this at work experimenting with digital humans. Also, credit to https://www.artstation.com/nickfrompurdue for letting me post his face a bunch of times on the internet!