Some time ago, I got myself into 3D printing. Given that my interests are minis, costumes, and making things that mitigate the problem of “this thing I have sitting on my desk is slightly shitty, let’s 3D print a fix for that,” having a way to scan real-life objects at an appropriate size would certainly be nice.
The best option is to learn how to do photogrammetry with the camera you have. That’s probably going to give you the best results, but it’s said to be kinda time-consuming. I wanted something that would require less fuss. At just about the right time, I encountered an Indiegogo campaign for the Seal 3D scanner that promised to do just that, and I decided to bite the bullet.
And soon, I got the ability to test it out.
The Adversary
Late last year, I was given a small statue of a dragon. As is typical with models that size, it came in pieces that required assembly.

However, the guy lost one of the horn pieces and couldn’t find it. Much unfortunate, and while I normally wouldn’t let that bother me (especially since I got it for free) … hey look, I have something to test the 3D scanner with.
The Experience
To scan models with the Seal scanner, you need to install its companion software (JMStudio) on your computer. And JMStudio leaves much to be desired.
Turns out that the handheld experience of using the scanner kinda sucks. It’s very easy to hold the scanner too close or too far — the goldilocks zone isn’t that big — and when you do manage to keep the scanner an appropriate distance away, the thing will keep losing tracking all the time.
Luckily, the scanner has a camera stand mounting screw at the bottom. I have a camera stand, as well as all the Legos I had as a kid, which allowed me to quickly improvise a turntable. This made the process of scanning pretty easy once I figured out that I didn’t need to scan the model ultra-slowly and rack up 1000 frames. But then the real problem showed up.
Aligning the scans.
Auto-align in that program is … pretty bad.

So you need to align things manually.
Manual alignment gives you three panels in the same window: object A, object B, and the result. You align the two objects by picking three matching landmarks on each of them. This is often hard, because if you pick a landmark slightly off, the alignment ends up slightly off too. It’s even harder when you try to align the right-side-up and upside-down scans, as those don’t share many landmarks you can easily find.
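For context on why a slightly missed landmark hurts so much: three point pairs are exactly enough to pin down a rigid transform, so any error in a pick goes straight into the result with nothing to average it out. Here’s a minimal sketch of that kind of three-point alignment (the Kabsch method in NumPy; my own illustration of the math, not what JMStudio actually runs):

```python
import numpy as np

def rigid_transform_from_landmarks(src_pts, dst_pts):
    """Rotation R and translation t that map src_pts onto dst_pts (Kabsch)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)

    # Center both landmark sets on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)

    # SVD of the cross-covariance matrix gives the best-fit rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a mirror flip
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Three landmark picks on scan A, and the "same" spots on scan B
# (B is just A rotated 90 degrees and shifted, so the true answer is known).
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
b = np.array([[5.0, 5.0, 0.0], [5.0, 15.0, 0.0], [-5.0, 5.0, 0.0]])
R_good, _ = rigid_transform_from_landmarks(a, b)

# Miss one pick by 2 units and the whole transform tilts with it --
# with only three points, there is nothing to average the error out.
a_missed = a + np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0], [0.0, 0.0, 0.0]])
R_bad, _ = rigid_transform_from_landmarks(a_missed, b)
cos_angle = np.clip((np.trace(R_good.T @ R_bad) - 1.0) / 2.0, -1.0, 1.0)
print("rotation error:", np.degrees(np.arccos(cos_angle)), "degrees")
```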

Which means that at the end of the day, aligning objects with the tools this program provides can take much longer than it would if the program just dumped all the scans into the same space and gave you Blender-like controls (move, scale, rotate, plus the ability to constrain movement/rotation/scaling to any of the three axes). Seriously, finding a good alignment with three points took me over an hour. In Blender, I could have had it done in 5 minutes.
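For the record, that “dump everything into one space” workflow exists outside JMStudio. Here’s a rough sketch of how I’d do it with Open3D, assuming the scans can be exported as point clouds (the file names and offsets are hypothetical stand-ins): position the second scan by eye, then let ICP handle the last bit of fine alignment instead of a human hunting for pixel-perfect landmarks.

```python
import numpy as np
import open3d as o3d

# Hypothetical exports of the two scan passes.
scan_a = o3d.io.read_point_cloud("dragon_upright.ply")
scan_b = o3d.io.read_point_cloud("dragon_upside_down.ply")

# Step 1: rough manual placement, Blender-style -- flip scan B 180 degrees
# around the X axis and nudge it roughly into place (values eyeballed).
rough = np.eye(4)
rough[:3, :3] = o3d.geometry.get_rotation_matrix_from_xyz((np.pi, 0.0, 0.0))
rough[:3, 3] = (0.0, 0.0, 120.0)

# Step 2: ICP refines the eyeballed placement into a tight fit.
result = o3d.pipelines.registration.registration_icp(
    scan_b, scan_a,
    max_correspondence_distance=5.0,  # search radius, in the scans' units
    init=rough,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
scan_b.transform(result.transformation)
o3d.io.write_point_cloud("dragon_merged.ply", scan_a + scan_b)
```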

You’d think that someone making software for a 3D scanner would at least have the foresight to open Blender, mess around in it for like 15 minutes while watching a “Blender basics” tutorial, and figure out how to replicate that UX instead of creating a square in their attempt to reinvent the wheel.
At the end of the day, I did get a workable model that I needed to clean up a bit, but holy hell, it took far too long. And it didn’t need to be this way: if modern programmers weren’t so afraid of adding advanced features to their software in the name of simplicity, it wouldn’t have been.