Gamasutra: The Art & Business of Making Games

Watch the intriguing '3-Sweep' 3D modeling technique in action
September 9, 2013 | By Mike Rose

More: Console/PC, Art, Design, Video

A group of computer science professors is working on a new interactive 3D modeling technique that reads information from ordinary photos and turns the pictured objects into 3D models in just seconds.

The '3-Sweep' technique, as demonstrated in the above video, takes three mouse strokes that define the three dimensions of an object. The first two sweeps define the object's profile, while the third finds distinct outlines and reads the object's shape along its main axis.

This defines a 3D body that snaps to the object in the photo and can then be rotated, adjusted, and generally played around with.
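The construction described above, a profile swept along a main axis, is closely related to building a surface of revolution. Below is a minimal sketch of that idea, assuming a straight vertical axis and one radius per ring taken from the traced profile; the function name and parameters are illustrative, not from the 3-Sweep paper:

```python
import math

def sweep_profile(radii, axis_points, segments=16):
    """Build vertices of a generalized cylinder: at each point along the
    main axis, place a ring of `segments` vertices whose radius comes
    from the traced profile (a surface of revolution)."""
    vertices = []
    for (x, y, z), r in zip(axis_points, radii):
        ring = []
        for k in range(segments):
            theta = 2.0 * math.pi * k / segments
            # Rings here lie in planes perpendicular to a vertical axis;
            # a real implementation would orient each ring to the local
            # axis direction recovered from the photo.
            ring.append((x + r * math.cos(theta),
                         y + r * math.sin(theta),
                         z))
        vertices.append(ring)
    return vertices

# A straight vertical axis with a varying profile (a vase-like object).
axis = [(0.0, 0.0, float(z)) for z in range(5)]
radii = [1.0, 1.4, 1.2, 0.8, 0.5]
mesh = sweep_profile(radii, axis, segments=12)
```

Connecting consecutive rings with quads would then yield the editable 3D body that snaps to the photographed object.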

Notably, the video describes numerous instances in which the algorithm does not work, such as when the perspective is non-ideal or when an object has a fuzzy, unreadable edge. No information is yet available on when this technique might be made available for public use.


Merc Hoffner

I have no background in the field, but the speed, complexity, precision and integration of the inferences are astonishing. This isn't just one grand algorithm, is it? It surely must be a whole slew of brilliant computational solutions seamlessly stitched together with astonishing context interpretation. And it's fully parametric!

Can someone with a background enlighten us? Is this a significant leap forward, or is it a clever collection of techniques that were kinda-sorta already there?

Freek Hoekstra
Essentially it is a technique called camera mapping (in this case merged with intelligent shape construction). Camera mapping is essentially a projection from the image onto geometry, and it has been used in film for a long time.

A picture is taken and an environment is recreated on top of it; the textures are then projected. It looks very realistic because it is. However, it has limitations: the illusion only really works when you keep looking the same way. Lighting is baked in, and so are specular highlights and everything else. The texture also is not "calculated" on the opposite side; it is just projected through the mesh.

But the shape recognition is a bit newer. I expect they are using some smart tools to find the boundaries of the shape, and they may use a medial-axis system to find the "center" of the shape, essentially using that to guide their sweeps.

Overall, very impressive tech, and above all else a very nice implementation.
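The camera mapping described in this comment, assigning each mesh vertex the pixel it projects onto, can be sketched with a simple pinhole camera model. This is an illustrative assumption about how such projection works in general, not 3-Sweep's actual pipeline, and the function name and parameters are made up:

```python
def project_to_uv(point, focal=1.0):
    """Pinhole projection: map a 3D point in camera space to normalized
    image (u, v) coordinates. Camera-mapped texturing gives each vertex
    the pixel it projects onto, which is why the photo's lighting and
    specular highlights end up 'baked' into the texture."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide onto the image plane.
    px = focal * x / z
    py = focal * y / z
    # Normalize to [0, 1] UV space, assuming the visible image plane
    # spans [-0.5, 0.5] in both directions.
    return (px + 0.5, py + 0.5)

# A vertex straight ahead of the camera maps to the image centre.
centre_uv = project_to_uv((0.0, 0.0, 2.0))  # (0.5, 0.5)
```

Because the UVs depend only on the original viewpoint, front and back vertices along the same ray receive the same pixel, which is the "projected through the mesh" limitation the comment mentions.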

Phil Maxey
I'm not an expert or anything, but why would the lighting have to be baked in? Couldn't the angle of the lighting be extrapolated from the scene and applied dynamically as well?

Freek Hoekstra
Because the textures are just the image projected onto the shape, all of its characteristics are baked in: lighting, specular reflection, etc.

One could also just generate a material without extracting the texture, but then it would just look like simple objects.

Kenneth Poirier
Who is doing this and how do I give them money?

Kale Menges
Man... That ain't right... We've been doing it wrong all this time...

Kevin Fishburne
Holy skullfuck, Batman. That is amazing.

Eric Shofe
This is so AMAZING.

Quentin Preik
That's so cool! So it's probably going to be ZBrush 4R6 then? :)

Why can't the object be rotated a full 360 degrees? I notice that they steer clear of any rotation beyond about 180. Is this because the back side cannot be textured due to a lack of photo information?

Merc Hoffner
Evidently yes (you can't generate real information from nothing, after all), but there were a couple of spots, such as with the menorah, where the back sides have been rendered and textured by copying the front sides. It looks like the software (combined with the user's guidance) is smart enough to estimate when an object is likely symmetrical and its texture consistent across that symmetry, and to copy/extrapolate both geometry and texture data accordingly. Furthermore, the software has seamless 'patch-matching' that clearly and very nicely fills in occluded backgrounds; one would assume that kind of smart extrapolation applies to the object textures as well.

Moreover, the implication of smart geometry/texture matching is that, with multiple photos, one could rapidly cover and integrate the occluded faces. Another really interesting point: if the same geometry is depicted in each photo, presumably it won't take much for the software to match objects from image features and use the user's guided geometry derivation from the first photo to guide the derivation for subsequent photos without the user's help.
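The symmetry-based completion guessed at above, copying the visible half of an object across its symmetry plane, can be sketched as a simple reflection. The function and the choice of plane are purely illustrative; the source doesn't say how 3-Sweep actually does this:

```python
def mirror_across_plane(vertices, axis=0):
    """Complete the hidden half of an (assumed) symmetric object by
    reflecting the visible vertices across the plane where the given
    coordinate axis equals zero. Texture coordinates could be copied
    the same way to fill the untextured back side."""
    mirrored = [tuple(-c if i == axis else c for i, c in enumerate(v))
                for v in vertices]
    return vertices + mirrored

# Two vertices on the visible side of the x = 0 symmetry plane.
front = [(1.0, 2.0, 3.0), (0.5, 1.0, 3.0)]
full = mirror_across_plane(front)
```

Only the geometry is duplicated here; the harder part the comment describes, deciding *when* an object is symmetric enough for this to look right, is left to the software's (and the user's) judgment.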

Nagesh Hinge
Pretty Cool!