The smart people at Disney Research have successfully created a (small) 3D model from photos. They combine a bunch of photos and compare how light reflects off the surfaces to work out where those surfaces sit relative to one another in 3D space.

Gizmag reports that the system demonstrates the core functionality but is not yet perfect:

Unlike other systems, the algorithm calculates a depth value for every pixel, and it proves most accurate at the edges of objects.
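The report doesn't spell out Disney's actual algorithm, but a toy plane-sweep sketch captures the per-pixel idea: for each pixel, test a set of candidate depths and keep the one where the photos agree best. Everything below (the function name, the rectified-camera assumption, the simple horizontal-shift warp) is an illustrative assumption, not the researchers' code.

```python
import numpy as np

def depth_map(ref, others, baselines, disparities):
    """Toy per-pixel depth from photo agreement, assuming rectified cameras.

    ref         : H x W grayscale reference image
    others      : list of H x W images from horizontally offset cameras
    baselines   : horizontal offset of each extra camera, in relative units
    disparities : candidate disparities to test (inverse depth, in pixels)
    """
    ref = np.asarray(ref, dtype=float)
    best_cost = np.full(ref.shape, np.inf)
    best_disp = np.zeros(ref.shape)
    for d in disparities:
        cost = np.zeros(ref.shape)
        for img, b in zip(others, baselines):
            # Shift each photo by the disparity this depth implies for its
            # camera, then score per-pixel disagreement with the reference.
            shifted = np.roll(np.asarray(img, dtype=float),
                              int(round(d * b)), axis=1)
            cost += (shifted - ref) ** 2
        # Keep this depth wherever it explains the photos better.
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp  # larger disparity means a nearer surface
```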

The algorithm demands less of computer hardware than would ordinarily be the case when constructing 3D models from high-res images, in part because it does not require all of the input data to be held in memory at once.
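One common way to get that memory property (purely a guess at how it might work, reusing the toy setup above) is to stream photos from disk and fold each one into a running cost volume, so only that volume, never the whole image set, has to be held at once:

```python
import numpy as np

def streaming_cost_volume(ref, image_paths, baselines, disparities, load):
    """Fold photos into a running D x H x W cost volume one at a time;
    `load` is a caller-supplied function that reads a single image."""
    ref = np.asarray(ref, dtype=float)
    cost = np.zeros((len(disparities),) + ref.shape)
    for path, b in zip(image_paths, baselines):
        img = np.asarray(load(path), dtype=float)  # read exactly one image
        for i, d in enumerate(disparities):
            shifted = np.roll(img, int(round(d * b)), axis=1)
            cost[i] += (shifted - ref) ** 2
        del img  # discard it before touching the next file
    # Per-pixel depth: the candidate disparity with the best agreement.
    return np.asarray(disparities)[cost.argmin(axis=0)]
```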

Depth measurements are less accurate than they would be if captured with a laser scanner, and the researchers admit that more work is needed to handle surfaces that vary in reflectance.

This is conceptually similar to what Autodesk is doing with 123D Catch.