*Photosynth technology showing Spider Meadows in Central Washington*
| Developer(s) | Microsoft |
|---|---|
| Initial release | August 20, 2008 |
| Last release | 2.110.317.1042 / March 18, 2010 |
| Development status | Discontinued |
| Type | 3D modeling, panorama stitching |
| Website | photosynth |
Photosynth is a discontinued app and service from Microsoft Live Labs and the University of Washington that analyzes digital photographs and generates a three-dimensional model and a point cloud of a photographed object. Pattern-recognition components compare portions of the images to identify matching points, which are then used to reconstruct the scene as a model. Users could view and generate their own models using a software tool available for download from the Photosynth website.
Photosynth is based on Photo Tourism, a research project by University of Washington graduate student Noah Snavely. Shortly after Microsoft's acquisition of Seadragon in early 2006, that team began work on Photosynth, under the direction of Seadragon founder Blaise Agüera y Arcas.
Microsoft released a free tech preview version on November 9, 2006. At that time, users could view models generated by Microsoft or the BBC, but could not create their own. On August 6, 2007, Microsoft teamed up with NASA to let users preview Photosynth technology showing the Space Shuttle Endeavour. On August 20, 2007, a preview showing Endeavour's tiles during the backflip maneuver was made available for viewing.
On August 20, 2008, Microsoft officially released Photosynth to the public, allowing users to upload their images and generate their own Photosynth models.
In March 2010, Photosynth added support for gigapixel panoramas stitched in Microsoft ICE. The panoramas use Seadragon-based technology similar to the system already used in synths.
On February 7, 2017, Microsoft decommissioned the Photosynth website and services.
The Photosynth technology works in two steps. The first step is the analysis of multiple photographs taken of the same area. Each photograph is processed with an interest-point detection and matching algorithm developed by Microsoft Research, similar in function to UBC's Scale-Invariant Feature Transform (SIFT). This process identifies distinctive features, such as the corner of a window frame or a door handle. Features in one photograph are then compared to and matched with the same features in the other photographs, identifying photographs that cover the same area. By analyzing the positions of matching features within each photograph, the program can determine which photographs belong on which side of others.

By analyzing subtle differences in the relationships between the features (angle, distance, etc.), the program identifies the 3D position of each feature, as well as the position and angle from which each photograph was taken. This process is known scientifically as bundle adjustment and is commonly used in the field of photogrammetry, with similar products available such as Imodeller and D-Sculptor. This first step is extremely computationally intensive, but needs to be performed only once per set of photographs.
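The feature-matching step described above can be sketched in a few lines. This is a minimal illustration, not Photosynth's actual (proprietary) matcher: the `match_features` helper and the random stand-in descriptors are assumptions, and the ratio test used to reject ambiguous matches comes from the SIFT literature rather than from Photosynth itself.

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b.

    Keeps only matches that pass the ratio test: the nearest neighbour
    must be clearly closer than the second-nearest, so ambiguous features
    (e.g. repeating texture) are discarded rather than mismatched.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Demo with synthetic 128-dimensional descriptors (SIFT-sized):
# photo B "sees" the same features as photo A, plus a little noise.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(20, 128))
desc_b = desc_a + 0.01 * rng.normal(size=(20, 128))

matches = match_features(desc_a, desc_b)
print(f"{len(matches)} of {len(desc_a)} features matched")
```

In a real pipeline the matched pairs from many photographs feed into bundle adjustment, which jointly solves for the 3D point positions and the camera poses that best explain all the matches.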