The Pix4D imaging system uses a small unmanned aircraft to acquire several hundred 2D photographs of a given geographical area. Those photos are then merged into a single three-dimensional model that users can explore on a computer screen.
The UAS (which the user has to provide) flies back and forth across the town, field, or other area, taking pictures as it goes – essentially, it's acting as an airborne scanner. The still images it collects are then loaded into the Pix4D software. The system automatically detects "interest points," which are features with high visual contrast or an otherwise distinctive appearance. By comparing how these same interest points appear in photos taken from a variety of angles, the software is able to build a three-dimensional photographic representation of each feature, along with everything around it.
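To make the matching step concrete, here is a minimal sketch of detecting and matching interest points between two overlapping aerial photos, using OpenCV's general-purpose ORB features. This only illustrates the broad technique described above; Pix4D's own detectors and matching pipeline are proprietary, and the file names below are placeholders.

```python
import cv2

# Load two overlapping photos as grayscale images (file names are placeholders).
img_a = cv2.imread("aerial_001.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("aerial_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect visually distinct interest points and compute a descriptor for each.
orb = cv2.ORB_create(nfeatures=5000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Match descriptors between the two photos; each good match is the same
# physical feature seen from two different camera positions.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two photos")
```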
It's the same principle that allows our brains to combine the slightly offset views of a scene captured by each of our eyes into a single 3D image. Pix4D then builds a sparse 3D model of the entire area by combining all the images, using the GPS tags on each photo as a guide. That is followed by a dense 3D model, to which texturing is finally added. The processing time (not counting the initial fly-over) varies with the size of the area, although in one example, 412 photos were combined into one 3D aerial image in less than an hour.
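As a rough illustration of how a matched interest point becomes a 3D coordinate once the camera positions for two photos are known (in practice estimated from the GPS tags and the matches themselves), here is a small sketch using OpenCV's triangulation routine. The camera matrices and pixel coordinates are made-up illustrative values, not real survey data, and this is not Pix4D's actual reconstruction code.

```python
import numpy as np
import cv2

# Simple pinhole camera intrinsics (focal length in pixels, principal point).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 480.0],
              [   0.0,    0.0,   1.0]])

# Two camera poses: the second camera is shifted 5 units along x,
# mimicking the UAS moving between shots.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])

# Pixel coordinates of the same interest point in each photo (2 x N arrays).
pts1 = np.array([[700.0], [500.0]])
pts2 = np.array([[450.0], [500.0]])

# Triangulate to homogeneous 3D coordinates, then normalise.
point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
point_3d = (point_h[:3] / point_h[3]).ravel()
print("Reconstructed 3D point:", point_3d)
```

Repeating this for every matched interest point across all photos yields the sparse point cloud; the dense model and texturing refine and clothe that skeleton.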
Users can choose between a high-speed cloud-based service, a private server hosted within their own network, or a lighter version of the program that runs on a laptop.
The technology was developed at Switzerland’s Ecole Polytechnique Fédérale de Lausanne, and the Pix4D company is a spin-off of that project.
Source: New Scientist