Photogrammetry and Image Acquisition

An exploration of image acquisition techniques and their effects on the quality of 3D models using Pix4D Mapper Pro software.

I examined different methods of image acquisition in the 3D modeling process in order to determine optimal practices and parameters. Among the parameters tested were field of view, percent overlap of the images, distance of the camera from the object, irregularity of the object, angle of the camera relative to the base of the object, and scaling of the object. Additional tests included an attempt to model an object rotating at a constant angular velocity against a largely reflective background of sky and water, and a complete 360-degree model of a Megalodon sharktooth. Initial trials and experiments have been completed, and data analysis is currently taking place.

In most modern-day 3D modeling applications, images are acquired through one of several methods and then processed with software to produce an accurate model. Once the model is produced, it can be printed on a 3D printer, shared with colleagues, published online, or put to a multitude of other uses. 3D imaging is gaining ground in numerous fields, including construction, surveying, historical preservation, geophysics, land use, urban planning, and renewable energy development. With the strong presence of 3D imaging and modeling today, and its continued growth into the future, it seems appropriate to examine how to create the most accurate, detailed models possible.

As mentioned above, the first crucial step in the process is the acquisition of images. Although this may seem like a fancy way of saying "taking pictures", image acquisition is far more deliberate and specific. Without the proper images, and the proper method of acquiring them, the basis of the model can be flawed before processing even begins, rendering the model invalid. Careful planning and image acquisition, on the other hand, can lead to a fantastic model without much effort on behalf of the user. It is for this reason that I chose to focus my work on the image acquisition process.

The two main schools of thought are Nadir and Oblique image acquisition. In Nadir imagery, photos are taken at a lens angle of 90 degrees to the ground (or close to it) and at a constant elevation above a flat subject. This is optimal for modeling farmland, quarries, and other relatively flat areas that cover a large surface. Oblique imagery, the other method of image acquisition, is optimal for statues, more complicated surfaces, and subjects with a great deal of depth. In Oblique imagery, images are taken at angles ranging from 15 to 75 degrees from perpendicular.

There are many other factors in image acquisition that contribute to the quality of the model, and all of them apply to both methods of aerial photogrammetry. Image overlap is perhaps the most important, as repeated capture of the same points on the object allows for greater accuracy. Field of view, which describes how much of the object is included in each photo, is another important factor. The complexity of the object being modeled also has a great deal to do with how the images are acquired, as some subjects are plain while others have countless minute details. Finally, the angle of the camera relative to perpendicular can affect the quality of the model, as shown in this project.

In addition to testing the above parameters that affect the quality of the model, I explored some other aspects of 3D modeling that were applicable in a few special cases. First, I tested and refined scaled models of many objects during the course of the project. Scaled models are not a new phenomenon, but most are of large areas. Instead, I was able to create scale models of objects smaller than 1 meter with an accuracy of less than a centimeter. Secondly, I was able to create a complete model of a sharktooth, capturing all sides of the object. Most 3D models have some area, no matter how small, that cannot be captured because it is the support for the object: pedestals for statues, the foundations of houses, and so on. Some part of the model is therefore always either left out or fabricated using modeling software such as Blender. Lastly, I attempted to model a moving object on a background of mainly water and sky. This is incredibly difficult because the subject is moving and the background has a low number of reference points. Although this aspect of the project is not yet complete, the methods for modeling this moving statue are being tested and will be elaborated on further.

More information on each of the tests can be found in the project logs below.


FOV_Test_data.ods

Data from the FOV test, including qualitative and quantitative results and images of the objects used.

spreadsheet - 10.54 MB - 07/11/2016 at 13:50


Complexity_Data.ods

Data from the complexity test with architecture models, including qualitative and quantitative results.

spreadsheet - 17.65 kB - 07/11/2016 at 13:49


Overlap_Angle_Test_data.ods

Data from the overlap angle test, including qualitative and quantitative results.

spreadsheet - 14.43 kB - 07/11/2016 at 13:49


FOV_Calculations.xlsx

Excel file for user input to determine FOV and distance from object for specific camera parameters and desired resolution.

sheet - 9.47 kB - 07/11/2016 at 13:43


Bismarkturm_final.mp4

A video of a 3D model of the Bismarkturm, a tower in Konstanz, Germany. This model was created by selecting photos from a collection of images taken during a drone flight. The original collection of photos did not calibrate to produce an accurate model, but after careful selection of images based on results from earlier photogrammetry trials in this project, a quality model was produced.

MPEG-4 Video - 33.64 MB - 07/11/2016 at 13:43


View all 17 files

  • 1 × Pix4D Mapper Pro software: I used a monthly license of Pix4D Mapper Pro for all of the image processing and model creation.
  • 1 × Blender software: I used Blender to join the triangle meshes for the sharktooth and to create the videos found in the project files.
  • 1 × Aorus X-7 Laptop with graphics card: This very reliable, efficient computer provided the computing power required to process the models.
  • 1 × Sony NEX-5N Digital Camera: Used for image acquisition. Specifications for the camera can be found online.
  • 1 × GIMP Photo Editor: In cases where the weather was not optimal, GIMP was used to improve the brightness and contrast of the images.

View all 8 components

  • Oblique Photogrammetry: Examining Angle and Overlap

    Travis Broadhurst • 07/11/2016 at 12:18 • 0 comments

    Oblique photogrammetry was by far the method I used the most. It is also the method with the most degrees of freedom, as Nadir imaging does not allow any flexibility in the angle of the lens relative to the object. Oblique photogrammetry, on the other hand, allows variation in FOV, overlap of the images, and angle of the camera relative to the object. I wanted to see how each of these affected the quality of the model.

    To do this, I selected a basic object, a papier-mâché giraffe about 0.8 meters in height. I placed the giraffe in a local playground on a background of mixed sand and gravel to allow for a large number of keypoints. In this way, I could be sure that background shift or a deficiency of keypoint matches would not factor into this experiment. I then tested the angle of the images as well as the percent overlap, while keeping the entire giraffe in the FOV of the camera. I created rings in the sand of varying radii and mounted the camera on an extendable arm so that the height above the ground stayed constant. I then changed the distance from the object to alter the angle of the camera, and captured images at varying angular intervals in order to test the overlap. I performed 9 trials, testing 3 different angles (15, 45, 75 degrees) and 3 different amounts of overlap (50%, 75%, 90%). Processing was done with Pix4D Mapper, and each model was scaled so that information on the density of triangles could be recorded and compared.
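
    For anyone planning a similar ring of photos, the geometry can be sketched out ahead of time. The snippet below is a rough Python illustration with hypothetical values (not the planning method used here): it estimates the ring radius needed for a given camera angle, and approximates the azimuth step between shots for a target overlap by treating the sector captured per image as roughly the camera's horizontal FOV.

    ```python
    import math

    def ring_radius(camera_height_m, angle_from_vertical_deg):
        """Horizontal distance from the object that gives the desired camera
        angle, assuming the lens is aimed at the base of the object."""
        return camera_height_m * math.tan(math.radians(angle_from_vertical_deg))

    def orbit_plan(horizontal_fov_deg, overlap_fraction):
        """Approximate azimuth step between consecutive shots, and the number
        of shots in one full ring, for a target overlap fraction."""
        step_deg = (1.0 - overlap_fraction) * horizontal_fov_deg
        return step_deg, math.ceil(360.0 / step_deg)

    # Hypothetical values: 1.5 m arm height, 45 degree angle, ~70 degree FOV, 90% overlap
    radius = ring_radius(1.5, 45)
    step, n_photos = orbit_plan(70.0, 0.90)
    print(f"ring radius ~ {radius:.2f} m, step ~ {step:.1f} deg, {n_photos} photos")
    ```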

    Overall, the quantitative and qualitative data both confirmed that the model improves as the percent overlap is increased. Additionally, the best models came from the images taken at 45 degrees, so that seems to be the optimal angle (at least in this case). More analysis of the data is currently in progress, but further testing that includes multiple angles or multiple levels of oblique images could prove useful.

    Although the quality of the model was best at high overlap and at 45 degrees, the level of accuracy and precision a model needs is ultimately determined by the user and depends on the application. Thus, although 90% overlap at 45 degrees produced the best model here, it would be unreasonable for me to recommend it universally; 75% overlap at 15 degrees might provide all the quality someone else in my position needs.

    The data for this experiment is included in the project files under "Overlap_Angle_Test"

    Thank you.

  • Scale Models

    Travis Broadhurst • 07/11/2016 at 11:53 • 0 comments

    One commercial application of 3D modeling is creating scale models of different areas. This has largely been used in the construction industry, for example to create a model of a house, map a work site, or measure the amount of gravel in a stockpile without having to use heavy machinery to do so. However, I thought it could also provide a reference for models of smaller objects, such as the sharktooth or historical artifacts. Additionally, Pix4D Mapper Pro reports the density of triangles in the mesh per cubic meter. This is a useful measurement of the precision of the finished model, as a higher number of triangles per cubic meter denotes a more exact model that can show more detail. However, it is not an accurate measurement if the scale is not included and the model is not calibrated to that scale.

    In order to create scale models, I would include a meter stick in the photos and create the model from these photos just like any other model. I would then create manual tie points in the Pix4D Mapper Pro software, add a scale constraint, and reoptimize the project. This gave excellent results and allowed for measurement of any other item or length in the project. Surface areas and volumes can also be reliably and accurately calculated in a scaled model. The only concern with this method was that the meter stick would appear in the final model. To avoid this, I would create the model first and then measure the distance between two well-defined points with the meter stick, adding the scale constraint without having to include the meter stick in the model. This was done in subsequent trials and models to obtain an accurate measurement of the density of triangles in the mesh. After many trials and attempts, I was eventually able to use the scale constraint and measure items in the model to an accuracy of half a centimeter.
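
    The arithmetic behind a scale constraint is straightforward. The sketch below is a plain Python illustration with made-up numbers (not Pix4D's internal implementation): one known distance fixes the linear scale factor, and areas, volumes, and the triangles-per-cubic-meter density follow from its powers.

    ```python
    # Hypothetical example: two tie points measure 0.731 units apart in the
    # unscaled model, but the meter stick says the true distance is 1.000 m.
    measured_model_units = 0.731
    true_distance_m = 1.000

    s = true_distance_m / measured_model_units   # linear scale factor

    length_model = 0.20        # some other length measured in the model
    area_model = 0.035         # a surface area measured in the model
    volume_model = 0.0048      # a volume measured in the model
    tri_density_model = 1.2e6  # triangles per cubic (model) unit

    print("length (m):       ", length_model * s)           # lengths scale by s
    print("area (m^2):       ", area_model * s**2)          # areas scale by s^2
    print("volume (m^3):     ", volume_model * s**3)        # volumes scale by s^3
    print("triangles per m^3:", tri_density_model / s**3)   # densities scale by 1/s^3
    ```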

    The commercial application of this practice to approximate materials needed to extend the roof of a biergarten in Konstanz is awaiting third party permissions.

    Thank you.

  • Object Complexity

    Travis Broadhurst • 07/11/2016 at 11:42 • 0 comments

    The complexity of the object is also a crucial component of 3D modeling. Most software that processes photos for 3D modeling is similar to Pix4D Mapper in that it recognizes keypoints in the images and matches those keypoints between photos. If there are more keypoints, and thus a more complex object, more keypoints can be matched between images and more 3D points can be determined. With more 3D points, the triangle mesh has better resolution and is more precise, yielding a much better model.
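
    Pix4D's matching pipeline is proprietary, but the basic idea can be illustrated with OpenCV's SIFT detector and a ratio-test match between two overlapping photos. This is only a sketch with hypothetical file names, assuming the opencv-python package; a feature-rich object yields far more keypoints and matches than a uniform one.

    ```python
    import cv2

    # Two overlapping photos of the object (hypothetical file names)
    img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute descriptors in each image
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep only confident matches (Lowe's ratio test)
    matcher = cv2.BFMatcher()
    candidates = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in candidates if m.distance < 0.75 * n.distance]

    # More keypoints and matches mean more 3D points, and so a denser mesh
    print(f"{len(kp1)} and {len(kp2)} keypoints, {len(good)} good matches")
    ```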

    These trials, based largely on a public architecture model display in Konstanz, show how the complexity of an object can add to or detract from a model, even with a similar photo acquisition method. The display included over 50 models of different buildings and city plans from around the world, all made by the architecture college here in Konstanz. I took pictures and made models of 25 of those examples. They were of varying complexity, and I captured images of each using Oblique photogrammetry. In all the cases observed, the best models were the ones of very complex objects. In many cases, I had to manually calibrate some of the images of the more uniform objects just to get a mesh that showed the majority of the object. Quantitative and qualitative data for this part of the project are included in the Excel file labeled "Architecture_Photo_Acquisition".

    The other trial that is a perfect example of the effect of complexity is the Bismarkturm. The Bismarkturm is a tower commemorating Otto von Bismarck, located in Konstanz, Germany. I used images from a drone flight around the tower to create a model and the video that is included in the project files. However, the initial images from the drone flight did not produce an accurate model, even though there were plenty of photos with plenty of overlap. I noticed that many of the photos either included large parts of the sky or included too much of the tower within the FOV of the camera. Since the tower is largely featureless, these images diluted the keypoint matches and actually detracted from the model. I then chose only the images that included the entire tower along with some of the nearby scenery to add complexity to the scene. The second attempt was much more successful and produced an accurate model with less than half of the images originally used.

    The FOV trials also support the observations on complexity noted above.

    Thank you.

  • Imperia: Modeling a Moving Object

    Travis Broadhurst • 07/11/2016 at 11:27 • 0 comments

    Imperia is a statue in the town harbor of Konstanz, Germany. It was built in the 1990s to commemorate Konstanz as a town frequented by German kings of the past and as the host of the Council of Constance, during which the only papal election held north of the Alps took place. The statue rotates on its base once every 4 minutes, 24 hours a day and 7 days a week; it is 9 meters tall and stands against a background of the Bodensee (water) and sky. These factors combine to make it a difficult, if not impossible, object to model. However, it is of considerable interest, as it is the symbol of Konstanz, and a 3D model could be very beneficial for historical preservation in the case of floods (which happen every year).

    The model is difficult because of the rotation and the background. If the object moves relative to its surroundings, keypoints detected on the object are invalidated from one image to the next, since the same points on the object appear at different positions relative to the background. To make matters worse, the background is composed of sky and water, both of which are reflective surfaces with a low number of keypoints. If the number of keypoints is low, the number of matches will also be low and the model will be inaccurate.
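
    To put numbers on it, one rotation every 4 minutes is slow but far from negligible over a capture session. A quick back-of-the-envelope Python calculation, using a hypothetical radius for a feature near the statue's edge:

    ```python
    import math

    rotation_period_s = 4 * 60           # one full rotation every 4 minutes
    deg_per_s = 360 / rotation_period_s  # 1.5 degrees of rotation per second

    # Hypothetical: a feature 1.5 m from the rotation axis, 10 s between two photos
    radius_m = 1.5
    gap_s = 10
    arc_m = math.radians(deg_per_s * gap_s) * radius_m

    print(f"{deg_per_s:.1f} deg/s; a point at {radius_m} m drifts ~{arc_m*100:.0f} cm in {gap_s} s")
    ```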

    In order to model Imperia, I first tried using images captured during a drone flight around the statue. I carefully chose images to try to get the statue in the same rotational position but with the drone at a different position around it. Although such Oblique photos would normally produce a decent model, these images did not calibrate due to the low number of keypoints, since the majority of the background was water and sky.

    The next attempt, currently in progress, is to fabricate a background for the statue and use image editing software (GIMP) to edit the photos. There is a rotunda in a cathedral in Konstanz that has a circular wall. I have created a model of this wall and have taken images of Imperia at different angles. I have also edited the images of Imperia to make the background transparent. My next step is to use Blender to take screenshots of the mesh of the rotunda wall every 10 degrees. I will then superimpose the images of Imperia (also taken every 10 degrees of her rotation) onto the images of the wall to create images as if Imperia were in the rotunda and not at the harbor. This has required a great deal of trigonometric and optical calculation to ensure that the scaling of Imperia is accurate and that enough of the background is included in the FOV of the camera. I hope that this effort will prove fruitful, and I will keep this project log updated.
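
    The scaling calculation itself reduces to the pinhole camera model. The sketch below (Python with Pillow, using hypothetical file names and numbers; the actual editing is being done in GIMP and Blender) shows the idea: compute how tall the 9 m statue should appear at the chosen backdrop distance and focal length, then resize the transparent cutout to that height before pasting it onto the rendered wall.

    ```python
    from PIL import Image

    def apparent_height_px(real_height_m, distance_m, focal_mm, sensor_height_mm, image_height_px):
        """Pinhole-model estimate of an object's height in pixels."""
        focal_px = image_height_px * focal_mm / sensor_height_mm
        return focal_px * real_height_m / distance_m

    # Hypothetical setup: backdrop rendered as if shot from 20 m with a 16 mm lens
    # on an APS-C sensor (15.6 mm tall, 3264 px of image height)
    target_px = apparent_height_px(9.0, 20.0, 16.0, 15.6, 3264)

    statue = Image.open("Imperia_cutout_010deg.png").convert("RGBA")   # transparent background
    backdrop = Image.open("rotunda_wall_010deg.png").convert("RGBA")

    scale = target_px / statue.height
    statue = statue.resize((round(statue.width * scale), round(target_px)))

    # Place the statue at the horizontal center, standing on the bottom edge
    x = (backdrop.width - statue.width) // 2
    y = backdrop.height - statue.height
    backdrop.paste(statue, (x, y), statue)  # the cutout's alpha channel acts as the mask
    backdrop.save("Imperia_composite_010deg.png")
    ```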

    I have uploaded a video file of the altar at the center of the rotunda, and will also upload images of Imperia and the mesh of the rotunda wall with the tag "Imperia_".

    Thank you.

  • Field of View Tests

    Travis Broadhurst • 07/11/2016 at 11:04 • 0 comments

    Field of view (FOV) is defined as the angle of a natural scene that the camera is able to capture. This depends on a few parameters including the principal point of the lens, lens distortion, size of the sensor, and focal length. I have created an Excel file to calculate field of view (and other parameters) based on the known measurements for each camera model. This file is attached, and its formulas are embedded.

    It is imperative to consider FOV when planning image acquisition. By capturing the entire object in each image, and therefore being far enough away for the FOV to encompass the object, you can ensure that your images will have enough keypoints and reference points to calibrate all cameras and create an accurate model. However, being far from the object also means that a degree of precision must be sacrificed, since the camera has a limited number of pixels and each pixel covers a larger area as the distance from the object increases. The FOV therefore has a large impact on the accuracy-precision tradeoff.
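
    The exact formulas are in the attached FOV_Calculations.xlsx; as a rough Python equivalent (pinhole model only, ignoring lens distortion, with hypothetical camera numbers), the snippet below shows the same tradeoff: the distance needed for an object to fit in the FOV and the footprint each pixel covers at that distance.

    ```python
    import math

    def horizontal_fov_deg(focal_mm, sensor_width_mm):
        """Pinhole-model horizontal field of view in degrees."""
        return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))

    def distance_to_fit(object_size_m, fov_deg):
        """Distance at which an object of the given size just fills the FOV."""
        return (object_size_m / 2) / math.tan(math.radians(fov_deg) / 2)

    def footprint_per_pixel_mm(distance_m, focal_mm, sensor_width_mm, image_width_px):
        """Approximate width of the patch each pixel covers at that distance."""
        return 1000 * distance_m * sensor_width_mm / (focal_mm * image_width_px)

    # Hypothetical numbers: 16 mm lens, 23.4 mm wide APS-C sensor, 4912 px wide images
    fov = horizontal_fov_deg(16.0, 23.4)                  # ~72 degrees
    d = distance_to_fit(0.8, fov)                         # ~0.55 m for a 0.8 m object
    gsd = footprint_per_pixel_mm(d, 16.0, 23.4, 4912)     # ~0.16 mm per pixel
    print(f"FOV ~ {fov:.1f} deg, distance ~ {d:.2f} m, ~{gsd:.2f} mm per pixel")
    ```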

    I wanted to test this and determine which quality was more important (accuracy vs. precision) in creating a model, especially for different objects. After all, if the object selected is largely featureless, precision is not of great concern. However, an object such as a rock with many minerals and different grains could benefit from occupying more of the FOV to increase the precision of the model in capturing the size of the grains, the color of the grains, etc.

    In order to do this, I tested four different objects on the same background in the same outdoor weather conditions. I tested each object at a distance of 15cm to allow each object to occupy a large area of the FOV of the camera. I then tested each object at the appropriate distance so that the entire object fit into the FOV of the camera. Overlap of the images was kept roughly constant during the image acquisition process.

    The four objects ranged in complexity from almost featureless, to very detailed. The objects included a rock, a model of a grain mill, a watering bucket, and a cardboard box with a few Euro coins placed on the box to add keypoints to the images. Pictures of the four objects are attached.

    The background was either a white-and-green polka dot tablecloth or a plain white desk, both largely featureless. All of the trials with photos taken close to the object were done on the polka dot tablecloth, so as to eliminate the concern of large background shift. The trials where the full object was included in the FOV were taken on the white desk, with a small bit of natural background in each photo. The trials were done in the shade to remove shadows, and no reflective surfaces were included. Oblique image acquisition was used in all cases.

    The results confirmed my expectation of a precision-accuracy tradeoff, especially in the case of the rock. However, the trials also displayed other interesting phenomena. Background shift is distortion and confusion in the processing software caused by a large apparent shift of the background, which occurs when the object is much closer to the lens than the background; in such cases, a few degrees of rotation while taking pictures around the object produces a large change in the background, which reduces the number of keypoint matches and can cause images to be calibrated incorrectly. I had assumed this would affect the trials where the object occupied a larger part of the FOV, so I placed each object on a mostly uniform background to try to eliminate it. This did not work perfectly, as there was some excess point detection on those objects, but it was not disastrous. In the trials where the FOV captured the entire object, there were no background shift issues and the quality of the model was unaffected.

    In addition, the complexity of each object rendered the accuracy-precision tradeoff invalid...


  • Sharktooth Model Trials

    Travis Broadhurst • 07/11/2016 at 10:01 • 0 comments

    It was a lengthy process, but I have finally created a complete model of the Megalodon sharktooth. This is difficult to do, for the simple reason that an object must be stationary in order for a model to be made of it. If the object is stationary, it has to have an attachment point. In most cases, this attachment point is a pedestal for the object, or the ground if the object rests on the ground. Because of this support, the model cannot be complete, since the camera cannot see the surface of the object that is covered up by the support.

    One solution is simply to work around and ignore the support. In this case, the hidden surface either has to be interpolated by the 3D modeling software or must be fabricated and merged with the existing, incomplete mesh using Blender or similar software. Although this method works, it is neither optimal nor always accurate.

    Another possible solution to the support problem is to create a support that is as small as possible. For example, I could have drilled a hole through the sharktooth and tied a thin fishing line through that hole. This would be a very minimal support, and the fishing line would likely not even show up. However, since the sharktooth is a collector's item and is very valuable, it is not practical to drill a hole through it.

    Instead, I was able to create the attached video by merging two meshes of the sharktooth. I used Pix4D Mapper and images taken of the top and bottom of the tooth to create the two triangle meshes, and then used Blender to merge them into one mesh that includes fine detail of all surfaces of the sharktooth.
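
    For reference, the join step can also be scripted with Blender's Python API. This is only a minimal sketch, assuming the two aligned halves are already loaded in the scene under hypothetical object names; it is not a record of the exact steps taken in Blender.

    ```python
    import bpy

    # Assumes two aligned mesh objects named "tooth_top" and "tooth_bottom"
    # (hypothetical names) are already in the scene.
    top = bpy.data.objects["tooth_top"]
    bottom = bpy.data.objects["tooth_bottom"]

    # Select just the two halves, with one of them as the active object
    bpy.ops.object.select_all(action='DESELECT')
    top.select_set(True)
    bottom.select_set(True)
    bpy.context.view_layer.objects.active = top

    # Join the selected meshes into the active object
    bpy.ops.object.join()
    top.name = "sharktooth_complete"

    # In practice the seam between the halves still needs cleanup,
    # e.g. Blender's "Merge by Distance" in Edit Mode.
    ```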

    Before succeeding with this, I made many unsuccessful attempts, including supporting the sharktooth on Styrofoam tabs to reduce the coverage of the support. This covered a great deal of the tooth and was not successful. I also tried taking images of the top and the bottom, calibrating both, and then merging the two projects into one using Pix4D Mapper. This was not successful because of the sharp angle of the tooth: the sharp angle creates a large difference in camera angle between the top and the bottom, compared to the much smaller difference there would be when modeling something like a basketball.

    Please see the attached images for more examples and please view the files in the project with the tag "Sharktooth_" at the beginning.

    Thank you.

View all 6 project logs
