Field of view (FOV) is defined as the angular extent of a natural scene that the camera is able to capture. It depends on several parameters, including the principal point of the lens, lens distortion, sensor size, and focal length. I have created an Excel file to calculate field of view (and other parameters) based on the known measurements for each camera model. This file is attached, and its formulas are embedded.
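As a rough illustration of how sensor size and focal length determine FOV (this is a simple pinhole approximation, not a reproduction of the attached spreadsheet, which also accounts for the principal point and lens distortion):

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Pinhole-camera approximation: FOV = 2 * atan(sensor_width / (2 * f)).

    Ignores lens distortion and principal-point offset, which the
    attached spreadsheet also takes into account.
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Example: a full-frame sensor (36 mm wide) with a 50 mm lens
fov = horizontal_fov_deg(36.0, 50.0)
print(f"{fov:.1f} degrees")  # roughly 39.6 degrees
```

A longer focal length or a smaller sensor narrows the FOV, which is why the same standoff distance frames objects very differently on different camera models.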
It is imperative to consider FOV when planning image acquisition. By capturing the entire object in each image, which requires being far enough away for the FOV to encompass the object, you can ensure that your images will have enough keypoints and reference points to calibrate all cameras and create an accurate model. However, being far from the object also means sacrificing a degree of precision: the camera has a limited number of pixels, and each pixel covers a larger area as the distance to the object increases. The FOV therefore has a large impact on the accuracy-precision tradeoff.
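The area covered by a single pixel, often called the ground sample distance (GSD), grows linearly with distance. A small sketch under the same pinhole assumptions (the camera numbers here are illustrative, not taken from the trials):

```python
def ground_sample_distance_mm(distance_mm: float, sensor_width_mm: float,
                              focal_length_mm: float, image_width_px: int) -> float:
    """Approximate real-world width covered by one pixel (pinhole model)."""
    return (distance_mm * sensor_width_mm) / (focal_length_mm * image_width_px)

# Illustrative numbers: 23.5 mm wide sensor, 50 mm lens, 6000 px wide images
close = ground_sample_distance_mm(150, 23.5, 50, 6000)   # 15 cm standoff
far = ground_sample_distance_mm(1500, 23.5, 50, 6000)    # 1.5 m standoff
print(close, far)  # ten times the distance -> each pixel covers ten times the width
```

Moving ten times farther from the object makes each pixel cover ten times the real-world width, which is the precision cost of fitting the whole object in the FOV.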
I wanted to test this and determine which quality, accuracy or precision, was more important in creating a model, especially across different objects. After all, if the selected object is largely featureless, precision is not of great concern. However, an object such as a rock with many minerals and different grains could benefit from occupying more of the FOV, increasing the precision of the model in determining the size of the grains, the color of the grains, etc.
To do this, I tested four different objects on the same background in the same outdoor weather conditions. I photographed each object at a distance of 15 cm, allowing it to occupy a large area of the camera's FOV. I then photographed each object at the appropriate distance so that the entire object fit into the FOV. Overlap between images was kept roughly constant during the image acquisition process.
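The "appropriate distance" for the second set of trials can be estimated from the same pinhole geometry. A hedged sketch, where the margin factor and the example numbers are my own assumptions rather than values used in the trials:

```python
import math

def standoff_distance_mm(object_width_mm: float, fov_deg: float,
                         margin: float = 1.2) -> float:
    """Distance at which an object of the given width fills the FOV,
    with a safety margin so the whole object stays in frame.

    Derived from tan(FOV/2) = (object_width/2) / distance.
    """
    half_fov = math.radians(fov_deg) / 2
    return margin * (object_width_mm / 2) / math.tan(half_fov)

# Illustrative: a 300 mm wide object with a 40-degree horizontal FOV
print(round(standoff_distance_mm(300, 40)), "mm")
```

In practice the margin also leaves room for the oblique camera angles used during acquisition, since an object fills more of the frame when viewed off-axis.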
The four objects ranged in complexity from almost featureless to very detailed: a rock, a model of a grain mill, a water bucket, and a cardboard box with a few Euro coins placed on it to add keypoints to the images. Pictures of the four objects are attached.
The background was either a white-and-green polka dot patterned tablecloth or a plain white desk, both largely featureless. All trials with photos taken close to the object used the polka dot tablecloth so as to eliminate the concern of large background shift, while the trials where the full object fit in the FOV were taken on the white desk, with a small amount of natural background in each photo. The trials were done in the shade to remove shadows, no reflective surfaces were included, and oblique image acquisition was used in all cases.
The results confirmed my expectation of a precision-accuracy tradeoff, especially in the case of the rock. However, the trials also displayed other interesting phenomena, most notably involving background shift. Background shift is distortion and confusion in the processing software caused by a large change in the background, which occurs when the object is much closer to the lens than the background: in that situation, a few degrees of rotation while taking pictures around the object produces a large change in the background, reducing the number of keypoint matches between images and potentially causing images to calibrate incorrectly. I had assumed this would affect the trials where the object occupied a larger part of the FOV, so I placed each object on a mostly uniform background to try to eliminate it. This did not work perfectly, as there was some excess point detection on those objects, but it was not disastrous. In the trials where the FOV captured the entire object, there were no background-shift issues and the quality of the model was unaffected.
In addition, the complexity of each object rendered the accuracy-precision tradeoff irrelevant in some cases. For example, the water bucket was so featureless that the images did not convey much precision even when the object took up a great deal of the FOV. There were still a great many keypoints, but very few keypoint matches, because most of the surface of the water bucket looked so similar. In cases such as this, it is useless to be concerned with precision, as it will not be captured in the image processing: the close-range water bucket project did not even process, because the low number of keypoint matches left too many cameras uncalibrated. However, when the entire water bucket was included in each image, the project did produce some results.
Finally, the rock was a perfect example of the accuracy-precision tradeoff. In the trials with images taken closer to the rock, fine grains were visible in the triangle mesh and the level of detail was very high, but each edge of the rock included extraneous mesh that was not an accurate depiction of the object. When the FOV included the entire rock, the mesh was not as detailed but formed a very complete, accurate model of the rock.
More data and files on these trials can be found in the attached files, each labeled with the prefix "FOVtest_".