Finally, we can actually calibrate our camera and undistort our images!
We'll start with the real camera on our RC and then we'll also calibrate our simulator camera!
Once calibrated on the calibration images, you can reuse the same camera matrix and distortion coefficients for any other image the same camera takes (as long as the focal length hasn't changed)!
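Since the result depends only on the camera, it can pay off to save it once and reload it on later runs instead of recalibrating. A minimal sketch with NumPy (the matrix values below are placeholders standing in for a real calibration result, not actual numbers from my camera):

```python
import numpy as np

# Placeholder values standing in for a real calibration result
cameraMatrix = np.array([[800.0, 0.0, 320.0],
                         [0.0, 800.0, 240.0],
                         [0.0, 0.0, 1.0]])
distCoeffs = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

# Save once after calibrating...
np.savez("calibration.npz", cameraMatrix=cameraMatrix, distCoeffs=distCoeffs)

# ...and reload on any later run without redoing the calibration
data = np.load("calibration.npz")
cameraMatrix, distCoeffs = data["cameraMatrix"], data["distCoeffs"]
```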
I'll be using my RC camera, the EleCam Explorer 4K, which has an advertised 170 degree FOV.
First, we need to print a checkerboard pattern so we can take some calibration rig photos.
If you use another pattern, make sure to update the number of inner corners for the rows and columns in the getObjectAndImagePoints function we made earlier.
After printing it out on A4 paper, you should take at least 10 photos of it from different angles, e.g.:
Also, most action cams with a FOV as large as this one (170°) will have some built-in distortion correction, probably named something like fisheye correction or adjust:
![Fisheye adjust](/images/ai/fisheye adjust.png)
I've intentionally left mine off, both to show how distorted the images are by default at such a large FOV, and to show that they can be undistorted even on cameras that don't have a built-in correction option.
After copying the images to my calibration_images folder and calling the getObjectAndImagePoints script, here's what the detected image points look like:
After getting the image points, we can call the calibrateCamera function once, and then use undistortImage on as many new images as we want. Here's what the previous two images look like undistorted:
First, we need to put a checkerboard object into the simulator so we can take photos of it with our RC:
If you want the cube's proportions to match the original checkerboard size, e.g. A4 paper, click on the cube and edit the scale values in the Inspector panel: set X to 0.297 and Y to 0.21, since A4 is 29.7 cm × 21.0 cm.
After you've got yourself some images, you can run them through the same procedure as you would if you used a real camera. Here's what mine looked like:
You can undistort every image like this before you input it to the neural network. Here's what the code could look like:
```python
# At the beginning of run
getObjectAndImagePoints()
calibrateCamera(inputImage.shape[1::-1])

# For every input image to the NN
undistortedImage = undistortImage(inputImage)
# Pass it along to the NN
```