Week 8: Breakpoint and Retracing Steps
April 20, 2024
Before I got into the testing phase of inverse, I was made aware of a huge miscalculation: the application of the matrix inverse I had found in the mesh command only took 3×3 matrices as input. When I searched the repository for the other applications, I came up with the same result. In fact, even though templates for n×1 and n×n Eigen matrices were defined in the types.h file, the variable-dimensioned matrices were only used to verify whether an object was a matrix of that type.
This was a pretty big setback for me, because it means there isn't really any room for optimization with inputs that small. While it is disappointing not to be able to apply BSI yet to a linear algebra operation as standard as the inverse, it is definitely something I want to revisit in the future, because BSI is so well suited to linear algebra operations that consist only of row operations (partial pivoting). For now, it's back to the drawing board for me.
compute_depthmaps
To find which operations take the longest, I put logging statements and timer functions throughout the code, in addition to the ones already in place for reports. Here's an example of some of the output (in microseconds) from compute_depthmaps, which generates a dense point cloud from depth maps that store the location of points relative to each camera position. Many of the function names in the output are repeated because of parallel computing.
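As a rough illustration of the kind of instrumentation I added (the actual logging in OpenSfM differs), a minimal timing decorator can report each call's duration in microseconds. The function name `slow_sum` is just a stand-in for a profiled routine:

```python
import time
from functools import wraps

def timed(fn):
    """Print how long fn takes, in microseconds, each time it is called."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_us = (time.perf_counter() - start) * 1e6
        print(f"{fn.__name__}: {elapsed_us:.0f} us")
        return result
    return wrapper

@timed
def slow_sum(n):
    # Stand-in for any function whose runtime we want to measure.
    return sum(range(n))

slow_sum(100_000)
```

Sprinkling a decorator like this over candidate functions gives per-call timings without restructuring the code.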
From here, I found my first candidates for implementation, PatchMatchForwardPass and PatchMatchBackwardPass, which apply an update function to every pixel of an 852×640 depthmap with a patch size of 8.
I will have to read the function more to see whether it’s possible to run the algorithm on the whole array of points at once rather than one by one.
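The kind of rewrite I have in mind is the usual loop-to-array transformation. The update rule below is purely hypothetical (clamp a depth value and blend it with a prior), not the actual PatchMatch update, but it shows the per-pixel loop and its whole-array equivalent:

```python
import numpy as np

# Hypothetical per-pixel update: clamp each depth to [lo, hi] and
# average it with a prior value. NOT the real PatchMatch update.
def update_loop(depth, prior, lo=0.1, hi=100.0):
    out = np.empty_like(depth)
    for i in range(depth.shape[0]):
        for j in range(depth.shape[1]):
            d = min(max(depth[i, j], lo), hi)
            out[i, j] = 0.5 * (d + prior[i, j])
    return out

# The same update applied to the whole array at once.
def update_vectorized(depth, prior, lo=0.1, hi=100.0):
    return 0.5 * (np.clip(depth, lo, hi) + prior)
```

Whether the real PatchMatch passes admit this transformation depends on whether each pixel's update reads its neighbors' already-updated values, which is exactly what I need to check by reading the function.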
reconstruct
The next-longest operation was the global bundle, which bundle adjusts all of the cameras/images in the current reconstruction to fit the information from the new image.
Since the longest-running operation is the Ceres solve call, I also have to keep in mind that I may need to either add the Ceres library as a subdirectory of the project (instead of just including it from my local system) so that I can edit it, or write a replacement for the Ceres solver.
Other options
My remaining candidates are the NumPy operations in reconstruct that do operate on large vectors/matrices. While they don't account for a large percentage of OpenSfM's total running time, they could be valuable as stand-alone test cases.
Many NumPy operations can be done using row operations alone, because many of them rely on solving systems of equations. The LAPACK routines aren't entirely suited to BSI, since they work by breaking the matrix into submatrices and operating on smaller blocks, which BSI is not optimized for; but BSI can do elementary row eliminations in linear time with reduced space complexity. That makes it straightforward to convert a matrix to reduced row-echelon form, and performing the same row operations on an identity matrix produces the inverse. This is the method I opted for when implementing inverse using BSI.
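The row-operations-only inversion described above can be sketched in plain NumPy (standing in for the BSI data structure): augment A with the identity, reduce the left half to reduced row-echelon form with partial pivoting, and read the inverse off the right half.

```python
import numpy as np

def inverse_by_row_ops(a):
    """Invert a square matrix using elementary row operations only
    (Gauss-Jordan with partial pivoting). Plain NumPy stands in for
    BSI here; this is a sketch of the method, not the BSI code."""
    n = a.shape[0]
    aug = np.hstack([a.astype(float), np.eye(n)])  # [A | I]
    for col in range(n):
        # Partial pivoting: swap up the largest remaining entry in this column.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]            # scale the pivot row to 1
        for row in range(n):                 # eliminate the column everywhere else
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                        # the right half is now A^-1
```

Every step here is an elementary row operation (swap, scale, or add a multiple of a row), which is the property that makes the algorithm a good fit for BSI.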
For the upcoming week, I will likely be reading through the library to understand my options and, hopefully, devise a plan to implement them, since they are not conventional linear algebra or vector operations.