The abstractions inside Eigen are there to enable powerful compile-time optimisations, such as compiling the addition of
three vectors `Eigen::VectorXf x = a + b + c;` into a single loop instead of naively allocating a temporary to evaluate `Eigen::VectorXf tmp = b + c;` and then evaluating `Eigen::VectorXf x = a + tmp;`.
This logic is omnipresent in Eigen and is crucial to its performance. It is also used to provide specialized versions of some decompositions when the matrix's size is known at compile time.
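To make that concrete, here is a minimal sketch of what the expression-template machinery buys you (the variable names and sizes are mine, just for illustration):

```cpp
#include <Eigen/Dense>

int main() {
    Eigen::VectorXf a = Eigen::VectorXf::Random(1000);
    Eigen::VectorXf b = Eigen::VectorXf::Random(1000);
    Eigen::VectorXf c = Eigen::VectorXf::Random(1000);

    // `a + b + c` builds a lightweight expression object; nothing is
    // computed until the assignment, which runs one fused loop roughly
    // equivalent to:
    //   for (int i = 0; i < x.size(); ++i) x[i] = a[i] + b[i] + c[i];
    Eigen::VectorXf x = a + b + c;

    return x.size() == 1000 ? 0 : 1;
}
```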
I've never used Eigen; could you explain specifically what lazy evaluation implies for this expression? Is it to do with the `diagonal()` call constraining the amount of work to be done?
Because when you string a couple of those please-compute-eigenvectors-and-also-eigenvalues names together in nested calls, you end up with a multi-page expression in no time.
Besides, the standard name for the eigenvector/eigenvalue routines in LAPACK is `ev`, as in `sgeev`/`dgeev`.
I like Eigen, but its documentation is very lacking (it only covers basic usage and toy examples). However, that isn't what caused me to abandon the library; it was the lack of support for 3-dimensional matrices.
While `Derived::Scalar` could very well be replaced by `double` in that case, you need the templated function to capture the expression-template type that represents the `u - v` computation. This lets Eigen compute `u[i] - v[i]` and the dot product inside the same inner loop.
Without the templated function, the compiler would first evaluate the subtraction into a temporary vector, and then compute the dot product on that vector.
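For what it's worth, here is a hedged sketch of the two variants (the function names are mine, not Eigen API; `squaredNorm()` is the dot product of the expression with itself):

```cpp
#include <Eigen/Dense>
#include <cmath>

// Templating on the expression type lets the call site pass (u - v) as an
// unevaluated expression template rather than a materialized vector.
template <typename Derived>
typename Derived::Scalar squaredDistanceExpr(const Eigen::MatrixBase<Derived>& diff) {
    // For diff = u - v, each coefficient u[i] - v[i] is evaluated on the
    // fly inside the same loop that accumulates the sum of squares:
    // one pass, no temporary.
    return diff.squaredNorm();
}

// A non-templated signature pins the argument type, so (u - v) must first
// be evaluated into a temporary VectorXd before the function body runs.
double squaredDistanceEager(const Eigen::VectorXd& diff) {
    return diff.squaredNorm();
}

int main() {
    Eigen::VectorXd u = Eigen::VectorXd::Random(1000);
    Eigen::VectorXd v = Eigen::VectorXd::Random(1000);

    double a = squaredDistanceExpr(u - v);   // fused: one loop
    double b = squaredDistanceEager(u - v);  // temporary, then reduction

    // Same result either way; only the generated code differs.
    return std::abs(a - b) < 1e-9 ? 0 : 1;
}
```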
In programming it is typical to have an `eig` function, or something of the sort, which returns the eigenvalues of a matrix. `eig(M)` seems no less vomit-inducing than `M.eigenvalues()`. Usually when mathematicians want eigenvalues they write something like "Given a matrix M with eigenvalues \lambda" and the eigenvalues just show up.