When trying to initialize a Vector using the result of some operation in Eigen, the result seems to differ depending on the syntax used; e.g., on my machine, the assertion at the end of the following code fails:
#include <Eigen/Dense>
#include <cassert>
#include <limits>

int main() {
    const unsigned int n = 25;
    Eigen::MatrixXd R = Eigen::MatrixXd::Random(n,n);
    Eigen::VectorXd b = Eigen::VectorXd::Random(n);
    Eigen::VectorXd x = Eigen::VectorXd::Zero(n);
    Eigen::VectorXd y = Eigen::VectorXd::Zero(n);
    y = R.triangularView<Eigen::Upper>().solve(b);
    x << R.triangularView<Eigen::Upper>().solve(b);
    assert((x-y).norm() < std::numeric_limits<double>::epsilon()*10E6);
    return 0;
}
I am aware of potential rounding errors, but to my understanding the two evaluations of
R.triangularView<Eigen::Upper>().solve(b)
should incur exactly the same precision errors and therefore yield the same result. This also only happens when initializing one variable with operator<< and the other with operator=, but not if both variables are assigned the same way.
When evaluating R.lu().solve(b) for both vectors instead of using only backward substitution on the upper triangular part, the difference is far smaller, but it still exists. Why are the two vectors different if they are assigned in nearly the same, deterministic way?
I tried this code on Arch Linux and Debian on an x86-64 architecture, using Eigen version 3.4.0, with C++11, C++17, and C++20, compiled with both clang and gcc.
3 Answers
The condition number of the matrix that defines the linear system you are solving is on the order of 10⁷. Roughly speaking, this means that after solving this system numerically the last 7 digits will be incorrect, leaving you with roughly 9 correct digits, or an error of around 10⁻⁹. It seems like operator<< and operator= produce slightly different machine code. Since your matrix is that ill-conditioned, we should expect the two computed solutions to differ by around 10⁻⁹.

You can verify this behavior experimentally: if you decrease the condition number of the matrix by adding a diagonal term, the error is significantly reduced.
Note that Eigen heavily relies on function inlining and compiler optimization. For each call to the solve function, the compiler generates an optimized solve routine depending on the context. Hence, operator<< and operator= might allow for different optimizations and hence lead to different machine code. At least with my compiler, if you compute the result into a temporary first, the values for x and y agree.

"Nearly the same" is not "the same". In this case,
y = R.triangularView<Eigen::Upper>().solve(b);
uses the assign_op, while
x << R.triangularView<Eigen::Upper>().solve(b);
uses the CommaInitializer. The CommaInitializer uses the block method to copy the data, while the assign_op appears to do the alignment and other things to establish the Eigen::Matrix.
The type of
R.triangularView<Eigen::Upper>().solve(b)
is Eigen::Solve, not a VectorXd. You can explicitly copy the Solve into a VectorXd with VectorXd(R.triangularView<Eigen::Upper>().solve(b)), or more simply just use auto and let the compiler figure it out.

The "official" answer by Eigen maintainer Antonio Sànchez is the following:
It goes in the same direction as H.Rittich theorizes in their answer: operator<< and operator= simply lead to different code paths, which in turn allow for different optimizations.