Polynomial: fixes for Stein gcd #45
Open
jeremy-murphy wants to merge 41 commits into boostorg:develop from
Conversation
Define constant_coefficient() for readability.
Also adds negate() method for in-place negation.
Contributor (Author)
Good news is that I think I've perfected Stein gcd for Z[x]. It's just R[x] that remains tricky.
# Conflicts: # test/test_polynomial.cpp
# Conflicts: # include/boost/math/common_factor_rt.hpp
Contributor (Author)
After not looking at this for months, I've come back to it and worked through some issues. What remains, though, is a lingering question as to why the Euclidean algorithm does not work consistently for R[x], and the need to compute the gcd of floating-point numbers, using `std::fmod` in place of `operator%`.
The answer is yes, for now.
Contributor (Author)
Oh dear, I kind of did forget about this. :(
Still slightly a work in progress...
Primarily, this PR is to fill in the missing gaps for making Stein gcd actually support polynomials.
However, in working on it, I realized that there is an awkward expectation of gcd, which Stepanov actually rails against in this video starting at 1:22:07. And although I am inclined to agree with him, he is arguing against Knuth, so I don't expect everyone to jump on board easily.
Essentially, he argues that, for example, `gcd(-6, -9) == -3` is perfectly acceptable, rather than requiring the absolute value, because the defining requirement of the greatest common divisor of u and v is that it be divisible by all other common divisors of u and v; by that requirement, multiplication by a unit (-1) is irrelevant.

This becomes particularly apparent with polynomials, where the sign and thus the absolute value is... what? A convention at best. And what about magnitude? Is `4x + 2 > 2x + 1`? Again, only by lexicographical convention; for divisibility in F[x], u(x) and c·u(x) are equivalent. With Gaussian integers it becomes even harder to judge which divisor is greater by magnitude, since there are four units: 1, -1, i and -i.
So, my sense is that gcd has been defined in terms of integers for so long that the assumption of total ordering and thus providing "greatest" not just in terms of divisibility but also in terms of positive magnitude has become all melded together. But really, there are two separate things going on: firstly, the true gcd computation, and secondly, normalization to a unique value.
I think we have to stick with the normalization because convention demands it, but it's mathematically icky when we start making gcd generic. I suspect we will find cases where the gcd is defined but the normalization is not. It will be interesting. :)
And another thing! Although this implementation appears to be correct, the inherent imprecision of floating point is a major factor affecting useful operation of this algorithm. So although simple textbook examples with nice round divisors that are multiples of 2 work like a charm, anything less than ideal can end in completely bogus results.
An alternative implementation of the crucial `subtract` function, from Algorithmic Cryptanalysis by Antoine Joux, uses multiplication and no division, so it avoids the imprecision of fractions but instead grows the coefficients to enormous size, returning values 8 (decimal) orders of magnitude larger than expected. The interesting thing is that, because there is no longer any division of coefficients, this version also works for integer coefficients. But aside from that, we can treat these huge floating-point coefficients as integers and normalize them by dividing through by their gcd using `gcd_range`. I think this could actually be a solution...