Almost is very far from done. And if you have REALLY done it, then somebody else will probably do it as well in the (not so) near future.
Also, what is the degree of the polynomial solution? If it is high, then a fast approximate solution might be preferable to a slower exact one (example: Simplex vs. Ellipsoid algorithms for LP). If the solution is not linear or quadratic at most, this hugely decreases your potential market.
If I were in such a position, I would look for problems where I can beat approximate algorithms on precision/time.
Years ago I came up with the first algorithm to solve a problem in a niche area in polynomial time. It uses some fairly contrived techniques; the power of the polynomial, if we really tried to estimate it, would probably have been >=10, or even >=20. But some time later another group of researchers expanded a technique used in the algorithm to help solve the problem in a much more practical manner.
I also know this approach as Dynamic Programming. While the subproblem is not cleanly separated, it is present in the sense that you consider one element first and then deal with the rest later, never touching the first again.
And yes, the runtime here is polynomial, but it is polynomial in the price k, which needs only log k bits to encode in the input. Therefore it is exponential in the encoding length. Such algorithms are called pseudo-polynomial.
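To make the distinction concrete, here is a minimal sketch of that kind of pseudo-polynomial DP on an assumed knapsack-style formulation (the item prices and the target k below are made-up placeholders, not the actual problem from the thread):

```python
# Pseudo-polynomial DP sketch for an assumed subset-sum/knapsack-style problem:
# decide whether some subset of the prices sums exactly to k.
# The table has k + 1 cells and each of the n items is processed once,
# so the work is O(n * k): polynomial in the numeric value k, but k only
# takes about log2(k) bits to write down, so this is exponential in the
# encoding length.

def subset_sum(prices: list[int], k: int) -> bool:
    reachable = [True] + [False] * k          # reachable[s]: can some subset sum to s?
    for p in prices:                          # consider one element, then never touch it again
        for s in range(k, p - 1, -1):         # iterate downwards so each item is used at most once
            if reachable[s - p]:
                reachable[s] = True
    return reachable[k]

if __name__ == "__main__":
    print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
```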
There are a lot of results like that. There are some pretty fast algorithms that are provably "not so bad" compared to the optimal solution and run in polynomial time. The problem is that you generally can't combine these with problem reductions to get a "not so bad" solution to a different problem (at least as far as I know).
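A textbook instance of that kind of guarantee, sketched below, is the maximal-matching 2-approximation for minimum vertex cover (the graph in the example is made up):

```python
# Greedily pick any uncovered edge and take both of its endpoints. Every
# optimal cover must contain at least one endpoint of each picked edge,
# so the result is at most 2x the optimum.  Runs in O(V + E).

def vertex_cover_2approx(edges: list[tuple[int, int]]) -> set[int]:
    cover: set[int] = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered
            cover.add(u)
            cover.add(v)
    return cover

if __name__ == "__main__":
    # small made-up graph; the optimal cover is {1, 3} (size 2), so the
    # approximation is guaranteed to return at most 4 vertices
    print(vertex_cover_2approx([(1, 2), (1, 3), (3, 4), (3, 5)]))
```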
The Fastest and Shortest Algorithm for All Well-Defined Problems [1]
An algorithm M is described that solves any well-defined problem p as quickly as the fastest algorithm computing a solution to p, save for a factor of 5 and low-order additive terms.
(The constants in the "low order" terms are... large. Basically this is a delightful reductio of asymptotic analysis)
It appears to me that what they've accomplished is to devise an algorithm that can, in polynomial time, find a solution to the ATSP with cost no worse than a constant multiple of the optimal solution.
Can any domain experts confirm or deny my interpretation?
You can also just use the Christofides-Serdyukov algorithm. It's fast and it actually has a performance guarantee: for metric TSP it always produces a tour that is at most 1.5 times the length of the optimum.
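Christofides-Serdyukov itself needs a minimum-weight perfect matching on the odd-degree vertices, which is a fair amount of code, so here is only a sketch of the simpler MST-doubling 2-approximation it builds on (the distance matrix is a made-up example and is assumed to be metric; Christofides replaces the doubling step with a matching to get the 1.5 factor):

```python
import heapq

def mst_tour(dist: list[list[float]]) -> list[int]:
    """MST-doubling 2-approximation for metric TSP (sketch, not Christofides)."""
    n = len(dist)
    # Prim's algorithm to build a minimum spanning tree rooted at vertex 0
    in_tree = [False] * n
    children: list[list[int]] = [[] for _ in range(n)]
    heap = [(0.0, 0, 0)]                     # (edge weight, vertex, parent)
    while heap:
        w, u, p = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if u != 0:
            children[p].append(u)
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (dist[u][v], v, u))
    # Preorder walk of the MST = shortcut Euler tour; at most 2x optimal on metric inputs
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

if __name__ == "__main__":
    # made-up symmetric metric instance
    d = [[0, 2, 9, 10],
         [2, 0, 6, 4],
         [9, 6, 0, 8],
         [10, 4, 8, 0]]
    print(mst_tour(d))  # visiting order; close the cycle by returning to the start
```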
Right, but per the article, Korf’s algorithm takes hours and Thistlethwaite’s algorithm doesn't necessarily find the optimal solution. I wonder if a self-learning AlphaZero approach could result in finding optimal solutions in seconds.
Hah ok, point taken. Here's the paper I found about the traveling salesman problem - it looks like a version of Grover's algorithm, so I think that means it provides a speedup but not polynomial time? Paper is paywalled: https://link.springer.com/article/10.1134/S1063776123080095#....
Solving it in general is quite trivial - solving it 'fast' is another story. I can write a backtracking algorithm or a constraint propagation one on the spot. Yet I am not quite sure how efficient the latter would be; the former is obviously a brute-force one.
In any case, by a fast polynomial solution I meant one with a low degree. Let's assume this algorithm is as fast as existing approximate ones.
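For what it's worth, here is a generic skeleton of the backtracking approach mentioned above, on a hypothetical CSP-style formulation (variables with finite domains plus a pairwise consistency check); the actual problem isn't spelled out in the thread, and the constraint-propagation step is only indicated as a comment:

```python
# Generic backtracking search over a hypothetical CSP: assign variables one
# at a time, keep only values consistent with everything assigned so far,
# and backtrack on dead ends.

def backtrack(assignment, domains, consistent):
    if len(assignment) == len(domains):
        return assignment                       # every variable assigned
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        if all(consistent(var, value, other, x) for other, x in assignment.items()):
            # optional constraint propagation / forward checking would go here:
            # prune values inconsistent with (var, value) from the remaining domains
            result = backtrack({**assignment, var: value}, domains, consistent)
            if result is not None:
                return result
    return None                                  # dead end, backtrack

if __name__ == "__main__":
    # toy instance: three variables that must all take different values
    doms = {"a": [1, 2], "b": [1, 2, 3], "c": [1, 2, 3]}
    print(backtrack({}, doms, lambda v1, x1, v2, x2: x1 != x2))
```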