
Not sure I agree with you on SNOPT. There are many, many tools out there that I see people use for convex (and non-convex) optimization. Especially considering that a single license costs $6,000, I would be surprised if SNOPT has even captured a plurality of the solvers in use.

FWIW, in my department (Electrical Engineering), no one I know of is using SNOPT for their work.




Also in EE, and I don’t know anybody using SNOPT for solving any kind of optimization (of which there are many).

Would you consider proprietary solvers or optimization software in that category?

I strongly disagree. This has very little to do with Excel Solver. This seems to be Microsoft's take on large-scale, real-world optimization problems, possibly with millions of variables and constraints. Can you do that with Excel? I doubt it. The Solver Foundation framework can do Constraint Programming, Quadratic Programming, and Mixed-Integer Programming. This is serious optimization, not the kiddie stuff Excel users deal with. SF seems to be based on commercial C++ solvers such as MOSEK.

I would say that open source is not of great interest in optimization software. The idea is to write as little code as possible and to trust that everything performance-critical has already been optimized. Even speed is not the main issue for me. I want something that is flexible. I want to write little code, because the less I write, the fewer bugs there are. Correctness trumps everything else.

If I am using SF to allocate investments, I want to make sure an optimal solution is found, even if it takes a little longer. Computer time is cheap. Buy a bigger computer. Developer time is more precious. There are only 24 hours in a day.

You want to change the source code? With all due respect, I would speculate that 99.9999% of HN users are not qualified to write numerical optimization code. Looking at it is of little use unless you have a PhD in Applied Math and years and years of experience.

Of limited use to HN readers? To those writing web-apps, perhaps. Those doing Machine Learning will probably be ecstatic to find this.


How does your product compare to major optimization software/mathematical programming products? Do you have public benchmarks?

Unless you're solving toy problems, it matters quite a bit. In mathematical optimization, constructing the problem tends to be just as expensive as solving it, and sometimes more so. The fact that expensive commercial systems like AMPL and GAMS exist solely to express optimization problems demonstrates that this is a non-trivial issue people are willing to pay money for. Using a solver's C++ API directly is painful, inflexible, and error-prone, and it completely locks you into that solver.
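
To make that concrete, this is roughly what the modeling-layer style looks like in Python with CVXPY (my choice of example, not something from the article): a minimal sketch with made-up data, where the same model can be handed to a different backend by changing one argument.

```python
import cvxpy as cp
import numpy as np

# Made-up data for a small constrained least-squares problem.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

x = cp.Variable(5)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                     [x >= 0, cp.sum(x) <= 1])
problem.solve()                  # CVXPY picks an installed solver
# problem.solve(solver=cp.SCS)   # same model, different backend
print(x.value)
```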

We’re trying to bring new ideas to the optimization space. We’ve used a bunch of technologies over time (CP, MIP, LP, heuristics, etc.). We’ve built our own solver (for www.nextmv.io) and learned a lot along the way.

Solvers are amazing. MIP solvers, for example, are some of the fastest and most mature software packages that exist. They’re also wickedly challenging to use effectively.

Do you use optimization solvers? Which ones? Why? Do you like them? Why or why not?


I have a PhD in optimization, and I still think CPLEX and Gurobi seem magical. The devil is in the details. Open-source solvers are comparatively not very good.

Optimisation solvers are complex and incredibly powerful pieces of software. However, there are lots of different options and choosing which to use can be a daunting task.

There are some open-source options ([COIN](https://www.coin-or.org/), [OR-tools](https://developers.google.com/optimization), [Minion](https://constraintmodelling.org/minion/), [CVXPY](https://www.cvxpy.org/)) and some commercial offerings ([Gurobi](https://www.gurobi.com/), [Mosek](https://www.mosek.com/)); other people write their own for narrower, more specific purposes. Some [benchmarks](http://plato.asu.edu/bench.html) are maintained by Hans Mittelmann.

Personally, I have used OR-tools, maintained by a team at Google, for vehicle routing optimisation and found it powerful but poorly documented with an inconsistent API. I've also used R's [optim](https://www.rdocumentation.org/packages/stats/versions/3.6.2...) function and [lpsolve](https://cran.r-project.org/web/packages/lpSolve/index.html) for linear and integer problems.
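
To give a flavor of what that API looks like, here's a minimal single-vehicle routing (TSP) sketch in the style of the OR-tools docs, with a made-up distance matrix; note how much ceremony (a manager, a model, a registered callback) is needed just to state the problem.

```python
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

# Made-up 4-stop distance matrix; entry [i][j] is the cost of arc i -> j.
distance_matrix = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
manager = pywrapcp.RoutingIndexManager(len(distance_matrix), 1, 0)  # 1 vehicle, depot 0
routing = pywrapcp.RoutingModel(manager)

def distance_callback(from_index, to_index):
    i, j = manager.IndexToNode(from_index), manager.IndexToNode(to_index)
    return distance_matrix[i][j]

routing.SetArcCostEvaluatorOfAllVehicles(routing.RegisterTransitCallback(distance_callback))

params = pywrapcp.DefaultRoutingSearchParameters()
params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC

solution = routing.SolveWithParameters(params)
if solution:
    index, route = routing.Start(0), []
    while not routing.IsEnd(index):
        route.append(manager.IndexToNode(index))
        index = solution.Value(routing.NextVar(index))
    print(route)  # visiting order, starting at the depot
```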


How often do these come up? I studied some OR in grad school when I was evaluating solvers, and brushed up against the OR-tools community for some research ideas. From what I could tell, the businesses that really need it already use these tools, and the other businesses just don't have enough business depending on optimization to make it worthwhile.

I’m in the optimization space and have never heard of this. Thanks. I recognize many of the names behind it.

I think part of the reason it's not more popular is that QPs aren't the most popular problem type. They're used in MPC problems and in any number of L2-loss problems, like many ML problems, but those fields already have custom methods for solving the QP. That doesn't mean they can't adopt a standard high-performance solver; I think this is very promising.
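
To make the L2-loss connection concrete: ridge regression drops straight into the standard QP form min 0.5 x'Px + q'x. The solver under discussion isn't named in this comment, so the sketch below uses OSQP purely as a stand-in open-source QP solver, with made-up data and an added nonnegativity constraint.

```python
import numpy as np
import scipy.sparse as sp
import osqp

# Made-up regression data.
rng = np.random.default_rng(0)
m, n, lam = 30, 10, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Nonnegative ridge regression:  min ||Ax - b||^2 + lam*||x||^2,  x >= 0,
# rewritten as  min 0.5 x'Px + q'x  with  P = 2(A'A + lam*I),  q = -2A'b.
P = sp.csc_matrix(2 * (A.T @ A + lam * np.eye(n)))
q = -2 * A.T @ b
G = sp.identity(n, format="csc")            # constraint matrix: x itself
lo, hi = np.zeros(n), np.full(n, np.inf)    # 0 <= x <= inf

solver = osqp.OSQP()
solver.setup(P, q, G, lo, hi, verbose=False)
print(solver.solve().x)
```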

Also, I'm not sure whether any political issues prevent them from listing this on COIN-OR. That's usually most folks' go-to site for discovering new solvers.


> They may not be as good as the commercial solvers, but they're still way better than the other open-source options.

Agreed. CBC/Clp are the best open-source LP/MIP solvers out there. SCIP is better, but it's not fully free.
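
For anyone who wants to kick the tires: CBC is also the default backend bundled with the PuLP modeling library, so a toy MIP (a made-up knapsack here) takes only a few lines.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

values = [10, 13, 18, 31]
weights = [2, 3, 4, 7]
capacity = 10

prob = LpProblem("knapsack", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(4)]
prob += lpSum(values[i] * x[i] for i in range(4))              # objective
prob += lpSum(weights[i] * x[i] for i in range(4)) <= capacity # constraint

prob.solve(PULP_CBC_CMD(msg=False))   # CBC is PuLP's bundled default solver
print([int(v.value()) for v in x])    # which items to take
```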

> For general optimisation, i've used NLopt. It was very easy to use, and has a pretty sensible set of algorithms.

I noticed the omission of IPOPT support in NLopt. IPOPT is one of the best large-scale nonconvex NLP solvers out there (if you have exact second-order derivative information, typically from AD).
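
For reference, NLopt's gradient-based interface looks like the sketch below (SLSQP, one of the local derivative-based algorithms NLopt does ship; the problem itself is made up). You fill in the gradient array in place.

```python
import nlopt

# minimize (x-1)^2 + (y-2)^2  subject to  x + y <= 2
def objective(x, grad):
    if grad.size > 0:              # NLopt asks us to fill grad in place
        grad[0] = 2.0 * (x[0] - 1.0)
        grad[1] = 2.0 * (x[1] - 2.0)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def constraint(x, grad):           # expressed in h(x) <= 0 form
    if grad.size > 0:
        grad[0], grad[1] = 1.0, 1.0
    return x[0] + x[1] - 2.0

opt = nlopt.opt(nlopt.LD_SLSQP, 2)
opt.set_min_objective(objective)
opt.add_inequality_constraint(constraint, 1e-8)
opt.set_xtol_rel(1e-8)
print(opt.optimize([0.0, 0.0]), opt.last_optimum_value())
```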

> One thing i've learnt is that comparing optimisation algorithms is really hard. One might be better at one problem, one at another. One pretty general axis of variation is problem size: some algorithms scale to orders of magnitude more variables (or constraints etc) than others.

Very true. The academic world has standardized on Dolan-Moré performance profiles for comparing optimizers on a standard corpus of problems, and those are the closest thing we have to quantifying general performance. But the no-free-lunch theorem applies. There's also a lot of randomness and chance involved; e.g., initialization points can drastically affect nonconvex solver performance on the same problem. So can little things like the ordering of constraints, or the addition of redundant non-binding constraints (!) for MIPs (this was demonstrated by Fischetti). That's why modern MIP solvers include a random perturbation parameter.
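
For anyone who hasn't seen them: a performance profile is cheap to compute. For each problem p and solver s you take the ratio of s's time to the best time on p, then plot, for each solver, the fraction of problems solved within a factor tau of the best. A minimal sketch with made-up timings:

```python
import numpy as np

def performance_profile(times, taus):
    """times: (problems x solvers) solve times, with np.inf marking failures.
    Returns rho[k, s]: fraction of problems solver s solves within a
    factor taus[k] of the best solver on each problem."""
    ratios = times / times.min(axis=1, keepdims=True)   # ratios r_ps
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Toy data: 3 problems x 2 solvers.
times = np.array([[1.0, 2.0],
                  [5.0, 4.0],
                  [np.inf, 10.0]])   # solver 0 failed on the third problem
print(performance_profile(times, taus=[1.0, 2.0, 10.0]))
```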

> ML hyperparameter optimisation

Most types of hyperparameter optimization are black-box optimization problems with no analytic derivative information, which much of the research in deterministic optimization does not address. Black-box problems typically rely on DFO (Derivative-Free Optimization) methods, which are even harder to quantify in terms of performance. The Nelder-Mead simplex method is as good a choice as any.
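
In practice that kind of black-box loop is a one-liner with SciPy's Nelder-Mead implementation; the objective below is a made-up stand-in for something expensive like a cross-validated loss over two hyperparameters.

```python
import numpy as np
from scipy.optimize import minimize

def blackbox(theta):
    # Stand-in for an expensive, non-differentiable evaluation.
    return (theta[0] - 0.3) ** 2 + np.abs(theta[1] - 2.0)

res = minimize(blackbox, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6})
print(res.x, res.fun)
```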


Using an optimization library seems like the completely wrong approach to solving this interview question. Was that intentional?

I am not an expert on convex, unconstrained optimization, but I frequently use these methods. I'd wait until it's peer-reviewed and published in a good journal (e.g., a SIAM journal) before getting too excited about this one.

I'm of the opinion that's the only optimization function worth using. I don't know of any others in the state I'm in - I just graduated from a mediocre state school and can't pass interview loops, but everyone who has attempted to convince me otherwise hasn't, from what I can tell, experienced my long string of getting barely anything for my 10+ years of diligence and work. Unlike my friends, I'm not a crypto millionaire, an FB employee, etc.

What tool is now used for the modeling of the optimization problem?

While the work on JuMP is impressive, isn't the fact that all major devs are from the same place worrisome?


I think it depends if you're interested in knowing how to solve these problems, or simply in solving them. If the latter, there's a bunch of commercial solvers out there. I won't name any for fear of an adverse reaction :0) but a trawl through the first page of Google results for 'optimization solvers' would probably work.

Nevergrad, a Python optimization platform open-sourced by Facebook, is easy to use. Can't say how it compares to other solvers though.

https://facebookresearch.github.io/nevergrad/
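
"Easy to use" seems fair; the hello-world from their docs is roughly this (NGOpt is their meta-optimizer, which picks an algorithm based on the problem and budget):

```python
import nevergrad as ng

def square(x):
    return sum((x - 0.5) ** 2)

# parametrization=2 -> a 2-dimensional array input; budget = evaluation count
optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
recommendation = optimizer.minimize(square)
print(recommendation.value)
```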


> even the current interior point implementation in HiGHS can be unacceptably slow relative to Gurobi (60-100 times slower). Considering that even Gurobi requires a couple of days to solve huge practical problems illustrates that the use of current open source solvers is totally impractical. The aim of this proposal is to identify key solver enhancements in the short, medium and long term that will help to bridge the significant gap in performance between HiGHS and the major commercial solvers.

This fills a great need in the scientific community. Gurobi is the leading commercial solution, with a strict license and a high price tag. Since the proposed open-source project focuses only on linear programming (as opposed to general nonlinear optimization), the goal may well be achievable. Best wishes to the authors.
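
For anyone who wants to try HiGHS without building it: it's exposed through SciPy's linprog (method="highs"), so a toy LP is a few lines.

```python
import numpy as np
from scipy.optimize import linprog

# maximize x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0
c = np.array([-1.0, -2.0])                 # linprog minimizes, so negate
A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)
```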


NR (Numerical Recipes) performed poorly, in my experience.

Many years ago, I tried using NR for optimization problems. The functions were dog slow, because they tended to allocate memory at the start of each call and free it at the end, and that burns up a lot of time in functions that are called frequently.

The alternative, used in the much-faster NAG and IMSL libraries, was for the user to set aside memory before calling any functions and to hand each function its "working memory". This scheme demanded more of the user (and caused nasty bugs when you supplied insufficient memory!), but it delivered much better performance.
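
That caller-provided workspace pattern is still a useful trick; here's a rough sketch of the idea in NumPy terms (all names made up):

```python
import numpy as np

# NAG/IMSL-style caller-provided workspace: the hot routine never
# allocates; it writes into the scratch buffer it is handed.
def residual_norm(A, x, b, work):
    np.dot(A, x, out=work)    # work <- A @ x, no temporary array created
    work -= b
    return float(np.linalg.norm(work))

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 1000))
x, b = np.ones(1000), np.zeros(1000)

work = np.empty(1000)         # allocated once by the caller...
for _ in range(100):          # ...reused across many hot calls
    residual_norm(A, x, b, work)
```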

For my particular problem, one NAG routine (whose name I've now forgotten) was almost magical, finding solutions in a relatively small number of iterations. As I recall, the NAG code was closed, so I never learned how the authors got such good results. I imagine the secret was clever numerical analysis rather than clever coding.

Relative to other systems, NR had the advantage that you could see the code. Quite often, the code provided examples of how not to do things. But at least you could see the code.

As for matters of licensing, I think they are covered very well in other comments in this thread.
