There has already been some significant progress on this front. E.g., SQL and logic programming let you describe what you want to happen, and let the computer figure out some of the details. Any compiler worth using does this, too. Smarter machines and smarter programs will mean smarter programming languages.
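As a rough sketch of that declarative style in Haskell terms (the toy "table" and names here are invented for illustration): a list comprehension states which rows we want, much like SELECT name FROM people WHERE age >= 18, and leaves the traversal details to the compiler.

    -- Toy "table" of (name, age) rows; we describe the result, not the loop.
    adults :: [(String, Int)] -> [String]
    adults people = [name | (name, age) <- people, age >= 18]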
I've got quite a lot of hope that a "sufficiently smart compiler" is not a pipe dream, just an idea well in advance of its time.
I think eventually we'll be able to describe data structures and algorithms in very high-level languages, in an architecture-neutral way, and spit out near-optimal code for the desired parameters.
Although I wouldn't be surprised if it does eventually happen, just not in my lifetime.
It's interesting that this paper talks about compilers becoming a lot more sophisticated and powerful, and simplifying many aspects of programming. I've always hoped for this, and want to work on this as a personal side research project.
Compilers have gotten better, but compared to a human they are still not particularly good at the edge cases, instruction selection and register allocation in particular. Sometimes I wish I had the time to write my own, more intelligent algorithms for them.
Agreed. I guess that once we have built smarter compilers, a centaur approach would work best (as in chess): the computer does a whole lot through brute force and smart algorithms, and the human uses their knowledge of the context to steer it in the right direction.
Compilers are getting smarter over time. Better analyses and clever optimization schemes are being designed and implemented. In my opinion, though, as a compiler writer, the best route is to make languages easier to analyze. Things like purity and referential transparency (e.g., as seen in Haskell) make code analysis and transformation much simpler. They enable pattern-matching-style transformations on code. If the compiler has to reason about pointers, invisible side effects and aliasing, it greatly limits the kinds of things that can be inferred about a program.
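A concrete (if simplified) example of what purity buys you: GHC's rewrite rules let you state equations like map fusion directly, and the compiler applies them by pattern-matching on the program. This is only a sketch; the rule name and the example function below are made up for illustration, and GHC's real list fusion machinery is more elaborate.

    module MapFusion where

    -- The rewrite is only valid because map is pure: collapsing two
    -- traversals into one cannot change any observable behaviour.
    {-# RULES
      "map/map fusion" forall f g xs. map f (map g xs) = map (f . g) xs
      #-}

    -- A pipeline the rule can collapse into a single pass when compiled with -O.
    doubleThenInc :: [Int] -> [Int]
    doubleThenInc xs = map (+ 1) (map (* 2) xs)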
I agree with this. A smart coder is going to write it all in a high-level language, profile it to see which tiny bits might be better off in assembler, and then profile it again to test that theory. Modern compilers know a lot of tricks and can often find optimizations that would be hard to do manually in machine code.
That sounds like an optimization pass of a compiler. I'm not convinced it's meaningfully different.
My background is linguistics, so perhaps I'm biased to see everything as a compiler. However, automata theory generally provides a similar perspective.
Isn't that what we're all waiting for, the mythical "smart enough" compiler? Imagine constructing programs from abstract and composable high-level concepts, while the compiler does all the dirty hacks behind the scenes, producing fully parallelized, secure, efficient machine code on the fly. Could it? One can dream :)
http://shaffner.us/cs/papers/tarpit.pdf