This is certainly an important discovery, seeing that they extended previous solutions to work for light of different wavelengths, but it's hardly an invisibility cloak if the observer is assumed to be at a fixed position. Just like "quantum teleportation" is not actually teleportation, but nevertheless every article about it mentions Star Trek.
A sufficiently smart spell-checker would be indistinguishable from an artificial intelligence. A company producing one of those things would make billions, but not by selling a spell-checker.
There is a vast stretch of appropriate solutions between the maintainability hell of "hardcode the coordinates of every pixel change in your animation" and perfect code.
I don't think that would be a very effective way of teaching. The hard part of programming is usually not writing things like linked lists, but the ability to decompose a real-world problem into sufficiently small parts that can be solved by writing down some common patterns. Repetitive learning of patterns won't help at all with developing this skill and may very well scare off students because it becomes boring very quickly.
Unlike playing an instrument, programming is not dependent on muscle-memory skills to produce adequate performances, hence repetition is of limited use while studying.
We have self-driving cars right now; see for example DARPA's Urban Challenge. We only need to increase reliability and decrease costs to make them suitable for the market. There are also some legal problems (e.g. who is responsible if the robot car kills a kitten), but technologically, self-driving cars are a reality.
I wanted to learn how to draw. I read a book about it and sat down one afternoon and tried to draw something. It looked like shit and I realized the only way to improve it was to scrap it and start again. Is it worthwhile?
And it turned out that the nanoparticles produced by combustion are pretty carcinogenic. That's the main reason why there are limits on their concentration in the air. The smaller the particles, the more dangerous they are. See http://en.wikipedia.org/wiki/Particulate#Health_effects
So far I like the recursive subdivision algorithm best. It also lets you stop the recursion at different depths for different subdivisions, which generates larger, dungeon-like rooms in your mazes. That's not so easy to do with the other algorithms.
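For anyone who hasn't seen it, here's a rough Python sketch of the idea (the room_chance parameter is just my own knob for stopping the recursion early in some chambers, which is what leaves the larger rooms):

    import random

    def recursive_division(width, height, room_chance=0.0):
        """Carve a width x height maze with recursive division.

        '#' marks walls, ' ' open space.  Walls live on even row/column
        indices and cells on odd ones, so the gap punched into every
        dividing wall always lines up with a cell and the maze stays
        connected.  room_chance is the probability of leaving a chamber
        undivided, which produces a larger open room.
        """
        gw, gh = 2 * width + 1, 2 * height + 1
        grid = [[' '] * gw for _ in range(gh)]
        for x in range(gw):                       # outer border
            grid[0][x] = grid[gh - 1][x] = '#'
        for y in range(gh):
            grid[y][0] = grid[y][gw - 1] = '#'

        def divide(cx, cy, cw, ch):
            # (cx, cy) is the chamber's top-left cell, (cw, ch) its size in cells.
            if cw < 2 or ch < 2:
                return
            if random.random() < room_chance:
                return                            # stop early: leave an open room
            if cw < ch or (cw == ch and random.random() < 0.5):
                wy = cy + random.randrange(1, ch)     # horizontal wall above cell row wy
                gap = cx + random.randrange(cw)       # cell column left open
                for gx in range(2 * cx, 2 * (cx + cw) + 1):
                    if gx != 2 * gap + 1:
                        grid[2 * wy][gx] = '#'
                divide(cx, cy, cw, wy - cy)
                divide(cx, wy, cw, cy + ch - wy)
            else:
                wx = cx + random.randrange(1, cw)     # vertical wall left of cell column wx
                gap = cy + random.randrange(ch)       # cell row left open
                for gy in range(2 * cy, 2 * (cy + ch) + 1):
                    if gy != 2 * gap + 1:
                        grid[gy][2 * wx] = '#'
                divide(cx, cy, wx - cx, ch)
                divide(wx, cy, cx + cw - wx, ch)

        divide(0, 0, width, height)
        return grid

    if __name__ == '__main__':
        random.seed(1)
        for row in recursive_division(12, 8, room_chance=0.1):
            print(''.join(row))

The even/odd indexing is only there so the gap in each dividing wall always lines up with a cell; if you render walls differently you can drop that bookkeeping.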
I would be interested in some papers on generating hard mazes, e.g. mazes that take the maximum amount of resources to solve. Is every maze solvable in logspace? Probably so, since undirected connectivity is in logspace, iirc.
Carpooling is just a crutch to mitigate the effects of the abysmal city planning that has happened in America during the last few decades. Building cities that are totally dependent on cars is just an awful idea. From what I've heard, it's hardly possible to do even such elementary things as grocery shopping without driving twenty minutes each way.
Do you have any data on that? Most CPU-bound programs should have pretty good instruction locality, negating the effects of smaller code. But without some numbers this is pointless guesswork.
If I hit a rock and it cried in pain, I would believe it to be sentient.
Sentience does not come from the raw materials -- we're just a bunch of carbon, hydrogen, nitrogen etc. ourselves -- but from the way those materials are arranged to process information.
Well obviously everything is based on quantum effects. When a transistor changes its state, quantum effects need to happen. But I don't think it's legitimate to postulate that the whole complexity of the quantum state is necessary to simulate the behaviour of a transistor.
I also think that general AIs will not be the kind that simulates brains.
But I think it's wrong to tie consciousness to the hardware it's running on. Consciousness is a property of the software. I think it's fallacious to postulate a difference between an intelligence that runs on brains and an intelligence that runs on other machines if the difference can't be observed.
And yes I would be worried that a robot might suffer if it is sufficiently advanced to exhibit the same behaviour as, say, a cat.
There is not much of an objective difference between the pain reactions of e.g. insects and the robots we have today. So I would say yes, those robots experience a reaction similar to that of insects. But then again, I don't think insects have a sufficiently advanced neural system to suffer, so it's okay to kill the critters.
Computers also calculated pi long before we used them to design circuits.
"However there is a danger that if machines do become sapient and surpass us in intelligence that based on such arguments they will conclude that human beings are merely an inferior form of intelligence which is not worth preserving"
How would you tell the difference between a human-level AI and a human? Sentience is not a well-defined term. The best thing you can do is ask the intelligence in question whether it has subjective experience; if it says yes, you have to believe it. I couldn't prove to you that I have subjective experiences. I think it's irrational to say that every cognitive ability a human has can be simulated perfectly fine on a computer, but that only a biological brain is capable of subjective experiences.
I'm currently implementing a complicated algorithm in C++. I don't have to use many language features for that, so I don't have a strong opinion about them.
But what I can say is that, compared to Java, the tools for C++ coding suck. Hard. In Eclipse you can write some gibberish and it automatically turns it into valid Java that mostly does what you want. The Java debugger is excellent and works without hiccups. The experience you get with Eclipse and CDT just isn't the same: no automatic inclusion of header files, autocompletion doesn't always do what you want, no documentation included, not even for the STL. Working with gdb is painful; most of the time it is not possible to properly inspect data structures in memory, and it is generally just easier to litter the code with print statements.
Many problems aren't parallelizable for reasonable definitions of the term (i.e. polylogarithmic depth and polynomial work: Nick's class, NC) unless there are some major breakthroughs in complexity theory.
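For reference, here's a rough statement of the standard definition (just the textbook formulation of Nick's class, nothing specific to this thread):

    % Nick's class NC: problems decidable by uniform Boolean circuits of
    % polylogarithmic depth (parallel time) and polynomial size (total work).
    \[
      \mathrm{NC} \;=\; \bigcup_{k \ge 1} \mathrm{NC}^k,
      \qquad
      \mathrm{NC}^k \;=\; \bigl\{\, L \;:\; L \text{ is decided by uniform circuits
        of depth } O(\log^k n) \text{ and size } n^{O(1)} \,\bigr\}.
    \]
    % The "major breakthrough" would be resolving NC vs. P: a P-complete problem
    % (e.g. the circuit value problem) is in NC only if NC = P.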
The terrorist attacks shaped a lot of foreign and domestic policy. There was a major shift towards nationalism and decreased privacy rights all over the western world.
That would be quite expensive both for the server and the client. Besides, I certainly wouldn't use a site that tracks my mouse movement to determine whether I'm a human or not.
If it provides enough legitimate content to be useful to the users, it will get upvotes regardless of its ulterior motives. At a certain point an advertisement is entertaining enough to be viewed voluntarily; cf. the Old Spice guy.
Put in some proper cable ducts so you don't have to tear up the wall once some new standard is introduced.
Put up some solar thermal panels; they last forever and really do work. Look into geothermal; maybe it's worthwhile where you live. But the biggest money saver will be better insulation. You can save up to 70% of your heating/cooling costs with proper insulation.
I don't think (mostly) self sustaining outposts on the Moon and Mars are technologically infeasible. See for example the studies Robert Zubrin did for NASA.
Whether there are sufficient economic incentives to actually build them is unfortunately a different question.
"The original copyright holder retains: [...] The right to post author-prepared versions of the work covered by ACM copyright in a personal collection on their own Home Page and on a publicly accessible server of their employer, and in a repository legally mandated by the agency funding the research on which the Work is based. Such posting is limited to noncommercial access and personal use by others, and must include this notice both embedded within the full text file and in the accompanying citation display as well: