user: adrianN
created: 2010-12-11 08:00:15
karma: 29995
count: 10785
Avg. karma: 2.78
Comment count: 10731
Submission count: 54
Submission Points: 968
about: See my website http://adriann.github.io



Also, Vulcans are far stronger than humans, so Spock wins in hand-to-hand combat too.

Why not? You can still observe how people behave differently if they have higher or lower taxes.

This is certainly an important discovery, seeing that they extended previous solutions to work for light of different wavelengths, but it's hardly an invisibility cloak if the observer is assumed to be at a fixed position. Just like "quantum teleportation" is not actually teleportation, but nevertheless every article about it mentions Star Trek.

A sufficiently-smart spell-checker would be indistinguishable from an artificial intelligence. A company producing one of those things would make billions, but not by selling a spell-checker.

There is a vast stretch of appropriate solutions between the maintainability hell of "hardcode the coordinates of every pixel change in your animation" and perfect code.

I don't think that would be a very effective way of teaching. The hard part of programming is usually not writing things like linked lists, but the ability to decompose a real world problem into sufficiently small parts that can be solved by writing down some common patterns. Repetitive learning of patterns won't help at all with developing this skill and may very well scare off students because it becomes boring very quickly.

Unlike playing an instrument, programming does not depend on muscle-memory skills to produce adequate performances, so repetition is of limited use while studying.


We have self driving cars right now, see for example DARPA's urban challenge. We only need to increase reliability and decrease costs to make them suitable for the market. There are also some legal problems (e.g. who is responsible if the robot car kills a kitten), but technologically self driving cars are a reality.

Pidgin also supports Off-the-Record (OTR) messaging, for the truly paranoid.

I wanted to learn how to draw. I read a book about it, sat down one afternoon, and tried to draw something. It looked like shit, and I realized the only way to improve it was to scrap it and start again. Is it worthwhile?

If you're interested in a human mission to Mars, I highly recommend the book "The Case for Mars"

Those extensions may not be very interesting for the consumer market, but scientific applications will probably see more benefits.

> The Ackermann function grows extremely quickly (quicker than any primitive recursive function I believe)

This is correct. It was basically invented to show that primitive recursion is not capable of computing every computable function.
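
For the curious, a minimal sketch of the Ackermann(-Péter) function: it is total and computable but not primitive recursive, and its values explode so quickly that only tiny arguments are practical.

  #include <cstdint>
  #include <iostream>

  // Ackermann-Peter function: total and computable, but not primitive recursive.
  std::uint64_t ackermann(std::uint64_t m, std::uint64_t n) {
      if (m == 0) return n + 1;
      if (n == 0) return ackermann(m - 1, 1);
      return ackermann(m - 1, ackermann(m, n - 1));
  }

  int main() {
      // Only tiny arguments are feasible: A(4, 2) already has 19729 decimal digits.
      std::cout << ackermann(3, 3) << "\n";  // prints 61
  }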


And it turned out that the nanoparticles produced by combustion are quite carcinogenic. That's the main reason why there are limits on their concentration in the air. The smaller the particles, the more dangerous they are. See http://en.wikipedia.org/wiki/Particulate#Health_effects

That would make the stalkers happy too.

You can bound the expected cover time of a random walk on a graph. This is an intensively studied problem:

http://scholar.google.de/scholar?q=covering+time+random+walk

iirc the expected cover time for any connected graph is polynomial, something like O(|V|*|E|), so at most cubic in the number of vertices.
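
If you want a feel for it, here is a minimal sketch that estimates the cover time empirically by simulation (the example graph, a small cycle, and the number of trials are just illustrative choices):

  #include <cstddef>
  #include <iostream>
  #include <random>
  #include <vector>

  // One sample of the cover time: walk at random on a connected graph until
  // every vertex has been visited at least once, and count the steps.
  std::size_t cover_time(const std::vector<std::vector<int>>& adj, std::mt19937& rng) {
      std::vector<bool> visited(adj.size(), false);
      std::size_t unvisited = adj.size() - 1;
      int v = 0;
      visited[0] = true;
      std::size_t steps = 0;
      while (unvisited > 0) {
          std::uniform_int_distribution<std::size_t> pick(0, adj[v].size() - 1);
          v = adj[v][pick(rng)];          // move to a uniformly random neighbour
          ++steps;
          if (!visited[v]) { visited[v] = true; --unvisited; }
      }
      return steps;
  }

  int main() {
      const int n = 8;                    // example graph: a cycle on 8 vertices
      std::vector<std::vector<int>> adj(n);
      for (int i = 0; i < n; ++i) adj[i] = {(i + 1) % n, (i + n - 1) % n};

      std::mt19937 rng(42);
      const int trials = 1000;
      std::size_t total = 0;
      for (int t = 0; t < trials; ++t) total += cover_time(adj, rng);
      std::cout << "average cover time: " << double(total) / trials << "\n";
  }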


So far I like the recursive subdivision algorithm best. It also lets you stop the recursion for different subdivisions at different depths, to generate larger open rooms in your mazes. That's not so easy to do with the other algorithms.

I would be interested in some papers on generating hard mazes, e.g. mazes that take the maximum amount of resources to solve. Is every maze solvable in logspace? Probably, since undirected connectivity is in logspace iirc.
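
To make the recursive subdivision idea above concrete, here is a minimal sketch (the grid size and depth cutoff are arbitrary illustrative choices): split the current chamber with a wall that has a single gap, recurse into both halves, and stop the recursion early in some subregions to leave large open rooms. Walls go on even coordinates and gaps on odd ones, so a later wall can never block an earlier opening.

  #include <iostream>
  #include <random>
  #include <string>
  #include <vector>

  using Grid = std::vector<std::string>;
  std::mt19937 rng(7);

  int rand_in(int lo, int hi) {           // uniform integer in [lo, hi]
      return std::uniform_int_distribution<int>(lo, hi)(rng);
  }

  // Recursive division of the open region [top,bottom] x [left,right], where all
  // four bounds are odd coordinates. Walls are placed on even rows/columns and
  // gaps on odd ones, so a later wall never blocks an earlier opening and the
  // maze stays connected. Stopping early (depth == 0) leaves a large open room.
  void divide(Grid& g, int top, int bottom, int left, int right, int depth) {
      if (bottom - top < 2 || right - left < 2 || depth == 0) return;
      if (bottom - top >= right - left) {                              // split horizontally
          int wall = top + 1 + 2 * rand_in(0, (bottom - top) / 2 - 1); // even row
          int gap  = left + 2 * rand_in(0, (right - left) / 2);        // odd column
          for (int c = left; c <= right; ++c)
              if (c != gap) g[wall][c] = '#';
          divide(g, top, wall - 1, left, right, depth - 1);
          divide(g, wall + 1, bottom, left, right, depth - 1);
      } else {                                                         // split vertically
          int wall = left + 1 + 2 * rand_in(0, (right - left) / 2 - 1); // even column
          int gap  = top + 2 * rand_in(0, (bottom - top) / 2);          // odd row
          for (int r = top; r <= bottom; ++r)
              if (r != gap) g[r][wall] = '#';
          divide(g, top, bottom, left, wall - 1, depth - 1);
          divide(g, top, bottom, wall + 1, right, depth - 1);
      }
  }

  int main() {
      const int h = 17, w = 33;                        // odd dimensions
      Grid g(h, std::string(w, ' '));
      for (int r = 0; r < h; ++r) g[r][0] = g[r][w - 1] = '#';
      for (int c = 0; c < w; ++c) g[0][c] = g[h - 1][c] = '#';
      divide(g, 1, h - 2, 1, w - 2, 4);                // shallow depth leaves open rooms
      for (const auto& row : g) std::cout << row << '\n';
  }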

It's pretty dull even before studying philosophy.

Carpooling is just a crutch to mitigate the effects of the abysmal city planning that has happened in America over the last few decades. Building cities that are totally dependent on cars is just an awful idea. From what I've heard it's hardly possible to do even such elementary things as grocery shopping without driving twenty minutes each way.

> It solidly loses on almost everything else.

Do you have any data on that? Most CPU bound programs should have pretty good instruction locality, negating the effects of smaller code. But without some numbers this is pointless guesswork.


You only need one brain to tell the hands what to do.

So instead you just called your car pool?

Your iris doesn't leave prints on your gun.

Why not do both?


Because a position as researcher is usually not considered to be a "job".

You are basically arguing about P-Zombies[1]. I think that line of argument is fallacious.

What if the computer were powerful enough to perfectly simulate the workings of a human brain? Does that brain not have a consciousness?

[1] http://en.wikipedia.org/wiki/P-Zombie


If I hit a rock and it cried in pain, I would believe it to be sentient.

Sentience does not come from the raw materials -- we're just a bunch of carbon, hydrogen, nitrogen etc. ourselves -- but from the way those materials are arranged to process information.


Well obviously everything is based on quantum effects. When a transistor changes its state, quantum effects need to happen. But I don't think it's legitimate to postulate that the whole complexity of the quantum state is necessary to simulate the behaviour of a transistor.

I also think that general AIs will not be the kind that simulates brains.

But I think it's wrong to tie consciousness to the hardware it's running on. Consciousness is a property of the software. I think it's fallacious to postulate a difference between an intelligence that runs on brains and an intelligence that runs on other machines if the difference can't be observed.

And yes I would be worried that a robot might suffer if it is sufficiently advanced to exhibit the same behaviour as, say, a cat.


There is not much of an objective difference between the pain reactions of e.g. insects and robots we have today. So I would say yes, those robots experience a similar reaction as insects. But then again, I don't think insects have a sufficiently advanced neural system to suffer, so it's okay to kill the critters.

Computers also calculated pi long before we used them to design circuits.

"However there is a danger that if machines do become sapient and surpass us in intelligence that based on such arguments they will conclude that human beings are merely an inferior form of intelligence which is not worth preserving"

You mean like we act towards animals today?


How would you tell the difference between a human-level AI and a human? Sentience is not a well defined term. The best thing you can do is ask the intelligence in question whether it has subjective experience; if it says yes, you have to believe it. I couldn't prove to you that I have subjective experiences. I think it's irrational to say that every cognitive ability a human has can be simulated perfectly fine on a computer, but only a biological brain is capable of subjective experiences.

I'm currently implementing a complicated algorithm in C++. I don't have to use many language features for that, so I don't have a strong opinion about them.

But what I can say is that, compared to Java, the tools for C++ coding suck. Hard. In Eclipse you can write some gibberish and it automatically turns it into valid Java that mostly does what you want. The Java debugger is excellent and works without hiccups. The experience with Eclipse and CDT just isn't the same: no automatic inclusion of header files, autocompletion doesn't always do what you want, no documentation included, not even for the STL. Working with gdb is painful; most of the time it is not possible to properly inspect data structures in memory, and it is generally just easier to litter the code with print statements.


Many problems aren't parallelizable for reasonable definitions of the word (i.e. polylogarithmic depth and polynomial work: Nick's class) unless there are some major breakthroughs in complexity theory.
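
For contrast, here is the kind of problem that does parallelize in that sense: prefix sums can be computed in O(log n) rounds where every update within a round is independent of the others. A minimal sketch of the Hillis-Steele scan (written sequentially here; the inner loop of each round is the part that could run in parallel):

  #include <iostream>
  #include <vector>

  // Hillis-Steele inclusive prefix sum: O(log n) rounds, O(n log n) total work.
  // Within each round every element update is independent, so a round can run
  // entirely in parallel -- the "polylogarithmic depth, polynomial work" shape
  // of problems in Nick's class (NC).
  std::vector<long long> prefix_sums(std::vector<long long> a) {
      const std::size_t n = a.size();
      for (std::size_t offset = 1; offset < n; offset *= 2) {   // about log2(n) rounds
          std::vector<long long> next = a;
          for (std::size_t i = offset; i < n; ++i)              // parallelizable loop
              next[i] = a[i] + a[i - offset];
          a = std::move(next);
      }
      return a;
  }

  int main() {
      std::vector<long long> v{1, 2, 3, 4, 5};
      for (long long x : prefix_sums(v)) std::cout << x << ' ';  // prints 1 3 6 10 15
      std::cout << '\n';
  }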

I'd venture to say that depends strongly on the problem the brain is trying to solve and the problem the processors are trying to solve.

The terrorist attacks shaped a lot of foreign and domestic policy. There was a major shift towards nationalism and decreased privacy rights all over the western world.

I think it's hard to downplay protests when there are thousands rioting in the streets.

That would be quite expensive both for the server and the client. Besides, I certainly wouldn't use a site that tracks my mouse movement to determine whether I'm a human or not.

If it provides enough legitimate content to be useful to the users, it will get upvotes regardless of its ulterior motives. At a certain point an advertisement is entertaining enough to be viewed voluntarily. cf Old Spice guy.

There certainly are real energy savings. But without any numbers we can only speculate whether this whole setup is worth the effort.

A simpler way to save energy would have been to wrap a thick blanket around the tank to improve the insulation.

A better way would have been to replace the tank with a tankless heater.


Put in some proper cable ducts so you don't have to tear up the wall once some new standard is introduced.

Put up some solar thermal panels; they last forever and really do work. Look into geothermal; maybe it's worthwhile where you live. But the biggest money saver will be better insulation. You can save up to 70% of your heating/cooling costs with proper insulation.


The Ultimate Physical Limits of Computation:

http://arxiv.org/pdf/quant-ph/9908043


I don't think (mostly) self-sustaining outposts on the Moon and Mars are technologically infeasible. See for example the studies Robert Zubrin did for NASA.

Whether there are sufficient economic incentives to actually build them is unfortunately a different question.


Actually it does, at least as outlined in his book.

The Cisco VPN client for OS X is crap though. Whenever my Mac crashes with a kernel panic, Cisco is to blame.

You'd better wish for your employer to switch to something that can readily be used with the tools built into the OS.


How much does that job pay, and how hard is it to become a lookout observer?

From the same site:

"The original copyright holder retains: [...] The right to post author-prepared versions of the work covered by ACM copyright in a personal collection on their own Home Page and on a publicly accessible server of their employer, and in a repository legally mandated by the agency funding the research on which the Work is based. Such posting is limited to noncommercial access and personal use by others, and must include this notice both embedded within the full text file and in the accompanying citation display as well:

"© ACM, YYYY. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in PUBLICATION, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn



You can't prevent exceptions when you do IO or dynamically allocate memory.
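
For example (assuming a C++ context here, which the thread doesn't spell out): a failed allocation throws std::bad_alloc, and a stream configured to throw raises std::ios_base::failure on I/O errors. You can catch these, but you can't statically rule them out. A minimal sketch:

  #include <fstream>
  #include <iostream>
  #include <new>

  int main() {
      // Dynamic allocation: operator new throws std::bad_alloc when it fails.
      try {
          char* p = new char[static_cast<std::size_t>(-1) / 2];  // absurdly large request
          delete[] p;
      } catch (const std::bad_alloc& e) {
          std::cerr << "allocation failed: " << e.what() << '\n';
      }

      // I/O: a stream can be told to throw on failure instead of just setting flags.
      std::ifstream in;
      in.exceptions(std::ifstream::failbit | std::ifstream::badbit);
      try {
          in.open("does-not-exist.txt");   // throws std::ios_base::failure
      } catch (const std::ios_base::failure& e) {
          std::cerr << "I/O failed: " << e.what() << '\n';
      }
  }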