
This is sort of off-topic because it's not using Mathematica, but this[1] is another really neat intruder detection system that I've found that's surprisingly accurate.

It's based on a technology[2] created by the guy who made Palm.

And here's a guy trying to fool it and mostly failing.[3]

[1] http://vitamindinc.com/

[2] http://www.numenta.com/htm-overview/education/HTM_CorticalLe...

[3] http://www.youtube.com/watch?v=tXtaZSHs77A




This relies on an unsuspecting victim wearing a complicated nonstandard headset and then looking at a series of images / numbers slowly enough to register each of them consciously.

In what world would the victim not become suspicious?

(I appreciate things may change in the future, and if brain control headsets become common then a malware model (ad popups, for example) could provide a plausible vector for this attack.)


I read about this in Silence on the Wire by Michal Zalewski. You don't need full-blown AI; a good statistical model is enough to make guesses at passwords, or at least to cut your search space down to a more probable set. And the book is from 2005, so I wouldn't say this is new. https://nostarch.com/silence.htm

I even remember reading about how Clifford Stoll recognized the different attackers by their "typing rhythm" in The Cuckoo's Egg.
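For a rough sense of how the timing-based guessing works, here's a toy sketch (not Zalewski's code; the digraph timing table and the observed intervals below are made up for illustration): score candidate passwords by how well their expected inter-key latencies match a sniffed timing trace, and try the best-scoring guesses first.

    # Toy sketch: rank candidate passwords by how well their expected digraph
    # timings match an observed keystroke-interval trace.
    import math

    # Hypothetical model: mean/std inter-key latency (ms) for two digraph classes.
    # In practice you'd estimate these from captured typing data.
    DIGRAPH_MODEL = {
        "same_hand": (95.0, 25.0),
        "alternating": (140.0, 30.0),
    }

    LEFT_HAND = set("qwertasdfgzxcvb")

    def digraph_class(a, b):
        return "same_hand" if (a in LEFT_HAND) == (b in LEFT_HAND) else "alternating"

    def log_likelihood(candidate, intervals):
        # Gaussian log-likelihood of the sniffed inter-key gaps under this candidate.
        total = 0.0
        for (a, b), dt in zip(zip(candidate, candidate[1:]), intervals):
            mu, sigma = DIGRAPH_MODEL[digraph_class(a, b)]
            total += -((dt - mu) ** 2) / (2 * sigma ** 2) - math.log(sigma)
        return total

    observed = [150, 90, 145, 100]            # sniffed inter-key gaps in ms
    candidates = ["jamie", "patsy", "qwert"]  # dictionary to re-rank
    ranked = sorted(candidates, key=lambda c: log_likelihood(c, observed), reverse=True)
    print(ranked)                             # most timing-consistent guesses first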


Has anyone tried fooling the system as an experiment? Considering that computer vision remains relatively non-robust, I imagine this shouldn't be too hard.

On the lighter side,

https://www.youtube.com/watch?v=b-RXDCEErWQ

:D


I built some network security software in the early 2000s. Around 2005, a local guy built a keystroke pattern recognizer that used neural networks to learn your keystrokes and could correctly identify who you were after a minimal amount of learning (typing). He brought it by to see if we were interested in licensing it and using it in our product.

While it was somewhat of a black-box demo, we were able to play with the technology. We tried a ton of stuff to fool the system (physical only; we didn't use keystroke macros or anything like that) and it would correctly identify us every time. It showed us the probabilities as they changed, and it was uncanny how it would immediately know that I had started typing instead of a coworker.

So it's not only possible, it exists; its only drawback is the lack of necessity. Outside of the highly paranoid using it to prevent outside intrusions (government, mostly), not many systems need it, because lower-end attacks are much easier to pull off and typically successful enough.
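That demo was a black box, so this is only a guess at its shape, but a minimal version of the idea might look like the following: turn windows of inter-key timings into feature vectors and train a small neural net to say who is typing. The data here is synthetic and sklearn is just for illustration.

    # Rough sketch of keystroke-dynamics identification (not the actual product):
    # fixed-length windows of inter-key gaps, fed to a small neural net classifier.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def fake_session(mean_gap, n_vectors=50, window=10):
        # Stand-in for captured typing: windows of inter-key gaps (ms) per user.
        gaps = rng.normal(mean_gap, 20, size=(n_vectors, window))
        return np.clip(gaps, 30, None)

    # Two "users" with different typing rhythms (synthetic data for illustration).
    X = np.vstack([fake_session(120), fake_session(180)])
    y = np.array([0] * 50 + [1] * 50)

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X, y)

    # At "runtime" you'd slide a window over live keystrokes and watch the
    # probabilities shift as soon as someone else starts typing.
    print(clf.predict_proba(fake_session(125, n_vectors=3)))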


Or smart detection. :)

Thank you, I was wondering how it differed from Wikipedia's existing vandalism detector systems like ClueBot.

Marcus J. Ranum's "Artificial Ignorance" concept is pretty good. I did an implementation and have been running it for more than 10 years:

http://www.ranum.com/security/computer_security/papers/ai/

Someone's also done a Python implementation: https://github.com/lchamon/ai

...and it looks like there's one in syslog-ng Premium Edition:

https://syslog-ng.com/documents/html/syslog-ng-pe-latest-gui...
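The core of the approach fits in a few lines. Here's a minimal sketch of the idea (mine, not Ranum's scripts or the linked repo), assuming a hypothetical ignore.patterns file containing one regex per line for log entries you've already decided are uninteresting:

    # "Artificial ignorance" sketch: drop log lines matching known-boring
    # patterns and report everything else.
    import re
    import sys

    # You grow ignore.patterns over time as you triage the leftovers.
    with open("ignore.patterns") as f:
        boring = [re.compile(p.strip()) for p in f if p.strip() and not p.startswith("#")]

    for line in sys.stdin:
        if not any(p.search(line) for p in boring):
            sys.stdout.write(line)   # never-seen-before stuff bubbles up here

Pipe a log through it and whatever survives is, by definition, something you haven't explained yet.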


Sometimes it is useful to make useless things. It builds experience, it's fun, and it makes you better at coming up with "useful" things. I tried this with a friend of mine. Our plan was:

1. Point a webcam at his desk.

2. Build a model that detects whether he himself is sitting at the desk.

3. If it is not him, you've detected an intruder, so spray the intruder with a squirt gun or something.

We only got as far as a model that distinguishes between him and other people, though we did deploy it following this guide: https://course.fast.ai/deployment_render.html
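The training side is only a few lines with fastai. A minimal sketch, assuming a recent fastai v2 and a hypothetical folder of webcam frames sorted into him/ and not_him/ subdirectories:

    # Sketch of the "is that him at the desk?" classifier (paths are hypothetical).
    from fastai.vision.all import *

    path = Path("webcam_frames")   # webcam_frames/him, webcam_frames/not_him
    dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, item_tfms=Resize(224), seed=42)

    learn = vision_learner(dls, resnet18, metrics=error_rate)
    learn.fine_tune(3)

    # Later, on a fresh frame from the webcam:
    pred, _, probs = learn.predict(PILImage.create("latest_frame.jpg"))
    print(pred, probs)             # if it's not him, fire the squirt gun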

Also the material is just really interesting. If you want to know how a lot of products work, this is one of the most fun ways to learn.


I saw this guy the other day - very interesting tech, but doesn’t address your specific question.

https://m.youtube.com/watch?v=rDC34awd0f8

There are a lot of people doing research in this space:

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=hum...


Working on an anti-forensic tool just for the fun of it.

In lab settings, it seems very close ;)

Detecting Invisible People

http://www.cs.cmu.edu/~tkhurana/invisible.htm


If you combine this with many other types of sensor data, you can get a pretty high-dimensional array of stuff that can easily form person-unique clusters. Combining screen resolution, touch support, user agent, time zone, a hash of the canvas fingerprint, etc. [0] gets you a pretty unique signature, even though any single bit of info is essentially worthless on its own.

[0] https://panopticlick.eff.org/
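As a toy illustration of how cheap this is: hash the combined attributes and you have a stable identifier, even though each attribute alone is shared by millions of people. The values below are placeholders for what you'd collect client-side.

    # Toy illustration of combining weak signals into a strong fingerprint.
    import hashlib
    import json

    signals = {
        "screen_res": "2560x1440",
        "touch_support": False,
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
        "time_zone": "Europe/Berlin",
        "canvas_hash": "9f2b7c...",   # hash of a rendered canvas, per panopticlick
    }

    # Each field alone is common; the combination usually isn't.
    fingerprint = hashlib.sha256(
        json.dumps(signals, sort_keys=True).encode()
    ).hexdigest()
    print(fingerprint)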


This is very cool; as computer vision gets better and the database grows, I can easily see this being the default way to do phishing detection.

If I were to try something like that, I would use a spy camera and image recognition.

And perhaps some smarts that kick in when heuristics detect that you are being assaulted, and upload imagery of the event straight away.

(I'm taking this from Transmetropolitan if anyone's in the know)


I'm quite sure that malicious code has been written to do similar things already. I think what they've done with the technique is of interest here, not the technique itself (reading pixels from the screen and intercepting the mouse / keyboard).

I know literally nothing about this science, so that paper left me wondering about the following question:

Take a visual face-recognition door lock or similar system. If I want to break such a lock, can I install the same system at home, train it with secretly taken pictures of an authorized person, and evolve some kind of key picture against my home system until I can show it to the target door lock and fool it into giving me access?

OK, this is a very simplified way to put the question, but is that something this paper implies is possible (in a more sophisticated form)?
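Conceptually, the home-lab attack you describe is just black-box optimization against your local copy of the recognizer. Here's a toy hill-climbing sketch; the scoring function is a stand-in for "my home system's confidence that this image is the authorized person," not anything from the paper. Whether the evolved picture then transfers to the target lock is exactly the open question the adversarial-examples literature studies.

    # Conceptual sketch only: random hill-climbing against a local recognizer.
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder for the locally trained model's confidence; here it's a dummy
    # target pattern so the loop actually runs.
    target_pattern = rng.uniform(0, 1, size=(32, 32, 3))

    def score_as_target(img):
        # Higher means "looks more like the authorized person" to the home copy.
        return -np.abs(img - target_pattern).mean()

    img = rng.uniform(0, 1, size=(32, 32, 3))    # starting "key picture"
    best = score_as_target(img)

    for _ in range(5000):
        candidate = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)
        s = score_as_target(candidate)
        if s > best:                             # greedy: keep mutations that score better
            img, best = candidate, s

    print("final confidence proxy:", best)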


Latest idea, which I already posted about a few days ago: a keycard-based door lock with actual security (Ed25519-based authentication, instead of the standard crackable 48-bit DES). Nobody really cares about that level of security.

A phone app that makes scanning documents stupid easy by using machine learning to identify when a document is on screen and automatically take a picture, and then use more machine learning to convert the picture into a cleaned up image. The concept to train the machine models is pretty easy; just print out a known document, take a bunch of pictures, and train the model to convert from the pictures to the original document. But it would likely require massive kernels to get enough context to do the necessary 3D transformations. Could probably reduce the strain on the algorithm by using traditional picture->document algorithms and then use the model to clean it up further. Either way, it's new tech that would require a lot of work.
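For what it's worth, the traditional picture->document step mentioned above can be sketched with OpenCV alone (file names here are hypothetical): find the largest four-sided contour and warp it flat, then let the learned model clean up the result.

    # Classical document flattening: detect the page outline, perspective-warp it.
    import cv2
    import numpy as np

    def order_corners(pts):
        # Order 4 points as top-left, top-right, bottom-right, bottom-left.
        pts = pts.reshape(4, 2).astype("float32")
        s, d = pts.sum(axis=1), np.diff(pts, axis=1).ravel()
        return np.array([pts[s.argmin()], pts[d.argmin()],
                         pts[s.argmax()], pts[d.argmax()]], dtype="float32")

    img = cv2.imread("photo_of_document.jpg")    # hypothetical input photo
    edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 75, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    page = None
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:                     # first big 4-sided contour ~ the page
            page = order_corners(approx)
            break

    if page is not None:
        w, h = 1240, 1754                        # roughly A4 at 150 dpi
        dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype="float32")
        flat = cv2.warpPerspective(img, cv2.getPerspectiveTransform(page, dst), (w, h))
        cv2.imwrite("flattened.png", flat)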

Vocal isolator for extracting vocals from music, using a machine learning model. These tools kind of already exist, but they don't use machine learning, usually require user interaction to tweak knobs, and don't work particularly well. My bet is that a 1D conv network can do great, and generating examples for training is super easy. But, again, new tech, and as easy as machine learning sounds it usually devolves into long nights of debugging, tweaking, and banging your head trying to figure out why changing one hyper-parameter ever so slightly completely changes your results.
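A bare-bones sketch of the 1D-conv idea, assuming PyTorch and synthetic stems (a real separator would be much deeper and trained on actual mixture/vocal pairs):

    # Tiny 1D-conv separator: map a mixture waveform to an estimate of the vocals.
    import torch
    import torch.nn as nn

    class TinySeparator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=15, padding=7), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=15, padding=7), nn.ReLU(),
                nn.Conv1d(32, 1, kernel_size=15, padding=7),
            )

        def forward(self, mixture):              # (batch, 1, samples)
            return self.net(mixture)             # estimated vocals, same shape

    model = TinySeparator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Training pairs are easy to generate, as noted above: mix known vocal and
    # instrumental stems, then supervise against the clean vocal.
    vocals = torch.randn(8, 1, 16000)            # placeholder stems
    instrumental = torch.randn(8, 1, 16000)
    mixture = vocals + instrumental

    for _ in range(10):                          # toy training loop
        opt.zero_grad()
        loss = nn.functional.l1_loss(model(mixture), vocals)
        loss.backward()
        opt.step()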

A hardware password manager. Like KeePass/LastPass/etc., but as an actual hardware device, possibly a wearable/watch. The idea is that a hardware password manager provides more security than a software one, by virtue of not being susceptible to malware (though yes, passwords can still be stolen one by one if used on an infected machine), and, most importantly, works with phones/tablets over Bluetooth. Requires developing hardware, productization, etc. Not a big market; nobody cares about security.

I have a quiz website that I built for my wife, who's a teacher. It allows her to give online fill-in-the-blank quizzes to her classes, with a set amount of time to take the quiz and a set amount of time that the quiz is available. She's happy with it; I made it because she had something similar with Edmodo, but the school dropped Edmodo. I can't imagine such a simple website is worth much though, and it's currently only designed for her and a bit rough around the edges. It'd take quite a lot of UI updates and a bit of backend tweaking to make it usable by the general public.

A couple game ideas, mostly geared towards programmers. Pretty niche, and games take immense effort versus profit.

A service to provide email to Bitcoin addresses. You authenticate yourself using your Bitcoin wallet's signature mechanism, and then can receive email like at e.g. 1p7H5w1LfgLT6tat951e85dFXEDQjNp8L@example.com. I got this mostly working, but rendering emails is super hard, who wants to use yet another mail client, how do I monetize, how do I fight spam, etc, etc. Lots of security and anonymity angles to this, but security is hard, so again lots of work.

NSFW: Machine learning based porn assistant. You thumb up/down images, it learns what you like, and finds more of it. Keeps learning as you thumb up/down its results. Classic machine learning model. Requires new tech, and it probably requires training a model per user (super expensive), or some kind of transfer learning.

Lots more beyond that, all very pie-in-the-sky, new tech related, or super niche.


Wait, so it's actually trained on user drawings? I honestly thought it was someone trying to play a prank on us.
