
> But many states have explicit laws against surreptitious audio recording. I suspect that comes from politicians being caught with wires (yeah, I'm a cynic).

Voices can be easily faked, that's the reason. Same will happen to video as deepfake technologies keep improving.




> By that logic, any device with a microphone and potential to record should not exist.

Not at all; it's how you use it. Cars can run over people, yet cars are allowed to exist: driving over someone is a crime committed by the operator. Likewise, a machine with a microphone need not be, of itself, illegal.

There is a separate issue with secret microphones, as with Google's camera product that secretly included a microphone. That seems fraudulent, because the user cannot make an informed decision about whether she can trust the device.


>> There's also a more fundamental issue: absent explicit recording, we treat everything that's output to the screen and speakers as ephemeral.

This is very much like why it feels so different to have automated license plate readers tracking and recording where your car has been, even though that information is clearly public, visible to anyone watching your car drive down public roads.


>Millions more are reluctant to invite the devices and their powerful microphones into their homes out of concern that someone might be listening.

Sometimes, someone is.

This wording is misleading, because it sounds like someone is listening live through the device as it picks up speech, when in fact it's recordings being reviewed after the fact.


I meant that you can't trust a recording of what someone said. Basically, any audio could be faked.

Which would be great for people like Trump. "Trust me, that audio is fake news. Fake!"


> To my understanding, these smart speakers only phone home when you say the keyword, right? They aren't storing or sending everything they hear throughout the day.

1) There have been various cases of such devices triggering incorrectly and uploading chunks of recordings

2) It's all implemented in software. It would be extremely easy for the vendor to enable more keywords or record for longer periods.

3) It's impossible to prove that (2) is not already happening in limited cases

4) There is a proven long history of very effective global surveillance programs targeting every electronic device (phones, cell towers, carrier-grade routers, PCs, servers).
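Point (2) is worth dwelling on: in a hypothetical wake-word pipeline, the trigger list is just configuration data, so widening what gets uploaded is a config push rather than a hardware change. A minimal sketch (all names and keywords here are illustrative assumptions, not any vendor's actual implementation):

```python
# Hypothetical wake-word gate: the keyword set is ordinary data that a
# server-side config push could silently expand.
DEFAULT_KEYWORDS = {"hey assistant"}

def should_upload(transcript: str, keywords=DEFAULT_KEYWORDS) -> bool:
    """Return True if any configured keyword appears in the local transcript."""
    text = transcript.lower()
    return any(k in text for k in keywords)

# Normal operation: only the wake word triggers an upload.
assert should_upload("Hey assistant, set a timer")
assert not should_upload("let's talk about the election")

# One config change widens the net, with no visible change on the device:
expanded = DEFAULT_KEYWORDS | {"election", "protest"}
assert should_upload("let's talk about the election", expanded)
```

Nothing about the device's hardware or visible behavior distinguishes the two configurations, which is exactly why (3) is hard to disprove.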


> I know enough about tech to know that it's very very very hard technical problem, and hiding it is basically impossible.

This gets repeatedly asserted, and it's false. Low-accuracy voice transcription is a solved problem, and it's relatively easy to hide, as long as you have API access[1]. So Facebook is probably in the clear, since neither Apple nor Google is crazy enough to give them invisible microphone access, but it would be relatively easy for Google itself (a Play Services hook, anyone?).
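Part of why hiding it is plausible: a keyword-level transcript is text, and text compresses to almost nothing next to routine app telemetry. A rough sketch with illustrative (assumed) numbers:

```python
import gzip

# A day's worth of keyword hits, as a hypothetical low-accuracy transcript.
# The repetition is artificial, but real keyword streams are also redundant.
daily_transcript = " ".join(["vacation flights tokyo"] * 200)

payload = gzip.compress(daily_transcript.encode())
print(len(daily_transcript.encode()), "->", len(payload), "bytes compressed")

# Small enough to ride inside a single ordinary analytics request.
assert len(payload) < 1000
```

Spotting a few hundred extra bytes inside the analytics traffic a typical app already sends would be very hard, which is the sense in which "hiding it is basically impossible" doesn't hold.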

[1] https://news.ycombinator.com/item?id=27142812


> That assumes everyone knows all the words which trigger a recording device and that those devices actually remotely record sound. Including a 90-year-old who hasn't used anything beyond a rotary telephone.

Don't put a Google Home in babushka's apartment, she wasn't going to buy one anyway, and she doesn't need it!


>However, they clearly CAN, and while they almost certainly don't on any massive scale, if they, at the behest of their overlord or via a security hack, did on a targeted basis listen to, record, and send conversations on, that would be very possible, and very harmful for whoever was targeted.

Besides the fact that they have better antennas, this is true for pretty much any device with a microphone and a processor with audio processing capability, no?

My laptop has a microphone and a camera. Surely Microsoft could just start recording with the right Windows update?

My Avaya VoIP phone has a speakerphone; it could do the same.


Quick facts:

> Needs a decent set of audio data to work with; 30 minutes of an audiobook will do.

> They are working on watermarking the waveform somehow, to try to hamper its use for impersonating people illegally.

Obviously bringing this tech to the masses is an interesting development. In a post-Donald-Trump world, anything we can do to get people to use critical thinking is great; perhaps eventually we will all develop a Photoshop radar for audio too. It's still pretty crazy to me what we will be able to produce with audio and visual manipulations on the fly, as well as advanced AI.


You're picking a piece out of my full response and adding context I didn't provide. I said the overall statement was absurd, not that capturing audio was.

> what tech does this even use? Do they mean using Pegasus or similar malware that the govt has to first get onto the suspect's devices, or is this via Google/Apple or the device manufacturers that makes 'remotely and secretly activating a microphone' even possible?

It would have to be after compromise, which means it's likely only used in a very small number of cases, due to the sensitivity and cost of the technology involved.


> it's technically possible they really do

That's not the problem; it doesn't matter whether your particular device is listening at any particular time. Using an always-on microphone normalizes the expectation that audio in the home might be sent to a third party's remote server. This matters because Kyllo v. United States created[1] a bright-line test for when a technology constitutes a "search".

Normalizing audio eavesdropping technology in previously private areas will eventually mean use of that technology is not a search, and thus the police/etc can use their own device with similar technology without a warrant to see "details of a private home that would previously have been unknowable without physical intrusion"[2].

[1] https://news.ycombinator.com/item?id=15853560

[2] http://caselaw.findlaw.com/us-supreme-court/533/27.html


> As someone else mentioned, they could of course record constantly, then compress and transmit in batches, and that would probably go unnoticed.

Possibly, but it can be fingerprinted by using known audio samples and intentionally producing very large amounts of data. See my other reply.
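The fingerprinting idea amounts to metering the device's uplink while feeding it known audio, then comparing against an idle baseline. A sketch with hypothetical byte counts standing in for real packet-capture measurements:

```python
# Compare average uplink volume during a controlled test against an idle
# baseline; a device uploading batched audio should stand out by volume.
def exfiltration_suspected(idle_bytes, active_bytes, factor=3.0):
    """Flag if uplink traffic during the test grossly exceeds the idle baseline."""
    baseline = sum(idle_bytes) / len(idle_bytes)
    measured = sum(active_bytes) / len(active_bytes)
    return measured > factor * baseline

idle = [1200, 900, 1100]        # bytes/min with the room silent (assumed)
talking = [1000, 1300, 950]     # loud conversation, wake word never spoken
exfil = [52000, 48000, 61000]   # what constant batched upload might look like

assert not exfiltration_suspected(idle, talking)
assert exfiltration_suspected(idle, exfil)
```

This only catches volume anomalies; a vendor shipping tiny compressed transcripts rather than raw audio would need the "very large amounts of data" trick mentioned above to show up.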


> If 99% of the screen is blurred and the audio has been transcribed then how does the receiver know this is a real leak at all?

There is probably a way to eliminate the possibility of revealing the source without compromising any significant amount of fidelity.


> These devices don't send audio over the network unless a shooting-like noise is detected.

How does this jibe with the fact that the police can apparently request a ShotSpotter operator review of audio recordings for up to 30 days if a known shooting is missed by the system? How does this jibe with the fact that the system has apparently at least twice recorded voice conversations that were used (and thrown out in one case due to violating wiretap laws) in court cases?

Edit and even if we trust ShotSpotter to do the right thing, how do we know their systems are secure enough to keep those recordings away from less-honorable actors?


The parent's point is that past overreach never had the technology we have now, which drastically changes the situation. The government could secretly mandate that all phone manufacturers record from the phone microphone all the time and have that data analyzed. If this happens in 10 years, the general public might not find out for years, and even if they do, most of the public will just shrug and move on.

I think this is a bad example of what a well executed smear might look like. The recording sounds very unnatural (and one sided).

Further, it is not that different from just making spurious allegations or a meme with a fake quote. However, this is going to get worse and worse in the future. I am glad that the government (who come from the opposing party) decided to confirm that it's a fake. A more plausible version of something like this could be very damaging though.

Audio is pretty hard to fake in general, especially in the UK, which has some of the best forensic audio capability out there (mains hum is logged in the UK for forensic purposes, and other techniques make low-hanging forgeries like this pretty easy to detect) [1].

[1] https://en.wikipedia.org/wiki/Electrical_network_frequency_a...
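The core of electrical network frequency (ENF) analysis: a genuine recording made near mains power carries a faint hum at the grid frequency (50 Hz in the UK) whose drift over time can be matched against logged grid data. A minimal sketch that just detects the hum's dominant frequency in a synthetic clip:

```python
import numpy as np

def dominant_hum_hz(signal, sample_rate, band=(45.0, 55.0)):
    """Return the strongest spectral peak inside the mains-frequency band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

rate = 8000
t = np.arange(0, 10, 1.0 / rate)  # ten seconds of synthetic "audio"
rng = np.random.default_rng(0)
# Faint 50.1 Hz hum buried in much louder broadband noise.
clip = 0.02 * np.sin(2 * np.pi * 50.1 * t) + 0.5 * rng.standard_normal(t.size)

print(round(dominant_hum_hz(clip, rate), 1))  # recovers ~50.1 Hz
```

Real ENF matching goes further, tracking how the hum frequency wanders over minutes and correlating that trace against archived grid logs; a spliced or synthesized clip tends to have a discontinuous or absent trace.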


> Any device that responds to voice commands must be listening. Always. This is obvious.

Processing sound is required, but recording or sending it to a server is not. I think the latter two are what most people would be concerned about.
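The distinction can be made concrete: a device can "listen" with only a short rolling buffer in RAM, checked locally, so nothing leaves the device unless a trigger fires. A sketch with illustrative names (frames here are pre-transcribed text for simplicity):

```python
from collections import deque

class WakeWordGate:
    """Hypothetical always-listening loop that only uploads after a trigger."""

    def __init__(self, keyword="alexa", buffer_frames=16):
        self.keyword = keyword
        self.buffer = deque(maxlen=buffer_frames)  # old frames fall off the end
        self.uploaded = []  # stands in for network sends

    def on_audio_frame(self, frame_text):
        """Buffer one locally processed frame; 'upload' only on a match."""
        self.buffer.append(frame_text)
        if self.keyword in frame_text:
            self.uploaded.append(list(self.buffer))  # send buffered context

gate = WakeWordGate()
for frame in ["the weather is", "nice today", "alexa what time", "is it"]:
    gate.on_audio_frame(frame)

assert len(gate.uploaded) == 1  # only the wake word produced an upload
```

Of course, as other comments note, this guarantee lives entirely in software: the same loop with a wider keyword list or a larger buffer behaves very differently, and the user can't tell which version is running.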


>playing recordings or videos of people talking has essentially no benefit.

Really? I thought it had some, but possibly only if access to real conversation was limited. Any good reading on this?

