> Did anyone honestly think it was anything other than humans manually entering data from the submitted images?
They can have humans entering data manually but have safeguards in place to protect data - usually done by either hiring people in-house or having a contract with a firm that employs a regular set of people to do this work.
Sending the data to Mechanical Turk implies basically none of that safeguarding. Yeah, it's possible to do it, but from the sounds of it - especially given Expensify's appalling response - it's pretty apparent that they didn't.
> There is nothing improper about how we were doing this.
You were automating it, weren't you?
With bookkeepers, accountants, etc. the login and work done on the system was manual. It was an actual person doing it.
In your case, a computer is doing it instead.
Virtually every site out there, from Facebook to Twitter, prohibits the use of bots and scraping. Not surprising ADP isn't a fan.
What is surprising is that you feel entitled to access their system however you want. It's their system. If they want to prohibit bots and allow only people, that's their biz. If you think ADP is full of it, create your own system with a public API and put them out of business.
This is very unethical, as there are real people involved behind this sort of operation. It's not fancy algorithms but people freaking typing captchas all day.
By using captcha solving “services” you are often supporting slave-like working conditions.
> How does a Big Tech company like Uber mishandle private data like email addresses? Presumably this action was not coordinated by the C-suite. Did some mid-level/low-level employee with access to the data actually steal and sell the addresses?
This is bizarre. You're giving them way too much credit. They're villains. Of course they're going to sell your data. They'd spit in your mother's face for a nickel.
>Recently there was a story about Ghostery reselling user data to advertisers [3]. How much can you really trust these basically unaccountable groups (in comparison)?
Except all the data collected and sold was opt-in, and they told you it was anonymized and sold on the page next to the opt-in checkbox.
> Where do you think banks got the idea? They just copied an existing shady business model.
Tracking user input methods is not a shady business model. I was doing this for web forms back in 2003/4 to fight fraud, improve the user experience, and find problems.
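For readers wondering what "tracking user input methods" looks like in practice, here is a minimal, hypothetical sketch of that kind of form instrumentation used as a fraud signal. It is not the commenter's actual code; the /signals endpoint, field names, and the specific signals collected are invented for illustration.

    <!-- Hypothetical sketch of input-method tracking on a web form.
         Records, per field, whether the value was typed or pasted and the
         time from first interaction to submit, then reports those signals.
         The /signals endpoint and field names are made up for this example. -->
    <form id="signup" action="/signup" method="post">
      <input name="email" type="email">
      <input name="card" type="text">
      <button type="submit">Submit</button>
    </form>
    <script>
      var signals = {};                     // per-field input metadata
      var form = document.getElementById('signup');

      form.querySelectorAll('input').forEach(function (field) {
        var s = { keystrokes: 0, pasted: false, firstTouch: null };
        signals[field.name] = s;

        field.addEventListener('keydown', function () {
          if (s.firstTouch === null) s.firstTouch = Date.now();
          s.keystrokes++;                   // typed character by character
        });
        field.addEventListener('paste', function () {
          if (s.firstTouch === null) s.firstTouch = Date.now();
          s.pasted = true;                  // value arrived via the clipboard
        });
      });

      form.addEventListener('submit', function () {
        Object.keys(signals).forEach(function (name) {
          var s = signals[name];
          // Time from first interaction to submit; zero keystrokes with no
          // paste usually means autofill or a bot filling the field.
          s.msToSubmit = s.firstTouch === null ? null : Date.now() - s.firstTouch;
        });
        navigator.sendBeacon('/signals', JSON.stringify(signals));
      });
    </script>

Note that a sketch like this stays first-party and reduces to a few booleans and timings; the objections elsewhere in the thread are about pages shipping users' raw input to third-party servers.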
> If my boss got a contract with a bank that said I had to track users against their will
But they aren't being tracked against their will, so you would have done this.
>Heck with the entire world being able to audit and review the source code
That's irrelevant when we are talking about a company being paid specifically to audit something. The entire world is able to send me food as well, but I don't get mad when it doesn't, except when I pay someone to do it.
>I simply asked a commentor to show the work they've done
And it was a dumb question. An auditing company that failed to detect massive fraud either willfully ignored it to sell out or was too incompetent to recognize it.
> I think you're completely ignoring the reality of frauds.
Or maybe their strategy still catches all of the frauds and it has therefore never been a problem to them?
I have to agree with their take, and just asking a bunch of technical questions, even without any code, is good enough to filter out the obvious incompetents.
>>>I routinely generate images of checks which I then snap a picture of right off the monitor in order to remote deposit. Of course this is always done with the written and signed consent of the account holder.
This is clever on your part, but holy cow there should be a better system than this.
> I don't have any reason to not believe their explanation.
Why not? They've been engaging in very, very dark gray area things before.
> And I don't think my clipboard is useful to them for anything else.
Don't you ever copy-paste anything mildly interesting, like email addresses, phone numbers to add to your address book, or form contents before submitting a form in case the site is terrible and resets the form on error? I'm sure they'd love to get their hands on that information, as it'd allow for data mining to better "understand you".
> i read it twice and i still don’t understand the article.
Any recommendations to improve it are welcome.
> is this site complaining about 3rd party analytics?
If a bank page includes a script tag that loads third party JavaScript from a non-bank server, then what is to stop that script from capturing data, submitting forms, or spoofing page content?
The bank has effectively given these third parties unaudited remote access, via remote code execution, to consumers' bank accounts.
A bank can safely use third party analytics if it adopts appropriate security measures; Subresource Integrity (SRI) is likely to be one, but alone might not be enough (see the example below).
In the cases found here, there is no SRI protection or similar to protect users from the third parties doing what they like on the page, acting as customers.
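For the curious, a minimal illustration of what SRI protection looks like; the URL and hash are placeholders, not taken from any bank page in the article.

    <!-- Hypothetical example: a third-party analytics script pinned with SRI.
         The browser refuses to run the file unless it hashes to the value in
         the integrity attribute, so the vendor cannot silently change it.
         URL and hash below are placeholders. -->
    <script src="https://analytics.example.com/tag.js"
            integrity="sha384-REPLACE_WITH_BASE64_HASH_OF_THE_EXACT_FILE"
            crossorigin="anonymous"></script>

    <!-- Without the integrity attribute, whatever the third party chooses to
         serve runs with full access to the page: it can read form fields,
         submit forms, or rewrite the content the customer sees. -->
    <script src="https://analytics.example.com/tag.js"></script>

The catch, and part of why SRI alone might not be enough: analytics vendors update their scripts constantly, so a pinned hash breaks on every release, and SRI does nothing about further scripts that the pinned script loads at runtime.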
> there is also a table further down that shows various banks sharing data with… themselves?
This oddity is due to the test suite spotting JS from a separate domain belonging to the same bank ( https://gitlab.com/markalanrichards/access-test/-/blob/main/... ): thank you for highlighting this, and when I get time I hope to filter it out.
> I would assume that any data sales have some sort of written agreement for what the data will be used for
Would you also assume that Grindr would have a privacy policy in the TOS which stipulates this as well? So, the data buyer lied and thus made Grindr a liar to you; is Grindr the victim?
At what point do we say that 'whoops, we got hacked/tricked/sloppy and your data is in the hands of bad guys, our bad' is not gonna fly any more? Shut them down. A 1 billion dollar fine next time Home Mart loses all customer files in a hack, or SocialFace sells data to people who cause you to lose your livelihood for being gay.
>"You need to be careful sending bogus data: in some jurisdictions this could be argued to be deliberate targeted commercial sabotage." //
That sounds pretty ludicrous, do you have anything to back it up - caselaw, a settlement report? It would be analogous to serving a fake image to combat hotlinking, or a fake page to combat framing.
> This sounds very much like trusting a fox to guard the henhouse. What do they then do with the submitted personal information? Why should we trust that they will behave ethically with it? What happens if, and when, they have a data breach?
They have no incentive to behave incorrectly as all their business is based on trust.
> But I ask myself: How difficult can it be for a billion dollar revenue company?
Am I the only one who has trouble believing that someone who was at a technical level involved in scraping and running facial recognition on 200k photos would genuinely answer "Not very"?