The article suggests it might be: "According to the whistleblower, the security fears raised at that meeting, including concerns that the transfer may be in breach of federal HIPAA rules on data privacy, have so far gone unanswered by Google."
That said, most people do not understand how HIPAA works (I am in no way saying you are one of these people). Unless you are a healthcare provider (think doctor) or a business that is supporting those providers (think 3rd party tools built specifically for managing healthcare records), it's pretty difficult to have a legitimate HIPAA complaint made against you.
I am indeed someone who doesn't understand how HIPAA works. I have, however, seen instances of healthcare professionals getting jail time for disclosing celebrity health records. How is Google able to legally get access to these records? I suspect they're not, and if so, someone should be held criminally liable for this.
If Google is able to get these, what's stopping anyone else?
Essentially, HIPAA compliance is /the/ responsibility of the healthcare provider, not Google. I am not sure about the transfer and the laws there, but not giving the data out is the (less famous) provider's job. Enforcement works by attaching unavoidable consequences to a designated entity: no excuses or buck-passing, they signed off on it.
It doesn't preclude other crimes, whether by hackers or otherwise, but it doesn't technically guarantee anything on Google's part. Technically, the provider could have just given out sensitive information like complete idiots because they were asked.
I, too, am familiar with (and bound by) HIPAA. I agree this is likely a violation.
Having said that, my job in the healthcare IT world is building interfaces, i.e. facilitating the transfer of health data from one system to another. Most likely what's going on here is Google and Ascension have a project together, and part of that project is either an interface or a data dump from Ascension to Google for the purposes stated in the article. I haven't read all the information, but generally the data will be "de-identified", which some interpret as sufficient to avoid HIPAA violations.
Neither company is small or ignorant; they both had their lawyers look at the contract and they signed off on it. So either the lawyers at both companies are mistaken or misled, or somewhere after the initial scoping the scope changed (which, btw, happens all the time) and nobody updated legal or felt the need to update management or raise a concern.
And that's concerning, regardless of which option it is. Either the legal teams at both companies are ill-informed or outright ignorant (perhaps intentionally), or there are no checks -- and no responsible project managers -- in place to prevent this from occurring. Somewhere along the line, someone should have suggested that this was perhaps not cool, and taken the issue up the chain of command. Most healthcare companies have a well established process in place for that, and I can't believe either of these would be different in that respect.
> I haven't read all the information, but generally the data will be "de-identified"
You should read what both Google and Ascension have said about this -- the data is intentionally not being de-identified, although it's not clear what the rationale for that decision is.
Even if it were, though, de-identification isn't actually very effective, particularly if you have easy access to a mountain of other personal data (such as Google has).
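To make that concrete, here's a toy sketch of the classic linkage attack (all data below is made up; the 87% figure is Latanya Sweeney's well-known estimate): join a "de-identified" table against any identified dataset on quasi-identifiers like ZIP code, birth date, and sex.

    # Toy illustration of a linkage attack on "de-identified" records.
    # All data here is made up.
    import pandas as pd

    # "De-identified" medical records: names stripped, but
    # quasi-identifiers (zip, dob, sex) left in.
    medical = pd.DataFrame({
        "zip": ["60614", "60614", "98101"],
        "dob": ["1984-07-31", "1990-02-11", "1984-07-31"],
        "sex": ["F", "M", "F"],
        "diagnosis": ["depression", "diabetes", "asthma"],
    })

    # Identified data the attacker already holds (voter rolls,
    # ad profiles, breach dumps, ...).
    public = pd.DataFrame({
        "name": ["Alice Smith", "Bob Jones"],
        "zip": ["60614", "60614"],
        "dob": ["1984-07-31", "1990-02-11"],
        "sex": ["F", "M"],
    })

    # Joining on the quasi-identifiers re-attaches names to diagnoses.
    # Sweeney estimated ~87% of Americans are unique on exactly this
    # combination (5-digit ZIP + birth date + sex).
    print(medical.merge(public, on=["zip", "dob", "sex"])[["name", "diagnosis"]])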
> Neither company is small or ignorant; they both had their lawyers look at the contract and they signed off on it.
I'm quite certain that, at worst, both companies think that they can get away with this legally. Even if it's entirely legal, though, that in no way means it's right or acceptable.
If you're a covered entity (CE) under HIPAA, you are allowed to have business associates (BAs). BAs are other parties that the CE exchanges PHI with in order to provide services (billing companies, cloud storage providers, etc.). According to the HITECH Act, BAs are bound by the provisions of HIPAA.
Per their press release (https://cloud.google.com/blog/topics/inside-google-cloud/our...), Google is playing the role of a BA as a part of this deal. They have signed a business associate agreement (BAA), as HIPAA requires. This agreement will have defined the permitted uses for the PHI that Ascension is transmitting to Google.
Basically this all sounds utterly ordinary. It's 2019 and even healthcare companies want to be in The Cloud (and especially want to be associated with AI and ML). My last company stored lots of PHI in AWS. AWS signed a BAA with us. Now, if someone at Google with access to this PHI misuses it (e.g., accesses it for an invalid reason or sells it on the black market), then they could be in violation of HIPAA and face penalties. But the mere fact that a covered entity is transferring data to a business associate in no way suggests a HIPAA violation on its own.
(Disclosure: I work at Google, but know nothing about this project.)
> How is Google able to legally get access to these records
As a Business Associate of a health care provider organization, with an agreement in place binding them to the same rules for that data that the principal they serve would have -- enforceable not only by the principal and by patients, but also directly against Google by the government.
> If Google is able to get these, what's stopping anyone else?
Nothing is stopping anyone else from offering the kinds of services to health care providers and insurers that involve patient data under a BAA; most health care providers and insurers have numerous Business Associates performing various functions involving patient data, including, in many cases, large tech firms like Microsoft, Amazon, and, sure, Google. If anything, Google is behind in this space in terms of volume, because Amazon, Microsoft, and some more specialized firms in the healthcare space have stronger enterprise sales positions in general and, especially for Microsoft and the more specialized firms, more established relationships in the space, which makes for a lower "activation energy" to engage those firms as BAs.
> What about patient data? All of Google's work with Ascension adheres to industry-wide regulations (including HIPAA) regarding patient data, and come with strict guidance on data privacy, security and usage. ... To be clear: under this arrangement, Ascension's data cannot be used for any other purpose than for providing these services we're offering under the agreement, and patient data cannot and will not be combined with any Google consumer data.
When there's an obvious breach, hopefully. How would we even know if Google were abusing this data though? Does anyone have access to it besides Google? Are we literally asking Google to regulate itself with this data?
EDIT: I guess I don't understand. Once we give Google the sensitive information, how do we have any way of knowing what they do with it? I'm guessing an audit on all of Google's data is out of the question.
The point of this article is that a whistleblower is saying "they're not controlling access properly".
While the Grauniad is trying to spin it to sound worse, the whole point is Google are providing data processing services to a valid HIPAA processor via Google Cloud, not that they nefariously bought the data to integrate it with the search results.
Much like health data stored on AWS with a dedicated internal project team could be accessed by "Amazon" staff. It's kinda the point; the Google staff have been brought in to help manage the data.
Your point is valid, but I think there was a mis-read or mis-statement. The parent comment probably should have addressed the difficulty of enforcing such provisions.
> Does anybody enforce this or do we just take Google at their word?
Yes, the DHHS Office for Civil Rights enforces the HIPAA Privacy and Security rules. That enforcement is reactive; there is no independent regular compliance certification or monitoring required, which is a weakness. But the fact that detection of a violation can lead to personal as well as institutional penalties, and that those penalties are criminal as well as civil, means it's not a risk that decision-makers tend to be willing to take on just because it would (so long as it went undetected) provide a business opportunity.
They have the job of doing so for the whole healthcare industry, and they certainly have the authority. Capability is a question I'm less comfortable answering, but I see no evidence that they have a Google-specific problem in that regard. There is definitely a lot that could be done to improve enforcement capacity in the health data privacy and security space, and that's something that should be pursued independently of whether some firms choose Google as a BA.
No, you don't; there are fines if you are found in violation, but no one is checking on an ongoing basis. Specific entities may privately pay for audits, or do so as part of certifications (HITRUST, etc.), but that's not required.
The dept of HHS requires any organization with HIPAA business associate status to regularly undergo audits.
Can you fly under the radar and potentially get away with not doing it? Of course, anything is possible. Could a multibillion dollar internet organization beholden to shareholders and under public scrutiny get away with it? Not likely.
>The dept of HHS requires any organization with HIPAA business associate status to regularly undergo audits.
Can you provide a link to this requirement? The HIPAA/HITECH laws provide no requirements for an external audit (and self-audits aren't actually audits) and the HHS, as far as I know, only does small sample random audits unless a complaint was made.
Not because I think what Google did breaks HIPAA laws -- there are many sub-threads below that explain better than I can why this doesn't violate HIPAA -- but rather because the question helps highlight where we should truly be upset.
What Google did was legal; because of that, we should be upset that the government and regulatory bodies created an environment in which this was legal. Rallying against a publicly traded company of hundreds of thousands of employees for doing something "immoral" is not a productive use of your energy.
(The irony here is I usually am _against_ more regulation!)
It's a similar argument to the one I made during the whole Martin Shkreli debacle: senators and congressmen/women got their picture day grilling him with the whole "how could you price gouge these poor, sick people?" But his consistent response was, essentially, the inverse: "How could you create an environment where this is totally 100% legal?"
I'm having trouble understanding how you feel this is immoral. You have a private healthcare system, and as such your data is already in the hands of numerous private companies. From the hospital itself, to various data processing partners that implement patient record systems, billing systems, image processing for various tests, lab companies and so on.
But suddenly this company, Google, makes it immoral? It seems to me that if you care this much about private companies having your data, you should switch to a publicly owned healthcare system.
Instead of downvoting me, you could reply saying that this article has new details (like the leaked presentation). I just assumed it was a repost of yesterday's discussion.
My practice is to usually limit "dupe" to the identical link or story submitted multiple times. Major mainstream breaking news possibly excepted.
For different takes on the same story, "Previously", with a link to the earlier discussion, may be better.
For evergreen topics (e.g., Bertrand Russell's "In Praise of Idleness", submitted many times through the years, and again a day or so back), "earlier submissions" noting the years of the 2-3 top instances can point to earlier interesting discussion.
What is actually happening here? There's a lot of rhetoric about the "transfer of data" etc., but at other times this just reads like a Google Cloud infrastructure play, with some consulting on top.
Also - the deal was only just signed, i.e. the transfer hasn't happened yet?
There's a lot of hearsay in all of this reporting...
Without reading the Business Associate Agreement, it is hard to determine whether Google has 'Acceptable Use' of the healthcare data. Storing it in the cloud is one thing; allowing a Business Associate to siphon that data for other uses is another.
This is not "fake news" at all. This is the same factual event covered with a different spin. Use of the term "fake news" to describe reporting that is merely slanted in a direction you don't like--rather than presenting demonstrably false information as fact--is completely unwarranted, and is doing terrible damage to our social institutions.
Thank you for stating this so aptly and concisely. People are throwing "fake news" around for anything and everything they don't agree with. It really is making it difficult to have a reasoned discussion with people when they just dismiss something so generally.
This is a TEXTBOOK case of fake news - a newspaper owned by a Google competitor spinning a purchase of space on encrypted Cloud Storage and the Google Apps productivity suite as a big medical data mining attempt by Google, to drive a political agenda.
There are obvious concerns around data security here, but the article is very heavily distorting facts to drive outrage.
No, it really isn't. The Guardian article is factually reporting on what the "whistleblower" told them, both via the video and in a separate interview. They did not fabricate any details, or make any claims they knew to be false. They made a cursory attempt to present the other side of the story. Sensationalized, yes. Slanted, yes. But not fake.
Also the Guardian isn’t owned by a Google competitor - it’s owned by the Scott Trust, an organisation set up specifically to maintain the paper’s editorial independence.
They have viewpoints which include a distrust of very powerful global corporations, and one you may or may not agree with - but it’s nowhere near ‘fake news’.
> What is the work we’re doing with Ascension? Back in July, on our Q2 earnings call, we announced “Google Cloud’s AI and ML solutions are helping healthcare organizations like Ascension improve the healthcare experience and outcomes.
"AI and ML solutions" sounds like a lot more than just data storage.
They have complete access to the entire database of names, addresses, medical history, etc. and are playing with it to develop products to later sell to other healthcare providers. Without the knowledge or consent of the patients.
Also in that article:
> under this arrangement, Ascension’s data cannot be used for any other purpose than for providing these services we’re offering under the agreement, and patient data cannot and will not be combined with any Google consumer data.
If I had a nickel for every time a tech giant outright lied about how they handle data, I'd be able to afford SV rent.
> "AI and ML solutions" sounds like a lot more than just data storage.
Everyone in the space, from providers (well, the ones big enough to) to insurers (essentially all of which are big enough to), is looking at AI and ML solutions for clinical, fraud detection, or other purposes, whether in house, or via Business Associates, or (probably most commonly for large orgs) a combination, and much of that involves BAs with cloud firms that also provide AI/ML solutions like Amazon, Google, and Microsoft. From everything I've seen, Amazon and Microsoft are getting a lot more of that business than Google, because of their established relationships and stronger enterprise sales.
It's not just a cloud storage deal. The medical data is being used by Google for research purposes (training AI, apparently). Who knows what else Google will do with it now or in the future?
The real problem, though, is that this data was transferred without even notifying patients, let alone getting their consent.
Google has entered into similar partnerships on a much smaller scale with clients such as the Colorado Center for Personalized Medicine. But in that case all the data handed over to the search giant was encrypted, with keys being held only on the medical side.
It sounds like they are migrating some of their infrastructure to Google Cloud... and someone (and the paper) is sure trying to raise a stink about it. It's misleading in a few spots, but not quite lying; hard to tell if it's intentional or technical incompetence.
For example, all storage in Google Cloud is encrypted, and the keys are rotated every 24 hours by default. There is an audit log you can see any time anyone at Google touches your data. (Very, very few people can actually access the keys.)
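For anyone curious what "rotating keys" means mechanically, here's a minimal sketch of the idea using the Python cryptography package (an illustration of the concept only, not Google Cloud's actual implementation): old ciphertexts get re-encrypted under a new primary key.

    # Key-rotation sketch using the `cryptography` package; concept
    # illustration only, not Google Cloud's implementation.
    from cryptography.fernet import Fernet, MultiFernet

    old_key = Fernet(Fernet.generate_key())
    token = old_key.encrypt(b"patient record bytes")  # data at rest

    # Rotation: a new primary key is introduced; the old key is kept
    # only so existing ciphertexts can still be decrypted.
    new_key = Fernet(Fernet.generate_key())
    keyring = MultiFernet([new_key, old_key])  # first key is primary

    # rotate() decrypts with whichever key works, then re-encrypts
    # under the primary key, migrating old ciphertexts forward.
    token = keyring.rotate(token)
    assert new_key.decrypt(token) == b"patient record bytes"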
Personalised medicine is not about cloud services. It’s about using ML to target healthcare services, for example: predicting patient readmission risk. It’s pretty important, potentially life saving, and can also save millions.
Honestly, it's one of the areas where (in the right hands) I think collecting large amounts of data for targeting services to people is justified by the benefits.
Is Google ‘the right hands’? That’s probably worth debating.
Google's been investing in medical data analysis long before buying Fitbit. They got in trouble for DeepMind's involvement with NHS data over in the UK, which started years ago. And then Google absorbed that whole project from DeepMind into Google Cloud proper, after assuring the British government Google would never have access to the data from the project.
tl;dr: Unrelated avenue in a field they've been interested in for a long time.
I suspect the Fitbit acquisition is more to have a product that competes with Apple's and Samsung's smartwatch offerings. At this point it seems like having a watch that integrates with your phone offerings is table stakes, and I don't think Google had that.
I'd expect it's coincidence but in the other direction; Google's Fitbit motion has led to journalists being curious what other moves Google is making in the medical space.
GDPR will only occasionally and coincidentally (if at all) be relevant to health data held by US health care providers and their business associates, whereas HIPAA will always be relevant.
That has no effect on the central theme of the story (which is health care firms partnering with Google as a Business Associate, and thereby sharing patient data).
Anyone can file a health information privacy or security complaint. Your complaint must:

* Be filed in writing by mail, fax, e-mail, or via the OCR Complaint Portal
* Name the covered entity or business associate involved, and describe the acts or omissions you believe violated the requirements of the Privacy, Security, or Breach Notification Rules
* Be filed within 180 days of when you knew that the act or omission complained of occurred. OCR may extend the 180-day period if you can show "good cause"
Any Google employees or friends of Google employees here with insight as to how staff are receiving this news? My guess is that, as with all other egregious abuses of power, the employees will stage a "protest" to feel good about themselves, then keep working there.
I don't see them staging a protest. The issue is too mainstream to earn them any Internet points. However, this is the kind of issue that matters to many people and could affect them adversely. Still, I think it's up to Congress and the people to push back on abuses like this.
I wonder what kind of biometric and other monitoring of employees by employers could happen under employment contracts, or in the name of security or `Health and Safety`.
After all, many companies trial new ideas and technology in-house.
So it would be insightful to know what companies like Google do in-house.
Google's employees appear from their past behavior to protest when the issue at hand is one about which the far left cares. [0][1][2][3][4] This issue is rather bi-partisan, so I don't expect any significant protest.
Unlike at other companies, Google employees at least protest. In many cases they win. No company is a saint. It's a bit rich asking everyone to leave their jobs; 99% of HN will work for the most money. Google still tries its best, unlike Facebook or Oracle. None of the good things it does ever get attention.
I don't think they see this as an egregious abuse of power. Googlers trust Google to do a pretty decent job of securing private information almost all of the time; this isn't an area of moral concern for them.
(i.e., the question in their minds is "Is the data safer in the source repositories?" -- and it probably isn't.)
The problem is that Googlers (like far too many tech companies) view data as being secure if outsiders can't get access to it. They don't count access by themselves as a security issue, even though it objectively is.
Googler here. I don't speak for Google and obviously shouldn't and won't divulge internals, but this just makes me cringe so hard: unauthorized or illegitimate access by staff is OBVIOUSLY treated as a security issue. I'm kind of shocked that folks would think otherwise.
I know several smart people at Google, and I don't want to be critical of individuals. BUT it sometimes seems like Google-the-company is of the mentality that they're smart, and with enough data and some ML chops they can solve anything. Frankly, I'm not as confident -- but I don't find that scary.
The casual data-grabbing that they're doing is the scariest part to me, since I would hope that my data would never be as readily accessible to them/others as I now suspect it is.
Prescriptions aren't required in many countries of the world. Protecting people from themselves is not a good reason. If it were lots of dangerous activities would be illegal.
You can risk your life to make it more fun, but not more healthy?
Prescriptions in the USA have only really been a thing for the century since the Harrison Act, which brought drug smuggling with it.
If entering into a BAA under HIPAA for work involving PHI is “harvest”, and you're worried that this reaches “millions” for Google, you probably don't want to think about the deals public and private firms in the healthcare and health insurance/payments space have with Amazon and Microsoft.
From the news article (I don't have time to review the source leak independently), there doesn't seem to be anything really concerning here. The closest thing to an indication of wrongdoing is that someone raised an issue in an internal project meeting about the risk of improper employee use of data, and the need for training around that, and has not received a formal, specific response on that issue from corporate leadership. Having spent a long time in HIPAA-related work: an issue like that being raised on a new project, and it then becoming merely one of many inputs into a policy-generating process (one that makes general adjustments weighing a wide range of concerns, legal parameters, and other issues) rather than receiving a specific direct response, seems... pretty typical.

And HIPAA does not require notification or opt-in (or even an opt-out opportunity) for data sharing between a covered entity and a Business Associate, since BAs (while independently subject to HIPAA privacy and security rules under HITECH) are basically considered institutional agents of the covered entity, to which the covered entity's authority to have and use the data is delegated under the Business Associate agreement.
I don't know whether there is really nothing of concern in the dump, or whether the journalists covering it don't have enough understanding of the domain to distinguish things that would indicate a problem. But what it looks like from the news article is a "whistleblower" making accusations and dumping docs, with nothing substantial and concrete in the docs supporting the thrust of the "whistleblower's" accusations of wrongdoing.
Not defending the article (I've not read it), but I suppose I probably would be horrified by the status quo. I really wish we had a more consent-based data culture. I suppose I don't know how that would be designed. But lots of real things are horrifying, and something being normal doesn't necessarily make it fine.
My view is that consent is oversold. If I "consent" to a boilerplate agreement handed me moments before an action is taken, have I really?
Boundaries and distributions should be clearly and specifically spelled out, with any non-essential distribution requiring specific assent, defaulting to none. If there are consequences to sharing, those can be made known. We've been drawn into a circumstance that has long been untenable.
> If I "consent" to a boilerplate agreement handed me moments before an action is taken, have I really?
Obviously you have not.
I wouldn't say this is consent being "oversold", but rather yet another way that the concept of consent is being actively undermined into a legal fiction.
This is also why the GDPR has a provision for revoking permission to use your data at any time - to counter these rights being otherwise nullified through contracts of adhesion.
> If I "consent" to a boilerplate agreement handed me moments before an action is taken, have I really?
A not uncommon practice with HIPAA “disclosures” is to sign an electronic device that records the signature (and provides no evidence that the document your signature is associated with is anything like the one you were given) prior to being provided with documents. So, yeah, the practices around consent with PHI suck pretty hard.
If you inform me, then hand me a pre-printed document with a huge set of conditions on it -- or worse, as another response notes, simply collect my signature -- it's not that I'm not informed. It's that I'm not empowered to act on the basis of that information in any meaningful way.
It's a sham.
I'm a fan of the power of etymologies to reveal if not necessarily the present meanings of words, the paths by which they've arrived to the present. In the case of consent:
c. 1300, "agree, give assent; yield when one has the right, power, or will to oppose," from Old French consentir "agree; comply" (12c.) and directly from Latin consentire "agree, accord," literally "feel together," from assimilated form of com "with, together" (see con-) + sentire "to feel" (see sense (n.)).
However, their little snafu with SureScripts and Remy Health just got them banned from accessing healthcare history, and there are pending FBI and FTC investigations regarding their mismanagement of healthcare data. Worst case, the digital pharmacy Amazon just bought will be barred from sending or receiving digital prescriptions and its HIPAA accreditation will be voided for three years (with a fine).
Seems like a great way to waste a couple hundred million dollars.
I fear all of this will be used as part of a prediction program to find the best employees based on performance metrics. Imagine if, before you even gave an applicant a callback, you could see whether they've ever had a bout of depression, insomnia, or anything else that might affect their job performance or the performance of their team. That would be a standard part of any background check if that information were available.
The nice thing about the modern world is that nobody has to make that decision or even be aware of it. That sort of discrimination can filter through ML-derived correlations two, three, or more levels removed, and every human being involved can be as innocent as can be.
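A toy sketch of that (synthetic data, numpy only): the model never sees the protected attribute, yet its scores track it anyway through a correlated proxy.

    # Proxy discrimination in miniature: synthetic data, no real system.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    protected = rng.integers(0, 2, n)          # e.g. a health condition
    proxy = protected + rng.normal(0, 0.5, n)  # correlated feature the
                                               # model IS allowed to see
    # Historical outcomes are biased against the protected group.
    label = (rng.normal(0, 1, n) - 0.8 * protected > 0).astype(float)

    # Fit a least-squares "screening model" on the proxy alone.
    X = np.column_stack([np.ones(n), proxy])
    w, *_ = np.linalg.lstsq(X, label, rcond=None)
    scores = X @ w

    # Scores differ systematically by group, even though the protected
    # attribute was never an input to the model.
    print("mean score, group 0:", scores[protected == 0].mean())
    print("mean score, group 1:", scores[protected == 1].mean())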
I know I'm very much in the minority here, but just like we should have more open borders and more open software, we should encourage more openness around medical data.
Google and other large companies have made some significant AI advances in the last decade & I think it's in all of our interests to see if these advances can lead to improvements in health care.
Yes, it's scary how much data these companies have collected about us, but there are other things in the world which are even more scary, like heart attacks and cancer. I think we need to stop having an automatic knee-jerk reaction every time a company gets access to our data, especially if proper legal protocols with privacy protections are being followed, as it appears to be in this case.
Of course, I would love to live in a world with 100% perfect personal privacy AND perfect treatments for all diseases, but we don't live in that world: In our world, as we move forward, there are going to be difficult tradeoffs between health innovation and patient data access: We should try to navigate these tradeoffs in a level-headed way, without just insisting on greater walls around all data in every instance.
The last thing we should do is have radically open medical data. Some busybody parent could go out and search for all the kids in her kids' school who might have HIV or something. Or imagine all the crazies out there searching for a list of women in their town who have had abortions.
The only thing you do with open medical data is ratchet up the "crazy" in society. In an ideal world where everyone is rational, it's fine. But that world doesn't exist.
They probably get all the data in fragments from different EHR systems, pharmacy records, diagnostic monitors, etc., and need ways of knowing which records belong to the same patient.
Sure, we may want properly designed legislation to come up with standards across databases that make using such sensitive data for combining records less necessary, but we had better make sure it's well designed, or we could end up slowing down medical innovation.
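For a sense of what that matching looks like, here's a minimal sketch (made-up data) of deterministic linkage on normalized name plus date of birth; real master-patient-index systems do probabilistic matching over many more fields.

    # Toy record linkage across fragmented systems; made-up data.
    import pandas as pd

    ehr = pd.DataFrame({
        "name": ["SMITH, ALICE", "Jones, Bob"],
        "dob": ["1984-07-31", "1990-02-11"],
        "diagnosis": ["asthma", "diabetes"],
    })
    pharmacy = pd.DataFrame({
        "name": ["alice smith", "bob jones"],
        "dob": ["1984-07-31", "1990-02-11"],
        "rx": ["albuterol", "metformin"],
    })

    def norm(name: str) -> str:
        # Normalize "Last, First" vs "first last" and casing.
        parts = [p.strip().lower() for p in name.split(",")]
        return " ".join(reversed(parts)) if len(parts) == 2 else parts[0]

    for frame in (ehr, pharmacy):
        frame["key"] = frame["name"].map(norm) + "|" + frame["dob"]

    # One linked view of each patient across both systems.
    linked = ehr.merge(pharmacy, on="key", suffixes=("_ehr", "_rx"))
    print(linked[["name_ehr", "diagnosis", "rx"]])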
On the other hand, it's been shown that even without information defined as PII, it's very easy to identify individuals from a combination of other data points. When you're dredging the ocean for correlations, this will just automatically catch everybody anyway, whether intentionally or not.
Well, I think most people in this comments section would say "blanket consent when signing hospital forms is unacceptable and patients should get notified and paid every time someone uses an element of their data".
If that's the standard society wants to adopt, so be it. But it might come at a dramatic cost in slowing down medical innovation. Personally, that seems like a bad tradeoff to me, but who can say for sure?
> patients should get notified and paid every time someone uses an element of their data
I don't know about most people here, but I wouldn't say that. Notification or being paid is beside the point. The point is that informed consent should be obtained.
You're right that blanket consent forms don't count as "informed consent" for this sort of thing because they don't actually inform you.
> Personally, that seems like a bad tradeoff to me
Which is fair -- you'd have no problem giving such consent. I, however, would not be willing to give such consent.
I put a lot of effort into reducing the amount of data that Google (and Facebook, Amazon, etc.) can get about me. If/when my medical provider just hands my data over to them, that's a very serious betrayal of trust and undermines my ability to protect myself from those companies.
I find that completely unacceptable, particularly because medical care is not exactly an optional thing.
What you're proposing is that everyone should be subjected to spying because it may lead to some theoretical larger good. That argument also conveniently ignores the theoretical social and personal costs of that spying.
What I'm saying is that everyone has rights that include the right to not be spied on, and a theoretical larger good is not nearly a solid enough reason to strip me of rights.
Getting consent is a way to avoid this deadlock and make everyone happy, as well as ensure that nobody gets trampled.
Now, while I would never give Google consent for data collection from me, that's because I have zero trust in Google. However, if we're just talking about consent in the general sense, then it's certainly possible to make an argument that would get me to agree to share data. In fact, I do so with a few entities already.
Googler here, my opinions are my own, standard disclaimer.
I'm not going to comment on this specific case but I do have almost a decade of previous non-Google experience working in clinical documentation technology.
As others have said, entering into a BAA with a covered entity, as HIPAA defines it, shouldn't be seen as a controversial action.
There are numerous problems in healthcare that are too complex for individual health systems to tackle. For example:
* Population Health: are there emergent changes in the regional population? What do you do about it?
* Continuity of Care: The number of individual providers involved in a particular person's care continues to grow. How can you effectively inform the entire team--across health systems--what's most important for an individual now? How do you make sure nobody drops the ball?
To give you an idea of the scale, I have two examples. The first is MD Anderson Cancer Center in Houston. They used to have 200+ engineers working on their sophisticated home-grown EMR. It was a huge undertaking. But even with MDACC revenue, that development was unsustainable, and they moved to a 3rd party EMR vendor.
Second is the Mayo Health System. Another huge provider, with facilities not just in flagship Rochester, MN, but at several other sites. Again, the reality was that even at this scale, internal development wasn't sustainable across the board, and they wound up with a $100M+ adoption of a 3rd party vendor.
And this is mostly straightforward CRUD-level workflows. The technology is straightforward, but the workflow expertise is not.
Now, try and solve some bigger problems. You're going to need help to do this at scale, and trying to solve it necessarily means giving access--not control of!--to medical records to drive R&D. It's happening right now, and Google is not the only player doing this at scale. They're not even the largest one.
Lastly, HIPAA controls have real teeth, in comparison to the general consumer space (at least in the US).
> To give you an idea of the scale, I have two examples. The first is MD Anderson Cancer Center in Houston. They used to have 200+ engineers working on their sophisticated home-grown EMR. It was a huge undertaking. But even with MDACC revenue, that development was unsustainable, and they moved to a 3rd party EMR vendor.
I'm not certain what aspect you are trying to highlight with this example, but readers should know that the MD Anderson implementation of the Epic EMR system led to a 77% drop in income and layoffs approaching 1,000 people (2016-2017 time frame)[1][2]. I'm not up to date enough to know whether they have ever recovered.
The point is that even MDACC figured out it was too expensive to continue their own EMR.
You're correct in that literal books could be written about EMR adoption gone wrong. That doesn't change the fact that even super huge mega-health systems can't afford to do it all themselves.
> The point is that even MDACC figured out it was too expensive to continue their own EMR.
I think then that the example does not prove your point. It would have been vastly better, financially and medically, for MDACC to have continued with their in-house EMR.
> As others have said, entering into a BAA with a covered entity, as HIPAA defines it, shouldn't be seen as a controversial action.
You place more faith in HIPAA than I do. HIPAA does not protect privacy to the degree that most people assume.
> There are numerous problems in healthcare that are too complex for individual health systems to tackle.
True, but that doesn't mean that Google is the right entity to do this. In my opinion, they're the wrong entity, because Google is not exactly trustworthy.
> Google is not the only player doing this at scale. They're not even the largest one.
But they're Google. What this sort of thing means for me is that I need to start asking medical providers if they're participating in this sort of thing with Google (or other companies that I consider bad actors), so I know which ones to avoid using.
> You place more faith in HIPAA than I do. HIPAA does not protect privacy to the degree that most people assume.
That's correct. People would be surprised at the number of HIPAA violations that happen everyday. It is, however, among the strongest and most well-enforced data privacy laws (in the US).
> True, but that doesn't mean that Google is the right entity to do this. In my opinion, they're the wrong entity, because Google is not exactly trustworthy.
You're certainly right to be concerned. I don't share your opinion about Google per se, but this is important data for our society. I'd argue that OpSec at a large provider--let's say Microsoft--is more sophisticated than at a start-up. So how does an organization decide who is the "right" entity to deal with?
> But they're Google. What this sort of thing means for me is that I need to start asking medical providers if they're participating in this sort of thing with Google (or other companies that I consider bad actors), so I know which ones to avoid using.
If this is important to you, I would strongly encourage it. Our health industry is better when consumers are better informed, and can make informed decisions. Personally, it's more important to me to be able to actually know how much a procedure is going to cost rather than who owns the AI stack behind their clinical decision support system.
> So how does an organization decide who is the "right" entity to deal with?
Practically speaking, that's up to the company -- but the company needs to make sure that their clients are informed and are able to withdraw their data if they're concerned.
The larger part of what's wrong with this particular deal is that it was done in secret. Patients and doctors were not informed until after data had begun to be transferred. They should have been, and patients should have been given the option to remove their data from the dataset and find another health care provider if they wish.
> Personally, it's more important to me to be able to actually know how much a procedure is going to cost rather than who owns the AI stack behind their clinical decision support system.
I agree that knowing costs is very important, but we're miles away from that being a thing that is possible. In the meantime, I think it's important not to backslide in other areas such as this one.
I'd also say that my concern isn't really about who owns the stack, or the cloud. That sort of battle was lost years ago. My concern is the ability of Google to access that information.
You're certainly entitled to that viewpoint. The point is that:
"Google's harvest of medical data includes names and full details of millions"
is hyperbole, when
"Google partners with health system on clinical documentation research" is more accurate.
I'm a proponent of more consumer control over their data, along the lines of GDPR. You could, theoretically, request that your covered entities give you a copy and then delete all of your records from their systems at any time.
Which, for consumer data, Google already gives you the option to do (e.g., takeout.google.com).
https://www.hhs.gov/hipaa/for-individuals/guidance-materials...