Elon Musk Asked People to Upload Their Health Data. X Users Obliged.

Over the past few weeks, users on X have been submitting X-rays, MRIs, CT scans and other medical images to Grok, the platform’s artificial intelligence chatbot, asking for diagnoses. The reason: Elon Musk, X’s owner, suggested it.

“This is still early stage, but it is already quite accurate and will become extremely good,” Musk said in a post. The hope is that if enough users feed the A.I. their scans, it will eventually learn to interpret them accurately. Patients could get faster results without waiting for a portal message, or use Grok as a second opinion.

Some users have shared Grok’s misses, like a broken clavicle that was misidentified as a dislocated shoulder. Others praised it: “Had it check out my brain tumor, not bad at all,” one user wrote alongside a brain scan. Some doctors have even played along, curious to test whether a chatbot could confirm their own findings.

Although there’s been no similar public callout from Google’s Gemini or OpenAI’s ChatGPT, people can submit medical images to those tools, too.

The decision to share information as sensitive as your colonoscopy results with an A.I. chatbot has alarmed some medical privacy experts.

“This is very personal information, and you don’t exactly know what Grok is going to do with it,” said Bradley Malin, a professor of biomedical informatics at Vanderbilt University who has studied machine learning in health care.

The Potential Consequences of Sharing Health Information

When you share your medical information with doctors or on a patient portal, it is guarded by the Health Insurance Portability and Accountability Act, or HIPAA, the federal law that protects your personal health information from being shared without your consent. But it only applies to certain entities, like doctors’ offices, hospitals and health insurers, as well as some companies they work with.

In other words, what you post on a social media account or elsewhere isn’t bound by HIPAA. It’s like telling your lawyer that you committed a crime versus telling your dog walker; one is bound by attorney-client privilege and the other can inform the whole neighborhood.

When tech companies partner with a hospital to get data, by contrast, there are detailed agreements on how it is stored, shared and used, said Dr. Malin.

“Posting personal information to Grok is more like, ‘Wheee! Let’s throw this data out there, and hope the company is going to do what I want them to do,’” Dr. Malin said.

X did not respond to a request for comment. In its privacy policy, the company has said it will not sell user data to a third party, but it does share it with “related companies.” (Despite Musk’s invitation to share medical images, the policy also says X does not aim to collect sensitive personal information, including health data.)

Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania, noted that there may be very clear guardrails around health information uploaded to Grok that the company hasn’t described publicly. “But as an individual user, would I feel comfortable contributing health data? Absolutely not.”

It’s important to remember that bits of your online footprint get shared and sold — which books you buy, for example, or how long you spend on a website. These are all pieces of a puzzle, fleshing out a picture of you that companies can use for various purposes, such as targeted marketing.

Consider a PET scan that shows early signs of Alzheimer’s disease becoming part of your online footprint, where future employers, insurance companies or even a homeowner’s association could find it.

Laws like the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act can offer protection against discrimination based on certain health factors, but there are carve-outs for some entities, like long-term care insurance and life insurance plans. And experts noted that other forms of health-related discrimination still happen, even if they’re not legal.

The Risk of Inaccurate Results

Imperfect answers might be OK for people purely experimenting with the tool. But getting faulty health information could lead to tests or other costly care you don’t actually need, said Suchi Saria, director of the machine learning and health care lab at Johns Hopkins University.

Training an A.I. model to produce accurate results about a person’s health takes high-quality and diverse data, and deep expertise in medicine, technology, product design and more, said Dr. Saria, who is also the founder of Bayesian Health, a company that develops A.I. tools for health care settings. Anything less than that, she said, “is a bit like a hobbyist chemist mixing ingredients in the kitchen sink.”

Still, A.I. holds promise when it comes to improving patient experiences and outcomes in health care. A.I. models are already able to read mammograms and analyze patient data to find candidates for clinical trials.

Some curious people may know the privacy risks and still feel comfortable uploading their data to support that mission. Dr. Malin calls the practice “information altruism.” “If you strongly believe the information should be out there, even if you have no protections, go ahead,” he said. “But buyer beware.”

The New York Times
