Mike Magee MD

AI Meets Medicine. Session II: The Impact of AI on Clinical Health Care


In the first session of “AI Meets Medicine,” we explored the origins of human language and communication. Along the way, we focused on the last century of discovery and on the growing field of human-machine communication. What we uncovered were exponential gains in machine learning, computerized neural networks, and generative pre-trained transformers, with the potential for good and evil in our immediate future.

In this session, we concentrate on the current impact of AI on clinical health care – on its institutions, health professionals, and the patients they serve. At the end of Session I, we addressed the tragic case of Sewell Setzer III, who engaged with a virtual companion bot through the app Character.AI and died by suicide. He represented great promise, cut short, his lawyer mother claims, by the negligence of the app’s creators operating in a fully deregulated AI environment.

AI enthusiasts would quickly counter that giving tech entrepreneurs a green light to create, while carrying some risk, carries with it great promise for individuals like 16-year-old Johnny Lubin, a teen born with sickle cell anemia who, until recently, suffered numerous “sickle cell crises” causing debilitating pain and repeated hospital admissions. The fact that he has not had a single crisis in the past two years, AI advocates attest, is a function of AI-assisted scientific progress.

As the Mayo Clinic describes the disease, “Sickle cell anemia is one of a group of inherited disorders known as sickle cell disease. It affects the shape of red blood cells, which carry oxygen to all parts of the body. Red blood cells are usually round and flexible, so they move easily through blood vessels. In sickle cell anemia, some red blood cells are shaped like sickles or crescent moons. These sickle cells also become rigid and sticky, which can slow or block blood flow. The current approach to treatment is to relieve pain and help prevent complications of the disease. However, newer treatments may cure people of the disease.”

The problem is tied to a single protein, hemoglobin. AI was employed to figure out exactly what causes the protein to degrade. That mystery goes back to the DNA that forms the backbone of our chromosomes and their coding instructions. If you put the DNA double helix “under a microscope,” you uncover a series of chemical compounds called nucleotides. Each of the four different nucleotides is constructed from one of four nitrogenous bases, plus a sugar and a phosphate group. The four bases are cytosine, guanine, thymine, and adenine.

The coupling capacity of the two strands of DNA results from a horizontal linking of cytosine to guanine, and of thymine to adenine. By “reaching across,” the double helix is established. The lengthening of each strand, and the creation of the backbone that supports these base-pair cross-connects, relies on the phosphate links to the sugar molecules.

Embedded in this arrangement is a “secret code” for the creation of all human proteins, essential molecules built out of a collection of 20 different amino acids. What scientists discovered in the last century was that those four bases, read in groups of three, created 64 unique groupings they called “codons.” Of the 64 possibilities, 61 direct the addition of one of the 20 amino acids to a growing protein chain (most amino acids are specified by more than one codon). The remaining three are “stop codons,” which end a protein chain. To give just one example, a DNA codon of “adenine-thymine-guanine” directs the addition of the amino acid methionine to a protein chain.
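To make the codon lookup concrete, here is a minimal sketch in Python of the scheme just described, using a handful of entries from the standard genetic code (the full table has all 64 codons; `CODON_TABLE` and `translate` are illustrative names, not from any library):

```python
# A small excerpt of the standard genetic code: DNA codons
# (read from the coding strand) mapped to amino acids.
CODON_TABLE = {
    "ATG": "Methionine",        # also the usual "start" codon
    "GAG": "Glutamic acid",
    "GTG": "Valine",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",   # the three stop codons
}

def translate(dna: str) -> list[str]:
    """Read a coding-strand DNA sequence three bases at a time,
    adding one amino acid per codon and halting at a stop codon."""
    chain = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")   # "?" = codon not in this excerpt
        if amino == "STOP":
            break
        chain.append(amino)
    return chain

print(translate("ATGGAGTAA"))   # -> ['Methionine', 'Glutamic acid']
```

Real translation happens via messenger RNA rather than directly from DNA, but the lookup logic – three letters in, one amino acid out, until a stop codon – is exactly this simple.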

Now, to make our hemoglobin mystery just a bit more complicated, the hemoglobin molecule is made of four protein chains, which together contain 574 amino acids. But these four chains are not simply laid out in parallel with military precision. No, their chemical structure folds spontaneously, creating a 3-dimensional structure that affects their biological functions.

The complex work of discovering new drugs first required that scientists identify the relevant proteins and understand how they functioned; then define, by laborious experimentation, the chemical and physical structure of each protein; and finally design a new medicine to alter or augment its function. At least, that was the process before AI.

In 2018, Google’s AI subsidiary, DeepMind, announced that its AI engine, trained on large databases of protein sequences and known structures, had taught itself how to predict the physical folding structure of individual human proteins, including hemoglobin. The refined product, considered a breakthrough in biology, was titled “AlphaFold 2.”

Not to be outdone, its Microsoft-backed competitor, OpenAI, announced a few years later that its GPT-3 model could now “speak protein.” Using this ability, researchers were able to say with confidence that the collective human genome, across all individuals, harbors 71 million potential codon mistakes. As for you, the average human: you carry roughly 9,000 codon mistakes in your personal genome, and thankfully, most of these prove harmless.

But in the case of sickle cell, that is not so. And amazingly, GPT-3 confirmed that this devastating condition is the result of a single codon mutation or mistake – the substitution of GTG for GAG, altering just 1 of hemoglobin’s 574 amino acids. Investigators and clinicians, with several years of experience under their belts using a gene editing technology called CRISPR (“Clustered Regularly Interspaced Short Palindromic Repeats”), were quick to realize that a cure for sickle cell might be on the horizon. On December 9, 2023, the FDA approved the first CRISPR gene editing treatment for sickle cell disease. Five months later, the first patient, Johnny Lubin of Trumbull, CT, received the complex treatment. His life, and that of his parents, has been unalterably changed.
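The single-letter nature of that mutation can be sketched in a few lines. The codons and amino acids below are the ones named above (GAG codes for glutamic acid; GTG codes for valine); the variable names are purely illustrative:

```python
# The sickle cell mutation: a single base change (A -> T) in one codon
# of the beta-globin gene swaps one amino acid for another.
NORMAL_CODON = "GAG"   # codes for glutamic acid
SICKLE_CODON = "GTG"   # codes for valine

AMINO = {"GAG": "Glutamic acid", "GTG": "Valine"}

# Count how many of the three bases differ between the two codons.
diffs = sum(a != b for a, b in zip(NORMAL_CODON, SICKLE_CODON))

print(f"{diffs} base changed: {AMINO[NORMAL_CODON]} -> {AMINO[SICKLE_CODON]}")
# prints: 1 base changed: Glutamic acid -> Valine
```

One letter out of billions in the genome, and one amino acid out of hemoglobin’s 574, is enough to deform the entire red blood cell – which is why a single, targeted CRISPR edit can address the disease.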

As this case illustrates, AI holds great promise in the area of medical discovery, in part because we humans are complex sources of data – so complicated that many of life’s mysteries have remained beyond the human mind to solve. But tech entrepreneurs are eager to enter the human arena with a proposition, a trade-off if you will: your data for human progress.

Atul Butte, MD, PhD, Chief Data Scientist at UCSF, is clearly on the same wavelength. He is one of tens of thousands of health professionals engaged in data-dependent medical research. His relatively small hospital system is still a treasure trove of data, including over 9 million patients, 10 hospitals, 500 million patient visits, 40,000 DNA genomes, 1.5 billion prescriptions, and 100 million outpatient visits. He is excited about the AI future, and says: “Why is AI so exciting right now? Because of the notes. As you know, so much clinical care is documented in text notes. Now we’ve got these magical tools like GPT and large language models that understand notes. So that’s why it’s so exciting right now. It’s because we can get to that last mile of understanding patients digitally that we never could unless we hired people to read these notes. So that’s why it’s super exciting right now.”

Atul is looking for answers, answers derived from AI driven data analysis. What could he do? He answers:

“I could do research without setting up an experiment.”

“I could scale that privilege to everyone else in the world.”

“I could measure health impact while addressing fairness, ethics, and equity.”

His message in three words: “You are data!” If this feels somewhat threatening, it is because it conjures up the main plot line of Frank Oz’s classic, Little Shop of Horrors, in which Rick Moranis is literally bled dry by “Audrey II,” an out-of-control Venus flytrap with an insatiable appetite for human blood. As we’ve seen, with the help of parallel programming on former gamer chips like Nvidia’s Tegra K1, that insatiable appetite for data to fuel generative pre-trained transformers can actually lead to knowledge and discovery.

Data, in the form of the patient chart, has been a feature of America’s complex health system since the 1960s. But it took a business-minded physician entrepreneur from Hennepin County, Minnesota, to realize it was a diamond in the rough. The founder of Charter Med Inc., a small physician practice start-up launched in 1974, declared with enthusiasm that in the near future “Data will be king!”

Electronic data would soon link patient “episodes” to insurance payment plans, pharmaceutical and medical device products, and closed networks of physicians and other “health providers.” His name was Richard Burke, and three years after its founding, Charter Med changed its name to United Health Group. Forty-five years later, when Dr. Burke retired as head of UnitedHealthcare, the company was #10 on the Fortune 500 with a market cap of $477.4 billion.

Not that it was easy. It was a remarkably rocky road, especially through the 1980s and 1990s. When he began empire building, Burke was dependent on massive, expensive mainframe computers, finicky electronics, limited storage capacity, and resistant physicians. But by 1992, the medical establishment and the federal government had decided that Burke was right, and data was the future.

Difficulty in translating new technology into human productivity is neither unexpected nor new. In fact, academicians have long argued over how valuable (or not) a technological advance can be once you factor in all the secondary effects. But, as Nobel economist Paul Krugman famously wrote in 1994, “Productivity isn’t everything. But in the long run, it’s almost everything.”

Not all agree. Erik Brynjolfsson, PhD, of MIT, coined the term “Productivity Paradox” in 1993. As he wrote then, “The relationship between information technology and productivity is widely discussed but little understood.” Three decades later, he continues to emphasize that slow adoption of new IT advances by health professionals can limit productivity gains. Most recently, however, he teamed up with physician IT expert Robert Wachter, MD, author of the popular book “The Digital Doctor,” in a 2023 JAMA article titled “Will Generative Artificial Intelligence Deliver on Its Promise in Health Care?” They answer with a qualified “Yes.” Why? Five reasons: 1) ease of use; 2) dependence on software rather than hardware; 3) prior vendors (of Electronic Health Record products) already embedded in the supply chain; 4) GPT technologies that are self-correcting and self-updating; and 5) applications first targeted at automating routine, lower-skilled work.

But back in 1992, health care first came to grips with new information technology when the Institute of Medicine recommended a conversion, over time, from a paper-based to an electronic data system. While the sparks of that dream flickered, fanned by “true believers” who gathered under the banner of the International Medical Informatics Association (IMIA), hospital administrators dampened the flames, citing conversion costs, unruly physicians, demands for customization, liability, and fears of generalized workplace disruption.

True believers and tinkerers chipped away on a local level. The personal computer, increasing Internet speed, local area networks, and niceties like an electronic “mouse” to negotiate new drop-down menus, alert buttons, pop-up lists, and scrolling from one list to another, slowly began to convert physicians and nurses who were not “fixed” in their opposition.

On the administrative side, obvious advantages in claims processing and document capture fueled investment behind closed doors. If you could eliminate filing and retrieval of charts, photocopying, and delays in care, there had to be savings to fuel future investments.

“What if physicians had a workstation?” movement leaders asked in 1992. While many resisted, most physicians couldn’t deny that the data load (results, orders, consults, daily notes, vital signs, article searches) was only going to increase. Shouldn’t we at least begin to explore better ways of managing data flow? Might it even be possible, in the future, to access a patient’s hospital record from your own private office and post an order without getting a busy floor nurse on the phone?

By the early 1990s, individual specialty locations in the hospital didn’t wait for general consensus. Administrative computing began to give ground to clinical experimentation using off-the-shelf and hybrid systems in infection control, radiology, pathology, pharmacy, and the laboratory. The movement then began to consider more dynamic nursing unit systems.

By now, hospitals’ legal teams were engaged. State laws required that physicians and nurses be held accountable for the accuracy of their chart entries through signature authentication. Electronic signatures began to appear, and this was occurring before regulatory and accrediting agencies had OK’d the practice.

Meanwhile, medical and public health researchers realized that electronic access to medical records could be extremely useful, but only if data entry was accurate and timely. Already, misinformation was becoming a problem.

Whether for research or clinical decision making, partial accuracy was clearly not good enough. Add to this a sudden explosion of clinical decision support tools, which began to appear initially focused on prescribing safety, featuring flags for drug-drug interactions and drug allergies. Interpretation of lab specimens and flags for abnormal lab results quickly followed.

As local experiments expanded, the need for standardization became obvious to commercial suppliers of Electronic Health Records (EHRs). In 1992, suppliers and purchasers embraced Health Level Seven (HL7) as “the most practical solution to aggregate ancillary systems like laboratory, microbiology, electrocardiogram, echocardiography, and other results.” At the same time, the National Library of Medicine engaged in the development of a Universal Medical Language System (UMLS).

As health care organizations struggled with the financing and implementation of EHRs, issues of data ownership, privacy, informed consent, general liability, and security began to crop up. Uneven progress also shone a light on inequities in access and coverage, as well as racially discriminatory algorithms.

In 1996, the government instituted HIPAA, the Health Insurance Portability and Accountability Act, which focused protections on your “personally identifiable information” and required health organizations to ensure its safety and privacy.

All of these programmatic challenges, as well as continued resistance by physicians jealously guarding “professional privilege,” meant that by 2004 only 13% of health care institutions had a fully functioning EHR, and roughly 10% were still wholly dependent on paper records. As laggards struggled to catch up, mental and behavioral health records were incorporated in 2008.

A year later, the federal government weighed in with the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act. It incentivized organizations with $36 billion in federal funds to invest in EHRs and document their “meaningful use.” Importantly, it also included a “stick” – failure to comply reduced an institution’s rate of Medicare reimbursement.

By 2016, EHRs were rapidly becoming ubiquitous in most communities – not only in hospitals, but also in insurance companies, pharmacies, outpatient offices, long-term care facilities, and diagnostic and treatment centers. Order sets, decision trees, direct access to online research data, barcode tracing, voice recognition, and more steadily ate away at weaknesses and justified investment in further refinements. The health consumer, in the meantime, was rapidly catching up. By 2014, “Personal Health Record” was a familiar term. A decade later, they are a common offering in most integrated health care systems.

All of which brings us back to generative AI. New multimodal AI entrants, like GPT-4 and Gemini, are now driving our collective future. They will not be starting from scratch, but are building on all the hard-fought successes above. Multimodal, large language, self-learning AI is limited by only one thing – data. And we are literally the source of that data. Access to us – each of us and all of us – is what is missing.

Health policy experts in Washington are beginning to quietly ask, “What would you, as one of 333 million U.S. citizens, expect to offer in return for universal health insurance and reliable access to high-quality basic health care services?”

Would you be willing to provide full and complete de-identified access to all of your vital signs, lab results, diagnoses, external and internal images, treatment schedules, follow-up exams, clinical notes, and genomics?  An answer of “yes” could easily trigger the creation of universal health coverage and access in America.

The Mayo Clinic is not waiting around for an answer. It recently announced a $5 billion “tech-heavy” AI transformation of its Minnesota campus. Where’s the money for the conversion coming from? Its chief investor is Google, with its new Gemini multimodal AI system. Cris Ross, the Chief Information Officer at the Mayo Clinic, says, “I think it’s really wonderful that Google will have access and be able to walk the halls with some of our clinicians, to meet with them and discuss what we can do together in the medical context.” Cooperation like that, he predicts, will generate “an assembly line of AI breakthroughs…” Along the way, the number of private and non-profit vendors and sub-contractors will be nearly unlimited, mindful that health delivery ultimately touches one-quarter of our economy.

So AI progress is here. But medical ethicists are already asking about the impact on culture and values. They wonder who exactly is leading this revolution. Is it, as David Brooks asked in a recent New York Times column, “the brain of the scientist, the drive of the capitalist, or the cautious heart of the regulatory agency?” De Kai, PhD, a leader in the field, writes, “We are on the verge of breaking all our social, cultural, and governmental norms. Our social norms were not designed to handle this level of stress.” Elon Musk added in 2018, “Mark my words, AI is far more dangerous than nukes. I am really quite close to the cutting edge in AI, and it scares the hell out of me.” Of course, he joined the forces of AI acceleration five years later by launching his own AI venture, xAI, and raising $24 billion in private funding.

Still, few deny the potential benefits of this new technology, and especially when it comes to Medicine. What could AI do for healthcare?

  1. “Parse through vast amounts of data.”
  2. “Glean critical insights.”
  3. “Build predictive models.”
  4. “Improve diagnosis and treatment of diseases.”
  5. “Optimize care delivery.”
  6. “Streamline tasks and workflows.”

So most experts have settled on full speed ahead – but with reasonable guardrails. Consider this exchange during a 2023 podcast hosted by MIT scientist Lex Fridman, whose guest was OpenAI CEO Sam Altman – who had recently testified before Congress to appeal for full federal backing in what he sees as an AI “arms race” with the rest of the world, most especially China. Fridman reflected, “You sit back, both proud, like a parent, but almost like proud and scared that this thing will be much smarter than me. Like both pride and sadness, almost like a melancholy feeling, but ultimately joy. . .” And Altman responded, “. . . and you get more nervous the more you use it, not less?” Fridman’s simple reply: “…Yes.”

And yet both would agree, literally and figuratively: “Let the chips fall where they may.” “Money will always win out,” said The Atlantic in 2023. As the 2024 STAT Pulse Check (“A Snapshot of Artificial Intelligence in Health Care”) revealed, over 89% of health care executives say “AI is shaping the strategic direction of our institution.” The greatest level of activity is currently administrative – scheduling, prior authorization, billing, coding, and service calls. But clinical activities are well represented as well, including screening results, X-rays and scans, pathology, safety alerts, clinical protocols, and robotics. Do the doctors trust what they’re doing? Most peg themselves as “cautiously moderate.”

A recent Science article described the relationship between humans and machines as “progressing from a master-servant relationship with technology to more of an equal teammate.”

The AMA conducted its own “Physician Sentiment Analysis” in late 2023, capturing the major pluses and minuses in doctors’ views. Practice efficiency and diagnostic gains headed the list of pros: 65% envisioned less paperwork and fewer phone calls, 72% felt AI could aid in accurate diagnosis, and 69% saw improvements in workflow, including screening results, X-rays and scans, pathology, safety alerts, and clinical protocols. As for the cons, the main concerns centered on privacy, liability, patient trust, and the potential to further distance physicians from their patients.

What most everyone understands by now, however, is that it will happen. It is happening. AI and Medicine now balance, as David Brooks wrote, three separate personalities: “the drive of a capitalist, the brain of a scientist, and the cautious heart of a regulatory agency.” Another commentator suggested it is time to get aboard, stating, “The relationship between humans and machines is evolving. Traditionally humans have been the coach while machines were players on the field. However, now more than ever, humans and machines are becoming akin to teammates.”

The question of whether to take risks has now shifted to risk mitigation. What are the specific risks? First, cybersecurity. Last year’s security breach at Ascension Health placed the entire corporation at risk and reinforced that health care institutions have become targets for cybercriminals. Second, personal liability. It is notable that Sewell Setzer III’s mother is a lawyer, and that the app that played a role in his suicide is potentially vulnerable. Third, the stability of constitutional democracies is in play. Consider the interface between behavioral health and civil discourse. Social media has a “heavy focus on impressionable adolescents”; “floods the zone with deepfakes to undermine trust”; “allows widespread, skillful manipulation by super-influencers with customized feeds”; and “allows authoritarian tracking and manipulation of citizens.”

The counter-balance to these threats, according to one 2024 JAMA editorial titled “Scalable Privilege – How AI Could Turn Data From the Best Medical Systems Into Better Care For All,” is the authorized sharing of our medical records. What might we actually find if we deep-mine our national patient databases? Are we prepared to deal with the truth? The results would likely include a list of negatives: excessive costs; bias and inequity by race, gender, wealth, and geography; harmful and manipulative direct-to-consumer drug advertising; and more. But positives might appear as well, including personalization and individualization of health planning; continuous medical education; real-time risk prevention; insurance fraud protection; continuous outcome improvements; and equal care under the law. We as a nation have choices to make if we wish a voice in our collective futures.

Our final session will focus on AI and Medical Research, a remarkable story that is just beginning to unfold.