Until recently, the term “symbolic” bore negative connotations in AI because it harked back to the failed attempts at symbolic reasoning that preceded the so-called “AI winter” of the 1980s. Since then, generative AI has demonstrated the remarkable capabilities of neural networks at scale.
Generative AI has relieved doctors of administrative and cognitive burdens, for example, by handling paperwork such as clinical summaries. More than 70 percent of healthcare organizations are pursuing generative AI capabilities, according to McKinsey.
Today, however, we see generative AI’s deficiencies, especially in medicine. It can behave like a parrot, mindlessly telling users whatever its statistical patterns suggest. It also “hallucinates,” inventing data and unacceptably compromising care and research.
AI-generated clinical summaries don’t necessarily fall under FDA oversight and could cause more harm than benefit, according to JAMA. These concerns are partly why large multinational technology companies have abandoned many AI-powered healthcare efforts in recent years. That said, big tech companies are still investing heavily in healthcare AI experiments because they recognize the technology’s transformational potential.
Adding symbolic understanding to AI has emerged as a leading answer to hallucinations and related problems. Large language models (LLMs) are a type of neural network architecture designed to process and generate human-like text. These models, such as generative pre-trained transformers (GPTs), work by predicting likely sequences of words based on patterns learned from vast amounts of text data. Symbolic AI then grounds the LLMs in reasoning and logic to prevent fanciful hallucinations and other misreadings.
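To make the idea of “grounding” concrete, here is a minimal sketch in Python. It assumes a hypothetical controlled vocabulary of diagnosis codes (a stand-in for a real terminology such as ICD-10 or SNOMED CT); the vocabulary contents, function name, and data shapes are all invented for illustration, not drawn from any particular product.

```python
# Illustrative sketch: a symbolic "grounding" layer that checks
# LLM-extracted diagnosis codes against a controlled vocabulary.
# KNOWN_DIAGNOSES is a hypothetical stand-in for a real terminology.

KNOWN_DIAGNOSES = {
    "C18.9": "Malignant neoplasm of colon, unspecified",
    "K50.90": "Crohn's disease, unspecified",
    "E11.9": "Type 2 diabetes mellitus without complications",
}

def ground_llm_output(candidates):
    """Keep only candidates that exist in the controlled vocabulary.

    `candidates` is a list of (code, evidence_text) pairs an LLM claims
    to have found in a clinical note. Codes absent from the vocabulary
    are flagged as possible hallucinations instead of being passed on.
    """
    accepted, rejected = [], []
    for code, evidence in candidates:
        if code in KNOWN_DIAGNOSES:
            accepted.append((code, KNOWN_DIAGNOSES[code], evidence))
        else:
            rejected.append((code, evidence))
    return accepted, rejected

# One valid code and one invented code: the symbolic check catches
# the invented one rather than letting it flow downstream.
accepted, rejected = ground_llm_output([
    ("C18.9", "biopsy consistent with colonic adenocarcinoma"),
    ("Z99.99", "no such code in the vocabulary"),
])
```

The point is the division of labor: the neural component proposes, the symbolic component disposes, and anything the logic layer cannot verify is surfaced for review rather than silently accepted.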
Now, new neuro-symbolic AI solutions are integrating LLMs with symbolic reasoning, creating more robust AI that can reason, learn, and engage in cognitive modeling to understand context and other factors as it works. Advances in neuro-symbolic AI are reflected in recent venture capital investments in companies such as Elemental Cognition, Symbolic AI, and Harmonic AI, the last co-founded by Robinhood CEO Vlad Tenev.
Consider how people understand the periodic table and its 118 elements, which constitute every compound in the known universe. All of chemistry can be abstracted to these elements, yet understanding chemistry requires more than memorizing the table. It involves recognizing patterns, predicting interactions, and applying knowledge to complex molecules. This complexity mirrors the challenge in AI: a purely symbolic approach (like memorizing element properties) or a purely neural approach (only observing reactions without understanding elements) is insufficient.
Just as chemists combine knowledge of fundamental elements with pattern recognition to understand complex chemical systems, neuro-symbolic AI aims to integrate explicit representations of basic concepts with the pattern recognition capabilities of neural networks. This combined approach in AI, like in chemistry, could potentially tackle complex reasoning tasks that neither pure neural networks nor pure symbolic systems can handle alone, though achieving this level of integration remains an active area of research.
Apply this thought to medicine.
Pharmaceutical companies want to find patients for trials. Providers want to find trials for their patients. LLM-only generative AI applications, however, are notoriously bad at reading EMRs and other clinical data. They regularly overcount patients based on passing references in EMRs while missing legitimate matches in the same patient pool.
Diagnostic companies have hit similar roadblocks when deploying AI. For example, when they ask a generative AI solution how many patients in a given population who submitted colon samples were later diagnosed with colon cancer, they invariably receive an inflated count they can’t trust. This unreliability stems from the AI’s lack of grounding in the specific patient data and its tendency to “hallucinate” plausible but incorrect information based on general patterns it has learned.
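Once the clinical notes have been abstracted into structured records, a count like the one above becomes deterministic logic rather than a statistical guess. The following sketch illustrates the idea; the field names and patient data are invented for the example and do not reflect any real schema.

```python
# Hypothetical sketch: a deterministic cohort count over structured
# patient records. Patient 3 shows why naive pattern matching
# overcounts: the note mentions colon cancer, but no sample was
# ever submitted, so the patient does not belong in the cohort.
from datetime import date

patients = [
    {"id": 1, "colon_sample_date": date(2023, 1, 5),
     "diagnoses": [("colon cancer", date(2023, 2, 1))]},
    {"id": 2, "colon_sample_date": date(2023, 3, 10),
     "diagnoses": []},  # sample submitted, never diagnosed
    {"id": 3, "colon_sample_date": None,  # diagnosis mentioned in a
     "diagnoses": [("colon cancer", date(2022, 6, 1))]},  # note only
]

def cohort_count(records):
    """Count patients whose colon sample was *followed by* a colon
    cancer diagnosis. The date comparison encodes the temporal logic
    an LLM-only system tends to get wrong."""
    count = 0
    for p in records:
        sample = p["colon_sample_date"]
        if sample is None:
            continue  # no sample: a mere mention doesn't qualify
        if any(dx == "colon cancer" and dx_date >= sample
               for dx, dx_date in p["diagnoses"]):
            count += 1
    return count

print(cohort_count(patients))  # only patient 1 qualifies
```

Because the rule is explicit, the count is reproducible and auditable: anyone can trace exactly which records matched and why, which is what trust in the number ultimately requires.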
AI in these use cases needs to do more than parrot answers. It needs neuro-symbolic understanding that can read EMRs the way a doctor does, only far faster. Neuro-symbolic AI satisfies this need: it can instantly apply the logic required to make rational judgments that are far more accurate.
Neuro-symbolic AI solutions can unlock the roughly 80 percent of the world’s clinical data that sits in unstructured form, using contextual understanding to transform unstructured EMRs and research documents into analytics-ready data. They can also abstract data 27,000 times faster than the manual methods commonly used in healthcare.
Clinical experts oversee these solutions to guarantee research-grade output, of course.
These new neuro-symbolic AI solutions are ideally suited to the precision medicine that healthcare industry leaders are seeking to foster today. Precision medicine requires intense focus on individual patients and data, a process akin to looking for needles in haystacks.
Finding patients who fit specific cohorts, those needles in the haystack, is a task tailor-made for AI.
About Karim Galil
Karim Galil, MD, is co-founder and CEO of Mendel AI. Mendel’s mission is to make medicine objective by enabling the largest index of patient journeys, leveraging AI that understands medicine like a physician. Dr. Galil’s experience as a physician showed him that medicine does not advance at the pace of technology. With Mendel, he aims to bridge this gap, facilitate clinical research at scale, and make medicine truly objective. Dr. Galil is an entrepreneur at heart; his first company, Kryptonworx, was a health tech leader in the MENA region, with customers including Fortune 500 companies.