AI Model Pillar-0 Raises Diagnostic Bar in Medical Imaging


BERKELEY, Calif. — Researchers at the University of California, Berkeley and the University of California, San Francisco have unveiled Pillar-0, an open-source artificial intelligence model designed to analyze CT and MRI scans with significantly higher diagnostic accuracy than existing public AI tools. Built to interpret full 3D imaging volumes and identify hundreds of conditions from a single exam, Pillar-0 is positioned as a next-generation backbone for AI-enhanced radiology.

Pillar-0 addresses growing capacity pressures as more than 500 million CT and MRI scans are performed each year. The model achieved an average AUC of 0.87 across more than 350 findings, outperforming publicly available systems such as Google’s MedGemma (0.76), Microsoft’s MI2 (0.75), and Alibaba’s Lingshu (0.70).

“Pillar-0 marks a major milestone in our mission to push the frontier of AI for radiology,” said Adam Yala, Assistant Professor of Computational Precision Health at UC Berkeley and UCSF and senior author of the research. “Pillar-0 outperforms leading models from Google, Microsoft and Alibaba by over 10 percent across 366 tasks and four diverse modalities; Pillar-0 also runs an order of magnitude faster, finetunes with minimal effort, and drives large downstream performance gains.”

Validated on chest CT, abdomen CT, brain CT and breast MRI exams from UCSF, Pillar-0 demonstrated broad performance gains across clinical tasks. As a flexible general-purpose platform, the model can be adapted to new challenges. In external testing at Massachusetts General Hospital, fine-tuning Pillar-0 improved upon the state-of-the-art lung cancer risk-prediction tool Sybil-1 by 7 percent. For brain CT hemorrhage detection, Pillar-0 surpassed all baseline models while operating with only a quarter of the usual training data.

“Leading foundation models for radiology have relied on processing 2D slices independently, because they are too inefficient to scale to the full imaging volumes,” said Kumar Krishna Agrawal, a PhD student at UC Berkeley and first author of the research. “To enable Pillar-0 to effectively process 3D volumes, we implemented innovations across data, pretraining and neural network architectures. Our novel Atlas neural network architecture is over 150 times faster than traditional vision transformers at processing an abdomen CT, allowing us to train models at a fraction of the cost.”

The team is also introducing RaTE, a new evaluation framework developed to reflect real clinical workflows. “Existing benchmarks, like VQA-Rad, have relied on artificial questions posed on 2D slices that are poor measures of model utility,” said Dr. Maggie Chung, Assistant Professor in Radiology and Biomedical Imaging at UCSF and senior author of the research. “To address this gap, we assembled a large collection of diagnostic questions and findings that radiologists routinely evaluate in clinical practice. We also developed tools that enable any hospital to independently test or fine-tune Pillar-0 on their own data.”

All Pillar-0 code, trained models, evaluation tools and data pipelines are being released publicly. The team plans to broaden the model’s capabilities to include additional imaging modalities and full grounded report generation.

“Transparency is essential to advancing the science of AI in health,” Yala said. “Open-sourcing enables the global research community to independently validate our tools and build on our work. We’re excited to support folks building on the Pillar series.”
