There is tremendous value in the data radiologists produce as part of their reading workflow, where they interpret and annotate imaging studies. The resulting data is only useful, however, if it is mineable and its semantic meaning is computer accessible, according to Visage. These annotations can flow into a model training infrastructure to build new artificial intelligence (AI) models, and the algorithms' output can then be integrated into the radiologist's diagnostic viewer. That's precisely what Visage is working on.
Visage is collaborating with the American College of Radiology (ACR) Data Science Institute to integrate Visage 7 semantic annotations with the ACR AI-LAB, a tool that helps radiologists learn the basics of AI and take part in shaping it.
Visage straddles both sides of AI -- it creates semantic annotations directly in Visage 7, exchanges those annotations with third-party applications, and then consumes the output of AI models, integrating their results directly into Visage 7.
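The round trip described above -- annotation export, model training input, and AI output returned to the viewer -- can be sketched in a minimal form. All function names and data structures below are invented for illustration; they are not Visage 7 or ACR AI-LAB APIs.

```python
# Hypothetical sketch of the annotation-to-AI round trip. Structures are
# invented for illustration and do not reflect any actual Visage 7 format.

def export_annotation(study_id, label, region):
    """Package a radiologist's annotation as a machine-readable record."""
    return {"study": study_id, "label": label, "region": region}

def to_training_example(annotation):
    """Convert an exported annotation into a (features, target) pair
    that a model training pipeline could consume."""
    return annotation["region"], annotation["label"]

def integrate_result(annotation, model_output):
    """Attach a model's output to the study record so a diagnostic
    viewer could display it alongside the original annotation."""
    return {**annotation, "ai_finding": model_output}

# Example flow: annotate, derive a training example, merge an AI result.
ann = export_annotation("1.2.840.999", "nodule", {"x": 120, "y": 88, "r": 14})
features, target = to_training_example(ann)
merged = integrate_result(ann, {"label": "nodule", "confidence": 0.91})
print(merged["ai_finding"]["label"])  # nodule
```

The key point the sketch illustrates is that the same semantic record serves two consumers: the training pipeline reads it as labeled data, and the viewer reads it back enriched with the model's finding.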
Nonprofit organization Partners HealthCare -- founded by Massachusetts General Hospital and Brigham and Women's Hospital -- is piloting Visage 7 through its Center for Clinical Data Science.