Imaging: One of AI’s best healthcare applications

It is an exciting time for AI and its applications in healthcare. More specifically, imaging techniques and computer vision are transforming the way the health industry works. I want to briefly provide some insight into how ElectrifAi is helping hospitals and medical practitioners take advantage of computer vision.
What are we doing?
First, let’s start with the technology. We’ve built an image analytics engine that allows us to understand images in an entirely different way than established methods. In contrast to regular convolutional neural networks, which require tens of thousands of images as training data, our approach requires only a few dozen. Using a significantly smaller number of images has a direct impact on the time and cost spent annotating.
When given training images, we segment them manually or automatically, meaning that we isolate the anatomical features of interest and suppress everything else that is irrelevant. We then transform the resulting ‘objects’ into numerical features, describing them in a unique way. This information is then used to build a model that we can inject into our system, allowing any new, previously unseen data set to be segmented automatically.
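To make the idea concrete, here is a minimal sketch of the segment-then-describe step: isolate a region of interest and reduce it to a small numerical feature vector. This is an illustrative toy (simple thresholding on a synthetic image), not ElectrifAi’s actual segmentation method, and all names are hypothetical.

```python
import numpy as np

def segment(image, threshold):
    """Isolate the feature of interest, suppressing everything else.
    (A thresholding stand-in for the manual/automated segmentation described above.)"""
    return image > threshold

def describe(image, mask):
    """Transform a segmented 'object' into a small numerical feature vector."""
    ys, xs = np.nonzero(mask)
    return {
        "area": int(mask.sum()),                        # pixels inside the segment
        "centroid": (float(ys.mean()), float(xs.mean())),
        "mean_intensity": float(image[mask].mean()),
    }

# Synthetic 8x8 "scan" with a bright 3x3 region standing in for an anatomical feature
image = np.zeros((8, 8))
image[2:5, 2:5] = 1.0

mask = segment(image, threshold=0.5)
features = describe(image, mask)
```

A model built from many such vectors can then score segments in data sets it has never seen.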
It is important to mention that the images making up our training data are two-dimensional, but the feature vectors we extract to populate our model are applied to three-dimensional data sets. We get all the numerical evidence (the key structural properties of what our targets look like in 3D) from a handful of 2D images!
Transmitting numerical representations of the pixel content of each segment accelerates the building of ML applications, opens up a wider range of neural-network-based technologies for image analytics, and addresses issues related to patient privacy.
How are we applying this technology right now?
We are currently working with clients to apply computer vision technology in three main ways:
Supporting diagnostics
We support the diagnostic process by extracting anatomical features of interest in either a supervised or automated manner with the segmentation process I described above. Each segment can then be measured automatically and objectively. Based on that numerical measurement, a doctor can decide, for example, whether a tumor is operable, or what the best approach would be for removing or otherwise treating it. To make this initial call, a doctor needs information that is as precise as possible. Right now, most methods for obtaining it are manual and subject to one expert’s opinion, but this is something we’ve been able to address through our segmentation process.
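As a rough illustration of what “measured automatically and objectively” can mean, the sketch below derives physical measurements from a 3D binary segment. The voxel spacing, field names, and synthetic segment are all hypothetical; real measurements would come from the scanner’s metadata.

```python
import numpy as np

def measure_segment(mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Objective measurements of a 3D binary segment.
    `spacing_mm` is the physical size of one voxel (illustrative values)."""
    voxel_volume = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    coords = np.argwhere(mask)
    # Bounding-box extent along each axis, converted to millimeters
    extent_mm = (coords.max(axis=0) - coords.min(axis=0) + 1) * np.array(spacing_mm)
    return {
        "volume_mm3": float(mask.sum() * voxel_volume),
        "max_extent_mm": float(extent_mm.max()),
    }

# A synthetic 4x4x4 "tumor" segment in a 16^3 volume with 0.5 mm isotropic voxels
mask = np.zeros((16, 16, 16), dtype=bool)
mask[6:10, 6:10, 6:10] = True
m = measure_segment(mask, spacing_mm=(0.5, 0.5, 0.5))
```

Numbers like these (volume, maximum extent) are the kind of objective inputs a doctor can weigh when deciding how to treat a tumor.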
Bringing data-lakes to life
Our ability to do massive data annotation in very little time is incredibly important in the healthcare space. Hospitals often have hundreds of thousands of data sets stored in their data lakes. But because of data privacy regulations, they cannot share them with the larger scientific community for building AI-based applications. With our image analytics engine, we can identify and annotate features of interest, remove all identifying information, and ship only numbers, not even pixels, to the outside world.
Because the engine is lightweight and easily deployed in the cloud, we can perform this work in a matter of minutes, even for massive image archives.
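The “ship numbers, not pixels” idea can be sketched as follows: given a record containing pixels and identifying metadata, emit a payload holding only a pseudonymous ID and the derived feature vector. The field names are illustrative, not a real DICOM schema, and the hashing scheme is a stand-in, not a claim about ElectrifAi’s de-identification method.

```python
import hashlib

def to_shareable(record, feature_vector):
    """Replace pixels and identifiers with derived numbers only.
    Field names are illustrative, not a real DICOM schema."""
    # One-way hash so the payload cannot be trivially linked back to the patient
    pseudo_id = hashlib.sha256(record["patient_id"].encode()).hexdigest()[:12]
    return {"pseudo_id": pseudo_id, "features": feature_vector}

record = {
    "patient_id": "MRN-001",          # hypothetical identifier
    "patient_name": "Jane Doe",       # never leaves the hospital
    "pixels": [[0, 1], [1, 0]],       # never leaves the hospital
}
payload = to_shareable(record, [9.0, 3.0, 3.0, 1.0])
```

Only `payload` would cross the hospital boundary; the pixels and the patient’s identity stay behind.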
Improving the patient experience
Many of the physicians we have worked with have told us that our imaging technology has helped them better communicate with their patients. Diagnostic reports are often too technical for the typical person to understand, and leave them unsure about the status of their health. What many people do understand, however, are pictures. With our technology, patients get a detailed visual aid that conveys, without technical jargon, what a problem really looks like.
A doctor can use that 3D image to assist them in providing an explanation of a patient’s condition in as clear a way as possible.
What’s next for medical imaging and AI?
Improving the doctor-patient relationship and creating value from big data are two issues that the healthcare world has been trying to solve for a long time. Without any doubt, ElectrifAi’s imaging tools can be part of the solution to these problems.
I want to end this piece by mentioning something very real and urgent happening in the world right now. At ElectrifAi, we’ve been thinking about how our imaging engine could be used as a screening tool for the coronavirus (COVID-19). With just a handful of images of lungs, we’ve built a model of what lungs look like in 3D and under different physiological conditions. With that model, we have successfully identified outliers within the organs, segmented them, and put them through a classifier to get a binary decision: tested positive or not!
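The outlier-then-classify step described above can be sketched in a few lines: model “normal” lung feature vectors by their mean and spread, then flag any scan whose features fall far outside that baseline. This is a deliberately simple z-score sketch with made-up feature values, not ElectrifAi’s actual COVID-19 classifier.

```python
import numpy as np

def fit_baseline(healthy_features):
    """Summarize 'normal' lung feature vectors by their mean and spread."""
    X = np.asarray(healthy_features, dtype=float)
    return X.mean(axis=0), X.std(axis=0) + 1e-9  # epsilon avoids divide-by-zero

def classify(features, baseline, z_threshold=3.0):
    """Binary decision: flag a scan whose features are outliers vs. the baseline."""
    mean, std = baseline
    z = np.abs((np.asarray(features, dtype=float) - mean) / std)
    return bool(z.max() > z_threshold)

# Hypothetical per-scan feature vectors (e.g. opacity fraction, mean density)
healthy = [[0.10, 0.52], [0.12, 0.50], [0.11, 0.49], [0.09, 0.51]]
baseline = fit_baseline(healthy)
```

A scan close to the baseline, such as `[0.11, 0.50]`, would not be flagged, while one far outside it, such as `[0.45, 0.30]`, would be.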
Our innovative COVID-19 solution has many applications. It is entirely possible that hospitals will become the bottleneck for thousands of patients who present themselves with symptoms. Our solution can speed up diagnosis, and has the added advantage of not just determining whether a patient has the disease, but also providing additional insights, such as the stage of the disease. The solution could therefore be used to steer severely ill patients to dedicated facilities. In addition, it could be used at points of entry such as airports to quickly identify patients who may present symptoms and need to be separated from the community. This is ElectrifAi at its best, leading with #AiforGood.