I work at ValueFirst Digital Media Private Ltd. as a Product Marketer on the Surbo team. Surbo is a chatbot generator platform owned by ValueFirst.
Can Artificial Intelligence really change Healthcare?
Artificial Intelligence has become a buzzword, and the technology is set to disrupt many industries. AI's application in medicine is a promising area, but the question is whether doctors and patients will feel comfortable using it. Many new startups are leveraging AI to improve patient outcomes, yet we have not seen AI used pervasively in clinics.
To date, the primary application of AI in healthcare has been pairing algorithms with structured exercises in reading patient data and medical images, in order to train machines to detect abnormalities. This kind of training is known as "deep learning". Similarly, algorithms are being used to sift through vast amounts of medical literature to inform treatment decisions in cases where a human would need an enormous amount of time to cover the same material.
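The core idea above, training a model on labeled images so it learns to flag abnormalities, can be sketched in miniature. This is only an illustration, not any real clinical pipeline: the synthetic 8x8 "scans", the bright-blob "abnormality", and the tiny logistic-regression classifier (a stand-in for the deep networks used in practice) are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scan(abnormal: bool) -> np.ndarray:
    """Synthetic 8x8 'scan': abnormal ones carry a bright 2x2 blob."""
    img = rng.normal(0.0, 1.0, size=(8, 8))
    if abnormal:
        r, c = rng.integers(0, 7, size=2)
        img[r:r + 2, c:c + 2] += 4.0  # the 'abnormality' the model must learn
    return img

# Labeled training set, playing the role of a radiologist-annotated archive
X = np.stack([make_scan(i % 2 == 1).ravel() for i in range(400)])
y = np.array([i % 2 for i in range(400)], dtype=float)

# Tiny logistic-regression classifier trained by gradient descent,
# standing in for the much larger networks used on real medical images
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of abnormality
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = float(np.mean((p > 0.5) == (y == 1)))
print(f"training accuracy: {acc:.2f}")
```

Real systems differ in scale, not in kind: more layers, far more data, and careful validation, but the same loop of labeled examples driving a model toward better detection.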
Companies like MedyMatch and Viz are doing just that: applying proprietary deep-learning algorithms to help physicians make faster stroke diagnoses in emergency treatment situations. Their algorithms ingest patient CT scans and use the trained models to aid in diagnosing a stroke. Progress here is especially significant because receiving appropriate treatment quickly has a large impact on patient outcomes.
The annual Radiological Society of North America (RSNA) conference was held in Chicago at the end of November, and the overwhelming topic of the week was the use of AI in radiology and medical imaging. I heard firsthand accounts that most scientific sessions involving AI were standing room only, with researchers presenting promising applications in stroke care, finding and classifying the risk of lung nodules, and identifying imaging cases that need priority review by a radiologist.
While these technologies and approaches to AI in the clinical setting hold promise, there has been a recent backlash in the marketplace after IBM's Watson failed to live up to its considerable hype. Watson was to play a central role in an oncology clinical decision support system at the MD Anderson Cancer Center, but the well-publicized breakup of that partnership has given some in the industry pause about the promise of AI in healthcare.
Facing the Challenges
Companies developing AI and machine learning are forging ahead with the understanding that they face uncertainty in navigating the FDA clearance or approval pathway needed to commercialize these quickly changing technologies. Many of these products fall under the FDA's clinical decision support software classification. There is new guidance for those classifications, but a significant gray area remains in how the FDA will regulate AI offerings.
The FDA has recognized that the existing commercialization paradigm quickly becomes too burdensome for innovation at such a rapid pace. The agency has created the Digital Health Innovation Action Plan to address these concerns and to create a new regulatory pathway for emerging technologies. Under this action plan, the FDA is partnering with some of the world's most innovative companies (Apple, J&J, Roche, Samsung, Verily) on a new, tailored approach to regulating digital health technologies like AI. The likely output is a new way for the FDA to collaborate with industry and ensure the focus stays on clearing and approving the highest-risk technologies.
Clinicians and regulators may find it difficult to trust a deep learning algorithm that doesn't share any information about how it arrived at a certain diagnosis. This "black box" of information makes it difficult to provide transparency to regulators as well as the physicians relying on it. Where this black box exists, it is going to become ever more important that FDA is comfortable with the technology behind it as well as the company producing it.
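One common way to pry open such a black box is occlusion sensitivity: mask parts of the input and see how much the model's score drops, which reveals the region driving a decision without needing access to the model's internals. The sketch below is purely illustrative; the "black box" is a fixed linear scorer invented for the example, standing in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in 'black box': any function mapping an image to an abnormality score.
# Here it is a fixed linear scorer that secretly keys on one 2x2 region;
# in practice it would be a trained network whose internals are opaque.
weights = rng.normal(size=(8, 8))
weights[3:5, 3:5] = 5.0

def black_box(img: np.ndarray) -> float:
    return float(np.sum(img * weights))

img = np.ones((8, 8))
base = black_box(img)

# Occlusion sensitivity: zero out each 2x2 patch and record the score drop.
heat = np.zeros((7, 7))
for r in range(7):
    for c in range(7):
        occluded = img.copy()
        occluded[r:r + 2, c:c + 2] = 0.0
        heat[r, c] = base - black_box(occluded)

# The largest drop points at the region the model actually relies on
peak = np.unravel_index(np.argmax(heat), heat.shape)
print(f"decision driven by patch at {peak}")
```

Heatmaps like this, overlaid on the original scan, are one way vendors can give physicians and regulators at least a partial view into why an algorithm flagged a case.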
There are also security concerns around AI technologies that use the cloud and handle Protected Health Information (PHI). Factors such as HIPAA regulations and cybersecurity need to be thought through; both require dedicated staff and impose costs on manufacturers.
In the end, the main driver of this growth is our motivation to do things better. The use of AI in medicine rests on the assumption that humans are imperfect, and that computers can help reduce errors and bias in healthcare. As AI evolves, it is likely to move beyond the hype and start delivering tangible results.