The burdens that are keeping Artificial Intelligence from reaching its full potential.

By arvind | Oct 10, 2018 | 7734 Views

The omnipresence of AI is felt in every industry, and it is considered a technology that needs to be shaped through innovation. Yet even after being adopted into most companies' workflows, a sense of concern still surrounds AI. There is no denying that the use of AI is growing ever faster: a recent survey found that 69% of companies are using AI, machine learning, deep learning, and chatbots - yet only a fifth (21%) of those that adopted AI felt their projects were delivering meaningful outcomes.
As the honeymoon phase between AI and companies comes to an end, companies have started to realize the downsides of the relationship. One prominent example is IBM's Watson, which has been unable to prove its worth in healthcare, and specifically in cancer treatment, because the data used in the system often came from only a small number of sources - it has even been reported to suggest 'unsafe' treatments. There are a number of barriers a company must overcome before AI pays off.
This post will cover four challenges that are preventing AI from reaching its full potential.
  • There is a shortage in technical skills in AI
    There is no shortage of workers in the field of AI; the concern is that too few of them have the right skill sets or technical expertise. A recent survey covered this concern and found that 56% of senior AI professionals agreed that the lack of talent and the limited supply of qualified workers was the single biggest barrier impacting AI projects. Adding to the pile of issues, many companies find it difficult to attract the 'O.G.s' of the digital world. Another cause of this problem is the pay gap between the science and technology industries. In recent times, many companies have also gained a 'hire and fire' reputation within the community. As more people unfamiliar with the field join the industry, upskilling those already in it will be a key factor in improving AI, as will changing job-seekers' impressions to attract skilled data scientists to roles in life sciences.
  • Outcome is affected by the quality of data
    Access to quality data is still limited, and that directly affects the final AI results. At the end of the day, AI is only as good as the data it is fed. In AI, the 'garbage in, garbage out' principle is critical when building algorithms, and even the most experienced technology companies can get it wrong. For example, in 2016, Microsoft's AI-driven Twitter chatbot, Tay, went completely rogue and tweeted racist statements when attempting to mimic the language patterns of its 18-24 demographic. Tay was said to have fallen in 'with the wrong crowd' - and while this example likely didn't cause physical harm to anyone during its short run, it highlights that when AI makes decisions about people's health, a correct, impartial response is paramount.
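To make 'garbage in, garbage out' concrete, here is a minimal, hypothetical sketch (the data, labels, and function names are invented for illustration, not taken from any real system): a trivially simple "model" that just learns the majority label will faithfully reproduce whatever skew exists in its training data.

```python
from collections import Counter

def train_majority_model(examples):
    """A deliberately naive 'model': always predict the most frequent
    label seen in training. It can only echo the skew it was fed."""
    counts = Counter(label for _, label in examples)
    majority_label = counts.most_common(1)[0][0]
    return lambda _features: majority_label

# Hypothetical, heavily skewed training set: 90% of cases were labeled
# with one treatment, so the data - not the patient - drives the answer.
biased_data = [("case", "treatment_A")] * 90 + [("case", "treatment_B")] * 10

model = train_majority_model(biased_data)
print(model("new patient"))  # prints: treatment_A
```

Any real model is far more sophisticated, but the underlying point is the same: if the inputs are unrepresentative, no amount of algorithmic polish fixes the outputs.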
  • Data standards deficiency
    As well as challenges in accessing patient data, there are currently no industry-wide data standards. These standards would need to cover patient data in the broadest possible sense and from a wide range of sources, including mobile devices, wearables and more - from healthy populations, not just those who see themselves as patients. As a result, significant time and resources are required to integrate data into corporate systems and make it usable. Standardized data formats would tackle this issue, but will require much greater collaboration between pharma and biotech organisations and data and technology firms. Currently, there are guidelines that promote data sharing, such as the FAIR principles (Findable, Accessible, Interoperable, Reusable), but these need to be encouraged further to maximize the usability of data. A survey found that a quarter of respondents are aware of FAIR but haven't yet implemented FAIR-driven policies. While awareness is important, this finding illustrates the extent of the work needed to ensure the principles are followed across the whole sector.
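The integration cost described above can be sketched in a few lines. In this hypothetical example (all field names, sources, and the target schema are invented for illustration), the same heart-rate reading arrives in two different shapes from two sources and must be mapped onto one common schema before it is usable:

```python
def normalize(record):
    """Map heterogeneous source records onto a single standard schema.
    Without an agreed standard, every new source needs another branch here."""
    if record.get("source") == "wearable":
        return {"patient_id": record["uid"],
                "heart_rate_bpm": record["hr"]}
    if record.get("source") == "mobile_app":
        return {"patient_id": record["user"],
                "heart_rate_bpm": record["heartRate"]}
    raise ValueError("unknown source: %r" % record.get("source"))

# Two records carrying the same kind of data in incompatible shapes.
raw = [
    {"source": "wearable", "uid": "p1", "hr": 72},
    {"source": "mobile_app", "user": "p2", "heartRate": 68},
]
standardized = [normalize(r) for r in raw]
print(standardized)
```

An industry-wide standard would, in effect, delete this translation layer: every producer would emit the common schema directly, which is the kind of interoperability the FAIR principles aim at.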
  • The fear of change can damage progress
    The progress of AI has also been hindered by anxiety over change - such as the ethics of AI, and employee concerns over potential job losses - with a recent survey finding that 67% of workers are worried about machines taking work away from people. But these fears over robots taking our jobs are misplaced; AI will augment researchers by taking on repetitive, time-consuming work, allowing them to be more creative and follow different paths to enable fruitful research. On the other hand, reservations over how 'biased' or 'unethical' AI might be will need to be addressed, especially within the life sciences and healthcare industries, as it directly affects patients. In clinical trials, for example, worries have been expressed that recruitment is not truly representative of demographics. This is a problem given that age, race, sex, genetic factors, other drugs being taken, and more, can play a vital role in a person's response to a drug or intervention. A report published in Nature found that although the number of countries submitting clinical trial data to the FDA has almost doubled since the 1990s, there has been no equivalent increase in the diversity of the clinical trial population - in 1997, 92% of participants were white; in 2014 the figure was 86%. Additionally, adult males dominate the clinical trial population, representing about two thirds of participants. The diversity of clinical trial recruitment must be improved to ensure we are building AI algorithms that will provide the best recommendations for all groups.

Source: HOB