By THE NATION
Most AI adopters – which now account for 72 per cent of organisations globally – conduct ethics training for their technologists (70 per cent) and have ethics committees in place to review the use of AI (63 per cent), according to the latest study, AI Momentum, Maturity and Models for Success.
The study was commissioned by SAS, Accenture Applied Intelligence and Intel, and conducted by Forbes Insights in July 2018.
According to the study, AI leaders – organisations rating their deployment of AI “successful” or “highly successful” – also take the lead on responsible AI efforts: almost all (92 per cent) train their technologists in ethics, compared with 48 per cent of other AI adopters.
The findings are based on a global survey of 305 business leaders in the Americas, Europe and the Asia-Pacific region, more than half of them chief information officers, chief technology officers and chief analytics officers.
AI now has a real impact on people’s lives, which highlights the importance of having a strong ethical framework surrounding its use, according to the report.
“Organisations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” said Nontawat Poomchusri, Country Managing Director and Financial Services Lead, Accenture in Thailand.
“These are positive steps; however, organisations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’. They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society. Data scientists are hungry for these guidelines.”
AI leaders also recognise the strong connection between analytics and their AI success. Among these leaders, 79 per cent report that analytics plays a major or central role in their organisation’s AI efforts, compared with only 14 per cent of organisations that have not yet benefited from their use of AI.
“Those who have deployed AI recognise that success in AI is success in analytics,” said Nontawat. “For them, analytics has achieved a central role in AI.”
Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognise that oversight is not optional for these technologies. Nearly three-quarters (74 per cent) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33 per cent). Additionally, 43 per cent of AI leaders shared that their organisation has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28 per cent).
Still, the report states that oversight processes have a long way to go before they catch up with advances in AI technology.
“The ability to understand how AI makes decisions builds trust and enables effective human oversight,” said Nontawat. “For developers and customers deploying AI, algorithm transparency and accountability, as well as having AI systems signal that they are not human, will go a long way toward developing the trust needed for widespread adoption.”
It stands to reason that companies are taking steps toward ethical AI and ensuring AI oversight because they know that faulty AI output can have real repercussions. Of the organisations that have either already deployed AI or are planning to do so, 60 per cent stated that they are concerned about the impact of AI-driven decisions on customer engagement – for example, that automated actions will not show enough empathy, or that customers will trust them less.