TUESDAY, April 23, 2024
nationthailand

Ethics must come first when embracing artificial intelligence

WHENEVER artificial intelligence (AI) comes up in conversation, there is often an underlying sense of fear – of having identities traced and profiled (or worse, hacked), of losing jobs to machines, or that conventional post-apocalyptic fear of a man-against-machine world where robots pose a mortal threat to humanity.

This sense of fear has been widespread in society for as long as technological innovations have been improving our lives. It’s worth noting that people feared the invention of all sorts of transport, from cars to trains to airplanes; they feared assembly lines would wreak havoc on employment; and more recently, during the run-up to Y2K, some doomsayers worried that computer crashes worldwide would lead to mayhem and destruction.
The reality is that innovation and technological advancements do lead to disruption and job displacement. Are we to stop evolving or innovating? No, in fact, the solution is to anticipate the disruption and proactively build strategies to develop the skills and jobs that will be needed in the future.
This mindset must remain front and center for all business executives. Many corporate leaders are considering incorporating AI across their enterprises not only to increase efficiency and reduce costs but, more importantly, as an opportunity to completely reimagine their businesses and create entirely new business models and processes. But with the advent of AI in business, executives need to think about something we call “collaborative intelligence” – that is, how they frame the relationship between employees and machines.
This idea isn’t about humans versus machines. Nor should one think of AI as trying to build robots with superhuman capabilities. In fact, AI is about humans plus machines working together to do things differently and to also do different things. Think of it like creating Marvel’s Iron Man versus building a cyborg like the Terminator.
But even Tony Stark’s Iron Man can be seen as a cautionary tale. At times, in both the comics and the movies, he goes overboard with technology without considering the social ramifications. Executives cannot afford to make similar mistakes and must recognise that AI needs to be anchored in a strong set of corporate values and compliance – any misuse could violate employment and data privacy rules.
Executives must also be careful to ensure data veracity and data privacy. They must manage data and algorithms so that both are human-centric, fair, accountable, transparent and honest – the core principles of Responsible AI. Indeed, fear of loss of control – the worry that the power of AI will be left in the hands of a few digital elites while the rest of the world is left at their mercy – is another standard theme voiced by critics, and one that has played out in sci-fi movies.
Leveraging AI and kicking off a transformation of your business without thinking through the social impact on your staff and customers can put your company’s reputation, brand and operations at risk.
So, what red flags must executives consider?
They first need to ask questions about the AI itself, to make sure there are no design or data biases built into it. Many of these issues can be addressed by ensuring transparency and by adopting a policy of pairing people with machines rather than replacing one with the other.
Executives will also need to think through the societal impact of the AI they are using. Will there be loss of jobs? Or job creation? Is this good for society, or to its detriment?
They also need to show clearly that ethics are at the forefront of their thinking – particularly with regard to data privacy. Recent events seem to be pushing, at least in the Western world, toward individual ownership of data regulated by commercial law, and a robust dedication to corporate responsibility will be a differentiator. Consider how in Europe the General Data Protection Regulation (GDPR), which aims to give individuals more control over their data, went into effect on 25 May 2018. Under GDPR, individuals must give explicit consent before companies gather and process their information, which means we will all be seeing many more “click to proceed” or “do you agree?” windows popping up – if you have not already encountered a few. In an environment of fear, however, people will gravitate toward companies they believe take data privacy seriously not because they are required to, but because they genuinely believe in its value.
Used ethically, AI can be uplifting rather than creepy. It’s a transformational business tool that will change how companies operate and do business. But it shouldn’t be viewed merely as a means for executives to grow their businesses. They should look at AI as a way to enhance human capabilities – making people better and happier in their jobs, leaving customers more satisfied with services and more confident in a company’s brand.

Contributed by GIANFRANCO CASATI, Accenture’s Group Chief Executive for Growth Markets and NONTAWAT POOMCHUSRI, Country Managing Director and Financial Services Lead, Accenture in Thailand