2023 has been dominated by the promise of AI systems that can improve our lives. ChatGPT’s launch brought the conversation to a fever pitch, setting the record for the fastest-growing user base in history.

Yet, ChatGPT represents merely a single aspect of a more profound transformation. The real transformation is in the evolving maturity of AI, which opens up novel realms of interaction and uncharted prospects for business.

Organisations must learn to ride the wave of these changes while carefully managing the associated risks, both intended and unintended. Global tech companies have been forced to confront the potential dangers of the technology – such as bias and privacy concerns – to ensure they are protecting their customers and stakeholders.


Putting people at ease

Our latest research – which surveyed over 10,000 consumers globally – reveals that the majority of British consumers (77%) feel nervous about GenAI adoption in business. So much so that a third (35%) think businesses should halt the rapid development of GenAI until effective government regulations are in place.

This underlines that businesses must understand that gaining the public’s confidence through ethical AI is not just a regulatory obligation, it is a strategic competitive advantage.

For all the heated discussion of the threats and benefits AI poses to the workforce, a good guiding principle is to remember that it’s your people who will actually be using the technology. Their view on GenAI usage therefore matters most. I urge business leaders to listen closely to how their people want to experiment with and use AI safely, inside and outside the workplace.

After all, they know better than anyone where it can make a true and recognisable difference.


Bias in, bias out

According to the MIT Technology Review and Thoughtworks joint report on ethical technology priorities, combatting AI bias is a prominent concern. But bias in technology isn’t a new problem. Take the internet, for example. When we ask a search engine a question, we are likely to take our answer from the first sources that appear – which may, unbeknownst to us, be sponsored. That answer will likely differ from other sources, leaving the information we absorb biased towards one point of view. Pre-set user choices or defaults in searches can likewise influence your selections, meaning you may not be choosing the most reliable or relevant source. Naturally, we’re seeing similar bias problems spill over into GenAI adoption, from hiring processes to creditworthiness assessments in finance. It is crucial to remember that not all AI solutions are built following the kind of robust engineering practices you’d expect.

Our consumer AI research also found that, when respondents were surveyed on their top ethical concerns, 47% cited ‘human societal bias’. Breaking the bias means involving individuals from diverse backgrounds who can scrutinise algorithmic decisions and feed back to the AI system. An essential part of this means examining a range of metrics – not just performance, but also data quality and fairness. AI is moving fast, so these conversations are becoming increasingly commonplace in businesses.
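To make the idea of fairness metrics concrete, here is a minimal sketch of one widely used measure, the demographic parity difference, which compares how often a model’s decisions favour different groups. The metric choice, group labels, and decision data below are illustrative assumptions, not figures from the research cited above.

```python
# Hypothetical sketch: checking one fairness metric alongside performance.
# The groups and decisions here are made-up illustrative data.

def selection_rate(decisions):
    """Fraction of positive (e.g. approve/hire) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in selection rate between any two groups.
    0.0 means every group is selected at the same rate."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model decisions (1 = approved, 0 = rejected) per group
decisions_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved (75%)
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved (37.5%)
}

gap = demographic_parity_difference(decisions_by_group)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A large gap like this would prompt the kind of diverse-team review described above: is the disparity driven by the training data, the features, or the decision threshold?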


Don’t be lulled into a false sense of security

As AI becomes more advanced and integrated into software and everyday tasks, there’s a risk of people relying on it too much in the workplace. For example, a developer might blindly trust AI-generated code, or an autonomous car driver might become overly confident in the car’s autopilot features.

It’s crucial to test AI models properly and ensure they’re not rushed to market. Follow the same tried and tested principles used in good product development – and, crucially, make sure changes can be rolled back if they don’t work as expected.

For all the concern about AI taking jobs or posing threats, remember that your employees will be the ones using this technology. Consider how AI affects your workforce and strike a balance between encouraging them to embrace AI innovation and addressing potential issues such as privacy and intellectual property. To redress the biases discussed above, we must stay curious and engage a wide range of voices to enact change together. Give your employees a say in how AI enhances their work – they know better than anyone where it can be most beneficial.