Katharina Cavano | Palladium - Jun 16 2022
There's More to Google's AI Than Sentience

This week, a Google engineer went public with his claims that the company’s artificial intelligence (AI) had come to life and was, for all intents and purposes, a sentient being. While Google has refuted his claims and much of the technology world agrees with its response and findings, the episode has sparked a debate and shone a spotlight on the ethics of AI and what it means for today’s society.

The program in the spotlight is LaMDA, Google’s system for building artificially intelligent chatbots on top of its advanced language models. The program mimics speech by reflecting patterns observed in the huge volumes of communication curated by Google.

“What we have here is brute force,” says Jonathan Friedman, Palladium Senior Technical Advisor, explaining the AI technology in question. “We now have way more computing power and way more data than ever before, and what Google has done is force a massive amount of data through a computer program that can interpret a sequence of words and then spit back a sequence of words that makes sense.”
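In other words, the system learns statistical word patterns from text and emits plausible continuations. As a rough, toy illustration only (LaMDA itself relies on far larger Transformer-based language models; the tiny corpus and code below are invented for this sketch), a simple bigram model captures the same “sequence in, sequence out” idea:

```python
import random
from collections import defaultdict, Counter

# Toy illustration only: LaMDA uses large Transformer language models, but the
# core idea Friedman describes -- learn word patterns from data, then emit a
# plausible next sequence -- can be sketched with a simple bigram model.

corpus = (
    "the model reads text and predicts the next word "
    "the model learns patterns from the text it is fed"
).split()

# Count how often each word follows each other word in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit a word sequence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the model reads text and predicts the next word"
```

The output only ever recombines patterns seen in the training text, which is the sense in which such systems mimic rather than understand speech.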

He adds that while this is not how early AI visionaries imagined computers would achieve the quality of being indistinguishable from humans in communication, and it is not a sentient being as the Google engineer has claimed, there is still potential for dangerous outcomes. That risk lies in how humans interact with the technology.

What does that mean, and why does it matter if the chatbot you’re talking to feels human? For Friedman, it starts blurring some serious lines. “The more realistic you experience the AI to be, the harder it is to look for biases, and if a person is basing decisions off the answers they’re getting, it can be a slippery slope.”

And if you think that’s a far-off reality, he notes that most people are making AI-influenced decisions every day. “From the ads we’re served to the product recommendations we get, the directions we follow from point A to point B, the news we watch, and even what we invest in, they’re all rooted in AI.”

“The more realistic you experience the AI to be, the harder it is to look for biases.”

“By design, AI models are prone to ethical issues,” adds Sana Khan, Palladium Senior Technical Advisor. “The real-world performance of an AI model may differ from its initial validation.” She explains that the human factor can be both the problem in creating bias in data and, in turn, the solution in removing it. “A machine learning technique takes what you feed into it and uses it. So, if you are showing it biased information, it is going to learn from those biases.”
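To make Khan’s point concrete, here is a minimal sketch using invented synthetic data and a hypothetical “group” attribute: a model fit to biased historical decisions simply reproduces the skew it was shown.

```python
# Minimal sketch (synthetic data, hypothetical "group" attribute) of how a
# model trained on biased historical decisions reproduces that bias.

# Historical decisions in which group B applicants were approved less often
# for reasons unrelated to merit -- the bias lives in the labels themselves.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

def approval_rate(group: str) -> float:
    """Estimate approval probability per group from the historical labels."""
    labels = [label for g, label in history if g == group]
    return sum(labels) / len(labels)

# "Training" here is just estimating these rates, which is effectively what any
# model will do if group membership is its most predictive feature.
learned_policy = {g: approval_rate(g) for g in ("A", "B")}
print(learned_policy)  # {'A': 0.8, 'B': 0.4} -- the learned "policy" mirrors the old bias
```

Nothing in the fitting step corrects the disparity; the model faithfully learns whatever pattern, fair or not, is present in the data it is fed.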

This is not a new problem, and though efforts to build solutions are progressing, they have certainly not yet been implemented broadly or successfully. “If you’re working on an AI algorithm, you need to ask if you can be a better curator and steward of the data being used to train AI models,” Friedman notes. “Are we tracking the way that data is then being used, or whether it is having unequal impacts on protected groups?”
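One simple form of the tracking Friedman describes is comparing a model’s positive-decision rates across groups. The sketch below is illustrative only; the group labels, data, and the 0.8 “four-fifths” threshold are assumptions made for the example, not a Palladium or Google tool.

```python
# Illustrative check of unequal impact: compare selection rates across groups
# and flag large gaps for human review. Data and threshold are assumptions.

def selection_rates(predictions):
    """predictions: list of (group, decision) pairs, decision in {0, 1}."""
    rates = {}
    for group in {g for g, _ in predictions}:
        decisions = [d for g, d in predictions if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

def disparate_impact_ratio(predictions):
    """Ratio of the lowest group selection rate to the highest."""
    rates = selection_rates(predictions)
    return min(rates.values()) / max(rates.values()), rates

preds = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 45 + [("B", 0)] * 55
ratio, rates = disparate_impact_ratio(preds)
print(rates, round(ratio, 2))  # {'A': 0.7, 'B': 0.45} 0.64
print("flag for review" if ratio < 0.8 else "within threshold")
```

A check like this does not remove bias by itself, but it gives the data stewards Friedman describes a routine way to notice when a model’s decisions drift apart across groups.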

Khan adds that the AI models are already out there; it is too late to go back, but there now needs to be validation and verification. “Whatever we get in terms of output, we cannot just fully trust, because the processes involved in producing it are not error- or bias-free. We need to come up with a set of tools, frameworks, or procedures to regularise its usage.”

“You cannot just play with this, it’s touching people’s lives,” she notes.

While Friedman and Khan agree that it is too late to go back, they add that it is not too late for better governance. “Social media bots and deep fakes already exist, so it’s got to be government policy that forces technology companies to be transparent with their users,” Friedman adds.

“The harder it becomes to distinguish AI from people, the more urgent this becomes, because people won’t know what or who they’re dealing with.” And that is truly where the risk lies for society. Friedman adds that it is imperative for technology users to be aware when they are interacting with AI, lest they assume they are chatting with a human.

Everyday usage of AI and related technologies is already improving lives around the world, from in-home assistants to simple spellcheck, autofill, and translation technologies, and as computing processors get stronger, so will our AI tools. As that strength grows, so too must the policy to keep it in check, and for now the window is wide open for governments to step in and ensure that AI continues to help society rather than harm it.


Learn more about diversity, equity, inclusion, and accessibility principles in monitoring, evaluation, learning, and analytics, or contact info@thepalladiumgroup.com for more.