All machine learning is essentially algorithms trained by humans to spot patterns in data.
But humans, too, have to learn how to make the right calls when it comes to training those algorithms.
In this video interview with Beet.TV, Gila Wilensky, US president of Xaxis, says “ethical AI” is essential.
AI’s spectrum of benefits
Xaxis is the WPP agency focused on driving business outcomes using programmatic ad-trading. It has operated its own AI offering, Copilot, for five years now.
Introducing Copilot, Xaxis' proprietary AI-powered technology that optimizes campaigns using machine learning models. #OutcomeMedia #artificialintelligence #machinelearning pic.twitter.com/lAVva9uBmy
— Xaxis (no longer active, follow @GroupMWorldwide) (@XaxisTweets) September 28, 2018
Wilensky says AI use cases in programmatic advertising include streamlining bidding and investment decisions toward client goals through customizable machine learning models.
“The true promise of programmatic and AI is to automate as much as possible and streamline so we can free up human resource and human time to do other things,” she says. “And so I think AI is great for automating decisions that a human can make quickly.”
Humans at the wheel
But she is also cautious that, without the right guidance, the algorithms could lead to poor outcomes.
“If you put garbage (data) in, you get garbage out,” Wilensky, who joined Xaxis earlier in 2020 from Essence, adds.
“If you’re feeding the algorithms unethical or unsafe data or poor inventory to run against with programmatic, you’re going to see that output.
“The humans are ultimately responsible for the AI and the decisioning. We need to be very aware of the human biases that we’re coding in, our methodologies for sourcing data, the inventory quality, so that we can be as brand safe and ethical as possible in the outputs of what the algorithms are working towards.”
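In code terms, the input-side discipline Wilensky describes could be as simple as filtering what reaches the algorithm in the first place. The sketch below is purely illustrative Python under assumed conditions; the field names, blocklist and thresholds are invented for this example and are not Xaxis’ or Copilot’s actual code.

```python
# Illustrative only: a hypothetical pre-training filter that drops "garbage"
# records before they ever reach a bidding model. Field names and thresholds
# are assumptions made for this sketch.

UNSAFE_DOMAINS = {"example-unsafe-site.com"}  # hypothetical brand-safety blocklist


def is_usable(record: dict) -> bool:
    """Keep only records from brand-safe inventory with consented, reasonable-quality data."""
    return (
        record.get("domain") not in UNSAFE_DOMAINS      # inventory quality
        and record.get("user_consent") is True          # ethical data sourcing
        and record.get("viewability", 0.0) >= 0.5       # minimum quality bar
    )


def clean_training_data(records: list[dict]) -> list[dict]:
    # "Garbage in, garbage out": filter before the algorithm sees the data.
    return [r for r in records if is_usable(r)]
```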
Some time ago, in association with IAB Europe, @XaxisTweets conducted a pan-European survey to understand the current and future impact of Artificial Intelligence (#AI).
Survey results are here: https://t.co/QN25wAZf6N pic.twitter.com/KlTxgFbvrO
— IAB Europe (@IABEurope) December 16, 2020
Guardrails needed
So Xaxis’ Wilensky has a solution: humans to watch over the machines.
“If you create the right boundaries and guardrails for AI, when you’re writing that code, then you’re able to have the outputs live within those boundaries,” she says.
“Some of the fear of lack of ethics in AI is really when it sort of goes wild and you don’t have the right guardrails in place.”
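On the output side, the “boundaries and guardrails” she describes might amount to constraining whatever the model suggests before it is acted on. The following is a minimal, hypothetical sketch in Python; the guardrail values and function names are assumptions for illustration, not Copilot’s real interface.

```python
# Illustrative only: hypothetical guardrails applied to a model's raw output
# so that "the outputs live within those boundaries."

GUARDRAILS = {
    "min_bid": 0.10,      # never bid below this CPM (hypothetical values)
    "max_bid": 12.00,     # cap runaway bids
    "max_frequency": 5,   # per-user exposure cap
}


def apply_guardrails(raw_bid: float, impressions_served: int) -> float:
    """Clamp the model's suggested bid to the allowed range and enforce the frequency cap."""
    if impressions_served >= GUARDRAILS["max_frequency"]:
        return 0.0  # outside the boundary: do not bid at all
    return min(max(raw_bid, GUARDRAILS["min_bid"]), GUARDRAILS["max_bid"])


# Example: an unconstrained model output of 47.0 is pulled back inside the boundary.
print(apply_guardrails(47.0, impressions_served=2))  # -> 12.0
```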
The push for ethics
“Ethical AI” has become a major theme in the industry, concerned with how to build machine learning algorithms fairly, without encoding human biases.
There is even an institute dedicated to the practice, while Google’s internal AI ethics unit this month hit the press amid in-house diversity concerns.
A side issue is transparency. If ad-tech were not already mired in concerns about lack of visibility into its supply chain, some fear decisions made by AI will be even more opaque. That is why a movement is growing to encourage AI systems to show their workings.
Ultimately, though, Xaxis’ Wilensky imagines AI playing a part in keeping brands afloat after a difficult year.
“There’s no better medium than programmatic media to do so because it’s extremely flexible and we can help our clients roll with the punches as we continue into 2021 and the pandemic,” she says.
You are watching “Media In Transition: How AI is Powering Change,” a Beet.TV leadership video series presented by IBM Watson Advertising. For more videos, please visit this page.