We spoke with experts on the state of AI in digital advertising and e-commerce, the future of this tech and the privacy implications: Tejas Dessai, a research analyst at Global X ETFs, Greg Cypes, chief product and technology officer at Collage Group, and Calli Schroeder, senior counsel and global privacy counsel at the Electronic Privacy Information Center. Here’s what they had to say.
Patent Drop: How is AI affecting e-commerce, online shopping and generally the way that consumers are sold things?
Dessai: I think there are going to be very strong implications across the economy. And just like social media, just like e-commerce, every industry is experimenting and trying to understand what could work and what couldn't. I think broadly speaking, AI could sort of help in (a few) major areas.
Number one, I think it can be a cost deflator. Broadly, it can help lower costs in many areas, be it content generation, boosting user-generated content, improving content libraries for social media platforms and for e-commerce brands, brand assets, marketing creative, things like that.
I think for large platforms like Meta, Amazon and Google, to a certain extent … clearly these providers have been using AI in the past to build their targeting mechanisms and their algorithms. What more can these large language models add to that? What are the possibilities there?
PD: What are the limitations and challenges that companies are facing as they implement AI tools into their marketing and e-commerce processes?
Dessai: What's important to sort of keep in mind is that these tools are experimental. And in many cases, if you look at some of the most popular models like ChatGPT, these models haven't been crafted for e-commerce applications specifically, or for very specific niche use cases. Which means that the output they produce is sometimes pretty basic or not tailored to specific information. That's something that brands need to be cautious about. You can't really rely on these tools to drive all of your campaigns, to drive all of your creative — it's more of a copilot.
The other (risk) is we can see a lot of competitive pressure growing. A lot of brands are not using these technologies and tools, and aren't prepared for them. They could find themselves on the back foot.
PD: How is AI impacting digital advertising?
Cypes: If you think about the different stages of a digital advertising campaign, everything from planning to the creative and messaging, AI can be used as an input at any of those stages. What used to take weeks from a concepting point of view now takes hours. Even with optimization and measurement, because it's sometimes hard to measure the effectiveness of an ad campaign, artificial intelligence and modeling can help answer some of those questions.
Many years ago, it was a little bit of the wild, wild west of third-party cookies and tracking. Now, AI has certainly taken over, replacing those third-party cookies with predictions and algorithms to help do better targeting. We haven't gone away from the tactics that were used before the (California Consumer Privacy Act) and the (EU General Data Protection Regulation), we've just replaced the methodology.
PD: What does the future of this tech look like?
Cypes: AI has been around for a lot longer than the general population thinks. We’ve been using and leveraging artificial intelligence and machine learning for many, many years. Generative AI obviously has made it top of mind and more hip to talk about AI.
I think what's really interesting is when you start fusing (generative AI) with the other components or pieces of AI technology that have existed for a long time, then you get to a world where we start seeing predictions coming out of these prompts. That's where I think marketers have a really interesting opportunity. They can say, "Build me an ad that will get me a lot more sales on my website," or "Tell me how this ad would perform with this specific audience." Some of that technology exists in some areas, but not within the prompt space of generative AI. So it will be really fascinating to see how bringing those things together works.
PD: How is the onset of AI within digital advertising and e-commerce impacting user privacy?
Schroeder: The problem is there's not a ton of transparency with how these systems are working. For targeted advertising in unfair contexts … it happens similarly in AI systems. If they're able to link the person asking questions with a specific individual through the ad tech files that get built on people, AI can do the same thing and generate targeted and sometimes discriminatory results for some of those requests.
We already have this ad tech system that uses you as a commodity, collects tons of targeted information on you, and sells that information. If those files exist and you can sell them to other companies for digital marketing and advertising, then you can absolutely also sell those files to the companies that make AI systems, and then they can feed that into their datasets. Then, (AI systems) are learning more about, "People with this interest frequently also have this interest," or "a person in this location frequently supports this political party," or "a person with this income level frequently is interested in these types of things."
So, theoretically, you could sell these targeted advertising files — which are highly, highly personalized and very specific — and they can go into training datasets, which means that they’re perpetuating these connections that already can lead to some discriminatory results.
PD: What’s next for this tech, and what should tech firms take into account when pushing to implement it?
Schroeder: I think the integration of AI with the targeted/behavioral/surveillance advertising system we already have is going to be a nightmare. We've seen bad, discriminatory and privacy-invading practices with targeted advertising. Separately, on the AI front, we've already seen some real issues when it comes to the results that are generated, and we've seen issues when it comes to the methods that companies are using to create their datasets.
When you combine those two, the reason I think it's going to be a real problem is because we already have really serious privacy issues. When those get baked into training datasets and into widely used AI systems, it means that any sort of discrimination, privacy violations or irregularities are going to spread exponentially and be used in these systems in ways that will not be possible to undo.
For companies that are looking to integrate AI systems, there shouldn't be exceptions just because AI is a shiny new technology. You have to go through the work, and that includes putting in place data protection addenda and making sure that liability is assigned for who's responsible for the information that gets put into these systems.