It’s Getting Harder for Computers to Identify AI Images

Photo credit: Deepak pal/Flickr

The refrain “Pics or it didn’t happen” is about to lose all meaning.

As the world’s largest tech companies dive deeper and deeper into the capabilities of artificial intelligence, even the programs designed specifically to identify phony images created by AI can’t tell what is real and what is fake, The Wall Street Journal reported. Now governments are stepping in to try to rein in the Tomorrowland tech.

What is Reality?

Most free AI programs still produce fairly janky images. For example, visit the AI image generator Craiyon (formerly known as DALL-E Mini), type in “Tony Soprano,” and you’ll likely end up with a monstrosity that looks like the app smashed 10 different photos of the fictional mafia dad together and then went over the result with a smudge tool.

What was once futuristic fodder for sci-fi writers like Arthur C. Clarke and Harlan Ellison has quickly become run-of-the-mill. And much like those writers, it’s hard not to notice AI’s potential for harm and deception. Fake images like “Puffer Coat Pope,” Donald Trump getting tackled by police, and Emmanuel Macron running from protestors are worth a chuckle for now, but with Microsoft, OpenAI, Alibaba, and others pouring massive investments into AI tech, keeping up with the advances won’t be easy:

  • Tech company Optic runs a detection website, AI or Not, which until recently had a 95% accuracy rate; after Midjourney’s latest update, that dropped to 89%. At one point, the tool was even fooled by “Puffer Coat Pope.”
  • Companies like Microsoft are trying to get ahead of the technology by imposing restrictions on their generators: Bing’s Image Creator doesn’t let users enter prompts featuring prominent public figures. Midjourney currently relies on human moderators, but it’s rolling out an algorithm to process user requests, company founder David Holz told the WSJ.

The CEO of Hive – another company that tags AI content – calls it an arms race. “We look at all the tools out there and every time they’re updating their models, we have to update ours and keep up the pace,” Kevin Guo told the WSJ.

Working for the Clampdown: The self-imposed guidelines are a sign of good faith, but China isn’t taking any chances. The Middle Kingdom’s government released draft rules on Tuesday that would require companies to submit security assessments to authorities before launching AI tools to the public. Users would also have to provide their real names and other personal information before accessing AI programs. Companies that don’t comply could face fines, suspensions, and criminal investigations.