How the race for profits could overtake the need to weigh human risks.
Depending on which expert you speak to, artificial intelligence will elevate humans’ standard of living to previously unimagined heights, or destroy civilization as we know it.
Uncertainties abound. What is certain, however, is that humanity has made it clear it plans to roll the dice – which means, we’re doing this, come what may.
Whatever wan, vestigial mewlings for caution may be sounded, we are, after all, humans – which means we cannot help ourselves – so of course we are going to open this Pandora’s Box and see what’s inside it. Which also means there’s no going back.
This is not just affecting the financial and mental well-being of the Elon Musks of the world. According to an American Psychological Association survey, countless professionals are already fretting over what this will mean for their future. Overall, survey respondents said they were worried about A.I. monitoring technologies in the workplace and that being surveilled had negative effects on their psychological well-being, causing them to feel less valued.
Specifically, 38 percent of workers surveyed said they feared A.I. might make some or all of their professional duties obsolete, while others reported they felt stressed about being monitored at work.
Around 46 percent said they were uncomfortable with their employer using technology to track them, with 51 percent feeling micromanaged and 39 percent reporting emotional exhaustion and burnout.
“Being monitored at work also appeared to coincide with poor employee morale,” the American Psychological Association noted, adding that nearly 41 percent of workers who reported being concerned about A.I. also feared they did not matter to their employer.
Around 38 percent reported that they were not worried about A.I. – a group the survey noted was predominantly white and college-educated.
While the average worker isn’t feeling too hot about this, the A.I. talent pool and A.I. recruitment are on fire, with graduating PhDs and those with specializations in large language models (A.I. tools that simulate how humans speak and write) landing behemoth salaries.
Those just out of university can earn $800,000 a year or more if they come from a top school. And those with experience at one of the big A.I. labs can fetch millions. As a case in point, our parent publication, The Daily Upside, reported this week how Sam Altman’s A.I. company, OpenAI, is angling to nab researchers at Google, offering compensation packages of up to $5 million to $10 million. (That is assuming San Francisco-based OpenAI gets a valuation of $86 billion in a share sale.)
While sky-high pay packages might imply Silicon Valley knows exactly what it’s doing and has everything under control, nothing could be further from the truth. Confusion and consternation reign, as humans scramble to determine what is possible and what will be required in this brave new world of neural networks and machine learning.
That goes double for garden-variety companies that were caught flat-footed last year by the instant ubiquity and popularity of chatbots like OpenAI’s ChatGPT. These companies are simply trying to gain a foothold in hopes of not so much keeping pace as not falling behind.
Says Veena Marr, a tech consultant at recruiter Spencer Stuart, “I think panic is the right word, around, ‘Oh my goodness, have we missed the boat? What are we doing?…What does this mean for our competitive advantage? What does this mean for our growth?'”
In short, it is roundly agreed that A.I. will be a top priority for professionals and companies for the foreseeable future. But because the landscape is so new and so uncertain, nobody really knows what that means, or what might happen. Not even Sam Altman, the chief executive of OpenAI, who is pushing ahead in an attempt to create what’s called “artificial general intelligence,” also known as AGI, computer software that boasts the same intelligence as humans.
During OpenAI’s developer conference last week, Altman shared the latest technological advances behind his company’s ChatGPT chatbot, immediately inflaming the arms race among tech companies to produce competing A.I. tools.
For all of ChatGPT’s viral success, however, OpenAI has yet to turn a profit – and it’s still far from evident that Altman can build a technology that achieves AGI status, which, for the time being, remains a pipe dream.
What he is sure of, though, he said in an interview with The Financial Times this week, is that he will need ever greater amounts of capital to have a shot at the goal of all his endeavors, which is, “intelligence, magic intelligence in the sky. I think that’s what we’re about.”
Magic intelligence in the sky is not going to come cheap. OpenAI’s largest investor, Microsoft, has invested $10 billion in the company to help it develop its technology, but Altman says the company will need “to raise a lot more over time” to cover the outrageous costs of building groundbreaking A.I. models, acknowledging, “training expenses are just huge.”
How soon will this magic sky intelligence dwell among us? Altman says he does not know. “There’s a long way to go, and a lot of compute to build out between here and AGI,” he says.
According to Goldman Sachs, breakthroughs in A.I. have the potential to bring sweeping changes to the global economy, equivalent to an estimated 7 percent increase in global gross domestic product. In dollar terms, that’s close to $7 trillion, the bank said. As new A.I. tools become more widely available to companies, workers and society, the trend could also lift productivity growth by 1.5 percentage points over a 10-year period.
These tools also have the potential to make it easier for A.I. to deceive humans. This week, the U.S. Senate heard testimony on how scammers are busily using artificial intelligence to create voice clones and deepfakes to target elderly Americans to devastating effect.
A Philadelphia-based lawyer who testified, Gary Schildhorn, told the Senate Aging Committee that A.I. spoofed his son’s voice so convincingly that he believed his son had been in a car accident and was in jail, crying and in need of his help. (This story has to be heard to be believed.)
While fears about deepfakes and voice clones have been widely downplayed – and even a cause for mirth and amusement, as reported by Power Corridor earlier this year – Schildhorn’s testimony shows how, when misused, they can cause real and immediate damage.
Perhaps no one knows this better than celebrities and professional actors, whose job it is to constantly look after their images. Scarlett Johansson recently had to take legal action to keep her likeness from being swiped. And this week, after a lengthy negotiation, members of SAG-AFTRA inked an agreement forcing Hollywood film studios to obtain their consent – and compensate them – for using their A.I.-generated likenesses.
It’s a new Wild West, where the pressure to stay ahead and stay competitive and the risk of unintended consequences are both enormous – and likely to collide.
How high a price humanity may pay in the race to cash in remains to be seen.