Executive turmoil at the multinational corporations leading artificial intelligence

“I’m not a Luddite, there is some value in this new technology. The motivation is just wrong.” The reference to Luddism sounds out of place in a letter from Scott Jenson, one of the most experienced and renowned user experience designers in Silicon Valley. With more than 35 years at Valley companies, 15 of them at Google, few profiles seem further removed from that of the English artisans who stormed factories and smashed mass-production machines.

Despite his résumé, Jenson feels obliged to apologize in the letter announcing that he has ended his time at Google; in Silicon Valley, criticizing technological progress is no small matter. He did so to warn about the positioning of his former company, and of others such as Apple, in the artificial intelligence race. “They are terrified that someone else will get there first,” he says.

“I left Google last month. The ‘AI projects’ I worked on were poorly motivated and driven by a panic that anything would be great as long as it contained ‘AI’. This myopia is NOT (capitalized in the original) something driven by the needs of the user. It’s the panic of being left behind,” he insists in a post published on his LinkedIn profile this Sunday.

“The same thing happened 13 years ago with Google+ (I was there for that fiasco too). It was a similar hysterical reaction, but to Facebook,” Jenson recalls. Google+ was the multinational’s failed social networking project, canceled in 2019 after eight years in which it never managed to engage users. It is considered one of the company’s biggest failures, along with Google Glass.

Jenson’s departure amid criticism of his former company’s artificial intelligence strategy coincides with the farewells of several executives at OpenAI, the company leading the technology race. In recent days, OpenAI has disbanded the safety team responsible for ensuring that its products respect human rights and that its artificial intelligence develops sustainably.

Jan Leike, the manager who led that team, said that OpenAI did not pay enough attention to the tasks assigned to his department. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” he said on X (formerly Twitter).

“I believe much more of our effort should be spent getting ready for the next generations of models, on safety, monitoring, preparedness, protection, adversarial robustness, (human rights) compliance, confidentiality, societal impact and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he warned.

OpenAI responded through its chief executive, Sam Altman, who said he was “very grateful” to Leike and “very sad” about his departure. “You’re right, we have a lot more to do and we are committed to doing it,” he said, announcing that he would publish a post with more details about OpenAI’s approach to safety.

In OpenAI’s case, however, the disagreements over the company’s direction are plain to see. Just a week before Leike, Ilya Sutskever, one of its co-founders, left the company. Its former chief scientist, Sutskever was the prime mover behind the attempt to oust Sam Altman as CEO in November 2023. The move failed after a staff revolt in Altman’s favor, but the reasons Sutskever gave then are similar to those Leike is expressing now.

Warnings from the companies themselves

Google, contacted by this outlet, declined to take a position on Scott Jenson’s criticism. It did, however, point to some of its own warnings about the new generation of assistants that Google and the big AI companies are preparing. “AI assistants could have a significant social impact, both in how benefits and burdens are distributed within society and in fundamentally changing the way people collaborate and coordinate,” explained a product manager in the multinational’s artificial intelligence division.

“Efforts to adequately understand AI assistants and their impact face an evaluation gap when studied with existing methods. The responsible development and deployment of AI assistants requires further research, policy work and public debate,” the same executive continued.

In any case, this is not the first time Google has faced this type of complaint. The best-known case is that of Ethiopian technologist Timnit Gebru, who led Google’s artificial intelligence ethics team until the company fired her in 2020, after she co-authored a report warning that it was not doing enough to mitigate the risks of its technology. More than 1,200 employees of the multinational and 1,500 independent researchers signed a letter condemning the dismissal.

“The technology we create is supposed to help people, but it feels like we’re trying to take people out of the equation. They are only interested in automating everything and making as much money as possible at minimal cost,” the expert said in an interview with elDiario.es.
