The harms of AI: suicide, profiting from strangers’ deaths, obscene ‘deepfakes’ and faulty products

AI harms
Olena Zelenska, wife of the President of Ukraine and victim of a hoax about a supposed Bugatti purchase created and spread with the help of AI, on an official visit to London last May. Dylan Martínez (El País)

Sewell Setzer, 14, died by suicide last February after falling in love with a character created by artificial intelligence on the Character.AI platform, according to the lawsuit the teenager’s family has filed against the company. The late Paul Mohney never saw combat, nor did Olena Zelenska, wife of the President of Ukraine, buy a Bugatti Tourbillon; but false information created with the help of artificial intelligence (AI) has been spread through advertising and obituary pages with the aim of making easy money or generating Russian propaganda. Because of a bias in the system’s artificial intelligence, a school cleaner from Edinburgh, head of a single-parent family with two children, was left without benefits, like many other women in her circumstances. A customer of a payments platform was flagged by an algorithm for a transaction that never existed, a lawsuit questions the safety of a vehicle over an alleged programming error, and thousands of users have seen their data used without consent. People stand at the end of the artificial intelligence chain, but responsibility for the harm it causes them is not fully defined. “Here we find a worrying legislative void,” warns Cecilia Danesi, co-director of the master’s degree in Ethical Governance of AI (UPSA) and author of Consumer Rights at the Crossroads of Artificial Intelligence (Dickinson, 2024).

Making money from strangers’ deaths

With artificial intelligence it is easier and cheaper, even if it means spreading lies that compound the grief of the deceased’s relatives. It happens on obituary pages, where AI generates information about the deceased from real or invented data, such as Mohney’s military record, to capture traffic and, with it, advertising revenue. “There’s a whole new strategy that relies on getting information about a dead person, seeing if there’s a slight increase in traffic, even if it’s only in a particular area, and capturing those trickles of traffic by immediately publishing articles about the person,” search engine expert Chris Silver Smith explains to Fast Company.

Misinformation and obscene deepfakes

The AI Incident Database collects dozens of alerts every month about incidents or cases of abuse involving artificial intelligence, and has already logged more than 800 reports. Its latest records include false information about the attempted assassination of Donald Trump, about Kamala Harris, the Democratic candidate for President of the United States, and realistic fake pornographic images (deepfakes) of British politicians. According to the European Tech Insights 2024 survey, developed by the Center for the Governance of Change (CGC) at IE University, fear of the impact of these creations, and of their viral reach, on democratic processes is growing, and 31% of Europeans believe that AI has already influenced their voting decisions.

“There is growing concern among citizens about the role of AI in elections. And, although there is still no clear evidence that it has substantially altered any results, the emergence of AI has amplified concerns about misinformation and deepfakes around the world,” says Carlos Luca de Tena, executive director of the CGC.

“When a fake video or image is created with generative AI, it is clear that AI is a medium, a tool, so responsibility falls on the person who created it. The big problem is that, in most cases, that person is impossible to identify. Porn fakes (fake images with pornographic content), for example, have a direct impact on the gender gap, because the platforms encourage their use with images of women. By having more photos of this kind, they become more accurate with our bodies, and the result is further marginalization and stigmatization of women. That is why, in the age of misinformation and cancel culture, education is of the utmost importance: as users, we must double-check every piece of content we see and, above all, verify it before interacting with it,” explains Danesi.

The researcher, a member of UNESCO’s Women for Ethical AI platform and co-author of a report on algorithmic audits presented at the G20 in Brazil, says of the effects of misinformation: “An algorithm can play a dual role: on the one hand, the generative AI that creates fake news; on the other, the search engine and social media algorithms that make false content go viral. In this second case, it is clear that we cannot require platforms to verify every piece of content that is published. It is physically impossible.”

Automated discrimination

One of the complaints about AI malfunction concerns the bias that disadvantages single-parent families (90% of them headed by women) in the Scottish benefits system. “While the AI Regulation contains various provisions to avoid bias (particularly the requirements that high-risk systems must meet), its failure to regulate matters of civil liability does nothing to make it easier for victims to obtain compensation. The same happens with the Digital Services Act, which imposes certain transparency obligations on digital platforms,” explains Danesi.

Defective products

The AI Incident Database includes a court case opened over a potential flaw in a vehicle’s programming that could affect safety. On this point, the researcher notes: “As for the reform of the Defective Products Directive, it too remains half done. The problem lies in the type of damages that can be claimed under the rule, since it does not include, for example, moral damages. Attacks on privacy and discrimination are excluded from the Directive’s protection.”

In Danesi’s opinion, these cases show that civil liability is one of the areas of law most urgently in need of attention with the advent of AI, “because consumers are extremely exposed to the harm it can cause. If we do not have clear rules to address these harms, people are left unprotected. In addition, clear rules on civil liability provide legal certainty, encourage innovation and, in the event of a harmful incident, encourage settlement. Because companies know the rules of the game on liability in advance, they can decide with greater certainty where and in what to invest, and in what scenario they will operate,” she argues.

According to the researcher, there are initiatives at the European level that try to address the problem, such as the Artificial Intelligence Regulation, the Digital Services Act (which establishes measures limiting the influence of algorithms on digital platforms, social networks, search engines and so on), the proposed AI Liability Directive and the imminent reform of the Defective Products Directive.

“That directive had become obsolete. There was also debate over whether or not it applied to AI systems, since its definition of a product was tied to something physical, not digital. The reform broadens the concept of a product to include digital manufacturing files and computer programs. The focus of the rule is on the safety of the person, so it becomes irrelevant whether the damage is caused by a physical or a digital product,” she explains.
