AI Briefing: In a tech twist, teens say their parents know more about AI than they do – at least for now

A lot of the AI discussion this year has focused on what businesses, regulators, and researchers think. But what do parents and children think?

To understand how families are grappling with generative AI, Kantar and researchers at the Family Online Safety Institute set out to explore how “habits, hopes, and fears” differ across the US, Germany, and Japan. The findings, released last week, showed that most parents feel positive about their teens using generative AI, even if they are still concerned about the risks.

Overall, Germany had the highest share of positive sentiment (70%), followed by the US (66%) and Japan (59%). Japan had the highest share of negative attitudes, with 38% expressing a negative view, compared to 29% in the US and 27% in Germany. To address concerns, parents in all three countries cited data transparency as a top priority – something also mentioned by teens in the US and Germany.

When Kantar asked how they had tried GenAI tools, most parents and teens said they used them for analytical tasks. However, a higher percentage of parents in all three countries reported using them for creative tasks, while teens were more likely to use them for “efficiency-enhancing” tasks such as grammar checking. Both parents and teens said they were worried about the possibility of job loss and misinformation, and teens also expressed concerns about AI being used to create new forms of harassment and cyberbullying.

One of the most surprising findings in Kantar’s report: Teens said their parents knew more about AI than they did. That is especially notable after nearly two decades of the social media era, when children adopted new platforms faster than their parents. However, that may be because parents are encountering AI in work settings in a way they never did with social media, said Cara Sundby, senior director of Kantar’s Futures practice. Another possibility is that parents have been through enough digital evolution to become more adept at “responsible education”.

“Parents are also using it for themselves in a way that they haven’t been on TikTok and Snap,” Sundby said. “It’s more laborious.”

Despite the optimism in Kantar’s findings, a separate survey conducted in the US and UK by Braze found that almost half of consumers are concerned that brands will not use their data responsibly, with only 16% saying they are “completely confident” or “very confident”. Only 29% said they were comfortable with brands using AI to personalize their experiences, while 38% said they were not comfortable and 33% were unsure.

Ashley Casovan, managing director of the AI Governance Center at the International Association of Privacy Professionals, said it is important to understand how communities use AI systems and how they are affected by them. In her former role leading the Responsible AI Institute, Casovan also helped conduct a survey of Canadians to see what they thought about AI, and found that 71% of respondents thought AI should be developed in consultation with “everyday people”, while 82% thought it was important to incorporate ethics to minimize harm. (Casovan also spent several years working for the Canadian government, where she helped develop its first national policy around government use of AI.)

“There is a really strong need to understand the context of how these AI systems are being used,” Casovan said. “There needs to be literacy at the end user level, which I think you can generally say is the public, although it may stop before that.”

AI News:

  • It was a unique weekend for OpenAI. On Friday, co-founder Sam Altman was removed as CEO and from the board, while CTO Mira Murati was named interim CEO. In a blog post about the leadership change, OpenAI also said that co-founder and president Greg Brockman would step down from his role as board chairman but remain with the company. However, hours later, Brockman announced he was quitting the company. Since then, various rumors and reports have emerged about what caused the sudden change and what may happen next. In its blog post, OpenAI said Altman’s departure “follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”
  • Major tech companies including Google, Meta, IBM and Shutterstock are adding new AI-related policies and tools to build trust, avoid risks and improve legal compliance.
  • A new bipartisan bill in the US Senate aims to bring greater accountability and transparency to AI while fostering innovation. The bill, called the Artificial Intelligence (AI) Research, Innovation, and Accountability Act, was filed last week with three Republican cosponsors along with three Democrats. Meanwhile, the FTC announced a new “voice cloning challenge” to raise awareness of the potential risks posed by AI.
  • Major publishers including News Corp and IAC said they were deeply disappointed by generative AI companies using their content without paying for it.
  • A key executive at Stability AI resigned over copyright concerns related to the way the popular AI startup trains its models. In an op-ed about his departure, Ed Newton-Rex, who led Stability’s audio efforts, said he felt the company had “exploited” copyrighted material by using it without permission. (Stability AI has been the target of several copyright lawsuits, including one from a group of artists and another from Getty Images.)
  • Generative AI continues to appear in quarterly earnings reports from various companies. Last week, Getty Images, Visa, Chegg, Alibaba and others all mentioned GenAI in their results presentations. Generative AI partially contributed to a 35% increase in gross profit for Tencent’s online advertising business, the company said, citing “increasing demand for video ads and the innovative application of generative AI tools in creating compelling ad visuals.”

Prompts and Products:

  • Microsoft announced several AI updates during its Ignite event last week. Along with rebranding Bing Chat and Bing Chat Enterprise as Copilot, which will be generally available on December 1, Microsoft announced other updates including new support for OpenAI’s GPTs, enhanced commercial data protections, more plugins, and Copilot Studio, a new tool to help people build their own standalone low-code copilots. Microsoft also released a new report detailing how people are using Copilot for creativity and productivity. Finally, it announced Baidu as the latest partner for its Chat Ads API.
  • Google is experimenting with new music-related generative AI tools, including a way for users to create original music clips for YouTube Shorts generated from text-based prompts. In a preview released last week, the company showed how it’s working with major artists, including Charlie Puth, Demi Lovato, T-Pain, John Legend and Sia, who are providing their music for the project.
  • IBM announced a new governance tool for its watsonx portfolio called watsonx.governance, which will help AI customers detect AI risks, predict potential future concerns, and monitor things like bias, accuracy, fairness, and privacy.
  • Ally Financial announced early results of a generative AI experiment using the company’s large language models. According to Ally, using the tool reduced campaign production time by 2-3 weeks, while others saw 34% time savings on various tasks.
  • More agencies are striking new deals as a way to take advantage of the generative AI boom. Last week, Omnicom announced a new collaboration with Getty Images to integrate Getty’s new AI image generator into Omnicom’s Omni data orchestration platform. Meanwhile, Stagwell announced a new partnership with Google to incorporate the tech giant’s AI into the Stagwell Marketing Cloud.

Other stories from Digiday:

  • “Why the NFL released an AI-powered game with Amazon”

