This is a cache of https://www.nbcnews.com/tech/tech-news/meta-study-says-hosted-forum-shifted-opinion-ai-rcna146246. It is a snapshot of the page at 2024-04-04T01:00:39.176+0000.

Meta study says it hosted special event that shifted opinion on AI, and that it's planning more

The company said that after the forum, over 50 percent of participants thought AI had a positive impact.
A Meta Platforms event in Mumbai on Sept. 20, 2023. Niharika Kulkarni / NurPhoto via Getty Images file

Meta says that participants in a forum it hosted on artificial intelligence came away with a more positive opinion of AI's potential impact, and that it intends to hold more such forums.

The announcement was part of a study published by Meta and Stanford University on a forum in which participants received information about AI from “experts, academics and other stakeholders” and discussed AI chatbot policy proposals.

Results from the October 2023 Meta Community Forum, presented by Stanford's Deliberative Democracy Lab, showed that before the forum, 49.8% of 393 American participants, a slight minority, thought AI had “a positive impact.” After participating in the forum, 54.4% of participants thought AI had a positive impact: a gain of 4.6 percentage points.

The forum also included participants from Brazil, Germany and Spain. Participants from those countries came into the forum with larger majorities already holding strongly positive views of AI, and those views only grew during the forum. Slightly more participants from the other countries than from the U.S. had already used ChatGPT or similar chatbots. 

Meta, which owns Facebook, Instagram and WhatsApp, has already released generative AI products like Imagine, which can produce images from text prompts. It has also flooded its own services with Meta AI chatbots, which are integrated into some of its apps. Some even feature the faces of celebrities, like TikTok influencer Charli D’Amelio.

Meta is also familiar with the dark side of generative AI. In March, NBC News found hundreds of ads had run on Meta’s platforms for an AI-powered deepfake app that promised the ability to “undress” pictures of women and girls. One of the ads featured a picture of actress Jenna Ortega taken when she was 16.

The ads ran during a period when middle and high school students across the U.S. were found making sexually explicit deepfake images of their classmates. Just a year earlier, NBC News found hundreds of ads had run on Meta for a similar deepfake app that featured fake sexually suggestive videos of actress Emma Watson. Meta suspended both deepfake apps from advertising after NBC News reached out.

Meta’s AI forum with Stanford focused on how generative AI tools, such as chatbots, should engage with users as the technology becomes more powerful. Thirty-eight policy proposals were discussed in small groups of participants, who formulated questions for experts. The information briefs, including lists of pros and cons, and the experts were chosen by a “distinguished Advisory Committee.” The members of the committee and the experts they chose were not named. 

Over the course of the forum, participants deliberated over questions like whether AI chatbots should be able to form romantic relationships with humans, what sources of information chatbots should rely on, whether chatbots should be “human-like,” whether they should be allowed to be “offensive,” and how transparent chatbots should be with users about their artificial nature. 

In their conclusions, Meta and Stanford found that participants “maintained concerns over AI bias, misinformation, and potential human rights violations.” Participants also wanted the ability to control chatbots’ access to their data and were skeptical about chatbots replacing human interaction. They wanted tech companies to prioritize user privacy and data security and to be transparent about how user data is used.