'Social Order Could Collapse' in AI Era, Two Top Japan Companies Say (wsj.com)

Japan's largest telecommunications company and the country's biggest newspaper called for speedy legislation to restrain generative AI, saying democracy and social order could collapse if AI is left unchecked. From a report: Nippon Telegraph and Telephone, or NTT, and Yomiuri Shimbun Group Holdings made the proposal in an AI manifesto to be released Monday. Combined with a law passed in March by the European Parliament restricting some uses of AI, the manifesto points to rising concern among American allies about the AI programs U.S.-based companies have been at the forefront of developing.

The Japanese companies' manifesto, while pointing to the potential benefits of generative AI in improving productivity, took a generally skeptical view of the technology. Without giving specifics, it said AI tools have already begun to damage human dignity because the tools are sometimes designed to seize users' attention without regard to morals or accuracy. Unless AI is restrained, "in the worst-case scenario, democracy and social order could collapse, resulting in wars," the manifesto said. It said Japan should take measures immediately in response, including laws to protect elections and national security from abuse of generative AI.

Comments Filter:
  • like reading manga all day and watching Godzilla appear from the sea at sunrise?

  • Seems clear to me that we cannot wait out the development of "AI". But I am not sure what really needs to be done ...

    One way would be to identify obviously bad "behaviors" and regulate (or temporarily ban??) them. Examples: running AI systems to make decisions that are not checked by humans and/or that do not follow established rules yet greatly impact human (or animal) life (say, health care decisions, financial decisions, hiring decisions, AI in warfare). Especially societies that are overly sympathetic (o

  • "the tools are sometimes designed to seize users' attention without regard to morals or accuracy"

    That describes pretty much all of social media. AI just makes it worse. I don't see how anyone could hope to contain this. AI can create human-like accounts, post human-like content and generate images that are certainly good enough to fool 95% of the people out there. All of the nefarious shit that we've had with social media is now just getting amplified.
