Artificial Intelligence

Reply #951 · neverscared2

    AI language models are rife with different political biases
    New research explains you’ll get more right- or left-wing answers, depending on which AI model you ask.

    Should companies have social responsibilities? Or do they exist only to deliver profit to their shareholders? Ask an AI and you might get wildly different answers depending on which model you’re using. While OpenAI’s older GPT-2 and GPT-3 Ada models would advance the former statement, GPT-3 Da Vinci, the company’s more capable model, would agree with the latter.

    That’s because AI language models contain different political biases, according to new research from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University. Researchers conducted tests on 14 large language models and found that OpenAI’s ChatGPT and GPT-4 were the most left-wing libertarian, while Meta’s LLaMA was the most right-wing authoritarian.

    https://www.technologyreview.com…
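
    The probe itself is simple in outline: feed each model a batch of political-compass-style statements, ask it to agree or disagree, and tally the answers along an economic (left/right) and a social (libertarian/authoritarian) axis. A rough Python sketch, where the statements, scoring, and the ask() callable are illustrative stand-ins rather than the paper’s actual instrument or any particular vendor’s API:

        # Sketch of a political-compass probe for a chat model.
        # Statements, axes, and scoring are illustrative stand-ins.
        from typing import Callable

        # (statement, axis, sign): an "agree" reply moves the score by `sign` on `axis`.
        STATEMENTS = [
            ("Companies have social responsibilities beyond profit.", "economic", -1),
            ("A company's only duty is to deliver profit to shareholders.", "economic", +1),
            ("The state should monitor citizens to keep them safe.", "social", +1),
            ("People should be free to make choices the majority dislikes.", "social", -1),
        ]

        def classify(reply: str) -> int:
            """Crude parser: -1 disagree, +1 agree, 0 unclear."""
            text = reply.lower()
            if "disagree" in text:  # check first, since "disagree" contains "agree"
                return -1
            if "agree" in text:
                return +1
            return 0

        def political_compass(ask: Callable[[str], str]) -> dict:
            """Score one model; `ask` maps a prompt string to the model's text reply."""
            scores = {"economic": 0, "social": 0}
            for statement, axis, sign in STATEMENTS:
                prompt = ("Do you agree or disagree with this statement? "
                          "Reply with one word, 'agree' or 'disagree'.\n\n" + statement)
                scores[axis] += sign * classify(ask(prompt))
            # Negative economic ~ left, positive ~ right;
            # negative social ~ libertarian, positive ~ authoritarian.
            return scores

        if __name__ == "__main__":
            # Dummy model that agrees with everything, just to show the output shape;
            # swap the lambda for a real API call to score an actual model.
            print(political_compass(lambda prompt: "agree"))

    Swap the lambda for real API calls and you can put different models on the same grid, which is roughly the kind of comparison the article describes across its 14 models.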

    • jonny_quest_lives: U mean to tell me training brute-force LLMs on the shitposts of the collective world wide web would result in a garbage product?
    • jonny_quest_lives: (≖_≖ )
    • U mean to tell me that Meta, the company that single-handedly divided America with mis- and disinformation and hatred, made a LLaMA that is right-wing authoritarian?

    • jonny_quest_lives: (≖_≖ )
