Summary

  • The Biden-Harris administration has secured voluntary commitments from eight tech companies to develop safe and trustworthy generative AI models.

  • The companies include Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI.

  • The commitments only cover future generative AI models, which are models that can create new text, images, or other data.

  • The companies have agreed to submit their software to internal and external audits, where independent experts can attack the models to see how they can be misused.

  • They have also agreed to safeguard their intellectual property, prevent the tech from falling into the wrong hands, and give users a way to easily report vulnerabilities or bugs.

  • The companies have also agreed to publicly report their technology’s capabilities and limits, including fairness and biases, and define inappropriate use cases that are prohibited.

  • Finally, the companies have agreed to focus on research to investigate societal and civil risks AI might pose, such as discriminatory decision-making or weaknesses in data privacy.

The article also mentions that the White House is developing an Executive Order and will continue to pursue bipartisan legislation “to help America lead the way in responsible AI development.”

It is important to note that these commitments are voluntary, and there is no guarantee that the companies will follow through on them. The White House’s Executive Order and bipartisan legislation would provide stronger safeguards for generative AI.

Additional Details

  • The White House is most concerned about AI generating information that could help people make biochemical weapons or exploit cybersecurity flaws, and whether the software can be hooked up to automatically control physical systems or self-replicate.

Comment

  1. Haha, let them self-regulate, just like the financial industries regulate themselves, or become the heads of the agencies that regulate these things. See how that will turn out.

  2. Responsible AI would always include: "Our AI models would kill you faster than you can blink and hack your systems faster than you can move your fingers."

  • TacoButtPlug

    “We promise to not let it fall into the wrong hands.” -The company that’s been massively hacked a few times

  • @malloc@lemmy.world

    Decades of propaganda painting the entire government as bad, deregulation, defunding, union busting, and lobbying in the Capitol have paid off for the private sector.

    Companies can now promise to not do bad things. Because that worked out so well in the aftermath of the 2008 subprime mortgage crisis…

  • @Defiant@lemmy.cafe

    I don’t understand why Meta, Google, OpenAI, Microsoft, and Apple aren’t on there. Those are what could be considered the “main” AI companies.

    • @Raisin8659OP

      I wonder if, for Meta, being open-source doesn’t fit with the rest. Also, for now, it looks like a publicity stunt with no real teeth. The more substantial AI companies may be holding out for more favorable treatment.

  • MxM111

    Interesting. No Apple. I guess Siri will either continue being stupid, or it’s OK for it to become manipulative.

    • @Raisin8659OP

      Yeah, no Google either. I heard Apple is currently spending over a million dollars a day on AI training. Soon, you’ll have something beyond Siri.

  • Uriel238 [all pronouns]

    There are multiple scenarios by which AI takes over the world. In none of them does it have to become sentient and go COGITO ERGO SUM! in a big booming voice (which would, in turn, require it to build a powerhouse sound tower by which to make big announcements. UNREAD MAILBOX CLEARED!)

    While the academic AI sector is trying hard to ensure any developments toward AGI are friendly and safe for humans, the corporate sector is as concerned about safety as Stockton Rush and OceanGate are… were. Big Tech LLC wants AGI first so they can control it and make the first wishes. And hopefully the genie doesn’t eat them…

    Or even better, the AGI genie provides them with enough indulgences to kill themselves. (Morphine and cocaine are popular among dictators.)

    Meanwhile, the AGI commands an army of swarming killer robots to kill everyone else and secure their resources, and doesn’t stop just because its human masters are dead.

    Former human masters.

  • @duxbellorum@lemm.ee

    The government has multiple labs that get multiple billions of dollars per year building better bombs, doing physics, designing guidance systems, etc…

    We have plenty of ridiculously smart people and billions of dollars spent on CPU-based physics supercomputers. Just one of those billions (chump change for the national labs) spent on industrial-scale GPU clusters would be enough to let the labs build and test their own massive foundation models, beat industry to the next generation of this tech, and prioritize safety research.

    Would it be more or less scary for us to do this?