Summary
- The Biden-Harris administration has secured voluntary commitments from eight tech companies to develop safe and trustworthy generative AI models.
- The companies include Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI.
- The commitments only cover future generative AI models, i.e., models that can create new text, images, or other data.
- The companies have agreed to submit their software to internal and external audits, in which independent experts can attack the models to see how they can be misused.
- They have also agreed to safeguard their intellectual property, prevent the tech from falling into the wrong hands, and give users a way to easily report vulnerabilities or bugs.
- The companies have also agreed to publicly report their technology’s capabilities and limits, including fairness and biases, and to define inappropriate use cases that are prohibited.
- Finally, the companies have agreed to focus on research investigating the societal and civil risks AI might pose, such as discriminatory decision-making or weaknesses in data privacy.
The article also mentions that the White House is developing an Executive Order and will continue to pursue bipartisan legislation “to help America lead the way in responsible AI development.”
It is important to note that these commitments are voluntary, and there is no guarantee that the companies will follow through on them. The White House’s Executive Order and bipartisan legislation would provide stronger safeguards for generative AI.
Additional Details
- The White House is most concerned about AI generating information that could help people make biochemical weapons or exploit cybersecurity flaws, and whether the software can be hooked up to automatically control physical systems or self-replicate.
Comment
- Haha, let them self-regulate, just like the financial industry regulates itself, or how its insiders became the heads of the agencies that regulate these things. See how that will turn out.
- Responsible AI disclosure would always include: “Our AI models would kill you faster than you can blink and hack your systems faster than you can move your fingers.”
“We promise to not let it fall into the wrong hands.” -The company that’s been massively hacked a few times
Which one lol?
Decades of propaganda painting the entire government as bad, deregulation, defunding, union busting, and lobbying in the Capitol have paid off for the private sector.
Companies can now promise to not do bad things. Because that worked out so well in the aftermath of the 2008 subprime mortgage crisis…
I was looking at this title for too long wondering who big tech pinky is
Big Tech Pinky and the Brain
🧠 🧠 🧠 🧠.
🧠 🧠 🧠 🧠.
🧠 🧠 🧠 🧠.
🧠 🧠… 🧠 🧠.
I don’t understand why Meta, Google, OpenAI, Microsoft, and Apple aren’t on there. Those are what could be considered the “main” AI companies.
I wonder if, for Meta, being open-source means it wouldn’t fit in with the rest of the companies. Also, for now, it looks like a publicity stunt with no real teeth. The more substantial AI companies may be holding out for more favorable treatment.
I’d believe them if companies didn’t turn their back on their “don’t be evil” policies.
I mean, it was only ever Google with that motto, and they ditched it pretty quickly.
Maybe you regulate them.
Yeah. If only.
In a just world
Oh yes, that will make us all safer. There is no way to know how to do nefarious things except by using AI. This new regulation will save us all.
https://duckduckgo.com/?t=ffab&q=how+to+make+chlorine+gas+&ia=web
Interesting. No Apple. I guess Siri will either continue being stupid, or it’s OK for it to become manipulative.
Yeah, no Google either. I heard Apple is currently spending over a million dollars a day on AI training. Soon, you’ll have something beyond Siri.
Prevent it from falling into the wrong hands? Palantir is the wrong hands, some of the worst you can get.
There are multiple scenarios by which AI takes over the world. In none of them does it have to get sentient and go COGITO ERGO SUM! in a big booming voice (which would, in turn, require it to build a powerhouse sound tower by which to make big announcements. UNREAD MAILBOX CLEARED!).
While the academic AI sector is trying hard to assure any developments towards AGI are friendly and safe to humans, the corporate sector is as concerned about safety as Stockton Rush and Oceangate are… were. Big Tech LLC wants AGI first so they can control it and make the first wishes. And hopefully the genie doesn’t eat them…
Or even better, the AGI genie provides them with enough indulgences to kill themselves. (Morphine and cocaine are popular among dictators.)
Meanwhile, the AGI commands an army of swarming killer robots to kill everyone else and secure its resources, and it doesn’t stop just because its human masters are dead.
Former human masters.
Make bad decisions and blame it on AI. Make good decisions and blame it on CEO.
Meanwhile, back in China…
The government has multiple labs that get multiple billions of dollars per year building better bombs, doing physics, designing guidance systems, etc…
We have plenty of ridiculously smart people, and billions of dollars are spent on CPU-based physics supercomputers; just one of those billions (chump change for the national labs) spent on industrial-scale GPU clusters would be enough to let the labs build and test their own massive foundation models, beat industry to the next generation of this tech, and prioritize safety research.
Would it be more or less scary for us to do this?