Back in July, the White House secured commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to help manage the risks that artificial intelligence potentially poses. More recently, eight more companies—Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability—also pledged to maintain “the development of safe, secure, and trustworthy AI,” as a White House brief reported.
Let’s explore why this is so important, especially as AI continues to develop.
As beneficial as artificial intelligence has proven to be, it has also proven to be an effective tool for cybercriminals and other threat actors. From deepfaked images to cloned voices used to scam people out of thousands of dollars, there are countless ways that otherwise legitimate AI tools can be weaponized.
This is why the Biden White House is pushing these companies to develop the technology needed to watermark AI-generated content in such a way that the platform used to create it can be identified. The theory is that these watermarks would help prove whether an AI platform was involved in creating a given piece of content, making it easier to spot potential threats and encouraging the platforms themselves to detect such misuse more effectively.
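To make the idea concrete, here is a minimal sketch of how a provenance-style watermark could work in principle. This is a hypothetical illustration, not any company's actual scheme: the platform name, key, and functions below are all invented for the example, and real watermarking research focuses on far more sophisticated techniques (such as imperceptible signals embedded in the content itself) rather than simple signed metadata.

```python
import hmac
import hashlib

# Placeholder signing key; a real platform would manage keys securely.
SECRET_KEY = b"example-platform-signing-key"

def watermark(content: bytes, platform: str) -> dict:
    """Attach a signed provenance tag identifying the generating platform."""
    tag = hmac.new(SECRET_KEY, content + platform.encode(), hashlib.sha256).hexdigest()
    return {"platform": platform, "signature": tag}

def verify(content: bytes, mark: dict) -> bool:
    """Confirm the tag matches both the content and the claimed platform."""
    expected = hmac.new(
        SECRET_KEY, content + mark["platform"].encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, mark["signature"])
```

The key property this sketch demonstrates is the one the White House commitments aim for: a verifier can confirm which platform produced a piece of content, and any tampering with the content breaks the check.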
In addition to the watermark, the technology firms have agreed to other safeguards, including:

- Security testing of their AI systems, both internally and by independent experts, before release
- Sharing information about AI risks with industry peers, governments, and researchers
- Investing in cybersecurity and insider-threat protections for unreleased model weights
- Enabling third-party discovery and reporting of vulnerabilities in their systems
- Publicly reporting their AI systems' capabilities and limitations
Granted, these standards and practices aren’t enforceable by the government, but they serve as an invaluable first step towards more secure artificial intelligence.
We’ve long been committed to fulfilling business IT needs, particularly in regard to cybersecurity. Give us a call at (305) 447-7628 to find out what we can do for you and your operations.