Leading Tech Companies Sign White House Pledge to Develop Responsible AI Testing Practices: Our Experts React
The White House recently announced that seven major AI companies have voluntarily agreed to subject new AI systems to external testing. While some have called the move a step in the right direction, the question remains: Does it go far enough toward achieving Responsible AI? We asked our leaders, faculty members, and researchers for their thoughts.
“While I think watermarking AI-generated content is a good step, it would be great to figure out how to make it more effective. In addition to tagging AI-generated content, I would have liked to see acknowledgement of other key issues. For example, who is responsible for AI-generated information? How do we move toward positioning AI outputs as ‘recommendations’ on which humans make the final decision? Given that these are only seven companies from a much larger space, we need broader representation and more thought-through pledges, and, better yet, requirements.”
"Nobody actually knows how to do the external security testing demanded by the White House, as the risks are about more than just cybersecurity. Testing should also be able to account for ethical concerns, intellectual property rights, competency, and a whole lot more."
“While a step in the right direction toward Responsible AI, this is a wasted opportunity. Companies could easily have been asked to commit to citizen participation and engagement in the development of their platforms. If we genuinely want to create AI that is aligned with public values, then we need to use AI to engage diverse communities in its development. Alas, the commitment to ‘deploy advanced AI systems to help address society’s greatest challenges’ does not include strengthening democracy and improving how we govern.”
“There’s probably no harm in asking tech firms to identify AI-generated images through ‘watermarking,’ but I remain suspicious of government efforts to ‘regulate’ AI. I cannot fathom the idea of regulating cognitive science or the philosophy of mind, for example, so I cannot comprehend what regulating AI, which is also a science, means. Leave science alone, is my recommendation.”
“It is good to see the White House acknowledging the rapid progression of AI and the necessity for governmental involvement. However, this agreement appears to merely endorse the existing initiatives of these tech companies. Some of the issues fundamental to advancing Responsible AI and reducing bias, such as disclosing information about training data, are conspicuously absent from these promises. The voluntary nature of these promises, coupled with the lack of any mechanism to ensure their fulfillment, further compounds the problem.”
“While I welcome the companies’ commitments to external testing and data sharing with academics, we’ll have to wait and see whether they follow through in a meaningful way. I am puzzled by this ‘watermarking’ idea; it never worked as a deterrent to copyright infringement, so I don’t see how it will work now, under even more challenging circumstances.”
“It is critical that we build capacity—outside of industry—that can track developments in AI and audit what companies are doing so that we can create informed policy and hold those companies accountable.”
“I am happy to see that the White House and Congress are taking these issues seriously, but my worry is that the companies that are voluntarily participating are the ones that will decide what ‘safe’ means for their models. ‘Safety’ often refers to an unlikely, far-off, Skynet-style doomsday scenario. This can obscure real harms that are possible here and now, like poor performance for marginalized groups or the generation of misinformation and hate speech at scale. The only real way to set the correct priorities is through active community participation and feedback.”
“Self-regulation is not likely to work when there are billions at stake. We don’t want the AI equivalent of ‘Unsafe at Any Speed.’ We have the FDA, the National Highway Traffic Safety Administration (NHTSA), the Consumer Financial Protection Bureau (CFPB), and other agencies making things safer for users. Similarly, we need the right regulatory structure to promote sensible innovation and growth in AI applications without interfering in the science, while protecting the public interest through appropriate incentives and meaningful penalties.”