
The Bureau of Industry and Security (BIS) at the US Department of Commerce is proposing new regulations that would require developers of advanced AI models and cloud computing providers to report on their development activities, cybersecurity measures, and results from red-teaming tests. The reporting is aimed at assessing the risk that AI systems could be used to aid cyberattacks or to enable non-experts to create chemical, biological, radiological, or nuclear weapons.
“This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security,” Gina M. Raimondo, secretary of commerce, said in a statement.
The proposed regulations follow a pilot survey carried out by the BIS earlier this year and coincide with global efforts to regulate AI.
After the EU’s landmark AI Act, countries such as Australia have introduced their own proposals to oversee AI development and usage. For enterprises, these regulations could raise costs and slow operations.
“Enterprises will need to invest in additional resources to meet the new compliance requirements, such as expanding compliance workforces, implementing new reporting systems, and possibly undergoing regular audits,” said Charlie Dai, VP and principal analyst at Forrester.
From an operational standpoint, companies may need to modify their processes to gather and report the required data, potentially leading to changes in AI governance, data management practices, cybersecurity measures, and internal reporting protocols, Dai added.
How far BIS will go in acting on the reporting remains uncertain, but the agency has previously played a key role in keeping vulnerable software out of the US and in restricting exports of critical semiconductor hardware, according to Suseel Menon, practice director at Everest Group.
“Determining the impact of such reporting will take time and further clarity on the extent of reporting required,” Menon said. “But given most large enterprises are still in the early stages of implementing AI into their operations and products, the effects in the near to mid-term are minimal.”
The tech industry has also pushed back against state-level efforts such as California’s SB 1047 AI safety bill, with over 74% of companies expressing opposition. Major firms such as Google and Meta have raised concerns that the bill could create a restrictive regulatory environment and stifle AI innovation.
Innovation in most sectors tends to move inversely with regulatory complexity, Menon added: high regulatory barriers stifle innovation, which is also why the US has historically favored lighter-touch regulation than the EU.
“Complex regulations could also draw innovative projects and talent away from certain regions with the emergence of ‘AI havens,’” Menon said. “Much like tax havens, these could draw important economic activity into countries more willing to experiment.”