Microsoft advocates AI rules to reduce risk

Microsoft endorsed a raft of regulations for artificial intelligence on Thursday, as the company addresses concerns from governments around the world about the dangers of the rapidly evolving technology.

Microsoft, which has promised to bring artificial intelligence into many of its products, proposed regulations including a requirement that systems used in critical infrastructure can be fully shut off or slowed down, similar to an emergency braking system on a train. The company also called for laws clarifying when additional legal obligations apply to an AI system, and for labels making clear when an image or video was generated by a computer.

“Companies have to step up,” Microsoft president Brad Smith said in an interview about lobbying for regulations. “The government needs to move faster.” He presented the proposals to an audience that included lawmakers at an event in downtown Washington on Thursday morning.

The call for regulation comes amid an AI boom, with the launch of the ChatGPT chatbot in November generating a flurry of interest. Companies including Microsoft and Alphabet, Google’s parent company, have raced ever since to integrate the technology into their products. That has raised concerns that companies are sacrificing safety to reach the next big thing before their competitors.

Lawmakers have publicly expressed concerns that these AI products, which can generate text and images on their own, will lead to an avalanche of disinformation, be used by criminals, and put people out of work. Washington regulators have vowed to be vigilant about fraudsters who use artificial intelligence and cases where systems perpetuate discrimination or make decisions that break the law.

In response to this scrutiny, AI developers have increasingly called for shifting some of the burden of policing the technology to the government. Sam Altman, CEO of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government should regulate the technology.

The maneuver echoes earlier calls for privacy and social media laws from internet companies such as Google and Meta, Facebook’s parent. In the US, lawmakers have moved slowly after such calls, enacting few new federal rules on privacy or social media in recent years.

In the interview, Mr. Smith said Microsoft was not trying to shed responsibility for managing the new technology, noting that it was offering specific ideas and had pledged to carry out some of them regardless of whether the government takes action.

“There is not an iota of abdication of responsibility,” he said.

The idea echoed one Mr. Altman advanced during his congressional testimony, when he argued that a government agency should require companies to obtain licenses to deploy “highly capable” AI models.

“This means that you notify the government when testing begins,” Smith said. “You have to share the results with the government. Even when it is authorized for publication, it is your duty to continue to monitor it and report to the government if unexpected problems arise.”

Microsoft, whose cloud computing business generated more than $22 billion in revenue in the first quarter, also said these high-risk systems should be allowed to operate only in “licensed AI data centers.” Mr. Smith acknowledged that the company would not be “badly positioned” to offer such services, but said several US competitors could provide them as well.

Microsoft added that governments should classify certain AI systems used in critical infrastructure as “high risk” and require them to have “safety brakes.” The company compared the feature to “braking systems engineers have long built into other technologies like elevators, school buses, and high-speed trains.”

Microsoft said that in some sensitive cases, companies that provide AI systems should be required to know certain information about their customers. To protect consumers from deception, the company said, AI-generated content should be required to carry a special label.

Mr. Smith said companies should bear legal “responsibility” for harms related to AI. In some cases, he said, the liable party could be the developer of an application like Microsoft’s Bing search engine that uses someone else’s underlying AI technology. He added that cloud computing companies could be responsible for complying with security regulations and other rules.

“We don’t necessarily have the best information or the best answer, or we may not be the most credible spokespeople,” Smith said. “But, you know, right now, especially in Washington, D.C., people are looking for ideas.”
