Microsoft Corp. increased the number of employees on its artificial intelligence product safety team from 350 to 400 in the previous year.
In its first annual AI transparency report, released on Wednesday, the firm stated that almost half of the team dedicates their full time to the effort, outlining steps to guarantee the responsible rollout of its services. Along with current staff members, fresh hires have also joined the team.
Microsoft disbanded its Ethics and Society team last year as part of larger technology sector cutbacks that destroyed safety and trust departments at other businesses, including Google and Meta Platforms Inc.
Microsoft is eager to increase confidence in these tools due to growing worries about its generative AI technologies’ propensity to produce bizarre content. The business looked into instances in February using its Copilot chatbot, whose answers varied from strange to dangerous.
The Federal Trade Commission, the board, and legislators received emails from a Microsoft software developer the following month alerting them to the company’s lack of efforts to prevent violent and abusive images from being created using Copilot Designer, an AI picture production tool.
The National Institute for Standards and Technology developed a methodology that serves as the foundation for Microsoft’s safe AI deployment strategy.
As per an executive order signed by President Joe Biden last year, the agency, a division of the Department of Commerce, was responsible for developing standards for the new technology.
Microsoft stated in its first report that it has released 30 responsible AI capabilities, some of which make it more difficult for users to manipulate AI chatbots into behaving strangely.
The “prompt shields” developed by the business are intended to identify and prevent intentional attempts, commonly referred to as prompt injection assaults or jailbreaks, to cause an AI model to act in an unexpected manner.
Also Read: Microsoft and Coca-Cola Sign $1.1B AI Deal