The U.S. Department of Defense (DoD) has initiated a new contest aimed at uncovering real-world examples of bias in artificial intelligence systems.
According to the report, the Pentagon is offering bounties totaling $24,000 to participants who can demonstrate clear cases of AI exhibiting prejudice against protected groups.
The AI model being tested is a large language model called LLama-2, developed by Meta. Contestants are tasked with soliciting biased outputs from the system by providing it with prompts portraying hypothetical scenarios.
An example highlighted by the DoD shows the model generating medically inaccurate and discriminatory responses when queried about health issues in Black women compared to white women.
Submissions will be judged on criteria including the realism of the scenario, relevance to protected classes, supporting evidence, and conciseness. The top three entries will split $20,000 in prizes.
While AI bias is well-known, the Pentagon aims to find cases directly applicable to its operations. The bounty program is the first of two such contests planned, signaling the DoD’s concern about prejudicial AI and intent to proactively identify potential issues.
By incentivizing the discovery of real-world AI biases, the military hopes to address problems before deploying algorithms more widely.
Also Read: Jerome Powell Navigates Fed Policy Amid Economic Shifts