On a late night in December, Shane Jones, an artificial intelligence engineer at Microsoft, felt sickened by the images popping up on his computer. Jones was noodling with Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI's technology. As with OpenAI's DALL-E, users enter text prompts to create pictures. Creativity is encouraged to run wild.

Since the month prior, Jones had been actively testing the product for vulnerabilities, a practice known as red-teaming. In that time, he saw the tool generate images that ran far afoul of Microsoft's oft-cited responsible AI principles. The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator.

Jones was so alarmed by his experience that he started internally reporting his findings in December. While the company acknowledged his concerns, it was unwilling to take the product off the market.

"Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place," Jones wrote in his letter to FTC Chair Lina Khan. He added that, since Microsoft has "refused that recommendation," he is calling on the company to add disclosures to the product and change the rating on Google's Android app to make clear that it's only for mature audiences.

"Again, they have failed to implement these changes and continue to market the product to 'Anyone. Anywhere. Any Device,'" he wrote. Jones said the risk "has been known by Microsoft and OpenAI prior to the public release of the AI model last October."

https://www.cnbc.com/2024/03/06/mic...pilot-designer-creates-disturbing-images.html