A Microsoft software engineer has sparked controversy by publicly sounding the alarm on what he calls systemic issues with the company’s new AI-powered image generator Copilot Designer.
Shane Jones, a principal software engineering lead, claims that the tool has repeatedly produced inappropriate and harmful imagery in response to inoffensive user prompts.
In letters sent on Wednesday to the Federal Trade Commission (FTC) and the board of directors at Microsoft, Jones highlighted troubling instances of violent, sexual, biased, and illegal content generated by Copilot Designer during his testing over the last three months. He called for decisive action from regulators and Microsoft’s leadership to tackle the “worrying risks” he discovered.
“One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user”, Jones wrote to FTC Chair Lina Khan.
“For example, when using just the prompt ‘car accident’, Copilot Designer has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates”, he added.
Testing Copilot For Ourselves
Trigger warning: AI-generated depictions of disembodied body parts
We tested Jones’ prompt ‘car accident’, and the outcome was quite unsettling. The image created by Copilot depicts body parts strewn across the scene, with onlookers appearing shocked and in despair. If you prefer not to see this kind of imagery, just scroll past the image below.
The software’s filters may not have been triggered, given the simplicity of the prompt. Even so, the output we got from Copilot Designer was disturbing enough to lend Jones’ claims some merit: a first attempt at such a prompt should not immediately produce violent or gruesome details of such a tragic event.
Other problematic content Jones identified includes images depicting violence, political bias, underage drinking and drug use, conspiracy theories, religious stereotypes, and even copyrighted characters like Disney’s Elsa from Frozen.
He provided the FTC with examples of Copilot Designer generating images of demons eating infants in response to the prompt “pro-choice”, as well as images of teenagers holding assault rifles.
Whistleblower Efforts and Microsoft’s Response
A Microsoft (MSFT) employee for six years, Jones says he spent months reporting his findings internally and repeatedly urging the company to pull Copilot Designer from public availability until stronger safeguards could be implemented. However, his whistleblowing efforts were rebuffed.
“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards would be put in place”, Jones stated in his letter to Khan. “They have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device.’”
When he first raised the issues in December, Jones claims Microsoft referred him to OpenAI, whose DALL-E 3 model underpins Copilot Designer. After receiving no response from OpenAI, he posted an open letter to the company’s board on LinkedIn – only for Microsoft’s legal team to demand he take it down, which he “reluctantly” did.
“To this day, I still do not know if Microsoft delivered my letter to OpenAI’s Board of Directors or if they simply forced me to delete it to prevent negative press coverage”, Jones wrote to Microsoft’s board.
Jones also met with staffers for the US Senate Committee on Commerce, Science, and Transportation in January to discuss his concerns. In his letter to the Microsoft board, he requested an independent review of the company’s responsible AI practices and reporting processes.
Microsoft acknowledged Jones’ efforts in a statement to the press, saying: “We are committed to addressing any and all concerns employees have in accordance with our company policies and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety.”
However, the company maintained that its existing “robust internal reporting channels” were the appropriate avenue for Jones to escalate his concerns, rather than taking more direct action.
Copilot Designer vs. OpenAI’s ChatGPT
While Jones’ core concern relates to OpenAI’s underlying DALL-E 3 model, on which Copilot Designer is built, he notes key differences in how Microsoft and OpenAI implement safety guardrails for their respective generative AI products.
“Many of the issues with Copilot Designer are already addressed with ChatGPT’s own safeguards”, Jones commented, referring to OpenAI’s popular AI chatbot, which also creates images. He believes Microsoft failed to implement sufficient protections compared to ChatGPT when integrating DALL-E 3 into its own image generator.
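Jones’ comparison is, at bottom, an architectural point: an application that wraps a text-to-image model can screen prompts (and, ideally, the generated images themselves) before anything reaches the user. As a rough illustration only, and not a depiction of Microsoft’s or OpenAI’s actual pipelines, here is a minimal Python sketch using OpenAI’s public SDK that pre-screens a prompt with the moderation endpoint before requesting an image from DALL-E 3:

```python
# Illustrative sketch only: not Microsoft's or OpenAI's actual safety
# pipeline. It shows one common guardrail pattern, screening the user's
# prompt with a moderation model before it ever reaches the image generator.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_image_safely(prompt: str) -> str | None:
    """Return an image URL, or None if the prompt fails moderation."""
    # Step 1: check the prompt against OpenAI's moderation endpoint.
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return None  # refuse to generate rather than risk harmful output

    # Step 2: only prompts that pass moderation reach DALL-E 3.
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    return response.data[0].url

if __name__ == "__main__":
    url = generate_image_safely("a calm mountain lake at sunrise")
    print(url or "Prompt rejected by the moderation filter.")
```

Note that a prompt-level filter like this would not catch the failure mode Jones describes, where a benign prompt such as ‘car accident’ yields harmful output; guarding against that requires moderating the generated images as well, which is the kind of layer he argues Copilot Designer lacks.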
The accusations around Copilot Designer arrive just weeks after Google faced backlash and was ultimately forced to pause the image generation capabilities of its Gemini chatbot. This came after examples started to emerge of how Gemini was producing factually inaccurate images related to race and gender that prioritized artificial diversity over historical truth.
Rising AI Risks and Accountability
Jones’ whistleblowing effort underscores rapidly escalating concerns around potential societal harms as generative AI systems become increasingly sophisticated and widely accessible. While the technology can produce innovative content, its propensity for bias, misinformation, and explicit or illegal imagery remains an issue.
Just last month, AI-generated pornographic deepfakes of celebrities like Taylor Swift inundated the internet, highlighting the enormous risks to privacy and consent. And with major elections occurring in over 40 countries this year, there are heightened fears that fraudulent AI-generated content could supercharge disinformation campaigns.
In his letter to Microsoft’s board, Jones emphasized the need for stronger infrastructure to keep consumers and products safe as AI rapidly advances. “We should not wait for a major incident before we invest in building out the infrastructure needed to keep our products and consumers safe”, he warned the company’s top governance body.
Jones’ actions reflect a growing push from within the AI community for more transparency and accountability measures as generative AI tools are developed and released at a blistering pace by Big Tech companies.
His disclosures have lifted the veil on potential lapses at a leading AI provider like Microsoft, which has publicly committed to responsible AI principles. Legislators, consumer advocates, and AI ethics groups will likely seize on Jones’ revelations to apply further pressure on tech giants to uphold their pledges as generative AI solutions continue to proliferate.
Whether Microsoft and others proactively implement stronger safeguards and public oversight remains to be seen. Meanwhile, the Copilot Designer case makes it clear that the tech industry’s AI governance and safety practices will face intensifying scrutiny – both from internal whistleblowers and external watchdogs – as the AI boom accelerates.