
Microsoft Build 2025: AI Security Flub Exposes Walmart’s AI Strategy Amid Protests
The Microsoft Build 2025 conference in Seattle was marred by protests and an unexpected revelation: details of Walmart's AI strategy were inadvertently exposed during a session on AI security.
The incident occurred during a presentation by Microsoft AI security chief Neta Haiby on Tuesday, May 20, 2025. While addressing a disruption from protestors, Haiby accidentally shared her screen, revealing internal Microsoft Teams messages discussing Walmart's AI integration plans.
Leaked Details of Walmart's AI Initiatives
The revealed messages, posted by a principal Microsoft cloud solution architect named Leigh Samons, outlined Microsoft's strategy for integrating its technology into Walmart's processes. One message highlighted that Walmart's "MyAssistant" tool, built last summer and leveraging Walmart's proprietary data and large language models in Azure OpenAI Service, needed "extra safeguards" due to its power. The tool assists store associates with tasks like summarizing documents and creating marketing content.
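The leaked messages don't describe how MyAssistant is built beyond its reliance on Azure OpenAI Service, but tools of this kind typically call a deployed model through the service's chat completions API. A minimal sketch of that pattern, with a hypothetical endpoint, deployment name, and prompt standing in for details the leak doesn't cover:

```python
# Minimal sketch of an Azure OpenAI Service call of the kind a document-
# summarization assistant might make. The endpoint, deployment name, and
# prompts are illustrative assumptions, not details from the leaked messages.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical
)

response = client.chat.completions.create(
    model="store-assistant-gpt4o",  # hypothetical deployment name
    messages=[
        {"role": "system",
         "content": "You summarize internal documents for store associates."},
        {"role": "user",
         "content": "Summarize this week's inventory report: ..."},
    ],
)
print(response.choices[0].message.content)
```

The "extra safeguards" mentioned in the leak would sit around a call like this, for example Azure's configurable content filters and limits on which proprietary data the model can retrieve.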
"Microsoft is WAY ahead of Google with AI Security"
Adding further fuel to the fire, the internal Teams message quoted a "distinguished" AI engineer at Walmart praising Microsoft's AI security, stating, "Microsoft is WAY ahead of Google with AI security. We are excited to go down this path with you." The endorsement itself was presumably sincere; what was unintentional was its disclosure, which offers a rare glimpse of Walmart's confidence in Microsoft's AI capabilities relative to its competitors.

Protests Rock Microsoft Build Over AI Ties to Israel
The Microsoft Build conference was also the target of protests related to Microsoft's collaborations with the Israeli government. The protests, organized by groups like "No Azure for Apartheid," accused Microsoft of enabling human rights violations in Palestine through its AI technology. Sarah Bird, Microsoft's head of responsible AI, was directly confronted during one of the sessions. Hossam Nasr, an organizer with the group, stated, "Sarah Bird, you are whitewashing the crimes of Microsoft in Palestine... how dare you talk about responsible AI when Microsoft is fueling the genocide in Palestine." Multiple protestors, including former Microsoft employees, were removed from the event.
These protests echo similar demonstrations at previous Microsoft events, including the company's 50th anniversary celebration, where employees voiced concerns over Microsoft's AI dealings with the Israeli military. These incidents highlight the growing ethical scrutiny surrounding AI development and its potential misuse.
A Call for Automated Privacy Features
The accidental reveal of Walmart's AI plans raises serious questions about data security during screen sharing. Tech journalist Sean Endicott suggested that Microsoft implement automatic blurring of chat windows in Microsoft Teams to protect users' privacy in unguarded moments. With AI features spreading through these tools every day, a safeguard like this would be a sensible addition; a rough sketch of how it might work follows.
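Teams exposes no such capability today, so the following is a purely hypothetical sketch rather than a description of any real API: a screen-capture pipeline could match open window titles against known messaging apps and blur those regions before a frame goes out. The WindowRegion type and the title patterns are illustrative assumptions.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch only: no screen-sharing SDK exposes this interface.
# Title patterns for apps that commonly display private conversations.
SENSITIVE_TITLE_PATTERNS = [
    re.compile(r"Microsoft Teams", re.IGNORECASE),
    re.compile(r"Slack|Outlook|Signal|WhatsApp", re.IGNORECASE),
]

@dataclass
class WindowRegion:
    """An on-screen window: its title and bounding box in pixels."""
    title: str
    x: int
    y: int
    width: int
    height: int

def regions_to_blur(open_windows: list[WindowRegion]) -> list[WindowRegion]:
    """Return the regions a capture pipeline should blur before
    broadcasting a shared-screen frame."""
    return [
        w for w in open_windows
        if any(p.search(w.title) for p in SENSITIVE_TITLE_PATTERNS)
    ]

if __name__ == "__main__":
    windows = [
        WindowRegion("Build 2025 Keynote - PowerPoint", 0, 0, 1920, 1080),
        WindowRegion("Chat | Microsoft Teams", 1200, 600, 700, 450),
    ]
    for region in regions_to_blur(windows):
        print(f"blur: {region.title!r} at ({region.x}, {region.y})")
```

The interesting design question is where the blur runs: applying it in the presenter's capture pipeline, before frames ever leave the machine, would have prevented exactly the kind of exposure seen at Build.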
What are your thoughts on Microsoft's AI security measures in light of this incident? Should Microsoft implement automated privacy features in Teams and other messaging applications? Share your opinions in the comments below.