Microsoft's Copilot Studio is debuting a new feature called "computer use," which enables AI agents to autonomously interact with websites and desktop apps by mimicking human actions: clicking buttons, typing in fields, and navigating menus.
Now in research preview for select organizations, the feature lets teams build AI agents that handle complex tasks in browsers and desktop apps, even when no API is available.
With computer use, users can simply describe the task they want the agent to perform using natural language.
Agents can simulate their actions for testing and refinement before deployment; once set up, they automate workflows across browsers (Edge, Chrome, and Firefox) and desktop applications.
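This simulate-then-deploy flow can be pictured as a plan of UI actions that is first dry-run against a log rather than a live application. The sketch below uses a hypothetical action schema (`Click`, `Type`, `simulate`) invented for illustration; it is not Copilot Studio's actual interface.

```python
from dataclasses import dataclass

# Hypothetical action types an agent might emit after parsing a
# natural-language task; names are illustrative, not Copilot Studio's schema.
@dataclass
class Click:
    target: str          # e.g. the label of a button to press

@dataclass
class Type:
    field: str           # the input field to fill
    text: str            # the text to enter

def simulate(actions):
    """Dry-run the plan, returning a readable log instead of driving a real UI."""
    log = []
    for action in actions:
        if isinstance(action, Click):
            log.append(f"click '{action.target}'")
        elif isinstance(action, Type):
            log.append(f"type '{action.text}' into '{action.field}'")
    return log

# A plan the agent might derive from "find my quarterly invoices":
plan = [Type("search box", "quarterly invoices"), Click("Search")]
print(simulate(plan))
```

In a real deployment, the same plan would be replayed by a driver that issues actual clicks and keystrokes; the dry-run log is what lets a builder inspect and refine the agent's behavior before it touches a live app.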
“If a person can use the app, the agent can too,” said Charles Lamanna, Corporate Vice President of Microsoft’s Business and Industry Copilot.
Security and privacy remain key considerations. Microsoft has confirmed that enterprise data stays within Microsoft Cloud boundaries and will not be used to train its frontier models.
Early access to the computer use feature is now available for Copilot Studio users, with broader rollout expected in the near future.