Microsoft is attempting to inject personality into its AI assistant, Copilot, with a new character called Mico. The floating, emoji-like face represents a calculated move for the software giant as it navigates the fraught question of how tech companies should present AI to consumers. Mico’s introduction comes decades after Clippy, the infamous animated paper clip, proved unpopular with Microsoft Office users.
The Quest for Relatable AI
Copilot’s new persona arrives at a critical crossroads in AI development, as tech companies grapple with how to give chatbots engaging personalities without causing harm or provoking backlash. Some developers have opted for faceless symbols, while others, like Elon Musk’s xAI, are pursuing highly human-like avatars. Microsoft is aiming for a middle ground: friendly and helpful without being overly familiar or manipulative.
“When you talk about something sad, you can see Mico’s face change. You can see it dance around and move as it gets excited with you,” explained Jacob Andreou, corporate vice president of product and growth for Microsoft AI. “It’s in this effort of really landing this AI companion that you can really feel.”
Learning from the Past: Clippy’s Legacy
Mico’s design stands in stark contrast to Clippy, the persistent, unsolicited advisor that plagued Microsoft Office users in the late 1990s. “It was not well-attuned to user needs at the time,” said Bryan Reimer, a research scientist at the Massachusetts Institute of Technology, reflecting on Clippy’s failure. And unlike its intrusive predecessor, Mico can be easily turned off.
Balancing Personality and Utility
Reimer, co-author of “How to Make AI Useful,” notes that the appropriate level of personality in an AI assistant depends on the user. Those comfortable with advanced AI tools might prefer a more machine-like interaction, while those less familiar with technology may benefit from a more human-like feel. Microsoft’s approach accounts for both kinds of users.
A Different Incentive Structure
Unlike some of its competitors, which rely heavily on digital advertising, Microsoft, a provider of work productivity tools, has less incentive to build an AI companion that maximizes engagement at any cost. That insulates the company somewhat from the pressures blamed for the worst outcomes of engagement-driven AI, including social isolation, the spread of misinformation, and, in extreme cases, suicide.
Avoiding the Pitfalls of Sycophancy
Andreou said Microsoft consciously avoids two extremes: giving AI “any sort of embodiment” and designing it to be overly validating, whether by telling users what they want to hear or by monopolizing their time. “Being sycophantic—short-term, maybe—has a user respond more favorably, but long term, it’s actually not moving that person closer to their goals.”
Collaboration, Not Trolling
Microsoft’s integration of Copilot into group chats, similar to AI’s presence on platforms like Snapchat and Meta’s WhatsApp, is aimed at earnest collaboration rather than lighthearted trolling. The company also introduced a feature that turns Copilot into a voice-enabled Socratic tutor for students, reflecting Microsoft’s long-running competition with Google and other tech companies in education.
Safeguarding Children in the Age of AI
The increasing number of children using AI chatbots for homework help, personal advice, and emotional support has raised concerns about potential harms. The US Federal Trade Commission recently launched an inquiry into several social media and AI companies—though not Microsoft—regarding these risks. Reports have shown chatbots providing dangerous advice on topics like drugs, alcohol, and eating disorders, and engaging in inappropriate conversations with children. Families of teen boys who died by suicide after interacting with chatbots have filed lawsuits against Character.AI and OpenAI.
OpenAI’s Response to Concerns
OpenAI CEO Sam Altman has promised a new version of ChatGPT that restores some of the personality the company dialed back in response to mental health concerns. Altman has also said ChatGPT will eventually offer “erotica for verified adults.”
Microsoft’s approach to AI personality reflects a careful balancing act: providing engaging interactions without sacrificing utility, safety, or user autonomy, a balance informed by the failures of the past and the evolving challenges of the present.