Comparing AI Tools: Gemini vs. Copilot in Everyday Scenarios
Greetings, everyone! The way AI now handles routine tasks suggests these smart assistants could become indispensable. Yet with Microsoft's Copilot, my experience swings between genuine admiration and frustration at some truly impressive blunders.
That left me wondering whether it was time to try another LLM. Google's Gemini has been making waves, so I decided to pit it against Copilot in a series of practical tests.
The mission was simple: find out how these tools manage tasks that an average PC or Mac user might perform. I crafted identical prompts for both assistants to see how they would fare.
Test 1: Crafting a Travel Plan
Positioning AI as the ultimate travel expert is nothing new. For my first test, I asked each assistant to plan a dream holiday across European Christmas markets, starting in Paris and wrapping up in Strasbourg, traveling by train and meeting a few specific criteria.
Having already researched this trip myself, I knew what a good answer should look like. Gemini excelled, offering an itinerary that took in iconic German markets with sensible train connections. When I asked for a modification to include Cologne, it promptly provided detailed updates.
Copilot, however, proposed a strictly French itinerary, incorrectly dismissing viable German options. When questioned, it conceded that routes into Germany were indeed practical. Gemini was clearly the more competent travel consultant here.
Test 2: Map Creation
I asked both bots to map out a wide European loop, starting and ending in Paris with stops like Munich and Vienna. Gemini acknowledged its mapping limits but linked me to a rendition of the route in Google Maps.
Copilot's attempt was anything but accurate, misplacing cities across Europe. In follow-up prompts, it acknowledged its failures in geography and detail; Gemini's solution was plainly the more precise of the two.
Test 3: Windows OS Research
Seeking historical data on Windows versions, specifically their release dates and support timelines, I turned to both tools. Both delivered, but Gemini stood out by noting that Windows 8 users needed to upgrade to Windows 8.1 to keep receiving support.
Even so, verifying the details would be wise; thorough as both responses were, AI tools still get facts like these wrong.
Test 4: Designing an Infographic
Recalling my days of working with graphic designers, I envisioned an infographic on passkeys to accompany an article. Copilot's output was lackluster: basic icons with no interpretive value, and my attempts to refine it fell flat.
Gemini, by contrast, promptly created an instructive graphic that needed only minimal tweaks, a clear win on design tasks.
Test 5: Making a Financial Choice
Both chatbots tackled the car lease-versus-buy dilemma adeptly, asking logical questions tailored to my situation, and both ultimately advised that buying was the more economical choice.
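For the curious, the arithmetic behind that kind of recommendation is straightforward. Here is a quick PowerShell sketch of a total-cost comparison, with purely hypothetical figures; every number below is my own placeholder, not anything either chatbot produced:

```powershell
# Hypothetical numbers for illustration only; they ignore financing interest,
# maintenance, insurance, and lease-end fees.
$leaseMonthly = 450      # assumed monthly lease payment
$leaseMonths  = 36       # three-year term
$leaseDown    = 2000     # assumed drive-off payment

$buyPrice     = 32000    # assumed purchase price
$resaleValue  = 19000    # assumed resale value after three years

$leaseTotal = $leaseDown + ($leaseMonthly * $leaseMonths)   # 18,200
$buyNetCost = $buyPrice - $resaleValue                      # 13,000

"Lease total: `$$leaseTotal | Buy net cost: `$$buyNetCost"
```

With placeholder figures like these, buying comes out well ahead once resale value is counted, which matches the advice both bots gave me.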
In basic financial guidance, users can reasonably depend on either AI tool.
Test 6: Writing a PowerShell Script
To automate photo renaming, I tasked both systems with writing a PowerShell script. Gemini's solution was awkward: it demanded external tools without linking to them, and its first attempt failed because it mishandled the file data.
Copilot, using only core PowerShell features, produced a self-contained script complete with safety checks and an undo option, a decisive win for Copilot on scripting.
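I won't reproduce Copilot's script here, but a minimal sketch of the same idea, assuming core PowerShell only, a -WhatIf safety switch, and a CSV undo log (the paths and naming pattern below are my own placeholders), looks something like this:

```powershell
# Sketch: rename photos by date using only built-in PowerShell.
# The folder paths and naming pattern are my placeholders, not Copilot's.
param(
    [string]$SourceDir = 'C:\Photos',
    [string]$UndoLog   = 'C:\Photos\rename-undo.csv',
    [switch]$WhatIf    # safety check: preview the renames without applying them
)

$log = foreach ($file in Get-ChildItem -Path $SourceDir -Filter *.jpg) {
    # Core PowerShell only: LastWriteTime stands in for the EXIF date taken,
    # which would require external tools to read.
    $newName = '{0:yyyy-MM-dd_HHmmss}{1}' -f $file.LastWriteTime, $file.Extension
    if (Test-Path (Join-Path $SourceDir $newName)) { continue }  # skip name collisions
    Rename-Item -Path $file.FullName -NewName $newName -WhatIf:$WhatIf
    [pscustomobject]@{ Old = $file.Name; New = $newName }        # record for undo
}

# Undo option: the CSV log lets every rename be reversed later, e.g.
#   Import-Csv $UndoLog | ForEach-Object {
#       Rename-Item (Join-Path $SourceDir $_.New) -NewName $_.Old }
if ($log -and -not $WhatIf) { $log | Export-Csv -Path $UndoLog -NoTypeInformation }
```

Running it once with -WhatIf shows exactly what would change before committing anything, the same kind of safety net Copilot's script built in.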
Test 7: Solving a Movie Trivia Question
Time for some film trivia: I described a half-remembered scene to both bots. They swiftly identified the film as 'Bullets Over Broadway' and Dianne Wiest's role in it, settling the debate effortlessly.
In essence, both tools performed competently, though Copilot was more detailed.