How People Actually Use AI: Cross-Model Data from Magicdoor
Most AI platforms lock you into one model family. We don't. Magicdoor gives every user access to 11 chat models and 9 image models across 6 providers — and we've been watching what happens when people can freely switch between them.
The short version: people who try multiple models stick around. And the patterns in how they switch reveal something important about what AI is actually good for.
Key Findings at a Glance
| Finding | Data Point | Source |
|---|---|---|
| Multi-model users convert at dramatically higher rates | 73% of users who try 5+ models become paying subscribers | Internal analytics |
| Image generation is a strong activation signal | 37% of users who try image generation convert to paid | Internal analytics |
| Most users never get started | 60% of signups never send a single message | Internal analytics |
| The price gap between models is massive | Up to 10x cost difference between cheapest and most expensive chat models | Model pricing |
| No single provider dominates | Users spread usage across 6 providers rather than fixating on one | Platform data |
Finding 1: Multi-Model Users Convert at 73%
This is the most striking pattern in our data. Users who try 5 or more different models convert to paid subscribers at a rate of 73%.
What's happening is intuitive once you see it: trying multiple models means the user has found enough value to experiment. They've likely discovered that Claude writes better, GPT handles certain tasks faster, and Perplexity pulls live web data — and that combination is worth paying for.
| User Behavior | Conversion Rate |
|---|---|
| Tried 5+ models | 73% |
| Tried image generation | 37% |
| Never sent a message | 0% (by definition) |
Caveat: Correlation is not causation. Users who try many models are likely more engaged to begin with. But the pattern is consistent and directionally clear: exploration correlates strongly with retention.
Finding 2: 60% Never Start
Here's the uncomfortable number: 60% of people who sign up never send a single message.
This isn't unique to Magicdoor — it's a common pattern across AI tools. But it highlights the gap between interest in AI and actual adoption. People sign up out of curiosity, then hit a blank text box and don't know what to ask.
The contrast with Finding 1 is stark. The users who do engage — and especially those who explore multiple models — convert at high rates. The challenge isn't the product; it's getting people past that first prompt.
This is why we've invested in custom assistants and multi-model workflow guides to reduce the blank-page problem.
Finding 3: The Price Gap Creates a Natural Optimization Opportunity
One reason multi-model access matters: the cost differences between models are enormous. Using the right model for each task can cut costs dramatically without sacrificing quality.
Here's the full pricing breakdown across all 11 chat models:
| Model | Provider | Input (per 1M tokens) | Output (per 1M tokens) | Sweet spot |
|---|---|---|---|---|
| Gemini 3 Flash | Google | $0.50 | $3 | Fast, budget tasks |
| Qwen 3 Thinking | Together AI | $0.65 | $3 | Budget reasoning |
| GPT-5.4 Mini | OpenAI | $0.75 | $4.50 | Affordable general work |
| Claude Haiku 4.5 | Anthropic | $1 | $5 | Light Anthropic tasks |
| Gemini 3 Pro | Google | $2 | $12 | Long documents |
| Perplexity Reasoning | Perplexity | $2 | $8 (+$5/req) | Web search with sources |
| GPT-5.4 | OpenAI | $2.50 | $15 | General purpose |
| Claude Sonnet 4.6 | Anthropic | $3 | $15 | Writing, code, analysis |
| Grok 4 | xAI | $3 | $15 | Analytical reasoning |
| Deep Research (pplx) | Perplexity | $3 | $15 (+$5/req) | Deep web research |
| Claude Opus 4.6 | Anthropic | $5 | $25 | Complex reasoning |
The cheapest model (Gemini 3 Flash at $0.50/$3) is 10x cheaper on input and 8x cheaper on output than the most expensive (Claude Opus 4.6 at $5/$25). Both are available on the same platform, in the same conversation.
A user who defaults to a premium model for everything will spend far more than one who uses budget models for routine questions and escalates only when needed. For practical tips, see our guide to saving money on AI.
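To make the gap concrete, here's a small sketch that prices a single request at the per-token rates in the table above. The helper function and the example token counts (2,000 in, 500 out) are illustrative assumptions, not part of any API:

```python
# Rough per-request cost comparison using the published per-token prices.
# Prices are (input $/1M tokens, output $/1M tokens) from the table above.
PRICES = {
    "Gemini 3 Flash": (0.50, 3.00),
    "Claude Opus 4.6": (5.00, 25.00),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one request at the listed per-million-token rates."""
    inp, out = PRICES[model]
    return input_tokens / 1e6 * inp + output_tokens / 1e6 * out

# A typical short task: ~2,000 tokens in, ~500 tokens out (assumed sizes).
cheap = cost_usd("Gemini 3 Flash", 2_000, 500)     # 0.001 + 0.0015 = $0.0025
premium = cost_usd("Claude Opus 4.6", 2_000, 500)  # 0.010 + 0.0125 = $0.0225
print(f"premium model costs {premium / cheap:.0f}x more on this task")
```

At these token counts the premium model runs about 9x the cost, which is why routing routine questions to a budget model and escalating only when needed adds up quickly.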
Finding 4: Image Generation Is an Activation Lever
37% of users who try image generation convert to paid — making it the second-strongest activation signal after multi-model exploration.
This makes sense. Image generation is tangible: you get a visible result immediately, and the quality difference between models is obvious. It's also something many users haven't tried before, so it carries a novelty factor that text chat doesn't.
Magicdoor offers 9 image models with significant price and capability variation:
| Model | Cost per image | Notable strength |
|---|---|---|
| Recraft Upscaler | $0.006 | Enhance existing images |
| Seedream 4.5 | $0.03 | Cinematic quality, lowest generation price |
| Google Nano Banana | $0.039 | Google's base image model |
| Flux.1 Kontext Pro | $0.04 | Editing and style transfer |
| Recraft V3 | $0.04 | Clean design, style presets |
| Imagen 4 | $0.05 | High-quality generation |
| Flux 2 Pro | $0.05 | Photorealistic imagery |
| ChatGPT Image | $0.08 | Text in images, versatile editing |
| Google Nano Banana Pro (2K) | $0.14 | Highest quality, 2K resolution |
For a full comparison, see our image generation guide.
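Since image models are billed per image, batch costs are easy to estimate from the table above. The sketch below uses three of the listed prices; the helper is illustrative, not a real API:

```python
# Cost of an image batch at the per-image prices listed above (illustrative).
IMAGE_PRICE = {
    "Seedream 4.5": 0.03,
    "ChatGPT Image": 0.08,
    "Google Nano Banana Pro (2K)": 0.14,
}

def batch_cost(model: str, n_images: int) -> float:
    """Total cost of generating n_images on the given model."""
    return IMAGE_PRICE[model] * n_images

for model in IMAGE_PRICE:
    print(f"{model}: 100 images -> ${batch_cost(model, 100):.2f}")
```

A hundred images costs roughly $3 on the cheapest generator versus $14 on the highest-quality one, so the right pick depends on whether the batch is drafts or finals.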
Finding 5: No Single Provider Dominates
When users have access to every major model, they don't pick one and ignore the rest. Usage spreads across providers, with different models filling different roles:
| Provider | Models on Magicdoor | Primary strength |
|---|---|---|
| Anthropic | Claude Opus 4.6, Sonnet 4.6, Haiku 4.5 | Writing, code, nuanced reasoning |
| OpenAI | GPT-5.4, GPT-5.4 Mini | General purpose, cost efficiency |
| Google | Gemini 3 Pro, Gemini 3 Flash | Long documents, budget tasks |
| xAI | Grok 4 | Analytical reasoning |
| Perplexity | Reasoning, Deep Research | Live web search with citations |
| Together AI | Qwen 3 Thinking | Budget reasoning |
This distribution supports the core thesis: different models are genuinely better at different things. A locked-in subscription to any single provider means missing capabilities that another provider handles better. For detailed model-by-task recommendations, see our model selection guide.
What This Means for Choosing an AI Tool
The data points to a simple conclusion: the most valuable AI setup is one that gives you access to multiple models.
Here's the cost math:
| Approach | Monthly cost | Models available |
|---|---|---|
| ChatGPT Plus | $20/mo | GPT models only |
| Claude Pro | $20/mo | Claude models only |
| Gemini Advanced | $20/mo | Gemini models only |
| Two of the above | $40/mo | Two provider families |
| All three | $60/mo | Three providers, three logins |
| Magicdoor | $6/mo + usage | All 11 chat + 9 image models |
Most Magicdoor users spend $8–10/month total — less than half the cost of a single provider subscription — and get access to every model across 6 providers. Full breakdown: usage-based pricing.
The data suggests this isn't just cheaper — it's better. Users who explore multiple models find more value, convert at higher rates, and stick around longer.
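The monthly math above can be sketched in a few lines. One assumption is labeled explicitly: that the $1 monthly credit nets against usage before billing; the exact billing mechanics may differ:

```python
# Monthly totals for the approaches in the table above (sketch, not billing code).
# Assumption: Magicdoor's $1 monthly credit nets against usage before billing.

def magicdoor_monthly(usage_usd: float, base: float = 6.0, credit: float = 1.0) -> float:
    """Estimated monthly total: base subscription plus usage beyond the credit."""
    return base + max(0.0, usage_usd - credit)

single_provider = 20.0          # ChatGPT Plus, Claude Pro, or Gemini Advanced
all_three = 3 * single_provider  # $60/mo for three separate subscriptions

for usage in (3.0, 5.0):
    total = magicdoor_monthly(usage)
    print(f"${usage:.0f} usage -> ${total:.0f}/mo vs ${all_three:.0f}/mo for three subscriptions")
```

With $3–5 of usage, the total lands at $8–10/month, consistent with the typical spend cited above.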
Methodology and Limitations
Data sources. The activation metrics in this report (conversion rates by model count and feature usage) are derived from internal analytics across our user base. Model pricing and availability are pulled directly from our production model registry and are independently verifiable on our pricing page.
What this does not show. These are aggregate correlations, not controlled experiments. We cannot conclude that trying more models causes higher conversion — it's plausible that more engaged users both try more models and are more likely to subscribe. Sample sizes and exact time periods are not disclosed to protect user privacy.
What we can verify. Model pricing, model count (11 chat, 9 image), and provider count (6) are directly verifiable from our production systems. The activation patterns are directionally consistent across multiple measurement periods.
Planned follow-ups. We intend to publish more granular data in future reports, including:
- Which specific models users try first, second, and third
- How model preferences shift over time for individual users
- Task-level switching patterns (e.g., users who start with chat and discover image generation)
- Cohort analysis comparing single-model vs. multi-model users over 90 days
FAQs
What is this report based on?
This report combines internal activation metrics from Magicdoor's user base with pricing and model data from our production systems. Activation metrics are aggregate patterns — individual user data is not disclosed.
Is 73% conversion real?
Yes — 73% of users who try 5 or more models on Magicdoor convert to paid subscribers. However, this is a correlation: more curious users may be predisposed to both explore and subscribe. We note this caveat explicitly in the methodology section.
How many models does Magicdoor offer?
11 chat models across 6 providers (Anthropic, OpenAI, Google, xAI, Perplexity, Together AI) and 9 image generation models. See the full lineup on our model selection page.
How much does Magicdoor cost?
$6/month base subscription, which includes $1 in usage credits. Most users spend $8–10/month total with pay-as-you-go pricing. Full details: usage-based pricing.
Where can I learn more about multi-model workflows?
See our practical guide to using multiple AI models, 10 multi-model workflow examples, and the Claude vs ChatGPT comparison for task-specific recommendations.
Ready to see why multi-model users don't go back? Try Magicdoor — 11 chat models, 9 image models, one interface. Starting at $6/month.
Related Resources
A Familiar Interface with More Model Choice for ChatGPT Users
How Magicdoor.ai keeps the familiar chat workflow while adding more models, usage-based billing, and cross-model memory.
Best AI Models for the Money on Magicdoor in 2026
A cost-per-quality breakdown of the models available on Magicdoor and how to minimize your AI spending.
Claude vs ChatGPT (2026): Honest Comparison from Real Usage Data
Side-by-side comparison of Claude and ChatGPT in 2026. Based on real usage across coding, writing, research, and image understanding. Includes pricing, strengths, and when to use each.
Best Practices for Multi-Model Workflows
Learn how to combine the strengths of the models currently available on Magicdoor.