Selecting the right AI Model

Magicdoor lets you use a selection of different AI models in one interface. Unlike other multi-model platforms, Magicdoor keeps things simple: you pay based on your actual usage, with a much lower subscription fee.

Model Selection

By default, Magicdoor uses Claude 4 Sonnet from Anthropic. It supports text and image inputs, web search, and tool use, and is suitable for most general use cases. You can easily switch models at any time with the selector under the input area.

Available models:

  • Claude 4 Sonnet: Image support ✓ | Web search ✓ | Image generation ✓ | Excellent reasoning and coding
  • GPT-5: Image support ✓ | Web search ✓ | Image generation ✓ | Flagship generalist, great at most tasks
  • GPT-5 Mini: Image support ✓ | Web search ✓ | Image generation ✓ | Very low cost, ideal for summaries/extraction
  • Gemini 2.5 Pro: Image support ✓ | Web search ✓ | Image generation ✓ | Long context, strong tools and planning
  • Gemini 2.5 Flash: Image support ✓ | Web search ✓ | Image generation ✓ | Very fast and inexpensive
  • Grok 4: Image support ✓ | Web search ✓ | Image generation ✓ | Fast and capable, web‑aware
  • Perplexity (Sonar Reasoning): Image support ✗ | Web search ✓ | Image generation ✗ | Best for fact‑checking with sources
  • Perplexity Deep Research: Image support ✗ | Web search ✓ | Image generation ✗ | Deep multi‑step research briefs
  • DeepSeek R1: Image support ✗ | Web search ✗ | Image generation ✗ | Low‑cost text‑only reasoning
  • Claude Opus 4.1: Image support ✓ | Web search ✓ | Image generation ✓ | Premium Claude model
  • GPT-4o: Image support ✓ | Web search ✓ | Image generation ✓ | Familiar “vintage” multimodal baseline
  • Claude 3.5 Sonnet: Image support ✓ | Web search ✓ | Image generation ✓ | Legacy Claude, still strong

The magic of switching models within a conversation

One drawback of Perplexity is that it is so optimized for search that it is not always the strongest at generating text. GPT-4o and Claude, on the other hand, excel at generating text but are not always the best at search.

With Magicdoor, you can use different models within one conversation depending on the task. For example, you can ask Perplexity to gather some information from the web, then use Claude or GPT-4o to talk about it, and then switch to a reasoning model to formulate an action plan.

Here are some simple recipes for how to combine different models:

  • Search & Chat: Use Perplexity to find information, then use Claude or GPT‑5 to chat about it.
  • Chat with fact-checks: Use GPT‑5 or Claude to chat, and Perplexity to fact‑check with up‑to‑date information.
  • Summarizing: Use GPT‑5 Mini or Gemini 2.5 Flash to summarize text quickly and cheaply.
  • Planning: Use Claude 4 Sonnet or GPT‑5 to think through a problem, then refine with Gemini 2.5 Pro for structure and long context.

How to switch model

To switch models, simply click the model name in the dropdown menu under the input area. Note that when you start a new chat or load an existing chat from the sidebar, the model selector reverts to the default model.

Managing cost

The two biggest drivers of chat cost are the number of tokens processed and the model you use. Most models price their usage based on tokens. A token is simply the smallest unit of data processed by a model; it is not necessarily a whole word and can be part of a word. Some features (like Perplexity Deep Research) may include a per‑request component, and image generation can be priced per image. The price of one message depends on the length of the prompt (prompt tokens) and the length of the reply (completion tokens).

Here is the very important part to understand: In order to have an ongoing chat conversation with an LLM, we have to send the entire conversation as the prompt. This means that the cost of a message depends very strongly on the length of the conversation, and grows as the number of messages increases.
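To make this concrete, here is a minimal sketch of how resending the full history drives cost. The per‑token prices and message sizes below are made‑up illustrative numbers, not Magicdoor's actual rates:

```python
# Sketch: cumulative cost when the entire conversation is resent as the
# prompt on every turn. Prices and token counts are illustrative only.

def conversation_cost(turns, price_per_1k_prompt=0.003, price_per_1k_completion=0.015):
    """turns is a list of (user_tokens, reply_tokens) pairs."""
    history_tokens = 0
    total_cost = 0.0
    for user_tokens, reply_tokens in turns:
        prompt_tokens = history_tokens + user_tokens      # full history + new message
        total_cost += prompt_tokens / 1000 * price_per_1k_prompt
        total_cost += reply_tokens / 1000 * price_per_1k_completion
        history_tokens = prompt_tokens + reply_tokens     # the reply joins the history
    return total_cost

# Ten identical turns: a 100-token question, a 300-token answer.
turns = [(100, 300)] * 10
print(round(conversation_cost(turns), 4))
```

With these numbers, the first message sends a 100‑token prompt but the tenth sends 3,700 tokens, because every earlier question and answer rides along. That is why per‑message cost keeps climbing in a long chat.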

The other key driver of cost is the model you use. Each model has its own price per token. As a rule of thumb, top‑tier models (e.g., Claude 4 Sonnet, GPT‑5) are often 5–15× more expensive than budget models (e.g., GPT‑5 Mini, Gemini 2.5 Flash).
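To see what that multiplier means for a single message, here is a small sketch comparing the same message on a premium and a budget model. The prices are hypothetical placeholders, not real rates for any specific model:

```python
# Illustrative (made-up) per-1k-token prices; check the pricing page for real numbers.
PRICES = {
    "premium": {"prompt": 0.003,  "completion": 0.015},
    "budget":  {"prompt": 0.0003, "completion": 0.0012},
}

def message_cost(model, prompt_tokens, completion_tokens):
    """Cost of one message: prompt tokens in, completion tokens out."""
    p = PRICES[model]
    return prompt_tokens / 1000 * p["prompt"] + completion_tokens / 1000 * p["completion"]

# The same message (2,000-token prompt, 500-token reply) on each tier:
premium = message_cost("premium", 2000, 500)
budget = message_cost("budget", 2000, 500)
print(f"premium: ${premium:.4f}, budget: ${budget:.4f}")
```

At these example rates the premium model costs roughly an order of magnitude more per message, which is why routing drafts and bulk summarization to Mini/Flash variants adds up quickly.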

  • Learn more about how tokens work with LLMs in this article
  • Find the exact cost per model with some examples in this article

Tips for managing costs

  • Start a new chat every time you start a new subject. This ensures that you're not unnecessarily resending a pile of unrelated messages to the model.
  • If a conversation is getting very long, consider asking the model to summarize the conversation so far, and use the summary as the prompt for a new chat.
  • Uploading images uses a lot of tokens; keep that in mind if you're using a model that supports image input.
  • High‑end models can be expensive. Prefer Mini/Flash variants for drafts and bulk summarization.

Copyright © 2025 magicdoor.ai