AI Assistants: Model selector shows a model the user's subscription doesn't allow

The Issue

When a user opens an assistant to chat, they see the model that's configured on the assistant - regardless of whether their subscription allows them to use that model.

Example Flow:

  1. Free user creates an assistant with gemini-2.0-flash (a free model)

  2. Pro user (same workspace) edits the assistant and changes the model to claude-sonnet-4.5 (a premium model)

  3. Free user opens the assistant to chat

  4. Free user sees claude-sonnet-4.5 in the model selector - a model they cannot use

  5. If they try to send a message, it either:

    • Fails with a runtime error, or

    • Succeeds (a security hole, if the backend doesn't validate model access)
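The "security hole" branch above can be closed with a server-side guard that checks model access before any model call is made. A minimal sketch in TypeScript; the tier names, `ALLOWED_MODELS_BY_TIER` contents (beyond the two models named in this report), and function names are hypothetical, not the product's actual API:

```typescript
type Tier = "free" | "pro";

// Hypothetical allow-list. Only gemini-2.0-flash (free) and
// claude-sonnet-4.5 (premium) come from the report; the rest are placeholders.
const ALLOWED_MODELS_BY_TIER: Record<Tier, string[]> = {
  free: ["gemini-2.0-flash", "free-model-b", "free-model-c"],
  pro: ["gemini-2.0-flash", "free-model-b", "free-model-c", "claude-sonnet-4.5"],
};

function canUseModel(tier: Tier, modelId: string): boolean {
  return ALLOWED_MODELS_BY_TIER[tier].includes(modelId);
}

// Reject the request before invoking the model, regardless of what the
// client-side selector displayed.
function handleSendMessage(tier: Tier, modelId: string): string {
  if (!canUseModel(tier, modelId)) {
    throw new Error(`Model "${modelId}" is not available on the ${tier} plan`);
  }
  return "accepted"; // proceed with the chat request
}
```

With a guard like this in place, the worst a stale or spoofed client can produce is a clean error, never a premium-model call billed against a free account.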

The Root Cause:

The assistant stores a single modelId field. When displaying the assistant to users, we show that stored model without checking whether the current user's subscription allows access to it.
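Given that single stored modelId, one way to avoid showing a model the viewer cannot use is to resolve an "effective" model at display time. A sketch under that assumption; `resolveDisplayModel` and its parameters are illustrative names, not the actual codebase:

```typescript
interface Assistant {
  modelId: string; // the single stored model, set by whoever last edited
}

// If the stored model is allowed for the current viewer, show it;
// otherwise fall back to a default from the viewer's own allowed list.
function resolveDisplayModel(
  assistant: Assistant,
  allowedModels: string[],
  defaultModel: string,
): string {
  return allowedModels.includes(assistant.modelId)
    ? assistant.modelId
    : defaultModel;
}
```

This keeps the stored modelId intact for pro users while free users see a model they can actually run.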

Expected Behavior:

When a free user opens an assistant that has a pro model configured:

  • They should not see the pro model

  • They should see only the models available to their subscription (the 3 free models)

  • The model selector should be filtered based on the current user's tier, not the assistant's stored model


Status: Planned
Board: 💡 Feature Request and Bug Report
Tags: UX / UI
Date: 3 months ago
Author: Katja Danilina
