Bug: Agent LLM Parameters not always passed through to chat

Your OS version: Sequoia 15.5

Your BoltAI app version: 2.8.0 (build 55)

Are you a Setapp user: No.

The AI provider & model you’re using: Google Gemini 3.0 Flash Preview on Google Vertex via OpenRouter.

Steps to reproduce the issue: Create an agent and, in the agent settings, select Google Gemini 3.0 Flash Preview as the model and set the Thinking tokens budget to a non-zero value. Create a new chat, set the new agent as the active profile, then check the LLM Parameters section in the right sidebar. The thinking budget from the agent settings has not been applied.

The error message: None.

Status: In Review
Board: 💻 BoltAI Mac v2
Date: 10 days ago
Author: Matt
