Critical performance issues with Gemini 3.1: Response times up to 10 minutes
Hi everyone, I’m experiencing significant latency issues when using Gemini 3.1 within Bolt. Even for very basic queries or simple code adjustments, the model takes an incredibly long time to generate a response—sometimes up to 10 minutes for a single output. This happens consistently, making the development workflow nearly impossible. Is this a known API bottleneck, or are there specific settings I should check to improve the response speed? Any help or insights from the team would be greatly appreciated.

baxenko About 9 hours ago
Save current chat settings to a new agent
Currently you can adjust settings, like model and tools, during a chat session, but there doesn’t appear to be a way to then save those settings as a new agent.

Will Ernst 1 day ago
Hard line breaks inserted into API response text - persists through all copy/export methods
My OS version: macOS 26.3 (25D125)
My BoltAI app version: BoltAI 2 v2.8.1
Are you a Setapp user: No
AI provider & model: Anthropic — Claude Sonnet 4.6
Steps to reproduce the issue:
1. Start a chat and request a multi-paragraph response.
2. Observe that the displayed output contains line breaks mid-sentence or mid-paragraph, appearing to align with the display width rather than paragraph boundaries.
3. Try each of the following copy/export methods — the line breaks persist in all cases: the copy icon for the entire chat, the copy icon for a Markdown artifact, manual text selection and copy, and File > Export as Markdown.
Error message: No error message is displayed. The issue is that hard line breaks are being baked into the response text rather than reflecting the actual API output. Because the breaks survive export to a Markdown file, this rules out a display/rendering issue.
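Until this is fixed, one possible stopgap (a sketch, not a BoltAI feature) is to post-process the exported Markdown and join hard-wrapped lines within each paragraph, leaving blank-line paragraph breaks and fenced code blocks untouched:

```python
def unwrap_paragraphs(text: str) -> str:
    """Join hard-wrapped lines within each paragraph, keeping blank-line
    paragraph breaks and fenced code blocks intact."""
    out, buf, in_code = [], [], False
    for line in text.splitlines():
        if line.lstrip().startswith("```"):   # opening or closing code fence
            if buf:
                out.append(" ".join(buf))
                buf = []
            in_code = not in_code
            out.append(line)
        elif in_code:
            out.append(line)                  # never touch code blocks
        elif line.strip() == "":              # paragraph boundary
            if buf:
                out.append(" ".join(buf))
                buf = []
            out.append("")
        else:
            buf.append(line.strip())
    if buf:
        out.append(" ".join(buf))
    return "\n".join(out)
```

This is lossy for intentional hard breaks (e.g. poetry or line-separated lists without blank lines between items), so it's a workaround, not a substitute for the app preserving the raw API text.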

Marek Laskowski 4 days ago
Can't reach chat list button on iPadOS
Under certain circumstances, iPadOS’ window controls interfere with the chat list button in the top left corner, so you can’t access the list of chats. The example shows Slide Over, but it happens at all narrow window dimensions.

Moritz Zimmer 5 days ago
Make Bolt AI an AI app, not a human app
The thing about generative AI is its power to compound. Right now, Bolt AI is an app from the era when humans did things: in this case, manipulating AI. So, yes, great, I can go through a complex back and forth with Claude to generate an ideal prompt. Then I manually save it off as an agent or workflow. Repeat 10x. Then there’ll be lots of learnings from lots of chats that come from those system prompts. So I need to update the prompt in three months, and again, and again. And I’ll need to continually curate the model list with new models. Lots of human work, actually. So: make Bolt able to configure every element of itself. The AI coding apps in VS Code already do this. You don’t create a new Mode in Roo Code manually; it does it for you. In Bolt, this could mean:
- AI agents being able to update agents, workflows and other settings in Bolt.
- AI agents in Bolt being able to manipulate data/settings outside the app.
- Data in Bolt being programmatically accessible to be seen/manipulated by other AIs.
That could mean:
- AI agents updating agents/workflows based on conversation history and user feedback.
- AI agents automatically curating and updating model lists based on preset criteria.
- AI agents automatically organizing chats into projects, etc., based on preset criteria.

Matt 10 days ago
Feature request: audio as input for workflows
Thank you so much for v2; there are so many great features (instant chat, Parakeet) and so many little requests implemented. Much appreciated. IMO the killer feature: transcribed audio from Parakeet/Whisper as an input for workflows. There are so many uses: creating emails, memos and much else could be done seamlessly and instantly with a single keyboard shortcut and your own voice. I know you don’t want to build a whole chaining app, but please consider this one application.

Matt 10 days ago
Feature request: Config font size in the left sidebar
The manual config of the AI-written chat titles is a great little feature for curating the left sidebar. One further request: being able to adjust the font size of chat titles (and folder names) in the left sidebar to make them smaller, as was possible in v1.

Matt 10 days ago
Feature request: Set default agent for projects
Now that we have the agent structure, we should be able to set a default agent for a project. The project would then automatically inherit the system prompt, parameters and so on from the agent. It could still be possible to add project-level config such as files. At the moment, you have to either manually set the agent for each chat within the project, or copy and paste the system message and so on from the agent into the project config.

Matt 10 days ago
Bug: Agent LLM Parameters not always passed through to chat
Your OS version: Sequoia 15.5
Your BoltAI app version: 2.8.0 (build 55)
Are you a Setapp user: No
The AI provider & model you’re using: Google Gemini 3.0 Flash Preview on Google Vertex via OpenRouter
Steps to reproduce the issue:
1. Create an agent and, in the agent settings, specify Google Gemini 3.0 Flash Preview as the model and set the thinking tokens budget to a non-zero value.
2. Create a new chat, set the new agent as the active profile, then check the LLM Parameters section in the right sidebar. The thinking budget has not been applied.
The error message: No error message.

Matt 10 days ago
In Progress
Multiple chat selection doesn't support bulk actions
I just noticed some unexpected behavior when selecting multiple chats. I recently imported my ChatGPT history and have a number of chats I wanted to organize into Projects. I selected several, right-clicked, and chose Move to move them to the project, but that only operated on the one I happened to right-click, not the whole selection as expected. I haven’t tried it, but I would guess Delete does the same single-item operation rather than a bulk one.

Will Ernst 13 days ago
Add opencode zen as a provider
I feel like everyone is moving to zen from openrouter, the reliability is just so much better and would be amazing to have!

Wren 13 days ago
Ghost chat screens in Mission Control
Mission Control shows ghost BoltAI chat windows. You can’t select them or remove them without closing the app.

ikum 15 days ago
No Working Directory Config = DEALBREAKER
When using Claude Code as an AI provider, please add an option to specify a working directory for the API connection. Claude Code uses project-level config files (CLAUDE.md) to define personas, operating principles, and tool configurations — these only load when Claude is invoked from the correct directory. Without this, BoltAI's Claude Code integration always returns a generic assistant with no project context, making it unsuitable as a primary interface for project-based AI workflows. Suggested: a 'Working Directory' field in the Claude Code provider settings. Behavioural config customisation is non-optional these days. Why is this not baked into BoltAI already?
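For what it's worth, the fix on the app side is likely small: when shelling out to the CLI, set the child process's working directory to the configured project path. A minimal sketch in Python (the helper name and paths are hypothetical; the `claude -p` usage line is illustrative):

```python
import subprocess

def run_in_project(cmd: list, working_dir: str) -> str:
    """Run a CLI tool with its working directory set to the project root,
    so tools like Claude Code pick up project-level config (CLAUDE.md)
    from that directory."""
    result = subprocess.run(
        cmd,
        cwd=working_dir,   # the key part: config resolution is cwd-relative
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Hypothetical usage:
# run_in_project(["claude", "-p", "Summarise open TODOs"], "/path/to/project")
```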

Adam 15 days ago
Improve vertical alignment in sidebar folder hierarchy
The current sidebar navigation has a visual alignment issue where nested files are indented too far to the left relative to their parent folders. As shown by the red boxes and arrows in the image above, the child items seem to align with the folder's expand/collapse icon instead of the folder name. This lack of indentation breaks the vertical scanning line, making the hierarchy look cluttered and difficult to read quickly. The items should be shifted right to align flush with the parent folder names, along the path indicated by the green line.

rafael 19 days ago
Bug with Gemini CLI
First of all, I want to thank you for integrating Claude Code and the Gemini CLI into Bolt AI. This helps me immensely because I have a Pro subscription, so now I don't need to spend extra API credits. This is truly awesome! That said, I’ve run into a bug. For some reason, the Gemini CLI within Bolt gets the current date wrong, whereas it works perfectly fine in my terminal. This leads to grounding issues, particularly when the model needs to search the internet. It seems like the local environment context isn't being passed through correctly. Perhaps you need to explicitly set an environment variable or inject the date context in your implementation? I’m attaching side-by-side screenshots of the same question asked in Bolt AI and my terminal to show the difference. See here: https://share.cleanshot.com/kjVdFLtn
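If the CLI really isn't receiving local environment context, one plausible fix on the integration side is to inject the current date into the prompt or system context before handing it to the CLI. A sketch (the function name is made up, not a BoltAI or Gemini CLI API):

```python
from datetime import date

def with_date_context(prompt: str) -> str:
    """Prefix the prompt with today's date so the model doesn't fall back
    on its stale training-time notion of 'now' when grounding searches."""
    return f"Current date: {date.today().isoformat()}\n\n{prompt}"
```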

Risinggoblin 20 days ago
Unable to change model for a project after the previous one was deleted
Hi, I encountered an issue with the BoltAI 2 app on my Mac; actually two bugs. I had these providers configured, each with its own default model, and OpenCode Zen set as the default provider:
- OpenCode Zen
- OpenRouter
- Ollama Cloud
For some reason, more than once I found that different providers were set as the default without me changing anything, resulting in more than one provider being marked as the default instead of just one. That’s the first issue. To try and fix it, I removed OpenCode Zen and OpenRouter, since I wanted to keep using Ollama Cloud. Once I did that, though, I was unable to change a project's model from the previously selected OpenCode Zen model to a model in the remaining provider, Ollama Cloud. The only solution I could find was to create a new project with the correct model, move the chats to that project, and then delete the old project. It’s a bit annoying, so I hope it can be fixed soon. Thanks.

Vito Botta 20 days ago
Workflow Palette Unresponsive Across Multiple Devices; Duplicate "Instant Workflow" Entries Cannot Be Removed
Environment:
- App: BoltAI
- OS: macOS (tested on multiple devices)
- Issue persists after: reinstallation, shortcut reassignment
Description: The workflow palette no longer opens when triggered via keyboard shortcut or UI. This issue is reproducible across multiple macOS devices and persists after the following troubleshooting steps:
- Reassigned the workflow palette shortcut to a different key combination.
- Completely uninstalled and reinstalled BoltAI.
- Restarted the devices.
None of the above resolved the issue.
Additional issue — duplicate "Instant Workflow" entries (attached below): The workflow list displays multiple duplicate entries labeled "Instant Workflow." Attempting to delete these duplicates has no effect: the entries remain after deletion attempts, and no error or confirmation is shown.
Steps to reproduce:
1. Open BoltAI.
2. Trigger the workflow palette via the assigned shortcut or UI button. Expected: the workflow palette opens. Actual: nothing happens; no palette, no error.
3. Navigate to the workflow list and observe the duplicate "Instant Workflow" entries.
4. Attempt to delete any duplicate. Expected: the entry is removed. Actual: the entry persists with no feedback.
Impact: Workflows are completely inaccessible, making a core feature of the app unusable.
Troubleshooting already performed:
- Shortcut reassignment (multiple combinations tested)
- Full uninstall and reinstall
- Tested on multiple macOS devices with different licenses; same behavior on all

IbrahimMan 23 days ago