Help me improve BoltAI

For a bug report, please include the following information:

  • Your OS version.

  • Your BoltAI app version.

  • Whether you are a Setapp user.

  • The AI provider & model you’re using.

  • Steps to reproduce the issue.

  • The error message.

Issues with MCP Servers

Hi Daniel, I wanted to report a few issues with MCP servers that I noticed.

Authentication seems to be required much more often than in other clients. I can compare it directly with Raycast AI Chat: in Raycast, I never had to log in again to reauthenticate their HTTP MCP, while in BoltAI it is necessary quite regularly. For Notion it is extreme: you have to log in again after 30 minutes to an hour. It looks like only the access token from the JWT is being used, and once it expires, the refresh token is never exchanged for a new one.

For some MCP servers, the authentication workflow doesn’t work at all. One newer example I noticed is TickTick (https://help.ticktick.com/articles/7438129581631995904). It should work, but you simply get an authentication error after logging in and allowing the connection in the browser. Maybe that’s related to the fact that you can’t add URL-only MCPs like TickTick’s in BoltAI: you have to add at least one slash after the domain before the Add button becomes active and clickable.
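The refresh step the post describes is standard OAuth 2.0: instead of re-prompting the user when the access token expires, the client exchanges the stored refresh token for a fresh one at the provider's token endpoint. A minimal sketch of that flow follows; the `TokenStore` class, the token endpoint URL, and the client ID are illustrative placeholders, not BoltAI's actual implementation.

```python
import json
import time
import urllib.parse
import urllib.request

class TokenStore:
    """Hypothetical token state; field names follow RFC 6749."""

    def __init__(self, access_token, refresh_token, expires_in):
        self.access_token = access_token
        self.refresh_token = refresh_token
        # Refresh slightly early so requests never race token expiry.
        self.expires_at = time.time() + expires_in - 60

    def is_expired(self):
        return time.time() >= self.expires_at

def refresh_access_token(store, token_url, client_id):
    """Exchange the stored refresh token for a new access token
    instead of forcing the user through the browser login again."""
    body = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": store.refresh_token,
        "client_id": client_id,
    }).encode()
    req = urllib.request.Request(token_url, data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        payload = json.loads(resp.read())
    store.access_token = payload["access_token"]
    # Some servers rotate refresh tokens; keep the new one if present.
    store.refresh_token = payload.get("refresh_token", store.refresh_token)
    store.expires_at = time.time() + payload.get("expires_in", 3600) - 60
    return store.access_token
```

A client that checks `is_expired()` before each MCP request and calls `refresh_access_token` when needed would only fall back to the browser login when the refresh token itself is rejected.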

Maximilian Christl About 15 hours ago

💻

BoltAI Mac v2

Hard line breaks inserted into API response text - persists through all copy/export methods

My OS version: macOS 26.3 (25D125)
My BoltAI app version: BoltAI 2 v2.8.1
Are you a Setapp user: No
AI provider & model: Anthropic, Claude Sonnet 4.6

Steps to reproduce the issue:

  • Start a chat and request a multi-paragraph response.

  • Observe that the displayed output contains line breaks mid-sentence or mid-paragraph, appearing to align with the display width rather than paragraph boundaries.

  • Try each of the following copy/export methods; the line breaks persist in all cases: the copy icon for the entire chat, the copy icon for a Markdown artifact, manual text selection and copy, and File > Export as Markdown.

Error message: No error message is displayed. The issue is that hard line breaks are being baked into the response text rather than reflecting the actual API output. Because the breaks survive export to a Markdown file, this rules out a display/rendering issue.

Marek Laskowski 16 days ago

💻

BoltAI Mac v2

Make Bolt AI an AI app, not a human app

The thing about generative AI is the potential for its power to compound. Right now, Bolt AI is an app from the era when humans did things; in this case, operating the AI by hand. So, yes, great, I can go through a complex back-and-forth with Claude to generate an ideal prompt, then manually save it off as an agent or workflow. Repeat 10x. Then there will be lots of learnings from lots of chats that come from those system prompts, so I need to update the prompt in three months, and again, and again. And I'll need to continually curate the model list with new models. Lots of human work, actually.

So: make Bolt able to configure every element of itself. The AI coding apps in VS Code already do this; you don't create a new Mode in Roo Code manually, it does it for you. In Bolt, this could mean:

  • AI agents being able to update agents, workflows, and other settings in Bolt.

  • AI agents in Bolt being able to manipulate data/settings outside the app.

  • Data in Bolt being programmatically accessible so it can be seen/manipulated by other AIs.

That could mean:

  • AI agents updating agents/workflows based on conversation history and user feedback.

  • AI agents automatically curating and updating model lists based on preset criteria.

  • AI agents automatically organizing chats into projects, etc., based on preset criteria.

Matt 21 days ago

💻

BoltAI Mac v2

Bug with Gemini CLI

First of all, I want to thank you for integrating Claude Code and the Gemini CLI into Bolt AI. This helps me immensely because I have a Pro subscription, so now I don't need to spend extra API credits. This is truly awesome!

That said, I’ve run into a bug. For some reason, the Gemini CLI within Bolt gets the current date wrong, whereas it works perfectly fine in my terminal. This leads to grounding issues, particularly when the model needs to search the internet. It seems like the local environment context isn't being passed through correctly. Perhaps you need to explicitly set an environment variable or inject the date context in your implementation? I’m attaching side-by-side screenshots of the same question asked in Bolt AI and my terminal to show the difference. See here: https://share.cleanshot.com/kjVdFLtn
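The fix the post suggests, passing the host's environment plus an explicit date, can be sketched generically as below. The `CURRENT_DATE` variable name is a made-up example for illustration, not an actual Gemini CLI setting, and this is not how BoltAI necessarily spawns the CLI.

```python
import datetime
import os
import subprocess

def run_cli_with_date_context(cmd_args):
    """Illustrative only: launch a CLI subprocess with the host's
    locale/timezone context inherited, plus the current date exposed
    in an extra (hypothetical) environment variable."""
    env = os.environ.copy()  # inherit TZ, LANG, PATH, etc. from the host
    env["CURRENT_DATE"] = datetime.date.today().isoformat()
    return subprocess.run(cmd_args, env=env, capture_output=True, text=True)
```

When a GUI app launches a helper process with a stripped-down environment instead of inheriting the user's, date, timezone, and locale context can silently diverge from what the same tool sees in a terminal, which matches the symptom described.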

Risinggoblin About 1 month ago

1
💻

BoltAI Mac v2