
Workflow Palette Unresponsive Across Multiple Devices; Duplicate "Instant Workflow" Entries Cannot Be Removed

Environment:
- App: BoltAI
- OS: macOS (tested on multiple devices)
- Issue persists after: reinstallation, shortcut reassignment

Description:
The workflow palette no longer opens when triggered via keyboard shortcut or UI. The issue is reproducible across multiple macOS devices and persists after the following troubleshooting steps:
- Reassigned the workflow palette shortcut to a different key combination.
- Completely uninstalled and reinstalled BoltAI.
- Restarted the devices.
None of the above resolved the issue.

Additional Issue — Duplicate "Instant Workflow" Entries (attached below):
The workflow list displays multiple duplicate entries labeled "Instant Workflow." Attempting to delete these duplicates has no effect — the entries remain after deletion attempts, and no error or confirmation is shown.

Steps to Reproduce:
1. Open BoltAI.
2. Trigger the workflow palette via the assigned shortcut or UI button.
   - Expected: Workflow palette opens.
   - Actual: Nothing happens. No palette, no error.
3. Navigate to the workflow list and observe the duplicate "Instant Workflow" entries.
4. Attempt to delete any duplicate.
   - Expected: Entry is removed.
   - Actual: Entry persists with no feedback.

Impact:
Workflows are completely inaccessible, making a core feature of the app unusable.

Troubleshooting Already Performed:
- Shortcut reassignment (multiple combinations tested)
- Full uninstall and reinstall
- Tested on multiple macOS devices/different licenses — same behavior on all

IbrahimMan 3 days ago


BoltAI Mac v2

[Feature Request] Custom API Headers & Body Parameters per Model

## The Problem

Anthropic just released Claude Opus 4.6 Fast Mode — a research preview that delivers up to 2.5x faster output token generation. To use it, you need to pass two things that BoltAI currently doesn't support:

1. A custom HTTP header: `anthropic-beta: fast-mode-2026-02-01`
2. A custom body parameter: `"speed": "fast"`

The model ID stays the same (`claude-opus-4-6`), so you can't work around this by just adding a new model. Without these fields, the request goes through as standard Opus 4.6 — no speed boost, and no way to opt in.

This isn't just about Fast Mode. Anthropic (and other providers) increasingly use beta headers and extra body parameters for new features:

- `anthropic-beta: context-1m-2025-08-07` — 1M token context window
- `anthropic-beta: prompt-caching-2024-07-31` — prompt caching
- `"speed": "fast"` — fast inference mode
- `"inference_geo": "us"` — data residency controls

Right now, none of these are accessible from BoltAI.

## The Suggestion

In the Edit Model screen, add two optional fields:

- **Custom Headers** — a key-value input (or raw text field) for extra HTTP headers.
- **Custom Body Parameters** — a key-value input (or JSON field) for additional parameters injected into the request body.

### Example Configuration

For Opus 4.6 Fast Mode:

| Field             | Value                                  |
|-------------------|----------------------------------------|
| Model ID          | `claude-opus-4-6`                      |
| Custom Header     | `anthropic-beta: fast-mode-2026-02-01` |
| Custom Body Param | `"speed": "fast"`                      |

For the 1M context window:

| Field         | Value                                   |
|---------------|-----------------------------------------|
| Model ID      | `claude-opus-4-6`                       |
| Custom Header | `anthropic-beta: context-1m-2025-08-07` |

## Why It Fits BoltAI

BoltAI already gives power users granular control over model parameters like Top-P, Temperature, and Frequency Penalty. Custom headers and body params are the natural next step — they unlock every beta/preview feature from any provider without waiting for BoltAI to add explicit UI support for each one.

This is a forward-looking solution: instead of playing catch-up with every new API flag, you give users the tools to configure it themselves on day one.

## Priority

High — Fast Mode is available now and providers are shipping new beta features at an accelerating pace. Every week without this means users either can't access new capabilities or have to fall back to curl/scripts.
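To make the request shape concrete, here is a minimal Python sketch of how per-model custom headers and body parameters could be merged into an Anthropic-style request. The `custom_headers`/`custom_body` field names are hypothetical, not BoltAI's actual internals:

```python
# Hypothetical sketch: merging user-defined headers and body params
# into an Anthropic-style chat request. Not BoltAI's real code.

def build_request(model_id, messages, custom_headers=None, custom_body=None):
    """Return (headers, body) with per-model overrides merged in."""
    headers = {"content-type": "application/json"}
    headers.update(custom_headers or {})  # e.g. anthropic-beta flags

    body = {"model": model_id, "max_tokens": 1024, "messages": messages}
    body.update(custom_body or {})  # e.g. {"speed": "fast"}
    return headers, body

headers, body = build_request(
    "claude-opus-4-6",
    [{"role": "user", "content": "hello"}],
    custom_headers={"anthropic-beta": "fast-mode-2026-02-01"},
    custom_body={"speed": "fast"},
)
```

The same mechanism covers the 1M-context and prompt-caching betas: the user supplies the key-value pairs and the client merges them in, with no dedicated UI needed per flag.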

Greg 3 days ago


BoltAI Mobile

Issues with Instant Workflow, {{user_prompt}}, and Keyboard Shortcuts in BoltAI 2

Dear BoltAI Team,

I am having trouble using an Instant Workflow in BoltAI 2 as intended, and I am not sure whether I misconfigured something or if this is a bug.

Setup:
- macOS: latest version
- App: BoltAI 2 (direct license)
- Workflow type: Instant Workflow
- Settings:
  - Input Source: Selection
  - Agent: None (use defaults)
  - Prompt Template:
    INSTRUCTION: {{user_prompt}}
    TEXT: {{input}}
    Only Text, no explanation or comments
  - Preview Mode: No Preview (run immediately)
  - Output Action: Paste at Cursor

Issue 1 – Instant Workflow does not receive the selection
In Outlook I select a German sentence in the email body. I open the Instant Chat Bar with Option+Space and type e.g. "bitte auf englisch übersetzen" ("please translate into English"). The model's response is something like: "Please send the text you want me to translate into English." So it seems that the selected text is not passed as {{input}} into the Instant Workflow, although Input Source is set to Selection.

Question: How do I need to configure the Instant Chat Bar and/or the Instant Workflow so that the current selection is used as {{input}} and the result is directly inserted via "Paste at Cursor"?

Issue 2 – {{user_prompt}} in regular workflows
According to the UI hint, {{user_prompt}} is "only set for Instant". In a regular workflow (type "Workflow") that I trigger via the Workflow Palette, {{user_prompt}} indeed remains empty – only {{input}} is populated.

My questions:
- Is it correct that {{user_prompt}} is only populated for Instant Workflows and will always be empty in regular workflows?
- Is there a way to trigger a specific Instant Workflow directly via a keyboard shortcut (without going through the normal chat UI), using the current selection as input and setting {{user_prompt}}?
- Is it intentional that the special "Instant Workflow" does not appear in the Workflow Palette?

I would appreciate any guidance on the correct configuration or information on whether this is a known issue.

Kind regards,
Martin

Martin Schmid 4 days ago


BoltAI Mac v2

Encryption Performance Feedback — iPhone 16 Pro

TL;DR: Crypto choices (AES-256-GCM + scrypt) are solid. Performance is not — scrypt derivation takes 1.3s with N=4096, which is abnormally slow for this hardware. This points to an implementation issue rather than a configuration one.

Key Findings

1. Scrypt derivation is too slow for the parameters used
N=4096 is actually a low cost factor (OWASP recommends N=32768+), yet it takes 1.3s on an A18 Pro chip. A native C implementation should handle N=16384 in ~200-400ms on this device. This strongly suggests a non-native (pure Swift/JS) scrypt implementation is being used.

2. Encryption re-derives the key every time
Each encrypt call takes ~1.3s regardless of data size (32 bytes vs 1KB — no meaningful difference). The time is entirely spent on scrypt, not on AES. The scrypt cache clearly works in isolation (0.07ms on a cache hit), but encrypt doesn't appear to use it.

3. UX impact
A full encrypt + decrypt cycle takes 2.82s for a simple API key. Users will notice this. Saving or retrieving credentials should feel instant.

Suggestions

1. Use the cached derived key for encrypt/decrypt — the cache mechanism already exists; it just needs to be wired into the encrypt path. This alone would drop encrypt time from 1.3s to single-digit milliseconds.
2. Switch to a native scrypt implementation (or use CommonCrypto / a C binding). This would allow raising N to a more secure value (16384–32768) while still being faster than the current 1.3s at N=4096.
3. Consider Argon2id as an alternative KDF — it's the current OWASP recommendation over scrypt, has good native iOS support, and is easier to tune for mobile.

Test Environment
- Device: iPhone 16 Pro (A18 Pro)
- Algorithm: AES-256-GCM
- KDF: scrypt (N=4096, r=8, p=1)
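The caching pattern from suggestion 1 fits in a few lines. Here is an illustrative Python sketch, with `hashlib.scrypt` standing in for whatever implementation the app actually uses; the cache-keying scheme is an assumption:

```python
import hashlib

# Illustrative sketch of suggestion 1: derive the scrypt key once per
# (password, salt, params) tuple and reuse it, so encrypt/decrypt pay
# the ~1.3s derivation cost at most once instead of on every call.
_key_cache = {}

def derive_key(password: bytes, salt: bytes,
               n: int = 16384, r: int = 8, p: int = 1) -> bytes:
    """Return a cached 32-byte key; scrypt runs only on a cache miss."""
    cache_key = (password, salt, n, r, p)
    if cache_key not in _key_cache:
        _key_cache[cache_key] = hashlib.scrypt(
            password, salt=salt, n=n, r=r, p=p, dklen=32
        )
    return _key_cache[cache_key]
```

With this wiring, only the first call pays for derivation; subsequent encrypt/decrypt calls hit the cache and the AES step becomes the only per-call cost.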

Greg 6 days ago


BoltAI Mobile

[Feature Request] "Paste long text as a file" with customizable threshold

I would love to see a feature that automatically converts large text pastes into an attached file. This is a game-changer for keeping conversation history readable on mobile and for taking full advantage of Prompt Caching.

The Suggestion:
- Add a toggle: "Paste long text as a file".
- The Key Detail: Add a slider or input field to set the threshold (e.g., from 500 to 10,000 characters).
- Benefit: This allows power users to decide exactly when their code or documents should be minimized into a file icon versus staying as a readable message.

Why it fits BoltAI:
Your app already provides incredibly granular control over model parameters like Top-P and Frequency Penalty. Adding a customizable threshold for text handling would perfectly align with this "Power User" philosophy and further differentiate BoltAI from simpler AI clients.

Sincerely,
Greg
iPhone 16 Pro
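The core check behind such a toggle is tiny, which is part of the appeal. A minimal sketch in Python, with a made-up function name and default threshold:

```python
# Hypothetical sketch of the toggle's core check: attach the paste as
# a file only when it exceeds the user-configured character threshold.

def should_attach_as_file(text: str, threshold: int = 2000) -> bool:
    """True when a paste should be converted into an attached file."""
    return len(text) > threshold
```

Everything else (the toggle, the slider, the file-icon rendering) is UI around this one user-tunable comparison.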

Greg 7 days ago


BoltAI Mobile

Responses truncated from Opus 4.5

Hi, over the past few days, and still with the 2.6.2 release (which I just installed, hoping it might fix the issue, but alas no), responses are now truncated very quickly. Example:

> Analysis: Quarterly Review Template (FY26 Q1)
> Lens used: Template Quality Assessment + Messaging-A/B + Inherent Simplicity
> Executive Summary
> This is a template deck, not a completed review. Approximately 90% of the content consists of placeholder text ("x", "Example", "Comment", "Project name"). The deck provides structure

…and then it gets truncated. This has just started happening in the last day or two. I can say "continue" and it will go a little bit farther, but then it truncates again.

Eric Bowman 13 days ago


BoltAI Mac v2

Optional local python script execution support in workflows

A lot of us write scripts (hobby or otherwise) that leverage the APIs of various subsystems we want to gather information from. Once the information is gathered, there is a need to use AI to analyze/convert/format/summarize it before presentation.

Since the foundations of workflows, like custom commands to be executed at will, are already there, I would like an optional local Python script execution section within workflows, so that the input for the AI is generated by the script (alongside the other input sources like selection, clipboard, etc.). The interface need not embed a code editor of any sort, because Python scripts can span modules and multiple code files. Just an input field that takes the Python command would do. This would let any code written by AI tools be readily invoked from BoltAI workflows.

Examples:
- `~/venv/bin/python service_health.py --service` | AI (to explain)
- `~/venv/bin/python get_pipeline_failures.py --pipeline-id` | AI (to investigate)

I can understand if this is not feasible (like literally embedding it in the workflow section), but I would love to have the feature implemented in any other way too, so that all of us can automate many things in our day-to-day lives.
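Under the hood, this could be as simple as capturing the script's stdout and feeding it into the workflow as {{input}}. A minimal Python sketch of that idea; the function name and timeout are assumptions, not anything BoltAI ships:

```python
import subprocess

# Illustrative sketch: run the user's local command and return its
# stdout, which the workflow would then pass to the model as {{input}}.

def run_workflow_script(command: list[str], timeout: int = 60) -> str:
    """Execute a local command and return its stdout as workflow input."""
    result = subprocess.run(command, capture_output=True, text=True,
                            timeout=timeout)
    result.check_returncode()  # surface script failures to the workflow
    return result.stdout

# e.g. run_workflow_script(["/Users/me/venv/bin/python", "service_health.py"])
# (a "~" prefix would need os.path.expanduser before being executed)
```

A single input field holding the command line is enough to drive this; the script itself can live anywhere and span as many modules as it likes.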

prasadvelidi 15 days ago


BoltAI Mac v2

Generation fails when app or chat is not active

I’ve encountered a few cases where I send a message and it fails to generate:

When the app is not active (switching apps or locking the phone after sending a message):
- Received the error "fetch failed: Network connection was lost" with Claude Sonnet 4.5
- Received the error "could not parse response" with gpt-5-mini
- Did NOT receive an error with gpt-5

When opening a different chat in BoltAI while the response is being generated:
- No error, no response with gpt-5, gpt-5-mini, and Claude Sonnet 4.5

Since tool calling and reasoning sometimes take a few minutes, this is a pretty big obstacle to multitasking. I'll do some more testing later; it could totally be just those models or something about my setup.

harrisonfloam 16 days ago


BoltAI Mobile