
AI-Enabled Features: Loading States, Text Streaming, and Feedback Buttons

🍿 Preview

https://github.com/user-attachments/assets/bc16597b-1632-46bb-b7aa-fe22330daf84

⚡ Getting Started

Try the AI Agent Sample app to explore the AI-enabled features and existing Assistant helper:

:::bash
# Create a new AI Agent app
$ slack create slack-ai-agent-app --template slack-samples/bolt-js-assistant-template
$ cd slack-ai-agent-app/

# Add your OPENAI_API_KEY
$ export OPENAI_API_KEY=sk-proj-ahM...

# Run the local dev server
$ slack run

⌛ Loading States

Loading states allow you not only to set the status (e.g. "My app is typing...") but also to sprinkle in some personality by cycling through a collection of loading messages:

Web Client SDK:

:::javascript
app.event('message', async ({ client, context, event, logger }) => {
    // ...
    await client.assistant.threads.setStatus({
        channel_id: channelId,
        thread_ts: threadTs,
        status: 'thinking...',
        loading_messages: [
            'Teaching the hamsters to type faster…',
            'Untangling the internet cables…',
            'Consulting the office goldfish…',
            'Polishing up the response just for you…',
            'Convincing the AI to stop overthinking…',
        ],
    });

    // Start a new message stream
});

Assistant Class:

:::javascript
const assistant = new Assistant({
    threadStarted: assistantThreadStarted,
    threadContextChanged: assistantThreadContextChanged,
    userMessage: async ({ client, context, logger, message, getThreadContext, say, setTitle, setStatus }) => {
        await setStatus({
            status: 'thinking...',
            loading_messages: [
                'Teaching the hamsters to type faster…',
                'Untangling the internet cables…',
                'Consulting the office goldfish…',
                'Polishing up the response just for you…',
                'Convincing the AI to stop overthinking…',
            ],
        });
    },
});
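
The examples in this section and the streaming examples below reference channelId, threadTs, teamId, and userId without defining them. One possible way to derive them from an incoming message event, shown here as an assumption rather than part of the release:

:::javascript
app.event('message', async ({ client, context, event, logger }) => {
    // Hypothetical derivation of the identifiers used throughout these examples
    const channelId = event.channel;              // channel the message arrived in
    const threadTs = event.thread_ts ?? event.ts; // reply in the existing thread, or start one
    const teamId = context.teamId;                // workspace to stream the reply to
    const userId = event.user;                    // user who sent the message
    // ...
});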

🔮 Text Streaming Helper

The client.chatStream() helper utility streamlines calling the three text streaming methods:

:::javascript
app.event('message', async ({ client, context, event, logger }) => {
    // ...

    // Start a new message stream
    const streamer = client.chatStream({
        channel: channelId,
        recipient_team_id: teamId,
        recipient_user_id: userId,
        thread_ts: threadTs,
    });

    // Loop over OpenAI response stream
    // https://platform.openai.com/docs/api-reference/responses/create
    for await (const chunk of llmResponse) {
        if (chunk.type === 'response.output_text.delta') {
            await streamer.append({
                markdown_text: chunk.delta,
            });
        }
    }

    // Stop the stream and attach feedback buttons
    await streamer.stop({ blocks: [feedbackBlock] });
});
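
The llmResponse iterated above is a streamed LLM response; the release notes do not show how it is created. A minimal sketch using the OpenAI Responses API, where the model name and prompt are placeholder assumptions:

:::javascript
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Request a streamed response from the OpenAI Responses API
// https://platform.openai.com/docs/api-reference/responses/create
const llmResponse = await openai.responses.create({
    model: 'gpt-4o-mini',               // placeholder model
    input: 'Reply to the user message', // placeholder prompt
    stream: true,
});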

🔠 Text Streaming Methods

As an alternative to the Text Streaming Helper, you can call the three individual methods.

1) client.chat.startStream

First, start a chat text stream to stream a response to any message:

:::javascript
app.event('message', async ({ client, context, event, logger }) => {
    // ...
    const streamResponse = await client.chat.startStream({
        channel: channelId,
        recipient_team_id: teamId,
        recipient_user_id: userId,
        thread_ts: threadTs,
    });

    const streamTs = streamResponse.ts; // keep the stream timestamp for appendStream/stopStream below

2) client.chat.appendStream

After starting a chat text stream, you can then append text to it in chunks (often from your favourite LLM SDK) to convey a streaming effect:

:::javascript
for await (const chunk of llmResponse) {
    if (chunk.type === 'response.output_text.delta') {
        await client.chat.appendStream({
            channel: channelId,
            markdown_text: chunk.delta,
            ts: streamTs,
        });
    }
}

3) client.chat.stopStream

Lastly, you can stop the chat text stream to finalize your message:

:::javascript
await client.chat.stopStream({
    blocks: [feedbackBlock],
    channel: channelId,
    ts: streamTs,
});

👍🏻 Feedback Buttons

Add feedback buttons to the bottom of a message, after stopping a text stream, to gather user feedback:

:::javascript
const feedbackBlock = {
    type: 'context_actions',
    elements: [{
        type: 'feedback_buttons',
        action_id: 'feedback',
        positive_button: {
            text: { type: 'plain_text', text: 'Good Response' },
            accessibility_label: 'Submit positive feedback on this response',
            value: 'good-feedback',
        },
        negative_button: {
            text: { type: 'plain_text', text: 'Bad Response' },
            accessibility_label: 'Submit negative feedback on this response',
            value: 'bad-feedback',
        },
    }],
};

// Using the Text Streaming Helper
await streamer.stop({ blocks: [feedbackBlock] });

:::javascript
// Or, using the Text Streaming Method
await client.chat.stopStream({
    blocks: [feedbackBlock],
    channel: channelId,
    ts: streamTs,
});
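
The release notes stop at attaching the buttons; receiving the presses is left to the app. One way to acknowledge them is a standard Bolt action listener on the feedback action_id, with the handler body below being an assumption:

:::javascript
// Hypothetical listener for the feedback_buttons element defined above
app.action('feedback', async ({ ack, action, body, logger }) => {
    await ack(); // acknowledge the interaction
    // Record which button was pressed (payload shape for feedback_buttons is assumed
    // to carry the value configured in the block definition above)
    logger.info('Feedback received', { action, user: body.user.id });
});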

What's Changed

👾 Enhancements

🐛 Bug fixes

📚 Documentation

🤖 Dependencies

🧰 Maintenance

New Contributors 🎉

Milestone: https://github.com/slackapi/bolt-js/milestone/59?closed=1
Full Changelog: https://github.com/slackapi/bolt-js/compare/@slack/bolt@4.4.0...@slack/bolt@4.5.0
