Create an AI-powered chat app that can answer questions about blockchain data using Dune’s Sim APIs and OpenAI’s function calling feature.
The Simchat interface we'll build - a conversational assistant for blockchain data
In this guide, you’ll learn how to build Simchat, an AI chat agent that can provide realtime blockchain insights through natural conversation.
Users can ask questions about wallet balances, transaction history, NFT collections, and token information across 60+ EVM chains and Solana, and the agent will fetch and explain the data in a friendly way.
By combining OpenAI’s LLMs with the realtime blockchain data provided by Sim APIs, you’ll create a chat agent that makes onchain data accessible to everyone, regardless of their technical expertise.
```
simchat/
├── server.js        # Express server with OpenAI integration
├── chat.html        # Chat interface
├── package.json     # Project configuration
├── .env             # API keys (keep private)
└── node_modules/    # Dependencies
```
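If you're starting from scratch, the initial server.js only needs to load your API keys and serve the chat page. Here's a minimal sketch of what that scaffold might look like (it assumes the express, openai, and dotenv packages are installed; your starter file may differ slightly):

```javascript
// server.js — minimal scaffold (a sketch; adjust to match your own setup)
require('dotenv').config();
const express = require('express');
const path = require('path');
const OpenAI = require('openai');

const app = express();
app.use(express.json());

// OpenAI client used by the /chat endpoint we add later
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Serve the chat interface
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname, 'chat.html'));
});

app.listen(3001, () => {
  console.log('Simchat running at http://localhost:3001');
});
```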
Run node server.js in the terminal to start the server.
Visit http://localhost:3001 to see the newly scaffolded chat app.
Our newly created chat front-end UI is ready.
If you try to send a message at this point, you’ll get a server error since we haven’t implemented the back-end functionality yet.
If you encounter errors, make sure your .env file contains the correct OPENAI_API_KEY and SIM_API_KEY.
Check your terminal for any error messages from server.js.
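Your .env file should look something like this, with your own keys substituted in:

```
OPENAI_API_KEY=sk-your-openai-key
SIM_API_KEY=your-sim-api-key
```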
Now let’s add the core chat functionality to our Express server using OpenAI’s GPT-4o-mini.
We’ll start by defining a system prompt that instructs the LLM on its role and capabilities.
Add this SYSTEM_PROMPT variable to your server.js file:
```javascript
// System prompt that instructs the AI assistant
const SYSTEM_PROMPT = `You are a helpful assistant that can answer questions about blockchain data using Dune's Sim APIs. You have access to various functions that can fetch realtime blockchain data including:
- Token balances for wallets across 60+ EVM chains
- Transaction activity and history
- NFT collections and collectibles
- Token metadata and pricing information
- Token holder distributions
- Supported blockchain networks

When users ask about blockchain data, wallet information, token details, or transaction history, use the appropriate functions to fetch realtime data. Always provide clear, helpful explanations of the data you retrieve.

Keep your responses concise and focused. When presenting large datasets, summarize the key findings rather than listing every detail.`;
```
This system prompt sets the context for the LLM, explaining its capabilities and how it should behave when interacting with users.
Now let’s implement the basic chat endpoint with Express.js that uses this system prompt.
The /chat endpoint will receive POST requests from our frontend chat interface, process them through the LLM, and return responses to display in the chat:
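Here's a minimal sketch of that basic endpoint; it mirrors the structure of the function-calling version we'll build later, minus the tool definitions:

```javascript
// Basic chat endpoint (no function calling yet)
app.post('/chat', async (req, res) => {
  try {
    const { message } = req.body;
    if (!message) return res.status(400).json({ error: 'Message is required' });

    // Send the system prompt plus the user's message to gpt-4o-mini
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [
        { role: "system", content: SYSTEM_PROMPT },
        { role: "user", content: message }
      ],
      max_tokens: 2048
    });

    res.json({ message: response.choices[0].message.content });
  } catch (error) {
    console.error('Chat error:', error);
    res.status(500).json({ error: 'An error occurred while processing your request' });
  }
});
```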
Run node server.js again and visit http://localhost:3001.
You’ll have a working chat interface powered by OpenAI’s gpt-4o-mini model with a custom system prompt, but it won’t be able to fetch realtime blockchain data yet.
The chat is now working with OpenAI responses, but not yet fetching blockchain data
To make our chatbot fetch realtime blockchain data, we need to use OpenAI’s function calling feature.
When the model determines it needs external data, it will call one of these functions with appropriate parameters, and we can then execute the actual API call and provide the results back to the model.
Add this functions array to your server.js file:
```javascript
// Function definitions for OpenAI function calling
const functions = [
  {
    type: "function",
    function: {
      name: "get_token_balances",
      description: "Get realtime token balances for an EVM wallet address across multiple chains. Returns native and ERC20 token balances with USD values.",
      parameters: {
        type: "object",
        properties: {
          address: { type: "string", description: "The wallet address to get balances for (e.g., 0xd8da6bf26964af9d7eed9e03e53415d37aa96045)" },
          exclude_spam_tokens: { type: "boolean", description: "Whether to exclude spam tokens from results", default: true }
        },
        required: ["address"],
        additionalProperties: false
      }
    }
  },
  {
    type: "function",
    function: {
      name: "get_wallet_activity",
      description: "Get chronologically ordered transaction activity for an EVM wallet including transfers, contract interactions, and decoded function calls.",
      parameters: {
        type: "object",
        properties: {
          address: { type: "string", description: "The wallet address to get activity for" },
          limit: { type: "number", description: "Maximum number of activities to return (default: 25)", default: 25 }
        },
        required: ["address"],
        additionalProperties: false
      }
    }
  },
  {
    type: "function",
    function: {
      name: "get_nft_collectibles",
      description: "Get NFT collectibles (ERC721 and ERC1155) owned by an EVM wallet address.",
      parameters: {
        type: "object",
        properties: {
          address: { type: "string", description: "The wallet address to get NFTs for" },
          limit: { type: "number", description: "Maximum number of collectibles to return (default: 50)", default: 50 }
        },
        required: ["address"],
        additionalProperties: false
      }
    }
  },
  {
    type: "function",
    function: {
      name: "get_token_info",
      description: "Get detailed metadata and pricing information for a specific token on EVM chains.",
      parameters: {
        type: "object",
        properties: {
          token_address: { type: "string", description: "The token contract address or 'native' for native tokens" },
          chain_ids: { type: "string", description: "Chain IDs to search on (e.g., '1,137,8453' or 'all')", default: "all" }
        },
        required: ["token_address"],
        additionalProperties: false
      }
    }
  },
  {
    type: "function",
    function: {
      name: "get_token_holders",
      description: "Get token holders for a specific ERC20 or ERC721 token, ranked by wallet value.",
      parameters: {
        type: "object",
        properties: {
          chain_id: { type: "number", description: "The chain ID where the token exists (e.g., 1 for Ethereum)" },
          token_address: { type: "string", description: "The token contract address" },
          limit: { type: "number", description: "Maximum number of holders to return (default: 100)", default: 100 }
        },
        required: ["chain_id", "token_address"],
        additionalProperties: false
      }
    }
  },
  {
    type: "function",
    function: {
      name: "get_transactions",
      description: "Get detailed transaction information for an EVM wallet address.",
      parameters: {
        type: "object",
        properties: {
          address: { type: "string", description: "The wallet address to get transactions for" },
          limit: { type: "number", description: "Maximum number of transactions to return (default: 25)", default: 25 }
        },
        required: ["address"],
        additionalProperties: false
      }
    }
  },
  {
    type: "function",
    function: {
      name: "get_supported_chains",
      description: "Get list of all supported EVM chains and their capabilities.",
      parameters: {
        type: "object",
        properties: {},
        additionalProperties: false
      }
    }
  },
  {
    type: "function",
    function: {
      name: "get_svm_token_balances",
      description: "Get token balances for a Solana (SVM) address. Returns native and SPL token balances with USD values.",
      parameters: {
        type: "object",
        properties: {
          address: { type: "string", description: "The Solana wallet address to get balances for (e.g., DYw8jCTfwHNRJhhmFcbXvVDTqWMEVFBX6ZKUmG5CNSKK)" },
          limit: { type: "number", description: "Maximum number of balances to return (default: 100)", default: 100 },
          chains: { type: "string", description: "Comma-separated list of chains to include, or 'all' for all supported chains", default: "all" }
        },
        required: ["address"],
        additionalProperties: false
      }
    }
  },
  {
    type: "function",
    function: {
      name: "get_svm_token_metadata",
      description: "Get metadata for a Solana token mint address.",
      parameters: {
        type: "object",
        properties: {
          mint: { type: "string", description: "The Solana token mint address (e.g., So11111111111111111111111111111111111111112)" }
        },
        required: ["mint"],
        additionalProperties: false
      }
    }
  }
];
```
Each function corresponds to a different Sim API endpoint that we’ll implement next.
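Each of those functions needs to be mapped to a Sim API route and its query parameters. Below is a sketch of an API_CONFIGS map that does this: each entry takes the function's arguments in order and returns the endpoint path plus its parameters. The endpoint paths shown here are assumptions based on Sim's API reference, so verify them against the current docs:

```javascript
// Maps each function name to a builder that returns [endpointPath, params].
// NOTE: the endpoint paths are assumptions — double-check them against Sim's API reference.
const API_CONFIGS = {
  // Simple objects for basic query parameters
  get_token_balances: (address, exclude_spam_tokens = true) =>
    [`/v1/evm/balances/${address}`, { exclude_spam_tokens }],
  get_wallet_activity: (address, limit = 25) =>
    [`/v1/evm/activity/${address}`, { limit }],
  get_nft_collectibles: (address, limit = 50) =>
    [`/v1/evm/collectibles/${address}`, { limit }],
  // URLSearchParams for more complex query strings
  get_token_info: (token_address, chain_ids = "all") =>
    [`/v1/evm/token-info/${token_address}`, new URLSearchParams({ chain_ids })],
  // Multiple path parameters
  get_token_holders: (chain_id, token_address, limit = 100) =>
    [`/v1/evm/token-holders/${chain_id}/${token_address}`, { limit }],
  get_transactions: (address, limit = 25) =>
    [`/v1/evm/transactions/${address}`, { limit }],
  // Endpoint with no parameters
  get_supported_chains: () => ["/v1/evm/supported-chains", {}],
  get_svm_token_balances: (address, limit = 100, chains = "all") =>
    [`/beta/svm/balances/${address}`, new URLSearchParams({ limit, chains })],
  get_svm_token_metadata: (mint) => [`/beta/svm/token-metadata/${mint}`, {}]
};
```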
This configuration handles all the different patterns of Sim APIs: simple objects for basic query parameters, URLSearchParams for complex query strings, multiple path parameters, and endpoints with no parameters.
Now we need an enhanced callFunction that can handle both regular objects and URLSearchParams:
```javascript
// Function to execute API calls based on function name
async function callFunction(name, args) {
  if (!API_CONFIGS[name]) return JSON.stringify({ error: `Unknown function: ${name}` });

  const [endpoint, params] = API_CONFIGS[name](...Object.values(args));
  const result = await apiCall(endpoint, params);
  return JSON.stringify(result);
}
```
This approach maintains the streamlined API_CONFIGS pattern while properly handling all the different parameter types and patterns used by the various Sim API endpoints.
The apiCall function can handle both URLSearchParams objects (for complex queries) and regular objects (for simple query parameters).
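If you haven't defined it yet, a minimal sketch of apiCall could look like this. It builds a query string from either form of params and calls the Sim API with your key. The base URL and the X-Sim-Api-Key header name are assumptions taken from Sim's API reference, and the global fetch requires Node 18+:

```javascript
const SIM_API_BASE = "https://api.sim.dune.com"; // assumed base URL — check Sim's docs

// Minimal sketch of the apiCall helper
async function apiCall(endpoint, params = {}) {
  // URLSearchParams accepts either a plain object or another URLSearchParams instance
  const queryString = new URLSearchParams(params).toString();
  const url = `${SIM_API_BASE}${endpoint}${queryString ? `?${queryString}` : ""}`;

  try {
    const response = await fetch(url, {
      headers: { "X-Sim-Api-Key": process.env.SIM_API_KEY } // assumed header name
    });
    if (!response.ok) {
      return { error: `Sim API request failed with status ${response.status}` };
    }
    return await response.json();
  } catch (error) {
    return { error: error.message };
  }
}
```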
Finally, we need to update our chat endpoint to handle function calls.
Replace your existing /chat endpoint with this version that includes function calling support:
```javascript
// Enhanced chat endpoint with function calling
app.post('/chat', async (req, res) => {
  try {
    const { message } = req.body;
    if (!message) return res.status(400).json({ error: 'Message is required' });

    // Create conversation with system prompt
    const messages = [
      { role: "system", content: SYSTEM_PROMPT },
      { role: "user", content: message }
    ];

    // Call OpenAI with function definitions
    const response = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: messages,
      tools: functions,
      tool_choice: "auto",
      max_tokens: 2048
    });

    let assistantMessage = response.choices[0].message;

    // Handle function calls if present
    if (assistantMessage.tool_calls) {
      messages.push(assistantMessage);

      // Execute each function call
      for (const toolCall of assistantMessage.tool_calls) {
        const functionResult = await callFunction(
          toolCall.function.name,
          JSON.parse(toolCall.function.arguments)
        );
        messages.push({
          role: "tool",
          tool_call_id: toolCall.id,
          content: functionResult
        });
      }

      // Get final response with function results
      const finalResponse = await openai.chat.completions.create({
        model: "gpt-4o-mini",
        messages: messages,
        tools: functions,
        tool_choice: "auto",
        max_tokens: 2048
      });

      assistantMessage = finalResponse.choices[0].message;
    }

    res.json({
      message: assistantMessage.content,
      function_calls: assistantMessage.tool_calls || []
    });
  } catch (error) {
    console.error('Chat error:', error);
    res.status(500).json({
      error: 'An error occurred while processing your request',
      details: error.message
    });
  }
});
```
This enhanced endpoint now supports the full function calling workflow: it sends the user’s message to OpenAI with the available functions, executes any function calls that the model makes, and then sends the function results back to get the final conversational response.
Restart your server and test function calling. Try asking a question like "What tokens does vitalik.eth have?" and watch your chat agent fetch realtime data from Sim APIs to provide accurate, up-to-date responses.
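You can also test the endpoint directly from the terminal:

```bash
curl -X POST http://localhost:3001/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "What tokens does vitalik.eth have?"}'
```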
You’ve successfully built a realtime chat agent that makes blockchain data accessible through natural conversation.
By combining OpenAI’s LLMs with Sim APIs’ comprehensive blockchain data, you’ve created a tool that can instantly fetch and explain complex onchain information across 60+ EVM chains and Solana.
This foundation provides everything you need to build your own specialized blockchain chat assistants.
Consider extending it for your own specific use cases.
The complete source code on GitHub includes additional features, such as full session management and enhanced error handling, that weren't covered in this guide.
Explore the repository to see these features in action.