Prompt
Create a prompt template with dynamic variables for LLM, supporting fields such as {input}, {human_input}, {context}, {chat_history}, {system_message}.
A real-time chatbot that uses Google Search restricted to your own domain, retrieves relevant web content, and leverages OpenAI LLM to answer user queries with up-to-date information. Ideal for providing accurate, domain-specific responses in customer support or information portals.

Flows
Below is a complete list of all components used in this flow to achieve its functionality. Components are the building blocks of every AI Flow. They allow you to create complex interactions and automate tasks by connecting various functionalities. Each component serves a specific purpose, such as handling user input, processing data, or integrating with external services.
The Chat Input component in FlowHunt initiates user interactions by capturing messages from the Playground. It serves as the starting point for flows, enabling the workflow to process both text and file-based inputs.
The Chat Output component in FlowHunt finalizes chatbot responses with flexible, multi-part outputs. It completes the flow and is essential for building advanced, interactive AI chatbots.
The Button Widget component in FlowHunt transforms text or input into interactive, clickable buttons within your workflow. Perfect for creating dynamic user interfaces, collecting user choices, and improving engagement in AI-driven chatbots or automated processes.
The Chat Opened Trigger component detects when a chat session starts, enabling workflows to respond instantly as soon as a user opens the chat. It initiates flows with the initial chat message, making it essential for building responsive, interactive chatbots.
The Chat History component in FlowHunt enables chatbots to remember previous messages, ensuring coherent conversations and improved customer experience while optimizing memory and token usage.
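The token-optimization side of chat history can be illustrated with a small sketch. This is not FlowHunt's implementation: the helper name is hypothetical, and a character budget stands in for real token counting.

```python
def trim_history(messages: list[str], max_chars: int = 200) -> list[str]:
    """Keep the most recent messages whose combined length fits the budget.

    A crude stand-in for token-based trimming: walk backwards from the
    newest message, stop once the budget would be exceeded.
    """
    kept, total = [], 0
    for msg in reversed(messages):
        if total + len(msg) > max_chars:
            break
        kept.append(msg)
        total += len(msg)
    return list(reversed(kept))  # restore chronological order
```

Trimming from the oldest end preserves the messages most relevant to the current turn while keeping prompt size bounded.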
FlowHunt's Prompt component lets you define your AI bot's role and behavior, ensuring relevant, personalized responses. You can customize prompts and templates for effective, context-aware chatbot flows.
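The idea behind a prompt template with dynamic variables such as {input}, {context}, {chat_history}, and {system_message} can be sketched in a few lines of Python. The template text and helper name here are illustrative, not FlowHunt's actual defaults.

```python
# Minimal sketch of a prompt template with dynamic variables.
# The field names mirror those the Prompt component supports;
# the template wording itself is an example.
TEMPLATE = (
    "{system_message}\n\n"
    "Context:\n{context}\n\n"
    "Chat history:\n{chat_history}\n\n"
    "User: {input}"
)

def render_prompt(system_message: str, context: str,
                  chat_history: str, input: str) -> str:
    """Substitute the dynamic variables into the template."""
    return TEMPLATE.format(
        system_message=system_message,
        context=context,
        chat_history=chat_history,
        input=input,
    )
```

At generation time, each placeholder is filled with live data from the flow: retrieved documents for {context}, prior turns for {chat_history}, and the current message for {input}.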
The Generator component in FlowHunt provides AI-driven text generation using your chosen LLM model. It creates dynamic chatbot responses by combining prompts, optional system instructions, and even images as input, making it a core tool for building intelligent, conversational workflows.
FlowHunt supports dozens of text generation models, including models by OpenAI. Here's how to use ChatGPT in your AI tools and chatbots.
Query Expansion in FlowHunt enhances chatbot understanding by finding synonyms, fixing spelling errors, and ensuring consistent, accurate responses for user queries.
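The effect of query expansion can be shown with a toy, dictionary-based sketch: produce alternative phrasings of a query so a downstream search covers more results. FlowHunt's component works differently under the hood; the synonym table and function name here are stand-ins for demonstration only.

```python
# Toy illustration of query expansion. A real component would use an
# LLM or a much larger lexicon; this table is demonstration-only.
SYNONYMS = {
    "cheap": ["inexpensive", "affordable"],
    "fix": ["repair", "resolve"],
}

def expand_query(query: str) -> list[str]:
    """Return the normalized query plus variants with one word swapped."""
    words = query.lower().split()
    variants = [" ".join(words)]
    for i, word in enumerate(words):
        for alt in SYNONYMS.get(word, []):
            variants.append(" ".join(words[:i] + [alt] + words[i + 1:]))
    return variants
```

Each variant is then searched independently, which raises the chance that at least one phrasing matches how the target pages are actually worded.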
FlowHunt's GoogleSearch component enhances chatbot accuracy using Retrieval-Augmented Generation (RAG) to access up-to-date knowledge from Google. Control results with options like language, country, and query prefixes for precise and relevant outputs.
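The query_prefix option effectively prepends a Google search operator to every query. A minimal sketch of that behavior (the function itself is illustrative, not part of the FlowHunt API):

```python
def build_search_query(user_query: str, query_prefix: str = "") -> str:
    """Prepend an optional prefix, such as a Google `site:` operator,
    to the user's query. Mirrors the effect of the component's
    query_prefix field."""
    if query_prefix:
        return f"{query_prefix} {user_query}".strip()
    return user_query
```

With query_prefix set to `site:www.example.com`, every search the chatbot issues is scoped to that domain.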
The URL Retriever component unlocks web content in your workflows by extracting and processing text and metadata from any list of URLs, including web articles and documents. It supports advanced options such as OCR for images, selective metadata extraction, and customizable caching, making it ideal for building knowledge-rich AI flows and automations.
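The core of URL retrieval, turning fetched HTML into plain text an LLM can use, can be sketched with the standard library. Fetching is omitted here; the class and function names are illustrative, not the component's internals.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from HTML, skipping <script> and <style> contents."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside script/style elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def extract_text(page: str) -> str:
    """Return the visible text of an HTML page as one space-joined string."""
    parser = TextExtractor()
    parser.feed(page)
    return " ".join(parser.parts)
```

The extracted text is what gets packed into the {context} slot of the prompt in the next step.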
Flow description
This workflow implements a simple Retrieval-Augmented Generation (RAG) chatbot that leverages real-time Google Search to retrieve up-to-date information from the internet—specifically, it can be customized to restrict all searches to a particular domain. The main goal is to create a chatbot that can answer user queries using the most relevant and recent content found online, making it highly valuable for scenarios where static knowledge bases are insufficient.
The workflow is composed of several modular blocks, each representing a specific capability. Below is a breakdown of the workflow’s structure and functionality:
| Component | Role |
|---|---|
| Chat Input | Receives user queries and chat messages. |
| Chat History | Maintains conversation history for context-aware responses. |
| Query Expansion | Paraphrases user input into multiple alternative queries to improve search coverage. |
| Google Search | Executes searches on Google, restricted by a customizable domain prefix. |
| URL Retriever | Extracts content from the URLs returned by Google Search. |
| Prompt Template | Structures context, user input, and history for the language model. |
| OpenAI LLM | Generates responses using a language model (e.g., GPT-4). |
| Generator | Invokes the LLM with the prompt and context to produce the answer. |
| Chat Output | Displays chatbot responses to the user. |
| Button Widgets | Provides quick example queries for users to try with a single click. |
| Chat Opened Trigger | Initializes the conversation and populates quick-start buttons. |
When a user opens the chat, the Chat Opened Trigger activates. This initializes the chat interface and presents several Button Widgets with example queries (e.g., “what dinosaur has 500 teeth?”). When a user clicks a button or enters a custom message via Chat Input, the workflow proceeds as follows:
1. Query Expansion: The user’s input is paraphrased into multiple versions to maximize the likelihood of retrieving relevant search results.
2. Google Search: The expanded queries are sent to Google Search. By default, the search is limited to a specific domain via the query_prefix field (e.g., site:www.YOURDOMAIN.com), allowing you to focus the chatbot’s knowledge on your own website or any other trusted source.
3. URL Retriever: The workflow retrieves the content of the top search results (URLs) as full documents.
4. Prompt Assembly: The retrieved content, user input, and chat history are combined by the Prompt Template component to provide rich context for the answer.
5. Language Model Generation: The prompt is sent to the OpenAI LLM, which generates a coherent and contextually relevant response.
6. Response Output: The generated answer is displayed to the user via the Chat Output component.
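The steps above can be sketched as a single pipeline. The search, retrieval, and LLM calls are stubbed out as injected functions, since the real flow delegates them to the Google Search, URL Retriever, and Generator components; the function and parameter names here are hypothetical.

```python
def rag_chatbot_answer(user_input, chat_history, search_fn, fetch_fn, llm_fn,
                       query_prefix="site:www.example.com"):
    """Skeleton of the flow: expand -> search -> retrieve -> prompt -> generate.

    search_fn, fetch_fn, and llm_fn stand in for the Google Search,
    URL Retriever, and Generator components respectively.
    """
    # Stand-in for Query Expansion: real flows produce richer paraphrases.
    queries = [user_input, user_input.lower()]

    # Domain-restricted search across all query variants.
    urls = []
    for q in queries:
        urls.extend(search_fn(f"{query_prefix} {q}"))

    # Fetch each unique URL once, preserving order.
    documents = [fetch_fn(u) for u in dict.fromkeys(urls)]

    # Prompt assembly: context + history + current question.
    context = "\n".join(documents)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"History:\n{chat_history}\n\n"
        f"User: {user_input}"
    )
    return llm_fn(prompt)
```

Because each stage is a plain function, any step can be swapped out, for example replacing the search backend, without touching the rest of the pipeline.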
By customizing the query_prefix, you can ensure the chatbot sources information only from your trusted website or knowledge base, improving the reliability of answers.

In summary:

| Step | Description |
|---|---|
| User Input | User types a question or clicks a quick-start button |
| Query Expansion | Input is paraphrased for broader search coverage |
| Google Search | Searches are performed on Google, restricted to a given domain |
| URL Content Retrieval | Top search result contents are fetched |
| Prompt Construction | User input, search results, and chat history are compiled into a prompt |
| LLM Generation | OpenAI LLM generates a response using the full context |
| Output | Response is shown to the user |
To restrict searches to your own site, set the query_prefix field in the Google Search component (e.g., site:www.YOURDOMAIN.com). By automating the search, retrieval, and answer-generation process, this workflow saves manual research time and ensures users always get the most current and relevant information available.
We help companies like yours develop smart chatbots, MCP servers, AI tools, and other kinds of AI automation to take over repetitive tasks in your organization.