
Search Memory
The Search Memory component lets your flow retrieve information from stored memory based on user queries, supporting context-aware and knowledge-driven workflows.
Component description
How the Search Memory component works
The Search Memory component is designed to retrieve relevant information from your workflow’s memory storage, often referred to as “Long Term Memory”. It takes a user query, searches stored documents or knowledge resources, and returns the most relevant content. This is particularly useful for AI workflows that need to reference previous information, retrieve supporting documents, or provide context-aware responses.
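FlowHunt handles the lookup internally, but conceptually it behaves like a similarity search over previously stored documents: the query is compared against each stored item, and only the closest matches (up to the result limit, above the similarity threshold) are returned. The sketch below is only an illustration of that idea, assuming a simple embedding-based store; `StoredDocument`, `cosine_similarity`, and `search_memory` are hypothetical names, not part of FlowHunt’s API.

```python
from dataclasses import dataclass


@dataclass
class StoredDocument:
    content: str
    embedding: list[float]  # vector created when the document was stored


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def search_memory(query_embedding: list[float],
                  store: list[StoredDocument],
                  result_limit: int = 3,
                  threshold: float = 0.8) -> list[StoredDocument]:
    """Return up to result_limit documents whose similarity to the query
    meets the threshold, most similar first."""
    scored = [(cosine_similarity(query_embedding, doc.embedding), doc)
              for doc in store]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored if score >= threshold][:result_limit]
```

In the actual component you never write this code; the Result limit and threshold settings described below play the role of the `result_limit` and `threshold` parameters.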
What Does the Component Do?
- Purpose: The component searches through stored information in the workflow’s memory using a user-defined query and returns the most relevant pieces of information.
- Use Case: Useful for chatbots, virtual assistants, or any AI process that requires access to previously stored knowledge or documents to provide informed, contextual answers.
Key Features
- Flexible Retrieval: Allows you to specify the number of results, set a similarity threshold, and choose how information is aggregated from documents.
- Customizable Output: You can control what sections/types of content (such as headings or paragraphs) are included in the results.
- Integration with Tools: The retrieved documents can be formatted as messages, raw documents, or as tools for further use in the workflow.
Settings
| Setting Name | Type | Required | Description | Default Value |
|---|---|---|---|---|
| Title | str | No | Title of the block in the output. | Related resources |
| Result limit | int | Yes | Number of results to return. | 3 |
| From pointer | bool | Yes | If true, loads from the best matching point in the document; otherwise, loads all. | true |
| Hide resources | bool | No | If true, hides the retrieved resources from output. | false |
| max_tokens | int | No | Maximum number of tokens in the output text. | 3000 |
| strategy | str | Yes | Strategy for aggregating content: “Concat documents, fill from first up to tokens limit” or “Include equal size from each document”. | Include equal size from each document |
| threshold | float | No | Similarity threshold for retrieved results (0 to 1). | 0.8 |
| tool_description | str | No | Description for the tool, used by agents to understand its function. | (empty) |
| tool_name | str | No | Name for the tool in the agent. | (empty) |
| use_content | multi-select | No | Which content types to export (e.g., H1-H6, Paragraph). | All (H1-H6, Paragraph) |
| verbose | bool | No | Whether to print verbose output for debugging or insights. | false |
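The strategy setting trades breadth against depth when several documents match: the first option fills the token budget from the best match downward, the second gives every match an equal share. A rough sketch of the difference, assuming whitespace-based token counting and hypothetical helper names (this is not the component’s actual code):

```python
def aggregate_concat(docs: list[str], max_tokens: int) -> str:
    """'Concat documents, fill from first up to tokens limit':
    take whole documents in ranking order until the token budget is spent."""
    parts, used = [], 0
    for doc in docs:
        tokens = doc.split()            # crude token count, for illustration only
        take = tokens[: max_tokens - used]
        if not take:
            break
        parts.append(" ".join(take))
        used += len(take)
    return "\n\n".join(parts)


def aggregate_equal(docs: list[str], max_tokens: int) -> str:
    """'Include equal size from each document':
    give every retrieved document the same share of the token budget."""
    if not docs:
        return ""
    share = max_tokens // len(docs)
    return "\n\n".join(" ".join(doc.split()[:share]) for doc in docs)
```

With the first strategy a single long top match can consume most of the max_tokens budget; with the second, every retrieved document contributes, at the cost of truncating longer ones.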
Inputs
| Input Name | Type | Required | Description | Default Value |
|---|---|---|---|---|
| Lookup key | str | No | Key used to locate specific information in Long Term Memory. | (empty) |
| Input query | str | Yes | The search query to use in memory lookup. | (empty) |
Outputs
The component provides multiple output formats to suit different needs:
- Documents (Message): The retrieved information as a message, suitable for direct integration into conversational flows.
- Raw Documents (Document): The unprocessed, raw content of the matched documents for further parsing or analysis.
- Documents As Tool (Tool): The retrieved documents formatted as a tool, enabling chaining or complex agent workflows.
| Output Name | Type | Description |
|---|---|---|
| documents | Message | Retrieved content as message(s) |
| documents_raw | Document | Raw, unprocessed document content |
| documents_as_tool | Tool | Documents formatted for use as a tool in agent workflows |
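When the results are exposed through documents_as_tool, an agent can decide for itself when to consult memory instead of always receiving the documents inline. As an illustration of the pattern (not FlowHunt’s internal wiring), a memory search exposed as an OpenAI-style function-calling tool might look like the sketch below, with the tool_name and tool_description settings filling the corresponding fields:

```python
def build_memory_tool_spec(tool_name: str, tool_description: str) -> dict:
    """Describe the memory search as a function an LLM agent may call."""
    return {
        "type": "function",
        "function": {
            "name": tool_name or "search_memory",
            "description": tool_description
            or "Look up related information in long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query for the memory lookup.",
                    }
                },
                "required": ["query"],
            },
        },
    }
```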
Why Use Search Memory?
- Contextual AI: Enhance your AI’s responses by providing access to previously stored data, making interactions more informed and coherent.
- Knowledge Management: Efficiently leverage existing documentation or user-provided information without manual searching.
- Advanced Customization: Fine-tune retrieval strategies and output formats to fit your specific workflow requirements.
Example Scenarios
- Conversational Agents: Retrieve past interactions or knowledge snippets to maintain context across conversations.
- Research Assistants: Quickly surface relevant documents or passages from a large knowledge base in response to a query.
- Automated Decision Making: Provide supporting evidence from stored memory to justify recommendations or actions.
Summary Table
| Feature | Benefit |
|---|---|
| Query-based search | Finds the most relevant stored information for any user query |
| Output options | Choose between message, raw document, or tool formats |
| Custom retrieval | Control over number of results, similarity threshold, and content |
| Integrates with AI | Ideal for AI agents needing dynamic access to stored knowledge |
This component is a versatile building block for any AI workflow that requires memory search, document retrieval, or contextual augmentation.
Frequently asked questions
- What does the Search Memory component do?
Search Memory lets your workflow retrieve relevant information from stored memory or documents using input queries, making your AI solutions more context-aware.
- How does it select which documents are returned?
It retrieves the documents that best match the input query, with options to limit the number of results and to control the output format or aggregation strategy.
- Can I control the number of results or the type of content?
Yes, you can set a result limit, choose which document content types to include, and adjust the strategy for combining document fragments.
- How does Search Memory help my chatbot or workflow?
By providing access to prior knowledge or long-term memory, your bot can give better-informed, accurate, and contextually relevant answers.
- Is Search Memory suitable for advanced AI applications?
Absolutely. It is designed to integrate into complex flows where retrieving context or knowledge from earlier data is crucial for intelligent automation.
Try FlowHunt Search Memory
Boost your AI solutions by integrating memory search and retrieval. Connect to long-term knowledge and deliver smarter answers.