
How LiveAgent achieved 75% chat automation with FlowHunt AI chatbot

A technical breakdown of six AI functions that reduced support workload by 48.5%. Learn the specific problems each solves, implementation approach, and measurable results from a support operations leader.
Jozef Štofira has spent over 15 years managing global technical teams and scaling support across 100+ markets, and now he leads customer support operations at Quality Unit. His latest presentation at E-commerce Mastermind focused not on AI theory, but on specific AI functions the LiveAgent team has deployed using FlowHunt and the measurable results they’ve achieved.
What follows is his breakdown of six distinct AI functions, how each addresses specific support bottlenecks, and the operational improvements his team documented.
If you’re interested in the complete LiveAgent AI implementation story with detailed metrics, see our LiveAgent success story .
Many support teams will face the same problem sooner or later: ticket volume grows faster than budgets. The traditional approach of scaling headcount proportionally to ticket volume eventually hits financial constraints. Meanwhile, overworked existing agents experience burnout from repetitive inquiries that consume time better spent on genuine customer issues.
Jozef Štofira’s approach centered on identifying which support tasks machines could handle better than humans and filtering them out. The end goal was to redirect the attention of agents toward high-value interactions where human judgment, empathy, and expertise matter most.
Jozef Štofira presented a structured AI implementation around discrete functions, each targeting a specific support bottleneck. Rather than deploying a monolithic “AI support system,” his team implemented solutions directly addressing specific inefficiencies.
Function 1: AI Chatbot for Level 1 Inquiry Deflection
The Problem: An overload of repetitive questions, pre-sales inquiries, and general information requests unrelated to actual product support needs.
The Solution: A FlowHunt AI chatbot connected directly to LiveAgent’s documentation and knowledge base, deployed only on high-traffic, low-complexity pages.
The Result: 48.5% reduction in manual live chat volume. LiveAgent went from 3,500 monthly conversations requiring human agents down to 1,800. The chatbot now handles the difference autonomously, filtering inquiries and escalating only those genuinely requiring human expertise.
The critical decision was creating a chatbot that doesn’t attempt to handle everything. It focuses only on deflecting basic questions, looking up documentation and helping with simple troubleshooting, while immediately escalating Level 2 complexity to human agents.
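To make that division of labor concrete, here is a minimal sketch of the deflection logic, assuming a placeholder answer_from_kb lookup and an assumed confidence threshold; the real FlowHunt flow is configured in its builder rather than hand-coded, so treat this purely as an illustration.

```python
# Illustrative only: Level 1 deflection with escalation to humans.
# answer_from_kb and the 0.7 threshold are assumptions, not FlowHunt internals.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # assumed score from the retrieval/generation step

ESCALATION_THRESHOLD = 0.7  # assumed tuning parameter

def answer_from_kb(question: str) -> BotReply:
    """Placeholder for the chatbot's documentation lookup and answer generation."""
    return BotReply(text="Here is the relevant setup guide: ...", confidence=0.85)

def handle_chat_message(question: str) -> dict:
    reply = answer_from_kb(question)
    if reply.confidence < ESCALATION_THRESHOLD:
        # Level 2 complexity: hand off to a live agent instead of guessing.
        return {"action": "escalate_to_agent", "context": question}
    return {"action": "send_bot_reply", "text": reply.text}

print(handle_chat_message("How do I connect my mailbox to LiveAgent?"))
```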
Function 2: Anti-Spam Through Contextual Analysis
The Problem: Traditional spam filters fail against sophisticated cold outreach and semi-relevant messages that technically aren’t spam but also aren’t valid support requests.
The Solution: AI analysis of context and intent rather than keyword matching. The system evaluates whether an incoming message represents a genuine support need or should be automatically closed.
The Result: At a volume of 2,000+ tickets per month, this eliminates 3-6 hours of agent time previously spent on manual spam review.
The difference is that rule-based systems look for patterns, while AI evaluates intent. A cold sales email might not trigger rule-based spam keywords, but it clearly isn’t a support request requiring agent attention.
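A short sketch of that contrast, with the intent classification stubbed out; the labels and keyword list are illustrative assumptions, not the production setup.

```python
# Illustrative only: keyword rules versus intent evaluation for ticket triage.
import re

def rule_based_is_spam(message: str) -> bool:
    # Keyword matching misses polished cold outreach that avoids obvious triggers.
    return bool(re.search(r"(?i)lottery|crypto giveaway|free followers", message))

def classify_intent(message: str) -> str:
    """Placeholder for an LLM call asking: is this a genuine support request,
    a sales pitch, or irrelevant? Returns a single label."""
    return "sales_pitch"  # stubbed result so the example runs standalone

def triage(message: str) -> str:
    if rule_based_is_spam(message):
        return "close_as_spam"
    if classify_intent(message) != "support_request":
        # Not spam by keyword standards, but not a support request either.
        return "close_automatically"
    return "route_to_support_queue"

print(triage("Hi! I'd love 15 minutes to demo our lead-generation platform."))
```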
Function 3: Automatic Categorization for Data Integrity
The Problem: Manual categorization under time pressure leads to inconsistency and gaps. Before AI implementation, 15% of tickets went uncategorized, creating blind spots in support analytics and resource allocation.
The Solution: Automatic AI analysis and category assignment via API the moment tickets enter the system.
The Result: Complete elimination of uncategorized tickets (from 15% to 0%). At volumes exceeding 10,000 tickets, that’s 14-28 hours saved monthly.
The broader impact is that support leadership now has accurate, complete data for trend analysis, capacity planning, and team performance measurement; previously, that data was corrupted by inconsistent manual categorization.
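For readers curious what such a hook might look like, here is a minimal sketch; the category list, the classification call, and the tagging helper are all hypothetical placeholders rather than LiveAgent’s actual API.

```python
# Illustrative only: categorize every ticket the moment it is created.
CATEGORIES = ["billing", "integrations", "bug_report", "pre_sales", "account"]

def classify_ticket(subject: str, body: str) -> str:
    """Placeholder for an LLM call constrained to the approved category list."""
    return "integrations"  # stubbed so the example runs standalone

def set_ticket_tags(ticket_id: str, tags: list[str]) -> None:
    """Placeholder for the helpdesk API call that writes tags back to the ticket."""
    print(f"Tagging {ticket_id} with {tags}")

def on_ticket_created(ticket: dict) -> None:
    category = classify_ticket(ticket["subject"], ticket["body"])
    # Every ticket gets a category before any human sees it.
    set_ticket_tags(ticket["id"], [category])

on_ticket_created({"id": "T-1024", "subject": "SSO setup", "body": "SAML login fails with..."})
```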
Function 4: Request Validation and Completeness Checking
The Problem: Customers frequently submit support requests missing essential information needed for resolution. Agents manually review, identify gaps, and request additional details, which delays resolution and consumes capacity.
The Solution: FlowHunt chatbot performs validation checks on incoming requests. The chatbot identifies missing information and immediately requests it. For requests that are complete and valid, the system provides instant acknowledgment and appropriate routing.
The Result: 5-10 hours monthly saved at 600+ request volume, plus significant improvement in customer experience through immediate feedback rather than delayed requests for clarification.
Customers now receive instant guidance on what’s needed instead of waiting for an agent to ask for more information, which greatly accelerates overall resolution time.
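A minimal sketch of that completeness check, assuming an illustrative list of required fields and follow-up wording; the actual templates are defined in the FlowHunt flow.

```python
# Illustrative only: required-field checking on incoming requests.
# The field list and the wording of the follow-up prompt are assumptions.
REQUIRED_FIELDS = {
    "account_email": "the email address on your account",
    "error_message": "the exact error message you see",
    "steps_taken": "what you did right before the problem appeared",
}

def validate_request(fields: dict) -> dict:
    missing = [label for key, label in REQUIRED_FIELDS.items() if not fields.get(key)]
    if missing:
        # Ask for the gaps immediately instead of waiting for an agent to notice them.
        return {
            "status": "incomplete",
            "reply": "Thanks! To help you faster, could you also share: " + "; ".join(missing) + "?",
        }
    return {"status": "complete", "reply": "Got it - your request has been routed to the right team."}

print(validate_request({"account_email": "user@example.com", "error_message": ""}))
```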
Function 5: Answer Assistant for Response Acceleration
The Problem: Even when agents must personally handle tickets, time is wasted on drafting responses, searching documentation, and ensuring consistent brand voice and technical accuracy.
The Solution: FlowHunt’s AI-generated response drafts pull relevant information from the knowledge base with zero manual agent input required. All that’s left to do is review, make changes and send. Even for complex responses, agents can simply provide brief instructions that AI expands into complete, professionally formatted answers.
The Result: 2-3 minutes saved per response. At 4,000+ monthly responses requiring this level of involvement, that works out to approximately 166 hours saved each month (about 2.5 minutes × 4,000 responses ≈ 10,000 minutes).
This also has a positive impact on training. Newer agents can immediately produce expert-level responses thanks to comprehensive knowledge base access through AI, eliminating the stress of frantically reading documentation.
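Conceptually, the assistant is a retrieval-grounded drafting step. The sketch below stubs both the knowledge-base lookup and the generation call, so it only illustrates the shape of the pipeline, not FlowHunt’s implementation.

```python
# Illustrative only: draft generation grounded in approved documentation.
# retrieve_articles and the draft text are stubbed placeholders.
def retrieve_articles(question: str, top_k: int = 3) -> list[str]:
    """Placeholder for a knowledge-base search limited to approved docs."""
    return ["KB: Configuring email forwarding", "KB: SPF and DKIM setup"]

def draft_reply(question: str, agent_hint: str = "") -> str:
    """Placeholder for an LLM call instructed to answer ONLY from the retrieved
    articles, keeping the draft aligned with company documentation."""
    sources = retrieve_articles(question)
    return (
        "Hi, thanks for reaching out!\n"
        f"[draft grounded in {len(sources)} KB articles; agent hint: '{agent_hint}']"
    )

# The agent still reviews and edits before sending - AI accelerates, humans approve.
print(draft_reply("Emails from Gmail aren't arriving in LiveAgent", agent_hint="mention SPF"))
```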
Function 6: Intelligent Escalation and Hand-Over
The Problem: Determining when automation should handle an interaction versus when human attention is needed, and ensuring smooth transitions that don’t force customers to repeat information.
The Solution: Define escalation rules to determine when the FlowHunt chatbot answers independently versus when it transfers to human agents. Complete conversation history and context passes with every handover.
The Result: Customers experience seamless transitions. Agents receive full context and can continue conversations naturally instead of starting from scratch.
This function isn’t about maximizing automation percentage, but about optimizing the boundary between what machines handle and what humans handle, ensuring each operates in its area of strength.
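A minimal sketch of how such boundary rules might be expressed, using the triggers described later in this article (an explicit request for a human, low confidence, conversation length as a complexity proxy); the thresholds and transcript format are assumed values, not FlowHunt’s actual configuration.

```python
# Illustrative only: rule-based escalation with full context handover.
CONFIDENCE_FLOOR = 0.6   # assumed minimum confidence before handover
MAX_BOT_TURNS = 6        # assumed proxy for "conversation complexity"

def should_escalate(turns: list[dict], last_confidence: float) -> bool:
    asked_for_human = any(
        "human" in t["text"].lower() or "agent" in t["text"].lower()
        for t in turns if t["role"] == "customer"
    )
    return asked_for_human or last_confidence < CONFIDENCE_FLOOR or len(turns) > MAX_BOT_TURNS

def hand_over(turns: list[dict]) -> dict:
    # The full transcript travels with the handover so customers never repeat themselves.
    return {"queue": "live_support", "transcript": turns}

turns = [
    {"role": "customer", "text": "The API returns 500 on bulk import."},
    {"role": "bot", "text": "Here is our bulk import guide..."},
    {"role": "customer", "text": "Already tried that, I need a human please."},
]
if should_escalate(turns, last_confidence=0.42):
    print(hand_over(turns)["queue"])
```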

Jozef included an example of how these six functions integrate throughout a complete support request lifecycle, from initial customer contact to resolution:
Stage 1: Intelligent Intake
When a customer initiates contact, two AI functions activate immediately. Anti-spam evaluates whether it’s a genuine support need or whether it should be automatically closed. Simultaneously, automatic categorization analyzes content and assigns appropriate tags before any human review occurs.
This front-line filtering ensures agents only see legitimate support requests that are already properly categorized for routing and prioritization.
Stage 2: Hybrid Chat Handling
The FlowHunt chatbot manages incoming conversations and directly answers straightforward inquiries. When complexity exceeds the bot’s capabilities or customers explicitly request human assistance, intelligent escalation transfers the conversation to live agents with complete context.
This creates a tiered system where AI handles what it can, and humans handle what they should, with seamless handover ensuring customers never experience friction at the transition point.
Stage 3: Agent Acceleration
For requests requiring human handling, the answer assistant powered by FlowHunt is available in the response window. It generates response drafts using relevant information from documentation, giving agents a starting point and significantly reducing time spent researching answers.
Meanwhile, automation handles routine acknowledgments and standard responses, such as demo request confirmations, without any agent involvement.
Stage 4: Continuous Learning Loop
The final stage involves extracting knowledge gaps identified during human-handled interactions. When chatbot conversations reveal questions the AI couldn’t answer from existing documentation, the system captures the expert resolution provided by human agents.
This information becomes the foundation for new knowledge base articles, expanding the chatbot’s capabilities over time without requiring manual knowledge base development. The system learns from every interaction it can’t fully automate.
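One way to picture this loop, as a rough sketch with assumed data structures rather than the actual FlowHunt workflow:

```python
# Illustrative only: capture knowledge gaps and surface recurring ones as
# candidate KB articles.
from collections import defaultdict

kb_gap_log: list[dict] = []

def log_gap(question: str, agent_resolution: str) -> None:
    """Record a question the bot couldn't answer plus the agent's expert fix."""
    kb_gap_log.append({"question": question.strip().lower(), "resolution": agent_resolution})

def propose_kb_articles(min_occurrences: int = 3) -> list[dict]:
    grouped = defaultdict(list)
    for entry in kb_gap_log:
        grouped[entry["question"]].append(entry["resolution"])
    # Recurring gaps are the strongest candidates for new documentation.
    return [
        {"draft_title": q, "source_resolutions": res}
        for q, res in grouped.items() if len(res) >= min_occurrences
    ]

log_gap("does bulk import support custom fields?", "Yes, map them during import via ...")
print(propose_kb_articles(min_occurrences=1))
```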
The LiveAgent team didn’t deploy all six functions simultaneously. As Michal Lichner outlined in his AI implementation guide, they implemented incrementally, starting with the highest-impact, lowest-complexity functions first, and continued with daily monitoring of all functions.
The chatbot launched initially on specific website sections where misdirected traffic was heaviest, such as blogs and glossary pages, instead of the crucial product support pages. This allowed the team to refine prompts, expand FAQs, and validate performance before expanding to more technical support scenarios.
Automatic categorization came next, addressing the immediate data integrity problem that harmed support analytics. Once accurate categorization was established, other functions that depended on proper routing and prioritization could build on that foundation.
The answer assistant deployed last among customer-facing functions, after the team had built confidence in AI’s ability to maintain brand voice and technical accuracy through less visible implementations.
Each function underwent daily monitoring during initial deployment. The team refined system prompts, expanded knowledge bases, and adjusted escalation rules based on real customer interactions rather than theoretical scenarios.
The AI functions integrate directly with existing LiveAgent helpdesk infrastructure through API connections rather than requiring complete system replacement. This allowed incremental deployment without disrupting ongoing operations.
Knowledge base integration uses approved company documentation as the source of truth rather than relying on general AI training. This drastically reduces hallucination risk and ensures consistent, accurate responses aligned with actual company policies and procedures.
The escalation system uses defined rules rather than probabilistic decision-making. When specific triggers occur—customer explicitly requests human assistance, AI confidence drops below threshold, conversation complexity exceeds defined parameters—handover happens automatically with complete context transfer.
Request validation operates through template matching and required field checking rather than attempting to understand arbitrary customer communication styles. This pragmatic approach addresses 90% of incomplete submissions without the complexity of natural language understanding.

Reflecting on the implementation, Štofira also identified the prerequisites that had to exist before AI could deliver these results:
Organized Knowledge: Comprehensive, well-maintained documentation is crucial. AI can’t magically organize scattered tribal knowledge. It needs structured, accessible information to work with.
Clear Process Definition: Escalation rules, categorization schemas, and response templates had to be explicitly defined. AI needs structure to operate within, not vague guidelines about “good judgment.”
Commitment to Iteration: Current performance resulted from months of refinement, not initial deployment. The team committed to daily monitoring, continuous prompt improvement, and ongoing FAQ expansion based on real customer interactions.
Integration Capability: The ability to connect AI functions with existing systems through APIs made incremental deployment possible. Without this, the team would have faced an all-or-nothing system replacement that would have been too risky to attempt.
Realistic Expectations: Management understood that AI would require learning time and wouldn’t achieve peak performance immediately. This patience enabled the team to optimize properly rather than abandoning systems at the first sign of imperfection.
The presentation concluded with a brief mention of future directions his team is exploring. These include expanding AI answer assistant capabilities to email-based tickets beyond chat, developing automated workflows that transform resolved support interactions into knowledge base articles, and extending autonomous ticket processing to additional communication channels including WhatsApp and social media platforms.
This framework offers practical guidance for support leaders evaluating where to begin with AI.
It’s important to start by identifying your highest-volume, most repetitive support interactions. These represent the best initial targets because success is most achievable and the impact most measurable, and starting there will save you from early burnout. Don’t expect perfection right out of the gate; monitor performance and keep looking for room to improve. Only with clear rules, sufficient knowledge sources, and a learning loop can AI truly start benefiting your support operations.
LiveAgent’s results demonstrate that AI in customer support works when implemented thoughtfully with clear success criteria and realistic expectations. The question isn’t whether AI can improve support operations, but rather whether teams can commit to the systematic, function-by-function approach that makes those improvements sustainable.
Jozef’s operational framework shows how AI functions work in practice, handling the daily reality of customer support at scale. If you’re interested in a deeper look at AI implementation, check out the other articles from the series:
Michal Lichner’s implementation roadmap established the strategic foundation—where to focus AI efforts and how to prepare content and processes systematically before deployment.
Viktor Zeman’s technical infrastructure ensures that once you’ve automated support operations, customers can actually discover you through AI-mediated search and commerce protocols.
Together, these three perspectives form a complete picture: strategic planning, operational execution, and technical infrastructure for e-commerce in an AI-mediated commerce environment.
The six functions are: (1) AI Chatbot for Level 1 inquiry deflection, (2) Anti-spam through contextual analysis, (3) Automatic categorization for data integrity, (4) Request validation and completeness checking, (5) Answer assistant for response acceleration, and (6) Intelligent escalation and hand-over. Each function targets a specific operational bottleneck rather than attempting monolithic AI transformation.
Implement incrementally, function by function, starting with highest-volume, most repetitive interactions. Deploy each function individually, validate success with measurable metrics, then expand. Begin with areas like blog page chats or automatic categorization where success is most achievable, building confidence before tackling technical support scenarios. Plan for months of optimization, not days of deployment.
Essential prerequisites include: organized, accessible knowledge bases with comprehensive documentation; clear process definitions for escalation rules and categorization schemas; API integration capability with existing helpdesk systems; commitment to daily monitoring and iterative improvement; and realistic expectations that current performance requires months of refinement, not immediate perfection.
The lifecycle integrates all functions: Stage 1 (Intelligent Intake) uses anti-spam and auto-categorization to filter and route requests. Stage 2 (Hybrid Chat Handling) combines chatbot handling with intelligent escalation to human agents. Stage 3 (Agent Acceleration) activates the answer assistant for response drafts and automates routine acknowledgments. Stage 4 (Learning Loop) captures knowledge gaps from human interactions to expand AI capabilities over time.
Maria is a copywriter at FlowHunt. A language nerd active in literary communities, she's fully aware that AI is transforming the way we write. Rather than resisting, she seeks to help define the perfect balance between AI workflows and the irreplaceable value of human creativity.

