

Explore how AMP, Sourcegraph’s frontier coding agent, is reshaping the AI development landscape by embracing rapid iteration, autonomous reasoning, and tool-calling agents—and why traditional coding tools are becoming obsolete.
The AI coding agent landscape is experiencing unprecedented disruption. What was cutting-edge six months ago is now considered outdated. GitHub Copilot, once the gold standard of AI-assisted development, has been eclipsed by newer tools. Cursor dominated the market as the fastest-growing startup of all time, only to face competition from even more advanced solutions. In this rapidly evolving ecosystem, Sourcegraph made a bold strategic decision: rather than incrementally improve their existing Cody product, they launched AMP—a completely new coding agent built from the ground up to embrace the frontier of AI capabilities.
This article explores the philosophy, technical architecture, and business strategy behind AMP, drawing insights from conversations with the team behind this revolutionary tool. We’ll examine why traditional approaches to product development fail in the age of rapid AI advancement, how tool-calling agents fundamentally differ from earlier AI coding assistants, and what the future of autonomous development looks like. Most importantly, we’ll understand why the “emperor has no clothes”—why established products with seemingly unshakeable market positions can become irrelevant almost overnight when the underlying technology shifts.
{{ youtubevideo videoID="b4rOVZWLW6E" provider="youtube" title="AMP: The Emperor Has No Clothes - Why AI Coding Agents Are Disrupting Developer Tools" class="rounded-lg shadow-md" }}
The evolution of AI-assisted development has followed a clear trajectory, each generation building on the previous one but fundamentally changing how developers interact with artificial intelligence. To understand AMP’s significance, we must first understand what distinguishes a coding agent from earlier forms of AI assistance. The journey began with GitHub Copilot, which introduced code completion and suggestion capabilities directly into developers’ editors. Copilot was revolutionary because it brought AI into the development workflow in a non-intrusive way, offering suggestions as developers typed. However, Copilot was fundamentally limited—it could suggest code, but it couldn’t execute complex, multi-step tasks or interact with the broader development environment.
The next generation brought tools like Cursor and Windsurf, which took a different approach by creating IDE forks that integrated AI more deeply into the development environment. These tools demonstrated that partially agentic capabilities—where AI could perform more complex operations within the IDE—could significantly improve developer productivity. They showed that developers were willing to switch their entire development environment if the AI capabilities were sufficiently advanced. However, even these tools operated within constraints: they were interactive, requiring developer input and approval at each step, and they couldn’t truly operate autonomously.
A coding agent, by contrast, is a fundamentally different architecture. An agent consists of three core components: a language model (typically a frontier model like Claude 3.5 Sonnet), a system prompt that defines the agent’s behavior and constraints, and a set of tools with associated prompts that describe what each tool can do. The critical difference is that agents can operate with explicit permissions to interact with external systems—file systems, code editors, version control systems, and more. This means an agent can autonomously reason about a problem, decide which tools to use, execute those tools, observe the results, and iterate until the task is complete. This is fundamentally more powerful than any previous approach because it enables true autonomous behavior rather than just enhanced suggestions or interactive assistance.
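The three-component architecture described above reduces to a simple loop: the model proposes an action, the runtime executes the chosen tool, and the observation feeds back into the conversation. The sketch below is illustrative only; every function name and message shape is an assumption, not AMP's actual implementation.

```python
# Minimal sketch of a tool-calling agent loop. All names and message
# shapes here are illustrative assumptions, not AMP's actual code.

def run_agent(model, system_prompt, tools, task, max_steps=10):
    """Reason -> choose tool -> execute -> observe, until the task is done."""
    history = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        action = model(history, tools)  # the model decides the next step
        if action["type"] == "finish":
            return action["result"]     # the agent judged the task complete
        tool = tools[action["tool"]]    # look up the tool the model chose
        observation = tool(**action["args"])  # execute with explicit permissions
        history.append({"role": "tool", "content": str(observation)})
    raise RuntimeError("agent exceeded its step budget")
```

A production agent wraps this loop in permission checks, sandboxing, and cost limits, but the core reason-act-observe cycle stays the same.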
The technology landscape has entered a phase of unprecedented instability. What was state-of-the-art eighteen months ago is now considered primitive. GitHub Copilot, released in 2021, was genuinely revolutionary—it represented the first mainstream application of large language models to software development. Yet today, many developers don’t even consider it among the top choices for AI-assisted coding. This isn’t because Copilot became worse; it’s because the underlying technology advanced so rapidly that the entire category shifted. This creates a profound challenge for established companies: how do you maintain a successful product while the ground beneath it is constantly shifting?
Traditional product development assumes a relatively stable foundation. You find product-market fit, you scale the product, you build proper engineering practices, you add enterprise features, you establish long-term contracts with customers. This playbook has worked for decades because technology typically evolves gradually. But in the current AI era, this approach is actively harmful. If you optimize your product for scale and stability, you become slow. If you become slow, you miss the next wave of capability improvements. By the time you’ve added enterprise features and security compliance checkboxes, a new model has been released that makes your entire approach obsolete.
Sourcegraph faced this exact dilemma with Cody. Cody was a successful product with enterprise customers, long-running contracts, and significant revenue. But Cody was tightly integrated with the Sourcegraph platform, which meant it was bound by the platform’s release cycles. The platform had its own infrastructure, its own deployment schedule, and its own constraints. When Claude 3.5 Sonnet was released and the team realized they could build something fundamentally different—a tool-calling agent with autonomous reasoning capabilities—they faced a choice: try to retrofit these capabilities into Cody, or start fresh with a new product. They chose to start fresh, and this decision reveals a crucial insight about competing in rapidly evolving markets.
The key realization was that you cannot make a $20 subscription model work for a tool-calling agent. The computational costs are fundamentally different. A chat-based assistant like Cody can operate efficiently on modest infrastructure. A tool-calling agent that’s reasoning about code, executing tools, and iterating autonomously requires significantly more compute. This isn’t just a pricing problem; it’s a signal that the product is fundamentally different and requires a different business model, different customer expectations, and different go-to-market strategy. By creating AMP as a separate product with a separate brand, Sourcegraph could reset these expectations entirely. They could tell customers: “This is not Cody 2.0. This is a completely different thing. It costs more because it does more. It works differently because it’s built on a different architecture.”
To truly appreciate why AMP represents a paradigm shift, we need to understand the technical architecture of tool-calling agents in detail. A tool-calling agent is not simply a language model with access to functions. The architecture is more sophisticated and more powerful. The system begins with a frontier language model—in AMP’s case, Claude 3.5 Sonnet—that has been trained to understand and use tools effectively. The model receives a system prompt that defines its role, constraints, and objectives. Crucially, the system prompt isn’t just a casual instruction; it’s a carefully engineered prompt that shapes how the model reasons about problems and decides which tools to use.
Alongside the system prompt, each tool has its own prompt that describes what the tool does, what parameters it accepts, what it returns, and when it should be used. This is critical because the language model needs to understand not just that a tool exists, but what it’s for and when it’s appropriate to use it. For example, an agent might have tools for reading files, writing files, executing code, running tests, and committing changes. Each tool has a detailed description that helps the model reason about which tool to use in which situation. The model can then autonomously decide to use these tools, observe the results, and iterate based on what it learns.
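In practice, these per-tool prompts tend to take the general shape used by tool-calling model APIs: a name, a natural-language description, and a parameter schema. The specific tools and field names below are assumptions for illustration; AMP's actual tool definitions are not public.

```python
# Illustrative tool definitions in the general shape used by tool-calling
# model APIs (name, description, parameter schema). These specific tools
# and fields are assumptions, not AMP's published definitions.

TOOLS = [
    {
        "name": "read_file",
        "description": (
            "Read a file from the workspace. Use this before editing "
            "so changes fit the existing architecture."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Path relative to the repository root",
                },
            },
            "required": ["path"],
        },
    },
    {
        "name": "run_tests",
        "description": (
            "Run the project's test suite and return any failures. "
            "Use this after writing code to verify the change."
        ),
        "parameters": {"type": "object", "properties": {}},
    },
]
```

Note that the when-to-use guidance lives inside the description itself; that is what lets the model pick the right tool at the right moment rather than merely knowing the tool exists.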
The power of this architecture becomes apparent when you consider what an agent can do. A developer might ask an agent to “implement a new feature that adds user authentication to this codebase.” The agent can then autonomously: read the existing codebase to understand the architecture, identify where authentication should be integrated, write the necessary code, run tests to verify the implementation, handle any failures by modifying the code, and eventually commit the changes. All of this happens without human intervention. The agent is reasoning about the problem, making decisions about which tools to use, and iterating based on feedback from those tools.
This is fundamentally different from earlier AI coding tools. Copilot can suggest code, but it can’t execute a multi-step workflow. Cursor can perform more complex operations, but it requires human approval at each step. An agent can operate autonomously with explicit permissions. This creates a new category of capability that’s orders of magnitude more powerful. However, it also creates new challenges. Autonomous agents can make mistakes at scale. They can execute harmful operations if not properly constrained. They require careful system prompt engineering to ensure they behave as intended. These challenges are why AMP’s architecture and approach are so important.
When the AMP team started building, they made a fundamental decision: speed and iteration would be the primary optimization target. Everything else would flow from this decision. This meant abandoning many of the practices that had made Sourcegraph successful with Cody. No formal code reviews. No extensive planning cycles. No security and compliance checkboxes that take nine months to complete. Instead, the team adopted a personal project mentality: push to main, ship 15 times a day, dogfood the product constantly, and iterate based on real usage.
This approach sounds chaotic, and by traditional software engineering standards, it is. But it’s precisely the right approach for a product operating at the frontier of AI capabilities. The reason is simple: the frontier is moving. Every few months, a new model is released. Every few weeks, new capabilities emerge. Every few days, new techniques for prompt engineering or tool design are discovered. In this environment, the ability to iterate quickly is more valuable than the ability to scale reliably. A product that ships 15 times a day can incorporate new model capabilities within hours of their release. A product that follows traditional release cycles will be months behind.
The team structure reflects this philosophy. The core AMP team is small—around eight people—compared to larger engineering organizations. This small size is intentional. It enables rapid decision-making and eliminates the communication overhead that slows down larger teams. Everyone on the team is experienced, which means they can operate without extensive code review processes. They dogfood the product constantly, which means they catch issues quickly and understand user needs intimately. They’re not trying to build a product that works for every developer; they’re building a product for developers who want to move as fast as they do, who want to stay at the frontier of AI capabilities, and who are willing to embrace new approaches to development.
This positioning is crucial. AMP is not trying to be GitHub Copilot for everyone. It’s not trying to be the default AI coding tool for all developers. Instead, it’s positioning itself as the tool for developers and teams that want to move fast and stay at the frontier. This is a much smaller market than “all developers,” but it’s a market that’s willing to pay significantly more for superior capabilities. The business model reflects this: instead of a $20 per month subscription, AMP customers are paying hundreds of dollars per month. Some teams are on annual run rates of hundreds of thousands of dollars. This is possible because the value proposition is so strong for the target market.
The principles that guide AMP’s development—rapid iteration, frontier positioning, and autonomous reasoning—are directly applicable to broader workflow automation. FlowHunt, as a platform for building and automating complex workflows, faces similar challenges and opportunities. Just as AMP is positioning itself to handle the next generation of AI capabilities, FlowHunt can help organizations build workflows that are resilient to rapid technological change. By focusing on flexibility, rapid iteration, and the ability to incorporate new tools and capabilities quickly, FlowHunt enables teams to stay ahead of the curve.
The key insight is that in a rapidly evolving technological landscape, the ability to adapt quickly is more valuable than the ability to optimize for current conditions. This applies whether you’re building an AI coding agent or automating business processes. FlowHunt’s approach of enabling rapid workflow creation, testing, and iteration aligns perfectly with this philosophy. Teams can build workflows that incorporate the latest AI capabilities, test them quickly, and iterate based on results. As new models and tools emerge, workflows can be updated rapidly without requiring extensive re-engineering. This is the future of automation: not static, optimized processes, but dynamic, adaptive workflows that evolve as technology evolves.
The AI coding agent market provides a fascinating case study in how rapidly market leadership can shift. At the start of 2024, Cursor was widely considered the king of AI coding tools. It was the fastest-growing startup of all time. Developers were switching from other tools to Cursor in large numbers. The market seemed settled. Then, within a few months, the conversation shifted. New tools emerged. Capabilities improved. Developers started asking different questions. By mid-2024, if you asked developers what they thought was the best AI coding tool, many wouldn’t name Cursor first. The market had shifted so rapidly that the previous leader was no longer clearly dominant.
This pattern is not unique to coding agents. It’s a fundamental characteristic of markets where the underlying technology is advancing rapidly. In such markets, the ability to move fast and adapt is more important than current market share. A company with 30% market share that can iterate quickly and incorporate new capabilities will eventually overtake a company with 50% market share that moves slowly. This is why Sourcegraph’s decision to create AMP as a separate product was so strategically sound. By separating AMP from Cody, they freed themselves from the constraints that would have slowed them down. They could move fast, iterate quickly, and position themselves at the frontier.
The broader lesson is that in rapidly evolving markets, the emperor often has no clothes. Established products that seem dominant can become obsolete surprisingly quickly. This isn’t because they become worse; it’s because the technology shifts and they can’t adapt fast enough. The companies that succeed are those that understand this dynamic and position themselves accordingly. They don’t try to optimize for current conditions; they optimize for the ability to adapt to future conditions. They don’t try to serve every customer; they focus on customers who value speed and innovation. They don’t follow traditional product development practices; they adopt practices that enable rapid iteration and learning.
The conversation around AMP reveals an important insight about the future of AI agents: the next major shift will be from interactive agents to async agents. Currently, most AI coding agents operate interactively. A developer runs an agent in their editor or CLI, the agent performs some operations, and the developer sees the results. There’s typically one agent running at a time, and it’s running synchronously—the developer waits for it to complete. This is a significant improvement over manual coding, but it’s not the ultimate form of agent-based development.
The next frontier is async agents that run 24/7 in the background, concurrently. Instead of one agent running at a time, you might have 10, 50, or 100 agents running simultaneously on different tasks. An agent might be working on refactoring code in one part of the codebase while another agent is writing tests for a different component. A third agent might be analyzing performance issues and suggesting optimizations. All of this happens without human intervention, and all of it happens concurrently. The implications are staggering: a 10x to 100x improvement in the amount of work that can be done, a fundamental shift in how development teams operate, and a complete reimagining of what’s possible with AI-assisted development.
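The jump from one interactive agent to a concurrent fleet is largely an orchestration problem. Here is a hypothetical sketch using Python's asyncio, with a stub standing in for a real agent run; the function names and the concurrency cap are assumptions, not any product's actual design.

```python
# Hypothetical sketch of running many agents concurrently with a cap on
# parallelism. `run_agent` is a stub standing in for a real agent run.
import asyncio

async def run_agent(task: str) -> str:
    await asyncio.sleep(0.01)  # a real agent would reason and call tools here
    return f"completed: {task}"

async def run_fleet(tasks: list[str], limit: int = 10) -> list[str]:
    """Run all tasks, with at most `limit` agents in flight at once."""
    sem = asyncio.Semaphore(limit)  # caps concurrent agents (and spend)

    async def bounded(task: str) -> str:
        async with sem:
            return await run_agent(task)

    return await asyncio.gather(*(bounded(t) for t in tasks))

results = asyncio.run(run_fleet(["refactor auth", "write tests", "profile hot path"]))
```

The hard parts this sketch omits, such as resolving conflicts when two agents touch the same files, reviewing their output, and accounting for inference costs, are exactly the challenges the paragraph above anticipates.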
This shift will have profound implications for inference costs, for how teams organize their work, and for what it means to be a developer. It will also create new challenges around quality assurance, security, and ensuring that autonomous agents don’t introduce bugs or security vulnerabilities at scale. But the potential upside is enormous. Teams that can effectively leverage async agents will be able to accomplish in days what currently takes weeks. This is why positioning yourself to move fast and adapt is so critical. The companies that figure out how to build effective async agents first will have a massive competitive advantage.
The fundamental principle underlying AMP’s approach is building for uncertainty. The team doesn’t know exactly where the technology will go, but they know it will change. They don’t know which capabilities will matter most, but they know new capabilities will emerge. They don’t know what the market will look like in six months, but they know it will be different. Given this uncertainty, the rational approach is to optimize for adaptability rather than optimization. This means keeping the codebase flexible, maintaining the ability to ship quickly, staying close to the frontier of AI capabilities, and being willing to throw away approaches that no longer work.
This principle extends to team structure, business model, and customer strategy. The team is small and experienced, which enables rapid decision-making. The business model is flexible, with no fixed pricing or user model, which allows for rapid adjustment as the market evolves. The customer strategy focuses on developers who want to move fast, which creates a natural alignment between the company’s capabilities and customer needs. Everything flows from the core principle of building for uncertainty and optimizing for adaptability.
This is a radically different approach from traditional product development, where you try to predict the future, build for scale, and optimize for stability. But in a market where the underlying technology is advancing rapidly, traditional approaches are actively harmful. They slow you down, they lock you into decisions that become obsolete, and they prevent you from adapting to new realities. The companies that succeed in such markets are those that embrace uncertainty, optimize for adaptability, and move fast enough to stay ahead of the curve.
{{ cta-dark-panel heading="Supercharge Your Workflow with FlowHunt" description="Experience how FlowHunt automates your AI content and SEO workflows — from research and content generation to publishing and analytics — all in one place." ctaPrimaryText="Book a Demo" ctaPrimaryURL="https://calendly.com/liveagentsession/flowhunt-chatbot-demo" ctaSecondaryText="Try FlowHunt Free" ctaSecondaryURL="https://app.flowhunt.io/sign-in" gradientStartColor="#123456" gradientEndColor="#654321" gradientId="827591b1-ce8c-4110-b064-7cb85a0b1217" }}
The ability to ship 15 times per day is not accidental; it’s the result of deliberate architectural choices. The first key decision was to decouple AMP from the Sourcegraph platform entirely. Cody was tightly integrated with Sourcegraph, which meant it was bound by the platform’s release cycles and infrastructure constraints. AMP was built as a standalone product with its own infrastructure, its own deployment pipeline, and its own release schedule. This decoupling is crucial because it eliminates the coordination overhead that slows down larger systems. Changes to AMP don’t require coordinating with platform changes. Deployments don’t require waiting for platform releases.
The second key decision was to adopt a minimal code review process. The team pushes to main and ships frequently. If something breaks, the team fixes it quickly. This sounds risky, but it works because the team is small, experienced, and dogfooding the product constantly. They catch issues quickly because they’re using the product themselves. They can fix issues quickly because the codebase is fresh in their minds. They can iterate quickly because they’re not waiting for code review approvals. This approach would be dangerous in a large organization with many developers, but in a small, experienced team, it’s incredibly effective.
The third key decision was to dogfood the product aggressively. The team uses AMP to build AMP. This creates a tight feedback loop where the team immediately experiences any issues or limitations in the product. It also means the team is constantly discovering new use cases and capabilities. When you’re using your own product to build your own product, you quickly learn what works and what doesn’t. You discover edge cases that you wouldn’t find through traditional testing. You develop intuition about what features matter most. This is why dogfooding is so powerful for rapid iteration.
The fourth key decision was to keep the codebase simple and flexible. Rather than trying to build a complex, highly optimized system, the team built something that’s easy to modify and extend. This means they can incorporate new capabilities quickly. When a new model is released, they can integrate it rapidly. When a new technique for prompt engineering emerges, they can experiment with it quickly. When they discover a better approach to a problem, they can refactor quickly without worrying about breaking complex dependencies. This simplicity and flexibility is worth more than optimization in a rapidly evolving market.
The pricing model for AMP reveals important insights about value creation in AI-assisted development. Early in the development process, the team realized they couldn’t make a $20 per month subscription work for a tool-calling agent. This wasn’t just a pricing problem; it was a signal that the product was fundamentally different and required a different business model. An agent that reasons about code, executes tools, and iterates autonomously consumes far more compute than a chat-based assistant ever did, and those infrastructure costs alone justify higher pricing.
But the pricing model also reflects the value proposition. For a developer or team that can use AMP effectively, the productivity gains are enormous. An agent that can autonomously implement features, write tests, and refactor code can save hours or days of work per week. For a team of developers, this translates to significant value. If AMP can save a team 10 hours per week, and developer time costs $100 per hour, then AMP is creating $1,000 per week of value. Charging hundreds of dollars per month is a tiny fraction of that value. This is why some teams are on annual run rates of hundreds of thousands of dollars—the value proposition is so strong that the pricing is actually a bargain.
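The arithmetic in that example, made explicit below with an assumed price point standing in for "hundreds of dollars per month":

```python
# Back-of-envelope version of the value math above. The price point is an
# assumption standing in for "hundreds of dollars per month".
hours_saved_per_week = 10        # from the example in the text
developer_rate_per_hour = 100    # USD, from the example in the text
weekly_value = hours_saved_per_week * developer_rate_per_hour  # $1,000/week
monthly_value = weekly_value * 4                               # ~$4,000/month
assumed_monthly_price = 500      # hypothetical AMP spend per month
value_multiple = monthly_value / assumed_monthly_price         # 8x the price
```

Even at this hypothetical price, the tool would capture only a fraction of the value it creates, which is why higher pricing can still read as a bargain to the target customer.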
The business model also reflects the strategic positioning. By charging significantly more than traditional coding tools, AMP is signaling that it’s a different category of product. It’s not competing on price; it’s competing on capability and value. This attracts customers who care about capability and value, and repels customers who are primarily price-sensitive. This is exactly the right customer segmentation for a product operating at the frontier of AI capabilities. You want customers who understand the value of frontier capabilities and are willing to pay for them. You don’t want customers who are primarily looking for the cheapest option, because those customers will switch to the next cheap option that comes along.
One of the most interesting aspects of Sourcegraph’s approach is how they’ve managed the tension between innovation and stability. Sourcegraph has a successful, profitable business with Cody and the broader Sourcegraph platform. This business generates revenue that funds the crazy experimentation happening with AMP. But this also creates organizational tension. How do you maintain a stable, profitable business while simultaneously pursuing radical innovation? How do you keep experienced engineers focused on the new product when they have deep expertise in the existing product?
The answer involves several key decisions. First, Sourcegraph explicitly decided not to try to convert existing Cody customers to AMP. Instead, they’re using the trust and revenue from Cody to fund AMP development. This is a crucial distinction. If they tried to migrate Cody customers to AMP, they would face resistance because AMP is fundamentally different and requires different usage patterns. By keeping Cody and AMP separate, they can serve different customer segments and avoid the disruption that would come from trying to migrate customers between fundamentally different products.
Second, Sourcegraph assembled a team for AMP that includes people with no preconceived notions about how to build software. Some of the most effective team members are people who have only worked at tiny, one-person companies. They don’t have years of experience with traditional software engineering practices. They don’t have ingrained habits about code reviews, planning cycles, and optimization. This lack of baggage is actually an advantage. They can adopt practices that would seem radical to someone with traditional software engineering experience, and they can do so without the cognitive dissonance that comes from abandoning deeply held beliefs.
Third, Sourcegraph has been explicit about the different rules for AMP. The team doesn’t follow the same processes as the rest of the company. They don’t do formal code reviews. They don’t check off security and compliance boxes. They don’t follow the same planning cycles. This is possible because they have customer trust. Customers understand that AMP is a frontier product, and they’re willing to accept different trade-offs. This explicit separation of rules and processes is crucial. If AMP had to follow the same processes as the rest of Sourcegraph, it would be slow. By explicitly separating the rules, Sourcegraph has created space for radical innovation.
The AMP story offers several important lessons for organizations operating in rapidly evolving markets. The first lesson is that established success can become a liability. Sourcegraph’s success with Cody and the Sourcegraph platform could have locked them into a slow, incremental approach. Instead, they recognized that the technology was shifting and made the bold decision to start fresh. This required the confidence to cannibalize their own business, the wisdom to recognize that the old approach wouldn’t work for the new technology, and the courage to pursue a radically different strategy.
The second lesson is that speed and adaptability are more valuable than optimization and scale in rapidly evolving markets. The team doesn’t try to build a perfectly optimized system. They build something that’s good enough and can be iterated quickly. They don’t try to serve every customer. They focus on customers who value speed and innovation. They don’t follow traditional processes. They adopt processes that enable rapid iteration. This focus on adaptability over optimization is counterintuitive to many organizations, but it’s the right approach when the underlying technology is advancing rapidly.
The third lesson is that small, experienced teams can outperform large organizations. The AMP team is around eight people. They’re all experienced engineers. They work without formal code reviews or extensive planning. They ship 15 times per day. They dogfood the product constantly. This small team is able to move faster and innovate more effectively than much larger teams at other organizations. This is because they’ve eliminated the communication overhead and process overhead that slows down larger teams. They’ve created an environment where rapid iteration is possible.
The fourth lesson is that you need to reset expectations when the product fundamentally changes. Cody customers had expectations about pricing, features, and how the product works. Those expectations wouldn’t have transferred to AMP. By creating AMP as a separate product with a separate brand, Sourcegraph was able to reset expectations entirely. This is a powerful strategy when your product is fundamentally different from what came before. Rather than trying to retrofit new capabilities into an old product, create a new product and reset expectations.
The AI coding agent market is characterized by rapid change and shifting leadership. At any given moment, there’s a “best” tool, but that leadership position is unstable. Copilot was the clear leader for a period. Cursor became the leader. Now there are multiple strong competitors, and the leadership position is contested. This instability is driven by the rapid advancement of underlying models and techniques. When Claude 3.5 Sonnet was released, it enabled new capabilities that weren’t possible before. When new prompt engineering techniques are discovered, they can be incorporated into products quickly. When new models are released, the competitive landscape shifts.
In this environment, the companies that succeed are those that can move fast and adapt quickly. A company that’s optimized for stability and scale will be slow to incorporate new capabilities. A company that’s optimized for speed and iteration will be able to incorporate new capabilities quickly. Over time, the fast company will pull ahead. This is why AMP’s focus on speed and iteration is so strategically important. By optimizing for speed, AMP is positioning itself to stay ahead of the curve as the technology evolves.
The market is also characterized by increasing specialization. Rather than trying to be the best tool for all developers, successful products are focusing on specific segments. AMP is focusing on developers who want to move fast and stay at the frontier. Other products might focus on enterprise customers who value stability and support. Other products might focus on beginners who need guidance and education. This specialization is healthy for the market because it allows products to optimize for their target segment rather than trying to be all things to all people.
The story of AMP reveals fundamental truths about competing in rapidly evolving markets. Established products with seemingly unshakeable market positions can become obsolete surprisingly quickly when the underlying technology shifts. Companies that optimize for stability and scale become slow and vulnerable. Companies that optimize for speed and adaptability can move fast enough to stay ahead of the curve. The emperor often has no clothes—the dominant product of today can be irrelevant tomorrow if it can’t adapt to technological change.

Sourcegraph’s decision to create AMP as a separate product, to adopt radical practices like pushing to main without code reviews, to focus on speed and iteration over optimization and scale, and to position the product at the frontier of AI capabilities represents a sophisticated understanding of how to compete in such markets.

The lessons from AMP extend far beyond coding agents. Any organization operating in a market where technology is advancing rapidly should consider whether it is optimized for the right things. Are you optimizing for stability when you should be optimizing for adaptability? Are you trying to serve all customers when you should be focusing on a specific segment? Are you following traditional processes when you should be adopting practices that enable rapid iteration? The answers to these questions will determine whether your organization thrives or becomes obsolete in the face of technological change.
AMP is a frontier coding agent built by Sourcegraph that uses advanced language models with tool-calling capabilities to autonomously reason and execute code changes. Unlike Cody, which was tightly integrated with the Sourcegraph platform and bound by its release cycles, AMP operates independently with its own infrastructure, allowing for 15+ deployments per day and rapid iteration on new model capabilities.
Sourcegraph created AMP as a separate product to avoid disrupting existing enterprise contracts and customer expectations around Cody. The shift from a chat-based assistant to a tool-calling agent represents a fundamental change in how developers interact with AI. By creating a new brand and product, Sourcegraph could reset expectations, move faster without legacy constraints, and position itself at the frontier of AI development without alienating existing customers.
Tool-calling agents are AI systems that combine a language model, system prompts, and external tools to perform autonomous tasks. Unlike traditional chatbots, tool-calling agents can interact with file systems, code editors, and other external systems with explicit permissions. This enables them to execute complex, multi-step workflows autonomously, making them fundamentally more powerful for software development tasks.
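To make the loop concrete, here is a minimal sketch of a tool-calling agent in Python. All names, the message format, and the stubbed model are illustrative assumptions, not AMP's actual API: a real agent would send `messages` to a language model, but a stub stands in here so the control flow is runnable end to end.

```python
import json

# Stand-in for a sandboxed file system (assumption for this sketch).
FILES = {"main.py": "print('hello')"}

# Tool registry: each tool is a plain function the agent may invoke with
# explicit, limited permissions (here, read/write on the fake file system).
TOOLS = {
    "read_file": lambda args: FILES.get(args["path"], "<not found>"),
    "write_file": lambda args: FILES.__setitem__(args["path"], args["content"]) or "ok",
}

def fake_model(messages):
    """Stub language model: requests one file read, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "main.py"}}
    return {"answer": "main.py prints a greeting."}

def run_agent(task, model=fake_model, max_steps=5):
    # The agent loop: the model either asks for a tool call or finishes.
    messages = [{"role": "system", "content": "You are a coding agent."},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model(messages)
        if "answer" in reply:  # model is done reasoning
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["args"])  # execute the tool call
        messages.append({"role": "tool",
                         "content": json.dumps({"tool": reply["tool"],
                                                "result": result})})
    return "step limit reached"

print(run_agent("What does main.py do?"))
```

The key design point is the loop itself: the model decides which tool to call, the runtime executes it, and the result is fed back as context, repeating until the task is complete.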
AMP is growing more than 50% month-over-month, with some teams on annual run rates of hundreds of thousands of dollars. The product has achieved positive gross margins while maintaining this growth rate. Sourcegraph has strategically focused on developers who want to move fast and stay at the model frontier, rather than trying to convert every developer in an enterprise.
The discussion highlights that async agents running 24/7 in the background will dominate the next phase of AI development. Instead of a single interactive agent running in an editor, teams will run 10-100 concurrent agents autonomously, a 10-100x increase in both output and inference consumption. The key to success is positioning your product to move fast and adapt as new models and capabilities emerge every few months.
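The shift from one interactive agent to a fleet of background agents can be sketched with a simple orchestrator. This is a hypothetical illustration using `asyncio` (the function names and the simulated latency are assumptions), showing why concurrency rather than model speed drives the throughput gain:

```python
import asyncio

async def run_background_agent(task: str) -> str:
    # Placeholder for an agent's full plan -> tool-call -> verify loop;
    # the sleep stands in for model and tool latency.
    await asyncio.sleep(0.01)
    return f"{task}: patch ready for review"

async def orchestrate(tasks: list[str]) -> list[str]:
    # All agents run concurrently, so total wall time is bounded by the
    # slowest agent, not the sum of every agent's latency.
    return await asyncio.gather(*(run_background_agent(t) for t in tasks))

results = asyncio.run(orchestrate([f"issue-{i}" for i in range(20)]))
print(len(results))  # 20 tasks completed concurrently
```

The same structure scales from 20 to 100 tasks with no code change; the practical limits become inference budget and the human review capacity for the resulting patches.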
Arshia is an AI Workflow Engineer at FlowHunt. With a background in computer science and a passion for AI, he specializes in creating efficient workflows that integrate AI tools into everyday tasks, enhancing productivity and creativity.
Experience how FlowHunt helps teams build, test, and deploy AI-powered workflows faster than ever before.