

Explore how taste, aesthetics, and design judgment serve as competitive advantages in the AI era, and how tools like Figma Make are democratizing creation while preserving the importance of human creativity and vision.
The intersection of artificial intelligence and design represents one of the most transformative moments in product development. As AI capabilities expand, a counterintuitive truth emerges: the more powerful AI becomes at generating designs, the more valuable human taste becomes. This paradox sits at the heart of how companies like Figma are reshaping the creative landscape.

In a conversation exploring the philosophy behind Figma Make and the evolution of AI in design, Dylan Field, co-founder and CEO of Figma, articulates a vision where taste, the aesthetic judgment and creative sensibility that distinguishes exceptional products, becomes the ultimate competitive moat. This article explores what that means for designers, product builders, and anyone involved in creating digital experiences in an AI-augmented world.
Taste, in the design and product context, refers to the cultivated ability to recognize quality, make intentional aesthetic choices, and maintain coherence across a product experience. It’s not merely subjective preference; it’s a disciplined judgment informed by an understanding of visual hierarchy, typography, spacing, color theory, user psychology, and the broader context of what makes a product feel polished and intentional. Taste is what separates a product that feels thoughtfully crafted from one that feels assembled from components. It’s the difference between a design that works and a design that delights.

Throughout the history of technology, taste has been a defining characteristic of companies that achieved lasting success. Apple’s obsessive attention to detail, the minimalist elegance of early Google interfaces, and the thoughtful interactions in products like Figma itself are all expressions of taste at scale.

Taste manifests in thousands of small decisions: the exact shade of gray used for secondary text, the precise timing of an animation, the whitespace around a button, the hierarchy of information on a page. Each decision, made with intention and consistency, contributes to an overall sense of quality that users may not consciously notice but absolutely feel. This is why taste matters: it’s the accumulated effect of intentional choices that creates products people love to use.
The conventional wisdom might suggest that as AI becomes better at generating designs, the need for human taste diminishes. The opposite is true. As AI tools become more capable of producing viable design options quickly, the bottleneck shifts from generation to curation and refinement. When designers had to manually create every mockup, every iteration, every variation, the constraint was production capacity. Now, with AI capable of generating dozens of design options in seconds, the constraint becomes judgment: the ability to recognize which options are worth pursuing, which directions align with the product’s vision, and which choices will create the most coherent and delightful experience.

This shift fundamentally changes what designers do. Rather than spending time on mechanical production, they spend time on evaluation, refinement, and strategic direction-setting. This is where taste becomes invaluable. A designer with strong taste can look at ten AI-generated layouts and immediately recognize which one has the right balance, which one best serves the user’s needs, and which one aligns with the product’s design language. They can then refine that option, push it further, and ensure it meets the standards of quality that define their product.

In this sense, AI doesn’t replace taste; it amplifies it. It gives designers the leverage to apply their taste across a much larger design space, exploring more options and pushing further than they could if they had to manually create each one. The companies that will win in the AI era are those that understand this dynamic: they’ll use AI to expand the possibility space and then use taste to navigate that space with intention and coherence.
To understand why AI is now capable of assisting with design in meaningful ways, it’s important to understand the journey that led here. The history of AI in product development spans decades, but the recent acceleration is rooted in a specific insight: scaling laws. The concept of scaling laws, the observation that larger models trained on more data with more compute become predictably more capable, represents a fundamental shift in how AI systems are built.

In the early days of machine learning, the focus was on clever algorithms and feature engineering. Teams would spend months designing the right features to feed into a model, optimizing every parameter, and hoping for incremental improvements. This approach had hard limits. No matter how clever the algorithm, there was a ceiling to what it could achieve. The breakthrough came with the realization that simply making models bigger, training them on more data, and giving them more compute could lead to emergent capabilities: abilities that weren’t explicitly programmed but emerged from scale. This insight, validated by research from OpenAI and others, changed everything.

GPT-3, released in 2020, was a watershed moment. It demonstrated that a language model trained at scale could perform tasks it was never explicitly trained to do: writing code, answering questions, generating creative content, and much more. The jump from previous models to GPT-3 wasn’t incremental; it made clear that something fundamental had shifted in AI’s capabilities, and it opened the door to new possibilities across every domain, including design.

The scaling laws principle means that as models get larger and training data increases, performance improves along a smooth, predictable curve, and qualitatively new capabilities keep emerging along the way. This has profound implications for design tools. It means AI can now understand context, infer intent from natural language, recognize patterns in design systems, and generate coherent options that align with a product’s visual language. These capabilities weren’t possible with smaller models or classical machine learning approaches. They emerged from scale.
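For readers who want the math behind this, scaling laws describe a power law rather than a literal exponential. One commonly cited form, from Kaplan et al. (2020), relates test loss to model size N; analogous laws hold for dataset size and compute, and the constants shown are approximate values reported in that paper rather than anything discussed in the conversation:

```latex
% Power-law scaling of language-model loss with non-embedding parameter count N
% (Kaplan et al., 2020). Similar laws hold for dataset size D and compute C.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13}
```

Loss falls smoothly and predictably along this curve; the surprising part is that qualitatively new capabilities, like the ones GPT-3 displayed, tend to show up at particular points along it.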
Figma’s journey with AI spans more than a decade, though the company didn’t start with generative AI. The original mission, to close the gap between imagination and reality, was about helping designers translate their ideas into digital form. In the early days, this meant building collaborative design tools, real-time multiplayer capabilities, and a platform where designers could work together seamlessly. But even then, the founders were thinking about how AI could enhance creation.

In the early 2010s, while exploring machine learning approaches, the Figma team was fascinated by emerging research in computational photography and image editing. Papers were being written about using internet-scale data to complete scenes, essentially doing content-aware fill powered by the entire internet rather than by hand-tuned algorithms. Other research explored converting 2D images into 3D scenes using techniques like photogrammetry and depth estimation. These were fascinating concepts, but they weren’t quite ready for prime time: the technology could get you 85% of the way there, but not 100%. It wasn’t until deep learning matured that these approaches became practical.

The key insight was that there must be a way to make creation easier across many domains, not just one specific task. This led to the vision of “idea to reality”: not “idea to design” or “idea to prototype,” but the broader notion that AI could help people move from conception to execution across many different types of creation. Fast-forward to today, and Figma Make represents the maturation of this vision. It’s not just a design generator; it’s a tool that understands design intent, can infer from existing design systems, and can help people explore the design space more effectively. The journey from those early conversations about neural networks and computational photography to a product that millions of designers use daily illustrates how long it takes for AI capabilities to mature into practical, useful tools.
One of the most interesting aspects of Figma Make is how it sits at the intersection of three traditionally separate domains: design, specification, and code. In traditional software development, these were distinct phases with clear handoffs. A product manager would write a specification, a designer would create mockups based on that spec, and an engineer would implement the design in code. Each phase had its own tools, its own language, and its own constraints. This waterfall-like process worked, but it was slow and created friction at every handoff.

The question Figma is exploring is: what if these three representations of intent could be more fluid? What if a high-fidelity design could serve as a specification? What if a prototype could replace a product requirements document (PRD)? What if code could be generated from design? The answer is that all three (spec, design, and code) are different representations of the same underlying intent. They’re different ways of expressing what a product should do and how it should look. As AI improves at translating between these representations, the boundaries between them blur.

Figma Make operates in this blurred space. You can describe what you want in natural language, and it generates a design. That design is precise enough to serve as a specification for developers. The design can be connected to code through Figma’s developer tools, and the code can be analyzed to understand the design intent and suggest improvements. This fluidity is powerful because it allows different teams and different projects to work in the way that makes the most sense for them. Some teams might start with a detailed design. Others might start with a prototype. Still others might start with code and use design tools to visualize and refine it. The key is that all these approaches are now possible within a single platform, and AI helps translate between them, as the sketch below illustrates.
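To make the “one intent, three representations” idea concrete, here is a deliberately small TypeScript sketch. It is not Figma’s API or Figma Make’s output format; the ButtonSpec type and the render functions are hypothetical names, used only to show how a single structured description can be read as a spec, rendered as a design artifact, and emitted as shippable markup.

```typescript
// A hypothetical structured "spec": one source of intent.
interface ButtonSpec {
  label: string;
  intent: "primary" | "secondary"; // what the PM cares about
  background: string;              // what the designer decides
  paddingPx: number;
  radiusPx: number;
}

// Representation 1: the spec itself (requirements, readable by anyone).
const checkoutButton: ButtonSpec = {
  label: "Buy now",
  intent: "primary",
  background: "#4f46e5",
  paddingPx: 12,
  radiusPx: 8,
};

// Representation 2: a design artifact (here, plain CSS a designer could inspect).
function renderButtonCSS(spec: ButtonSpec): string {
  return [
    `.btn-${spec.intent} {`,
    `  background: ${spec.background};`,
    `  padding: ${spec.paddingPx}px ${spec.paddingPx * 2}px;`,
    `  border-radius: ${spec.radiusPx}px;`,
    `}`,
  ].join("\n");
}

// Representation 3: code (markup an engineer would ship).
function renderButtonHTML(spec: ButtonSpec): string {
  return `<button class="btn-${spec.intent}">${spec.label}</button>`;
}

console.log(renderButtonCSS(checkoutButton));
console.log(renderButtonHTML(checkoutButton));
```

The point is not the code itself but the direction of travel: once spec, design, and implementation are projections of the same structured intent, moving between them becomes a translation problem that AI is increasingly good at, rather than a handoff between teams.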
One of Dylan Field’s most provocative statements is that we’re currently in the “MS-DOS era of AI”: the natural language prompting everyone is doing today will eventually seem as primitive as command-line interfaces seem now. This perspective is important because it suggests that natural language is not the end state of how we’ll interact with AI, but rather the beginning.

Natural language prompts are a way of exploring what researchers call “latent space,” the high-dimensional space of possibilities that a model has learned. When you prompt an AI model, you’re essentially pushing it in different directions within this space, exploring different regions of possibility. Natural language is a useful way to do this because it’s how humans naturally express intent. But it’s not the only way, and it may not be the best way for all use cases.

As AI tools mature, we’ll likely see an explosion of different interfaces for exploring latent space. Some might be more visual: sliders and controls that let you adjust different dimensions of the design space. Some might be more constrained: interfaces that guide you through a structured set of choices. Some might be more playful: interfaces that encourage experimentation and serendipity. The key insight is that constraints can unlock creativity. A designer working within a constrained interface might discover possibilities they wouldn’t have found with unlimited natural language prompting. This is why the future of AI-assisted design isn’t just about better language models; it’s about better interfaces for exploring the design space.

Figma Make is already moving in this direction. While it supports natural language prompts, it also understands context from your existing designs, can infer your intent from the design system you’ve built, and can suggest options based on patterns it recognizes. This is more sophisticated than simple prompting: it’s about understanding the designer’s intent at a deeper level and helping designers explore the design space more effectively.
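As a toy illustration of what a more constrained interface could look like, the sketch below replaces free-form prompting with a single slider that interpolates between two style presets and maps the result onto concrete design parameters. The presets, dimensions, and mappings are invented for this example; they are not how Figma Make or any real latent-space interface works internally.

```typescript
// Toy illustration of a constrained control: a single slider that interpolates
// between two style presets instead of relying on free-form prompting.
type StyleVector = { density: number; contrast: number; playfulness: number };

const compact: StyleVector = { density: 0.9, contrast: 0.4, playfulness: 0.1 };
const expressive: StyleVector = { density: 0.3, contrast: 0.8, playfulness: 0.9 };

// Linear interpolation: slider position t in [0, 1] picks a point on the path
// between the two presets, i.e. a small, curated slice of the design space.
function lerpStyle(a: StyleVector, b: StyleVector, t: number): StyleVector {
  return {
    density: a.density + (b.density - a.density) * t,
    contrast: a.contrast + (b.contrast - a.contrast) * t,
    playfulness: a.playfulness + (b.playfulness - a.playfulness) * t,
  };
}

// Map the abstract style point onto concrete design parameters.
function toDesignParams(s: StyleVector) {
  return {
    spacingPx: Math.round(4 + (1 - s.density) * 20),    // 4 to 24 px
    fontWeight: s.contrast > 0.6 ? 700 : 400,
    cornerRadiusPx: Math.round(2 + s.playfulness * 14),  // 2 to 16 px
  };
}

for (const t of [0, 0.5, 1]) {
  console.log(t, toDesignParams(lerpStyle(compact, expressive, t)));
}
```

Running it prints three points along the path from the compact preset to the expressive one, which is the sense in which a constrained control still lets a designer explore a meaningful slice of the space.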
Design systems have become increasingly important in modern product development. They’re the codification of a product’s visual language: the patterns and principles that ensure consistency across all touchpoints. A design system includes typography scales, color palettes, spacing rules, component libraries, and the principles that guide how these elements are used together.

In the context of AI-assisted design, design systems become even more valuable. They serve as the guardrails that help AI understand what your product should look like. When Figma Make can infer from your design system, it can generate options that are already aligned with your brand, your spacing rules, your typography, and your component library. This dramatically reduces the amount of manual refinement needed. Instead of generating a design that’s completely generic and requires extensive customization, AI can generate options that are already 80% of the way to being production-ready.

This is where the combination of AI and design systems becomes powerful. The AI handles the generation and exploration of options. The design system ensures consistency and alignment. The designer’s taste determines which options are worth pursuing and how to refine them further. This three-part system (AI for generation, design systems for consistency, and human taste for curation) represents the future of design workflows. It’s not about replacing designers with AI. It’s about giving designers better tools to explore more possibilities while maintaining the coherence and intentionality that defines great products.
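A minimal sketch of the guardrail idea, assuming the design system is available as a flat set of tokens: generated values that fall outside the system get snapped back onto it. The token names and values here are invented, and this is not Figma’s variables or design-token API.

```typescript
// Hypothetical design tokens: the only values generated designs are allowed to use.
const tokens = {
  colors: ["#111827", "#4f46e5", "#f9fafb"],
  spacingPx: [4, 8, 12, 16, 24, 32],
  fontSizesPx: [12, 14, 16, 20, 24],
} as const;

interface GeneratedStyle {
  color: string;
  paddingPx: number;
  fontSizePx: number;
}

// Snap a possibly off-system value onto the nearest token.
function snap(value: number, allowed: readonly number[]): number {
  return [...allowed].sort((a, b) => Math.abs(a - value) - Math.abs(b - value))[0];
}

// Constrain an AI-generated style so it stays inside the design system.
function enforceTokens(style: GeneratedStyle): GeneratedStyle {
  return {
    color: (tokens.colors as readonly string[]).includes(style.color)
      ? style.color
      : tokens.colors[0], // fall back to a sanctioned color
    paddingPx: snap(style.paddingPx, tokens.spacingPx),
    fontSizePx: snap(style.fontSizePx, tokens.fontSizesPx),
  };
}

// Example: a generated style with near-miss values gets pulled onto the system.
console.log(enforceTokens({ color: "#4845e0", paddingPx: 13, fontSizePx: 17 }));
// -> { color: "#111827", paddingPx: 12, fontSizePx: 16 }
```

In a real workflow the fallback would be smarter (nearest color in the palette, or a flag for human review), but even this crude version shows how a design system turns open-ended generation into generation within bounds.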
Experience how FlowHunt automates your AI content and design workflows — from research and generation to refinement and publishing — all in one place.
The principles that Dylan Field articulates about taste, AI, and design systems apply equally to content creation and workflow automation. FlowHunt is built on the same philosophy: use AI to expand the possibility space, but maintain human judgment and taste as the filter that determines what actually gets shipped. In content workflows, this means using AI to generate multiple options (different headlines, different angles, different structures) and then using human judgment to select and refine the best ones. In design workflows, it means using AI to generate layout options and component variations, but relying on designers to evaluate them against the design system and the product’s aesthetic vision.

FlowHunt integrates these capabilities into a unified platform where content creators, designers, and product teams can collaborate on AI-assisted workflows. The platform understands that taste isn’t something that can be automated away; it’s something that needs to be supported and amplified. By providing tools that make it easy to generate options, compare them, refine them, and maintain consistency across a design system or content library, FlowHunt helps teams apply their taste at scale. This is particularly valuable for teams that need to produce large volumes of content or design work. Instead of manually creating everything, they can use AI to generate options and then apply their taste to curate and refine, following the generate-then-curate pattern sketched below. The result is higher quality output, faster production, and more consistency across all touchpoints.
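The pattern itself is simple enough to sketch. The snippet below is a generic generate-then-curate loop, not FlowHunt’s actual API; the Generator and Reviewer types and the stub implementations are illustrative stand-ins, with the reviewer standing in for the human selection step.

```typescript
interface Candidate {
  id: number;
  text: string;
}

type Generator = (brief: string, n: number) => Promise<Candidate[]>;
type Reviewer = (options: Candidate[]) => Promise<Candidate>; // in practice, a human picks

async function generateThenCurate(
  brief: string,
  generate: Generator,
  review: Reviewer,
  n = 10,
): Promise<Candidate> {
  const options = await generate(brief, n); // AI expands the possibility space
  return review(options);                   // taste narrows it back down
}

// Stub implementations so the sketch runs end to end.
const fakeGenerate: Generator = async (brief, n) =>
  Array.from({ length: n }, (_, i) => ({ id: i, text: `${brief}: option ${i + 1}` }));

// Stand-in for human review; a real workflow would surface the options in a UI.
const pickShortest: Reviewer = async (options) =>
  options.reduce((best, o) => (o.text.length < best.text.length ? o : best));

generateThenCurate("Headline for the launch page", fakeGenerate, pickShortest, 5)
  .then((winner) => console.log("Selected:", winner));
```

The structure is the point: generation is cheap and wide, while the review step is where taste, brand, and design-system knowledge get applied before anything ships.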
One of the most significant implications of AI-assisted design tools is the blurring of traditional roles. Historically, there were clear distinctions: product managers wrote specs, designers created mockups, and engineers implemented them. These roles required different skill sets and different tools. As AI tools become more capable, these boundaries blur. A product manager can now create a prototype without being a designer. A designer can now generate code without being an engineer. An engineer can now create designs without formal design training.

This democratization of creation is powerful, but it also raises important questions. If anyone can generate a design with AI, what’s the value of a designer? The answer is taste. A designer’s value isn’t in their ability to use design tools; it’s in their ability to recognize quality, make intentional choices, and maintain coherence. These skills become more valuable, not less, as AI makes it easier for anyone to generate designs. The designers who will thrive in this environment are those who understand that their role is evolving from “maker of designs” to “curator and refiner of designs.” They’ll use AI to explore more possibilities than they could manually create, and then they’ll apply their taste to select and refine the best options. This is a different skill set than traditional design, but it’s one that’s increasingly valuable.

Similarly, product managers who understand design principles can now create higher-fidelity prototypes to communicate their vision. Engineers who understand design can now contribute more meaningfully to design decisions. The result is more collaboration, more iteration, and ultimately better products. The key is that taste, the ability to recognize quality and make intentional choices, remains valuable across all these roles. It’s not about having a specific job title; it’s about having the judgment to know what’s good and the vision to push for it.
Understanding scaling laws is crucial to understanding why AI is suddenly capable of assisting with design in meaningful ways. For decades, AI research followed a pattern of incremental improvement. New algorithms, new techniques, new approaches would yield modest improvements in performance. Progress was real but slow. The breakthrough came with the realization that simply making models bigger, training them on more data with more compute, could lead to dramatic and, crucially, predictable improvements in capability. This insight, formalized in research on scaling laws, changed the trajectory of AI development.

The implications are profound. It means that as we continue to scale up models and training data, capabilities should keep improving, and the companies and teams that can access the most compute and the most data will have significant advantages. For design tools, this means that as language models and multimodal models continue to scale, they’ll become better at understanding design intent, inferring patterns from design systems, and generating coherent options. The capabilities we see in Figma Make today will seem primitive compared to what’s possible in a few years.

This is both exciting and humbling. It’s exciting because it means the possibilities for AI-assisted creation are far from exhausted. It’s humbling because it means that the competitive advantages of today may not persist if they’re based solely on AI capabilities. The real competitive advantage comes from taste: from the ability to use these capabilities in service of a clear vision and aesthetic. Companies that combine powerful AI tools with strong taste and clear design principles will be the ones that create products people love.
The ultimate vision that Dylan Field articulates is one where AI helps people explore a much larger option space than they could manually. Instead of being constrained by what a single designer or team can create, you can explore hundreds or thousands of possibilities. The designer’s role becomes less about creating and more about navigating this expanded space: recognizing which directions are worth pursuing, which options align with the vision, and which choices will create the most coherent and delightful experience.

This shift has profound implications for how products are built. It means more iteration, more exploration, and ultimately more intentional products. Instead of committing to the first design that works, teams can explore multiple directions and choose the one that best serves their users and their vision. It means that taste becomes the limiting factor, not production capacity. The teams that will win are those that have strong taste and the discipline to apply it consistently.

This is why Figma Make is so significant. It’s not just a tool for generating designs faster. It’s a tool for expanding the possibility space and helping designers navigate that space with intention. It’s a tool that recognizes that taste is the real competitive advantage, and that AI’s role is to amplify that taste by making it possible to explore more possibilities and refine them more thoroughly. The future of creation isn’t about replacing human judgment with AI. It’s about using AI to expand the space of possibilities and then using human judgment to navigate that space with intention and coherence. This is the promise of tools like Figma Make, and it’s why taste will remain the ultimate moat in an AI-augmented world.
The convergence of AI capabilities and design tools represents a fundamental shift in how products are created. As Dylan Field articulates, taste (the cultivated ability to recognize quality, make intentional choices, and maintain coherence) becomes the ultimate competitive moat precisely because AI is becoming better at the mechanical aspects of design. The journey from early machine learning experiments to Figma Make illustrates how long it takes for AI capabilities to mature into practical tools, and how important it is to maintain a clear vision about what problems you’re solving.

The blurring of roles between designers, product managers, and engineers, enabled by AI-assisted tools, democratizes creation while simultaneously making taste more valuable. Design systems serve as the guardrails that help AI generate coherent options aligned with a product’s vision. Natural language is just the beginning of how we’ll interact with AI; future interfaces will offer more sophisticated ways to explore the design space.

The scaling laws that power modern AI systems suggest that capabilities will keep improving, but competitive advantage will come not from having access to the best AI, but from having the taste and vision to use it in service of a clear aesthetic. Teams that combine powerful AI tools with strong design principles, clear vision, and disciplined taste will create products that stand out. The future of creation is not about replacing human judgment; it’s about amplifying it, expanding the possibility space, and giving creators the tools to explore more thoroughly and refine more intentionally than ever before.
Taste refers to the aesthetic judgment, creative vision, and design sensibility that distinguishes exceptional products from mediocre ones. In an era where AI can generate designs quickly, taste becomes a competitive moat because it's the human element that determines which AI-generated options are refined, elevated, and ultimately shipped to users. It's the ability to recognize quality, make intentional design choices, and maintain consistency across a product that creates lasting competitive advantage.
Figma Make lowers the barrier to entry for design creation by allowing anyone to generate layouts, flows, and prototypes through AI prompts. However, the tool doesn't eliminate the need for taste—it amplifies it. Designers and product builders still need to evaluate generated options, refine them, make intentional choices about which direction to pursue, and ensure consistency with their design system. Taste becomes even more valuable because it's the filter that transforms raw AI output into polished, cohesive products.
Design systems serve as the guardrails and constraints that help AI understand your product's visual language, patterns, and principles. When AI tools like Figma Make can infer from your existing design system, they generate options that are already aligned with your brand, spacing rules, typography, and component library. This means less manual refinement and more consistency, while still allowing designers to exercise taste in selecting and iterating on the best options.
GPT-3 demonstrated that scaling laws—the principle that larger models with more data and compute become dramatically and predictably more capable—were real and significant. This realization opened the door to AI applications that could understand context, intent, and nuance in ways previous models couldn't. For design tools, this means AI can now understand design intent from natural language prompts, infer patterns from existing designs, and generate coherent, contextually appropriate options rather than random outputs. This rapid improvement in model capabilities directly translates to more useful, intuitive design assistance.
Traditionally, these were separate phases: requirements → design → code. In the AI era, these boundaries are blurring. A high-fidelity design can serve as a spec. A prototype can replace a PRD. Code can be generated from design. The key insight is that all three are different representations of the same intent. As AI improves at translating between these representations, the question becomes not 'which phase comes first?' but 'which representation best captures our intent and allows us to explore the option space most effectively?' Different teams and projects will answer this differently, and tools need to support multiple workflows.
Arshia is an AI Workflow Engineer at FlowHunt. With a background in computer science and a passion for AI, he specializes in creating efficient workflows that integrate AI tools into everyday tasks, enhancing productivity and creativity.
Discover how FlowHunt integrates AI-powered design automation with your existing workflows to accelerate creation while maintaining your unique aesthetic vision.


