
Figma Partners With Google to Bring Gemini AI Into Design Workflows — How Will It Work?

  • October 10, 2025 (Updated)

Figma is partnering with Google to add Gemini models to its platform, including Gemini 2.5 Flash, Gemini 2.0, and Imagen 4, bringing faster image generation and editing to its 13M monthly active users.

Early tests showed a ~50% latency reduction for “Make Image.”

📌 Key Takeaways

  • Gemini 2.5 Flash, 2.0, and Imagen 4 are coming to Figma for image creation and edits.
  • Figma cites 13M MAUs; image tools saw ~50% faster responses in testing.
  • The deal is not exclusive; Figma also appears as an app inside ChatGPT.
  • The timing aligns with Google’s Gemini Enterprise push for workplace AI.
  • Designers keep their usual flow; Gemini slots into existing Figma features.


Gemini 2.5 Flash And Imagen 4 Are Coming To Figma

Figma will integrate Gemini 2.5 Flash for image editing and generation, alongside Gemini 2.0 and Imagen 4 for creative tasks. The goal is to make prompts and quick iterations part of everyday design without leaving the canvas.

In early tests, Figma says “Make Image” responded roughly 50% faster with Gemini 2.5 Flash.
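Figma has not published how the integration is wired internally, but Google's public Gen AI SDK gives a feel for what a text-to-image call against a Gemini image model looks like. Below is a minimal sketch assuming the @google/genai Node SDK; the model ID, file name, and prompt are illustrative, not Figma's implementation.

```typescript
// Minimal sketch: text-to-image with Google's public Gen AI SDK (@google/genai).
// The model ID and response handling follow Google's published docs; Figma's
// own in-product wiring is not public, so treat this as illustrative only.
import { GoogleGenAI } from "@google/genai";
import * as fs from "node:fs";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function makeImage(prompt: string): Promise<void> {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash-image-preview", // illustrative image-capable Gemini model
    contents: prompt,
  });

  // Image results come back as inline base64 parts alongside any text parts.
  for (const part of response.candidates?.[0]?.content?.parts ?? []) {
    if (part.inlineData?.data) {
      fs.writeFileSync("make-image.png", Buffer.from(part.inlineData.data, "base64"));
    }
  }
}

makeImage("Hero image for a fintech landing page, neutral palette, soft gradients");
```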

This is part of a broader play to meet “evolving needs” across product teams, not only designers. The integration arrives as Google promotes Gemini Enterprise to bring agents and models into workplace tools, with Figma named among early customers in press coverage.


How It Changes Daily Work

For visual exploration, Gemini supports text-to-image and prompted edits directly in Figma’s image workflow.

That means faster concepting, cleaner revisions, and fewer round-trips to external generators or editors. The promise is to give time back during sprints and design crits.
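To make a prompted edit concrete: in the public SDK it is the same call as text-to-image, with the existing image attached as an inline part next to a short instruction. This is again a sketch against the @google/genai Node SDK with an illustrative model ID and file names, not Figma's in-product API.

```typescript
// Minimal sketch of a prompted edit: send an existing image plus a text
// instruction and save the returned variant. Public SDK only; not Figma's API.
import { GoogleGenAI } from "@google/genai";
import * as fs from "node:fs";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function editImage(path: string, instruction: string): Promise<void> {
  const base64Image = fs.readFileSync(path).toString("base64");

  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash-image-preview", // illustrative
    contents: [
      { inlineData: { mimeType: "image/png", data: base64Image } },
      { text: instruction },
    ],
  });

  for (const part of response.candidates?.[0]?.content?.parts ?? []) {
    if (part.inlineData?.data) {
      fs.writeFileSync("edited.png", Buffer.from(part.inlineData.data, "base64"));
    }
  }
}

editImage("hero.png", "Swap the background for a warm studio backdrop, keep the product untouched");
```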

Figma’s presence as an app inside ChatGPT also signals a flexible approach: Gemini powers in-product creation, while third-party chat apps can still route ideas back into Figma. Designers get options rather than lock-in.

“Gemini 2.5 Flash will be integrated into editing and image generation, letting users make AI images with a prompt and request changes.”


Availability And The Bigger Picture

The partnership keeps Figma tied into Google Cloud while expanding its AI palette.

It lands the same week Google pitches Gemini Enterprise as a “front door” for AI at work, so the integration should benefit from Google’s enterprise momentum and governance features.

Importantly, this is not an exclusive lane. Figma is also listed among apps that can run inside ChatGPT, underscoring that the company expects designers to mix ecosystems and still finish in Figma for assembly, review, and handoff.


How To Use Figma’s Gemini AI: Step-By-Step

Use these quick steps to try the new image features without changing your current workflow.

  • Update Figma, then open a file where you’d normally create or edit images.
  • Select an image tool (e.g., Make Image or an edit flow) and follow the on-screen prompt field.
  • Describe what you want (“hero image with neutral palette,” “swap background,” “add subtle texture”).
  • Iterate with short follow-ups (“brighter lighting,” “tighter crop,” “remove the plant”).
  • Compare variants in your frame, keep the best take, and continue your usual Figma review and handoff.

Tip: If you ideate in Gemini first and want a pixel-faithful mock in Figma, plugins like html.to.design can capture a clean HTML preview and import it as editable layers. Use this as an optional bridge when you start outside Figma.
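For teams that want a programmatic bridge of their own, Figma's Plugin API can also drop raw image bytes onto the canvas as an image fill. The sketch below assumes a hypothetical endpoint that serves a generated image; figma.createImage and image fills are standard Plugin API features.

```typescript
// Minimal Figma plugin sketch: fetch image bytes (e.g. from your own service
// that wraps a Gemini call) and place them on the canvas as an image fill.
// The endpoint URL is a placeholder; figma.createImage, ImagePaint, and
// viewport helpers are standard Plugin API calls.
async function placeImage(url: string): Promise<void> {
  const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
  const image = figma.createImage(bytes); // registers the bytes, returns an image hash

  const rect = figma.createRectangle();
  rect.resize(1024, 1024);
  rect.fills = [{ type: "IMAGE", imageHash: image.hash, scaleMode: "FILL" }];

  figma.currentPage.appendChild(rect);
  figma.viewport.scrollAndZoomIntoView([rect]);
}

placeImage("https://example.com/generated/hero.png").then(() => figma.closePlugin());
```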


Why It Matters

Integrating Gemini where designers already work speeds up exploration and revision without forcing a tool switch. That matters in sprint cycles where minutes add up and momentum is fragile.

The move also fits Google’s enterprise AI push, suggesting better governance and performance as models evolve. If latency gains hold in production, Gemini-powered edits could become the default path for everyday image work in Figma.


For the latest AI news, visit our site.


If you liked this article, be sure to follow us on X/Twitter and also LinkedIn for more exclusive content.


Khurram Hanif

Reporter, AI News

Khurram Hanif, AI Reporter at AllAboutAI.com, covers model launches, safety research, regulation, and the real-world impact of AI with fast, accurate, and sourced reporting.

He’s known for turning dense papers and public filings into plain-English explainers, quick on-the-day updates, and practical takeaways. His work includes live coverage of major announcements and concise weekly briefings that track what actually matters.

