⏳ In Brief
- Google integrates Nano Banana into Gemini, available to free and paid users today.
- Model preserves likeness across edits, a frequent failure in older tools.
- Users can blend photos, restyle scenes, and keep identities consistent.
- Rollout spans the Gemini app and web, with developers getting access via the API, AI Studio, and Vertex AI.
- All outputs carry visible watermarks plus SynthID for provenance.
Google puts Nano Banana inside Gemini, with immediate global access
Google confirmed Nano Banana as its new image editing model for Gemini, focused on natural multi-step changes that keep people and pets recognisable. The company says both free and paid users can try it starting today.
Early previews placed the model No. 1 on LMArena’s Image Edit leaderboard, a sign of strong instruction-following and identity preservation.
Gemini now exposes those abilities directly inside its consumer app and web experience. At AllAboutAI, we pushed Nano Banana to its limits. Here’s how it performed in our tests.
Launched August 26, 2025, Nano Banana brings likeness-preserving edits, photo blending, and multi-turn refinement into Gemini, with outputs watermarked and tagged by SynthID for provenance.
🍌 nano banana is here → gemini-2.5-flash-image-preview
– SOTA image generation and editing
– incredible character consistency
– lightning fast
available in preview in AI Studio and the Gemini API pic.twitter.com/eKx9lwWc9j
— Google AI Studio (@googleaistudio) August 26, 2025
What actually changed for editors, and why it matters
Gemini can combine photos, restyle rooms, and apply design mixing, while keeping faces and features stable. That reduces the near-miss problem where a person looks close, yet subtly wrong, after even simple edits.
Multi-turn editing lets users iterate (paint a wall, add furniture, then tweak details) without destroying earlier choices. The system aims for precise control, not just one-shot generations that drift off the prompt.
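To make that iteration model concrete, here is a rough sketch of how a multi-turn edit history can accumulate, so each new instruction builds on the previous result instead of starting over. This is our illustration, not Google's published workflow; the field names follow the Gemini `generateContent` request shape, and nothing here touches the network.

```python
import base64

def add_user_turn(history, instruction, prior_image=None):
    """Append the next instruction (and, after the first turn, the
    previous output image) to the running conversation history."""
    parts = []
    if prior_image is not None:
        # Carry the last edited image forward so earlier choices persist.
        parts.append({
            "inline_data": {
                "mime_type": "image/png",
                "data": base64.b64encode(prior_image).decode("ascii"),
            }
        })
    parts.append({"text": instruction})
    history.append({"role": "user", "parts": parts})
    return history

history = []
add_user_turn(history, "Paint the back wall sage green.")
# ...the model replies with an edited image; feed it into the next turn...
add_user_turn(history, "Now add a reading chair by the window.",
              prior_image=b"\x89PNG-placeholder")  # stand-in bytes for the demo
```

The point of the pattern is simply that "paint the wall" survives the later "add a chair" request, because every turn sees the prior output rather than the original photo.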
New tricks to try
- Blend two photos into one coherent scene
- Keep a subject’s likeness while changing outfits or hair
- Transfer the style of one image onto objects in another image
Nano Banana also enables photo-to-video from any edited image, likeness-preserving background and location swaps, and part-specific refinements across multiple turns.
nano banana is free and unlimited now
you can edit image, swap actor from any film with it, turn to video with Veo 3 and Kling 2.1, make character talk..
they called it Higgsfield Swap-to-Video
tutorial & examples: pic.twitter.com/tcHEN4RRcW
— el.cine (@EHuanglu) August 27, 2025
Where you can use it, and who gets access
Google says the update is live for paid and unpaid users globally in the Gemini app. That includes mobile and web, so most people can test edits without new installs or extra fees.
Developers get the model through the Gemini API, Google AI Studio, and Vertex AI, letting teams build product workflows that pair text prompts with repeatable edits. That mirrors how creative stacks are adopting AI components.
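As a rough illustration of what that developer access looks like, the sketch below assembles a single image-edit request in the `generateContent` JSON shape the Gemini API accepts. The model id is the preview name from Google's announcement; authentication and the actual HTTP POST are deliberately left out, so treat this as a payload sketch, not a complete client.

```python
import base64
import json

# Preview id quoted in Google's announcement; check current docs
# for the final model name before relying on it.
MODEL = "gemini-2.5-flash-image-preview"

def build_edit_request(image_bytes: bytes, instruction: str) -> str:
    """Pair an input image with a text editing instruction in a
    generateContent-style request body. Only the payload shape is
    shown; sending it requires an API key and an HTTP client."""
    return json.dumps({
        "contents": [{
            "parts": [
                {"inline_data": {
                    "mime_type": "image/jpeg",
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
                {"text": instruction},
            ],
        }],
    })

body = build_edit_request(
    b"\xff\xd8\xff\xe0",  # stand-in JPEG bytes for the demo
    "Swap the background for a beach at dusk; keep the dog unchanged.",
)
```

The same body works against Vertex AI's Gemini endpoint, which is what makes the "repeatable edits" workflow practical: the prompt and image slot are data, so a team can template them.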
Safety, provenance, and the deepfake question
All Gemini image outputs include a visible watermark and an invisible SynthID mark. That helps downstream platforms and users verify origin, even when images are re-shared or lightly compressed.
Outside coverage highlights the usual concern: more powerful consumer editors can also speed up misuse. Watermarking, policy filters, and consistent labelling will be essential as edits become more realistic.
Google emphasises provenance, visible watermarks, and SynthID by default. Analysts still flag risks around realistic identity edits, especially when content leaves Gemini.
Benchmarks, naming, and how we got here
The project circulated under the “Nano Banana” codename, generating buzz as it topped LMArena’s image editing board. Google later confirmed the model and folded it into Gemini for mainstream use.
Internally, the capability maps to Gemini 2.5 Flash Image, a step meant to match rival tools on fidelity and control. The company frames it as a practical editor that understands instructions, not just a novelty generator.
“We’re really pushing visual quality forward, as well as the model’s ability to follow instructions.” — Nicole Brichtova, Google DeepMind.
How will it change everyday editing?
For casual users, the win is consistency; the person you edit today should still look like themselves tomorrow. For teams, multi-turn refinement trims rework, edits stack neatly, and results are easier to reuse.
Gemini’s approach also narrows the gap between chat and creation, since prompts can orchestrate several targeted changes in sequence.
That enables moodboard-to-mockup loops without jumping across apps.
Wondering how it compares to other image generation tools? You can check our detailed NanoBanana vs ChatGPT vs MidJourney vs Flux comparison.
Conclusion
Nano Banana moves AI editing from clever trick to dependable tool, with likeness-preserving changes, multi-step control, and default watermarks to signal origin. The features now sit where users already work, inside Gemini.
The next proof points are robustness and guardrails at scale. If quality holds across edge cases, and provenance remains visible downstream, Gemini’s editor could become an everyday default for quick, credible image edits.