Workflow · 6 min read

Using reference images effectively

How to get the most out of NoHiddenPrompt's reference image feature — for style matching, character consistency, and iterative refinement.

Reference images are the single biggest unlock for getting consistent, on-brand results from Nano Banana 2 and Pro. Used well, they turn a hit-or-miss generation tool into a reliable creative workflow. Used badly, they confuse the model and produce mush. This guide covers how to use them well.

What references actually do

When you attach a reference image to a generation, the model treats it as visual context — guidance for style, color palette, composition, character identity, or all of the above. The text prompt still drives what is depicted; the reference influences how it looks. Think of the prompt as the script and the reference as the mood board.

Use one reference per intent

Don't dump six unrelated images and hope the model sorts it out. The most effective approach is one reference per thing you want to control. One image for style, one for character, one for the environment. Three focused references beat six scattered ones.

Be explicit in the prompt about what to copy

Tell the model what to take from each reference. 'Use the lighting style from the first reference image and the character's outfit from the second' produces dramatically better results than just attaching the images and writing a generic prompt. The model can't read your mind about which aspects of a reference matter.
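One way to keep this discipline is to bind each attached reference to a single stated intent before composing the prompt. The sketch below is purely illustrative: the helper name, the prompt wording, and the intent labels are assumptions for this example, not part of NoHiddenPrompt's interface.

```python
# Hypothetical helper: pair each reference image with one explicit intent,
# so the model is told exactly what to take from each attachment.
def compose_prompt(subject: str, references: dict[str, str]) -> str:
    """references maps an intent (e.g. "lighting style") to the ordinal
    label of the attached image it refers to (e.g. "first")."""
    clauses = [
        f"use the {intent} from the {label} reference image"
        for intent, label in references.items()
    ]
    return f"{subject}. " + "; ".join(clauses).capitalize() + "."

prompt = compose_prompt(
    "A detective on a rain-soaked street",
    {"lighting style": "first", "character's outfit": "second"},
)
```

Printing `prompt` yields a single sentence per generation that names what each reference contributes, which is the pattern the quoted example above follows.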

Style matching

If you've found a visual style you love — a specific film stock look, an illustration aesthetic, a particular color palette — attach one image that exemplifies it and write a prompt like: 'In the same painterly style, color palette, and brush texture as the reference image, [your subject and scene].' This is the single most reliable use of references.

Character consistency

Generating the same character across multiple images is hard. The trick: get one good image of the character first, then use that image as a reference for every subsequent generation. Write prompts like 'The same character as in the reference image, now [doing different action in different setting].' Nano Banana Pro is meaningfully better at this than Nano Banana 2.

Iterative refinement

NoHiddenPrompt's most powerful workflow: generate an image, click 'Use as reference,' tweak your prompt, and regenerate. The model preserves what's working and updates what you asked it to change. This is far more reliable than trying to nail the perfect prompt on the first try.
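The loop above can be sketched in a few lines. Note that `generate()` here is a stand-in stub, not a real API call; in NoHiddenPrompt the equivalent step is clicking 'Use as reference' in the UI.

```python
# Minimal sketch of the generate -> use-as-reference -> regenerate loop.
def generate(prompt, reference=None):
    # Stand-in for the actual generation step: returns a fake image id
    # that records which prompt and reference produced it.
    return f"img({prompt!r}, ref={reference})"

reference = None
for tweak in ["warmer lighting", "tighter crop", "add light rain"]:
    image = generate(f"City street at dusk, {tweak}", reference=reference)
    reference = image  # each round builds on the previous result
```

Each iteration carries forward the last result as the new reference, which is what makes the refinement converge instead of starting from scratch every time.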

What to avoid

  • Don't attach low-resolution references. The model sees pixel detail. A blurry reference produces a blurry-feeling output.
  • Don't attach images with heavy watermarks — the model may try to replicate the watermark.
  • Don't attach references that contradict your prompt. If your prompt asks for a forest and your reference shows a city, you get something muddy in between.
  • Don't use all six reference slots just because they're there. Three focused references are usually the sweet spot.

A concrete workflow

Say you're generating a series of book cover concepts featuring the same protagonist:

  1. Generate a single hero portrait of the character with a detailed prompt and no references. Iterate until you have one you love.
  2. Lock that portrait as your character reference for all subsequent generations.
  3. Write each new prompt as 'The same character as in the reference image, [new scene].'
  4. When you find a scene composition that works, save that too: now you have a character reference and a composition reference for further iterations.
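Step three amounts to applying one fixed template across every scene. A small sketch, with an illustrative file name and scene list (nothing here is a real NoHiddenPrompt identifier):

```python
# Sketch of step three: once the character reference is locked,
# every new scene prompt follows the same template.
character_ref = "hero_portrait_v4.png"  # the portrait locked in step two

scenes = [
    "standing on a cliff at dawn",
    "reading by candlelight in a library",
    "walking through a neon-lit market",
]

prompts = [
    f"The same character as in the reference image, {scene}."
    for scene in scenes
]
```

Keeping the template identical across prompts means the only thing that varies between generations is the scene, which is exactly what you want when chasing consistency.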

References turn AI image generation from a slot machine into a tool. Use them deliberately and your work will look like it came from a single creative vision rather than ten random rolls.
