AI image generation: ethics and best practices
Practical, non-preachy guidance on using AI image generators responsibly — covering attribution, real people, copyright, and disclosure.
AI image generation is a genuinely new medium. The norms for using it responsibly are still being worked out, but a few principles that hold up well have emerged from the first few years of widespread use. This is a practical guide — not a lecture — covering the questions most users actually run into.
Disclose AI-generated images when context matters
Disclosure isn't always necessary. If you generate a stylized illustration as a desktop wallpaper, no one expects a disclaimer. If you publish an image as journalism, marketing, or evidence of something real, you should clearly label it as AI-generated. The rule of thumb: would a reasonable viewer assume this is a real photograph or human-made artwork? If yes, disclose.
Be careful with real people
Generating images of real, identifiable people without their consent is the area where AI image generators cause the most harm. Avoid creating images that depict real people in situations they didn't consent to — especially anything sexual, violent, or that could be mistaken for a real photo of a real event. Many jurisdictions have introduced, or are introducing, laws specifically targeting non-consensual deepfakes.
Public figures: still careful
Politicians, celebrities, and other public figures have less expectation of privacy than private individuals, but that doesn't mean anything goes. Satirical or editorial images that are clearly labeled as such are generally accepted. Realistic fakes designed to deceive — fake quotes, fake events, fake photographs — are not.
Copyright and trained data
AI image models are trained on enormous datasets that include copyrighted images. The legal status of model outputs is still being litigated. In practice: if you generate something that closely resembles an existing copyrighted work — a recognizable Pixar character, a specific Annie Leibovitz portrait — don't use it commercially. Generic outputs that don't replicate specific protected works are generally safer, but legal certainty doesn't exist yet.
Attribution to artists
Prompting "in the style of [living artist]" is technically possible but ethically fraught. Many artists have publicly objected to their styles being mimicked by AI models. A safer approach: describe the visual qualities you want ("thick impasto brush strokes, muted earth-tone palette") rather than naming a specific living person. References to deceased artists or broad historical movements are less contentious.
Don't pass AI work off as human-made for commissions
If a client is paying you for human craftsmanship, don't deliver AI-generated work without telling them. This is straightforward fraud and it's increasingly easy to detect. If you use AI as part of your workflow — generating concepts, mocking up compositions, producing reference imagery — that's fine, but be upfront about it.
Children
Generating sexualized images of minors is illegal everywhere and a hard limit on every reputable model. Even non-sexual images of identifiable real children should be approached with extreme caution — a parent's consent is not the same as the child's, and a generated image of a child can follow them into adulthood.
Misinformation
Don't create realistic images of events that didn't happen and present them as if they did. This is the single most damaging misuse of AI image tools. If you're making fictional scenes for entertainment or art, that's a legitimate use — but make the fictional nature clear in the surrounding context.
Working with model safety policies
The Google Gemini models behind NoHiddenPrompt enforce content safety policies. They will refuse some prompts. Sometimes a refusal is overcautious and frustrating. The right response is to rephrase your prompt to make your intent clear, not to look for ways to bypass the policy. The policies exist for good reasons even when individual decisions are imperfect.
A short checklist
- Would a reasonable viewer be misled? Disclose.
- Does it depict a real person without consent? Don't make it.
- Does it closely replicate a specific copyrighted work? Don't use commercially.
- Are you delivering it to someone expecting human work? Tell them.
- Could it be mistaken for documentation of a real event? Label clearly or don't publish.
AI image generation is a powerful tool. Most uses are completely fine. The handful of edge cases above account for almost all the harm — avoid them and you'll be a responsible user of this medium.