
How to get specific colors in Midjourney v7 — a practical, up-to-date guide

2025-09-15 anna

Midjourney v7 sharpened control over text, image references, and style transfer — and that makes color control (what designers call palette prompting) far more reliable than it used to be. Version 7 shipped with improvements to style references and image handling, plus new workflow features (Draft Mode, Omni Reference, refined style adoption) that change how you should approach color-first prompts.

Midjourney v7 gives designers and creators much tighter control over color than earlier versions — when you know the right toolkit: image references, palette prompting, careful language, and a handful of parameters. This guide (with examples and ready-to-copy prompts) explains how to push Midjourney toward very specific colors, how palette prompting works in v7, which parameters matter most, and practical troubleshooting strategies so you get predictable results.

What changed in Midjourney v7 that affects color control?

Midjourney v7 (released and rolled out as the default in 2025) improves prompt interpretation, image fidelity, and compatibility with the style-reference workflows many artists used in earlier versions. Two changes matter most for color work:

  • Better prompt-to-pixel mapping — V7 reads and applies descriptive text and reference inputs more precisely, so words like “muted teal” or “#0A7373” are likelier to influence final pigments the way you intend.
  • Full compatibility with existing style-reference (sref) and palette techniques — methods that use a small image showing the exact colors you want (a palette image or “sref”) continue to work and often produce more consistent color outcomes in V7.

Midjourney v7 intentionally improved fidelity and “prompt mapping” — meaning prompts (and references) translate to outputs more predictably than before. That’s good news for color control: v7 responds better to style/image references and the new Omni Reference system.

The most useful color-related features you should know:

  • Style Reference (--sref) + style-weight (--sw) — use a palette image (or an SREF code) to transfer color, tone, textures and lighting as a style rather than literal content. --sw adjusts how strongly that style is applied (0–1000; default 100).
  • Image prompt + image-weight (--iw) — upload an image (palette or scene) at the start of your prompt and use --iw (0–3 in v7) to set how much the image influences the result (default 1). This is great when you want the palette to guide the whole composition.
  • Omni-Reference (--oref) + omni-weight (--ow) — a new v7 feature to “put THIS in my image.” Use --oref <imageURL> to anchor objects/colors and --ow (1–1000; default 100) to dial strength. Omni is best when you need a specific object, character or color treatment preserved across multiple outputs.
  • Style Reference Codes (SREF codes) — shorthand numeric style IDs you can reuse in prompts (--sref 123456789). These make repeating a color/style easier once you’ve found a good code.
  • --stylize (--s) and --exp — --stylize controls how “creative” vs. literal the model is (0–1000). --exp (0–100) is a newer experimental aesthetic parameter that can increase detail and change tonal mapping; use small values when you need controlled colors. Combine them carefully (high --stylize or --exp can distort literal colors).
  • Raw Mode (--raw) — reduces Midjourney’s automatic style bias, often producing more literal, photo-like color responses if you describe colors explicitly.

(Parameters always go at the end of your prompt — put style/image references in the right place: image URLs at the start to act as content prompts; --sref codes or --sref <imageURL> after the text to apply style.)
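The ordering rule above can be encoded as a small helper. This is an illustrative sketch only (the function and its structure are my own, not a Midjourney or CometAPI API): content image URLs lead, descriptive text follows, and parameters trail.

```python
def build_prompt(text, image_urls=None, params=None):
    """Assemble a Midjourney-style prompt string (hypothetical helper):
    content image URLs first, then the text, then parameters at the end."""
    parts = []
    if image_urls:
        parts.extend(image_urls)  # image prompts act as content, so they lead
    parts.append(text)            # descriptive text in the middle
    for flag, value in (params or {}).items():
        # valueless flags (e.g. raw) can be passed with an empty string
        parts.append(f"--{flag} {value}".rstrip())
    return " ".join(parts)

prompt = build_prompt(
    "minimalist product hero shot, soft studio lighting",
    image_urls=["https://example.com/my-palette.png"],
    params={"sref": "https://example.com/style.png", "sw": 250, "v": 7},
)
```

Because parameters are appended last, a prompt built this way always keeps `--sref`, `--sw`, and `--v` after the text, matching the placement rule above.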

How can I force Midjourney 7 to use an exact palette?

There are three reliable patterns — pick one depending on how literal you need the match to be.

Pattern A — Palette image as a Style Reference (best balance of control + flexibility)

  1. Create a palette image: an image file with swatches laid out clearly (horizontal/vertical blocks). Tools like Coolors, Figma or Photoshop export simple swatch images.
  2. Upload the palette to Midjourney (web lightbox or Discord or host the image URL).
  3. Use --sref (style reference) and adjust --sw to taste. Example:
/imagine prompt: minimalist product hero shot, strong brand focus, negative space, soft studio lighting --sref https://example.com/my-palette.png --sw 250 --v 7 --ar 4:5 --s 40
  • --sref tells MJ to copy the palette’s overall look (hues, tone, contrast).
  • --sw 250 increases style influence (100 is default).
  • Keep --s (stylize) moderate (20–200) for literal color fidelity.

Pattern B — Palette image as an image prompt with image weight (most literal)

Place the palette image URL at the very start (the image prompt) and use --iw higher (max ~3) when you want the colors to dominate:

/imagine prompt: https://example.com/my-palette.png product mockup on colored backdrop, product colors matching swatches --iw 2.5 --v 7 --s 10 --raw --ar 16:9
  • --iw 2.5 increases the palette image influence.
  • --raw + low stylize keeps Midjourney literal so colors more closely match.

Pattern C — Omni-Reference (--oref) for object color locking (best for repeated character/object colors)

If you need a specific object or character to keep exact colors across many scenes (for brand mascots, product images), use --oref + --ow:

/imagine prompt: cinematic marketplace scene featuring our mascot holding a lantern, warm evening light --oref https://example.com/mascot_palette_object.png --ow 350 --v 7 --s 30
  • --oref pins the object and color characteristics; --ow 350 makes them strongly preserved.

How should I write prompts to describe colors? (words, hex codes, weights)

Use human color names first (and synonyms)

Text prompts that include exact color names and modifiers are useful for quick runs. Midjourney understands color names reliably: “ultramarine,” “teal,” “sage,” “burnt sienna,” “muted olive.” Use adjectives: “matte teal,” “desaturated sage,” “high-saturation cobalt accents.” This gives good control but is less exact than hex or palette images when you need pixel-perfect color.

Example: vintage poster, desaturated olive background, cobalt blue highlights, warm amber accents, soft grain

Hex codes: can I use #RRGGBB?

You can add hex codes (e.g., #0A7373) or CSS-like notation at the end of a prompt to push toward particular colors. V7 often honors these cues, especially when combined with a palette image or a clear instruction like “use these hex colors”. Treat hex codes as a final reinforcement: place them at the end of the textual prompt and still pair them with --sref.

Example:
art deco poster, geometric shapes, use these colors: #0A7373 #EDAA25 #B7BF99, high contrast, grain texture --ar 2:3 --q 2

Use prompt weights for color prominence

When you have multiple color instructions, split them with :: and weight them:

"scene: beach at dusk :: teal sky::2 warm coral accents::1 muted sand::0.8 --v 7 --sref https://... --sw 180"

Weights tell MJ which color ideas to prioritize. Multi-prompt weights work well with --sref, so the reference palette shapes the overall color hierarchy.
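The multi-prompt weight syntax can be generated from (text, weight) pairs. A minimal sketch, assuming the `text::weight` convention shown above (the helper itself is hypothetical, not an official API):

```python
def weighted_prompt(segments):
    """Join (text, weight) pairs into Midjourney multi-prompt syntax.
    A weight of 1 is the implicit default; we write it out for clarity."""
    return " ".join(f"{text}::{weight}" for text, weight in segments)

p = weighted_prompt([
    ("beach at dusk, teal sky", 2),
    ("warm coral accents", 1),
    ("muted sand", 0.8),
])
# p == "beach at dusk, teal sky::2 warm coral accents::1 muted sand::0.8"
```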

What parameters matter most for color accuracy and how should I set them?

Several Midjourney parameters influence how strictly the model follows your color directions. Below are the most relevant ones and recommended values for color-critical work.

--stylize (how literal vs. artistic)

  • What it does: Controls how strongly Midjourney applies artistic flair versus literal prompt interpretation. Default is 100; range is roughly 0–1000. Lower values make the model more literal (better for strict color control).
  • Recommendation: Use --stylize 0–50 for strict color fidelity. If you want an artistic interpretation that still broadly follows colors, --stylize 50–150 is a good compromise.

--quality / --q (render investment)

  • What it does: Higher --q values produce more detailed renders which can preserve color nuance, but cost more.
  • Recommendation: Use --q 1 or --q 2 for final color-critical renders; --q 0.5 or Draft Mode for fast exploration.

--chaos (variety vs. adherence)

  • What it does: Increases the variability of generated options. High --chaos can drift away from your color instructions.
  • Recommendation: Keep --chaos 0–20 if you need consistent color output. Use higher values only when exploring broad stylistic alternatives.

--seed (reproducibility)

  • What it does: Fixes randomness to allow you to replicate an exact look across reruns.
  • Recommendation: When you find a color result you like, save the --seed and reuse it to iterate while preserving color choices.

--no, --stop, and --ar

  • --no (exclude elements/colors): can be used to remove undesired color influences (e.g., --no green if green bleeds into highlights).
  • --stop can halt generation early to avoid over-rendering (helpful if color washes or re-grading happens late in the render).
  • --ar sets aspect ratio and sometimes affects how color is distributed across a composition (e.g., panoramic gradients vs. single-color backgrounds).
  • Recommendation: Use --no to ban stray colors, --stop around 70–85 for painterly color if needed, and set --ar to fit the composition that best showcases your palette.
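The recommendations above can be collected into a small pre-flight check before a color-critical run. The ranges come from this guide's advice, not from Midjourney itself, and the checker is purely an illustrative convenience:

```python
# Color-safe ranges for color-critical renders, per the guidance above.
COLOR_SAFE = {"stylize": (0, 50), "chaos": (0, 20), "q": (1, 2)}

def check_params(params):
    """Return a list of warnings for parameters outside color-safe ranges."""
    warnings = []
    for name, value in params.items():
        if name in COLOR_SAFE:
            lo, hi = COLOR_SAFE[name]
            if not (lo <= value <= hi):
                warnings.append(
                    f"--{name} {value} is outside the color-safe range {lo}-{hi}"
                )
    return warnings

check_params({"stylize": 400, "chaos": 5})
# flags --stylize 400 as risky for literal color fidelity
```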

Palette Prompting with Midjourney 7 — step-by-step practical workflow

This is a replicable recipe I use when I need a 3–5 color brand palette applied consistently across images.

Step 1 — design and export a clean palette image

Create a 600–1200 px wide image with 4–6 horizontal swatches (no text). Constrain values and lighting so each swatch is a flat color. Export PNG or JPG and host it (or upload to Midjourney web lightbox).
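If you want to script Step 1 rather than use a design tool, flat swatches can be written with the standard library alone. This sketch emits a binary PPM (a lossless format, so every pixel is exactly the RGB you specify); convert it to PNG before uploading. The filename and sizes are placeholders:

```python
def hex_to_rgb(h):
    """Convert '#RRGGBB' to an (r, g, b) tuple of ints."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def write_palette_ppm(hex_colors, path, swatch_w=200, height=200):
    """Write horizontal flat-color swatches as a binary PPM (P6) file."""
    width = swatch_w * len(hex_colors)
    rgbs = [hex_to_rgb(c) for c in hex_colors]
    with open(path, "wb") as f:
        f.write(f"P6 {width} {height} 255\n".encode())
        # one scanline: each swatch's RGB repeated swatch_w times
        row = b"".join(bytes(rgb) for rgb in rgbs for _ in range(swatch_w))
        f.write(row * height)  # every row is identical: flat swatches

write_palette_ppm(["#0A7373", "#EDAA25", "#B7BF99"], "palette.ppm")
```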

Step 2 — decide role: style vs content

  • If palette should influence look but not copy any object → use --sref.
  • If palette must dominate every pixel → use image prompt + --iw.
  • If palette needs to be tied to an object/character across scenes → use --oref.

Step 3 — pick initial parameter set (starter)

  • /imagine prompt: [your textual scene] --sref <paletteURL> --sw 180 --v 7 --s 40 --exp 10 --q 2 --ar 3:2

Adjust --sw up or down; lower --s if colors drift.

Explanation:

  • --sref <PALETTE_URL> tells MJ7 where to get the palette.
  • --sw 180 uses a strong style weight so the palette dominates color choices.
  • --s 40 forces literal interpretation (less artistic “recoloring” freedom).
  • --q 2 improves render quality for subtle gradients and color fidelity.
  • --ar 3:2 matches your intended format.

Step 4 — iterate with image weights and stylize

  • If colors are too weak: increase --sw (style weight) or --iw (if using an image prompt). Start at --sw 100 (default) and try --sw 200, --sw 400, --sw 700. Low --sw = palette hints; high --sw = almost exact palette dominance. Save seeds for reproducibility.
  • If colors are too creative/different: reduce --s and --exp. Try --raw for extra literalness.
  • If a background or prop occasionally introduces competing hues, add --no rules (e.g., --no neon, --no blue shadows)—but use sparingly, as excessive --no can degrade coherence.

Step 5 — Repeat and refine

Generate several variations (--repeat or multiple runs) and choose the variant closest to your target. Use --seed to lock near-matches and make micro-tweaks.

Step 6 — lock & batch

When you find a combo you like, either reuse the same --sref (or SREF code) or save the palette upload and run a batch of variations keeping --sref + --sw constant to produce consistent series. Use the --seed parameter to control randomness between runs if you want predictable variation.
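The lock-and-batch step can be sketched as a generator of prompt variants: the same --sref and --sw for consistency, one --seed per run for controlled variation. The prompt text and URL are placeholders:

```python
def batch_prompts(base, sref, sw, seeds):
    """Build a consistent series of prompts: fixed --sref/--sw, varying --seed."""
    return [f"{base} --sref {sref} --sw {sw} --seed {s} --v 7" for s in seeds]

series = batch_prompts(
    "packaging mockup for premium tea",
    "https://example.com/palette.png",
    500,
    [101, 102, 103],
)
```

Each prompt in the series shares the palette signals, so the outputs stay on-brand while the seed gives you predictable variation between runs.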

How do you craft effective prompts for palette prompting? (examples)

Below are tested, ready-to-use prompt templates for common needs. Replace <PALETTE_URL> with your uploaded palette link; replace subject terms as needed.

Brand/Packaging mockup (photoreal)

/imagine prompt: packaging mockup for premium tea, matte box on reflective surface, product shot, shallow depth of field --sref <PALETTE_URL> --sw 500 --s 15 --q 2 --ar 1:1 --seed 12345

Why: strong --sw to force brand palette; low --s for literal colors; --q 2 for detail.

Editorial illustration (flat colors)

/imagine prompt: editorial illustration, flat graphic shapes, geometric composition, bold negative space --sref <PALETTE_URL> --sw 300 --s 40 --q 1 --ar 3:2

Why: moderate --sw to keep color story but allow stylization and composition.

UI / App mockup (precise color UX)

/imagine prompt: mobile app UI mockup, clean layout, large hero, placeholder icons, material design vibes --sref <PALETTE_URL> --sw 600 --s 10 --q 2 --ar 9:16 --no gradients --no textured background

Why: --sw 600 gives strict palette adherence; --s 10 keeps literal color mapping; --no gradients prevents gradient introduction if you want flat swatches.

Product studio shot where hex accuracy matters

/imagine prompt: studio product shot of ceramic bowl, direct front view, softbox lighting, neutral background --sref <PALETTE_URL> --sw 800 --s 5 --q 2 --ar 4:3 --seed 2025

Why: Very high --sw + very low --s = the reference palette is dominant and the model minimizes creative color deviations.

What are the common pitfalls?

Pitfall: Colors look different in different runs

  • Cause: randomness and high --chaos or high --stylize.
  • Fix: lower --chaos, use --seed to reproduce, and reduce --stylize to make the model more literal.

Pitfall: Colors look washed or desaturated

  • Cause: lighting or finish adjectives (e.g., “soft pastel”, “filmic fade”) override color intensity.
  • Fix: explicitly request “vivid”, “high saturation”, or “saturated pigments” and use --q 2 for more nuanced color gradations.

Pitfall: Palette appears in some elements but not others

  • Cause: ambiguous instructions about where palette colors should be applied.
  • Fix: explicitly assign palette colors to parts of the composition, e.g., “background = slate teal, subject clothing = warm ivory, accents = rusty orange.”

Pitfall: Hex codes ignored or misinterpreted

  • Cause: hex codes may be lower priority than strong style cues or earlier image references.
  • Fix: place hex codes at the end as “use these colors” and use a palette image. Combine text, hex, and image for best results. Community experiments show hex codes often work best as reinforcement rather than stand-alone commands.
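The fix above (hex codes as end-of-text reinforcement) is a simple string transform. A hypothetical helper, shown only to make the placement rule concrete:

```python
def reinforce_hex(text, hex_codes):
    """Append hex codes as an explicit 'use these colors' cue at the end of
    the textual prompt, before any parameters are added."""
    return f"{text}, use these colors: {' '.join(hex_codes)}"

reinforce_hex("art deco poster, geometric shapes", ["#0A7373", "#EDAA25"])
```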

FAQs

1. How do multiple style references or mixing palettes work?

You can provide multiple images to --sref (space-separated). MJ7 will blend influences; weight the balance by adjusting --sw (global) and by experimenting with the order of images (some community tools and guides suggest ordering matters for subtle effects). For stronger control, create a single composite palette image that already contains a prioritized arrangement of swatches.

If you use --sref random, MJ will emit a numeric sref code you can reuse; combining codes is also possible and --sw will still affect strength.

2. Why doesn’t Midjourney always hit the exact hex I asked for?

Midjourney is not a color-management pipeline like a design tool; it’s a generative model trained on visual aesthetics. There are two practical reasons for mismatch:

  1. Interpretation layer: MJ maps textual color names to its learned visual distributions; “navy” may mean different RGBs in different contexts.
  2. Tone mapping & lighting: Scene lighting, surface material and post-processing influence perceived color (a “blue shirt” in warm tungsten light will look different).

3. Can I use Midjourney’s style-reference (sref) codes or Omni Reference for palettes? How?

Yes. Midjourney’s style-reference tools (sref codes) and the newer Omni Reference system function as ways to feed images or a set of reference IDs to the model. In V7, these systems remain compatible and are often used for palette prompting:

  • sref (style reference) / Omni Reference: Upload a palette image (or several images) and reference them in the prompt: image URLs at the start act as content prompts, while --sref <URL> after the text applies them as style. You can combine multiple references (an art style + a color palette + a texture image) to get nuanced results. V7’s improved interpretation means the palette image is more reliably incorporated into color assignment across objects.

Practical tip: If you want Midjourney to prioritize color above style, put the palette first, then the style reference: palette.png style-ref.png prompt text --s 10 (order matters: earlier references often get higher weight). Keep stylize low so the references, not the model’s aesthetic bias, drive the colors.

Getting Started

CometAPI is a unified API platform that aggregates over 500 AI models from leading providers—such as OpenAI’s GPT series, Google’s Gemini, Anthropic’s Claude, Midjourney, Suno, and more—into a single, developer-friendly interface. By offering consistent authentication, request formatting, and response handling, CometAPI dramatically simplifies the integration of AI capabilities into your applications. Whether you’re building chatbots, image generators, music composers, or data‐driven analytics pipelines, CometAPI lets you iterate faster, control costs, and remain vendor-agnostic—all while tapping into the latest breakthroughs across the AI ecosystem.

CometAPI offers pricing far below the official rates to help you integrate the Midjourney API and Midjourney Video API; you are welcome to register and experience CometAPI. To begin, explore the model’s capabilities in the Playground and consult the API guide for detailed instructions. Before accessing the API, make sure you have logged in to CometAPI and obtained an API key.

Important Prerequisite: Before using Midjourney V7 through CometAPI, sign up for free access and review the docs. Getting started with Midjourney V7 is simple: add the --v 7 parameter at the end of your prompt, and CometAPI will use the latest V7 model to generate your image.

Conclusion

Getting precise colors in Midjourney v7 is about stacking reliable signals: give the model an image palette, text reinforcement (names/hex codes), and parameter constraints (low stylize, low chaos, fixed seed). V7’s improved prompt fidelity makes these techniques more effective than ever, but the model still balances style, lighting, and texture cues. Use the checklist above, iterate conservatively, and treat Midjourney as a collaborator — precise inputs yield predictable color outcomes.

