Free Guide

Higgsfield MCP. The 5-Minute Setup.

One terminal command connects Claude Code or Codex to 30+ AI video and image models. No new subscriptions, no API keys, no glue code. The full walkthrough is below — start with the video.

52 SECONDS · NARRATED · CMC × HIGGSFIELD
Why this matters

One connection. Every model worth using.

Higgsfield's MCP server is the first time we have seen a single integration give an AI agent native access to dozens of premium video and image models — Seedance, Kling, VEO, Sora, Flux, plus the rest — with no API key plumbing and no per-tool subscription stack.

For dental practices and creators running content and ads at scale, that means one place to generate everything, controlled by the same Claude Code or Codex window you already work in. Below is the exact setup.

30+
image and video models accessible from a single MCP connection
$0
in per-tool subscriptions while you start (free Higgsfield credits)
5 min
from zero to your first AI video generated through Claude
What you'll need

Three things. That is it.

Claude Code or Codex installed locally. The CLI agent is what actually calls the MCP tools.
Five minutes. The whole walkthrough, top to bottom.
A free Higgsfield account. Sign up and you start with credits — no card required.
01

Install your AI agent

If you do not already have it, install Claude Code from Anthropic. The same MCP setup also works in Codex and most other MCP-compatible agents.

install
$ npm install -g @anthropic-ai/claude-code

On macOS, Homebrew works too: brew install anthropic-claude-code. VS Code users can install the Claude Code extension and skip the CLI step.
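Before moving on, it is worth a ten-second sanity check that the CLI actually landed on your PATH. A minimal sketch, guarded so it runs cleanly even on a machine where the install has not happened yet:

```shell
# Quick install check: print the CLI version if the binary is present.
# Guarded so the sketch runs cleanly even where claude is absent.
if command -v claude >/dev/null 2>&1; then
  check="$(claude --version)"
else
  check="claude not found on PATH"
fi
echo "$check"
```

If you see a version number, step 01 is done.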

02

Sign up for Higgsfield

Go to higgsfield.ai and create an account. The free tier comes with credits, so you can generate your first videos without paying a thing. You will not be asked for a card at signup.

Once you are signed in, you are done with this step — the MCP handles authentication itself in the next two steps.

03

Add the MCP server

One command. This registers the Higgsfield MCP server with Claude Code over HTTP.

register the server
$ claude mcp add --transport http higgsfield https://mcp.higgsfield.ai/mcp

Codex users: same idea, slightly different syntax — codex mcp add higgsfield --url https://mcp.higgsfield.ai/mcp. Check your agent's MCP docs for exact flags.

04

Authenticate (OAuth)

The first time Claude tries to use a Higgsfield tool, it will print an authentication URL and ask for an auth code. Three small steps:

A. Open the URL Claude prints in your browser.
B. Approve, copy the code, paste it back into Claude.
C. You are in. The token persists across sessions.
05

Verify the connection

Run this and confirm Higgsfield shows up with a green check.

verify
$ claude mcp list
higgsfield: https://mcp.higgsfield.ai/mcp - ✓ Connected

Not seeing the checkmark? Re-run step 03, then redo the OAuth in step 04. The most common cause is closing the auth page before pasting the code.

06

Generate your first video

Just ask Claude. Real prompt, real model, real video file at the end.

You
Generate a 5-second cinematic video of a city at sunset, vertical 9:16. Use Seedance 2.0 fast.

Claude will pick the right Higgsfield tool, submit the job, poll status, and hand you a downloadable URL when the render finishes — usually 30 to 90 seconds. From there you can download, edit, or chain it into your next prompt.
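If you would rather run that same request from a script than an interactive session, Claude Code's `-p` / `--print` flag runs a single prompt non-interactively and exits. A sketch — the `echo` only displays the command so it is safe to run anywhere; drop the `echo` to actually submit, which spends credits:

```shell
# One-shot, scriptable form of the same request.
# `claude -p` runs one prompt non-interactively and prints the reply.
prompt='Generate a 5-second cinematic video of a city at sunset, vertical 9:16. Use Seedance 2.0 fast.'

# Shown via echo so the sketch is safe to run anywhere;
# remove the echo to actually submit the job (this spends credits).
echo claude -p "$prompt"
```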

What this unlocks

The workflows you can now build.

🎬
Autopilot social
Claude writes the script, Higgsfield generates the video, Blotato posts it. End-to-end, no human in the loop after kickoff.
📈
Ad creative at scale
Test 100+ video and image variations per client per week. Same brief, dozens of model+style permutations, real performance data.
🔌
One agent, every tool
Bring your own MCPs alongside Higgsfield. Claude becomes the conductor — research, copy, design, video, posting, all in one.
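The "ad creative at scale" idea above is mostly prompt bookkeeping. A minimal sketch of the permutation step in plain bash — the brief, style list, and ratio list are made-up placeholders, so swap in your own:

```shell
#!/usr/bin/env bash
# Expand one brief into style x aspect-ratio prompt permutations.
# The brief, styles, and ratios below are illustrative placeholders.
brief="a dentist explaining whitening results"
styles=("cinematic" "documentary" "handheld UGC")
ratios=("9:16" "16:9" "1:1")

permutations=()
for style in "${styles[@]}"; do
  for ratio in "${ratios[@]}"; do
    permutations+=("$style video of $brief, $ratio")
  done
done

# Each line becomes one generation request for the agent.
printf '%s\n' "${permutations[@]}"
```

Three styles times three ratios is already nine variations per brief; add a model list and you are at the "dozens of permutations" scale without writing prompts by hand.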
Pro tips

Four things that will save you a week.

01 · Pro plan caps at 3 concurrent jobs.
Submit batches in groups of three, then poll. Fire 10 at once and everything beyond the third job fails silently.
02 · Default to Seedance 2.0 fast.
Text-only, ~30-second renders, supports 9:16 / 16:9 / 1:1. The other models are great too, but they are slower or need a reference image to start.
03 · Spell brand names correctly in on-screen text.
Auto-transcribers will write "Cloud Code" or "C-Dance" instead of "Claude Code" and "Seedance". The audio is fine — only the captions are wrong. Always double-check before publishing.
04 · IP-flagged prompts? Just reroll.
If a job comes back ip_detected, the model thought your prompt referenced protected content. Reframe and resubmit — the rejected job does not spend credits.
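Tip 01 above is easy to script. A sketch of the batch-of-three pattern in plain bash — the `echo` stands in for whatever submit command you actually use (for example a `claude -p` call), and the prompt list is made up:

```shell
#!/usr/bin/env bash
# Submit jobs in batches of three to respect the 3-concurrent-job cap.
prompts=(
  "city at sunset, 9:16"
  "beach at dawn, 9:16"
  "forest in fog, 9:16"
  "desert at noon, 9:16"
  "harbor at night, 9:16"
)
batch_size=3

for ((i = 0; i < ${#prompts[@]}; i += batch_size)); do
  for p in "${prompts[@]:i:batch_size}"; do
    # Placeholder submit: swap the echo for your real call, e.g. claude -p "... $p"
    echo "submitting: $p" &
  done
  wait   # block until this batch's submits return before starting the next
done
```

The `wait` after each inner loop is what keeps you at three in flight at a time.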

That's it. No catch. Thanks for commenting HIGGS.

I'm Blake. I post AI workflow guides for dental practices and content creators. Follow @closingmorecases for the next one.