Seedance ComfyUI Workflow Complete Guide
What is ComfyUI for Seedance?
ComfyUI is a node-based interface for advanced AI workflows that gives you granular control over Seedance video generation. Unlike the standard web interface, ComfyUI allows you to:
- Chain multiple generation steps
- Apply custom preprocessing
- Fine-tune individual parameters
- Automate batch processing
- Create reusable workflow templates
Prerequisites
Before starting, make sure you have:
- ComfyUI installed (version 0.1.0 or higher)
- Seedance API access (coming soon)
- Basic understanding of node-based interfaces
- 8GB+ VRAM recommended for local processing
Installing Seedance ComfyUI Node
Method 1: Via ComfyUI Manager
- Open ComfyUI Manager
- Search for "Seedance"
- Click "Install" on the Seedance node package
- Restart ComfyUI
Method 2: Manual Installation
cd ComfyUI/custom_nodes
git clone https://github.com/seedance/comfyui-seedance
cd comfyui-seedance
pip install -r requirements.txt
Basic Seedance Workflow
Step 1: Text-to-Video Node Setup
Create a basic text-to-video workflow:
- Add Seedance Text2Video node
- Add Load Prompt node
- Add Preview Video node
- Connect nodes: Prompt → Text2Video → Preview
Example Prompt:
A cyberpunk dancer performing in a neon-lit alley,
rain falling, dramatic camera angles, vibrant colors,
cinematic lighting, 8k quality
Step 2: Parameter Configuration
Key parameters to adjust:
- Steps: 20-50 (higher = better quality, slower)
- CFG Scale: 7-12 (controls prompt adherence)
- Seed: -1 for random, set number for consistency
- Aspect Ratio: 16:9, 9:16, or 1:1
- Duration: 5-10 seconds
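If you prefer to set these from a script rather than the node UI, they map cleanly onto a plain settings dictionary. The sketch below is illustrative only (the key names are assumptions, not the node's actual field names), but it shows a handy pattern for the seed: resolve -1 to a concrete random value up front so you can log it and reuse it for a consistent re-run.
import random

# Illustrative settings dict; key names are assumptions, not official node fields.
settings = {
    "steps": 30,            # 20-50: higher = better quality, slower
    "cfg_scale": 8,         # 7-12: controls prompt adherence
    "seed": -1,             # -1 = random, fixed number = consistent results
    "aspect_ratio": "16:9", # 16:9, 9:16, or 1:1
    "duration": 5,          # seconds (5-10)
}

# Resolve -1 to a concrete seed so the exact run can be reproduced later.
if settings["seed"] == -1:
    settings["seed"] = random.randint(0, 2**32 - 1)
print(f"Using seed {settings['seed']}")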
Step 3: Advanced Settings
For professional results:
- Motion Strength: 0.5-0.8 (controls movement intensity)
- Temporal Consistency: 0.7-0.9 (reduces flickering)
- Audio Sync: Enable for music video mode
- Upscale Factor: 1.5x or 2x for higher resolution
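A quick note on Upscale Factor: it multiplies the base render resolution, so a 720p pass at 2x comes out at 1440p. A minimal sanity check:
# Illustrative only: how an upscale factor maps to output resolution.
base_w, base_h = 1280, 720      # 720p base render
upscale_factor = 2.0            # 1.5x or 2x for higher resolution

out_w, out_h = int(base_w * upscale_factor), int(base_h * upscale_factor)
print(f"{base_w}x{base_h} -> {out_w}x{out_h}")  # 1280x720 -> 2560x1440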
Advanced Workflows
Workflow 1: Character Consistency Pipeline
Create videos with the same character across multiple shots:
Character Reference → Image Encoder → Text2Video → Character Lock
- Upload reference image of your character
- Use Image Encoder to extract features
- Apply Character Lock to maintain appearance
- Generate multiple shots with consistent character
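As a rough sketch of how the pieces fit together, the snippet below walks through the same order of operations in Python. Every function here is a hypothetical placeholder standing in for the nodes above, not part of the comfyui-seedance package; the point is that one encoded reference plus a fixed seed is reused across every shot.
# Hypothetical placeholders for the nodes above; not a real comfyui-seedance API.

def encode_character(reference_image_path: str) -> dict:
    """Stand-in for Character Reference -> Image Encoder."""
    return {"reference": reference_image_path}

def generate_shot(prompt: str, character: dict, seed: int) -> dict:
    """Stand-in for Text2Video followed by Character Lock."""
    return {"prompt": prompt, "character": character, "seed": seed}

shot_prompts = [
    "character walking through a neon-lit alley, rain falling",
    "close-up of the character, dramatic lighting",
    "character dancing on a rooftop at night",
]

character = encode_character("reference/dancer.png")
# Reusing one seed plus the locked character features keeps appearance stable.
shots = [generate_shot(p, character, seed=12345) for p in shot_prompts]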
Workflow 2: Music Video Automation
Sync video generation with audio beats:
Audio File → Beat Detector → Scene Generator → Seedance → Compositor
- Load your music track
- Auto-detect beats and transitions
- Generate scenes matching rhythm
- Composite final video with audio
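Here is a minimal sketch of the beat-detection half of that pipeline. It uses librosa as a stand-in for the Beat Detector node (an assumption; the actual node may use a different detector), and it only builds a list of scene specs; feeding those specs into the Scene Generator and Seedance nodes is left out.
import librosa

# Detect beats in the track and plan one scene per beat interval,
# so transitions land on the rhythm.
y, sr = librosa.load("track.mp3")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

scenes = []
for start, end in zip(beat_times[:-1], beat_times[1:]):
    scenes.append({
        "prompt": "cyberpunk dancer, quick cut, neon lighting",
        "duration": float(end - start),
    })
print(f"Detected {len(beat_times)} beats -> {len(scenes)} scenes planned")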
Workflow 3: Batch Style Transfer
Apply different styles to the same base video:
Base Video → Style Extractor → [Style 1, Style 2, Style 3] → Batch Seedance
Generate multiple style variations simultaneously.
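Conceptually, the batch step is just a loop over styles. In the sketch below, apply_style is a hypothetical placeholder rather than a real comfyui-seedance function; it only shows the fan-out from one base clip to several style variations.
# Hypothetical sketch of the batch step: one base clip, one render per style.
styles = ["anime", "watercolor", "film noir"]

def apply_style(base_video_path: str, style: str) -> str:
    """Stand-in for Style Extractor -> Batch Seedance for a single style."""
    return f"{base_video_path.rsplit('.', 1)[0]}_{style.replace(' ', '_')}.mp4"

variations = [apply_style("base_clip.mp4", s) for s in styles]
print(variations)  # ['base_clip_anime.mp4', 'base_clip_watercolor.mp4', 'base_clip_film_noir.mp4']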
Pro Tips for ComfyUI
Tip 1: Use ControlNet for Precise Control
Add ControlNet nodes to guide composition:
- Depth ControlNet: Control camera perspective
- Pose ControlNet: Guide character movements
- Edge ControlNet: Define scene boundaries
Tip 2: Optimize Processing Speed
- Use Queue System for batch processing
- Enable Low VRAM Mode if needed
- Cache Embeddings for repeated prompts
- Process at lower resolution first, then upscale
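The "preview low, finish high" tip boils down to a two-pass loop: iterate on a cheap draft until the seed and prompt look right, then re-render the same seed at full quality. render() below is a hypothetical placeholder used only to show the pattern.
# Two-pass sketch: draft at low resolution, then re-run the winning seed at full quality.
def render(prompt: str, seed: int, resolution: str, steps: int) -> dict:
    """Hypothetical stand-in for a Seedance render call."""
    return {"prompt": prompt, "seed": seed, "resolution": resolution, "steps": steps}

prompt, seed = "cyberpunk dancer in a neon alley", 42

draft = render(prompt, seed, resolution="640x360", steps=20)   # fast preview pass
final = render(prompt, seed, resolution="1280x720", steps=40)  # same seed, full quality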
Tip 3: Create Reusable Templates
Save your workflows as templates:
- Configure your ideal settings
- Save as .json file
- Share with team or community
- Load template for consistent results
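ComfyUI can save and load workflows from its own menu, but if you manage templates from scripts, Python's standard json module is all you need. The file name and workflow contents below are just examples.
import json

# Example only: write a workflow dict to a .json template and load it back.
workflow = {"nodes": [], "links": []}  # whatever your configured graph exports

with open("seedance_basic.json", "w", encoding="utf-8") as f:
    json.dump(workflow, f, indent=2)

with open("seedance_basic.json", "r", encoding="utf-8") as f:
    template = json.load(f)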
Common Issues and Solutions
Issue: Video Flickering
Solution:
- Increase Temporal Consistency to 0.85+
- Reduce Motion Strength to 0.6
- Use higher Step count (40+)
- Enable Frame Interpolation node
Issue: Character Changes Between Frames
Solution:
- Use Character Reference node
- Lock seed value for consistency
- Increase CFG Scale to 9-10
- Add Character Consistency node
Issue: Slow Processing
Solution:
- Reduce video duration to 5 seconds
- Lower resolution to 720p
- Decrease Step count to 25
- Use GPU acceleration if available
Issue: Audio Not Syncing
Solution:
- Enable Audio-Visual mode explicitly
- Check audio file format (WAV/MP3)
- Adjust sync offset in settings
- Use Beat Alignment node
Example Workflows (Download)
Workflow 1: Basic Text-to-Video
{
  "nodes": [
    {"id": 1, "type": "LoadPrompt", "data": {...}},
    {"id": 2, "type": "SeedanceText2Video", "data": {...}},
    {"id": 3, "type": "PreviewVideo", "data": {...}}
  ]
}
Workflow 2: Image-to-Video with Style
Workflow 3: Music Video Generator
API Integration (Coming Soon)
When API access is available, you'll be able to:
from comfyui_seedance import SeedanceAPI
api = SeedanceAPI(api_key="your-key")
result = api.generate_video(
    prompt="cyberpunk dancer in neon city",
    duration=5,
    aspect_ratio="16:9",
    style="cinematic"
)
video_url = result.get_video_url()
Community Workflows
Join our Discord to access:
- 100+ community-created workflows
- Weekly workflow challenges
- Expert tips and tutorials
- Workflow marketplace
Next Steps
Now that you've mastered ComfyUI basics:
- Experiment with different node combinations
- Share your workflows with the community
- Join weekly workflow challenges
- Build custom nodes for specific needs
Related Resources
Ready to start creating? Try Seedance Free →
