# Generate Video

## Summary

The Generate Video Node is the primary tool for creating videos using AI models in Workflows. It supports both text-to-video and image-to-video generation, giving you the flexibility to produce video content from a written description, a reference image, or a combination of both. You can choose from a wide range of AI models, each with its own strengths in style, motion quality, and output fidelity, and fine-tune settings like duration, resolution, aspect ratio, and more.

Each AI model has different available parameters and supported settings, so it's worth experimenting with the options to find the combination that best fits your creative vision.

***

### How to Use

1. Choose Your Model:
   * Select an AI video model from the Model dropdown. Each model offers different strengths—some excel at cinematic motion, others at realistic physics or fast iteration. See the Video Models page for a full comparison.
2. Set Your Input:
   * Text-to-Video: Connect a Prompt or AI Copilot node and describe the scene you want (e.g., "A drone shot flying over a misty mountain range at sunrise").
   * Image-to-Video: Connect an image from an Import, Generate Image, or Edit Image node to use as the first frame or visual reference, and add a prompt to describe the desired motion.
3. Configure Settings:
   * Adjust Duration, Resolution, Aspect Ratio, and other parameters to match your output requirements.
4. Generate:
   * Click Run, and the AI model will process your inputs and produce a video based on your settings.
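Although the node is configured entirely in the Workflows UI, it can help to picture its inputs as plain data. The sketch below mirrors the four steps above as a Python function; the field names and model name are illustrative only, not an actual Workflows API:

```python
# Hypothetical sketch of the Generate Video node's inputs as plain data.
# Field and model names mirror the UI settings; this is not a real Workflows API.

def build_generate_video_request(model, prompt, image=None, **settings):
    """Assemble a request mirroring the node's inputs.

    - model: chosen from the Model dropdown
    - prompt: text from a connected Prompt or AI Copilot node
    - image: optional reference frame (switches the node to image-to-video)
    - settings: duration, resolution, aspect_ratio, etc.
    """
    request = {
        "model": model,
        "prompt": prompt,
        "mode": "image-to-video" if image is not None else "text-to-video",
        **settings,
    }
    if image is not None:
        request["image"] = image
    return request

# Text-to-video example (step 2's drone-shot prompt):
req = build_generate_video_request(
    model="example-model",
    prompt="A drone shot flying over a misty mountain range at sunrise",
    duration="5s",
    resolution="1080p",
    aspect_ratio="16:9",
)
```

Note that connecting an image is the only thing that distinguishes the two modes: the prompt is used in both, describing the scene in text-to-video and the desired motion in image-to-video.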

***

### Choosing the Right Settings

The settings below are the most common across models. Some models may offer additional parameters or omit certain options; refer to the model's own documentation for specifics.

| Setting        | Type                                  | Impact on Output                                                                                                                              |
| -------------- | ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| Model          | Dropdown                              | Selects the AI model used for generation. Each model has unique characteristics in motion quality, realism, speed, and supported features.    |
| Duration       | Dropdown (e.g., 5s, 10s)              | Sets the length of the generated video. Available durations vary by model.                                                                    |
| Resolution     | Dropdown (e.g., 720p, 1080p)          | Determines the output video resolution. Higher resolutions produce sharper results but may take longer to generate.                           |
| Aspect Ratio   | Dropdown (16:9, 9:16, 1:1, 4:3, etc.) | Defines the video frame dimensions. Choose 16:9 for widescreen, 9:16 for vertical/mobile, or 1:1 for square formats.                          |
| Generate Audio | Checkbox (default: enabled)           | When enabled, the AI generates a matching audio track alongside the video. Supported on select models.                                        |
| Camera Fixed   | Checkbox                              | When enabled, locks the camera in place so there is no camera movement in the generated video. Useful for static scenes or product showcases. |
| Seed           | Number Input                          | A fixed number for reproducible results. Use the same seed with identical settings to get consistent outputs across runs.                     |
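The Seed setting behaves like seeding any pseudo-random generator: the same seed with identical settings replays the same sampling path, while a different seed produces a different result. A generic Python illustration of that idea (this is not the actual generation code, just the principle):

```python
import random

def sample_noise(seed, n=4):
    # Seeding the RNG makes every "random" draw fully reproducible.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

run_1 = sample_noise(seed=42)
run_2 = sample_noise(seed=42)   # identical seed and settings
run_3 = sample_noise(seed=7)    # different seed

assert run_1 == run_2           # same seed -> identical output
assert run_1 != run_3           # different seed -> different output
```

This is why reusing a seed is useful when iterating: you can change one setting at a time and know that any difference in the output comes from that setting, not from random variation.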

***

### Input Modes

#### Text-to-Video

Describe a scene, action, or environment in a connected Prompt node, and the AI generates a corresponding video from scratch. The more specific your description, the more closely the output will match your vision.

**Example prompts:**

* "A slow-motion close-up of coffee being poured into a ceramic mug, steam rising, warm morning light"
* "A futuristic city at night with flying cars, neon signs reflecting on wet streets, cinematic drone shot"
* "A golden retriever running through a field of wildflowers, shot from a low angle, shallow depth of field"

#### Image-to-Video

Connect a still image as a reference and describe the motion you want applied. The AI uses the image as the visual foundation and animates it based on your prompt.

**Example setups:**

* **Input:** Product photo of a sneaker → **Prompt:** "Slow 360-degree rotation on a white background with soft shadows"
* **Input:** Landscape illustration → **Prompt:** "Gentle wind blowing through the trees, clouds drifting, birds flying across the sky"
* **Input:** Portrait photo → **Prompt:** "Subject slowly smiles and turns head to the right, natural movement"

***

### Sample Use Cases

#### Social Media Video Ads

Generate short, attention-grabbing video ads from a single product image and a descriptive prompt. Set the aspect ratio to 9:16 for Stories/Reels or 1:1 for feed posts, and enable Generate Audio for a complete ready-to-post asset.
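When planning assets for different placements, it helps to know the pixel dimensions a resolution and aspect ratio combination implies. The helper below is an illustrative sketch, not a Workflows function, and it assumes the usual convention that a label like "1080p" names the shorter side of the frame:

```python
def frame_size(resolution_label, aspect_ratio):
    """Compute (width, height) in pixels from a resolution label like '1080p'
    and an aspect ratio like '9:16'. Illustrative helper, not a Workflows API.
    Assumes the label gives the frame's shorter side."""
    short_side = int(resolution_label.rstrip("p"))          # "1080p" -> 1080
    w, h = (int(x) for x in aspect_ratio.split(":"))
    if w >= h:
        # Landscape or square: the label is the height.
        return (short_side * w // h, short_side)
    # Portrait: the label is the width.
    return (short_side, short_side * h // w)

print(frame_size("1080p", "16:9"))   # widescreen -> (1920, 1080)
print(frame_size("1080p", "9:16"))   # Stories/Reels -> (1080, 1920)
print(frame_size("1080p", "1:1"))    # square feed post -> (1080, 1080)
```

So a 9:16 clip at 1080p is 1080x1920, the standard vertical format for Stories and Reels.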

#### Animated Concept Art

Bring a static illustration or concept art piece to life by connecting it to the Generate Video node and describing the motion—camera pans, particle effects, character movement—to create a cinematic preview of your design.

#### Explainer and Presentation Videos

Use text-to-video to quickly generate scene-by-scene clips for explainer videos or pitch decks. Combine multiple Generate Video nodes in a workflow, each with a different prompt, to build a full sequence.

#### Music Video Visualizations

Generate stylized, abstract, or cinematic video clips to accompany music tracks. Experiment with different models and prompts for each section of the song, then chain clips together in your editing workflow.

***

### Video Models

Visit [Video Models](https://help.imagine.art/workflows/understanding-nodes/video-nodes/video-models) to explore all available models, compare their supported settings, and find the one that fits your project.
