# Motion Transfer

### Summary

The Motion Transfer Node captures motion from a reference video and applies it to a character image, generating a new video in which your character performs the exact movements from the reference. Connect a still character image and a reference video of someone dancing, walking, gesturing, or performing any action, and the AI transfers that motion onto your character with realistic body movement and natural physics.

This node requires two inputs:

* Character Image - the subject you want to animate.
* Reference Video - the motion source you want to transfer from.

{% hint style="info" %}
Choose a reference video with clear, well-defined movements. Avoid clips with heavy occlusion (objects blocking the person), rapid camera movement, or multiple people.
{% endhint %}
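Conceptually, motion transfer works in stages: estimate a pose for each reference frame, retarget those poses onto the character's proportions, and render one output frame per pose. The sketch below illustrates that mental model only; the function names (`extract_poses`, `retarget`, `render_frames`) are hypothetical and not part of any imagine.art API.

```python
# Hypothetical sketch of a motion-transfer pipeline. Function names are
# illustrative only -- they do not correspond to a real imagine.art API.

def extract_poses(reference_frames):
    """Estimate a skeletal pose for each frame of the reference video."""
    # A real system would run a pose-estimation model per frame.
    return [{"frame": i, "pose": f"pose_{i}"} for i, _ in enumerate(reference_frames)]

def retarget(poses, character_image):
    """Map each reference pose onto the character's body proportions."""
    return [{"character": character_image, **p} for p in poses]

def render_frames(retargeted):
    """Render one output frame per retargeted pose."""
    return [f"frame for {r['pose']}" for r in retargeted]

reference_frames = ["f0", "f1", "f2"]
frames = render_frames(retarget(extract_poses(reference_frames), "hero.png"))
print(len(frames))  # one output frame per reference frame
```

This is why occlusion, fast camera moves, or multiple people in the reference hurt results: they make the per-frame pose estimate unreliable, and every later stage inherits that error.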

***

### How to Use

1. Add the Node:
   * Click the Add (+) button and select Motion Transfer from the Video node category.
2. Connect a Character Image:
   * Link a character image via the Character Image input handle (marked in orange). This is the subject that will be animated with the transferred motion. Use a clear, well-lit image where the character's full body or relevant body parts are visible.
3. Connect a Reference Video:
   * Link a video via the Reference Video input handle (marked in green). This is the motion source—the AI will extract the movement from this video and map it onto your character.
4. Configure Settings:
   * Select your Model, adjust Guidance Scale, Inference Steps, and other parameters from the Properties panel (see settings table below).
5. Generate:
   * Click Run, and the AI will produce a video of your character performing the motion from the reference video.
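As a rough mental model, the node assembled in the steps above pairs two inputs with a set of settings. The structure below is purely illustrative; the field names are hypothetical and do not represent an actual imagine.art file format or schema.

```python
# Illustrative only: these field names are hypothetical, not a real
# imagine.art schema. They mirror the UI: two inputs plus the Properties panel.
motion_transfer_node = {
    "type": "motion_transfer",
    "inputs": {
        "character_image": "character.png",   # orange input handle
        "reference_video": "dance_ref.mp4",   # green input handle
    },
    "settings": {
        "model": "Wan 2.2 Move",
        "guidance_scale": 1,
        "resolution": "720p",
        "inference_steps": 20,
        "video_quality": "High",
        "seed": 42,
    },
}
print(sorted(motion_transfer_node["inputs"]))
```

Both inputs are required: with no character image there is nothing to animate, and with no reference video there is no motion to transfer.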

***

### Choosing the Right Settings

| Setting         | Type                               | Impact on Output                                                                                                                                                     |
| --------------- | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Model           | Dropdown (e.g., Wan 2.2 Move)      | Selects the AI model for motion transfer. Different models handle body types, motion complexity, and rendering quality differently.                                  |
| Guidance Scale  | Slider (default: 1)                | Controls how closely the output follows the reference motion. Lower values allow more creative freedom; higher values produce a stricter match to the source motion. |
| Resolution      | Dropdown (e.g., 480p, 720p)        | Determines the output video resolution. Higher resolution captures finer detail but takes longer to generate.                                                        |
| Inference Steps | Slider (default: 20)               | Controls how many processing passes the AI runs. More steps generally produce smoother, higher-quality results but increase generation time.                         |
| Video Quality   | Dropdown (e.g., High, Medium, Low) | Sets the overall rendering quality of the output video.                                                                                                              |
| Seed            | Number Input                       | A fixed number for reproducible results across generations.                                                                                                          |
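Two of these settings have well-known generic behavior in diffusion-based video models, sketched below. The guidance function shows how classifier-free guidance typically blends an unconditional prediction with a motion-conditioned one; the seed demo shows why a fixed seed reproduces results. This is a generic illustration of the technique, not the internals of any specific motion-transfer model.

```python
import numpy as np

def guided_prediction(uncond, cond, guidance_scale):
    """Classifier-free guidance, as commonly used in diffusion models.

    scale = 0 -> ignore the conditioning entirely
    scale = 1 -> use the conditional prediction as-is
    scale > 1 -> push further toward the conditioning (stricter motion match)
    """
    return uncond + guidance_scale * (cond - uncond)

uncond = np.array([0.0, 0.0])
cond = np.array([1.0, 2.0])
print(guided_prediction(uncond, cond, 1.0))  # [1. 2.]
print(guided_prediction(uncond, cond, 2.0))  # [2. 4.]

# A fixed seed makes the sampled noise -- and hence the output -- repeatable.
rng_a = np.random.default_rng(seed=42)
rng_b = np.random.default_rng(seed=42)
assert (rng_a.standard_normal(4) == rng_b.standard_normal(4)).all()
```

In practice this means you can keep the seed fixed while sweeping guidance scale or inference steps, so that any change in the output is attributable to the setting you changed rather than to random noise.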

***

### Sample Use Cases

<details open>

<summary>Viral Dance and Trend Videos</summary>

Grab a trending dance video as your reference and transfer the choreography onto an AI-generated character, brand mascot, or illustrated figure, perfect for jumping on social trends without filming yourself.

</details>

<details open>

<summary>Animated Character Performances</summary>

Bring concept art or illustrated characters to life by transferring real human performances onto them. Record a quick acting reference, and your character inherits every gesture, head tilt, and body movement.

</details>

<details open>

<summary>Virtual Try-On with Movement</summary>

Combine a fashion image with a walking or posing reference video to showcase clothing in motion, giving e-commerce customers a dynamic view of how garments look and flow on a moving body.

</details>

***

### Motion Transfer Models

Visit [Video Models](https://help.imagine.art/workflows/understanding-nodes/video-nodes/video-models) to explore all available models and find the one that fits your needs for creating or transforming videos.
