AI Video Editor Online — Edit Video with Text Prompts
Re-shooting costs time, crew, and budget. Runway Gen-4 Aleph eliminates the reshoot by editing your existing footage through text prompts — change the lighting, swap the season, add or remove objects, shift the visual style — all while preserving the original camera motion and subject movement frame by frame. This is not a filter overlay: Gen-4 Aleph is an in-context video model that understands your scene's spatial layout, lighting direction, and object relationships before applying any transformation. Upload an MP4 or WebM clip (up to 16 MB — only the first 5 seconds are processed), describe the edit, select from 6 aspect ratios (16:9, 9:16, 4:3, 3:4, 1:1, or 21:9), and generate. Premium plan required.
What Is Runway Gen-4 Aleph?
Runway Gen-4 Aleph belongs to a category called in-context video models. Traditional editors like Premiere Pro or DaVinci Resolve operate at the pixel level — you mask, keyframe, and composite manually. Gen-4 Aleph inverts this workflow: it constructs a scene graph of your input clip, mapping objects, surfaces, light sources, depth layers, and camera trajectory before applying any instruction. A prompt like 'make it snow' does not merely overlay white particles — the model adjusts surface reflections, shifts color temperature toward blue, and adds accumulation on horizontal surfaces, all synchronized to the existing camera motion across every frame.
The processing constraint is deliberate: Gen-4 Aleph reads only the first 5 seconds of your uploaded clip (24-30 fps = 120-150 frames of full-resolution scene analysis) and accepts files up to 16 MB in MP4 or WebM format. Within that window, it supports 6 output aspect ratios and an optional reference image to anchor a specific style. A Premium plan is required for access. The model handles changes that would require hours of manual compositing — relighting with directional shadow recalculation, object insertion with matched reflections, or full environment swaps — delivered in minutes rather than days.
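The arithmetic behind these limits is easy to sanity-check before uploading. A minimal Python sketch of the pre-flight math (the constant and function names here are ours, not part of the platform):

```python
# Pre-flight checks implied by the limits above; the 16 MB cap and
# 5-second window come from this page, everything else is illustrative.
MAX_BYTES = 16 * 1024 * 1024   # "16 MB" read as 16 MiB; adjust if the platform counts decimal MB
WINDOW_SECONDS = 5             # only this much of the clip is analyzed

def frames_analyzed(fps: float) -> int:
    """Number of frames inside the 5-second processing window."""
    return int(fps * WINDOW_SECONDS)

def fits_upload_cap(size_bytes: int) -> bool:
    """True if the file is within the 16 MB upload limit."""
    return size_bytes <= MAX_BYTES

print(frames_analyzed(24))  # 120
print(frames_analyzed(30))  # 150
```

At 24 fps the window covers 120 frames; at 30 fps, 150 — matching the range quoted above.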
Gen-4 Aleph Editing Capabilities
Four transformation categories — camera, objects, environment, and style — each preserving temporal coherence.
Camera and Novel View Synthesis
Gen-4 Aleph can synthesize camera angles that were never captured in the original footage. Describe a reverse shot, a low-angle push-in, or a medium close-up, and the model renders the new viewpoint while preserving subject identity, lighting direction, and motion trajectory. It can also extend a shot beyond its original cut point or transplant camera movement from one clip to another — useful for matching A-roll and B-roll dynamics.
- Novel viewpoint synthesis from a single input angle — generate reverse shots, low angles, or close-ups that never existed in the original footage
- Shot extension and continuation with motion-matched frames for seamless editing timelines
- Camera motion transfer — apply one clip's camera trajectory (dolly, pan, crane) to a completely different scene
Object Insertion and Removal
Add, remove, or retexture elements within a scene. Gen-4 Aleph matches the inserted object's lighting, shadow angle, surface reflections, and perspective to the existing footage frame by frame. Removing an object fills the gap with contextually plausible background that remains stable across all frames — similar to content-aware fill but with temporal coherence that static tools lack.
- Object insertion with matched lighting direction, shadow length, surface reflections, and perspective — no manual compositing
- Clean removal with temporally consistent background fill across all frames, replacing content-aware fill workflows
- Texture swap and material change on existing surfaces (wood to marble, matte to glossy)
- Foreground extraction without a physical green screen — the model isolates subjects from complex backgrounds
Environment and Atmosphere Editing
Swap locations, seasons, weather, and time of day without re-shooting. The model identifies ground planes, sky regions, and architectural features before applying environment changes, so a 'sunset' prompt shifts the light angle and elongates shadows rather than merely tinting the frame orange. Surface interactions are physically motivated: rain makes roads reflective, snow accumulates on horizontal ledges.
- Season and weather transformation — rain, snow, fog, sunshine — with surface interaction (wet roads reflect, snow accumulates on ledges)
- Time-of-day relighting with directional shadow recalculation (a low dawn sun casts long shadows toward the west, noon shortens them)
- Location backdrop replacement while preserving foreground subject motion, silhouettes, and parallax
Visual Style and Color Transfer
Apply a reference image's color palette, texture, and artistic treatment to your footage. Gen-4 Aleph separates structure (edges, motion, depth) from appearance (color, texture, grain), preserving the former while replacing the latter. This enables style transfers from film stills, paintings, or brand mood boards without manual color grading or LUT matching.
- Reference-image style transfer for precise color palette, texture, and grain control — attach a film still or brand mood board
- Artistic transformation — anime, oil painting, pencil sketch, watercolor — while preserving subject edges and motion
- Selective relighting and color grade adjustments via text prompt without affecting scene geometry
Input and Output Specifications
Technical constraints, supported formats, and cost details for Runway Gen-4 Aleph video editing on this platform.
- Input formats: MP4 and WebM — maximum file size 16 MB
- Processing window: first 5 seconds of the uploaded clip (120-150 frames at 24-30 fps)
- Output aspect ratios: 16:9, 9:16, 4:3, 3:4, 1:1, 21:9 — selected at generation time
- Optional reference image (JPEG, PNG) for style guidance and color palette anchoring
- Optional seed parameter for reproducible output across prompt iterations
- Access: Premium plan (any paid subscription or credit pack) required
- Provider: Runway Gen-4 Aleph via Replicate — output preserves temporal coherence and directional lighting
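A client-side validator can catch spec violations before a generation is spent. A hypothetical sketch mirroring the list above (the function and constant names are ours, not the platform's API):

```python
# Hypothetical validator for the input specs listed above.
ALLOWED_FORMATS = {"mp4", "webm"}
ALLOWED_RATIOS = {"16:9", "9:16", "4:3", "3:4", "1:1", "21:9"}
MAX_BYTES = 16 * 1024 * 1024  # "16 MB" read as 16 MiB; adjust if the platform means decimal MB

def validate_request(filename: str, size_bytes: int, aspect_ratio: str) -> list[str]:
    """Return a list of spec violations; an empty list means the request looks valid."""
    errors = []
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in ALLOWED_FORMATS:
        errors.append(f"unsupported format: .{ext} (use MP4 or WebM)")
    if size_bytes > MAX_BYTES:
        errors.append(f"file too large: {size_bytes} bytes (max 16 MB)")
    if aspect_ratio not in ALLOWED_RATIOS:
        errors.append(f"unsupported aspect ratio: {aspect_ratio}")
    return errors

print(validate_request("clip.mp4", 10_000_000, "16:9"))  # []
```

Running the check locally turns a failed upload into an immediate error message instead of a rejected generation.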
How to Edit Video Online with AI
Upload a clip, describe the change, generate the edit — no software installation.
1. Upload Your Clip
Select an MP4 or WebM file up to 16 MB. Only the first 5 seconds (120-150 frames at 24-30 fps) are analyzed, so trim your clip to the most important segment before uploading. Stable footage with clear subjects and consistent lighting produces the sharpest transformations.
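Trimming to the important 5 seconds is a one-line ffmpeg job. A sketch that builds the command (assumes ffmpeg is installed; the helper name is ours):

```python
# Hypothetical helper that builds an ffmpeg command to keep only the
# first N seconds of a clip before upload.
def trim_command(src: str, dst: str, seconds: int = 5) -> list[str]:
    """ffmpeg args: stream-copy the first `seconds` of src into dst."""
    return [
        "ffmpeg",
        "-i", src,
        "-t", str(seconds),   # keep only the first N seconds
        "-c", "copy",         # no re-encode; note: copy cuts at keyframes,
                              # so the duration is approximate — drop this
                              # flag for a frame-accurate (re-encoded) trim
        dst,
    ]

print(" ".join(trim_command("raw.mp4", "trimmed.mp4")))
```

Pass the resulting list to `subprocess.run` or paste the printed command into a terminal.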
2. Describe the Transformation
Write a text prompt specifying the change: 'replace the background with a snowy mountain,' 'add warm golden-hour lighting,' or 'remove the sign on the wall.' Optionally attach a reference image to anchor a specific color palette or art style. Choose from 6 aspect ratios (16:9, 9:16, 4:3, 3:4, 1:1, 21:9) to match your distribution platform.
3. Generate, Review, Iterate
Each generation typically finishes within minutes. Review the output, then adjust the prompt incrementally — change one variable at a time for predictable results. Use the optional seed parameter when you need reproducible variations across multiple edits of the same clip.
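Holding the seed constant while changing one field keeps comparisons meaningful. A sketch of how two iterations might differ (the parameter names are illustrative, not the platform's actual API):

```python
# Hypothetical request payloads showing one-variable-at-a-time iteration
# with a fixed seed; field names are ours, not the real API schema.
base = {
    "prompt": "add warm golden-hour lighting",
    "aspect_ratio": "16:9",
    "seed": 1234,  # held constant so only the prompt differs between runs
}
variant = {**base, "prompt": "add warm golden-hour lighting with long shadows"}

# Same seed, same aspect ratio, one changed variable: the prompt.
print(base["seed"] == variant["seed"])  # True
```

If the second output differs, the changed prompt wording — not random variation — is the cause.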
Text Prompt Writing Guide
The text prompt is the primary editing interface. A well-structured prompt targets one transformation per generation, specifies the change concretely, and identifies elements to preserve. Refining your prompt approach before generating saves time and resources.
- Target one transformation per prompt — 'add rain' and 'change to night' produce better results as two separate generations than one combined request
- Name objects explicitly: 'remove the red sign above the door' outperforms the vague 'remove the sign' because it resolves spatial ambiguity
- Describe the desired result, not the editing technique: write 'golden-hour backlighting with long shadows' instead of 'apply a warm color grade'
- Attach a reference image when text alone produces ambiguous styles — a film still locks in color palette and grain more reliably than adjectives
- Use the seed parameter to compare prompt variations: keep the seed constant, change one word, and compare outputs side by side
- Start with a short, focused prompt (under 20 words) and add detail only if the first output misses specific elements
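The guidelines above can be encoded as a rough lint pass over a draft prompt. A hypothetical sketch (the heuristics are crude and the function name is ours):

```python
# Rough, hypothetical lint pass encoding the prompt guidelines above.
def prompt_warnings(prompt: str) -> list[str]:
    """Flag prompts that are long or appear to combine transformations."""
    warnings = []
    if len(prompt.split()) >= 20:
        warnings.append("over 20 words: start shorter, add detail only on a miss")
    if " and " in f" {prompt.lower()} ":
        # crude heuristic: 'and' often joins two transformations
        warnings.append("contains 'and': consider splitting into two generations")
    return warnings

print(prompt_warnings("add rain and change to night"))
# flags the combined transformation
```

A prompt such as "add light rain" passes cleanly; "add rain and change to night" is flagged for splitting.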
Video Editor Use Cases
Edit existing footage instead of re-creating it — from style changes to location swaps.
Visual Style Transformation
Convert phone footage into vintage film grain, neon cyberpunk, oil painting, or anime aesthetics — the model applies the style while preserving camera motion and subject movement. Use a reference image to lock in a specific color palette, or describe the target style in the text prompt. Each generation covers one style variant.
Seasonal Marketing Refresh
Take an existing product video and swap the background from summer patio to winter cabin, or change packaging colors from holiday red to spring pastel — without re-filming the product. Gen-4 Aleph preserves the product's shape, reflections, and motion while transforming everything around it.
A/B Test Video Variants
Generate several mood versions of the same 5-second clip for TikTok, Reels, and Shorts — sunny versus moody, warm versus cool, minimal versus busy — one generation per variant. Compare engagement metrics across variants without re-shooting or re-editing in Premiere.
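Enumerating the variant prompts up front keeps the A/B set consistent. A minimal sketch (the base description and phrasing are ours, for illustration):

```python
# Sketch: build one prompt per mood variant for A/B testing.
moods = ["sunny", "moody", "warm", "cool", "minimal", "busy"]
variants = [
    f"give the clip a {m} look, keep subject motion unchanged"
    for m in moods
]

print(len(variants))  # 6 prompts, one generation each
```

Each prompt changes only the mood word, so differences in engagement map cleanly back to the visual treatment.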
Phone Footage to Cinema
Add atmospheric fog, golden-hour lighting, rain effects, or shallow depth-of-field to handheld phone footage. Gen-4 Aleph applies the effect across all frames with temporal consistency, so fog drifts naturally and lighting changes track camera movement. The result looks like a graded, lit set — from raw phone video.
Location and Weather Changes
Turn a sunny street into a snow-covered lane, shift a daytime office to a nighttime scene with desk lamps, or replace an indoor backdrop with an outdoor mountain vista. The model maintains subject silhouettes, motion paths, and camera tracking while rewriting the environment around them.
Object Addition and Removal
Remove an unwanted sign from the background, add a coffee cup to a table, or replace a plain wall with a branded banner — all through text prompts. Gen-4 Aleph matches the inserted object's lighting direction, shadow angle, and perspective to the existing scene, producing results that would normally require compositing software.
Honest Limitations and Workarounds
Gen-4 Aleph processes only the first 5 seconds of your input clip and accepts files up to 16 MB (MP4 or WebM). Heavily detailed scenes with many moving objects can reduce edit fidelity — the model allocates attention across all elements, so simpler scenes yield sharper results. If the output is close but not perfect, refine the text prompt incrementally rather than rewriting it from scratch.
Some transformations hit the model's training boundaries: very specific brand logos, small text rendering, and extreme physics changes (e.g., gravity reversal) may not resolve cleanly. Reference images help anchor the target style when text prompts alone produce ambiguous results. Test with short, focused prompts first to maximize output quality per generation.
Video Editor FAQ
Answers to common questions about AI video editing with Runway Gen-4 Aleph — costs, formats, capabilities, and limitations.