Donald Feury

Software Development, Linux, and Gaming

Odysee YouTube


Something I had to figure out how to do recently to make a Fiverr gig promo video was stacking videos together.

This is when you see two or more videos playing side by side at the same time. It is often used to compare before and after results, which is what I did.

Horizontal Stacking

So what does it look like to stack two videos together horizontally? It looks something like this:

Horizontal Stack Example

Let's see how we achieve this result:

ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex \
"[0:v][1:v]hstack=inputs=2:shortest=1[outv]" \
-map "[outv]" hstacked.mp4

You'll see we are passing in two videos as inputs with the -i option, video1.mp4 and video2.mp4.

For the filter graph we have:

"[0:v][1:v]hstack=inputs=2:shortest=1[outv]"

We are taking the video streams from the two inputs and passing them into the hstack filter. The inputs option indicates how many video streams are being used as inputs (it defaults to 2). The shortest option controls how long the output stream will be: by default, the output runs for the length of the longest input stream, while shortest=1 trims it to the length of the shortest one instead.

After that, we just map the video stream created from hstack to the output file and you're good to go.

One thing about using hstack taken from the ffmpeg filter documentation:

All streams must be of same pixel format and of same height

If I recall correctly, this means the videos have to be the same height and use the same pixel format (which is not quite the same thing as the codec). Otherwise, the output just doesn't work.
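
If your inputs don't match, you can normalize them in the same filtergraph before stacking. Here is a minimal sketch, assuming a target height of 720 and the common yuv420p pixel format (both are my assumptions, adjust to your footage):

ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex \
"[0:v]scale=-2:720,format=yuv420p[v0];
[1:v]scale=-2:720,format=yuv420p[v1];
[v0][v1]hstack=inputs=2:shortest=1[outv]" \
-map "[outv]" hstacked.mp4

The scale=-2:720 keeps each video's aspect ratio while forcing the height to 720, and format=yuv420p makes sure both streams end up with the same pixel format.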

Vertical Stacking

Let's see what vertically stacked videos look like:

Vertically Stacked Videos

Let's see how we achieve this result:

ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex \
"[0:v][1:v]vstack=inputs=2:shortest=1[outv]" \
-map "[outv]" hstacked.mp4

This is almost exactly the same as horizontal stacking, but we use vstack instead of hstack; even the arguments are the same.

The vstack filter has the same conditions as hstack, except the streams have to be the same width instead of the same height.
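
The same normalization trick from before works here too, just matched on width instead of height. A quick sketch, where the 1280 target width is again my own assumption:

ffmpeg -i video1.mp4 -i video2.mp4 -filter_complex \
"[0:v]scale=1280:-2,format=yuv420p[v0];
[1:v]scale=1280:-2,format=yuv420p[v1];
[v0][v1]vstack=inputs=2:shortest=1[outv]" \
-map "[outv]" vstacked.mp4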

Combining Stacks

A weird idea I had after playing around with these was to combine them. After doing so I got this result:

Double Stacked Videos

That's pretty interesting. It looks like a 2x2 grid of videos playing.

Now, how did we achieve this effect?

ffmpeg -i video1.mp4 -i video2.mp4 -i video3.mp4 -filter_complex \
"[0:v][1:v]hstack=inputs=2:shortest=1[row1];
[0:v][2:v]hstack=inputs=2:shortest=1[row2];
[row1][row2]vstack=inputs=2:shortest=1[outv]" \
-map "[outv]" ow-creation-double-stack.mp4

Let's go over the filter graph:

[0:v][1:v]hstack=inputs=2:shortest=1[row1]

Here we are horizontally stacking the first video stream and the second video stream and calling the new stream [row1].

[0:v][2:v]hstack=inputs=2:shortest=1[row2]

Next, we horizontally stack the first video stream with the third video stream and call that new stream [row2].

[row1][row2]vstack=inputs=2:shortest=1[outv]

Finally, we take the two horizontally stacked video streams, and vertically stack them on top of each other! That is pretty neat.
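
As an aside, ffmpeg also has an xstack filter that can build a grid like this in one step. A hedged sketch with four distinct inputs (the layout string is the standard 2x2 arrangement from the filter documentation):

ffmpeg -i video1.mp4 -i video2.mp4 -i video3.mp4 -i video4.mp4 -filter_complex \
"[0:v][1:v][2:v][3:v]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0[outv]" \
-map "[outv]" grid.mp4

Each layout entry positions one input: 0_0 is the top left corner, w0_0 sits to the right of input 0, 0_h0 sits below it, and w0_h0 fills the remaining corner.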

With that, you should be able to horizontally and vertically stack videos with ffmpeg!

Thank you for reading!

#ffmpeg #videoediting

YouTube Video

If you've been watching my videos from the posts I make on here (thank you if you have), I got a dirty little secret...

I haven't done any manual editing on the past three videos.

Now, that doesn't mean there isn't any editing. There is a little, but that will grow in time.

No, instead a single ffmpeg command does all the editing for me.

So what exactly is it doing, you ask? So far it does the following:

  • Delaying and overlaying a sub animation over the main video (usually just a recording)
  • Adding a fade in effect to the start of the video
  • Adding a fade out effect before the outro
  • Appending the outro to the end of the video

Oh, what's that? You want to see the magic? I got you, fam.

Script

#!/usr/bin/env bash

IN=$1
OUT=$2
OVER=$3
OVER_START=$4
OUTRO=$5
DURATION=$(get_vid_duration "$IN")
FADE_OUT_DURATION=$6
FADE_IN_DURATION=$7
FADE_OUT_START=$(bc -l <<< "$DURATION - $FADE_OUT_DURATION")
MILLI=${OVER_START}000

ffmpeg -i "$IN" -i "$OUTRO" -filter_complex \
    "[0:v]setpts=PTS-STARTPTS[v0];
    movie=$OVER:s=dv+da[overv][overa];
    [overv]setpts=PTS-STARTPTS+$OVER_START/TB[v1];
    [v0][v1]overlay=-600:0:eof_action=pass,fade=t=in:st=0:d=$FADE_IN_DURATION,fade=t=out:st=$FADE_OUT_START:d=$FADE_OUT_DURATION[mainv];
    [overa]adelay=$MILLI|$MILLI,volume=0.5[a1];
    [0:a:0][0:a:1][a1]amix=inputs=3:duration=longest:dropout_transition=0:weights=3 3 1[maina];
    [mainv][maina][1:v][1:a]concat=n=2:v=1:a=1[outv][outa]" \
    -map "[outv]" -map "[outa]" $OUT

That's a chunky boy, so let me break down exactly what is happening.

Arguments

IN=$1
OUT=$2
OVER=$3
OVER_START=$4
OUTRO=$5
DURATION=$(get_vid_duration "$IN")
FADE_OUT_DURATION=$6
FADE_IN_DURATION=$7
FADE_OUT_START=$(bc -l <<< "$DURATION - $FADE_OUT_DURATION")
MILLI=${OVER_START}000

These are all the arguments I'm passing into the script to build the ffmpeg command.

  • IN=$1 – this is the path to the main video that I want to use, probably a recording I did earlier in the day.

  • OUT=$2 – this is the path I want to save the final video to.

  • OVER=$3 – this is the file path to the subscription animation I started using. I thought it better to pass this in, since I may change what animation I'm using at some point.

  • OVER_START=$4 – the timestamp, in seconds, at which to start playing the subscription animation in the main video. It's needed to offset the animation's video frame timestamps and delay its audio.

  • OUTRO=$5 – the path to the outro video that gets concatenated onto the end of the main video.

  • DURATION=$(get_vid_duration "$IN") – I'm using another script to get the duration, in seconds, of the main video. It's using ffprobe to grab the metadata in a specific format.

Here is the get_vid_duration script for reference:

#!/usr/bin/env sh

IN=$1

ffprobe -i "$IN" -show_entries format=duration -v quiet -of csv="p=0"

  • FADE_OUT_DURATION=$6 – the duration in seconds of the fade out effect. It is also used to calculate the starting time of the fade out effect.

  • FADE_IN_DURATION=$7 – same as last but for the fade in effect.

  • FADE_OUT_START=$(bc -l <<< "$DURATION - $FADE_OUT_DURATION") – uses the duration and fade out duration to calculate the exact second to start the fade out effect. The expression is evaluated by bc, a terminal calculator program (the <<< herestring is a bash feature, which is why the script runs under bash).

  • MILLI=${OVER_START}000 – the milliseconds version of the overlay start time. One of the filters I use (adelay) needs milliseconds instead of seconds, and appending three zeros works because OVER_START is a whole number of seconds.

Filtergraph

"[0:v]setpts=PTS-STARTPTS[v0];
movie=$OVER:s=dv+da[overv][overa];
[overv]setpts=PTS-STARTPTS+$OVER_START/TB[v1];
[v0][v1]overlay=-600:0:eof_action=pass,fade=t=in:st=0:d=$FADE_IN_DURATION,fade=t=out:st=$FADE_OUT_START:d=$FADE_OUT_DURATION[mainv];
[overa]adelay=$MILLI|$MILLI,volume=0.5[a1];
[0:a:0][0:a:1][a1]amix=inputs=3:duration=longest:dropout_transition=0:weights=3 3 1[maina];
[mainv][maina][1:v][1:a]concat=n=2:v=1:a=1[outv][outa]"

  • [0:v]setpts=PTS-STARTPTS[v0]; – this makes sure that the main video's video stream starts at the same 00:00:00 timestamp as the animation, for proper offsetting. This might not be necessary, but I'd rather make sure.

  • movie=$OVER:s=dv+da[overv][overa]; – loading in the sub animation's video and audio stream to be available for use in the rest of the filtergraph.

  • [overv]setpts=PTS-STARTPTS+$OVER_START/TB[v1]; – offset the sub animation's timestamps by the OVER_START argument.

  • [v0][v1]overlay=-600:0:eof_action=pass – overlay the sub animation's video stream over the main video stream with an offset on the x position of -600 (bumps it over to the left).

  • fade=t=in:st=0:d=$FADE_IN_DURATION – adds a fade in effect at the start of the video stream, with a duration of FADE_IN_DURATION.

  • fade=t=out:st=$FADE_OUT_START:d=$FADE_OUT_DURATION[mainv]; – adds a fade out effect at the end of the video stream, starting at FADE_OUT_START and lasting FADE_OUT_DURATION.

  • [overa]adelay=$MILLI|$MILLI – adds a delay of MILLI milliseconds to the sub animation's audio, to sync it up with the video stream that was offset.

  • volume=0.5[a1]; – the sub animation's little ding sound is kinda loud, so I cut its volume in half.

  • [0:a:0][0:a:1][a1]amix=inputs=3:duration=longest:dropout_transition=0:weights=3 3 1[maina]; – we mix both audio streams from the main video and the audio stream from the sub animation together into one stream. duration=longest sets the combined stream's length to that of the longest input. dropout_transition and weights are used to offset the jump in volume that occurs when the sub animation's sound ends. It's not perfect, but it helps.

  • [mainv][maina][1:v][1:a]concat=n=2:v=1:a=1[outv][outa] – finally, we take the processed video and audio streams and concatenate the outro's video and audio streams onto the end of them. I just use a blank screen with some music playing for now.

Output

-map "[outv]" -map "[outa]" $OUT

Finally, we map the fully processed video and audio stream to the output file. This way, ffmpeg will write those streams out to the file, instead of the unprocessed streams straight from the input files.
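
One gotcha with that final concat: the filter expects every segment to have the same resolution, and mismatched frame rates or pixel formats can cause oddities too. If your outro was rendered with different settings than your recordings, a sketch like this can normalize it up front (the 1920x1080 and 30 fps targets, and the outro-source.mp4 name, are just my assumptions):

ffmpeg -i outro-source.mp4 -vf "scale=1920:1080,format=yuv420p,fps=30" outro.mp4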

With that, we have successfully:

  • [x] Overlaid the sub animation, at the desired time, in the main video.
  • [x] Added a fade in effect to the start of the video.
  • [x] Added a fade out effect to the end of the video.
  • [x] Concatenated the outro to the end of the video after the fade out effect.

Things I would like to add:

  • [ ] Color correction – Hard to do right now since I don't have consistent lighting in my office.
  • [ ] Better Outro – Something instead of a blank screen with music.
  • [ ] Get an Intro – Get a decent intro to add to the start of the video.

#ffmpeg #videoediting #productivity

YouTube Video

This one was a doozy to figure out but I finally managed to overlay a little subscribe animation over my main videos.

Normally I would explain it, but it's pretty long-winded, so if you want the full explanation, please watch the video.

The short answer is: it's using a complex filtergraph to offset the timestamps on the frames of the sub animation and delay its audio, so they play over the main video when I want them to.

The command is as follows:

#!/usr/bin/env sh

IN=$1
OVER=$2
OUT=$3
START=$4
MILLI=${START}000

ffmpeg -i "$IN" -filter_complex \
"[0:v]setpts=PTS-STARTPTS[v0];
movie=$OVER:s=dv+da[overv][overa];
[overv]setpts=PTS-STARTPTS+$START/TB[v1];
[v0][v1]overlay=-600:0:eof_action=pass[out1];
[overa]adelay=$MILLI|$MILLI,volume=0.5[a1];
[0:a:0][0:a:1][a1]amix=inputs=3:duration=longest:dropout_transition=0:weights=3 3 1[outa]" \
-map "[out1]" -map "[outa]" $OUT

#ffmpeg #videoediting

Odysee YouTube


I have found it very useful to concatenate multiple video files together after working on them separately. It turns out that is rather simple to do with ffmpeg.

How do we do this?

There are three methods I have found thus far:

  • Using the concat demuxer approach

    • This method is very fast, as it avoids transcoding
    • This method only works if the files have the same video and audio encoding, otherwise artifacts will be introduced
  • Using the file-level concatenation approach

    • There are some encodings that support file level concatenation, kinda like just using cat on two files in the terminal
    • Very few encodings can do this; the only one I've used is the MPEG-2 Transport Stream format (.ts)
  • Using a complex filtergraph with the concat filter

    • This method can concat videos with different encodings
    • This will cause a transcode to occur, so it takes time and may degrade quality
    • The syntax is hard to understand if you've never written complex filtergraphs for ffmpeg before

Let's look at the examples. First, the concat demuxer approach:

ffmpeg -f concat -i list.txt -c copy out.mp4

Unlike most ffmpeg commands, this one takes in a text file containing the files we want to concatenate. The text file would look something like this:

file 'video1.mp4'
file 'video2.mp4'
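
You don't have to write that file by hand, either. A small sketch that generates it from a directory of clips (the clips/ directory is my own placeholder):

for f in clips/*.mp4; do printf "file '%s'\n" "$f"; done > list.txt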

The example for the file level concatenation would look like this:

ffmpeg -i "concat:video1.ts|video2.ts" -c copy out.ts

and the last example would be like so:

ffmpeg -i video1.mp4 -i video2.flv -filter_complex \
"[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]" \
-map "[outv]" -map "[outa]" out.mp4

This one is probably pretty confusing, so let me explain the complex filtergraph syntax:

Unlike using filters normally with -vf or -af, when using a complex filtergraph we have to tell ffmpeg what streams of data we are operating on, per filter.

At the start you see:

[0:v][0:a][1:v][1:a]

This translates in plain English to:

Use the video stream of the first input source, use the audio stream from the first input source, use the video stream from the second input source, and use the audio stream from the second input source.

The square bracket syntax indicates:

[index_of_input:stream_type]

Those of us with experience in programming will understand why the index starts at 0 and not 1.

Now that we have declared what streams we are using, we have normal filter syntax:

concat=n=2:v=1:a=1

concat is the name of the filter

n=2 is specifying there are two input sources

v=1 indicates each input source has only one video stream and to write only one video stream out as output

a=1 indicates each input source has only one audio stream and to write only one audio stream out as output

Next, we label the streams of data created by the filter using the bracket syntax:

[outv] [outa]

Here, we are calling the newly created video stream outv and the audio stream outa. We need these labels later when using the -map flag on the output.

Lastly, we need to explicitly tell ffmpeg what streams of data to map to the output being written to the file, using the -map option:

-map "[outv]" -map "[outa]"

Do those names look familiar? It's what we labeled the streams created from the concat filter. We are telling ffmpeg:

Don't use the streams directly from the input files, instead use these data streams created by a filtergraph.

And with that, you let it run and, tada, you have concatenated two videos with completely different encodings. Hurray!

#ffmpeg #videoediting

YouTube Video

The Magic

This is probably one of the more helpful tricks I've shown off so far.

Here is a command using ffmpeg that will, get this, remove all frozen frames from a video, leaving only the frames showing work being done.

ffmpeg -i video.mp4 -vf mpdecimate,setpts=N/FRAME_RATE/TB -an out.mp4

Let this run for a while and BOOM, you get a clean recording devoid of down time.

This has one catch: mpdecimate, the filter that removes the frozen frames, only operates on the frames of a video, not the audio. The setpts=N/FRAME_RATE/TB part then regenerates the timestamps so the surviving frames play back smoothly instead of stuttering.

I use the -an option to remove the audio; if I didn't, it would be desynced and make no sense.
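
By default, mpdecimate is fairly conservative about what counts as a duplicate frame. If it keeps too many frames (or drops too many), its hi, lo, and frac thresholds can be tuned. A hedged sketch, where these particular values are just a starting point rather than a recommendation:

ffmpeg -i video.mp4 -vf "mpdecimate=hi=64*24:lo=64*5:frac=0.5,setpts=N/FRAME_RATE/TB" -an out.mp4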

This has saved me HOURS of editing when working on recordings of my wife doing her graphic design work.

I hope it helps y'all too.

The Plug

If this trick helped y'all, I would appreciate it if you would share the YouTube video and go sub to my channel. I'm working on growing my channel and any help would be amazing.

#ffmpeg #videoediting

YouTube Video

It turns out removing the audio from a video is very easy with ffmpeg!

To remove the audio from a video:

ffmpeg -i video.mp4 -c:v copy -an out.mp4

The -an option will completely remove the audio from the video, and since we're just copying the video codec as-is, this will most likely take only seconds.

To extract the audio from a video:

ffmpeg -i video.mp4 -vn audio.wav

As you might have guessed, the -vn option will remove the video stream from the output, leaving only the audio. In this case, I'm re-encoding the audio as WAV, as that is a very good format to work with for further processing.
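
If you would rather keep the original audio untouched, you can stream copy it instead of re-encoding. A minimal sketch, assuming the source audio is AAC (the output extension needs to match whatever codec is actually in the file):

ffmpeg -i video.mp4 -vn -c:a copy audio.aac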

#ffmpeg

Odysee YouTube


I figured out that you don't need to use a full-fledged video editor to apply some basic transition effects to video and audio.

Using ffmpeg, you can apply basic fade in and fade out to video using the fade filter. For audio, the afade filter can be used to achieve a similar effect.

ffmpeg -i video.mp4 -vf "fade=t=in:st=0:d=3" -c:a copy out.mp4

This will start the video with a black screen and fade in to full view over 3 seconds.

ffmpeg -i video.mp4 -vf "fade=t=out:st=10:d=5" -c:a copy out.mp4

This will make the video start fading to black over 5 seconds at the 10 second mark.

ffmpeg -i music.mp3 -af "afade=t=in:st=0:d=5" out.mp3

The audio file will start at a low volume and fade in to full over 5 seconds.

ffmpeg -i music.mp3 -af "afade=t=out:st=5:d=5" out.mp3

The audio file will start fading to zero volume over 5 seconds starting at the 5 second mark.

ffmpeg -i video.mp4 -vf "fade=t=in:st=0:d=10,fade=t=out:st=10:d=5" -c:a copy out.mp4

You can apply both fade in and fade out effects in a single pass, to avoid having to re-encode twice.
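
If the file has sound, the video and audio fades can be combined in one command as well. A quick sketch (the timings are arbitrary):

ffmpeg -i video.mp4 -vf "fade=t=in:st=0:d=3" -af "afade=t=in:st=0:d=3" out.mp4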

Enjoy

#ffmpeg #videoediting

YouTube Video

In this video, I go a little more in-depth about how to cut sections of video and audio out into separate files using just ffmpeg.

Enjoy

#ffmpeg #videoediting

YouTube Video

So I've been messing around with ffmpeg a lot to see how much video editing I can do without using an actual GUI video editor. So far, it turns out the answer is quite a bit.

I go over three common tasks I've needed to do recently with only ffmpeg:

  • Cutting out a section of a video into its own file

    • The main benefit of this is that this process does not cause a re-encode, which saves an amazing amount of time. When I did the same thing in Davinci Resolve, it took 15ish minutes to render out the clip; ffmpeg took a whole 20 seconds. (See the sketch after this list.)
  • Cleaning up a video clip by removing "frozen frames", using the ffmpeg filter called mpdecimate (frozen frames are sections of a video where there appears to be nothing happening, hence it looks frozen)

    • This is really useful for making time lapses of processes, as you remove all the dead frames and you get only 100% action going on.
  • Taking the decimated video and creating a time lapse of any duration, in seconds, by calculating the value needed to adjust the presentation timestamps to get the desired duration (also sketched below).

    • This one has been really handy for making time lapses of my wife's design work. I've used it so far to make versions for Instagram and Tiktok.
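
Since the video covers the details, here are hedged sketches of the first and third tasks. The file names and timings are placeholders, not taken from the video:

# Cut 00:01:00-00:02:00 out into its own file without re-encoding
# (with -c copy, the cut points snap to the nearest keyframes)
ffmpeg -i video.mp4 -ss 00:01:00 -to 00:02:00 -c copy clip.mp4

# Squeeze a 600 second decimated clip into a 30 second time lapse:
# multiply every presentation timestamp by target / current duration
ffmpeg -i decimated.mp4 -vf "setpts=PTS*(30/600)" -an timelapse.mp4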

Enjoy the video

#ffmpeg

YouTube Video

I have an old ThinkPad X200 laptop, and if there is one thing it doesn't like, it's Electron apps that consume 30% of its RAM.

So I used a combination of two programs to replace the official Spotify client, spotify-tui and spotifyd, both of which are written in Rust.

Enjoy

#spotify #linux