Rework hardware acceleration decoder selection #1705
Conversation
…hwaccel_decoders prior to setting opts. This helps prevent the use of a hwaccel when the source codec is unsupported by the hardware (e.g. older-generation Nvidia GPUs not supporting AV1 or HEVC).
Before:

```python
formatline = next((line.strip() for line in self._get_stdout([self.ffmpeg_path, '-hide_banner', '-h', 'decoder=%s' % decoder]).split('\n')[1:] if line and line.strip().startswith(prefix)), "")
formats = formatline.split(":")
return formats[1].strip().split(" ") if formats and len(formats) > 0 else []
```

After:

```python
format_line = next((line.strip() for line in self._get_stdout([self.ffmpeg_path, '-hide_banner', '-h', f"decoder={decoder}"]).split('\n')[1:] if line and line.strip().startswith(prefix)), "")
```
These changes avoid an index out-of-bounds error when trying to get formats for invalid codecs.
Looks like this was just a mistake on my part; the line

```python
return formats[1].strip().split(" ") if formats and len(formats) > 0 else []
```

should have read

```python
return formats[1].strip().split(" ") if formats and len(formats) > 1 else []
```

Fixed that with e62addf.
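To illustrate why the guard must be `len(formats) > 1`: `str.split(":")` always returns at least one element, so `len(formats) > 0` is always true and `formats[1]` raises `IndexError` whenever the line has no colon (e.g. for an invalid codec). A minimal standalone sketch of the fixed logic (`parse_formats` is a hypothetical name, not the repository's function):

```python
def parse_formats(format_line: str) -> list:
    """Extract the space-separated values after the first ':' in an
    'ffmpeg -h decoder=...' info line; return [] if there is no ':'."""
    formats = format_line.split(":")
    # split() always yields at least one element, so guard on > 1, not > 0
    return formats[1].strip().split(" ") if len(formats) > 1 else []

print(parse_formats("Supported pixel formats: yuv420p nv12"))  # ['yuv420p', 'nv12']
print(parse_formats("no colon on this line"))                  # []
```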
```diff
@@ -1490,64 +1494,85 @@ def checkDisposition(self, allowed, source):
             return False
         return True

     # Hardware acceleration options now with bit depth safety checks
-    def setAcceleration(self, video_codec, pix_fmt, codecs=[], pix_fmts=[]):
+    def set_decoder(self, video_codec: str, pix_fmt: str):
```
Renamed this, as it's specific to decoders.
```python
opts.extend(['-vcodec', _decoder])

# If there's a manually specified hwaccel/decoder pairing for this codec, use it.
if video_codec in self.settings.hwaccel_decoder_override:
```
For a specific input codec, users will be able to specify a hwaccel and a decoder using the format `<codec>:<hwaccel>.<decoder>`.

e.g.:

```
hwaccel_decoder_override = av1:vaapi.av1
```

Happy to add a bit to the wiki about this.
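A hedged sketch of how the `<codec>:<hwaccel>.<decoder>` format above could be parsed (`parse_override` is a hypothetical helper name, not necessarily how the PR implements it):

```python
def parse_override(entry: str) -> dict:
    """Parse one '<codec>:<hwaccel>.<decoder>' override entry into a mapping."""
    codec, _, pairing = entry.partition(":")   # 'av1' / 'vaapi.av1'
    hwaccel, _, decoder = pairing.partition(".")
    return {codec: {"hwaccel": hwaccel, "decoder": decoder}}

print(parse_override("av1:vaapi.av1"))
# {'av1': {'hwaccel': 'vaapi', 'decoder': 'av1'}}
```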
```python
is_supported_decoder = target_decoder in codecs[video_codec]['decoders']

if is_supported_decoder and target_decoder in self.settings.hwaccel_decoders:
```
This slightly modifies existing behavior: specifying only `hwaccels=` in settings will now do nothing. Most examples I've seen of users attempting hardware acceleration on this repo have specified their decoders anyway.
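The filtering step discussed in this thread can be sketched as follows; this is an illustrative standalone version (function and parameter names are hypothetical), showing that a candidate decoder is used only if ffmpeg reports it *and* the user has listed it in `hwaccel_decoders`:

```python
def pick_decoder(video_codec, ffmpeg_decoders, hwaccel_decoders, hwaccel_name):
    """Return '<codec>_<hwaccel name>' only when it exists in ffmpeg and
    the user opted into it via hwaccel_decoders; otherwise None."""
    target = f"{video_codec}_{hwaccel_name}"
    if target in ffmpeg_decoders and target in hwaccel_decoders:
        return target
    return None

# 'cuvid' here is just an illustrative hwaccel name
print(pick_decoder("h264", {"h264_cuvid", "hevc_cuvid"}, ["h264_cuvid"], "cuvid"))  # h264_cuvid
print(pick_decoder("av1", {"h264_cuvid", "hevc_cuvid"}, ["h264_cuvid"], "cuvid"))   # None
```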
Reviewing this now. Question though: if your ffmpeg build doesn't have a vaapi decoder for av1, isn't it just falling back to software based on those options? Perhaps my understanding isn't correct, but that was my assumption. Also, could you share the ffmpeg command generated for this output?
And additionally, could you share the command generated with your fork? Seems like I probably need to allow `-hwaccel` to be added if the decoder list is empty, since some people still use that basic config option to get some generic hwaccel.
It does, confirmed. Was about to grab you some ffmpeg outputs, but it looks like something just changed.
Had a bug with the startup script on sonarr-sma that I just fixed like 2 minutes ago; might want to just do a fresh pull and try again.
Yeah, upon reviewing the decoders, it looks like vaapi is the only one that doesn't respect the `codec_hwaccel` naming convention for its decoders (though it does for the encoders).
meh, I'm now an hour down a rabbit hole discovering why I have a half-finished attempt at this. Regardless, off the top of my head I believe the script without any changes generated:

And now with the changes it generates:

Currently, the script will always set
Thoughts on adding support for https://github.com/jellyfin/jellyfin-ffmpeg? Looks like a great source of pre-compiled ffmpeg binaries with hwaccel support.

Edit: I mean this specifically in reference to the docker containers; I can open a corresponding PR there if you're interested.
@lizardfish0 I use those builds myself, but when it comes to the SMA mod, they can't be used with Alpine Linux.
Bit to explain here...

tl;dr, I needed a way to force the use of a specific `hwaccel`/`decoder` pair.

This PR addresses two problems:

1. Decoder candidates were not filtered by `hwaccel_decoders`. Currently, the script will attempt to use a decoder that matches `<input codec>_<hwaccel name>`, provided it exists in `ffmpeg -decoders`. The first and largest issue with this approach is that this codec might not actually be supported by the underlying hardware. For example, if the input codec is `av1` and you've provided `cuda` as a hwaccel, then you'll need an Nvidia 30-series GPU or later to run the `av1_cuvid` decoder, but ffmpeg doesn't know that. Additionally, the script will currently append the first valid (exists in ffmpeg) `hwaccel` it finds. In my case, even though my GPU didn't support `av1`, I'd like to use my CPU's iGPU to perform the decoding. This PR makes that possible.

2. There was no way to use a decoder that doesn't fit the `<input codec>_<hwaccel name>` mold, so I added a new setting to override/manually control the process. I wasn't aware of this, but when trying to solve my use case I learned that there are internal ffmpeg codecs that support hardware acceleration, and there are external ffmpeg codecs specifically built for a single hardware platform. ffmpeg will implicitly use the internal codec unless you tell it otherwise, i.e. you can run the implicit `hevc` decoder with `-hwaccel cuda`, or run `-vcodec hevc_cuvid` with `-hwaccel cuda`. I don't know too much about how these are maintained separately, but I read that sometimes there are differences in implementation that make one more efficient, so perhaps this will be useful to some.

Core problem: there is no good way to query ffmpeg to know whether a given decoder is actually going to work with the provided hardware. The only one who knows whether it's going to work is the user.
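To make the internal-vs-external distinction concrete, here is a hedged sketch of the two invocation styles as argument lists (the script assembles ffmpeg commands this way; file names are placeholders):

```python
# 1) Implicit internal decoder: ffmpeg picks its built-in hevc decoder and
#    offloads surface handling via -hwaccel.
implicit = ["ffmpeg", "-hwaccel", "cuda", "-i", "input.mkv", "out.mkv"]

# 2) Explicit external decoder: -vcodec before -i forces the
#    platform-specific hevc_cuvid decoder.
explicit = ["ffmpeg", "-hwaccel", "cuda", "-vcodec", "hevc_cuvid", "-i", "input.mkv", "out.mkv"]

print(" ".join(implicit))
print(" ".join(explicit))
```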
Might be helpful to run through my situation to understand why this might be useful.

Hardware:

Previously, I had

This worked well until I ran into an AV1 file, which failed transcoding.

My iGPU supports AV1 decoding, so I figured I could wrangle the settings into performing the decode with that. However, the following fails because the script attempts to use `cuda`.

Even with the change to filter by `hwaccel_decoders`, the script then searches for `<input codec>_<hwaccel name>`, and there is no `av1_vaapi`. The solution would be to use `-hwaccel vaapi` with `-vcodec av1`. Thus the second change.

If you have a cleaner solution I'm happy to work it out, but I think this works pretty well. I explored modifying the way `hwaccel_decoders` works, where we could detect a listed decoder that wasn't valid according to ffmpeg but corresponded to an internal decoder with hardware acceleration. Adding a new setting to manually specify `hwaccel`/`decoder` pairings seemed much cleaner and less confusing to users.
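Putting the two changes together, the proposed selection order can be sketched end to end; this is an illustrative standalone version (names and the `"cuvid"` hwaccel label are hypothetical, not the PR's actual code): a manual override pairing wins, otherwise fall back to the `<codec>_<hwaccel>` convention filtered by `hwaccel_decoders`.

```python
def select(codec, override, hwaccels, ffmpeg_decoders, hwaccel_decoders):
    """Return a (hwaccel, decoder) pair, preferring a manual override,
    then the convention-based search; None if nothing qualifies."""
    if codec in override:
        return override[codec]                      # user-forced pairing
    for hw in hwaccels:
        candidate = f"{codec}_{hw}"                 # <codec>_<hwaccel> convention
        if candidate in ffmpeg_decoders and candidate in hwaccel_decoders:
            return (hw, candidate)
    return None

# av1 decodes via the iGPU with the internal decoder, per the override:
print(select("av1", {"av1": ("vaapi", "av1")}, ["cuvid"], {"hevc_cuvid"}, ["hevc_cuvid"]))
# hevc falls back to the convention-based search:
print(select("hevc", {}, ["cuvid"], {"hevc_cuvid"}, ["hevc_cuvid"]))
```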