It is a function that encodes or decodes video data into various formats. Usually this is done by a specialized chip in hardware or by a dedicated function of the CPU. The purpose is to compress the data into a manageable stream, because raw video data is gigantic. H.264 is an example of a widely used codec; MJPEG is another.
A codec differs from a format or container. The container is the file specification in which the data is wrapped. AVI, MP4, and MOV are file extensions for various containers. Each container supports a particular set of codecs, for a variety of reasons.
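A quick way to see the distinction yourself, if you have ffprobe installed (the file name is just a placeholder for any local video):

```shell
# The container is reported as the "format"; the codec is reported
# per stream. "sample.mp4" is a placeholder for any video file.
ffprobe -v error \
        -show_entries stream=codec_name \
        -show_entries format=format_name \
        -of default=noprint_wrappers=1 sample.mp4
# An MP4 container holding H.264 video and AAC audio will typically show
# codec_name=h264 and codec_name=aac for the streams, and a
# format_name listing the mov/mp4 container family.
```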
To muddy the waters even more, there is yet another layer when streaming the actual data. RTP is an example of a protocol for packetizing the encoded data into UDP (or TCP) packets so it can traverse the internet.
Video transport and storage is an infinitely complex and fun topic with a lot of history. All of this exists because video data is so dang huge and so prolific; shaving even a little off storage and transport costs adds up fast.
If AV1 noise synthesis “removes” banding, that banding was never part of the video in the first place, but your video player system created it during bit depth reduction, since you’re viewing on an 8-bit display. This can be prevented with dithering, which AV1 noise synthesis can substitute for.
The dithering pattern is random in each frame, so distinguishing between dithered gradients and noise/film grain baked into the Blu-Ray source is hardly possible.
For the encoder, randomly dithered gradients and film grain are just noise. Both AomEnc and SVT-AV1 can remove this noise (thus causing banding) for better compression, but also record information about the noise to allow for statistically identical noise to be composited back on top of each frame during playback, hiding the bands again.
My issue here is simply that there is no reference for what noise that requires --denoise-noise-level=1 looks like, or how I should recognize noise that requires --denoise-noise-level=6 and so on. If my anime screenshot is level 6 already, then is “Alien (1979)” level 12? Level 18? Higher even?
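For context, this is roughly how those grain-synthesis knobs are passed to the two encoders (file names are placeholders, and the specific levels here are arbitrary examples, not recommendations):

```shell
# aomenc: denoise the source at the given level and embed a grain
# table so the decoder re-synthesizes matching noise during playback
aomenc --denoise-noise-level=6 -o out.ivf in.y4m

# SVT-AV1's equivalent knob is --film-grain (a 0-50 scale);
# --film-grain-denoise 1 additionally strips the source noise
SvtAv1EncApp -i in.y4m --film-grain 8 --film-grain-denoise 1 -b out.ivf
```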
Software decoding video of any kind on a phone is a terrible idea, mostly for battery reasons. AV1 should only be served to hardware-decode-capable chips unless you’re on a desktop or something like that.
Should YouTube send an AV1 stream to a phone without hardware decoding? No. Should a phone without hardware decoding have the ability to play AV1 streams? Yes.
YouTube as a content deliverer is going to care more about using less bandwidth than about the device’s battery, so if the phone can play it, you bet your ass YouTube will send it.
No. YouTube cares way, way more about user retention and engagement than it does about bandwidth. They will absolutely send the stream that keeps the device running longer and playing more ads.
YouTube makes its money with ads. The more videos a user watches, the more ads YouTube can deliver to that user. Close to 90% (!!!) of all YouTube traffic comes from mobile devices. Therefore it is in YouTube’s own interest to let its users watch as many videos as possible.
You can be sure that big CDNs will still provide hw-decodeable streams to mobile devices, despite Google now offering a performant software decoder on Android. For the same reason, Twitch introduced not only AV1 but also HEVC as a new codec at the same time (since HEVC, unlike AV1, now has broad hw-decode support among mobile devices).
Fun fact: YouTube’s AV1 streams are often considerably larger than the VP9 variant. They are not opting for bandwidth savings currently.
The integration of software decoding is mainly meant as a fallback. As AV1 becomes more widespread, you will run into cases where only an AV1-encoded version of a video exists. Not on big platforms like YouTube, but on smaller sites. Think of an embedded clip in a WordPress blog or something like that.
Having said that, dav1d is a very, very performant software decoder, and its impact on battery life is often negligible for the average user.
This adds several new options to the SVT-AV1 standalone encoder (--enable-variance-boost, --variance-boost-strength). I notice that, by default, variance boost appears to be turned off.
For testing, it’s probably easier to run the standalone SvtAv1EncApp with aforementioned parameters and pipe ffmpeg’s rawvideo output to it. Either way, I assume this functionality will eventually become an option for ffmpeg’s -svtav1-params, now that it’s in the SVT-AV1 main tree.
Disclaimer: this is just what I concluded from a quick check of the patch and ffmpeg code.
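A sketch of that pipe, with placeholder file names and arbitrary example values for the two new flags:

```shell
# Decode anything ffmpeg can read into a y4m stream on stdout,
# then feed it to the standalone SVT-AV1 encoder via stdin.
# (-strict -1 lets yuv4mpegpipe carry 10-bit pixel formats.)
ffmpeg -i input.mkv -f yuv4mpegpipe -strict -1 - \
  | SvtAv1EncApp -i stdin --preset 6 \
      --enable-variance-boost 1 --variance-boost-strength 2 \
      -b output.ivf
```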
I’ve had this blurry or blocky mess on keyframes with previous versions of SVT. I had to disable temporal filtering (enable-tf=0). With the current version I don’t need to do that any more.
I’m unsure if that was a bug in SVT, or if others just considered it normal, or if it was because I don’t use av1an. Or if it’s what you experienced, but I don’t get any messy frames any more. At least not with preset 6 or below.
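For anyone who wants to try the same workaround through ffmpeg (assuming a build with libsvtav1; input/output names are placeholders):

```shell
# Disable SVT-AV1's temporal filtering to avoid blurry/blocky keyframes
ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 \
       -svtav1-params enable-tf=0 output.mkv
```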
It is important to consider that a large part of encoding fidelity comes from the specific encoder used, not the format itself. Are you referring to SVT here, or to some GPU hardware encoder?
Jesus reddit, holy hypocrisy! Yes, those communities belong to us users. How about you stop fucking us over by making your website / apps worse and worse to use lol.