FFmpeg Devs Boast of Up To 94x Performance Boost After Implementing Handwritten AVX-512 Assembly Code - Slashdot

FFmpeg Devs Boast of Up To 94x Performance Boost After Implementing Handwritten AVX-512 Assembly Code (tomshardware.com) 42

Anton Shilov reports via Tom's Hardware: FFmpeg is an open-source video decoding project developed by volunteers who contribute to its codebase, fix bugs, and add new features. The project is led by a small group of core developers and maintainers who oversee its direction and ensure that contributions meet certain standards. They coordinate the project's development and release cycles, merging contributions from other developers. This group of developers tried to implement a handwritten AVX-512 assembly code path, something that has rarely been done before, at least not in the video industry.

The developers have created an optimized code path using the AVX-512 instruction set to accelerate specific functions within the FFmpeg multimedia processing library. By leveraging AVX-512, they were able to achieve significant performance improvements -- from three to 94 times faster -- compared to standard implementations. AVX-512 enables processing large chunks of data in parallel using 512-bit registers, which can hold 16 single-precision or 8 double-precision floating-point values and operate on all of them in a single instruction. This makes it ideal for compute-heavy tasks in general, and for video and image processing in particular.

The benchmarking results show that the new handwritten AVX-512 code path performs considerably faster than other implementations, including baseline C code and lower SIMD instruction sets like AVX2 and SSSE3. In some cases, the revamped AVX-512 codepath achieves a speedup of nearly 94 times over the baseline, highlighting the efficiency of hand-optimized assembly code for AVX-512.
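
To make the "512-bit registers" point concrete, here is a minimal, illustrative C intrinsics sketch -- not the project's hand-written assembly, and the function name is invented -- that adds 16 single-precision floats per instruction on a CPU with AVX-512F (compile with something like gcc -mavx512f):

    /* Illustrative only: one 512-bit register holds 16 floats, so each
     * loop iteration loads, adds, and stores 16 elements at a time.
     * n is assumed to be a multiple of 16 to keep the sketch short. */
    #include <immintrin.h>
    #include <stddef.h>

    void add_f32_avx512(float *dst, const float *a, const float *b, size_t n)
    {
        for (size_t i = 0; i < n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);   /* load 16 floats */
            __m512 vb = _mm512_loadu_ps(b + i);   /* load 16 more */
            _mm512_storeu_ps(dst + i, _mm512_add_ps(va, vb)); /* one 512-bit add */
        }
    }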

Comments Filter:
  • Neat... But, (Score:4, Insightful)

    by Valgrus Thunderaxe ( 8769977 ) on Monday November 04, 2024 @05:55PM (#64919749)
    With a 94x improvement, someone needs to fix their compiler.
  • I hope it's a real-world benchmark, not some contrived situation and propaganda from Intel.

  • by dfghjk ( 711126 ) on Monday November 04, 2024 @06:07PM (#64919771)

    Why didn't they just ask ChatGPT to rewrite it? Handwritten? Haven't we been told there's no reason for that?

  • by ip_freely_2000 ( 577249 ) on Monday November 04, 2024 @06:09PM (#64919777)
    I've had 90%+ optimizations on certain data processing functions by hand coding and tuning instead of depending on libraries and other 'productivity' tools. In my coding life (which is long but fortunately very nearly over) we've added layer upon layer of complexity which is sometimes not necessary.
    • I stopped coding/optimizing in assembly over 20 years ago, and then only utilized knowledge of it for debugging, cybersecurity, or fun purposes for a few years. Nowadays I have zero use for it other than getting super annoyed that people don't know it.

      • Hey punk, get off my ROM!

        My last significant use of assembly was late 80s / early 90s. Working on embedded systems with 8-bit microprocessors, tiny boot ROM capacity, and rudimentary or no compiler, there was no choice. Around 2017, I had to take a deep dive into GCC compiled ARM code to characterize an obscure but dramatic failure in a specific embedded situation. This turned out to be incorrect code generated by GCC. Once characterized, it was not difficult to work around, but it required machine level

    • by Tony Isaac ( 1301187 ) on Monday November 04, 2024 @06:21PM (#64919801) Homepage

      That, and programmers often use boneheaded algorithms because they don't know any better.

      Remember Bubble Sort? If you tried to build the most inefficient algorithm possible, it's hard to imagine one that would beat Bubble Sort. And yet for years, every computer programming textbook taught this algorithm that's useful for basically nothing, and isn't even intuitive. Students normally react with "How does that even work???" But you know that algorithm made its way into more than a few production systems.

      Software optimization employs some very specific techniques. Notably, using some kind of profiler to identify where your bottlenecks are, and looking for ways to reduce execution or loop counts, or ways to reduce the time spent in each iteration. There's a whole lot of software, including decoding algorithms, that never went through any kind of proper optimization analysis.

      I agree, it's not surprising to find ways to increase performance by 90+%, regardless of the language chosen.
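
      On the profiling point above: tools like perf, gprof, or VTune are the right way to find hotspots. As a crude, illustrative C-only stand-in (the hot loop here is hypothetical, not FFmpeg code), you can bracket a suspect section with a monotonic clock and compare timings before and after a change:

        /* Crude manual timing sketch; a real profiler gives per-function
         * and per-line breakdowns, but this shows the basic idea. */
        #include <stdio.h>
        #include <time.h>

        static double now_seconds(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void)
        {
            volatile double acc = 0.0;
            double t0 = now_seconds();
            for (long i = 0; i < 100000000L; i++)   /* stand-in for the hot loop */
                acc += i * 0.5;
            printf("hot loop: %.3f s\n", now_seconds() - t0);
            return 0;
        }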

      • by Entrope ( 68843 )

        If you tried to build the most inefficient algorithm possible, it's hard to imagine one that would beat Bubble Sort.

        Challenge accepted [wikipedia.org].

        • Funny! Well, since this sort compares its slowness to Bubble Sort, it would seem that Bubble Sort might still get 2nd place for slowest!

        • by vux984 ( 928602 )

          Bah - I like random sort, which is essentially:

          swap two elements at random
          check if the list is sorted now
          repeat if the list is not sorted

          Given enough time, it will sort the list, quite by accident. ;)
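
          A C sketch of this "random sort" (a cousin of the shuffle-based bogosort mentioned downthread), purely for amusement -- the helper names are made up:

            /* Swap two random positions, check if sorted, repeat.
             * Terminates with probability 1, eventually. */
            #include <stdbool.h>
            #include <stdlib.h>

            static bool is_sorted(const int *a, size_t n)
            {
                for (size_t i = 1; i < n; i++)
                    if (a[i - 1] > a[i])
                        return false;
                return true;
            }

            void random_swap_sort(int *a, size_t n)
            {
                while (!is_sorted(a, n)) {
                    size_t i = (size_t)rand() % n;   /* pick two positions at random */
                    size_t j = (size_t)rand() % n;
                    int tmp = a[i];
                    a[i] = a[j];
                    a[j] = tmp;
                }
            }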

      • If you tried to build the most inefficient algorithm possible, it's hard to imagine one that would beat Bubble Sort.

        Then you lack imagination. Bubble sort is O(n^2). There are O(n^3) sorting algorithms. Here's an O(n!) sort:

        1. Shuffle data randomly
        2. Test if it is sorted. If yes, you're done, else go to 1.

        And yet for years, every computer programming textbook taught this algorithm that's useful for basically nothing

        Bubble sort is useful for very small datasets, like 10 or so, and constrained memory or cache capacity.

        But bubble sort is mostly taught as an example of a naive implementation leading to poor performance.

        isn't even intuitive. Students normally react with "How does that even work???"

        It's obvious why Bubble sort works. It is way easier to understand than Quicksort.

        • 1. Shuffle data randomly
          2. Test if it is sorted. If yes, you're done, else go to 1.

          Look, I've told you before - I get really tired of people reposting my code without attribution.

      • Remember Bubble Sort? If you tried to build the most inefficient algorithm possible, it's hard to imagine one that would beat Bubble Sort. And yet for years, every computer programming textbook taught this algorithm that's useful for basically nothing, and isn't even intuitive. Students normally react with "How does that even work???" But you know that algorithm made its way into more than a few production systems.

        Bubble Sort sorts an already sorted list in O(n) time. Try doing the same thing with Merge Sort.

        • Shell sort and insertion sort are both simple and both do better than bubble sort, even in your "ideal" scenario.

      • Remember Bubble Sort? If you tried to build the most inefficient algorithm possible, it's hard to imagine one that would beat Bubble Sort

        Bubble sort is faster than Quicksort for less than 8 items. That sounds like nothing, but then you realize many if not most sorts done probably have fewer than 8 items.

      • And yet for years, every computer programming textbook taught this algorithm that's useful for basically nothing, and isn't even intuitive.

        You didn't pay any attention in class. Bubble sort is held up in virtually every textbook as an example of something that does the baseline job in an inefficient way. It is literally taught as an example of not being useful.

        That said, I think your assertion that it isn't intuitive is quite silly. It's probably the most intuitive algorithm there is. Is the current value bigger than the next value in the array? If so, switch, repeat, done. There is literally nothing more intuitive than comparing two numbers and just moving them.
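
        For reference, a minimal C bubble sort with the usual "no swaps" early exit, which is also what makes it O(n) on an already-sorted array, as noted upthread (illustrative sketch):

          #include <stdbool.h>
          #include <stddef.h>

          void bubble_sort(int *a, size_t n)
          {
              if (n < 2)
                  return;
              for (size_t pass = 0; pass + 1 < n; pass++) {
                  bool swapped = false;
                  for (size_t i = 0; i + 1 < n - pass; i++) {
                      if (a[i] > a[i + 1]) {       /* current bigger than next? */
                          int tmp = a[i];          /* then swap the pair */
                          a[i] = a[i + 1];
                          a[i + 1] = tmp;
                          swapped = true;
                      }
                  }
                  if (!swapped)                    /* no swaps: already sorted, stop */
                      break;
              }
          }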

    • Optimizing in assembly used to be a routine thing. I guess it went away because: 1. Takes forever. 2. Only very smart people can do it. 3. Idiot managers only want shipped product, today!

      I think #2 is the real bottleneck. Geniuses have lots of options, working for idiots is a shitty option.

      • I wonder how much faster it will be when it's written in rust?

        • Haha, I came here to comment almost the same thing. Can't tell if you are being sarcastic though. But I sure was going to be.

          The masturbating security monkeys sure seem to think Rust is hot shit but I want REAL WORLD examples dammit. If I was a billionaire I would be paying top programmers to battle head to head, and benchmark both the code, and TIME TO WRITE the code, of C vs Rust

      • Well, maintenance nightmare aside, it's actually pretty hard to get performance out of handwritten assembly on x86, and it's usually not worth it. I've seen the compiler spit out utter garbage that still runs at roughly the same speed as handwritten assembly, just because of the amazing pipelining the CPU does. The main benefit you'd get is a smaller binary file. The SIMD instructions are the outlier: compiler support might not be good enough, and the instructions can be difficult for the compiler to generate.

    • Mostly I have used assembler to do stuff needed in a system, stuff that a general-purpose library does for you on a full operating system. But in an embedded system you are the full operating system, and the RTOSs out there don't give you system startup code and the like. For instance: cache invalidation instructions, interrupt/exception handlers, memory barriers, context switching, etc. Other times you _know_ the code is very slow and can be sped up, but can't easily be sped up with pure standard C code.

      In

  • by Xylantiel ( 177496 ) on Monday November 04, 2024 @06:19PM (#64919795)
    I fiddled with this a bit once, and it seemed that not all chips implement "actual" AVX-512; i.e., some chips just support the instructions but don't actually have the hardware to do all those operations in parallel. Maybe that is discussed more in the article.
  • Due to the nature of assembler, that code will be bound to one CPU architecture.

    Won't help those of us on ARM or Apple silicon.

    I suppose we could offload video over the network to an x86 architecture box, but that'll eat up some of that 94x.

    Wonder what Windows and MacOS emulators make of raw machine code?

    • I have an acquaintance that works for one of the FAANG companies. The focus of their team is to hand write assembly for performance critical operations across the company. They do this for multiple chip architectures.

      Your phone or PC might well have some of that code in it.

    • AVX-512 instructions are specific to x86. If you want the same sort of acceleration on ARM (including Apple M-series processors), you need to use something like the Scalable Vector Extension (SVE) or Scalable Vector Extension 2 (SVE2), which are designed for the ARM architecture family.
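
    As a rough illustration (generic ARM C Language Extensions code, nothing from FFmpeg), an SVE kernel that adds two float arrays looks roughly like this; the hardware picks the vector length at run time:

      /* SVE sketch: the predicate pg masks off the tail, so no scalar
       * cleanup loop is needed. Compile with SVE enabled, e.g.
       * -march=armv8-a+sve. */
      #include <arm_sve.h>
      #include <stddef.h>

      void add_f32_sve(float *dst, const float *a, const float *b, size_t n)
      {
          for (size_t i = 0; i < n; i += svcntw()) {     /* svcntw() = floats per vector */
              svbool_t pg = svwhilelt_b32_u64(i, n);     /* active lanes for this chunk */
              svfloat32_t va = svld1_f32(pg, a + i);
              svfloat32_t vb = svld1_f32(pg, b + i);
              svst1_f32(pg, dst + i, svadd_f32_x(pg, va, vb));
          }
      }
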
    • Architecturally, it's not a problem. You just encapsulate the assembly in a function, and use polymorphism, based on whether the CPU feature is available. For the platforms that don't have the ability, they will just run slower. Apparently 94x slower or something.
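
    A sketch of that dispatch pattern (FFmpeg has its own CPU-detection machinery; this just shows the general idea with GCC/Clang builtins, and the kernel names are invented):

      #include <stddef.h>

      /* Portable C fallback. */
      static void add_f32_c(float *dst, const float *a, const float *b, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              dst[i] = a[i] + b[i];
      }

      /* Assume an AVX-512 version is defined elsewhere (e.g. the sketch near the summary). */
      void add_f32_avx512(float *dst, const float *a, const float *b, size_t n);

      typedef void (*add_f32_fn)(float *, const float *, const float *, size_t);

      /* Pick the fastest implementation the CPU actually supports. */
      add_f32_fn select_add_f32(void)
      {
      #if defined(__x86_64__) || defined(__i386__)
          if (__builtin_cpu_supports("avx512f"))
              return add_f32_avx512;
      #endif
          return add_f32_c;   /* everyone else just runs the slower C path */
      }
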
  • by Anonymous Coward

    A while back, I was working at a place that did video production. They had these expensive, barely working video appliances whose license fees were just plain extortion, and the support was often "buy our newer model, and we might fix that". I took the physical appliance, removed the disk with the vendor OS and set it aside in case we needed it, installed Linux, and used ffmpeg for everything that appliance did. It worked perfectly, did what we needed it to do, and we might as well use the Supermicro hardware that

  • Since the summary couldn't be bothered, AVX-512 is an instruction set extension for X86 processors.
  • Hand coding can drastically improve performance if you know what you're doing and the compiler is doing a poor job. That being said, Intel is deprecating its AVX-512 support since it wasn't worth the silicon. AMD on the other hand did a much better job of it.
