Lack of PCIe bandwidth can nerf RTX 5090 by up to 25% in content creation workloads — Puget data confirms performance hit when using older generations and fewer lanes
Large language model AI workloads are unaffected, but content creation and 3D rendering took a real hit.

It's always been clear that PCIe bandwidth has an effect on gaming, especially when playing on some of the most high-end GPUs. But what about content creation? Puget Systems put this idea to the test and found that lacking proper PCIe bandwidth can seriously hinder performance in video rendering and game development. That suggests professionals working with multiple add-in cards may need to take care in their placement to ensure the best possible performance for their workloads.
The PCIe generation you get is mostly determined by your motherboard (and CPU). But just because a board supports the latest PCIe 5.0 on its x16 slots doesn't mean they're all equal. Plugging cards into both slots can cause them to drop to an x8 lane configuration, halving bandwidth for both cards. Installing additional NVMe SSDs can also reduce the number of available lanes, potentially impacting performance in a range of professional tasks. The same goes for older motherboards that only support older standards.
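To see why different generation/lane combinations can behave alike, it helps to compare raw link bandwidth. Here's a rough sketch in Python; the per-lane figures are approximate usable rates after link-layer encoding overhead, not official specs:

```python
# Approximate usable per-lane bandwidth in GB/s for each PCIe generation
# (after encoding overhead); illustrative figures only.
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Total one-direction link bandwidth in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [(5, 16), (5, 8), (4, 16), (5, 4), (4, 8), (3, 16)]:
    print(f"PCIe {gen}.0 x{lanes}: ~{link_bandwidth(gen, lanes):.0f} GB/s")
```

Note that each generation step roughly doubles per-lane throughput, so PCIe 5.0 x8 and PCIe 4.0 x16 land on about the same total (~32 GB/s), just as PCIe 5.0 x4, PCIe 4.0 x8, and PCIe 3.0 x16 cluster around ~16 GB/s — which is why benchmark results tend to group by total bandwidth rather than by generation.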
Puget's DaVinci Resolve benchmarks showed that PCIe bandwidth can limit performance. The best performance came, unsurprisingly, from the configurations with the most bandwidth: Puget found only margin-of-error differences between PCIe 5.0 x16, PCIe 5.0 x8, and PCIe 4.0 x16. But when stepping down to PCIe 5.0 x4, PCIe 4.0 x8, or PCIe 3.0 x16, performance took a roughly 10% hit.
Dropping one tier further, to PCIe 4.0 x4 or PCIe 3.0 x8, saw a total dip of 25% from the full-bandwidth figures. That's the kind of performance drop-off that could have a serious impact on a business, potentially reducing profits through longer time to results, or more staff and equipment time required.
After Effects saw less of a drop-off, with only the slowest PCIe configurations falling outside the margin of error, but those still showed an impact from the reduced PCIe bandwidth. Puget recorded similar results in its Unreal Engine 5.5 virtual production tests, where PCIe 4.0 x4 and PCIe 3.0 x8 were around 7% slower than the highest-bandwidth configurations.
Puget also recorded a 5% difference between the highest- and lowest-bandwidth options in Blender, though the Llama large language model benchmark showed little effect.
In conclusion, it seems that available PCIe bandwidth can have a notable impact on performance in professional applications, and particularly in video editing. However, it should be noted that this test was performed with an Nvidia RTX 5090 graphics card. That's the fastest GPU in the world outside of some professional options, and it demands the most PCIe bandwidth of any card. Systems built around more modest GPUs may not suffer so much from PCIe bandwidth constraints.
For those on the cutting edge, though, be careful with your add-in card configurations if you want to manage PCIe bandwidth effectively. Or just wait for PCIe 6.
-
mac_angel
So, this touches on something I've been curious about, but haven't been able to confirm anywhere, other than an older video I found on YouTube.
Can you run Nvidia Surround with three identical GPUs and displays without SLI? The video shows that you can, but nowadays you'd need a computer with enough PCIe lanes, which means something like what's in the article, and three really F'ing expensive (thanks, Nvidia) GPUs. A bit much for a "more than likely, yes". -
Li Ken-un
We technically could get 24 PCIe 5.0 lanes for three PCIe x8 (electrical) slots on current consumer boards, but AMD probably forces board makers to divert a minimum number of lanes to M.2 slots.
I’ve been looking for a non-Epyc/Threadripper board that can do so since Zen 4 came out. I don’t think such a board exists or will ever exist. -
bit_user
The article said:
"It's always been clear that PCIe bandwidth has an effect on gaming, especially when playing on some of the most high-end GPUs."
No, it hasn't. Tom's hasn't really examined this, to my knowledge, but TechPowerUp has been checking in on this subject every generation or so, and has found performance differences between the highest-bandwidth connectivity options to be pretty negligible.
https://www.techpowerup.com/review/nvidia-geforce-rtx-5090-pci-express-scaling/33.html
The article said:
"Puget proved that PCIe bandwidth can limit performance in its DaVinci Resolve benchmarks."
That was only one of the content creation workloads they tested. The rest weren't nearly as sensitive. If you actually look at the graph and think about what it's saying, I take the opposite conclusion.
The article said:
"when switching down to PCIe 5.0 x4, PCIe 4.0 x8, or PCIe 3.0 x16, performance took a 10% hit."
Think about that and let it sink in. When using the fastest GPU currently available, you can run it at 1/4th of its max PCIe bandwidth and only lose 10%, on the most sensitive of the content creation workloads they tested!
So, you can go ahead and run two cards at x8 each, or put a single card in an x16 PCIe 4.0 slot without a second thought! It's pretty crazy that you could even go down to x4 and still hardly notice!
IMO, that just underscores what I've been saying all along, which is that consumer PCs still don't really need PCIe 5.0! It's only when you start getting into the bifurcation scenarios where any value in it can be seen.
The article said:
"In conclusion, it seems that available PCIe bandwidth can have a notable impact on performance in professional applications,"
It's always been true that, if you cut PCIe bandwidth by enough, you'd eventually hit a point where you'd notice. However, I think the headline has it backwards. I think the real news is that PCIe 5.0 x16 is still overkill and unjustifiable. If I were writing the piece, I'd probably put something like:
Battery of Content Creation Tests Reveals PCIe 5.0 x16 Is Still Overkill, in 2025 -
bit_user
FunSurfer said:
"Well, if PCIe 4.0 x8 can hinder rtx 5060 ti performance..."
Only when you use settings too high for it to fit everything in on-board memory. Then, you're forcing it to do way more PCIe traffic than it would normally do.
Every generation, people take some low-end card with a small amount of memory and reduced PCIe lanes and repeat variations on this same basic experiment. The moral of the story is that you need to use settings that are in line with what your card can handle.
If you buy an 8 GB card, then you ought to be aware that it can't handle the same options that a 16 GB card would. And relying on PCIe as a crutch still isn't likely to produce a very satisfactory experience, since your 1% lows are going to be trash either way. -
Misgar
jg.millirem said:
"Who's still using PCIe 3?"
PCIe 3? I'm running even older PCIe 2 hardware (LSI SAS2008 HBA controllers) in a number of systems. It's fine for lower-speed hard disks and external SAS LTO4 tape drives. I can back up 800GB to tapes costing a few dollars with PCIe 2 hardware. -
bit_user
jg.millirem said:
"How many people like you are there?"
Well, which side are you most interested in? If you're talking about PCIe cards, then you can probably find a lot of people still using older ones. A lot of SFP+ 10 gigabit Ethernet cards flooded onto the used market that are PCIe 2.0 x8. So, if you had a slot for them, it became a cheap and easy way to add 10 gig to your machine. I think we're currently at the point where we're starting to see some PCIe 3.0 x4 ones.
In terms of motherboards, there are still some people with older boards out there, but also lower-end machines are stuck on PCIe 3.0. For instance, the M.2 slot on a 2-year-old Alder Lake-N board I have is only PCIe 3.0, because it's a low-end chip and Intel cheaped out and didn't give it PCIe 4.0. Even today, the Twin Lake chips they're selling don't have PCIe 4.0.
If you look at a lot of AM4 boards, the chipset lanes of all but the highest-end are PCIe 3.0. AMD is still actively selling AM4 into more cost-sensitive markets.
https://www.pugetsystems.com/labs/articles/amd-x570-vs-b550-vs-a520-chipset-comparison-1969/ -
Misgar
jg.millirem said:
"How many people like you are there?"
Probably more than you imagine, but they do exist if you read posts on other forums, e.g.:
https://forums.servethehome.com/index.php
As @bit_user mentioned, some people with more than one computer need to transfer large amounts of data at 10 gigabits per second or faster (25/40/100Gb/s). This can be achieved with cheap ex-server SFP+ Gen 2 and Gen 3 NICs.
Very few "standard" motherboards have built-in 10Gb Ethernet. Fewer still support SFP+ DAC and OM3 fibre optic without additional PCIe cards. When I'm backing up 600GB+ of data between systems, I don't want to wait ages.
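As a back-of-the-envelope sketch of what link speed means for a bulk transfer (the 90% efficiency factor below is an assumed allowance for protocol overhead, not a measurement):

```python
def transfer_minutes(gigabytes: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Rough wall-clock estimate for a bulk transfer.
    link_gbps is the line rate in gigabits per second; efficiency is an
    assumed factor for protocol overhead."""
    seconds = (gigabytes * 8) / (link_gbps * efficiency)
    return seconds / 60

print(f"600 GB over 1 GbE:  ~{transfer_minutes(600, 1):.0f} min")
print(f"600 GB over 10GbE:  ~{transfer_minutes(600, 10):.0f} min")
```

That's roughly an hour and a half at gigabit speeds versus under ten minutes over 10GbE, which is the whole argument for fitting those cheap ex-server NICs.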
In addition to SFP+ network cards, I also fit LSI SAS HBA controllers in many of my machines for backups to external SAS LTO4 tape drives. Safer (in my opinion) and cheaper than multiple (slow SMR) USB hard disks.
I also use SAS HBA IT-mode controllers with 8-disk RAIDZ2 arrays in TrueNAS Core servers. Some people (data hoarders) need several multi-terabyte arrays with some redundancy. Sure, you could buy a 20TB+ USB3 drive instead, but then you're putting all your eggs in one basket.
I fill two or three full-length PCIe slots on each of my ATX motherboards with PCIe cards. When graphics isn't important and the iGPU is sufficient, I fit an LSI SAS card or SFP+ NIC in the first PCIe x16 slot. In headless remote servers, you don't always need a powerful plug-in GPU.
If your main interest is gaming on a single modern PC, there's no need to worry about older generations of PCIe. Just relax.:)