GPU advice on 5080

Hi, I hope this post is in the correct place… just started my journey using Hugging Face and Comfy so please be patient.

It is evident that my current GPU (an RTX 4060 with 8GB VRAM) is not man enough for the task… even the simplest workflows, like editing an image, can take 5 to 10 minutes. For some reason the price and availability of high-end cards in the UK have gone crazy… Amazon with cards at £5k?

The best I can narrow it down to is a Gigabyte GeForce RTX 5080 WINDFORCE OC SFF 16G Graphics Card - 16GB GDDR7, 256bit, PCI-E 5.0, 2670MHz Core Clock, 3 x DP 2.1a, 1 x HDMI 2.1b, GV-N5080WF3OC-16GD

Could someone advise if this would be a worthwhile upgrade, as Copilot is saying that the 50xx series is not a good choice for this kind of task?

Many thanks in advance


IMO wait a few months for RAM and VRAM prices to go down. Prices went up because the news claimed there were contracts to buy 12 months’ worth of 90% of VRAM and SSD production from major manufacturers. Well, OpenAI did not have a contract; it had a letter of intent, which it has recently broken. In West Virginia, monthly power bills are now higher than people’s mortgages due to a local data center. Cities attract data centers so they can claim they created jobs, but regular people pay the bill.

I think we’ll see more resistance to data centers, letters of intent will not become contracts for a few companies, and VRAM and SSD prices will come down.

Here are some sites with price histories per unit of RAM/VRAM.

  1. RAM Price History | Historical DRAM Prices 1957-Present
  2. PC Part Picker: https://pcpartpicker.com/trends/price/memory/
  3. RAM Scout

True. GPU prices are skyrocketing all over the world, though the extent varies by country. The surge in prices for VRAM and RAM, plus HDDs, SSDs is really painful…:cold_face:

It’s gotten to the point where they’re actually remanufacturing older-generation GPUs (like the 3060 Ti)… Even Moore would be surprised.

That aside, the reason Copilot treats the RTX 5080 like a landmine is that, until about halfway through last year, it actually was one. If Copilot draws on its built-in knowledge or picks up old articles via web searches, it will likely conclude that “Blackwell is still too early for AI applications.”

In reality, there’s absolutely no problem if you’re using ComfyUI today.

The real concern is VRAM capacity. While it’s absolutely true that “16GB is better than 8GB,” that doesn’t necessarily mean “16GB is enough.” If you prioritize running large, modern models smoothly over peak performance with smaller models, there’s a clear incentive to choose a card with more VRAM—even if it’s from a slower, older generation.
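To make the “16GB is better than 8GB, but not necessarily enough” point concrete, here is a back-of-envelope sketch. The helper function and the parameter counts in the loop are my own illustrative assumptions, not vendor figures; real usage is higher once activations, VAE, text encoders, and framework overhead are added.

```python
# Back-of-envelope estimate of VRAM needed for model weights alone.
# Activations, VAE, text encoders and framework overhead add more on top,
# so treat these numbers as a floor, not a budget.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weight_gb(params_billion: float, dtype: str = "fp16") -> float:
    """GiB occupied by just the raw weights of a model of the given size."""
    return params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 1024**3

for params in (2, 6, 12):  # small / mid / large image-model classes
    print(f"{params}B params @ fp16 ≈ {weight_gb(params):.1f} GiB of weights")
```

A ~12B-parameter model in fp16 already needs roughly 22 GiB for its weights, which is why 16GB cards end up relying on offloading or quantization for the largest current models.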

Also, while it barely runs on Windows 10 and is still advanced-user territory on Windows 11, in a Linux environment, AMD GPUs become a realistic option for ComfyUI. If you search for terms like “Wan 2.2 AMD ComfyUI,” you’ll find plenty of how-to articles.

However, for the time being, I can state with certainty that if you’re using open-source AI models, an NVIDIA GPU is definitely the easier choice.


For your use, VRAM, Windows software maturity, and price-to-performance matter more than factory clock speed. ComfyUI’s own docs show that once you are pushed into lowvram or novram behavior, speed and usability drop fast, and your RTX 4060 is exactly the kind of card that gets pushed there: 8GB GDDR6, 128-bit, 3,072 CUDA cores. By contrast, the RTX 5080 class gives you 16GB GDDR7, 256-bit, and far more compute, which is why this upgrade feels so different in practice. (ComfyUI)
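To make the lowvram/novram point concrete, here is a toy sketch of the trade-off. The `--lowvram` and `--novram` flag names are real ComfyUI launch options, but the GB thresholds below are my own rough assumptions for illustration, not official numbers:

```python
# Toy illustration of ComfyUI's memory tiers. The flag names are real
# launch options; the GB thresholds are rough assumptions for illustration.
def suggested_flag(free_vram_gb: float) -> str:
    if free_vram_gb >= 12:
        return ""            # default smart memory management usually suffices
    if free_vram_gb >= 6:
        return "--lowvram"   # weights get paged to system RAM; noticeably slower
    return "--novram"        # nearly everything offloaded; usability drops hard

print(suggested_flag(8))   # a 4060-class card often lands in the lowvram tier
```

The point is that an 8GB card sits permanently in the degraded tiers, while a 16GB card spends most of its time in the default path.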

Best overall

The RTX 5080 gets called “bad” mostly for two reasons. First, early Blackwell support was rough, so a lot of launch-era advice got stuck in people’s heads. PyTorch only added official Blackwell support in 2.7 with CUDA 12.8 wheels in April 2025. Current ComfyUI guidance is now much more normal: the Windows portable Nvidia build uses Python 3.13 + CUDA 13.0, which tells you the platform is no longer being treated as a fragile special case. So on a clean 2026 install, the old “50-series is not good for this” claim is largely outdated. (PyTorch)
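If you want to confirm a given install actually supports Blackwell, a quick check is possible. The helper function is mine, but `torch.cuda.is_available` and `torch.cuda.get_device_capability` are standard PyTorch APIs, and consumer Blackwell cards report compute capability 12.0 (sm_120):

```python
# Quick sanity check: does this PyTorch build see a Blackwell (sm_120) GPU?
# The helper is illustrative; the torch calls themselves are standard APIs.
def blackwell_status() -> str:
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if not torch.cuda.is_available():
        return "No usable CUDA device (CPU-only wheel or driver issue)"
    major, minor = torch.cuda.get_device_capability(0)
    if (major, minor) >= (12, 0):  # consumer Blackwell, e.g. RTX 5080
        return f"Blackwell-ready (sm_{major}{minor})"
    return f"Older generation detected (sm_{major}{minor})"

print(blackwell_status())
```

On a clean current install with CUDA-12.8-or-newer wheels, a 5080 should come back Blackwell-ready; the “CPU-only wheel” case is the usual culprit when it doesn’t.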

For your purpose, the 5080 is good. Your exact Gigabyte card is 16GB GDDR7, 256-bit, 10,752 CUDA cores, 2670 MHz core clock, and Gigabyte specifies 304 × 126 × 50 mm, 850W recommended PSU, and a single 16-pin power connector. That is a major jump from your 4060, and it is exactly the kind of jump that makes image generation and image editing stop feeling like a constant memory-management exercise. (GIGABYTE)

It is not perfect. The 5080 is still a 16GB card, so it is not the “I never think about VRAM again” option. Benchmark-wise, it is clearly faster than a 5070 Ti in AI image generation, but it still trails a 4090 in relevant Stable Diffusion tests. In StorageReview’s Procyon results, the 5080 scored 4,650 in Stable Diffusion 1.5 FP16, while the 4090 scored 5,260. That is why the 5080 is best described as strong and clean, not ultimate. (StorageReview.com)

Best value

This is the option I would call the smartest spend. The 5070 Ti still gives you the part that matters most: 16GB GDDR7 on a 256-bit bus. Gigabyte’s own spec page shows 8,960 CUDA cores and a 2497 MHz factory clock for this exact Windforce card. In AI image benchmarks, it is slower than the 5080, but not weak. StorageReview measured Stable Diffusion 1.5 FP16 at 1.664 seconds per image on the 5070 Ti versus 1.344 seconds on the 5080. So the 5080 is faster, but the 5070 Ti already gets you out of the painful 8GB zone. Current product listings show the 5070 Ti class is materially cheaper than the 5080 class, which is why I call it the value winner. (GIGABYTE)

Best alternative if you care more about headroom than Windows simplicity

The appeal here is obvious: 24GB VRAM and a 384-bit bus. That gives you much more breathing room for heavier models and future experimentation. AMD’s official RX 7900 XTX page confirms the card line, and board specs from major vendors show the expected 24GB GDDR6 and 384-bit memory path. The catch is software maturity on Windows. AMD’s ROCm Windows compatibility matrix does include the RX 7900 XTX, but ComfyUI’s own system requirements still describe AMD Windows/Linux support as experimental for the relevant Radeon generations. So for a beginner on Windows, I would treat this as a specialist alternative, not the default safe choice. On Linux, it becomes much more attractive. (AMD)

| Attribute | Gigabyte GeForce RTX 5080 WINDFORCE OC SFF 16GB | Gigabyte GeForce RTX 5070 Ti WINDFORCE OC SFF 16GB | Sapphire Radeon RX 7900 XTX Pulse 24GB |
| --- | --- | --- | --- |
| VRAM / bus | 16GB GDDR7, 256-bit (GIGABYTE) | 16GB GDDR7, 256-bit (GIGABYTE) | 24GB GDDR6, 384-bit (AMD) |
| Current Windows support picture | Mainstream Nvidia path in current PyTorch + ComfyUI (PyTorch) | Mainstream Nvidia path in current PyTorch + ComfyUI (PyTorch) | Supported in ROCm Windows matrix, but AMD path is still experimental in ComfyUI (ROCm Documentation) |
| AI-image speed signal | Faster than 5070 Ti, but still behind 4090 in cited SD tests (StorageReview.com) | Respectable, but slower than 5080 in SD 1.5 FP16 (StorageReview.com) | Main attraction is memory headroom rather than the smoothest Windows benchmark story in the cited sources (AMD) |
| Best fit | Best overall for a beginner Windows setup | Best value | Best if 24GB matters more than convenience |
| Main downside | Same 16GB ceiling as cheaper cards | Slower than 5080 | More Windows-side friction than Nvidia |

My answer for your exact case is this: the 5080 is not a bad choice in 2026. It is actually a good one. The old bad reputation mostly came from early Blackwell support friction and from people seeing inflated prices and expecting 24GB-class flexibility from a 16GB card. If you want the best overall card for Windows + ComfyUI + Hugging Face with the least hassle, the 5080 is the right answer. If you want the best value, the 5070 Ti is the sharper buy. If you know you will care more about future VRAM headroom than Windows smoothness, then 24GB cards become interesting, but they are not the safest first move for a newcomer. (PyTorch)

One last rule of thumb, because it really matters here: for your kind of work, the order is VRAM first, then software support/maturity, then memory bus and GPU tier, and clock speed last. The difference between 8GB and 16GB changes your daily experience far more than a small clock uplift ever will. (ComfyUI)

very comprehensive reply, many thanks indeed… given I will probably not want to be buying another card after this in the near future, it sounds like a 24GB is the way to go.. not AMD… would that be a fair assessment? Then it’s about cost… maybe, as suggested, wait for prices to drop


Yeah. Good choice. If you go with 24GB of VRAM, you’ll have significantly more headroom than with 16GB. As you know, the baseline is around 8GB, so from a headroom perspective, it’s effectively double… which is especially advantageous when doing complex tasks like ControlNet. It’s also beneficial for high-resolution generation. That said, well, the price is a bit of an issue…

As for the price, it ultimately comes down to the market speculation mentioned above and the mismatch between semiconductor manufacturers’ production plans and actual demand. Even if manufacturers set production plans, they can’t suddenly change what they’re producing, and there’s a long time lag between manufacturing and shipping. That has always led to wild price swings… (There were even times when DRAM could be bought for next to nothing!)

So, I think prices will eventually come down, but personally, I’m expecting it to take several months or more… unless there’s some global-scale disruption (like new market entrants or a recession).

I’m not good at market forecasting, so I’ll avoid making specific predictions…

I’ve been in tech for the majority of my adult life, and I am old now. Never, EVER buy Gigabyte hardware… it is junk. When it comes to GPUs, stick to ASUS first or MSI as a second choice. Never waste cash on any other company’s cards, or chances are you’ll never get your money’s worth out of it. ASUS TUF is generally the best option. Just make sure your case has optimized airflow.


By the way, since my PC case is a bit tight (in terms of GPU slot length), I’m an MSI fan when it comes to GeForce cards.

thanks for the advice… it’s a big investment so trying to make the right choice isn’t easy… some reports say to stay away from AMD… do you agree?


First off, it’s not that AMD cards have performance issues. The problem lies in driver and library support, which varies significantly from one OS to another. While things are improving day by day, almost no one really knows where we stand right now… That’s why so many people say, “I wouldn’t actively recommend it.”

On Windows 10, it’s quite a gamble. On Windows 11, if you’re just using ComfyUI, you’ll most likely be fine. On Linux, ROCm support has matured to the point where it’s almost on par with CUDA… or so I’ve heard. I’m a Windows user, so this is secondhand information. As for Mac, I have no idea…

Many people consider AMD as an option for the same reason you do. This is especially true for those using ComfyUI with open-source video generation models (like Wan, Hunyuan, or LTX). That’s because the difference between 16GB and 24GB is absolutely critical. So, there should be plenty of how-to articles from last year and this year. If video generation models work, image generation models usually run even more smoothly (excluding minor plugins).

Therefore, I think the best approach is to start by looking for those kinds of articles…