True. GPU prices are skyrocketing all over the world, though the extent varies by country. The surge in prices for VRAM and RAM, as well as HDDs and SSDs, is really painful…
It’s gotten to the point where they’re actually remanufacturing older-generation GPUs (like the 3060 Ti)… Even Moore would be surprised.
That aside, the reason Copilot treats the RTX 5080 like a landmine is that, until about halfway through last year, it actually was one. If Copilot draws on its built-in knowledge or picks up old articles via web searches, it will likely conclude that “Blackwell is still too early for AI applications.”
In reality, there’s absolutely no problem if you’re using ComfyUI today.
The real concern is VRAM capacity. While it’s absolutely true that “16GB is better than 8GB,” that doesn’t necessarily mean “16GB is enough.” If you prioritize running large, modern models smoothly over peak performance with smaller models, there’s a clear incentive to choose a card with more VRAM—even if it’s from a slower, older generation.
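To make the VRAM argument concrete, here is a rough back-of-the-envelope sketch of how much memory a model's weights alone demand. The function, the overhead factor, and the 12B-parameter example are all illustrative assumptions, not measurements of any specific model.

```python
# Back-of-the-envelope VRAM estimate for a model's weights. All numbers
# here are illustrative assumptions, not measurements of a real model.
def estimated_vram_gb(params_billions: float,
                      bytes_per_param: float = 2.0,   # FP16
                      overhead_factor: float = 1.5) -> float:
    # overhead_factor is a crude stand-in for activations, VAE work,
    # and the CUDA context; real overhead varies a lot by workflow.
    weights_gb = params_billions * 1e9 * bytes_per_param / 1024**3
    return weights_gb * overhead_factor

# A hypothetical 12B-parameter model: FP16 weights alone are ~22.4 GB,
# already past any 16GB card before overhead is even counted.
print(round(estimated_vram_gb(12, overhead_factor=1.0), 1))  # 22.4
# The same model quantized to 8-bit fits in roughly half the space.
print(round(estimated_vram_gb(12, bytes_per_param=1.0, overhead_factor=1.0), 1))  # 11.2
```

The point of the arithmetic is simply that parameter count and precision dominate: a slower card with more VRAM can load a model that a faster 16GB card cannot.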
Also, while support barely works on Windows 10 and remains advanced-user territory on Windows 11, in a Linux environment AMD GPUs become a realistic option for ComfyUI. If you search for terms like “Wan 2.2 AMD ComfyUI,” you’ll find plenty of how-to articles.
However, for the time being, I can state with certainty that if you’re using open-source AI models, an NVIDIA GPU is definitely the easier choice.
For your use, VRAM, Windows software maturity, and price-to-performance matter more than factory clock speed. ComfyUI’s own docs show that once you are pushed into lowvram or novram behavior, speed and usability drop fast, and your RTX 4060 is exactly the kind of card that gets pushed there: 8GB GDDR6, 128-bit, 3,072 CUDA cores. By contrast, the RTX 5080 class gives you 16GB GDDR7, 256-bit, and far more compute, which is why this upgrade feels so different in practice. (ComfyUI)
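As a rough illustration of how that fallback plays out, here is a sketch of the kind of memory-mode choice ComfyUI makes automatically at runtime. The thresholds and the `suggest_vram_mode` helper are my own illustrative assumptions, not ComfyUI's actual internal cutoffs, though `--lowvram` and `--novram` are real ComfyUI launch flags.

```python
# Illustrative sketch of the VRAM-based fallback described above.
# ComfyUI picks a memory mode automatically; the cutoffs below are
# assumptions for illustration, not ComfyUI's real internal logic.
def suggest_vram_mode(total_vram_gb: float) -> str:
    if total_vram_gb >= 16:
        return "normal"      # e.g. RTX 5080 / 5070 Ti: full-speed path
    if total_vram_gb >= 8:
        return "--lowvram"   # e.g. RTX 4060: weights shuffled to system RAM
    return "--novram"        # last resort: everything offloaded, very slow

print(suggest_vram_mode(8))   # an 8GB card like the RTX 4060
print(suggest_vram_mode(16))  # a 16GB card like the RTX 5080
```

The practical takeaway: the 4060 sits right at the boundary where offloading kicks in, and the 16GB cards sit comfortably above it for typical image workflows.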
Best overall
The RTX 5080 gets called “bad” mostly for two reasons. First, early Blackwell support was rough, so a lot of launch-era advice got stuck in people’s heads. PyTorch only added official Blackwell support in 2.7 with CUDA 12.8 wheels in April 2025. Current ComfyUI guidance is now much more normal: the Windows portable Nvidia build uses Python 3.13 + CUDA 13.0, which tells you the platform is no longer being treated as a fragile special case. So on a clean 2026 install, the old “50-series is not good for this” claim is largely outdated. (PyTorch)
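If you want to sanity-check a given stack against that timeline, a minimal version-gate sketch looks like this. The `supports_blackwell` helper is hypothetical and deliberately simplified: real compatibility depends on the exact wheel you installed, not just version numbers.

```python
# Simplified check of whether a torch/CUDA pairing predates official
# Blackwell (RTX 50-series) support. The cutoffs follow the text above:
# PyTorch 2.7 with CUDA 12.8 wheels was the first official pairing.
# This helper is illustrative; real support depends on the exact build.
def supports_blackwell(torch_version: str, cuda_version: str) -> bool:
    t_major, t_minor = (int(x) for x in torch_version.split(".")[:2])
    c_major, c_minor = (int(x) for x in cuda_version.split(".")[:2])
    return (t_major, t_minor) >= (2, 7) and (c_major, c_minor) >= (12, 8)

print(supports_blackwell("2.6.0", "12.4"))  # False: launch-era stack
print(supports_blackwell("2.7.0", "12.8"))  # True: first official support
```

This is exactly why launch-era advice aged badly: anyone who tried a 5080 on a pre-2.7 stack hit real problems that a clean current install simply does not have.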
For your purpose, the 5080 is good. Your exact Gigabyte card is 16GB GDDR7, 256-bit, 10,752 CUDA cores, 2670 MHz core clock, and Gigabyte specifies 304 × 126 × 50 mm, 850W recommended PSU, and a single 16-pin power connector. That is a major jump from your 4060, and it is exactly the kind of jump that makes image generation and image editing stop feeling like a constant memory-management exercise. (GIGABYTE)
It is not perfect. The 5080 is still a 16GB card, so it is not the “I never think about VRAM again” option. Benchmark-wise, it is clearly faster than a 5070 Ti in AI image generation, but it still trails a 4090 in relevant Stable Diffusion tests. In StorageReview’s Procyon results, the 5080 scored 4,650 in Stable Diffusion 1.5 FP16, while the 4090 scored 5,260. That is why the 5080 is best described as strong and clean, not ultimate. (StorageReview.com)
Best value
This is the option I would call the smartest spend. The 5070 Ti still gives you the part that matters most: 16GB GDDR7 on a 256-bit bus. Gigabyte’s own spec page shows 8,960 CUDA cores and a 2497 MHz factory clock for this exact Windforce card. In AI image benchmarks, it is slower than the 5080, but not weak. StorageReview measured Stable Diffusion 1.5 FP16 at 1.664 seconds per image on the 5070 Ti versus 1.344 seconds on the 5080. So the 5080 is faster, but the 5070 Ti already gets you out of the painful 8GB zone. Current product results in this chat show the 5070 Ti class is materially cheaper than the 5080 class, which is why I call it the value winner. (GIGABYTE)
Best alternative if you care more about headroom than Windows simplicity
The appeal here is obvious: 24GB VRAM and a 384-bit bus. That gives you much more breathing room for heavier models and future experimentation. AMD’s official RX 7900 XTX page confirms the card line, and board specs from major vendors show the expected 24GB GDDR6 and 384-bit memory path. The catch is software maturity on Windows. AMD’s ROCm Windows compatibility matrix does include the RX 7900 XTX, but ComfyUI’s own system requirements still describe AMD Windows/Linux support as experimental for the relevant Radeon generations. So for a beginner on Windows, I would treat this as a specialist alternative, not the default safe choice. On Linux, it becomes much more attractive. (AMD)
| Attribute | Gigabyte GeForce RTX 5080 WINDFORCE OC SFF 16GB | Gigabyte GeForce RTX 5070 Ti WINDFORCE OC SFF 16GB | Sapphire Radeon RX 7900 XTX Pulse 24GB |
| --- | --- | --- | --- |
| VRAM / bus | 16GB GDDR7, 256-bit (GIGABYTE) | 16GB GDDR7, 256-bit (GIGABYTE) | 24GB GDDR6, 384-bit (AMD) |
| Current Windows support picture | Mainstream Nvidia path in current PyTorch + ComfyUI (PyTorch) | Mainstream Nvidia path in current PyTorch + ComfyUI (PyTorch) | Supported in ROCm Windows matrix, but AMD path is still experimental in ComfyUI (ROCm Documentation) |
| AI-image speed signal | Faster than 5070 Ti, but still behind 4090 in cited SD tests (StorageReview.com) | Respectable, but slower than 5080 in SD 1.5 FP16 (StorageReview.com) | Main attraction is memory headroom rather than the smoothest Windows benchmark story in the cited sources (AMD) |
| Best fit | Best overall for a beginner Windows setup | Best value | Best if 24GB matters more than convenience |
| Main downside | Same 16GB ceiling as cheaper cards | Slower than 5080 | More Windows-side friction than Nvidia |
My answer for your exact case is this: the 5080 is not a bad choice in 2026. It is actually a good one. The old bad reputation mostly came from early Blackwell support friction and from people seeing inflated prices and expecting 24GB-class flexibility from a 16GB card. If you want the best overall card for Windows + ComfyUI + Hugging Face with the least hassle, the 5080 is the right answer. If you want the best value, the 5070 Ti is the sharper buy. If you know you will care more about future VRAM headroom than Windows smoothness, then 24GB cards become interesting, but they are not the safest first move for a newcomer. (PyTorch)
One last rule of thumb, because it really matters here: for your kind of work, the order is VRAM first, then software support/maturity, then memory bus and GPU tier, and clock speed last. The difference between 8GB and 16GB changes your daily experience far more than a small clock uplift ever will. (ComfyUI)
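That priority order can be written down literally as a lexicographic sort key. The card data below echoes the specs cited in this comparison, but the XTX clock is approximate (the cited sources here do not list it) and the "sw" maturity score is a subjective stand-in, so treat this as a sketch of the reasoning, not a benchmark.

```python
# The comparison above, sorted by the stated priority order:
# VRAM first, then software maturity, then bus width, then clock.
# "sw" is a subjective 0-2 maturity score (Nvidia mainstream = 2,
# AMD-on-Windows experimental = 1); the XTX clock is approximate.
cards = [
    {"name": "RTX 5080",    "vram": 16, "sw": 2, "bus": 256, "clock": 2670},
    {"name": "RTX 5070 Ti", "vram": 16, "sw": 2, "bus": 256, "clock": 2497},
    {"name": "RX 7900 XTX", "vram": 24, "sw": 1, "bus": 384, "clock": 2500},
]
ranked = sorted(cards,
                key=lambda c: (c["vram"], c["sw"], c["bus"], c["clock"]),
                reverse=True)
for c in ranked:
    print(c["name"])
```

Note that a pure VRAM-first ordering puts the 24GB card on top, which matches the "best if 24GB matters more than convenience" verdict; the moment Windows smoothness becomes a hard requirement, the software score stops being a tiebreaker and acts as a filter instead, which is how the 5080 ends up as the best overall pick.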