Correct. That’s basically the entire point of virtualizing in the enterprise, along with all the other benefits (uptime, ease of backups, disk/CPU/memory management, etc.).
Though the virtual desktops are not going to be used for gaming - and they certainly couldn’t compete with the graphics of the dedicated card mentioned in the story.
It’s not about gaming - it’s about AI. It won’t be long before editing software incorporates text-to-image and other machine learning processes (think: look at this footage and edit something for me). These systems will need a GPU at least this good.
I wonder how fast this renders Stable Diffusion images… also, related to all the talk about how big this is… it seems most of the computer cases I’ve built in the past 10 years or so would be empty except for two things - the gigantic heat sink + fans, and the ginormous GPU.
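If anyone wants to answer the Stable Diffusion question empirically, a rough timing sketch using the Hugging Face diffusers library would look something like the below. The model name and step count are just common public defaults, nothing specific to this card, and you’d need torch, diffusers, and enough VRAM installed:

```python
import time

import torch
from diffusers import StableDiffusionPipeline

# Load the public SD 1.5 checkpoint in fp16 and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# Warm-up pass so model loading and kernel setup don't skew the timing.
pipe(prompt, num_inference_steps=30)

start = time.perf_counter()
image = pipe(prompt, num_inference_steps=30).images[0]
elapsed = time.perf_counter() - start

print(f"30-step 512x512 image in {elapsed:.2f}s")
image.save("benchmark.png")
```

Running that on the new card versus an older GPU would give a concrete seconds-per-image comparison.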
Not in the market in any case.
I get the impression that, while they aren’t giving up on PCIe and can’t realistically do so, Nvidia’s attention is most closely focused on the needs of their own SXM socket (used in Nvidia’s own DGX systems, or in partner systems built around Nvidia-provided HGX boards), which is specced for up to 700W per socket. There’s some secondary attention on the OCP OAM format, if hyperscale customers insist; that tops out at around 450W per socket air-cooled, but its electrical specifications allow up to 700W per socket with the recommendation that liquid cooling be used.
This won’t stop them from cutting parts down to fit niches all the way down to just-barely-better-than-Intel laptop graphics; but “does that look dubiously sensible in a PCIe slot?” is no longer really a relevant question at the top end.