Will there be AI accelerator cards in the future of gaming PCs?

Hi there, I want to share some thoughts and want to hear your opinions on it.

Recently, AI developments have been booming in game development as well. For example, NVIDIA ACE would make it possible for NPCs to run an AI model and converse with players. There are also efforts toward an AI-based alternative to ray tracing, where lighting, shadows, and reflections are generated by a neural network, which would need less performance while achieving similar visual results.

So raster performance already seems to be at a pretty decent level, and graphics card manufacturers are putting ever more AI processing units on their cards.

In my eyes, the next logical step would be to separate the graphics card's work, i.e. rasterisation and ray tracing, from the AI workload. The result might be a new kind of PCIe card, an AI accelerator, featuring a processor optimized for massively parallel processing and high data throughput.

This would allow developers to run more advanced AI models on the consumer's PC. For users without such a card, they could e.g. offer a cloud-based subscription as a fallback.

So what are your thoughts on this?

averyminya,

Look into what Mystic AI was doing. It's effectively what you were talking about but based in reality :)

nanoobot,

I think a big current issue, on NVIDIA at least, is that CUDA/drivers do not elegantly allow multitasking. Games/video can tank the performance of AI workloads, and AI tanks the performance of games/video, even if the card has the capacity to run both at the same time. I expect the solution is to solve this at the driver/OS/CUDA level and then just keep improving card specs each generation.

We may also see a greater share of GPU silicon go to AI-oriented hardware over time, depending on the application.

colournoun,

Unless the AI processing is much more specialized than graphics, I think manufacturers would put that effort into making more powerful GPUs that can also be used for AI tasks.

TheTrueLinuxDev,

They would try to alleviate the load on the GPU by adding an AI accelerator chip like a Tensor Core, but it'll get bottlenecked by limited VRAM, since neural net models require steep amounts of memory. It's more productive to have something like an NPU that runs either from system RAM or from its own memory chips, offering a higher capacity to run such a neural net and avoiding the roundtrip data copying between GPU and CPU.
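The roundtrip cost described here can be made concrete with a toy byte-accounting sketch. Everything below is illustrative, not measured: the function names and the 7B-parameter/fp16 figures are my own assumptions, and real transfer costs depend on batching, quantization, and caching.

```python
# Toy accounting of bytes crossing the PCIe bus per inference pass.
# Assumption (hypothetical): a 7B-parameter model at fp16, with an
# activation working set roughly 1% the size of the weights.

def bytes_moved_discrete_gpu(weight_bytes, activation_bytes, fits_in_vram):
    """Discrete GPU: if the model fits in VRAM, only activations cross
    PCIe (inputs over, outputs back); otherwise the weights must also be
    streamed over the bus on every pass."""
    if fits_in_vram:
        return 2 * activation_bytes
    return weight_bytes + 2 * activation_bytes

def bytes_moved_npu_shared_ram(activation_bytes):
    """NPU addressing system RAM directly: weights never cross a bus,
    only the activations are touched."""
    return 2 * activation_bytes

weights = 7_000_000_000 * 2      # 7B params * 2 bytes (fp16) = 14 GB
acts = weights // 100            # rough activation working set

print(bytes_moved_discrete_gpu(weights, acts, fits_in_vram=False))
print(bytes_moved_npu_shared_ram(acts))
```

The point of the sketch: once the model no longer fits in VRAM, the weight traffic dominates by two orders of magnitude, which is exactly the roundtrip a RAM-backed NPU avoids.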

TheTrueLinuxDev,

Absolutely, I would suggest looking into two separate approaches that focus solely on AI acceleration:

Analog Neural Accelerator

and

Neural Processing Unit

Port8080,

Two very interesting articles. Thank you for that!

Especially the analog processor is a game changer, with the computation happening directly in memory. Analog computers in general are a very interesting subject!
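The "computation directly in memory" idea can be sketched in a few lines. In an analog crossbar array, weights are stored as conductances and inputs are applied as voltages; Ohm's law and Kirchhoff's current law then compute a matrix-vector product in place, with no separate fetch of the weights. A minimal numeric model (the function name and values are illustrative, not any real device's API):

```python
# Toy model of an analog crossbar array: weights stored as conductances G
# (siemens), inputs applied as voltages V (volts). Each cell contributes
# current V_i * G_ij by Ohm's law, and the column wire sums those currents
# by Kirchhoff's current law -- so reading the column currents is reading
# the result of a matrix-vector multiply performed inside the memory array.

def crossbar_mvm(voltages, conductances):
    """Return per-column output currents I_j = sum_i V_i * G_ij."""
    cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(cols)]

G = [[1e-3, 2e-3],
     [3e-3, 4e-3],
     [5e-3, 6e-3]]          # 3x2 conductance array (the stored "weights")
V = [0.1, 0.2, 0.3]         # input voltages

print(crossbar_mvm(V, G))   # output currents in amperes
```

Since the multiply-accumulate happens in the physics of the array, the usual memory-bandwidth bottleneck for the weights simply disappears, which is why this is such an interesting direction for neural net inference.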
