Advice Wanted - Self Hosting Industrial GPUs

I just bought a "new" homelab server and am considering adding some used/refurbished NVIDIA Tesla K80s. They have 24 GB of VRAM and a lot of compute for very little money if you buy them used.

The issue is that these cards run super hot and require extra cooling setups. I was able to find this fan adapter kit on eBay. But I still worry that if I pop one or two of these bad boys in my server, the fan won't be enough to overcome the raw heat put off by the K80.

Have any of you run this kind of card in a home lab setting? What kind of temps do you get when running models? Would a fan like this actually be enough to cool the thing? I appreciate any insight you guys might have!

toxuin,

I've got a K80 and it's… underwhelming.

  • Its CUDA support is stuck at a very old version (11.4, the newest the 470 driver exposes). Nothing modern works with it out of the box; you have to compile everything from scratch.
  • The last driver that supports it is nvidia-driver-470, which isn't even included in Ubuntu 22.04 anymore…
  • Under Debian, you can't (I couldn't…) install both cuda-drivers-470 and nvidia-driver at version 470.
  • It doesn't mix well with other modern cards like a 3090.
  • It idles at around 70 W, and when in use it makes my R730 sound like an industrial vacuum cleaner.
  • It's not really even a 24 GB card. It's two 12 GB GPUs wearing a trench coat (the sketch below shows what the system actually enumerates).

It does run 30B models tho. And it is cheap.
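
To make the "trench coat" point concrete, here's a minimal Python sketch (assuming the 470 driver is installed so nvidia-smi is on the PATH) that prints what the system actually enumerates. On a K80 you get two ~12 GB devices, not one 24 GB device:

```python
# List every GPU the driver reports. A single K80 board shows up as
# TWO 12 GB devices, because it really is two GPUs on one card.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    print(line)  # e.g. "0, Tesla K80, 12206 MiB" / "1, Tesla K80, 12206 MiB"
```

That also means "a 30B model on one K80" is really a model split across two 12 GB GPUs talking over the card's internal PCIe switch.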

plotting_homelab,

So I'm looking to get a K80, P40, or 3060. Regarding future CUDA support: is it possible to use an old GPU without the current CUDA version even if a program requires it, or is it already unusable in some programs today? Compiling from scratch isn't a problem, and drivers are something I can probably handle too, but are there more real problems for future-proofing?
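
For reference, here's the kind of check I mean: whether a given framework build actually ships kernels for a card's compute capability. A rough Python sketch using PyTorch (assuming it's installed; the K80 is compute 3.7, the P40 is 6.1, the 3060 is 8.6):

```python
# Check whether this PyTorch build was compiled for the installed GPUs.
# If a card's sm_XX arch is missing from the build's arch list, prebuilt
# wheels won't run kernels on it and you're back to compiling from source.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No usable CUDA driver/device found")

built_for = torch.cuda.get_arch_list()  # e.g. ['sm_60', 'sm_70', 'sm_86']
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    arch = f"sm_{major}{minor}"
    status = "supported" if arch in built_for else "NOT in this build"
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} ({arch}) -> {status}")
```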

plotting_homelab,

Looking at your "server", it seems like a workstation. I have no experience with K80s, but from what I know, all server GPUs are designed as passive cards meant to be cooled by some loud FingerRemover5000s. So I think if you upgrade the fans it should be fine, since it's only 300 W (rough airflow math below). If cooling really is a problem, then maybe some shrouds might help, but I don't think a single K80 is difficult to cool; in the data center they probably ran 4-6 of them in one 4U chassis.
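
For a rough sanity check on that, the airflow a card needs scales with its power draw divided by the air temperature rise you allow. A back-of-envelope Python sketch (the air constants are textbook values; the 15 °C rise is my assumption):

```python
# Back-of-envelope: airflow needed to carry heat away with air.
# delta_T = P / (rho * cp * flow); for air, rho ~ 1.2 kg/m^3, cp ~ 1005 J/(kg*K).
RHO_CP = 1.2 * 1005  # J per (m^3 * K)

def cfm_needed(power_w: float, delta_t_c: float) -> float:
    """CFM of airflow to remove power_w with a delta_t_c air temperature rise."""
    m3_per_s = power_w / (RHO_CP * delta_t_c)
    return m3_per_s * 2118.88  # 1 m^3/s is about 2118.88 CFM

# 300 W card, exhaust air allowed to run 15 C hotter than intake:
print(f"{cfm_needed(300, 15):.0f} CFM")  # ~35 CFM
```

~35 CFM isn't much in absolute terms; the catch is pushing it through the K80's dense passive heatsink, which takes static pressure. That's why those eBay adapter kits use small screamer fans.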

boblemmy,

No matter what grade of heat-dissipation hardware the GPU itself uses, the key is keeping the room temperature low enough.

So an additional expense for you is the power to cool the air. Put your servers in a room that is small and well sealed, so the air conditioner consumes less power.

An additional ordinary fan blowing air at the servers can also significantly reduce the temperature.

wagesj45,

Thanks for the input. I live in a cold state and all the machines are in the basement, so even in summer it's pretty cool in there. This gives me hope!

plotting_homelab,

Just wondering, but isn't air conditioning kind of overkill? Wouldn't a fan venting to the outside be enough? I mean, yeah, 20 °C (68 °F) air is going to cool better than 30 °C (86 °F) air, but if that means 800 W of power consumption to cool a 400 W server, that's a big power bill here in the EU.
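
To put rough numbers on that, here's a Python sketch. The tariff, fan wattage, and the A/C's coefficient of performance (COP) are all assumptions, and note that an A/C rated at 800 W input only draws that while the compressor runs:

```python
# Rough monthly cost: exhaust fans vs. an A/C removing the same heat.
HOURS_PER_MONTH = 730
EUR_PER_KWH = 0.30           # assumed EU-ish tariff

server_w = 400
fan_w = 30                   # a couple of exhaust fans venting outside
ac_cop = 3.0                 # A/C moves ~3 W of heat per 1 W of electricity
ac_w = server_w / ac_cop     # ~133 W average draw to remove 400 W of heat

for label, watts in [("fans", fan_w), ("A/C", ac_w)]:
    eur_month = watts / 1000 * HOURS_PER_MONTH * EUR_PER_KWH
    print(f"{label}: ~{eur_month:.0f} EUR/month")
# fans: ~7 EUR/month, A/C: ~29 EUR/month
```

So even with an A/C it's nowhere near watt-for-watt, but the fan is still several times cheaper as long as the outside air is cool enough.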
