New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
4 x 4090 GPUs in a 4U Rackmount (A100 alternative solution)
CloudNinjas
Member, Patron Provider
in General
Looking for a good GPU solution for AI? We have designed a new box with 4 x 4090 GPUs in a 4U Rackmount. Good alternative to the A100 and at a much lower price point. Contact [email protected] if interested in a quote. Take care.
Comments
Take care with wooden floors.
Any chance we can ogle the build before talking price?
Trying to understand the advantage of this build vs. say, separate Ryzen 7000 systems with a single 4090 each as an end user (not hosting provider), since the GPUs can't share VRAM anyway (no NVLink, correct me if I'm wrong). What kind of CPU and disk backplane?
I guess the Ryzen systems aren't rack-mountable if you want to use a GPU. But with a rental that would be the provider's problem.
What kind of power should I request from my colo provider if I want to deploy your box? Any special temperature requirement?
Never managed the lifecycle of gpu systems before, mostly in and out of clouds. Trying to calculate how much I can save with ownership / committed use. Feel free to change my mind
NVLink just provides a faster interconnect; it doesn't pool VRAM. Running a model that doesn't fit on a single GPU still requires a parallelization strategy, e.g. something like DeepSpeed. https://huggingface.co/docs/transformers/perf_train_gpu_many
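To make that concrete, here is a minimal sketch of a DeepSpeed ZeRO stage 3 config, which shards model parameters, gradients, and optimizer state across the GPUs instead of relying on pooled VRAM. The batch size and offload settings are placeholder assumptions, not a tuned recommendation:

```json
{
  "train_batch_size": 32,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_param": { "device": "cpu" }
  }
}
```

With a config like this, each of the four 4090s holds only a shard of the model, which is how people fit larger models on cards without NVLink.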
appreciate the offer and effort but maybe more suitable for HET (highendtalk)
For GPU systems, 4 x 4090 is sorta lowend hardware. HET would be 8 x A100 or 8 x H100.