GPU server for deep learning

We received a consultation from a customer about a GPU server for deep learning. The request was for a configuration with two NVIDIA A100s on a budget of around 100 million yen.


Since the NVIDIA A100 has no display output, the proposed configuration adds an entry-class video card for screen output. You can remove this card and use the onboard graphics instead, but drawing performance will then be minimal.


Additionally, this configuration supports installation of up to 4 NVIDIA A100 units. However, if the main memory capacity is smaller than the total video memory capacity, problems can occur when using CUDA, so we recommend increasing main memory as well whenever you add GPUs.
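As a rough illustration of this guideline, here is a minimal sketch. The 80GB-per-GPU figure comes from the proposed A100 80GB cards; the helper name and the fixed base figures are our own assumptions, not part of the proposal.

```python
VRAM_PER_GPU_GB = 80  # NVIDIA A100 80GB, as in the proposed configuration


def ram_meets_vram(host_ram_gb: int, num_gpus: int) -> bool:
    """Guideline check: main memory should be at least the combined VRAM."""
    return host_ram_gb >= num_gpus * VRAM_PER_GPU_GB


# Proposed config: 256GB RAM vs 2 x 80GB = 160GB VRAM -> sufficient.
print(ram_meets_vram(256, 2))  # True
# Expanding to 4 GPUs (320GB VRAM) would call for a memory upgrade too.
print(ram_meets_vram(256, 4))  # False
```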


Finally, with two NVIDIA A100s the power consumption can still be covered by a 100V supply, but if you plan to add more GPUs, please operate the server in a 200V environment.
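A back-of-the-envelope power budget shows why. The figures below are assumptions for illustration: roughly 300W TDP per A100 PCIe card, about 500W for the rest of the system, and a typical 100V/15A outlet.

```python
A100_TDP_W = 300          # assumption: NVIDIA A100 80GB PCIe TDP
BASE_SYSTEM_W = 500       # assumption: CPUs, memory, storage, fans
OUTLET_100V_W = 100 * 15  # typical 100V / 15A circuit -> 1500W ceiling


def fits_100v(num_gpus: int) -> bool:
    """True if the estimated draw stays within a 100V/15A outlet."""
    return BASE_SYSTEM_W + num_gpus * A100_TDP_W <= OUTLET_100V_W


print(fits_100v(2))  # 1100W -> True, two A100s fit on 100V
print(fits_100v(4))  # 1700W -> False, four A100s need 200V
```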

【Main Specifications】

CPU          Xeon Gold 6326 (2.90GHz, 16 cores) x2
Memory       256GB REG ECC
Storage      1TB SATA SSD
Video        NVIDIA T600
GPU          NVIDIA A100 80GB x2
Network      onboard (10GBase-T x2)
Chassis/PSU  Tower chassis + 2200W power supply (limited to 1200W when using 100V)
OS           Ubuntu 20.04.4
Others       NVIDIA CUDA Toolkit 11 / cuDNN / PyTorch