Medical Imaging Deep Learning Machine

A customer inquired about a machine for deep learning using medical images.
We would like to make a proposal within a budget of 600,000 yen, and the conditions for consideration are as follows.

・Software used: TensorFlow, Keras, PyTorch, CUDA
・OS: not preinstalled (the customer plans to install Ubuntu 22.04)

Since training is assumed to run on GPUs, the customer asked how many GPUs the proposed configuration can accommodate.
We were also asked whether GPU memory capacity or the number of installed GPUs should take priority.

Based on the conditions of your inquiry, we propose the following configuration.

CPU: Intel Core i7-13700K (8 P-cores @ 3.40GHz + 8 E-cores @ 2.50GHz)
Memory: 32GB
Storage: 1TB SATA SSD
Video: NVIDIA GeForce RTX 4080 16GB
Network: onboard (2.5GBase-T x1), Wi-Fi x1
Case + power supply: tower case + 850W
OS: none

We proposed a configuration equipped with a 13th-generation Core i7.

For the video card we chose the GeForce RTX 4080.
The RTX 4090, which is one rank higher, would exceed the budget, so this choice prioritizes cost.
The RTX 4080 also has roughly the same number of CUDA cores as the previous-generation RTX 3090, so in terms of raw processing performance it still sits in the high-end class.

We did not consider installing multiple video cards in this case, as that would be difficult to accommodate within the budget.
This configuration assumes operation with a single video card and does not support adding a second card.
If you would like a configuration that can hold two video cards, please let us know; we would switch to a base configuration optimized for workstations.

■Which should be prioritized, GPU memory capacity or the number of GPUs installed?

As an example, consider which is better: one card with 16GB of GPU memory, or two cards with 8GB each. The former is preferable.
With two video cards the total number of CUDA cores would be larger, but CUDA basically manages each card's resources separately, so a program that exchanges data across cards must handle that resource management in its own code.
It is therefore generally easier to work with a single video card that secures the required video memory on its own.
It is hard to generalize, since the situation also differs between the consumer GeForce series and the professional NVIDIA RTX series, but for this consultation the best choice is a single high-performance video card with a high CUDA core count.
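The single-card recommendation above can be checked on the delivered machine. The sketch below, which assumes a PyTorch build with CUDA support, lists each visible GPU with its memory and keeps a tensor on one device so no cross-card transfers are needed (on a CPU-only build it simply falls back to the CPU):

```python
import torch

# List each visible CUDA device with its total memory in MiB.
# On the proposed machine this would show one RTX 4080 with ~16GB.
def describe_gpus():
    devices = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        devices.append((props.name, props.total_memory // 2**20))
    return devices

# With a single card, everything lives on one device and no code is
# needed to shuttle data between cards.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.randn(8, 16, device=device)  # a training batch on one device
```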

The configuration in this case study is based on the conditions given by the customer.
Please feel free to contact us even if your requirements differ from those posted here.


■ Keywords

・What is Deep Learning?
Deep learning is a type of machine learning that uses multilayer neural networks to perform advanced pattern recognition and prediction. Since it generally requires a large amount of data, it is considered an effective method when data are abundant.
Deep learning is also widely used in fields such as image recognition, speech recognition, and natural language processing. Because it can learn complex features and relationships, it can achieve higher accuracy than traditional machine learning methods.
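The "multilayer neural network" in the definition above can be illustrated with a tiny forward pass: an input is transformed by two stacked layers with a non-linearity in between, ending in class probabilities. The weights here are random placeholders, purely for illustration:

```python
import numpy as np

# A minimal two-layer network forward pass (weights are random, not trained).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # hidden layer: 4 -> 8
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # output layer: 8 -> 3 classes

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)            # ReLU non-linearity
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)    # softmax: rows sum to 1

probs = forward(rng.normal(size=(2, 4)))        # a batch of two inputs
```

Training would adjust W1, b1, W2, b2 from data; with many such layers, this is the "deep" in deep learning.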

Reference: [Special article] What is machine learning? *Jumps to our owned media "TEGAKARI"


・What is TensorFlow?
TensorFlow is a machine learning library published as open source by Google. It supports multiple languages such as Python and C++, and enables high-speed computation using CPUs and GPUs. It is suited to applications such as image recognition, natural language processing, and time-series data processing, and also lets you use pre-trained neural networks. Because it can train on large datasets, it is widely used in the latest deep learning research and development.
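As a small sketch of the CPU/GPU point above: TensorFlow reports the devices it can use, and the same computation runs unchanged on either. On the proposed machine the RTX 4080 would appear in the GPU list once CUDA is installed (this assumes a TensorFlow build with GPU support; on a CPU-only build the list is simply empty):

```python
import tensorflow as tf

# Devices TensorFlow can see; empty list on a CPU-only setup.
gpus = tf.config.list_physical_devices("GPU")

# The same op runs on CPU or GPU with no code changes.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.linalg.matmul(a, a)  # [[7, 10], [15, 22]]
```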

Reference: TensorFlow *Jumps to an external site


・What is Keras?
Keras is a deep learning library written in Python. It features an easy-to-use, intuitive API design for rapid neural network prototyping. It uses TensorFlow or Theano as its backend and runs on both CPU and GPU. Because it is written in Python, it can be extended flexibly and is well suited to research and development.
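The "rapid prototyping" point is easiest to see in code: a small classifier is a few declarative lines. The layer sizes below are illustrative placeholders, not a real medical-imaging model:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# A minimal Keras MLP: 64 input features -> 32 hidden units -> 3 classes.
model = keras.Sequential([
    keras.layers.Input(shape=(64,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Inference works immediately (untrained, so outputs are arbitrary).
preds = model.predict(np.zeros((2, 64)), verbose=0)
```

If a GPU is visible to TensorFlow, the same model trains on it automatically; nothing in the Keras code changes.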

Reference: Keras *Jumps to an external site


・What is CUDA Toolkit?
CUDA Toolkit is a parallel computing platform for GPUs provided by NVIDIA. It enables high-speed parallel programming of NVIDIA GPU architectures from C/C++. The computing power of GPUs can be applied in fields such as deep learning, scientific computing, and computer graphics. The toolkit includes a compiler, libraries, and debugging tools, and is provided as an SDK. It also supports multi-GPU environments and can be used everywhere from workstations to the cloud.
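Frameworks like PyTorch ship against a specific CUDA Toolkit version, which can be inspected at runtime. This sketch assumes PyTorch is installed; on a build without CUDA, the version is simply None:

```python
import torch

# The CUDA Toolkit version this PyTorch build was compiled against,
# e.g. "12.1"; None on a CPU-only build.
cuda_build = torch.version.cuda

# True only if a CUDA driver and at least one GPU are actually present.
available = torch.cuda.is_available()
```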

Reference: CUDA Toolkit (NVIDIA Corporation) *Jumps to an external site