A machine for machine learning on seismic motion data

A customer involved in vehicle and logistics research consulted us about a machine for research on seismic motion.
To train on large-scale data at high speed, the customer wanted ample memory and GPU capacity.
The customer plans to do most of the processing with Python code they developed themselves, and we heard that Sony's Neural Network Console is also used in a supporting role.

In addition, the conditions for introducing the machine were as follows.

・Budget: within 1 million yen
・Increase the amount of RAM and GPU memory as much as possible within the above budget
・GPU: GeForce RTX 3090
・OS: Windows 10

The customer told us they are not very familiar with PCs and were concerned about memory and GPU capacity and their balance, so they asked us to propose an appropriate configuration based on the above conditions.

Based on these conditions, we proposed the following specifications.

【Main Specifications】

CPU: AMD Ryzen 7 7700X (4.5GHz, 8 cores)
Memory: 64GB
Storage 1: 2TB M.2 SSD
Storage 2: 8TB HDD (SATA)
Video: NVIDIA GeForce RTX 4090 24GB (internal exhaust)
Network: onboard (2.5GBASE-T x1, 10/100/1000BASE-T x1), Wi-Fi x1
Case + power supply: mid-tower case, 1000W
OS: Microsoft Windows 10 Pro 64-bit

In machine learning on a GPU, the amount of memory installed on the GPU is important.
In typical GPU-based machine learning, data is first moved from main memory to GPU memory before the computation runs.
*The exception is when the customer's own Python code places heavy demands on the machine's main memory itself.
For that reason, main memory rarely needs to be much larger than the GPU memory plus some margin.
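
To make this concrete, here is a minimal sketch of that flow. It assumes PyTorch purely for illustration; the customer's own Python code and framework are not specified, so the names and sizes below are not theirs.

```python
import torch

# Training data is first prepared in main (CPU) memory.
x_cpu = torch.randn(10_000, 512)              # lives in main memory
y_cpu = torch.randint(0, 2, (10_000,))

# It is then transferred to GPU memory before training starts;
# this is what consumes the video memory on the GPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x_gpu = x_cpu.to(device)
y_gpu = y_cpu.to(device)

if device.type == "cuda":
    # Rough check of how much GPU memory the transferred tensors occupy.
    print(f"GPU memory allocated: {torch.cuda.memory_allocated() / 1e6:.1f} MB")
```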

■ Points

・If the GPU memory capacity is large, the batch size during GPU training can be increased, which may improve training speed. (This tendency is especially noticeable when training on large amounts of image data; see the sketch after this list.)

・We would like to recommend a card with a large GPU memory capacity, but it is difficult to fit one within the budget of 1 million yen.
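
As a rough illustration of the batch-size point above (again a sketch assuming PyTorch; the dataset shape and model are placeholders, not the customer's workload), a larger batch must fit entirely in GPU memory but lets the GPU process more samples per step:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative image-like dataset: 2,000 samples of 3x224x224 pixels.
data = TensorDataset(torch.randn(2000, 3, 224, 224),
                     torch.randint(0, 10, (2000,)))
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10)).to(device)

# A larger batch_size keeps the GPU busier per step, but the batch and its
# intermediate activations must all fit in GPU memory.
for batch_size in (32, 128):
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)
    images, labels = next(iter(loader))
    loss = nn.functional.cross_entropy(model(images.to(device)), labels.to(device))
    loss.backward()
    if device.type == "cuda":
        print(f"batch_size={batch_size}: "
              f"{torch.cuda.max_memory_allocated() / 1e6:.0f} MB peak GPU memory")
        torch.cuda.reset_peak_memory_stats()
```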

Considering the budget and GPU memory requirements, we decided to use an NVIDIA GeForce series card for the GPU.
Because the specified RTX 3090 has reached end of life, we selected its successor, the RTX 4090. With 24GB of video memory and 64GB of main memory, this is a generous specification.

In addition, we received the following questions from the customer.

■Questions

The following [1] to [3] are assumed as the respective roles of the GPU and CPU (a rough sketch of this flow follows the list).

[1] Preprocess the training data on the CPU side.

[2] Transfer the preprocessed data to the GPU for training.

[3] Post-process the prediction results of the GPU-trained model on the CPU.
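
The sketch below shows how [1] to [3] typically fit together in Python (assuming NumPy for the CPU-side preprocessing and PyTorch for the GPU-side training; both are illustrative stand-ins for the customer's own code):

```python
import numpy as np
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# [1] Preprocess the training data on the CPU side (here: simple normalization).
raw = np.random.rand(5000, 64).astype(np.float32)
features = (raw - raw.mean(axis=0)) / raw.std(axis=0)
targets = np.random.rand(5000, 1).astype(np.float32)

# [2] Transfer the preprocessed data to GPU memory and train there.
x = torch.from_numpy(features).to(device)
y = torch.from_numpy(targets).to(device)
model = nn.Linear(64, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

# [3] Bring the predictions back to the CPU for post-processing,
#     accuracy evaluation, plotting, and so on.
with torch.no_grad():
    predictions = model(x).cpu().numpy()
print("Mean prediction:", predictions.mean())
```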

In the proposed configuration, the gap between the main memory (64GB) and the GPU memory (24GB) is large; is it possible to increase the GPU memory from 24GB to 48GB?

When training is done on the GPU, inference is in many cases also done on the GPU. After confirming with the customer, we learned that step [3] was assumed to include inference, evaluation of estimation accuracy, and plotting.
Inference on the CPU is possible without any special features, but it can be very slow. We therefore asked the customer to check whether the inference program might be built on a vendor-specific acceleration mechanism provided by the CPU hardware (analogous to CUDA on the GPU side).
For example, if the software is built on the assumption that Intel's Deep Learning Boost is available, a CPU that implements that mechanism is required.

Reference: Intel Deep Learning Boost (Intel DL Boost) *Opens an external site

Through this confirmation and interview, we found that the inference part is handled on the GPU via CUDA, so we concluded there was no need to consider Intel Deep Learning Boost on the CPU side.
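
For reference, a check like the following (a minimal sketch assuming PyTorch as the CUDA-based framework; the customer's actual code may differ) confirms that inference will run on the GPU and only falls back to the slower CPU path when no CUDA device is found:

```python
import torch

# Confirm that a CUDA-capable GPU is visible to the framework.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    # CPU fallback: possible without special CPU features, but much slower.
    device = torch.device("cpu")
    print("No CUDA device found; falling back to CPU inference.")

# Run inference on whichever device was selected.
model = torch.nn.Linear(8, 1).to(device).eval()
with torch.no_grad():
    output = model(torch.randn(4, 8, device=device))
print(output.cpu())
```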

Also, increasing the amount of GPU memory requires switching to a different video card.
The RTX A6000 48GB was considered as a candidate, but since it is difficult to adopt within the budget, we explained to the customer that we had selected the RTX 4090 24GB, and they accepted this choice. For customers whose budget allows it, we can also propose a configuration with the RTX A6000, so please feel free to contact us.

 

■FAQ

・What is machine learning?
Machine learning is a mechanism by which a computer learns to perform a specific task from accumulated data.
The computer autonomously improves its recognition and prediction accuracy.
For more on machine learning, please see the related articles on our owned media, "TEGAKARI".

Reference: [Special article] What is machine learning? *Opens an external site

 

・What is Neural Network Console?
Neural Network Console (NNC) is a deep learning tool developed by Sony. Deep learning can be done with it without programming.

Reference: Neural Network Console (Sony) *Opens an external site

 

・What is Inference?
Inference refers to having a trained machine learning model answer questions.
In the preceding phase, training, the model learns how to classify and discriminate from a large amount of training data and adjusts its parameters; problems are then solved at inference time based on what was learned.
If the inference results contain errors, the process returns to the training phase, the parameters are adjusted, and inference is run again to improve accuracy.
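
As a minimal sketch of this train-then-infer cycle (PyTorch and the toy data below are used purely for illustration):

```python
import torch
from torch import nn

# Training phase: adjust the model's parameters using labeled data.
x = torch.randn(200, 2)
y = (x.sum(dim=1) > 0).float().unsqueeze(1)      # toy labels
model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.binary_cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

# Inference phase: the trained model answers a question about new data.
with torch.no_grad():
    new_sample = torch.tensor([[0.5, 1.2]])
    print("Predicted probability:", model(new_sample).item())
```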