21)
Message boards : Number crunching : Need help with linux and app_info (Message 69863)
Posted 28 May 2020
Post: Apparently there is still some confusion about GPU usage. I think people are overthinking the issue. Just consider the GPU as if it were another CPU core. It takes a special app, but it consumes one work unit at a time, processes it, spits out the result, and then gets another work unit. If you have two GPUs, e.g. two GTX 1660 Tis like me, then each GPU gets a work unit, and each GPU does not know or care about the other, so you get two work units being processed at the same time. If you had six GPUs you could process six GPU-type work units in parallel. Yes, you can run more than one WU on a GPU simultaneously, but you generally take a performance hit when you do; you need to test it to be sure. SLI makes two GPUs look like one and, as far as I have heard, has no performance benefit for the kind of computational work we do. I tried the app_config you posted, but what I get is two work units assigned to GPU 0 and none assigned to GPU 1. I think it needs some kind of device line added to it?
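For reference, a minimal app_config.xml sketch matching the behaviour described above, assuming the project app is named milkyway (a placeholder; the real name must match the <name> entry in client_state.xml). gpu_usage is the fraction of a GPU one task uses, so 1 means one task per GPU and 0.5 means two tasks per GPU. app_config.xml itself has no per-device setting; device-level control lives in cc_config.xml (see the sketch after the next post), and with two identical cards the client normally spreads tasks across both on its own.

```xml
<!-- Sketch of an app_config.xml, placed in the project's directory
     under the BOINC data directory. The app name "milkyway" is an
     assumption; use the project's real app name. gpu_usage = 1 keeps
     exactly one task per GPU. -->
<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <gpu_usage>1</gpu_usage>
      <cpu_usage>1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```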
22)
Message boards : Number crunching : Need help with linux and app_info (Message 69860)
Posted 27 May 2020
Post: Sorry, I would assume the benefit of having multiple GPUs is to crunch in parallel; otherwise, what's the point? I want both GPUs fully occupied all the time. As it is right now, only one GPU is in use at any given time. Thanks.
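A possible reason only one GPU is in use: by default the BOINC client schedules work only on the most capable GPU(s) it detects. Identical cards should normally both be used, but <use_all_gpus> in cc_config.xml makes that explicit. A minimal sketch, assuming the file sits in the BOINC data directory:

```xml
<!-- Sketch of a cc_config.xml in the BOINC data directory.
     <use_all_gpus> tells the client to schedule work on every detected
     GPU rather than only the most capable one(s). Apply it with
     "boinccmd --read_cc_config" or by restarting the client. -->
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```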
23)
Message boards : Number crunching : Need help with linux and app_info (Message 69857)
Posted 27 May 2020
Post: Hi, I have 2 GPUs installed on an i9-9900K system running BOINC 7.16.3:
CUDA: NVIDIA GPU 1: GeForce GTX 1660 Ti (driver version 440.31, CUDA version 10.2, compute capability 7.5, 4096MB, 3972MB available, 5668 GFLOPS peak)
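Since the thread is about app_info.xml (BOINC's anonymous-platform mechanism), here is a heavily hedged skeleton of the GPU-relevant part. Every name below is a placeholder, not MilkyWay@home's actual app or binary name, and a real app_info.xml must list every file the app version ships with; the point is only that <coproc> with a count of 1 is what reserves one whole GPU per task.

```xml
<!-- Skeleton app_info.xml for a GPU app version (anonymous platform).
     All names are placeholders; substitute the project's real app name,
     executable file name, platform, and plan class. -->
<app_info>
  <app>
    <name>example_app</name>
  </app>
  <file_info>
    <name>example_app_gpu_binary</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>example_app</app_name>
    <version_num>100</version_num>
    <platform>x86_64-pc-linux-gnu</platform>
    <plan_class>opencl_nvidia</plan_class>
    <avg_ncpus>1</avg_ncpus>
    <coproc>
      <type>NVIDIA</type>
      <count>1</count>
    </coproc>
    <file_ref>
      <file_name>example_app_gpu_binary</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```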