GPU support

kasdashdfjsah

Joined: 3 Feb 24
Posts: 14
Credit: 232,519
RAC: 0
Message 77234 - Posted: 20 Sep 2024, 22:52:02 UTC

When will this project get GPU support?

Link

Joined: 19 Jul 10
Posts: 819
Credit: 21,098,268
RAC: 5,709
Message 77235 - Posted: 23 Sep 2024, 19:15:59 UTC - in response to Message 77234.  

It lost GPU support with the end of Separation over a year ago, and according to the information in "News" there are currently no plans to release an N-Body GPU app.

Mr P Hucker

Joined: 5 Jul 11
Posts: 993
Credit: 378,253,749
RAC: 17,935
Message 77346 - Posted: 19 Feb 2025, 19:45:00 UTC - in response to Message 77235.  
Last modified: 19 Feb 2025, 19:48:29 UTC

It lost GPU support with the end of Separation over a year ago, and according to the information in "News" there are currently no plans to release an N-Body GPU app.
There was once a plan for N-Body on GPU. Did it get discarded?

And who broke the profile picture upload mechanism?
The above was double spaced between sentences, I apologise for the forum software ruining my post.

Link

Joined: 19 Jul 10
Posts: 819
Credit: 21,098,268
RAC: 5,709
Message 77347 - Posted: 20 Feb 2025, 10:09:55 UTC - in response to Message 77346.  

There was once a plan for N-Body on GPU. Did it get discarded?
Yes, it was discarded because it was painfully slow and the Milkyway@home team does not have the resources to work on it.

kdk

Joined: 24 Mar 25
Posts: 1
Credit: 1,304,348
RAC: 53
Message 77425 - Posted: 18 May 2025, 0:28:20 UTC - in response to Message 77347.  

There was once a plan for N-Body on GPU. Did it get discarded?
Yes, it was discarded because it was painfully slow and the Milkyway@home team does not have the resources to work on it.


Are there any programming geniuses out there, outside of the Milkyway@home team, who might be able to help bring GPU support to this project? I wish I could help out, but I have absolutely zero programming skills or understanding. To my newbie understanding, GPUs can process WAY more information WAY faster than a CPU, right?

Either way, I'm happy to help crunch numbers. This project is VERY important so we can accurately pinpoint where Borg space starts and stops. ;-)

Mr P Hucker

Joined: 5 Jul 11
Posts: 993
Credit: 378,253,749
RAC: 17,935
Message 77427 - Posted: 18 May 2025, 1:12:37 UTC - in response to Message 77425.  
Last modified: 18 May 2025, 1:16:11 UTC

There was once a plan for N-Body on GPU. Did it get discarded?
Yes, it was discarded because it was painfully slow and the Milkyway@home team does not have the resources to work on it.


Are there any programming geniuses out there, outside of the Milkyway@home team, who might be able to help bring GPU support to this project? I wish I could help out, but I have absolutely zero programming skills or understanding. To my newbie understanding, GPUs can process WAY more information WAY faster than a CPU, right?

Either way, I'm happy to help crunch numbers. This project is VERY important so we can accurately pinpoint where Borg space starts and stops. ;-)
Graphics cards do some things faster, some things at a similar speed, some things slower, and some things not at all. Their processors are completely different: they execute simpler instructions, but thousands of them at once. So it may not be possible.
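If you want to see the "thousands at once" part for yourself, you can just ask the card. A toy sketch using the standard CUDA device-query call (not MilkyWay code, and the numbers will vary by card):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);
    // A GPU "core" is far simpler than a CPU core, but each SM keeps
    // thousands of threads in flight at once.
    printf("%s: %d SMs, %d resident threads each (%d total)\n",
           p.name, p.multiProcessorCount, p.maxThreadsPerMultiProcessor,
           p.multiProcessorCount * p.maxThreadsPerMultiProcessor);
    return 0;
}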

I hope it is, because when they did it for the now-finished Milkyway Separation, it resulted in an 80x speedup! It was so fast they had to bundle 5 tasks into one to stop the server getting overloaded. I was still getting 300 of those bundles per card at a time, and one card got through all of them (5 x 300 = 1500 tasks) in a few hours.

There were programmers who optimized SETI a while ago; I think they now hang out at Einstein. They may be able to help. I can't remember their names though, so ask over there.
The above was double spaced between sentences, I apologise for the forum software ruining my post.

Ianab

Joined: 7 Feb 09
Posts: 2
Credit: 14,691,740
RAC: 6,292
Message 77637 - Posted: 30 Aug 2025, 22:04:44 UTC - in response to Message 77425.  

Not every algorithm benefits from GPU assistance.

Now if you have large chunks of data that can be calculated concurrently, like in the Einstein project, the job of sifting through the radio/LIGO data can be broken down into hundreds of separate chunks or functions, and each one calculated concurrently. Because the GPU has hundreds (or thousands) of compute cores, and each one can work on its assigned task simultaneously, you see a massive speed improvement versus a single CPU core that has to process each chunk of data sequentially. Basically, you can start steps 2, 3, 4, 5, etc. without waiting for the result of step 1.
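To make that concrete, here's a minimal sketch of that embarrassingly parallel pattern in CUDA (a hypothetical kernel, not the actual Einstein or MilkyWay code): every thread takes its own chunk, and none of them waits on another.

#include <cstdio>
#include <cuda_runtime.h>

// Each thread processes one independent chunk; no thread depends on
// another thread's result, so thousands can run simultaneously.
__global__ void process_chunks(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * in[i];  // stand-in for the real per-chunk work
}

int main()
{
    const int n = 1 << 20;  // ~1 million independent chunks
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = (float)i;

    // Launch enough blocks to cover every chunk; "steps 2, 3, 4, 5"
    // all start without waiting for step 1.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    process_chunks<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("out[42] = %f\n", out[42]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}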

But other algorithms are based on calculating something and feeding that result into the next calculation. You can't start the next step until you have completed the current one. In that scenario there may be little benefit in passing the calculations off to a GPU, because you can't make use of the hundreds of compute cores working at the same time. A single GPU core is likely slower than your CPU core anyway; they just make up for that in numbers. If the algorithm doesn't lend itself to the parallel processing style, it may even perform worse.
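For contrast, a toy sketch of the serial case (hypothetical, not the real N-Body code): each step consumes the previous step's result, so no number of GPU cores helps you across steps.

#include <cstdio>

// Toy one-body time integration: the state at step k+1 depends on the
// state at step k, so the steps themselves cannot run in parallel.
// (In a real N-body code the per-body forces *within* one step can be
// parallelised, but the step-to-step chain stays serial.)
int main()
{
    float x = 1.0f, v = 0.0f;
    const float dt = 0.001f;
    const int steps = 100000;

    for (int k = 0; k < steps; ++k) {
        float a = -x;    // stand-in force law (harmonic oscillator)
        v += a * dt;     // needs the acceleration from THIS step...
        x += v * dt;     // ...and the velocity just updated above
    }
    printf("x after %d steps: %f\n", steps, x);
    return 0;
}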

I don't know the details of the MW calculations, but that's one possible scenario.

Mr P Hucker

Joined: 5 Jul 11
Posts: 993
Credit: 378,253,749
RAC: 17,935
Message 77638 - Posted: 30 Aug 2025, 22:38:32 UTC - in response to Message 77637.  

Not every algorithm benefits from GPU assistance.

Now if you have large chunks of data that can be calculated concurrently, like in the Einstein project, the job of sifting through the radio/LIGO data can be broken down into hundreds of separate chunks or functions, and each one calculated concurrently. Because the GPU has hundreds (or thousands) of compute cores, and each one can work on its assigned task simultaneously, you see a massive speed improvement versus a single CPU core that has to process each chunk of data sequentially. Basically, you can start steps 2, 3, 4, 5, etc. without waiting for the result of step 1.

But other algorithms are based on calculating something and feeding that result into the next calculation. You can't start the next step until you have completed the current one. In that scenario there may be little benefit in passing the calculations off to a GPU, because you can't make use of the hundreds of compute cores working at the same time. A single GPU core is likely slower than your CPU core anyway; they just make up for that in numbers. If the algorithm doesn't lend itself to the parallel processing style, it may even perform worse.

I don't know the details of the MW calculations, but that's one possible scenario.


But even with that, you could simply run multiple tasks at once on the GPU until you run out of GPU RAM.
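In CUDA terms that's roughly what streams are for; a toy sketch, assuming the tasks really are independent of each other:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void task_kernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;  // stand-in workload
}

int main()
{
    const int ntasks = 4;   // "multiple tasks at once"
    const int n = 1 << 18;
    cudaStream_t streams[ntasks];
    float *buf[ntasks];

    for (int t = 0; t < ntasks; ++t) {
        cudaStreamCreate(&streams[t]);
        cudaMalloc(&buf[t], n * sizeof(float));
        cudaMemsetAsync(buf[t], 0, n * sizeof(float), streams[t]);
        // Each task runs in its own stream; the card overlaps them
        // until it runs out of cores or GPU RAM.
        task_kernel<<<(n + 255) / 256, 256, 0, streams[t]>>>(buf[t], n);
    }
    for (int t = 0; t < ntasks; ++t) {
        cudaStreamSynchronize(streams[t]);
        cudaStreamDestroy(streams[t]);
        cudaFree(buf[t]);
    }
    printf("submitted %d tasks concurrently on one GPU\n", ntasks);
    return 0;
}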
The above was double spaced between sentences, I apologise for the forum software ruining my post.