Message boards :
Number crunching :
GPU support
| Author | Message |
|---|---|
|
Send message Joined: 3 Feb 24 Posts: 14 Credit: 232,519 RAC: 0 |
When will this project get GPU support? |
|
Send message Joined: 19 Jul 10 Posts: 819 Credit: 21,100,301 RAC: 5,547 |
It lost GPU support with the end of Separation over a year ago, and according to the information in "News" there are currently no plans to release an N-Body GPU app.
|
|
Send message Joined: 5 Jul 11 Posts: 993 Credit: 378,253,871 RAC: 17,476 |
> It lost GPU support with the end of Separation over a year ago and according to the information in "News" there are currently no plans to release N-Body GPU app.

There was once a plan for N-body on GPU. Did it get discarded? And who broke the profile picture upload mechanism?

The above was double spaced between sentences, I apologise for the forum software ruining my post. |
|
Send message Joined: 19 Jul 10 Posts: 819 Credit: 21,100,301 RAC: 5,547 |
> There was once a plan for nbody on GPU. Did it get discarded?

Yes, it was discarded because it was painfully slow and the Milkyway@home team does not have the resources to work on it.
|
|
Send message Joined: 24 Mar 25 Posts: 1 Credit: 1,304,348 RAC: 53 |
> There was once a plan for nbody on GPU. Did it get discarded?
> Yes, it was discarded because it was painfully slow and the Milkyway@home team does not have the resources to work on it.

Are there any programming geniuses out there who might be able to help bring GPU support to this project, outside of the Milkyway@home team? I wish I could help out, but I have absolutely zero programming skills and/or understanding. To my newbie understanding, GPU processors can process WAY more information WAY faster than a CPU, right? Either way, I'm happy to help crunch numbers. This project is VERY important so we can accurately pinpoint where Borg space starts and stops. ;-) |
|
Send message Joined: 5 Jul 11 Posts: 993 Credit: 378,253,871 RAC: 17,476 |
Graphics cards do some things faster, some things at a similar speed, some things slower, and some things not at all. Their processors are completely different: they execute simpler instructions, but thousands at once. So it may not be possible.

> There was once a plan for nbody on GPU. Did it get discarded?
> Yes, it was discarded because it was painfully slow and the Milkyway@home team does not have the resources to work on it.

I hope it is possible, because when they did it for the now-finished Milkyway Separation, an 80-times speed gain resulted! It was so fast they had to bundle 5 tasks into one to stop the server getting overloaded. I was still getting 300 per card at a time and doing all of those (5x300 = 1500 tasks) on one card in a few hours. There were programmers who optimized SETI a while ago; I think they now hang out at Einstein. They may be able to help. Can't remember their names though, ask in there. |
|
Send message Joined: 7 Feb 09 Posts: 2 Credit: 14,695,448 RAC: 6,405 |
Not every algorithm benefits from GPU assistance. If you have large chunks of data that can be calculated concurrently, as in the Einstein project, then the job of sifting through the radio/LIGO data can be broken down into hundreds of separate chunks or functions, each one calculated concurrently. Because the GPU has hundreds (or thousands) of compute cores, and each one can work on its assigned task simultaneously, you can see a massive speed improvement versus a single CPU core that has to process each chunk of data sequentially. Basically, you can start steps 2, 3, 4, 5, etc. without waiting for the result of step 1.

But other algorithms are based on calculating something and feeding that result into the next calculation. You can't start the next step until you have completed the current one. In that scenario there may be little benefit in passing the calculations off to a GPU, because you can't make use of the hundreds of compute cores working at the same time. A single GPU core is likely slower than your CPU core anyway; they just make up for that in numbers. If the algorithm doesn't lend itself to the parallel processing style, it may even perform worse.

I don't know the details of the MW calculations, but that's one possible scenario. |
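The contrast described in the post above can be sketched in plain Python (a hypothetical toy example, not actual Milkyway@home code): squaring every element is data-parallel because each result is independent, while a running sum is a serial dependency chain where step i needs the result of step i-1, so thousands of extra cores would sit idle.

```python
def parallel_friendly(data):
    # Each element is independent of the others: on a GPU, every
    # square could be computed by a separate core at the same time.
    return [x * x for x in data]

def sequential_only(data):
    # Running sum: each step feeds its result into the next, so
    # step i cannot start until step i-1 has finished. Extra cores
    # give no speedup on this dependency chain.
    result = [data[0]]
    for x in data[1:]:
        result.append(result[-1] + x)
    return result

print(parallel_friendly([1, 2, 3]))  # [1, 4, 9]
print(sequential_only([1, 2, 3]))    # [1, 3, 6]
```

The first function is the shape of workload (like sifting many independent chunks of radio/LIGO data) that maps well onto a GPU; the second is the shape that may see little or no benefit.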
|
Send message Joined: 5 Jul 11 Posts: 993 Credit: 378,253,871 RAC: 17,476 |
> Not every algorithm benefits from GPU assistance.

But even with that, you could simply run multiple tasks at once on the GPU until you run out of GPU RAM. |
©2026 Astroinformatics Group