Twin CPUs and multi-core nbody tasks - success :-)

Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 70080 - Posted: 29 Aug 2020, 20:22:31 UTC - in response to Message 70078.  

How will this make it better? Twice the effort going into 64 bit? Less untidy programming catering for both?


If they actually revisit the codebase and remove all the workarounds and jumps to handle 32 bit code, it would reduce the size of the applications and possibly speed them up.
Mainly talking about the BOINC applications like the client and the manager. The science apps are a different story. They said a long time ago that the 32 bit science apps are faster than the equivalent 64 bit app in some cases because the memory access is simpler.


Surely we should continue using both if it's not a one size fits all?
Profile Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 540,090,388
RAC: 86,730
Message 70081 - Posted: 29 Aug 2020, 23:56:54 UTC - in response to Message 70080.  

Talking about two DIFFERENT things here. Boinc apps are NOT the science apps.
Profile mikey
Joined: 8 May 09
Posts: 3315
Credit: 519,950,829
RAC: 21,429
Message 70082 - Posted: 30 Aug 2020, 3:01:09 UTC - in response to Message 70075.  

No, they are just dropping all the x86 32 bit versions.


What did you mean by "Hopefully with the totally 64 bit versions coming it will be a lot better"?

Oh, sorry, two of you in here now. The other guy! Oy you! What did you mean by the above?


Simple: no more 32 bit versions of Boinc will be written, and all future versions will be 64 bit ONLY!! They are removing ALL the 32 bit legacy stuff from the client-side software, that's us. I have no clue if they are doing the same to the server side or not, but a lot of projects are dumping 32 bit apps going forward. They will continue to provide Pi and Android stuff but not 32 bit computer stuff. A few projects have already done it, while others are analyzing the 32 bit usage percentage at their projects; those that have reported the results have said it's under 5% now.


How will this make it better? Twice the effort going into 64 bit? Less untidy programming catering for both?


According to the developers there is a lot of 32 bit crap in there holding things back, e.g. reporting the actual amount of memory on GPUs that have more than 4 GB so projects can filter them out of apps that crash when trying to run on them, e.g. Einstein. The 64 bit stuff is already in there; making a build without the 32 bit stuff in it has been in the works for a while, to make sure Boinc still works afterwards.
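For what it's worth, the 4 GB issue is a classic 32 bit truncation: the GPU APIs themselves hand back the memory size as a 64 bit value, and the wrap-around only appears once that number is squeezed through a 32 bit variable somewhere in the reporting chain. A minimal OpenCL sketch of the effect (illustrative only, not BOINC's actual code):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;

        /* Grab the first GPU on the first platform; error handling kept minimal. */
        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
            return 1;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            return 1;

        cl_ulong mem = 0;  /* 64 bit, so cards with more than 4 GB report correctly */
        clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(mem), &mem, NULL);
        printf("reported:  %llu MB\n", (unsigned long long)(mem >> 20));

        /* If the value is later stored in a 32 bit integer, anything over 4 GB wraps:
           an 8 GB card shows up as 0 MB, a 6 GB card as 2048 MB, and so on. */
        unsigned int mem32 = (unsigned int)mem;
        printf("truncated: %u MB\n", mem32 >> 20);
        return 0;
    }

Whether that is exactly where the client loses the top bits is for the developers to say; the sketch just shows why cards over 4 GB and 32 bit code paths don't mix.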
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 70085 - Posted: 30 Aug 2020, 17:34:33 UTC - in response to Message 70081.  

Talking about two DIFFERENT things here. Boinc apps are NOT the science apps.


I know. I thought you or someone else mentioned they were planning on stopping the 32bit science apps too.
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 70086 - Posted: 30 Aug 2020, 17:37:59 UTC - in response to Message 70082.  

According to the developers there is a lot of 32 bit crap in there holding things back, e.g. reporting the actual amount of memory on GPUs that have more than 4 GB so projects can filter them out of apps that crash when trying to run on them, e.g. Einstein. The 64 bit stuff is already in there; making a build without the 32 bit stuff in it has been in the works for a while, to make sure Boinc still works afterwards.


It's not just the memory they need to filter on; I've got cards with plenty of memory that won't run gravity, because they lack the newer instruction set. What they should be doing is looking at the model of graphics card and comparing it to a list of ones that the program has been tested on. Or just testing for certain instructions being available. I notice when you start Boinc that it lists a big set of CPU capabilities like AVX, MMX, etc. Does it check cards like that too?
Profile mikey
Joined: 8 May 09
Posts: 3315
Credit: 519,950,829
RAC: 21,429
Message 70091 - Posted: 31 Aug 2020, 10:39:27 UTC - in response to Message 70086.  

According to the developers there is a lot of 32 bit crap in there holding things back, e.g. reporting the actual amount of memory on GPUs that have more than 4 GB so projects can filter them out of apps that crash when trying to run on them, e.g. Einstein. The 64 bit stuff is already in there; making a build without the 32 bit stuff in it has been in the works for a while, to make sure Boinc still works afterwards.


It's not just the memory they need to filter on; I've got cards with plenty of memory that won't run gravity, because they lack the newer instruction set. What they should be doing is looking at the model of graphics card and comparing it to a list of ones that the program has been tested on. Or just testing for certain instructions being available. I notice when you start Boinc that it lists a big set of CPU capabilities like AVX, MMX, etc. Does it check cards like that too?


Nope not until they dump the 32bit stuff it can't...conflicts you know.
Profile Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 540,090,388
RAC: 86,730
Message 70094 - Posted: 31 Aug 2020, 16:00:11 UTC - in response to Message 70085.  

Talking about two DIFFERENT things here. Boinc apps are NOT the science apps.


I know. I thought you or someone else mentioned they were planning on stopping the 32bit science apps too.

No, I did not say that. I said that David Anderson wants to eliminate the 32 bit clients. He controls the BOINC applications. Has nothing to do with any science application other than Nebula.
Profile Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 540,090,388
RAC: 86,730
Message 70095 - Posted: 31 Aug 2020, 16:07:15 UTC - in response to Message 70086.  

According to the developers there is a lot of 32 bit crap in there holding things back, e.g. reporting the actual amount of memory on GPUs that have more than 4 GB so projects can filter them out of apps that crash when trying to run on them, e.g. Einstein. The 64 bit stuff is already in there; making a build without the 32 bit stuff in it has been in the works for a while, to make sure Boinc still works afterwards.


It's not just the memory they need to filter on; I've got cards with plenty of memory that won't run gravity, because they lack the newer instruction set. What they should be doing is looking at the model of graphics card and comparing it to a list of ones that the program has been tested on. Or just testing for certain instructions being available. I notice when you start Boinc that it lists a big set of CPU capabilities like AVX, MMX, etc. Does it check cards like that too?

The capabilities of the graphics cards are read from the vendor's API stack. For our use that is either the CUDA API or the OpenCL API, depending on which vendor the science applications use.

We're already at a disadvantage on memory capacity with Nvidia cards, as mentioned previously, because of incorrect API usage. And at the low level, each card is still restricted by the silicon and the GPU firmware installed. You will never get an OpenCL 1.0 capable card to perform an OpenCL 1.2 or 2.0 instruction.
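This is roughly what "reading the vendor's API stack" looks like on the OpenCL side: the driver reports each device's name, the OpenCL version it supports and the OpenCL C version it can compile, and anything downstream can only filter on what those strings (and the extension list) say. A hedged sketch, not the client's actual code:

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id devices[8];
        cl_uint ndev = 0;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
            return 1;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &ndev) != CL_SUCCESS)
            return 1;

        for (cl_uint i = 0; i < ndev; i++) {
            char name[256], devver[128], cver[128];
            clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[i], CL_DEVICE_VERSION, sizeof(devver), devver, NULL);       /* e.g. "OpenCL 1.2 ..." */
            clGetDeviceInfo(devices[i], CL_DEVICE_OPENCL_C_VERSION, sizeof(cver), cver, NULL);  /* e.g. "OpenCL C 1.2" */
            printf("%s | %s | %s\n", name, devver, cver);
        }
        return 0;
    }

If the driver says a card is OpenCL 1.2, nothing above it can make the card run 2.0 features; and if a server-side filter only looks at the version string rather than the specific features an app needs, a card can pass the filter and still fail at run time.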
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 70096 - Posted: 31 Aug 2020, 17:13:43 UTC - in response to Message 70095.  

The capabilities of the graphics cards are read from the vendor's API stack. For our use that is either the CUDA API or the OpenCL API, depending on which vendor the science applications use.

We're already at a disadvantage on memory capacity with Nvidia cards, as mentioned previously, because of incorrect API usage. And at the low level, each card is still restricted by the silicon and the GPU firmware installed. You will never get an OpenCL 1.0 capable card to perform an OpenCL 1.2 or 2.0 instruction.


So who is responsible for Einstein Gravity trying to run on my R9 280X cards and failing? Has the card misreported it can do it? Or is Einstein not properly checking?
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 70103 - Posted: 3 Sep 2020, 11:41:56 UTC - in response to Message 70078.  

How will this make it better? Twice the effort going into 64 bit? Less untidy programming catering for both?


If they actually revisit the codebase and remove all the workarounds and jumps to handle 32 bit code, it would reduce the size of the applications and possibly speed them up.
Mainly talking about the BOINC applications like the client and the manager. The science apps are a different story. They said a long time ago that the 32 bit science apps are faster than the equivalent 64 bit app in some cases because the memory access is simpler.


They also need to rewrite the scheduler. I've yet again got something stupid happening here.

I have Collatz set to run 1 WU per GPU, Einstein to run 2 per GPU, and Milkyway to run 2 per GPU (because that's what it's most efficient at).

Normally this works fine, but I've just noticed it being daft. It's running the last Einstein in its queue on half the GPU. It's got Collatz and Milkyway queued, but it's refusing to shove the Milkyways into the other half of the GPU. I can only assume it's because I have Collatz at a higher priority, so it wants to run that, but it won't move Einstein out because that's also a fairly high priority. Half the GPU is idling! Anybody got a common sense stick?
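For anyone wanting the same "N tasks per GPU" setup, it's done per project with an app_config.xml in that project's folder under the BOINC data directory. A minimal sketch for one project; the <name> has to match whatever client_state.xml calls the app on your machine, so treat the names and numbers here as placeholders:

    <app_config>
        <app>
            <name>milkyway</name>
            <gpu_versions>
                <gpu_usage>0.5</gpu_usage>   <!-- 0.5 of a GPU per task, i.e. two tasks per card -->
                <cpu_usage>0.01</cpu_usage>  <!-- fraction of a CPU reserved per GPU task -->
            </gpu_versions>
        </app>
    </app_config>

Collatz would get its own file with <gpu_usage>1</gpu_usage>, Einstein its own with 0.5, and the client picks the files up via Options -> Read config files (or a restart).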
Profile Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 540,090,388
RAC: 86,730
Message 70105 - Posted: 3 Sep 2020, 16:49:30 UTC - in response to Message 70103.  

Check how much CPU support is needed for each task type and make sure you have enough. Having only half of the load on a card with a 0.5 share should only be transitional. It is making room to allow the full-GPU tasks to start, based on the REC needs. I have run Einstein, Milkyway and GPUGrid on my cards, with 0.5 shares for both Einstein and Milkyway and a 1.0 share for GPUGrid, and the cards and the scheduler behaved themselves. Many times I would have an Einstein and a Milkyway running on a card together with no issues. Only when a GPUGrid task was queued to run next did one of the 0.5 tasks drop off, with none to replace it, because the scheduler knew the GPUGrid task was next up.
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 70106 - Posted: 3 Sep 2020, 18:16:50 UTC - in response to Message 70105.  

Check how much CPU support is needed for each task type and make sure you have enough. Having only half of the load on a card with a 0.5 share should only be transitional. It is making room to allow the full-GPU tasks to start, based on the REC needs. I have run Einstein, Milkyway and GPUGrid on my cards, with 0.5 shares for both Einstein and Milkyway and a 1.0 share for GPUGrid, and the cards and the scheduler behaved themselves. Many times I would have an Einstein and a Milkyway running on a card together with no issues. Only when a GPUGrid task was queued to run next did one of the 0.5 tasks drop off, with none to replace it, because the scheduler knew the GPUGrid task was next up.


I've set all GPU tasks to "need" 0.01 cores of CPU, since they get a higher process priority in Windows and always shove CPU tasks out of the way automatically (I've tested it).

It's not being sensible at all.
If the full gpu tasks are more important, it should do those and stop the half ones immediately.
If the half gpu tasks are more important, it should either download more of them, or run a full GPU task for a bit to reduce the buffer size.

Also, since I had some more half-GPU tasks sat waiting, it might as well have those running as well! It would be like you and me having a big heavy job to do, say moving a sofa, which requires us both, while we each also have some smaller jobs to do. You decide one of yours is very important and go do that, and the way Boinc works, I'd sit and wait instead of doing something I have which is less important than the sofa.
Profile Keith Myers
Joined: 24 Jan 11
Posts: 696
Credit: 540,090,388
RAC: 86,730
Message 70107 - Posted: 3 Sep 2020, 19:30:50 UTC - in response to Message 70106.  

You are confusing task deadlines and resource shares with the needs of the REC scheduler. You need to read up on that. The REC scheduler is the next-highest arbiter, after task deadlines, of which tasks should run. Remember that the client works on a FIFO pipeline.

If you need to actually see which tasks the client will run next, enable rr_simulation along with cpu_sched_debug in the Event Log.
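Both of those are client log flags, set in cc_config.xml in the BOINC data directory. A sketch of just the relevant section (everything else can stay at its defaults):

    <cc_config>
        <log_flags>
            <rr_simulation>1</rr_simulation>       <!-- logs the round-robin simulation and projected deadline misses -->
            <cpu_sched_debug>1</cpu_sched_debug>   <!-- logs why each task was chosen to run or preempted -->
        </log_flags>
    </cc_config>

Apply with Options -> Read config files (or restart the client) and watch the Event Log; the output is verbose, so it's worth turning the flags back off once you've seen what the scheduler is thinking.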
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 70110 - Posted: 4 Sep 2020, 18:19:10 UTC - in response to Message 70107.  

You are confusing task deadlines and resource shares with the needs of the REC scheduler. You need to read up on that. The REC scheduler is the next-highest arbiter, after task deadlines, of which tasks should run. Remember that the client works on a FIFO pipeline.

If you need to actually see which tasks the client will run next, enable rr_simulation along with cpu_sched_debug in the Event Log.


I see no evidence of a FIFO pipeline. If you lower the priority of a project, it will leave those tasks half done and start different ones.

The point is it's not very intelligent at all as to what runs when. What would simplify it is to only apply project weights at one point - when they're downloaded. It seems to be doing it a second time when it picks one to run. An extreme example:

Set MW to weighting 1000. Set Einstein to weighting 1.
Boinc will download MW most of the time, but every so often it will get some Einstein. But since 1/1001 of the time adds up to running one task every blue moon, Boinc will hardly touch that task until it gets close to the deadline. Then at the very last minute it will start it, only to find you've shut the machine off, or are playing a game, or it misjudges the time remaining, so it gets returned late.
Profile mikey
Joined: 8 May 09
Posts: 3315
Credit: 519,950,829
RAC: 21,429
Message 70111 - Posted: 5 Sep 2020, 10:16:45 UTC - in response to Message 70110.  

You are confusing task deadlines and resource shares with the needs of the REC scheduler. You need to read up on that. The REC scheduler is the next-highest arbiter, after task deadlines, of which tasks should run. Remember that the client works on a FIFO pipeline.

If you need to actually see which tasks the client will run next, enable rr_simulation along with cpu_sched_debug in the Event Log.


I see no evidence of a FIFO pipeline. If you lower the priority of a project, it will leave those tasks half done and start different ones.

The point is it's not very intelligent at all as to what runs when. What would simplify it is to only apply project weights at one point - when they're downloaded. It seems to be doing it a second time when it picks one to run. An extreme example:

Set MW to weighting 1000. Set Einstein to weighting 1.
Boinc will download MW most of the time, but every so often it will get some Einstein. But since 1/1001 of the time adds up to running one task every blue moon, Boinc will hardly touch that task until it gets close to the deadline. Then at the very last minute it will start it, only to find you've shut the machine off, or are playing a game, or it misjudges the time remaining, so it gets returned late.


There's no winning no matter which way you choose... i.e. your way, or, if you choose to have it set to run immediately upon downloading, it stops whatever else you are doing and runs to completion, meaning you lose out on that special badge or bunkering event because it needed that 1% workunit. The point is Boinc has no clue you want to watch a movie tomorrow night, or compete in that event, or shut down the PC because it's roasting hot, or because you are going out of town, or whatever; it's a computer, not an AI.
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,142,956
RAC: 2
Message 70112 - Posted: 5 Sep 2020, 17:52:23 UTC - in response to Message 70111.  
Last modified: 5 Sep 2020, 17:52:50 UTC

There's no winning no matter which way you choose... i.e. your way, or, if you choose to have it set to run immediately upon downloading, it stops whatever else you are doing and runs to completion, meaning you lose out on that special badge or bunkering event because it needed that 1% workunit. The point is Boinc has no clue you want to watch a movie tomorrow night, or compete in that event, or shut down the PC because it's roasting hot, or because you are going out of town, or whatever; it's a computer, not an AI.


It knows how often the computer is switched off or paused due to a game etc. It should therefore know not to start a task with 3 hours remaining when it's only 4 hours before the deadline, in case I play for 5 hours the same game I played the whole of last week! Even putting user interaction aside, it's only a predicted remaining time; Boinc can get it very wrong, so it should leave a much wider margin of error.

Anyway, my way above would work fine. Let's say you choose weightings of 1 and 1000. It should download 1 task from project A and 1000 from project B before getting another A, but once it has downloaded them just run them.
Profile mikey
Joined: 8 May 09
Posts: 3315
Credit: 519,950,829
RAC: 21,429
Message 70113 - Posted: 6 Sep 2020, 1:31:59 UTC - in response to Message 70112.  

It knows how often the computer is switched off or paused due to a game etc. It should therefore know not to start a task with 3 hours remaining when it's only 4 hours before the deadline, in case I play for 5 hours the same game I played the whole of last week! Even putting user interaction aside, it's only a predicted remaining time; Boinc can get it very wrong, so it should leave a much wider margin of error.

Anyway, my way above would work fine. Let's say you choose weightings of 1 and 1000. It should download 1 task from project A and 1000 from project B before getting another A, but once it has downloaded them just run them.


Umm, not exactly, but anyway... I use settings of zero up to 100.