
Posts by Keith Myers

1) Message boards : Number crunching : Selecting tasks for CPU (Message 70119)
Posted 2 days ago by Keith Myers
Post:
Create two different venues: one CPU-only and one GPU-only. Then run two BOINC clients, each one crunching either the CPU venue or the GPU venue.
2) Message boards : Number crunching : Twin CPUs and multi-core nbody tasks - success :-) (Message 70107)
Posted 18 days ago by Keith Myers
Post:
You are confusing task deadlines and resource shares with the needs of the REC scheduler. You need to read up on that. The REC scheduler is the next-highest arbiter of which tasks should run, after task deadlines. Remember that the client works on a FIFO pipeline.

If you need to actually see which tasks the client will run next, enable the rr_simulation and cpu_sched_debug log flags and watch the Event Log.
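A minimal cc_config.xml sketch that enables those two log flags might look like this (place it in the BOINC data directory and use Options > Read config files in the Manager, or restart the client):

<cc_config>
    <log_flags>
        <!-- log the round-robin simulation the client uses to predict deadline trouble -->
        <rr_simulation>1</rr_simulation>
        <!-- log CPU scheduler decisions, i.e. which tasks get chosen to run next -->
        <cpu_sched_debug>1</cpu_sched_debug>
    </log_flags>
</cc_config>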
3) Message boards : Number crunching : Twin CPUs and multi-core nbody tasks - success :-) (Message 70105)
Posted 18 days ago by Keith Myers
Post:
Check how much CPU support is needed for each task type and make sure you have enough. Having only half a load on a card with a 0.5 share should only be transitional; it is making room to allow the full GPU tasks to start, based on REC needs. I have run Einstein, MilkyWay and GPUGrid on my cards with 0.5 shares for Einstein and MilkyWay and a 1.0 share for GPUGrid, and the cards and the scheduler behaved themselves. Many times I would have an Einstein and a MilkyWay task running on a card together with no issues. Only when a GPUGrid task was queued to run next did one of the 0.5 tasks drop off with none to replace it, because the scheduler knew the GPUGrid task was next up.
4) Message boards : Number crunching : Twin CPUs and multi-core nbody tasks - success :-) (Message 70095)
Posted 21 days ago by Keith Myers
Post:
According to the developers there is a LOT of 32-bit crap in there holding things back, e.g. showing the actual amount of memory on GPUs that have more than 4 GB so projects can filter them out of apps that crash when trying to run on them, e.g. Einstein. The 64-bit stuff is already in there; making a build without the 32-bit stuff in it has been in the works for a while, to make sure BOINC still works afterwards.


It's not just the memory they need to filter on; I've got cards with plenty of memory that won't run gravity because they lack the newer instruction set. What they should be doing is looking at the model of graphics card and comparing it to a list of ones the program has been tested on, or just testing for certain instructions being available. I notice when you start BOINC that it has a big list of AVX, MMX, etc. for CPUs. Does it check cards like that too?

The capabilities of the graphics cards are read from the vendor's API stack. For our use that is either the CUDA API or the OpenCL API, depending on which one the science applications use.

We are already at a disadvantage on memory capacity with Nvidia cards, as mentioned previously, because of incorrect API usage. And at the low level, each card is still restricted by its silicon and installed GPU firmware. You will never get an OpenCL 1.0 capable card to perform an OpenCL 1.2 or 2.0 instruction.
5) Message boards : Number crunching : Twin CPUs and multi-core nbody tasks - success :-) (Message 70094)
Posted 21 days ago by Keith Myers
Post:
Talking about two DIFFERENT things here. BOINC apps are NOT the science apps.


I know. I thought you or someone else mentioned they were planning on stopping the 32-bit science apps too.

No, I did not say that. I said that David Anderson wants to eliminate the 32-bit clients. He controls the BOINC applications. It has nothing to do with any science application other than Nebula.
6) Message boards : Number crunching : Twin CPUs and multi-core nbody tasks - success :-) (Message 70081)
Posted 23 days ago by Keith Myers
Post:
Talking about two DIFFERENT things here. BOINC apps are NOT the science apps.
7) Message boards : Number crunching : nbody: Trying to run on 2 cores, but downloading 4 core tasks (Message 70079)
Posted 23 days ago by Keith Myers
Post:
So I've decided to start running the Nbody tasks again. I have a 4-core CPU and 2 GPUs. I want to limit Nbody to run on 2 cores maximum so my GPUs stay busy. I added the following to my app_config file (my whole app_config is included, just in case, but the nbody section is what I added):

<app_config>

	<app>
		<name>milkyway</name>
		<gpu_versions>
			<gpu_usage>0.5</gpu_usage>
			<cpu_usage>0.5</cpu_usage>
		</gpu_versions>
	</app>
	<app_version>
		<app_name>milkyway_nbody</app_name>
			<plan_class>mt</plan_class>
			<cmdline>--nthreads 2</cmdline>
	</app_version>
</app_config>

However, when I reloaded my app_config, I still downloaded 4-core Nbody tasks. I'm missing something obvious; I just don't know what it is. Any ideas?

On a side note, I think I observed these 4-core Nbody tasks running while my 2 GPUs were also running... so 4 GPU tasks (using a total of 2 CPUs) and a 4-core CPU task. The Nbody task seemed to run slowly, but once I suspended the GPU tasks, the Nbody sped up. I haven't had a chance to experiment with it more, but I'm curious whether anyone else has noticed this.


N-body tasks are CPU-only and multi-threaded by default, while the GPU tasks are Separation tasks and by default use some part of a GPU, depending on the GPU.
0.5 CPUs times 4 GPU tasks equals 2 cores needed just to feed the GPUs; with a 4-thread Nbody task also running, that leaves zero free cores for your GPUs to use, so of course the CPU tasks will speed up if the GPUs stop.

Also, specifying 0.5 cpu_usage is only a suggestion; the app that the project writes is the determining factor in how much of a CPU core it actually uses. Most do not use a full core, so it's not usually a problem, but in some cases it is, and no app_config or other file can override it.

Try this version of your app_config.

<app_config>

    <app>
        <name>milkyway</name>
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage>
            <cpu_usage>0.5</cpu_usage>
        </gpu_versions>
    </app>
    <app>
        <name>milkyway_nbody</name>
        <!-- limit to one Nbody task at a time (max_concurrent goes in the app element) -->
        <max_concurrent>1</max_concurrent>
    </app>
    <app_version>
        <app_name>milkyway_nbody</app_name>
        <plan_class>mt</plan_class>
        <!-- budget 2 CPUs in the scheduler and actually run 2 threads -->
        <avg_ncpus>2</avg_ncpus>
        <cmdline>--nthreads 2</cmdline>
    </app_version>
</app_config>
8) Message boards : Number crunching : Twin CPUs and multi-core nbody tasks - success :-) (Message 70078)
Posted 23 days ago by Keith Myers
Post:
How will this make it better? Twice the effort going into 64-bit? Less untidy programming catering for both?


If they actually revisit the codebase and remove all the workarounds and jumps to handle 32-bit code, it would reduce the size of the applications and possibly speed them up.
I am mainly talking about the BOINC applications like the client and the manager. The science apps are a different story. They said a long time ago that the 32-bit science apps are faster than the equivalent 64-bit apps in some cases because the memory access is simpler.
9) Message boards : Number crunching : Twin CPUs and multi-core nbody tasks - success :-) (Message 70062)
Posted 24 days ago by Keith Myers
Post:
No, they are just dropping all the x86 32-bit versions.
10) Message boards : News : New Runs for MilkyWay@home Nbody Simulations (07/29/2020) (Message 70020)
Posted 6 Aug 2020 by Keith Myers
Post:
I have a 12-core (24-thread) CPU and am getting 16-thread tasks, so it's only running one at a time. CPU utilization is very low for a couple of minutes, then peaks around 50% for the rest of the task. Is there an app_config that would allow specifying 2 x 12 or 3 x 8 thread tasks? It would improve efficiency greatly.

Edit: I think I figured it out.

<app_config>
    <app>
        <name>milkyway_nbody</name>
        <max_concurrent>3</max_concurrent>
    </app>
    <app_version>
        <app_name>milkyway_nbody</app_name>
        <plan_class>mt</plan_class>
        <avg_ncpus>8</avg_ncpus>
    </app_version>
</app_config>

Actually, the correct method of defining an mt app is provided in the BOINC client configuration document.
https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration

They provide an example.

[<app_version>
<app_name>Application_Name</app_name>
[<plan_class>mt</plan_class>]
[<avg_ncpus>x</avg_ncpus>]
[<ngpus>x</ngpus>]
[<cmdline>--nthreads 7</cmdline>]
</app_version>]

So if you want to run 3 tasks with 8 cores each, your app_config should look something like this.

<app_config>
    <app>
        <name>milkyway_nbody</name>
        <max_concurrent>3</max_concurrent>
    </app>
    <app_version>
        <app_name>milkyway_nbody</app_name>
        <plan_class>mt</plan_class>
        <cmdline>--nthreads 8</cmdline>
    </app_version>
</app_config>

The --nthreads parameter is the actual controller of how many CPU threads are utilized by each task.
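If the client's run-time estimates or scheduling look off with that file, one variation worth trying (just a sketch building on the optional avg_ncpus element from the wiki example above, not something from the original post) is to set avg_ncpus to the same value as --nthreads, so the client budgets the same number of cores the application actually uses:

<app_config>
    <app>
        <name>milkyway_nbody</name>
        <max_concurrent>3</max_concurrent>
    </app>
    <app_version>
        <app_name>milkyway_nbody</app_name>
        <plan_class>mt</plan_class>
        <!-- what the client's scheduler budgets per task -->
        <avg_ncpus>8</avg_ncpus>
        <!-- what the application actually uses per task -->
        <cmdline>--nthreads 8</cmdline>
    </app_version>
</app_config>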
11) Message boards : News : New Runs for MilkyWay@home Nbody Simulations (07/29/2020) (Message 70009)
Posted 1 Aug 2020 by Keith Myers
Post:
Have you configured your account to accept CPU and N-body work?
https://milkyway.cs.rpi.edu/milkyway/prefs.php?subset=project
12) Message boards : Number crunching : 3 x 2080 ti gpu cards (Message 69994)
Posted 17 Jul 2020 by Keith Myers
Post:
Will Milky Way run on RTX 2080 Ti cards? It did not the last time I was here.

Thank you,

Miklos

Yes, MilkyWay runs fine on Turing cards since BOINC was fixed to properly calculate the estimated times and the correct cores-per-processor value so that GFLOPS could be correctly calculated. That was fixed back at the end of 2019.
13) Message boards : Number crunching : Hard Drive Space Question (Message 69967)
Posted 26 Jun 2020 by Keith Myers
Post:
Hmmm, I am also connected to Einstein Project, but currently it only uses 1.42 GB of space on my hard drive. Should I use Maximum preference settings or Custom preference settings?

Don't worry, the drive space usage for Einstein will quickly grow once you start processing more work. Einstein uses the most space out of all my projects, by a factor of 10X. Tens of GBs are needed for the recurring data sets.

Just set whatever minimum free space you feel is necessary to leave for the OS.
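If you prefer setting this locally instead of through the web preferences, a sketch of a global_prefs_override.xml in the BOINC data directory could look like the following (the numbers are only placeholders; local overrides take precedence over the web settings):

<global_preferences>
    <!-- let BOINC use at most this much disk space -->
    <disk_max_used_gb>100</disk_max_used_gb>
    <!-- always leave at least this much free space for the OS -->
    <disk_min_free_gb>10</disk_min_free_gb>
    <!-- and never use more than this percentage of the disk -->
    <disk_max_used_pct>50</disk_max_used_pct>
</global_preferences>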
14) Message boards : Number crunching : Computation errors (Message 69953)
Posted 20 Jun 2020 by Keith Myers
Post:
Allowing your computers to be visible at the project does not open them up to any hacking. It only shows what they are running when your host contacts the project at each scheduler connection, plus your current tasks. Simply provide the URL link to the errored work after changing your project preferences to make your computers visible. That setting has no bearing on the physical computers in your network. Whatever protection you have in place for your network is not compromised.
15) Message boards : Number crunching : cpu and gpu (Message 69920)
Posted 13 Jun 2020 by Keith Myers
Post:
Because on Nvidia cards the science application is not as efficient as the science application on AMD cards, due to differences in the OpenCL implementations on the two platforms and differences in GPU architecture.
16) Message boards : MilkyWay@home Science : Nbody Science (Message 69901)
Posted 9 Jun 2020 by Keith Myers
Post:
Thanks for the update, Eric. Much appreciated to be brought into the loop again. Looking forward to the new GPU applications.
17) Questions and Answers : Unix/Linux : MW stopped using my nvidia GPU (Message 69890)
Posted 4 Jun 2020 by Keith Myers
Post:
You starved the GPUs by taking away all the CPU support when you ran the nbody tasks without any limit. A GPU task needs at least some part of a CPU to feed it data. If all your CPU threads are busy with nbody, then the GPU tasks will be forced to wait to run. No mystery here; BOINC did exactly what it was supposed to do. If you want to run both types of work, you need to limit the nbody tasks from taking all the CPU threads. Read the documentation pertaining to nbody mt configuration.
https://boinc.berkeley.edu/wiki/Client_configuration#Application_configuration
18) Questions and Answers : Unix/Linux : MW stopped using my nvidia GPU (Message 69887)
Posted 3 Jun 2020 by Keith Myers
Post:
Also, as an aside, how does one view a specific computer on the website? I can't seem to sort by computer ID, so searching for a particular computer in my set of computers is like searching for a needle in a haystack.

I don't understand this at all. Log in to MW, go to your account main page, and click the computers link on the page. https://milkyway.cs.rpi.edu/milkyway/hosts_user.php
Voila! All your computers are listed, even with their assigned network names. It is easy to figure out which computer is which.

If you are constantly running out of work and the 10-minute backoff bugs you too much, you can always run JStateson's modified client, which removes that aggravation.
19) Message boards : Number crunching : Need help with linux and app_info (Message 69881)
Posted 1 Jun 2020 by Keith Myers
Post:
Well, if you want to run 14 total CPU threads out of the 16 and assign two threads to the two GPUs, that leaves you with 12 threads to run the CPU MilkyWay nbody tasks. So you should run this app_config.xml file.

<app_config>
    <app>
        <name>milkyway_nbody</name>
        <max_concurrent>3</max_concurrent>
    </app>
    <app_version>
        <app_name>milkyway_nbody</app_name>
        <plan_class>mt</plan_class>
        <avg_ncpus>4</avg_ncpus>
        <cmdline>--nthreads 4</cmdline>
    </app_version>

    <app>
        <name>milkyway</name>
        <gpu_versions>
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>


This would run 3 concurrent nbody CPU tasks using 4 threads each, plus two GPU tasks.
20) Message boards : Number crunching : Need help with linux and app_info (Message 69876)
Posted 30 May 2020 by Keith Myers
Post:
First question to answer: how many total CPU threads on the host do you want to commit to BOINC?

Second question: how many concurrent mt tasks do you want to run?

Third question: how many CPU threads do you want to commit to each mt task?

Fourth question: how many GPU tasks per card do you want to run? I advise sticking to a single task per Nvidia card unless it is a very high-end card like a 2080 or 2080 Ti.

All of those choices need to be put into an app_config.xml file for the project to control how you want to run the project on your hardware.
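As a rough template only (the numbers below are placeholders; substitute your own answers to the questions above), an app_config.xml tying those choices together might look like this. Question 1, the total number of CPU threads committed to BOINC, is set in the computing preferences ("use at most N% of the CPUs") rather than in app_config.xml.

<app_config>
    <app>
        <name>milkyway_nbody</name>
        <!-- question 2: how many mt tasks run at once -->
        <max_concurrent>2</max_concurrent>
    </app>
    <app_version>
        <app_name>milkyway_nbody</app_name>
        <plan_class>mt</plan_class>
        <!-- question 3: CPU threads per mt task (keep these two values in step) -->
        <avg_ncpus>4</avg_ncpus>
        <cmdline>--nthreads 4</cmdline>
    </app_version>
    <app>
        <name>milkyway</name>
        <gpu_versions>
            <!-- question 4: 1.0 = one task per card, 0.5 = two tasks per card -->
            <gpu_usage>1.0</gpu_usage>
            <cpu_usage>1.0</cpu_usage>
        </gpu_versions>
    </app>
</app_config>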

