Welcome to MilkyWay@home

Posts by Brickhead

21) Message boards : News : Any remaining major credit or application problems? (Message 49574)
Posted 25 Jun 2011 by Brickhead
Post:
In June, please. (But I guess the needs of the many outweigh the vacation plans of the few.)
22) Message boards : Number crunching : BOINCStats Server Drive - Last Lap 1,000 Euro's to go (Message 48606)
Posted 8 May 2011 by Brickhead
Post:
Small donation made.
23) Message boards : Number crunching : 11.4 Catalyst Production Version - Red Flag for 5970s (Message 48459)
Posted 3 May 2011 by Brickhead
Post:
All my single-5870 machines seem to work fine with Cat 11.4. On the quad-6970 machine, however, 11.4 was detrimental to both performance and stability; that one is much happier now that it's back on Cat 11.3.
24) Message boards : News : ATI application updated to 0.60 (Message 47837)
Posted 14 Apr 2011 by Brickhead
Post:
On my Q9450 with Win7 64-bit and a 5970 running 0.62, I'm pretty much seeing 98% GPU utilisation while running Aqua@home, which is multithreaded.

Perhaps I should add that where I see a drop in GPU usage when there is competition for CPU time, is in Win7/64 with four 6970s. On my single-5870 XP/64 machines, GPU usage is as high as I could possibly want.
25) Message boards : News : ATI application updated to 0.60 (Message 47809)
Posted 13 Apr 2011 by Brickhead
Post:
I've gotten the notion that if the CPU isn't available when the GPU needs it, GPU utilization goes down. CPU time for the job wouldn't increase, but PPD would still go down.

That's what I've been seeing as well.

With all respect to Matt, I don't think Cluster Physik (who practically invented MW on ATI GPUs) wrote the following about CPU process priority for no reason:

p1: normal priority in idle priority class (below normal), this is recommended for BOINC GPU applications, but appears to be not enough to enable millisecond polling of the GPU with Vista
p2: normal priority in normal priority class, the default


0.60 and 0.62 work fine on Windows XP, but on Windows 7 they sometimes want CPU time they're denied by other CPU tasks, resulting in lower GPU use than is the case without other CPU tasks.
26) Message boards : News : ATI application updated again (Message 47634)
Posted 11 Apr 2011 by Brickhead
Post:
This seems to confirm what I've been finding. It seems that the slowdown happens with a high CPU load (at least in Windows; it seems not to happen for me in Linux). The time is about what it should be with a low CPU load, and quite a bit higher if the CPU load is heavy. This is still a problem to fix, though.

Matt, I believe the way to sort this would be to make sure that the CPU part of the MW app runs in normal priority, or at least higher priority than the CPU-only tasks. (After first having made certain that the GPU app releases the CPU for a sane amount of time while waiting for the GPU to report back, of course.)

Matt, you actually need to change this. Apparently, the CPU part of the MW GPU app now runs with normal priority in the idle priority class (aka below normal). This doesn't cut it (something Cluster Physik found out a long time ago); the CPU part now suffers from having to compete with CPU-only tasks (even though these run with idle priority). It needs to run with normal priority in the normal priority class (aka normal), which was the default with 0.23 and older apps.
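To see why the priority class matters so much, here is a toy Python model of my own (purely illustrative; it is not the actual MW app or Windows scheduler logic): a GPU feeder thread needs a millisecond or so of CPU per kernel launch, and if it shares a priority level with N saturating CPU-only tasks, a fair-share scheduler gives it only about a 1/(N+1) time slice, stretching every feed and starving the GPU.

```python
# Toy model of GPU feeding under CPU contention (illustrative only,
# not the actual MilkyWay@home application or scheduler behaviour).

def gpu_utilisation(kernel_ms, feed_ms, cpu_tasks, feeder_has_priority):
    """Fraction of time the GPU is kept busy.

    kernel_ms: time the GPU spends on one kernel launch
    feed_ms:   CPU time needed to set up the next launch
    cpu_tasks: number of competing CPU-only tasks on the same core
    feeder_has_priority: True if the feeder preempts the CPU tasks
    """
    if feeder_has_priority or cpu_tasks == 0:
        wait_ms = feed_ms                      # serviced immediately
    else:
        # Fair-share scheduling: the feeder gets ~1/(N+1) of the core,
        # so its feed_ms of work takes (N+1) times as long to finish.
        wait_ms = feed_ms * (cpu_tasks + 1)
    return kernel_ms / (kernel_ms + wait_ms)

# One 10 ms kernel needing 1 ms of CPU per launch, 6 CPU tasks competing:
contended = gpu_utilisation(10.0, 1.0, 6, feeder_has_priority=False)
boosted   = gpu_utilisation(10.0, 1.0, 6, feeder_has_priority=True)
print(f"same priority:   {contended:.0%}")   # ~59%
print(f"raised priority: {boosted:.0%}")     # ~91%
```

The numbers are made up, but the shape matches what people report: GPU use is fine on an idle CPU and drops as soon as all cores are loaded with equal-priority work.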
27) Message boards : News : ATI application updated again (Message 47617)
Posted 11 Apr 2011 by Brickhead
Post:
I now believe this somehow has to do with my running E@H on all 6 CPU cores. When I set BOINC to use only 85% of the processors (forcing BOINC to use only 5 of my 6 CPU cores), or when I suspend E@H altogether (freeing up my CPU entirely), GPU load jumps back up to a constant and steady 98-99%. Accordingly, run times approach the lower end of the crunch-time ranges I've historically observed. So my revised "big picture" is this: when I run E@H on all 6 CPU cores, MW@H GPU crunching efficiency suffers a bit in the form of slightly increased run times. When I leave a single CPU core free (or if I suspend E@H altogether and leave all 6 cores free), MW@H GPU crunching efficiency doesn't suffer at all, and run times are slightly decreased.

This seems to confirm what I've been finding. It seems that the slowdown happens with a high CPU load (at least in Windows; it seems not to happen for me in Linux). The time is about what it should be with a low CPU load, and quite a bit higher if the CPU load is heavy. This is still a problem to fix, though.

Matt, I believe the way to sort this would be to make sure that the CPU part of the MW app runs in normal priority, or at least higher priority than the CPU-only tasks. (After first having made certain that the GPU app releases the CPU for a sane amount of time while waiting for the GPU to report back, of course.)
28) Message boards : News : ATI application (v0.57) should be available now (Message 47518)
Posted 10 Apr 2011 by Brickhead
Post:
WUs all error out with Catalyst 10.7 on XP32. Since AMD broke "classic" OpenGL from 10.8 on and can't be bothered to repair it, updating is not an option on this particular machine.

It seems they have in 11.3; old OpenGL games work after a clean install. I stand corrected.

And of course, MW works too now, although I need to find a replacement for the app_info setting that chopped each task into 100 tiny bits per second and allowed display updates to sneak in between them. As the app behaves now, I can't even see what I'm typing (it's like using a terminal server through a link with 5-second latency).
29) Message boards : News : ATI application (v0.57) should be available now (Message 47419)
Posted 10 Apr 2011 by Brickhead
Post:
Could people with errors post their driver versions and try updating them?

WUs all error out with Catalyst 10.7 on XP32. Since AMD broke "classic" OpenGL from 10.8 on and can't be bothered to repair it, updating is not an option on this particular machine.
30) Message boards : News : sending out workunits (Message 47287)
Posted 9 Apr 2011 by Brickhead
Post:
Got some 25 tasks on a 1-GPU (4-CPU) machine (which all failed to download).


Are those 4 CPUs dual core? Or can they usually run two WUs at a time? (8 * 3) + 3 = 27, which would have you within the limits. But if not, we need to figure out why you got too many workunits. :(

That would be 4 CPUs visible to BOINC: 1 socket, 4 cores, no HT. (And my preferences state use GPU only, in case the quota takes # of GPUs into consideration.)

http://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=9522
31) Message boards : News : sending out workunits (Message 47279)
Posted 9 Apr 2011 by Brickhead
Post:
Got some 25 tasks on a 1-GPU (4-CPU) machine (which all failed to download).
32) Message boards : News : bad news (Message 47254)
Posted 9 Apr 2011 by Brickhead
Post:
Don't kick yourself too hard, Matt. There have been far worse headlines in the news lately.
33) Message boards : Number crunching : 4x6970 Quad Crossfire (Message 46960)
Posted 4 Apr 2011 by Brickhead
Post:
Hmm... Tap + hose + waterblock assembly + another hose + sink? Might work, except for mineral deposits and another municipal bill to worry about :)
34) Message boards : Number crunching : 4x6970 Quad Crossfire (Message 46955)
Posted 4 Apr 2011 by Brickhead
Post:
35) Message boards : Number crunching : cncguru 1,000,000 RAC Machine (Message 46935)
Posted 2 Apr 2011 by Brickhead
Post:
BUT I did already achieve 1 million RAC for a bit according to boincstats!!

My guess is that this was BoincStats' reinvented RAC (dubbed BS-RAC), which is a moving average instead of the decaying average every project uses. Given a steady daily production after a previous period of lower production, normal RAC will approach the new daily figure asymptotically, whereas BS-RAC will actually reach the new figure after the predefined number-of-days-to-average have passed.
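The difference between the two averages can be sketched in a few lines of Python (the 7-day half-life and 7-day window are assumptions on my part; BOINC's and BoincStats' actual maths differ in the details):

```python
# Sketch of the two RAC flavours. The 7-day half-life and 7-day window
# are illustrative assumptions, not the projects' exact constants.
import math

def decaying_rac(daily_credits, half_life_days=7.0):
    """BOINC-style RAC: exponentially decaying average of daily credit."""
    d = math.pow(0.5, 1.0 / half_life_days)   # per-day decay factor
    rac = 0.0
    for credit in daily_credits:
        rac = rac * d + credit * (1.0 - d)
    return rac

def moving_rac(daily_credits, window_days=7):
    """BS-RAC style: plain moving average over the last N days."""
    recent = daily_credits[-window_days:]
    return sum(recent) / len(recent)

# A host ramps from nothing to a steady 1,000,000 credits/day:
history = [0.0] * 30 + [1_000_000.0] * 7

print(moving_rac(history))     # already at the new daily figure
print(decaying_rac(history))   # still climbing towards it asymptotically
```

With a 7-day half-life, the decaying average has only reached half the new daily figure by the time the moving average already reports the full million, which is how a host can "achieve" 1M on BoincStats before its project RAC gets anywhere near it.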
36) Message boards : Number crunching : Optimized app for 6990? (Message 46929)
Posted 2 Apr 2011 by Brickhead
Post:
The second highest RAC computer is also running the stock app with 6970s.
37) Message boards : Number crunching : experiences of a newby - gpu clock memory clock temperature (Message 46705)
Posted 24 Mar 2011 by Brickhead
Post:
Memory clock has a noticeable effect on tasks that use the memory extensively, such as Collatz. For MW, one can just as well turn it down and save a few watts.
38) Message boards : Number crunching : 6990 - Who Is Going To Be First? (Message 46544)
Posted 11 Mar 2011 by Brickhead
Post:
ATI/AMD's dual-GPU cards have only one crossfire connector, so no more than two can be combined without extra monitors or dummy plugs. Besides, I prefer coolers that recycle as little hot air as possible inside the case. So I guess I'll stick with single-GPU cards a while longer.
39) Message boards : Number crunching : my 30 WU's have been "uploading" for almost 24 hours now (Message 46464)
Posted 6 Mar 2011 by Brickhead
Post:
I think that's BOINC's exponential back-off doing what it does best: preventing communication (way past the time when the project is back up again). A manual retry of one of the pending transfers ought to wake it up, though.
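The mechanism is roughly "double the retry delay after each failure, up to a cap, with some randomness". A minimal sketch (the one-minute base and four-hour cap are my assumptions for illustration, not BOINC's exact constants):

```python
# Minimal exponential back-off sketch. The base delay, cap and jitter
# range are illustrative assumptions, not BOINC's actual constants.
import random

def backoff_delay(failures, base_s=60.0, cap_s=4 * 3600.0):
    """Retry delay after `failures` consecutive failed transfers:
    exponential growth, capped, with jitter so that many hosts don't
    all hammer a recovering server at the same instant."""
    delay = min(cap_s, base_s * 2 ** (failures - 1))
    return random.uniform(0.5 * delay, delay)

# After many failures the delay sits at the cap, which is why a host
# can keep waiting hours after the server is already back up:
print(round(backoff_delay(10) / 3600, 1), "hours (at most 4.0)")
```

The cap is the culprit in the "stuck uploads" symptom: once a transfer has failed enough times, the client simply won't try again for hours unless the user clicks Retry.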
40) Message boards : Number crunching : Does Crossfire or Sli means more WUs crunching ? (Message 46019)
Posted 6 Feb 2011 by Brickhead
Post:
Short answers:

* Compared to a single GPU: Yes.
* Compared to the same number of GPUs enabled without the use of CF or SLI: No.

BOINC has the same potential as with F@H. But remember, you don't need CF or SLI to use multiple GPUs (except for games and other 3D rendering).

To try and sort a misunderstanding from another thread, there are at least two ways to enable multiple GPUs:

1. Disable CF/SLI, connect a monitor or a dummy plug (a connector with 3 resistors) to each card, and mirror or extend the desktop onto all of them. This yields performance scaling as good as it gets (but games will use only one card).

2. Enable CF/SLI. BOINC will still see separate GPUs and scale performance well, possibly (probably!) with a small overhead not used for crunching.

Even though multiple cards offer close to 100% performance scaling in BOINC (no matter how you enable them), the statement that CF/SLI isn't beneficial to BOINC has merit. Compared to the same number of GPUs enabled separately (option 1 above), CF/SLI offers no performance gain. More like marginally less performance. But it used to be far worse...

I have no experience with BOINC on SLI, but CF has come a long way since I kept sending Cluster Physik my CF test reports (which he deserves credit for even making sense of). Back then, two GPUs in CF would yield LESS crunching performance than one single GPU would, mainly because one GPU would fail to step up from idle clocks. In a later Catalyst driver release, that problem was solved, and the performance was SLIGHTLY better than a single GPU. The surplus computing potential would go unused: Low GPU usage, low power consumption. Nowadays, all that has been thoroughly ironed out by ATI/AMD, and GPU usage in CF is roughly on par with single GPUs. In fact, I notice less difference in MW WU times than the normal variation between WUs (only there are more WUs crunched simultaneously).



©2022 Astroinformatics Group