41)
Message boards :
Number crunching :
Rate the following GPUs please
(Message 46005)
Posted 6 Feb 2011 by Brickhead Post: ...because it sounds cool... Admittedly, I failed to guess which reference you were comparing CF/SLI to. But I don't think that really warrants the insinuation that any decision to team GPUs is based solely on perceived 'coolness'. |
42)
Message boards :
Number crunching :
Rate the following GPUs please
(Message 46003)
Posted 6 Feb 2011 by Brickhead Post: I have seen the opposite. I have 3 GPUs in one box and 2 GPUs in 2 other boxes. I have seen NO gains in either SLI or CF. I have actually seen the opposite, as it seems to take more overhead for the SLI/CF bridge. I can actually allot some CPU cores to CPU projects without penalty. Many have also experienced driver issues with some WUs in some projects. You can run however you want, but SLI/CF isn't a magic tool for increased credit because FPS is meaningless in BOINC.
If you haven't connected a monitor or a "dummy plug" to each GPU, then Crossfire is the only way to enable all the GPUs (edit: on ATI/AMD cards). Other than that, BOINC couldn't care less about CF. But it can only use enabled GPUs, so CF is the way to go for many of us. Here's an example that does use CF: http://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=215206 |
43)
Message boards :
Number crunching :
Message from server Your CPU lacks SSE2
(Message 45780)
Posted 25 Jan 2011 by Brickhead Post: According to the Applications Page you need an SSE2-capable CPU for N-Body Simulation WUs. That's entirely possible, judging by the available computers. JUGA, one of your machines doesn't support SSE2: http://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=212861 (I believe Family 6 Model 7 is the Pentium III Katmai) ...while the other one does: http://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=212870 |
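For anyone wondering how to check a machine themselves: on Linux, the "flags" line in /proc/cpuinfo lists the CPU's feature bits, and sse2 either appears there or it doesn't. A minimal sketch (the helper name and the sample flag strings are mine, for illustration):

```python
def has_sse2(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo-style text lists sse2."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            return "sse2" in line.split()
    return False

# A Pentium III (Katmai) reports sse but not sse2:
katmai = "flags : fpu vme de pse tsc msr mce cx8 sep mtrr pge mca cmov pat pse36 mmx fxsr sse"
# A Pentium 4 or any later x86 CPU reports sse2 as well:
p4 = "flags : fpu vme de pse tsc msr mce cx8 sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2"

print(has_sse2(katmai))  # False
print(has_sse2(p4))      # True
```

On a real machine you'd feed it `open("/proc/cpuinfo").read()`; Windows users would need a different check.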
44)
Message boards :
News :
bypassing server set cache limits
(Message 45744)
Posted 24 Jan 2011 by Brickhead Post: You have two good points, Travis, but also two different ones. In order to prevent the clients from crunching old data, the report deadline ought to be short. Presently, that's 8 days. In order to minimise the number of work units the server needs to keep track of, you can limit the number of cached work units per core. Presently that limit is 6. My fastest rig (as an example) is allowed a cache of 72 WUs. That's equivalent to ~23 minutes. When that runs out, it gets work from a backup project. Hours and hours of work (much more than my set preferences suggest, for some reason). While this lot is worked on, new MW WUs usually trickle in after a few minutes, just sitting there getting older before BOINC is done with the other project's work. (This FIFO behaviour might change in a future release of the BOINC core client.)
A small per-core cache is good (not really, but there aren't many better ways) for limiting the server load. But for keeping crunched data recent, it's actually a little counter-productive. As long as the N-body WUs are CPU-only, how about shortening the deadline on the separation WUs (like 1/4 or 1/8 of the current value) and reserving them for GPUs? If you also increase the separation WUs' per-core limit by 1, GPU caches will run out noticeably less often. Admittedly this will increase the server load slightly when everyone behaves, but it will also enable BOINC's built-in mechanisms to work better towards discouraging ridiculous caches on those misbehaving clients. |
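To make the arithmetic above explicit (a sketch using only the figures from the post; the 12-core count is inferred from 72 WUs at 6 per core, and the per-WU time is derived, not measured):

```python
# Cache-drain arithmetic for the "fastest rig" example in the post.
per_core_limit = 6            # server-side cache limit per core
cores = 12                    # inferred: 72 cached WUs / 6 per core
cached_wus = per_core_limit * cores          # 72 WUs allowed in the cache
cache_minutes = 23                           # "equivalent to ~23 minutes"
seconds_per_wu = cache_minutes * 60 / cached_wus

print(f"{cached_wus} WUs, roughly {seconds_per_wu:.0f} s per WU on the GPU")
```

Which is why a fast GPU host burns through its whole allowance in well under half an hour and falls back to the backup project.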
45)
Message boards :
Number crunching :
CPU usage
(Message 45704)
Posted 22 Jan 2011 by Brickhead Post: Ah, I see. Performance-wise, I think one multi-threaded task can be just as efficient as several single-threaded ones. In other words, it can run four times as fast on four cores as it would on one. However, I've seen at least one example where multi-threaded means incompatible: The GPU_ATI_5k science app over at DNETC. That one tries to use all available graphics processors, and although some have claimed success on two cards after a clean install of some driver versions, I've never gotten anything but errors on my four. My point is, when BOINC is so totally geared towards distributing tasks among several processors (be that GPUs or CPU cores), why do they bother? (Like... If you have four kids successfully riding to school each on their own bicycle, why would you even try to weld all those bicycles together?) Oh well, back on track, I don't think you'll see any performance hit from having all four cores serve one N-Body WU. |
46)
Message boards :
Number crunching :
CPU usage
(Message 45695)
Posted 22 Jan 2011 by Brickhead Post: There is a way, but I don't quite understand what the problem is. If you were to limit CPU usage to 1 core (25% relative or 1 absolute in your preferences), surely that would limit WU processing? If, however, you want to prevent N-Body WUs from hindering GPU work, that's already taken care of: The small portion of CPU work that the GPU WUs need takes priority over CPU WUs. |
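For clarity, the "25% relative or 1 absolute" equivalence above assumes a quad-core machine. A sketch of how a "use at most N% of the processors" preference maps to a core count (my own helper; whether BOINC rounds exactly this way is an assumption, but it never drops below one core):

```python
import math

def usable_cores(ncpus: int, use_at_most_pct: float) -> int:
    """Map a 'use at most N% of the processors' preference to a whole
    number of cores, rounding down but never going below one core."""
    return max(1, math.floor(ncpus * use_at_most_pct / 100.0))

print(usable_cores(4, 25))   # a quad-core at 25% -> 1 core
print(usable_cores(8, 25))   # an 8-thread CPU at 25% -> 2 cores
```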
47)
Message boards :
Number crunching :
I don't know about ya'll but......
(Message 45461)
Posted 9 Jan 2011 by Brickhead Post: cncguru wrote: This has allowed my new HD6970's to really start producing! And at pretty decent clocks too, seemingly without TDP-related throttling. Liquid cooled? |
48)
Message boards :
Number crunching :
Software confusion with multiple GPUs
(Message 43320)
Posted 30 Oct 2010 by Brickhead Post: Seems like the science app is somewhat more enlightened than the BOINC core client: instructed by BOINC client to use device 1 My guess is that BOINC only detects the number of CAL devices and the type of the first one, and then gives instructions that the science app must correct. I think a cc_config file can limit BOINC to use only one GPU, but I'm no expert there (have never used one). On the other hand the 5870 is more than capable of running two WUs simultaneously. (If you get them slightly staggered in time after a while, it's actually beneficial because it will keep the GPU busy when it would otherwise be "between WUs".) |
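As a concrete sketch of that cc_config.xml idea (from memory, unverified by me, so do check the official BOINC client-configuration docs): I believe clients of that era supported an ignore_ati_dev option to hide a given ATI device index from BOINC, something like:

```xml
<cc_config>
  <options>
    <!-- Hide ATI device 1 from the BOINC client, leaving only device 0
         in use; device numbering follows BOINC's own detection order. -->
    <ignore_ati_dev>1</ignore_ati_dev>
  </options>
</cc_config>
```

The file goes in the BOINC data directory, and the client rereads it on restart (or via "Read config file" in the Advanced menu, if I recall correctly).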
49)
Message boards :
Number crunching :
Software confusion with multiple GPUs
(Message 43315)
Posted 30 Oct 2010 by Brickhead Post: What is it you want to achieve - run only one WU at a time? The 5450 won't get any MW work whatever you do, according to the GPU Requirements. |
50)
Message boards :
News :
number of unvalidated results climbing
(Message 43309)
Posted 30 Oct 2010 by Brickhead Post: If I'm really lucky, the validator might (eventually) even get around to a couple of results that have been waiting since April ;) http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=9521 |
51)
Message boards :
Number crunching :
Aaargh! Server out of new work!
(Message 39982)
Posted 27 May 2010 by Brickhead Post: Could it be that http://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=171705 is soaking it all up? That host's task count has now increased to 2920 :( |
52)
Message boards :
Number crunching :
Hardware Poll
(Message 38549)
Posted 10 Apr 2010 by Brickhead Post: Asus P5Q Deluxe Asus P5Q Pro Turbo Asus P5K Pro |
53)
Message boards :
News :
Visualization/Screensaver Work
(Message 36543)
Posted 14 Feb 2010 by Brickhead Post: My choice is idea 2. I imagine a kind of Star Trek command screen where continuous data streams appear. Oooh, you mean the LCARS interface look? http://www.lcarscom.net/ |
54)
Message boards :
Number crunching :
Down for maintenance?
(Message 36389)
Posted 10 Feb 2010 by Brickhead Post: Strange thing is, the current outage started more than 5 hours *before* Travis made his latest news post. |
55)
Message boards :
News :
Finally fixed user of the day scripts :P
(Message 36150)
Posted 30 Jan 2010 by Brickhead Post: The chaos in Opera might have something to do with this website, like every other BOINC website I've checked, failing to validate over at W3C: http://validator.w3.org/check?uri=http%3A%2F%2Fmilkyway.cs.rpi.edu%2Fmilkyway%2F&charset=%28detect+automatically%29&doctype=Inline&group=0 But the UotD script does seem to work now :) |
56)
Message boards :
Number crunching :
HD4830 + HD4770 Performance stats
(Message 36141)
Posted 29 Jan 2010 by Brickhead Post: 4870 @ 800MHz (1 of 2 in CF) Actually, that one is a single card. (Fingers quicker than brain, sorry.) |
57)
Message boards :
Number crunching :
HD4830 + HD4770 Performance stats
(Message 36132)
Posted 29 Jan 2010 by Brickhead Post: The last (I think?) validated WUs from some of mine:
5850 @ 800MHz (single) GPU time: 102.461 seconds, wall clock time: 104.417 seconds
4890 @ 950MHz (1 of 2 in CF) GPU time: 150.716 seconds, wall clock time: 152.416 seconds
4890 @ 900MHz (1 of 2 in CF) GPU time: 158.782 seconds, wall clock time: 160.257 seconds
4870 @ 800MHz (1 of 2 in CF) GPU time: 180.025 seconds, wall clock time: 181.822 seconds
4850 @ 700MHz (1 of 2 in CF) GPU time: 206.234 seconds, wall clock time: 207.998 seconds |
58)
Message boards :
Number crunching :
NVIDIA GPU Bug?
(Message 35693)
Posted 15 Jan 2010 by Brickhead Post: It seems the "coming soon" part of the BOINC wiki has been replaced (albeit a little too late in this case). Projects with NVIDIA applications: But I think one ought to distinguish between BOINC (the platform) and Milkyway (the project). They are not one and the same. Furthermore, while it is unfortunate that the BOINC wiki didn't tell the whole story, it is generally better to check with the owner of the hardware requirements (the project, see above) than to assume all is well because the general info didn't mention any project specifics. Assumption is, after all, the mother of all ... Not that any of this makes the purchase in question any less frustrating, though :( |
59)
Message boards :
Number crunching :
GPU usage Question.
(Message 35607)
Posted 13 Jan 2010 by Brickhead Post: Use these instead, they're located under MilkyWay@home preferences. Use NVIDIA GPU if present (enforced by 6.10+ clients) Use ATI GPU if present (enforced by 6.10+ clients) As these are per-project values, you need to make a similar choice for any other GPU-enabled project you participate in. |
60)
Message boards :
Number crunching :
Shutting down & good bye !
(Message 33166)
Posted 9 Nov 2009 by Brickhead Post: Given the fact that you've decided to stop crunching, I think it's nice of you to make the effort to say goodbye to the rest of us. Happy trails! |