Welcome to MilkyWay@home

Posts by fractal

1) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 64617)
Posted 3 Jun 2016 by fractal
Post:
Your Tesla's time is probably suffering from the same problem as other Nvidia cards: MW does not fully load Nvidia GPUs. Check your GPU's load.
Try running multiple WUs at once.

(I'm wondering whether to allow all Nvidia cards to run multiple WUs for the benchmark table..........)

I can try running multiple units, but it looks like the card is fully loaded as it is:
Thu Jun  2 18:02:52 2016
+------------------------------------------------------+
| NVIDIA-SMI 355.11     Driver Version: 355.11         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M2090         Off  | 0000:01:00.0     Off |                    0 |
| N/A   N/A    P0   167W / 225W |    220MiB /  5375MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      2008    C   ...n_1.02_x86_64-pc-linux-gnu__opencl_nvidia   209MiB |
+-----------------------------------------------------------------------------+

edit: Running two work units at a time shaves 9 seconds off the run time and keeps both CPU cores fully utilized, but it does not have any appreciable effect on the Tesla:
Thu Jun  2 18:21:18 2016
+------------------------------------------------------+
| NVIDIA-SMI 355.11     Driver Version: 355.11         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M2090         Off  | 0000:01:00.0     Off |                    0 |
| N/A   N/A    P0   168W / 225W |    431MiB /  5375MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      2356    C   ...n_1.02_x86_64-pc-linux-gnu__opencl_nvidia   209MiB |
|    0      2364    C   ...n_1.02_x86_64-pc-linux-gnu__opencl_nvidia   209MiB |
+-----------------------------------------------------------------------------+
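
For anyone who wants to try the same thing: running two tasks per GPU in BOINC is normally set up with an app_config.xml in the project directory. The sketch below is a minimal example; the app name "milkyway" is an assumption here, so verify the separation app's exact name in client_state.xml.

<app_config>
  <app>
    <!-- assumed BOINC name of the separation app; check client_state.xml -->
    <name>milkyway</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task, so two tasks share one GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- reserve a full CPU core for each GPU task -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

After saving the file, restart the client or use "Read config files" in the BOINC Manager for it to take effect.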
2) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 64576)
Posted 27 May 2016 by fractal
Post:
I tested the following on a host with a Celeron G1610 @ 2.60 GHz running Linux with driver 355.11:

GTX 650 Ti (stock) - Average Run: 519.6 sec, Average CPU: 18.4 sec.

Tesla M2090 (stock) - Average Run: 84.2 sec, Average CPU: 77.5 sec.

I will retry the Tesla with a faster CPU to see whether the processor was limiting performance. Based on its DP rating, I would have expected it to land somewhere between an HD 5850 and an HD 7950.
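
Some rough numbers behind that DP remark, computed as shaders × clock × 2 (FMA) × the card's DP:SP ratio. The specs below are approximate figures from memory, so treat this as a sketch rather than a measurement:

# Rough peak double-precision throughput; the specs are approximate/assumed.
cards = {
    "Tesla M2090": (512,  1.300, 1 / 2),  # Fermi: DP at 1/2 the SP rate
    "HD 5850":     (1440, 0.725, 1 / 5),  # Cypress: DP at 1/5 the SP rate
    "HD 7950":     (1792, 0.800, 1 / 4),  # Tahiti: DP at 1/4 the SP rate
}
for name, (shaders, clock_ghz, dp_ratio) in cards.items():
    dp_gflops = shaders * clock_ghz * 2 * dp_ratio  # 2 flops per FMA
    print(f"{name:12s} ~{dp_gflops:5.0f} DP GFLOPS")

On paper that puts the M2090 (~665) between the HD 5850 (~420) and the HD 7950 (~715), so the 84-second result with 77 seconds of CPU time does look like the slow CPU, not the card, was the bottleneck.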
3) Message boards : Number crunching : Use only gpu (Message 57052)
Posted 27 Jan 2013 by fractal
Post:
Also, deselect "MilkyWay@Home N-Body Simulation". It claims to be a GPU application but is a CPU-only application that uses all the cores on your machine. At least, that is how it behaves under Linux.
4) Message boards : News : Nbody 1.04 (Message 56759)
Posted 6 Jan 2013 by fractal
Post:
The 1.04 work units behave very badly on a Linux machine:

boinc     3439  0.2  0.3  64572 15496 ?        Ss   18:57   0:02 ./boinc --allow_remote_gui_rpc --daemon
boinc     7164 35.3  0.0  14940  1820 ?        RNl  18:57   7:05  \_ ../../projects/volunteer.cs.und.edu_subset_sum/SubsetSum_0.11_x86_64-pc-linux-gnu 52 27 190600788427830 2203961430
boinc     7165 34.3  0.0  14940  1828 ?        RNl  18:57   6:54  \_ ../../projects/volunteer.cs.und.edu_subset_sum/SubsetSum_0.11_x86_64-pc-linux-gnu 52 27 190605196350690 2203961430
boinc     7166 35.1  0.0  14940  1820 ?        RNl  18:57   7:03  \_ ../../projects/volunteer.cs.und.edu_subset_sum/SubsetSum_0.11_x86_64-pc-linux-gnu 52 27 189606801822900 2203961430
boinc     7167 33.9  0.0  14940  1824 ?        RNl  18:57   6:48  \_ ../../projects/volunteer.cs.und.edu_subset_sum/SubsetSum_0.11_x86_64-pc-linux-gnu 52 27 189646473128640 2203961430
boinc     7645  326  1.1  77160 46400 ?        RNl  19:05  40:07  \_ ../../projects/milkyway.cs.rpi.edu_milkyway/milkyway_nbody_1.04_x86_64-pc-linux-gnu_mt__opencl_amd_ati -f nbody_parameters.lua -h histogram.t


Yes, it is taking all of 3 1/4 cores on a 4-core machine, leaving precious little for other work. Is this intentional behavior?
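
If it is not intentional, the thread count can usually be capped with an app_config.xml app_version override on newer BOINC clients. A sketch only: the app_name and plan_class below are guessed from the executable name (the real values are in client_state.xml), and it is an assumption that this nbody build honors --nthreads.

<app_config>
  <app_version>
    <!-- guessed from the executable name; verify in client_state.xml -->
    <app_name>milkyway_nbody</app_name>
    <plan_class>mt</plan_class>
    <!-- schedule the task as 2 CPUs and ask the app to run 2 threads -->
    <avg_ncpus>2</avg_ncpus>
    <cmdline>--nthreads 2</cmdline>
  </app_version>
</app_config>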
5) Message boards : Application Code Discussion : Recompiled Linux 32/64 apps (Message 41394)
Posted 11 Aug 2010 by fractal
Post:
PLEASE, PLEASE, PLEASE!!! make an ATI GPU 0.20 app for Linux x64. I'm pissed off using Windows... Right now I'm going to build a rig for MW, but if there is a chance to avoid this, it would be perfect :-)

And furthermore, there is an app for Linux at Collatz. What's the problem with doing the same trick for MW?


Please see http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=1475; once the code is available, I plan on attempting to compile a version for GNU/Linux.

7 months later ... any change? My Linux64/ATI boxes have been parked on Collatz for what seems like forever...
6) Message boards : Number crunching : Any 2/3 length ATI cards available for double precision? (Message 37984)
Posted 3 Apr 2010 by fractal
Post:
I measured one of my GT 240s and it is 6.8 inches.

I measured one of my HD 4770s and it is 8.25 inches. The PCI-E power plug extends further, but it would be above your heat sink. Not that it matters, as the baseboard alone won't fit.

Full-size cards are a pain in many cases when they run into the hard drives. The OP's motherboard is exceptionally crowded. Shorter cards are popular.

Have you considered using a riser? It does add to the cost, but this place offers a number of alternate ways to mount a video card in that slot. (Disclaimer: Google found them; I have never bought from them.)
7) Message boards : Number crunching : Linux/ATI client? (Message 37804)
Posted 27 Mar 2010 by fractal
Post:
Is a Linux/ATI client in the works? I picked up a few ATI cards that are on the supported hardware list and installed them in some of my Linux crunch boxes. They are processing Collatz units right now. You already have Windows/ATI and CUDA for both Linux and Windows. Are there any plans to release a Linux ATI client?
8) Message boards : Number crunching : credit comparison to other projects (Message 7447)
Posted 4 Dec 2008 by fractal
Post:
I have a Lenovo T7200/WinXP32 laptop that I use to test various projects. I have never run SETI but have run a fair spectrum of other projects. I keep a spreadsheet where I take the granted credit divided by CPU time to compute the PPD/core. Numbers are averaged over up to 10 WUs when available.

docking@home: 350 ppd/core
nqueens: 330 ppd/core
3x+1: 1200 ppd/core
sha-1: 450 ppd/core
simap: 500 ppd/core
superlink: 400 ppd/core
milkyway optimized client: 2600 ppd/core
milkyway 0.07: 875 ppd/core (only two WUs so far; seeing similar numbers for a Linux64 E4300)

assessment: credit appears generous.
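
For anyone reproducing the spreadsheet: PPD/core is just granted credit per CPU-second scaled up to a day. A minimal sketch; the sample numbers are made up for illustration.

# Points per day per core = granted credit / CPU seconds * seconds per day.
def ppd_per_core(granted_credit, cpu_seconds):
    return granted_credit / cpu_seconds * 86400.0

# Hypothetical WU: 30 credits granted after 3,000 CPU seconds on one core.
print(ppd_per_core(30.0, 3000.0))  # -> 864.0 PPD/core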
9) Message boards : Number crunching : Hard to get new work ! (Message 1257)
Posted 2 Jan 2008 by fractal
Post:
well there goes that theory...

Can't make it too simple to figure out :P

The only correlation I can see is that they all fail at a very similar byte count (0.32 kB). I have had 6/6 fail or 1/6 fail; rarely do I get 6/6 to succeed. Getting a full load of 20 usually takes a half dozen purges of the transfer list and the tasks list, and then an update.

But, such is the way with alpha projects. I am getting work while others who don't monitor their boxes aren't ;)
10) Message boards : Number crunching : Hard to get new work ! (Message 1218)
Posted 31 Dec 2007 by fractal
Post:
As banditwolf said, abort any WU that fails to complete downloading AND abort the transfer. As soon as I did that, I got my full 20 WUs.
Yes, but you have to be attentive. It appears that one in five is currently failing at 30% downloaded with an http error that never recovers and blocks additional work from the project.

I have cleared them three times already today, on different machines on different networks, so it does not appear to be a problem on my side of things.

The good news is I can get about an hour's worth of work when I go through the process. The bad news is I have to go through the process every hour or so to keep the machines crunching.




©2024 Astroinformatics Group