Welcome to MilkyWay@home

Posts by hericks

1) Message boards : Number crunching : Is there a server-limited maximum at 100 CPU workunits? (Message 69427)
Posted 7 Jan 2020 by hericks

This has apparently been lifted to 300, but even that is sucked through the GPU within 2.5 hours.

I think in times when (halfway) modern PCs with a (halfway) modern GPU have more power than an IBM ASCI Red, the world's fastest supercomputer at the turn of the millennium, the work units should be larger.

This would probably drive some old hardware out of the game, but that would certainly also be a more sustainable move.

2) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 69426)
Posted 7 Jan 2020 by hericks

2nd measurement:

After checking the error logs I saw that each package consists of several jobs that require some CPU work in between: apparently around 1.6 seconds, several times per package, on my machine for the 40+ second packages, and somewhat less for the 31-37 second ones. That actually explains the difference.
It has something to do with the star count in the package: the fast ones had 10 stars, the slow ones some 40k odd.

I currently get 300 packages per batch, and it usually took 3:08 hours to finish them. That is 37.6 seconds per unit effectively, for units reported to take 36.6 seconds, so there was about one additional second of latency in there somewhere.
With 2 concurrent tasks on the GPU the game looks completely different. Of course each unit now takes longer to compute, but the 300-unit batch completes in 2:28 hours. That is 40 minutes faster, with an effective average of 29.6 seconds per unit.

So it gains around 20%, which is quite a lot. I have not reassessed the power consumption though, which at 150 W was lower than expected. I would be surprised if it did not also go up by some 20%, to around 180 W.
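The batch arithmetic above can be checked quickly; this is just a worked recalculation of my own numbers (300 units per batch, 3:08 vs. 2:28 hours), not anything from the project itself:

```python
def sec_per_unit(hours, minutes, units=300):
    """Effective wall-clock seconds per work unit for one batch."""
    return (hours * 3600 + minutes * 60) / units

single = sec_per_unit(3, 8)    # one task on the GPU  -> 37.6 s/unit
double = sec_per_unit(2, 28)   # two concurrent tasks -> 29.6 s/unit
gain = 1 - double / single     # fractional throughput gain, ~0.21

print(single, double, round(gain * 100, 1))  # 37.6 29.6 21.3
```

So the gain is closer to 21% than 20%, but the rounded claim holds.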

3) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 69423)
Posted 6 Jan 2020 by hericks

Happy New Year!

My values are between 31 and 43 seconds with dual E5-2630 v2 CPUs and an R9 280X; the average over 100 packages is 36.66 seconds.

However, the deviation between packages, from 31 to 43 seconds, is neither random nor, as I first thought, dependent on other stuff going on on the workstation.

All work units from de_modfit_86_bundle4_4s_south4s_bgset_* took more than 40 seconds, and only these; they have 227.53 points per unit. The 31-second units came from de_modfit_14_bundle4_testing_3s4f_* (227.12 points), and the ones usually taking 36-37 seconds came from de_modfit_14_bundle5_testing_4s3f_* (227.52 points), with some fluctuation.

So it seems that the 227-odd-point work units are not very uniform; some of them take 30% longer than others. To compare them, the packages need to come from the same bundle.

It seems that work unit generation for each of the bundles/projects happens in parallel; at least, when sorting by work unit ID they are all mixed to some extent. The best 5 consecutive units in this 100-unit sample averaged 31.558 seconds, the worst 40.274.

So for a fair comparison, either more samples need to be averaged, or the bundle they came from needs to be mentioned.
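To make the per-bundle comparison concrete, here is a hypothetical sketch of how one could group reported run times by bundle name before averaging (the helper and the sample numbers are my own illustration; only the bundle name patterns come from the results page):

```python
from collections import defaultdict

def mean_by_bundle(results):
    """results: list of (workunit_name, seconds).

    The bundle is taken to be everything before the trailing
    per-unit suffix, i.e. the name up to the last underscore.
    """
    groups = defaultdict(list)
    for name, secs in results:
        bundle = name.rsplit("_", 1)[0]
        groups[bundle].append(secs)
    return {b: sum(v) / len(v) for b, v in groups.items()}

# Illustrative sample, not real measurements:
sample = [
    ("de_modfit_86_bundle4_4s_south4s_bgset_1", 41.2),
    ("de_modfit_86_bundle4_4s_south4s_bgset_2", 40.5),
    ("de_modfit_14_bundle4_testing_3s4f_7", 31.1),
]
print(mean_by_bundle(sample))
```

Averaging within each bundle like this avoids mixing the slow bgset units into the same mean as the fast testing units.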

I also checked power consumption: crunching m@h on the GPU adds around 140-150 W for the R9 280X. This is way below the 250 W TDP; I assume this is partly due to the tiny amount of memory used on the card.

It would be interesting to see how this looks on other platforms, like the Radeon VII.

Also, has anyone tested a Tesla K40? I would be interested in how it performs.


©2023 Astroinformatics Group