Posts by Arivald Ha'gel

41) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 63503)
Posted 3 May 2015 by Arivald Ha'gel
Post:
My earlier posts explained it a little, at least as far as I know.
I think that voltage overclocking is impossible now; it is locked by vendors/manufacturers for good.
As for memory underclocking, I have the identical issue. Try not to go more than 150 MHz below the core clock.
So with a 1200 MHz core clock, you need to keep memory at about 1050 MHz. With an 1100 MHz core, memory cannot be set below 950 MHz.
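
As a quick Python sketch of that rule of thumb (my own observation on the R9 280X, not a vendor spec):

def min_mem_clock_mhz(core_mhz, max_gap_mhz=150):
    # Memory clock should stay no more than ~150 MHz below the core clock.
    return core_mhz - max_gap_mhz

print(min_mem_clock_mhz(1200))  # -> 1050 MHz
print(min_mem_clock_mhz(1100))  # -> 950 MHz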
42) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 62073)
Posted 21 Jul 2014 by Arivald Ha'gel
Post:

GeForce GTX Titan is 1.5 times better than R280X in double precision, but at 3 times the cost.

They sadly do very poorly in MW, see my 1st benchmarking thread.


Yeah. I wonder if the GTX Titan can then do 5 tasks at the same time without crashing :) (it was stated that only 18% GPU utilization was seen).

Still... I bought the R9 280X strictly because it has the best double-precision GFLOPS/W. Seems I wasn't wrong :)
43) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 62067)
Posted 21 Jul 2014 by Arivald Ha'gel
Post:
http://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units

Searching for "Double precision" gives the GFLOPS value for double-precision calculations.

The R9 280X has up to 1024 double-precision GFLOPS.

The GeForce GTX 760 Ti has about 100 GFLOPS (10 times less)...

http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

GeForce GTX Titan is 1.5 times better than R280X in double precision, but at 3 times the cost.


And a Titan costs $999.99, while a 280x costs $299.99, and a 760 costs only about 200 bucks, all at Newegg. If I had about a billion bucks I would build a quantum machine and not worry about costs, but in the real world that isn't a possibility.


The GeForce GTX 760 gives 94 GFLOPS. As already said, 10 times less for about $100 less. My cards were cheaper than $299.99 since I bought second-hand cards; I paid $250.
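
To put the value for money in one place, here is a quick Python sketch using only the figures quoted in this thread (the Titan's DP GFLOPS is inferred from "1.5 times better than R280X", and the 760 price is taken as roughly $200):

cards = [
    # (name, double-precision GFLOPS, price in USD at Newegg, per this thread)
    ("GTX Titan", 1.5 * 1024, 999.99),  # DP inferred from "1.5x the R280X"
    ("R9 280X",   1024.0,     299.99),
    ("GTX 760",   94.0,       199.99),  # "about 200 bucks"
]

for name, gflops, price in cards:
    print(f"{name:9s} {gflops / price:4.2f} DP GFLOPS per dollar")

The R9 280X comes out around 3.4 DP GFLOPS per dollar, the Titan around 1.5, and the 760 under 0.5.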

If one needs a new card: the Radeon R9 270 will be cheaper and will have better single- and double-precision GFLOPS than the 760. Some projects also have problems with multiple NVidia cards.

However, even I have thought about buying a single nVidia card, since some projects do not support OpenCL (yet!). I would indeed buy a 760, 770, or a used 780, but I would only use it for single-precision computing.

Right now I'm waiting for an OpenCL application for Asteroids@Home :)
44) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 62065)
Posted 20 Jul 2014 by Arivald Ha'gel
Post:
http://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units

Searching for "Double precision" gives the GFLOPS value for double-precision calculations.

The R9 280X has up to 1024 double-precision GFLOPS.

The GeForce GTX 760 Ti has about 100 GFLOPS (10 times less)...

http://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

GeForce GTX Titan is 1.5 times better than R280X in double precision, but at 3 times the cost.
45) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 62061)
Posted 19 Jul 2014 by Arivald Ha'gel
Post:
The Radeon R9 280X has the best double-precision performance per watt available. For MilkyWay it is the best.
46) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61995)
Posted 2 Jul 2014 by Arivald Ha'gel
Post:
I have a little over 200 days dedicated to SETI@home classic.
When SETI switched to the BOINC platform I stopped computing for several years (about 6), mostly because my laptop at the time was insufficient for such a task.
47) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61952)
Posted 26 Jun 2014 by Arivald Ha'gel
Post:
and if you click on my name you will see I started Seti on 17 Dec 1999


I even crunched old SETI@home workunits :)

A server is ALWAYS in charge, which is why I now run the 64bit version of Windows Home Server on one of my pc's.


I have a router with a DHCP server :) It's OK :)

I bet most people who only have 1 old rig don't crunch 24/7 & so the bill would be far less. Also I really doubt you'd be able to burn $100/mth on an old rig anyway, even crunching 24/7.


I have two rigs... well... one notebook and one rig :) The notebook is no longer used to crunch.

Also a user may not be able to afford, or want to spend, a lump sum of $100 on an upgrade, whereas the electricity cost would be spread over many months (assuming they don't run 24/7).


The least efficient PC in my first post crunches constantly.

Re banning certain CPUs, whilst I agree some old ones are massively inefficient, if the user has only 1 CPU then banning that CPU will ban them.


As far as I understand the server configuration, it would only ban the work dedicated to that CPU.

It would be interesting to know what % of MWs total crunching power is given by 'old' CPUs.


That is exactly what I suggested - to check first.

Btw, did you use TDP for CPU wattage? Did you know that's not the same as a CPU's power draw?


Yes and no. But yes... I assumed TDP is wattage. I know it is not precise, but I see that the TDP of my Radeon is within ~5% of its power draw from the wall.
48) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61936)
Posted 23 Jun 2014 by Arivald Ha'gel
Post:

I think the 'flaw' in your logic is that you assume that everyone that crunches for Boinc, and has an ultra fast pc, wants to ALSO crunch for Milkyway.


I think that I have never had such a thought.

As an example I have already mentioned Collatz Conjecture. There are many users and hosts that do work for that project and no other. Many are top notch. I'm an old BOINC user (over 10 years now? crunching every day for the last 3 years and 4 months), and I use BOINCstats. I seem to know more than you think I do.
49) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61932)
Posted 22 Jun 2014 by Arivald Ha'gel
Post:
Your choice is not my choice.


That is exactly why no one is forcing people to run BOINC. That would be taking away people's ability to choose.
Banning a CPU (not the PC with that CPU) does not prevent choice. It just redirects that CPU to a different project, where it can make a more significant contribution. One can still run BOINC. One can still run BOINC for this project, just on a better PC.

Oh, and if you want power efficiency, stop running BOINC and turn your system off.


Not really. Those calculations are scientifically relevant, or at least that is what I hope is true. Collatz Conjecture, for example, is IMHO not, since I'm almost certain that the conjecture they want to disprove is correct.

Supercomputers are not better in power efficiency, since they currently use consumer-grade equipment, and they also require massive cooling/ventilation/climate-control systems.
Thus 1000 PCs on the same level as mine would be more efficient than many of the TOP500 supercomputers, and also more powerful.

Please use logic in your posts. Right now I see just "flames"...
50) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61885)
Posted 12 Jun 2014 by Arivald Ha'gel
Post:
There is an option for this in the BOINC project configuration. That is a fact.
So... not never; of this I'm sure.

And I'm not saying that only the best CPUs/GPUs should stay. That would make BOINC elitist.

I never expected that a three/four-generation-old, i.e. 7-year-old, GPU would be considered elite... that is, at the least... surprising.

Also, I'm not saying (and never will say) that BOINC should ban CPUs. I was talking about MilkyWay@Home. Please do not extrapolate from what I say/write, since those are, as far as I can see, mostly incorrect extrapolations. Even more so since I have said that those crunchers would just switch projects.
51) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61882)
Posted 12 Jun 2014 by Arivald Ha'gel
Post:
As far as I understand, a single PC can have a single user.
A single user can have multiple PCs.

So I'm talking about banning a PC, not a user.
If the user changes PCs, he will be able to participate.

Also... I'm talking about banning CPUs, not PCs.
A user with an Athlon 4400+ can just buy a three-generation-old GPU (ATI Radeon HD 4890) or even a four-generation-old GPU (ATI Radeon HD 38x0). The cost is marginal - less than $100.

If someone doesn't have $100 on hand, why should he spend $100 more each month on electricity?

Some CPUs should be banned. The cost of an upgrade is less than one month's (or at most two months') cost of use.
52) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61875)
Posted 11 Jun 2014 by Arivald Ha'gel
Post:
suppose they had adopted your ideas and in 2 years they told YOU to hit the road because YOU no longer measure up, what would you do?


Also, I think it should be about my EQUIPMENT, not about me.

This should be clear: I'm not speaking about banning USER(s). I'm speaking about banning certain Family/Model/Stepping combinations of CPUs.

It would not tell anyone to hit the road. I would just suggest that if (s)he would like to continue participating, (s)he should retrofit his/her rig or just switch to newer equipment. Even equipment 2-3 years newer will be good for another year or two if we go down this road, and will cost nothing (most people will give away 6-year-old desktop equipment for pennies or for free). And it will be more power efficient.
53) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61873)
Posted 11 Jun 2014 by Arivald Ha'gel
Post:

If a single core pc can finish a unit in 10 days, it is one less unit the rest of us have to crunch, they are helping, and in the big picture that is a good thing. Sure your i7 did a hundred units in those same 10 days, or a thousand units, but who cares you didn't have to do THAT unit too.


Looking only at this specific project - it is a good thing.
In the big picture it is stupid :)

Electricity to power my rig crunching MilkyWay@Home for a month costs ~1/25 of the rig's cost. But my equipment is top notch and among the very best in efficiency (as stated in my first post).

A (possibly second-hand) rig with a single two-generation-old GPU (e.g. an HD 5870?) would cost less than 10 times its monthly electricity cost, and would crunch ~100 times more work than such an Athlon, even though it would use twice as much electricity. Oh well... it could sit idle every second day to ease the electricity cost...

IMHO people sometimes need a slap on the back that says: "you're using 10-year-old equipment which is NO LONGER effective for BOINC apps. Please spend 10 times your current electricity bill and upgrade to equipment 100 times more efficient. In the long run it will cost LESS!"
The Athlon 4400+ was released in 2005. In the long run those people will pay far LESS for the same amount of credits if they upgrade.

I would be able to do a month's worth of Athlon 64 X2 4400+ work in a 50-minute crunching spree, and I would use 300 times less electricity. (A quick sketch of the underlying electricity math follows.)
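
For anyone who wants to redo the electricity math, a back-of-the-envelope Python sketch (the $/kWh rate is my assumption; plug in your own tariff):

def monthly_cost_usd(watts, rate_per_kwh=0.15, hours=24 * 30):
    # Monthly cost of 24/7 crunching: watts -> kWh over a month -> dollars.
    return watts / 1000.0 * hours * rate_per_kwh

print(monthly_cost_usd(110))  # Athlon 64 X2 4400+ at its 110 W TDP: ~$11.9
print(monthly_cost_usd(570))  # dual R9 280X rig at ~570 W from the wall: ~$61.6

The absolute dollar figure depends entirely on the local tariff; the point is the cost per credit, where the old CPU loses by orders of magnitude.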

suppose they had adopted your ideas and in 2 years they told YOU to hit the road because YOU no longer measure up, what would you do?


I have the most efficient GPU at this moment. I doubt that even in 5 years anyone will have equipment 300 times more efficient available within a reasonable budget.
But if it happens, I WILL upgrade. As stated above, I use my brain. I would see the advantage in spending less... I hate to pay more than I need to.
54) Message boards : Number crunching : How it works? (Message 61871)
Posted 11 Jun 2014 by Arivald Ha'gel
Post:
Results are uploaded during reporting. Check your profile -> tasks and click on a completed task (left-most link). The program output will be there.
55) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61863)
Posted 10 Jun 2014 by Arivald Ha'gel
Post:
I can't see where I said that this would apply to EVERY project.
Even the topic subject contains a condition :)

I just see that most (90% < x < 98%) of CPUs are not efficient when it comes to calculating MilkyWay@Home (and most likely Separation) tasks.

These can be excluded from those sub-projects to free them up for work where they are needed - as you have said, some projects will never have GPU support.

When it comes to DistrRTgen, one would notice that GPU support is currently broken :) bug fix pending :)

Also, each of my GPUs has 4 GB of GDDR5 RAM... so I think that 99% of current tasks that fit on a CPU will easily fit on a GPU :)

Like I said, some old CPUs should be banned. If their efficiency is 300 times worse than my single GPU's, it means my PC is doing the work of 600 such PCs... those PCs would be a gigantic waste of electricity...

IMHO currently only top AMD CPUs, Intel i7s (or i5s), and some Xeons have the efficiency to be used effectively.

This could be done in the following manner (see the sketch below):
The system picks the slowest CPU every week.
Someone checks its TDP and thread count.
If it is deemed inefficient, ban it.
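
A rough Python sketch of that weekly check (the 500 kJ floor is a placeholder of mine, not project policy):

def weekly_ban_candidate(hosts, floor_kj=500.0):
    # hosts: list of (model, tdp_watts, threads, avg_runtime_seconds)
    model, tdp, threads, secs = max(hosts, key=lambda h: h[3])  # slowest CPU this week
    energy_kj = tdp / threads * secs / 1000.0                   # check TDP and thread count
    return model if energy_kj > floor_kj else None              # ban if deemed inefficient

hosts = [
    ("Athlon 64 X2 4400+", 110, 2, 31167.34),
    ("i5-3570K",            77, 4,  2762.16),
]
print(weekly_ban_candidate(hosts))  # Athlon 64 X2 4400+ (~1714 kJ > 500 kJ)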

It should not be done much more often, in order not to create congestion (since those CPUs would most likely migrate quickly to other projects) or a rapid decrease in project processing speed, and simply to allow monitoring of the transition :)

It is for the project to decide, but I think it should be done.
56) Message boards : Number crunching : CPU/GPU Comparison (do we need CPU apps when GPU app is available) (Message 61855)
Posted 9 Jun 2014 by Arivald Ha'gel
Post:
Hello,

I'd like to discuss an issue.
There are still many CPU-only applications in the BOINC world. Should the project allow CPU computation for applications that have GPU counterparts?
CPUs are 5-10+ times less power efficient, so it might be a good idea to redirect CPUs to projects that do not have GPU applications (yet, or in some cases ever, since some tasks are not that parallelizable).

My GPU (AMD Radeon R9 280X - 1.125 GHz core / 975 MHz memory) takes 285 W from the wall (a system with two of them takes ~570 W with both R9 280Xs at full load, no CPU load, and everything else on low power). Taking into account power-supply loss, some CPU wattage, the SSD, fans, mainboard, etc., I can assume the R9 280X is indeed rated for 250 W max (and I use all of it).

My GPU does a MilkyWay@Home workunit in ~23.1 s.

Energy requirements for a single MW@H workunit (TDP divided by thread count, multiplied by run time):
R9 280X: 250W * 23.10s = 5.8kJ
i3-530: ( 73W / 4) * 9682.71s = 176.7kJ
i5-2400: ( 95W / 4) * 3129.85s = 74.3kJ
i5-2400: ( 95W / 4) * 3110.54s = 73.9kJ (shohhh-----h)
i5-3570K: ( 77W / 4) * 2762.16s = 53.2kJ (CANTV)
i7-3930K: (130W / 12) * 3521.26s = 38.2kJ (DutchDK)
i7-3930K: (130W / 12) * 4583.91s = 49.7kJ (ratibor)
C2D T7100: ( 35W / 4) * 7704.83s = 67.4kJ
C2Q Q6600: (105W / 4) * 5889.53s = 154.6kJ
C2Q Q6700: (105W / 4) * 4719.61s = 123.9kJ (Matt)
T5600: ( 34W / 2) * 7045.18s = 119.8kJ (paris)
E5400: ( 65W / 2) * 4933.48s = 160.3kJ
C2D E6400: ( 65W / 2) * 6317.05s = 205.3kJ (Night Owl)
E5620: ( 80W / 8) * 4575.28s = 45.7kJ (wiffle)
X5570: ( 95W / 16) * 3376.00s = 20.0kJ

Athlon x64 X2 DC 4400+: (110W / 2) * 31167.34s = 1714.2kJ (Byrnes)

These are extracted from the oldest not-yet-validated MilkyWay@Home workunits in my "to-validate" queue.
The last CPU is the prime example of wasted energy... (300+ times less efficient than my GPU).
The best example, DutchDK, is ~7 times less efficient than my GPU, although it might be overclocked, in which case its real power draw would be greater than its TDP...
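
For anyone who wants to check or extend the table, a minimal Python sketch of the formula I used (TDP split evenly across threads, GPU assumed at its full 250 W):

def workunit_energy_kj(watts, threads, seconds):
    # Per-workunit energy in kJ: (device watts / threads) * run time.
    return watts / threads * seconds / 1000.0

print(workunit_energy_kj(250,  1,    23.10))  # R9 280X       -> ~5.8 kJ
print(workunit_energy_kj(130, 12,  3521.26))  # i7-3930K      -> ~38.1 kJ
print(workunit_energy_kj(110,  2, 31167.34))  # Athlon 4400+  -> ~1714 kJ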

If MilkyWay@Home does not decide to remove CPUs from computation entirely, then I would at least suggest removing certain CPUs, possibly via a project-configuration element (<ban_cpu>regexp</ban_cpu>).
IMHO the Athlon 4400+ should be forbidden from doing any work for this project... :/
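
To be clear, <ban_cpu> is my own hypothetical element, not (as far as I know) an existing BOINC option. A Python sketch of how such a regexp could be matched against the CPU model string a host reports:

import re

# Hypothetical ban pattern matched against the reported CPU model string.
BAN_CPU = re.compile(r"Athlon.*64 X2.*4400\+")

def cpu_banned(model: str) -> bool:
    return bool(BAN_CPU.search(model))

print(cpu_banned("AMD Athlon(tm) 64 X2 Dual Core Processor 4400+"))  # True
print(cpu_banned("Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz"))        # False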
57) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 61837)
Posted 4 Jun 2014 by Arivald Ha'gel
Post:
I also see possible improvement on the CPU side (for my PC - sadly, I have a top AMD CPU).
My MilkyWay@Home tasks take 1.7-1.9 s CPU time (23.1 s total).
My Short Separation tasks take about 4.7-5.2 s CPU time (26.1-27.1 s total).
My Long Separation tasks take about 8.2-8.6 s CPU time (49.1-50.1 s total).

(notice the .1 pattern here?)

For GenuineIntel Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz [Family 6 Model 58 Stepping 9] (4 processors)
(currently TOP 2), the times are:
MilkyWay@Home 1.1-1.65 s.
Short Separation 2.1-2.3 s.
Long Separation 3.3 s.

So with a better CPU, Long Separation runs would be the most "profitable" on the R9 280X.
Unless it is about CPU power-saving features... I do not run CPU tasks at all, so my CPU is mostly saving power rather than spending it, and it may need some time to ramp up voltage & frequency.
58) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 61836)
Posted 4 Jun 2014 by Arivald Ha'gel
Post:
I think that RAM on the R9 280X cannot be underclocked more than 150 MHz below the core frequency... what I mean is that with:
an 1100 MHz core, RAM cannot go lower than 950 MHz. Well... perhaps it can, but in MSI Afterburner, when I tried that, it jumped back to 1500 MHz...

By the 1 s boundary I meant that I have never seen a unit completion time other than:
XX.YY where YY is a number from 04 to 20.
On the current setup/frequency I see times of 23.06-23.18... but occasionally I see 24.08.
Like:
http://milkyway.cs.rpi.edu/milkyway/result.php?resultid=759506557
http://milkyway.cs.rpi.edu/milkyway/result.php?resultid=759506555

The workunit with the 24.09 s completion time shows:
Integration time: 20.353996 s. Average time per iteration = 63.606238 ms
Integral 0 time = 20.700259 s
Running likelihood with 46395 stars
Likelihood time = 0.603571 s

20.70 + 0.60 = 21.30 s

The workunit with the 23.14 s completion time shows:
Integration time: 19.920776 s. Average time per iteration = 62.252425 ms
Integral 0 time = 20.267444 s
Running likelihood with 46395 stars
Likelihood time = 0.578617 s

20.27 + 0.58 = 20.85 s

The real difference is ~460 ms, but it is reported as almost 1 s.
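
A toy Python illustration of why 1 s timing granularity can do this (the two "true" finish times at the end are hypothetical):

import math

# Measured work (integral + likelihood) vs. server-reported elapsed time.
measured = [21.30, 20.85]   # from the two log excerpts above
reported = [24.09, 23.14]   # what the task list shows

print(round(measured[0] - measured[1], 2))  # ~0.45 s real difference
print(round(reported[0] - reported[1], 2))  # 0.95 s as reported

# If elapsed time is only sampled once per second, a run that finishes just
# after a tick gets charged a whole extra second:
for t in (22.55, 23.01):                    # hypothetical true finish times
    print(t, "->", math.ceil(t))            # 23 vs 24: gap inflated to a full second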
59) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 61834)
Posted 4 Jun 2014 by Arivald Ha'gel
Post:
Hello,

I was "forced" to once again change clocks due to mentioned clock instability.
Currently cards are:
1125MHz Core
975MHz Memory

And they are running well, with the core clock stable throughout the day. If anyone does not know what I'm talking about, I can prepare a screenshot of MSI Afterburner showing the clock instability caused by the power limit.

Times are unchanged:
759065145 567747654 4 Jun 2014, 6:39:16 UTC 4 Jun 2014, 6:54:56 UTC Completed and validated 23.09 1.81 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)
759065143 567747652 4 Jun 2014, 6:39:16 UTC 4 Jun 2014, 6:54:56 UTC Completed and validated 23.09 1.79 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)
759064567 567747216 4 Jun 2014, 6:38:09 UTC 4 Jun 2014, 6:53:50 UTC Completed and validated 23.09 1.84 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)
759064566 567747215 4 Jun 2014, 6:38:09 UTC 4 Jun 2014, 6:53:50 UTC Completed and validated 23.09 1.84 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)
759064565 567747214 4 Jun 2014, 6:38:09 UTC 4 Jun 2014, 6:54:56 UTC Completed and validated 23.14 1.78 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)

It seems that GPU/CPU timing is synchronized on a 1 s boundary.
This could be improved, I think...?
60) Message boards : Number crunching : New Benchmark Thread - times wanted for any hardware, CPU or GPU, old or new! (Message 61831)
Posted 3 Jun 2014 by Arivald Ha'gel
Post:
Hello,

I'm not mistaken :)

My cards are http://www.gigabyte.us/products/product-page.aspx?pid=4914#ov

Gigabyte Radeon R9 280X Windforce 3X Rev 2.0
I do underclock the memory since it is not important for MilkyWay@Home; this saves some wattage, allows a greater core overclock because of the Power Limit, AND, because of this, it allows better cooling.

Setup:
AuthenticAMD AMD FX(tm)-8350 Eight-Core Processor [Family 21 Model 2 Stepping 0]
Microsoft Windows 7 Home Premium x64 Edition, Service Pack 1, (06.01.7601.00)
[2] AMD AMD Radeon HD 7870/7950/7970/R9 280X series (Tahiti) (3072MB) driver: 1.4.1848 OpenCL: 1.02

I have changed the overclock/underclock to:
1150 MHz core, 1000 MHz memory, with a +20% Power Limit.
Current results are:
758340711 567225184 574003 3 Jun 2014, 8:37:56 UTC 3 Jun 2014, 8:55:51 UTC Completed and validated 23.09 1.83 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)
758340186 567126374 574003 3 Jun 2014, 8:36:49 UTC 3 Jun 2014, 8:54:43 UTC Completed and validated 23.14 1.87 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)
758340173 567212240 574003 3 Jun 2014, 8:36:49 UTC 3 Jun 2014, 8:54:43 UTC Completed and validated 23.10 1.90 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)
758340162 566875173 574003 3 Jun 2014, 8:36:49 UTC 3 Jun 2014, 8:54:43 UTC Completed and validated 23.10 1.72 106.88 MilkyWay@Home v1.02 (opencl_amd_ati)
758340146 567222709 574003 3 Jun 2014, 8:36:49 UTC 3 Jun 2014, 8:53:36 UTC Completed and validated 23.09 1.86 106.88 MilkyWay@Home v1.02

It does NOT hold the 1150 MHz core clock (it more likely oscillates around 1140 MHz) since the Power Limit is too small, but it is impossible to go above +20% in the current Afterburner. With a memory clock higher than 1000 MHz, it possibly wouldn't even hold an 1100 MHz core clock.
With the default 1500 MHz memory clock it is hard to maintain 1050 MHz core clock stability.

IMHO with a better CPU this card is capable of saving one more second.

