Slower CPUs

Message boards : Number crunching : Slower CPUs
skivelitis

Joined: 29 Jun 08
Posts: 12
Credit: 252,697
RAC: 0
Message 38564 - Posted: 11 Apr 2010, 2:07:24 UTC

Without starting up the whole credit thing again (if that's possible), is it the intention of the project to weed out the older, slower hosts? When I first started crunching Milkyway, I could count on 17-20 credits/hr of CPU time. Lately I am lucky to get 10. I am a huge astronomy buff, so I will continue to crunch anyway, but these long workunits always seem to run at high priority, even though I run 100% CPU 24/7.
ID: 38564
John Clark

Joined: 4 Oct 08
Posts: 1734
Credit: 64,228,409
RAC: 0
Message 38574 - Posted: 11 Apr 2010, 8:53:24 UTC

I cannot speak for the project; only the Project Admins/Scientists can. I am just another cruncher.

The project is significantly biased toward the faster crunchers who use GPUs rather than CPUs, so the answer to your question takes in hardware beyond the normal rig.
Go away, I was asleep


ID: 38574
Zydor

Joined: 24 Feb 09
Posts: 620
Credit: 100,587,625
RAC: 0
Message 38576 - Posted: 11 Apr 2010, 10:30:15 UTC - in response to Message 38564.  
Last modified: 11 Apr 2010, 10:34:34 UTC

Without starting up the whole credit thing again (if that's possible), is it the intention of the project to weed out the older, slower hosts? When I first started crunching Milkyway, I could count on 17-20 credits/hr of CPU time. Lately I am lucky to get 10.......


The credit levels have been reset (reduced) a couple of times since the project started - all part of the settling-in process of startup/optimising/real-world acceptance of credit levels. It's not so much about stopping older hosts - that would be a little suicidal re PR. It was more a real-world settling of credit levels once the apps were fully optimised and were generating such high levels of credit as to cause a near BOINC-wide rebellion :) The latter was caused mostly by there being, at that time, no general awareness of how powerful and fast GPU-based crunching is compared to pure CPU-based crunching. A few noses were put out of joint because of that lack of general awareness. Now, however, the power of GPU crunching is generally realised, so it has settled somewhat.

As you indicate, credit levels are never easy to set. The admin over at Aqua nailed it, in my view, when he put these opening comments at the top of a Q&A article on frequent questions:


Why are credits so high? No comment

Why are credits so low? No comment


He had my heartfelt sympathy :)
Regards
Zy
ID: 38576
skivelitis

Joined: 29 Jun 08
Posts: 12
Credit: 252,697
RAC: 0
Message 38580 - Posted: 11 Apr 2010, 12:16:09 UTC

Thanks for your replies. I hope this project doesn't become like COSMO, where I can no longer crunch. Their issue is RAM; this one's issue may become report deadlines.
ID: 38580
John Clark

Joined: 4 Oct 08
Posts: 1734
Credit: 64,228,409
RAC: 0
Message 38586 - Posted: 11 Apr 2010, 14:03:14 UTC
Last modified: 11 Apr 2010, 14:41:09 UTC

Reporting deadlines should never be an issue here, for 3 reasons (a quick feasibility check is sketched below) -

1. The project only issues a maximum of 6 WUs per core as your cache;
2. The cache setting in BOINC Manager should be set to less than the 3-day reporting deadline;
3. The time taken to crunch a WU will determine the cache BOINC Manager allows, so slower CPU-only rigs are catered for.

No problems then!
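
To make point 3 concrete, here is a quick back-of-the-envelope check. The 14-hour WU time is a made-up example for a slow CPU, not a project figure:

```python
def cache_is_safe(secs_per_wu, wus_per_core, deadline_days):
    """Can one core finish its cached WUs before the reporting deadline?"""
    return secs_per_wu * wus_per_core <= deadline_days * 24 * 3600

print(cache_is_safe(14 * 3600, 6, 3))   # False: 84 h of work vs a 72 h deadline
print(cache_is_safe(14 * 3600, 2, 3))   # True: a shallower cache fits easily
```

This is, in essence, the sum BOINC Manager does when deciding how much work to request.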
Go away, I was asleep


ID: 38586
STE\/E

Joined: 29 Aug 07
Posts: 486
Credit: 576,523,610
RAC: 34,363
Message 38588 - Posted: 11 Apr 2010, 14:12:09 UTC - in response to Message 38586.  

Reporting deadlines should never be an issue here, for 3 reasons -

1. The project only issues a maximum of 6 WUs per core as your cache;
2. The cache setting in BOINC Manager should be set to less than the 3-day reporting deadline;
3. The time taken to crunch a WU will determine the cache BOINC Manager allows, so slower CPU-only rigs are catered for.

No problems then!


Unless you're trying to run too many projects on a slower PC and your contact time is set too high; then there can be problems ...

STE\/E
ID: 38588
skivelitis

Joined: 29 Jun 08
Posts: 12
Credit: 252,697
RAC: 0
Message 38595 - Posted: 11 Apr 2010, 14:42:08 UTC

Must be too many projects running, and DEFINITELY a slower computer! The project also never gives me more than one WU at a time, and it still seems to spend a lot of time running at high priority. Other projects do not.
ID: 38595
John Clark

Joined: 4 Oct 08
Posts: 1734
Credit: 64,228,409
RAC: 0
Message 38596 - Posted: 11 Apr 2010, 14:42:41 UTC

Very true, especially when BOINC Manager's debt code chimes in. But the originator of the thread seemed to be speaking only about MW.
Go away, I was asleep


ID: 38596
Bruce

Joined: 28 Apr 08
Posts: 1415
Credit: 2,716,428
RAC: 0
Message 38685 - Posted: 13 Apr 2010, 16:30:27 UTC - in response to Message 38595.  

Must be too many projects running, and DEFINITELY a slower computer! The project also never gives me more than one WU at a time, and it still seems to spend a lot of time running at high priority. Other projects do not.


Have you tried one of the optimized applications? They really do speed things up considerably.
ID: 38685
Purple Rabbit

Joined: 9 Nov 08
Posts: 44
Credit: 128,043,914
RAC: 0
Message 38696 - Posted: 13 Apr 2010, 22:35:49 UTC
Last modified: 13 Apr 2010, 23:26:20 UTC

For what it's worth, I have two Linux Celeron 1.3 GHz computers running Spinhenge and Milkyway (optimized app) 24/7. Only two projects, but they run together. The computers complain if I add more. Neither project ever runs in high priority.

A Milkyway task completes in 2+ days running alongside Spinhenge. The non-optimized app won't even come close to making it. When the deadline was 3 days this was occasionally iffy. Now that it's 8 days, everything is fine. I usually have one MW task to start, then about halfway through the first one I get another. Never more than two, though. I usually have 5 or 6 Spinhenge tasks in the queue at the same time.

These venerable dinosaurs are rated (by BOINC benchmark) at about 4.85 credits/cobblestones/rocks/clams per hour. These guys/guyettes (I can't tell which) are a little ahead of the benchmark on MW. I figure they're getting the best they can.

Of course 2+ days for 213 credits/cobblestone/etc. compared to 2 minutes is a bit disconcerting, but the poor things are doing their best. At least they can do something successfully.

And no, I'm not wasting power. The squirrel works for peanuts, but he does look a bit ragged if I don't reboot often. He wants second- and third-shift squirrels, but I'm thinking of offering him walnuts instead.

If it lights up, I'll find something to do with it :)
ID: 38696
Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 38708 - Posted: 14 Apr 2010, 8:42:27 UTC - in response to Message 38696.  

Not to leak out information before we're fully ready... but we do have some things in the works for the CPU volunteers.

We do know that we're a little GPU-heavy here -- that's mainly because our application scales so well to GPUs, getting close to full utilization of them, which means 100x+ speedups on the ATI GPUs. Because of this, the CPUs are getting left in the dust.

We're still a few weeks (maybe more) off from releasing the "new" new application, but for some tidbits about what we'll be trying to do -- this is going to be a completely different application from what we're running right now. Currently, we're using statistical sampling methods to separate out the Sagittarius Dwarf Galaxy (and other tidal galaxies) from the Milky Way's halo.

What we have in the works is running n-body simulations. Essentially we'll simulate a bunch of stars with different masses and velocities, and simulate their movement around a simulated Milky Way. We'll be trying to find which initial parameters generate a galaxy that looks like our observed data of the Milky Way. This should generate some really interesting results :)
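
For the curious, here is a minimal sketch of the core computation such a run needs -- the direct O(N^2) sum of every star's pull on every other star. This is not our actual code; units are G = 1 and the positions and masses are made-up test values.

```python
import numpy as np

def gravity(pos, mass):
    """Acceleration on each star from direct pairwise summation, O(N^2)."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                 # vectors from star i to every star
        r2 = (d ** 2).sum(axis=1)
        r2[i] = np.inf                   # skip the self-interaction term
        acc[i] = ((mass / r2 ** 1.5)[:, None] * d).sum(axis=0)
    return acc

# Toy use: a random cloud of 1000 equal-mass stars.
rng = np.random.default_rng(42)
pos = rng.standard_normal((1000, 3))
acc = gravity(pos, np.full(1000, 1.0 / 1000))
```

The parameter search then amounts to running many such simulations from different initial conditions and scoring each result against the observed data.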

Anyways, at least until we (or someone else) come out with a super fast GPU version of that, it will be CPU only, giving the CPUs something new and interesting to crunch.

Again, there should be a lot more information about this once we get closer to putting alpha applications out in the wild.
ID: 38708
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 38715 - Posted: 14 Apr 2010, 13:03:55 UTC - in response to Message 38708.  

What we have in the works is running n-body simulations. Essentially we'll simulate a bunch of stars with different masses and velocities, and simulate their movement around a simulated Milky Way. We'll be trying to find which initial parameters generate a galaxy that looks like our observed data of the Milky Way. This should generate some really interesting results :)

Anyways, at least until we (or someone else) come out with a super fast GPU version of that, it will be CPU only, giving the CPUs something new and interesting to crunch.

It depends on what you are going to do exactly, but n-body simulations with a lot of stars scale acceptably on GPUs. Maybe not as extremely well as the current code, but still. The downside is that it would be a harder port to GPUs. Maybe one should consider OpenCL (quite similar to CUDA) for it, to have a single application for nvidia and ATI (and I think there is even an OpenCL compiler for Cell).
ID: 38715
cenit

Joined: 16 Mar 09
Posts: 58
Credit: 1,129,612
RAC: 0
Message 38718 - Posted: 14 Apr 2010, 15:38:53 UTC - in response to Message 38715.  

What we have in the works is running n-body simulations. Essentially we'll simulate a bunch of stars with different masses and velocities, and simulate their movement around a simulated Milky Way. We'll be trying to find which initial parameters generate a galaxy that looks like our observed data of the Milky Way. This should generate some really interesting results :)

Anyways, at least until we (or someone else) come out with a super fast GPU version of that, it will be CPU only, giving the CPUs something new and interesting to crunch.

It depends on what you are going to do exactly, but n-body simulations with a lot of stars scale acceptably on GPUs. Maybe not as extremely well as the current code, but still. The downside is that it would be a harder port to GPUs. Maybe one should consider OpenCL (quite similar to CUDA) for it, to have a single application for nvidia and ATI (and I think there is even an OpenCL compiler for Cell).


There are many n-body simulations written in OpenCL, some of them also available on the internet. It will be interesting to see your version and how it compares to those.

PS: OpenCL should also be interesting on CPUs, as it's a relatively inexpensive way to go multithreaded. AMD's SDK works better on CPUs than on ATI GPUs!
ID: 38718
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 38720 - Posted: 14 Apr 2010, 15:58:32 UTC - in response to Message 38718.  

There are many n-body simulations written in OpenCL, some of them also available on the internet.

That's why I wrote:
It depends on what you are going to do exactly, but n-body simulations with a lot of stars scale acceptably on GPUs.

;)

A basic n-body simulation can be done with a single kernel, but one can make it arbitrarily complicated, of course.
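
To illustrate, here is a rough, untuned sketch of what such a single kernel could look like, driven from pyopencl. It is nobody's production code; the kernel name `accel`, the particle count, and the softening constant `eps2` are all made up.

```python
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void accel(const int n, const float eps2,
                    __global const float4 *pos,   /* xyz = position, w = mass */
                    __global float4 *acc)
{
    int i = get_global_id(0);
    float4 pi = pos[i];
    float4 a = (float4)(0.0f);
    for (int j = 0; j < n; j++) {
        float4 d = pos[j] - pi;
        float r2 = d.x * d.x + d.y * d.y + d.z * d.z + eps2;  /* softened */
        float inv_r = rsqrt(r2);
        /* G = 1 units; the j == i term vanishes because d == 0 there */
        a += (pos[j].w * inv_r * inv_r * inv_r) * d;
    }
    acc[i] = (float4)(a.x, a.y, a.z, 0.0f);
}
"""

n = 1024
rng = np.random.default_rng(0)
pos = rng.standard_normal((n, 4)).astype(np.float32)
pos[:, 3] = 1.0 / n                              # equal masses in the w lane

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pos)
acc_buf = cl.Buffer(ctx, mf.WRITE_ONLY, pos.nbytes)

prg = cl.Program(ctx, KERNEL_SRC).build()
prg.accel(queue, (n,), None, np.int32(n), np.float32(1e-4), pos_buf, acc_buf)

acc = np.empty_like(pos)
cl.enqueue_copy(queue, acc, acc_buf)
```

A tuned version would stage tiles of `pos` through local memory rather than reading every particle from global memory, but the one-kernel structure stays the same.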
ID: 38720
arkayn

Joined: 14 Feb 09
Posts: 999
Credit: 74,932,619
RAC: 0
Message 38739 - Posted: 14 Apr 2010, 22:14:53 UTC

I know that with the current 2.01 SDK from ATI, our OpenCL Astropulse app generates garbage on the 5xxx-series cards but works fine on 4xxx and older cards.
ID: 38739
Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 38750 - Posted: 15 Apr 2010, 4:04:41 UTC - in response to Message 38720.  

There are many n-body simulations written in OpenCL, some of them also available on the internet.

That's why I wrote:
It depends on what you are going to do exactly, but n-body simulations with a lot of stars scale acceptably on GPUs.

;)

A basic n-body simulation can be done with a single kernel, but one can make it arbitrarily complicated, of course.


Once we've moved over to the new application (hopefully this week, if nothing else breaks), I'll be spending most of my time on the n-body code, so I'll be able to let everyone know a lot more in a bit :)
ID: 38750
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 38937 - Posted: 20 Apr 2010, 23:24:44 UTC - in response to Message 38750.  
Last modified: 20 Apr 2010, 23:25:44 UTC

There are many n-body simulations written in OpenCL, some of them also available on the internet.

That's why I wrote:
It depends on what you are going to do exactly, but n-body simulations with a lot of stars scale acceptably on GPUs.

;)

A basic n-body simulation can be done with a single kernel, but one can make it arbitrarily complicated, of course.


Once we've moved over to the new application (hopefully this week if nothing else breaks), i'll be spending most of my time on the n-body code, so I'll be able to let everyone know a lot more in a bit :)

Just for your information, some theoreticians here are also interested in such a code (they would use it for atomic clusters in strong laser fields, which means one would calculate the Coulomb forces instead of gravity). I've put together some basic test code for that (only single precision so far). It is virtually the same as one would use for a galaxy with gravitational forces (it can easily be converted by exchanging electric charges for masses as the source of the interaction, since both potentials scale as 1/r with distance).

So far the speed appears to be okay even without any fancy optimizations. On an HD3870 it needs about 0.65 seconds per time step with 65536 particles (that means about 4.2 billion force evaluations, with currently 36 flops each; I keep track of the potential at each particle's position and use Kahan summation there, which means one could trade 12 additions per force evaluation for some precision loss). With 131072 particles it takes roughly 2.5 seconds, so it scales perfectly with N^2, as expected. Even a 780G integrated graphics chip appears to be quite competitive with a CPU core.
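
For reference, Kahan (compensated) summation is the standard trick being referred to: carry a correction term alongside the running total so the low-order bits lost in each addition are fed back in. A minimal single-precision sketch, with made-up test values:

```python
import numpy as np

def kahan_sum(values):
    """Sum float32 values, carrying a correction for the lost low-order bits."""
    total = np.float32(0.0)
    comp = np.float32(0.0)            # running compensation
    for v in values:
        y = v - comp                  # apply the previous correction
        t = total + y                 # low-order bits of y can be lost here...
        comp = (t - total) - y        # ...so recover them for the next round
        total = t
    return total

vals = np.full(100_000, 0.1, dtype=np.float32)
naive = np.float32(0.0)
for v in vals:
    naive += v
print(naive, kahan_sum(vals))         # naive sum drifts; Kahan stays near 10000.0
```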

Energy conservation and correctness look quite okay to me (tested with some simulated "classic" hydrogen atoms, i.e. electrons circling protons). I'm using the velocity Verlet integration scheme and a softened potential V ~ 1/sqrt(r^2 + eps). That appears to be enough, but I will talk to my colleagues about whether they want something better (the error function potential is the exact potential of a Gaussian charge or mass distribution, but unfortunately it is not given by a simple analytical expression).
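
A minimal numpy sketch of that scheme -- velocity Verlet with the softened 1/sqrt(r^2 + eps) potential -- is below. It uses double precision and made-up particle counts for brevity, unlike the single-precision test code described above:

```python
import numpy as np

def accel_and_potential(pos, mass, eps2):
    """Direct O(N^2) softened gravity: accelerations plus total potential energy."""
    d = pos[None, :, :] - pos[:, None, :]        # d[i, j] = pos[j] - pos[i]
    inv_r = 1.0 / np.sqrt((d ** 2).sum(axis=-1) + eps2)
    np.fill_diagonal(inv_r, 0.0)                 # drop self-interaction
    acc = (mass[None, :, None] * inv_r[..., None] ** 3 * d).sum(axis=1)
    pot = -0.5 * (mass[:, None] * mass[None, :] * inv_r).sum()
    return acc, pot

def verlet_step(pos, vel, mass, dt, eps2):
    """x += v*dt + a*dt^2/2, then v += (a_old + a_new)*dt/2.
    (a_old is recomputed each step here for simplicity; real code caches it.)"""
    acc, _ = accel_and_potential(pos, mass, eps2)
    pos = pos + vel * dt + 0.5 * acc * dt * dt
    acc_new, pot = accel_and_potential(pos, mass, eps2)
    vel = vel + 0.5 * (acc + acc_new) * dt
    return pos, vel, pot

# Energy-conservation sanity check on a random cloud.
rng = np.random.default_rng(1)
n = 256
pos = rng.standard_normal((n, 3))
vel = 0.1 * rng.standard_normal((n, 3))
mass = np.full(n, 1.0 / n)

for _ in range(100):
    pos, vel, pot = verlet_step(pos, vel, mass, dt=1e-3, eps2=1e-3)
kin = 0.5 * (mass[:, None] * vel ** 2).sum()
print("total energy:", kin + pot)                # should drift only slowly
```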
ID: 38937
verstapp

Joined: 26 Jan 09
Posts: 589
Credit: 497,834,261
RAC: 0
Message 38972 - Posted: 21 Apr 2010, 12:19:56 UTC

>1/r with distance
1/r^2? :) Perhaps things have changed since I was an undergrad.
Cheers,

PeterV

ID: 38972
Cluster Physik

Send message
Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 38975 - Posted: 21 Apr 2010, 13:48:57 UTC - in response to Message 38972.  

>1/r with distance
1/r^2? :) Perhaps things have changed since I was an undergrad.

The potentials go as 1/r; the forces (the derivatives of the potentials) scale with 1/r^2 ;)
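
Written out for the gravitational case (the standard Newtonian relations):

$$ V(r) = -\frac{G m_1 m_2}{r}, \qquad F(r) = -\frac{\mathrm{d}V}{\mathrm{d}r} = -\frac{G m_1 m_2}{r^{2}} $$

so the potential falls off as 1/r while the force falls off as 1/r^2; the minus sign just says the force is attractive.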
ID: 38975
Ivor Cogdell

Joined: 9 Feb 08
Posts: 9
Credit: 473,130
RAC: 0
Message 39020 - Posted: 22 Apr 2010, 20:38:26 UTC

Hi Gang,
Here is my setup:

22/04/2010 19:08:10 Starting BOINC client version 6.10.18 for windows_intelx86
22/04/2010 19:08:10 log flags: file_xfer, sched_ops, task
22/04/2010 19:08:10 Libraries: libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
22/04/2010 19:08:10 Data directory: D:\Program Files\BOINC\projects
22/04/2010 19:08:10 Running under account Ivor
22/04/2010 19:08:11 Processor: 1 GenuineIntel Intel(R) Pentium(R) 4 CPU 1500MHz [x86 Family 15 Model 0 Stepping 10]
22/04/2010 19:08:11 Processor features: fpu tsc sse sse2 mmx
22/04/2010 19:08:11 OS: Microsoft Windows XP: Home x86 Edition, Service Pack 3, (05.01.2600.00)
22/04/2010 19:08:11 Memory: 2.00 GB physical, 3.85 GB virtual
22/04/2010 19:08:11 Disk: 465.75 GB total, 332.78 GB free
22/04/2010 19:08:11 Local time is UTC +1 hours
22/04/2010 19:08:11 No usable GPUs found
22/04/2010 19:08:11 Not using a proxy
22/04/2010 19:08:11 Einstein@Home URL http://einstein.phys.uwm.edu/; Computer ID 2273795; resource share 22
22/04/2010 19:08:11 Milkyway@home URL http://milkyway.cs.rpi.edu/milkyway/; Computer ID 156307; resource share 50
22/04/2010 19:08:11 SETI@home URL http://setiathome.berkeley.edu/; Computer ID 5271424; resource share 24
22/04/2010 19:08:11 Einstein@Home General prefs: from Einstein@Home (last modified 04-Apr-2009 13:15:11)
22/04/2010 19:08:11 Einstein@Home Computer location: home
22/04/2010 19:08:11 Einstein@Home General prefs: no separate prefs for home; using your defaults
22/04/2010 19:08:11 Reading preferences override file
22/04/2010 19:08:11 Preferences limit memory usage when active to 1473.69MB
22/04/2010 19:08:11 Preferences limit memory usage when idle to 2026.33MB
22/04/2010 19:08:11 Preferences limit disk usage to 10.00GB

At the moment I find my system is unable to keep up with the demand for processing from MilkyWay@home. On average I have completed two thirds of a WU (in high-priority status, even with Einstein@Home suspended completely) when the deadline runs out and I have to abort: 30-ish hours of processing wasted, which is a shame, as I want to be able to contribute. Is there any way of flagging for an extended deadline or a smaller WU?
Ivor
My Webpage
ID: 39020