Message boards :
Number crunching :
Slower CPUs
Joined: 29 Jun 08 Posts: 12 Credit: 252,697 RAC: 0
Without starting up the whole credit thing again (if that's possible), is it the intention of the project to weed out the older, slower hosts? When I first started crunching Milkyway, I could count on 17-20 credits/hr of CPU time. Lately, I am lucky to get 10. I am a huge astronomy buff so I will continue to crunch anyway, but these long workunits always seem to run at high priority, even though I run 100% CPU 24/7.
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
I cannot speak for the project; only the Project Admins/Scientists can. I am just another cruncher. The project is significantly biased towards the faster crunchers who use GPUs rather than CPUs, so the answer to your question takes in other hardware than the normal rig.

Go away, I was asleep
Joined: 24 Feb 09 Posts: 620 Credit: 100,587,625 RAC: 0
Without starting up the whole credit thing again (if that's possible), is it the intention of the project to weed out the older, slower hosts? When I first started crunching Milkyway, I could count on 17-20 credits/hr of CPU time. Lately, I am lucky to get 10...

The credit levels have been reset (reduced) a couple of times since the project started - all part of the settling-in process of startup, optimising, and real-world acceptance of credit levels. It's not so much stopping older hosts - that would be a little suicidal PR-wise. It was more a real-world settling of credit levels once the apps were fully optimised and were generating such high levels of credit as to cause a near BOINC-wide rebellion :) The latter was caused mainly by there being, at that time, no general awareness of how powerful and fast GPU-based crunching is compared to pure CPU-based crunching. A few noses were put out of joint because of that lack of awareness. Now, however, the power of GPU crunching is generally recognised, so things have settled somewhat.

As you indicate, credit levels are never easy to set. The Admin over at Aqua nailed it in my view, when he put these opening comments on a Q&A article about frequent questions:

Why are credits so high? No comment
Why are credits so low? No comment

He had my heartfelt sympathy :)

Regards
Zy
Joined: 29 Jun 08 Posts: 12 Credit: 252,697 RAC: 0
Thanks for your replies. I hope this project doesn't become like COSMO, where I can no longer crunch. Their issue is RAM; this one's may become report deadlines.
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
Reporting deadlines should never be an issue here, for 3 reasons:

1. The project only issues a maximum of 6 WUs per core as your cache;
2. The cache setting for BOINC Manager should be set to less than the 3-day reporting deadline;
3. The time taken to crunch a WU will determine the cache BM allows, so slower CPU-only rigs are catered for.

No problems then!

Go away, I was asleep
Joined: 29 Aug 07 Posts: 486 Credit: 575,454,207 RAC: 0
Reporting deadlines should never be an issue here for 3 reasons -

Unless you're trying to run too many projects on a slower PC and your connect time is set too high; then there can be problems...

STE\/E
Joined: 29 Jun 08 Posts: 12 Credit: 252,697 RAC: 0
Must be too many projects running, and DEFINITELY a slower computer! The project also never gives me more than 1 WU at a time, and it still seems to spend a lot of time running at high priority. Other projects do not.
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
Very true, especially when BM's debt code chimes in. But the originator of the thread seemed to be speaking only about MW.

Go away, I was asleep
Joined: 28 Apr 08 Posts: 1415 Credit: 2,716,428 RAC: 0
Must be too many projects running, and DEFINITELY a slower computer! The project also never gives me more than 1 WU at a time, and it still seems to spend a lot of time running at high priority. Other projects do not.

Have you tried one of the optimized applications? They really do speed things up considerably.
Joined: 9 Nov 08 Posts: 44 Credit: 128,043,914 RAC: 0
For what it's worth, I have two Linux Celeron 1.3 GHz computers running Spinhenge and Milkyway (optimized app) 24/7. Only two projects, but together. The computers complain if I add more. Neither project ever runs at high priority. A Milkyway task completes in 2+ days running alongside Spinhenge. The non-optimized app won't even come close to making it. When the deadline was 3 days this was occasionally iffy. Now that it's 8 days, everything is fine.

I usually have one MW task to start, then about halfway through the first one I get another. Never more than two, though. I usually have 5 or 6 Spinhenge tasks in the queue at the same time. These venerable dinosaurs are rated (by the BOINC benchmark) at about 4.85 credits/cobblestones/rocks/clams per hour. These guys/guyettes (I can't tell which) are a little ahead of the benchmark on MW. I figure they're getting the best they can. Of course, 2+ days for 213 credits/cobblestones/etc. compared to 2 minutes is a bit disconcerting, but the poor things are doing their best. At least they can do something successfully.

And no, I'm not wasting power. The squirrel works for peanuts, but he does look a bit ragged if I don't reboot often. He wants second- and third-shift squirrels, but I'm thinking of offering him walnuts instead.

If it lights up, I'll find something to do with it :)
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
Not to leak out information before we're fully ready... but we do have some things in the works for the CPU volunteers. We do know that we're a little GPU-heavy here -- that's mainly because our application scales so well to GPUs, getting close to full utilization of them, which means 100x+ speedups on the ATI GPUs. Because of this, the CPUs are getting left in the dust.

We're still a few weeks (maybe more) off from releasing the "new" new application, but here are some tidbits about what we'll be trying to do -- this is going to be a completely different application from what we're running right now. Currently, we're using statistical sampling methods to separate out the Sagittarius Dwarf Galaxy (and other tidal galaxies) from the Milky Way's halo. What we have in the works is running n-body simulations. Essentially we'll simulate a bunch of stars, with different masses and velocities, and simulate their movement around a simulated Milky Way. We'll be trying to find which initial parameters generate a galaxy that looks like our observed data of the Milky Way. This should generate some really interesting results :)

Anyway, at least until we (or someone else) come out with a super fast GPU version of that, it will be CPU-only, giving the CPUs something new and interesting to crunch. Again, there should be a lot more information about this once we get closer to putting alpha applications out in the wild.
Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0
What we have in the works is running n-body simulations. Essentially we'll simulate a bunch of stars, with different masses and velocities, and simulate their movement around a simulated milky way. We'll be trying to find what initial parameters generate a galaxy that looks like our observed data of the Milky Way. This should generate some really interesting results :)

It depends on what exactly you are going to do, but n-body simulations with a lot of stars scale acceptably on GPUs. Maybe not as extremely well as the current code, but still. The downside is that it will be a harder port to GPUs. Maybe one should consider OpenCL (quite similar to CUDA) for it, to have a single application for nvidia and ATI (and I think there is even an OpenCL compiler for Cell).
Joined: 16 Mar 09 Posts: 58 Credit: 1,129,612 RAC: 0
What we have in the works is running n-body simulations. Essentially we'll simulate a bunch of stars, with different masses and velocities, and simulate their movement around a simulated milky way. We'll be trying to find what initial parameters generate a galaxy that looks like our observed data of the Milky Way. This should generate some really interesting results :)

There are many n-body simulations written in OpenCL, some of them also available on the internet. It will be interesting to see your version and how it compares to those.

PS: OpenCL should also be interesting on CPUs, as it's a relatively inexpensive way to go multithreaded. AMD's SDK works better on CPUs than on ATI GPUs!
Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0
There are many n-body simulations written in OpenCL, some of them also available on the internet.

That's why I wrote:

It depends on what exactly you are going to do, but n-body simulations with a lot of stars scale acceptably on GPUs. ;)

A basic n-body simulation can be done with a single kernel, but one can make it arbitrarily complicated, of course.
Joined: 14 Feb 09 Posts: 999 Credit: 74,932,619 RAC: 0
I know that with the current 2.01 SDK from ATI, our OpenCL Astropulse app generates garbage on the 5xxx series but works fine on the 4xxx and lower cards.
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
There are many n-body simulations written in OpenCL, some of them also available on the internet.

Once we've moved over to the new application (hopefully this week, if nothing else breaks), I'll be spending most of my time on the n-body code, so I'll be able to let everyone know a lot more in a bit :)
Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0
There are many n-body simulations written in OpenCL, some of them also available on the internet.

Just for your information, some theoreticians here are also interested in such a code (they would use it for atomic clusters in strong laser fields, which means one would calculate the Coulomb forces instead of gravity). I've put together some basic test code for that (only single precision so far). It is virtually the same as one would use for a galaxy with gravitational forces (it can easily be converted by exchanging the electric charges for masses as the source of the interaction, as both potentials scale as 1/r with the distance).

So far the speed appears to be okay even without any fancy optimizations. On a HD3870 it needs about 0.65 seconds per time step with 65,536 particles (that means about 4.2 billion force evaluations with currently 36 flops each; I keep track of the potential at each particle's place and use Kahan summation there, which means one could trade 12 additions per force evaluation for some precision loss). With 131,072 particles it takes roughly 2.5 seconds, so it scales perfectly with N^2 as expected. Even a 780G integrated graphics chip appears to be quite competitive with a CPU core.

Energy conservation and correctness look quite okay to me (tested with some simulated "classic" hydrogen atoms, i.e. electrons circulating around protons). I'm using the velocity Verlet integration scheme and a softened potential V ~ 1/sqrt(r^2 + eps). That appears to be enough, but I will talk to my colleagues to see if they want something better (the error function potential is the exact potential of a Gaussian charge or mass distribution, but unfortunately it is not given by a simple analytical expression).
Joined: 26 Jan 09 Posts: 589 Credit: 497,834,261 RAC: 0
|
Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0
>1/r with distance

The potentials go as 1/r; the forces (the derivatives of the potentials) scale with 1/r^2 ;)
Joined: 9 Feb 08 Posts: 9 Credit: 473,130 RAC: 0
Hi Gang,

Here is my setup:

22/04/2010 19:08:10 Starting BOINC client version 6.10.18 for windows_intelx86
22/04/2010 19:08:10 log flags: file_xfer, sched_ops, task
22/04/2010 19:08:10 Libraries: libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
22/04/2010 19:08:10 Data directory: D:\Program Files\BOINC\projects
22/04/2010 19:08:10 Running under account Ivor
22/04/2010 19:08:11 Processor: 1 GenuineIntel Intel(R) Pentium(R) 4 CPU 1500MHz [x86 Family 15 Model 0 Stepping 10]
22/04/2010 19:08:11 Processor features: fpu tsc sse sse2 mmx
22/04/2010 19:08:11 OS: Microsoft Windows XP: Home x86 Edition, Service Pack 3, (05.01.2600.00)
22/04/2010 19:08:11 Memory: 2.00 GB physical, 3.85 GB virtual
22/04/2010 19:08:11 Disk: 465.75 GB total, 332.78 GB free
22/04/2010 19:08:11 Local time is UTC +1 hours
22/04/2010 19:08:11 No usable GPUs found
22/04/2010 19:08:11 Not using a proxy
22/04/2010 19:08:11 Einstein@Home URL http://einstein.phys.uwm.edu/; Computer ID 2273795; resource share 22
22/04/2010 19:08:11 Milkyway@home URL http://milkyway.cs.rpi.edu/milkyway/; Computer ID 156307; resource share 50
22/04/2010 19:08:11 SETI@home URL http://setiathome.berkeley.edu/; Computer ID 5271424; resource share 24
22/04/2010 19:08:11 Einstein@Home General prefs: from Einstein@Home (last modified 04-Apr-2009 13:15:11)
22/04/2010 19:08:11 Einstein@Home Computer location: home
22/04/2010 19:08:11 Einstein@Home General prefs: no separate prefs for home; using your defaults
22/04/2010 19:08:11 Reading preferences override file
22/04/2010 19:08:11 Preferences limit memory usage when active to 1473.69MB
22/04/2010 19:08:11 Preferences limit memory usage when idle to 2026.33MB
22/04/2010 19:08:11 Preferences limit disk usage to 10.00GB

At the moment, I find my system is unable to keep up with the demand for processing for MilkyWay@home.
On average, I have completed two thirds of a WU (in high-priority status, even with Einstein@Home suspended completely) when my deadline runs out and I have to abort; 30-ish hours of processing wasted, which is a shame, as I want to be able to contribute. Is there any way of flagging for an extended deadline or a smaller WU?

Ivor
My Webpage
©2023 Astroinformatics Group