Message boards :
Number crunching :
N-Body blues
Joined: 19 Jan 13 Posts: 2 Credit: 152,077 RAC: 0

I only started using BOINC a couple of days ago, after a gap of several years since being an earlyish seti@home person, and I'm still getting used to the new client, the method of credit, and all the wonderful projects now using the platform. Unfortunately I've just had to suspend processing of N-Body Simulation 1.04, as I'm beginning to receive tasks with estimates between 96 and 132 hours. I know that's nowhere near the level some others have reported on these forums, but the thought of spending all that time processing with the possibility of an error just doesn't appeal. I don't care much about the credits I'd lose as a result; it just seems like such a waste of my processor time to risk it. Are these sorts of task run times expected or anomalous for the N-Body simulations? My GPU doesn't have the necessary double precision extension, so I'm not sure if that's the problem?
Joined: 8 Apr 10 Posts: 25 Credit: 268,525 RAC: 0

Hi! I'm new here also ... The nbody WUs are ranging up to 86,000+ hours of estimated time. If you look, they can take up to 15-20 days actual ... if they don't error out (like mine did after 120 hrs of CPU ... and no credit :-) The "regular" W@H WUs seem to give an inflated estimated time (mine in the 200s of hrs) but seem to run in 4-6 hrs, with a typical credit of about 215 cobblestones. I'm here for the "fun of it" ... actually, I needed a heavy double precision workload I could put on half of my CPUs ... this seems to work fine. By the way ... the nbody WUs are multi-threaded, and each WU will run on up to 4 CPUs at the same time. I'm not doing nbody WUs for now ... they seem unstable to me.

Ed F
Joined: 8 May 09 Posts: 3321 Credit: 520,521,562 RAC: 27,203

> I've only started using BOINC a couple of days ago after several years gap from being an earlyish seti@home person and I'm still getting used to the new client, method of credit and all the wonderful projects now using the platform.

Yes, that IS the problem with your GPU; try Moo, Collatz, or PrimeGrid, or even DistRTgen. Collatz will take almost any GPU, while the others are a bit more picky. As for the N-Body units, I don't run them, as I use my GPUs on projects that support GPUs, saving my CPUs for projects that don't. Here is a link to a page FULL of Distributed Computing Projects, with the BOINC ones noted: http://www.distributedcomputing.info/projects.html If you click on one, it will take you to a page with a brief description of the project.
Joined: 19 Jul 10 Posts: 594 Credit: 18,961,495 RAC: 5,498

> Unfortunately I've just had to suspend processing of the N-Body Simulation 1.04 as I'm beginning to receive tasks with estimates between 96 and 132 hours. I know that's nowhere near the level some others have reported on these forums, but the thought of spending all that time processing with the possibility of an error just doesn't appeal. I don't care much about the credits I'd lose as a result, it just seems like such a waste of my processor time to risk it.

There are projects with much longer WUs out there, so even if the estimate were correct, 5 days is not really long. Also, from what others have reported here, the estimates might be wrong. That's simply the way it is: there are short and long WUs, all of them are valuable for the science, so we have to crunch them. And if such a WU crashes... well, shit happens.

> My GPU doesn't have the necessary double precision extension, so I'm not sure if that's the problem?

There are no N-Body GPU applications for Windows, so it's not your card. You should also let a few (>10) WUs complete; then the estimates should become better (in case they are wrong now). Regarding your GPU... the results of WUProp@Home might help you find a good project for it. Since it's a CUDA-capable card, you should not need to waste its power on something like Collatz; you could do some real science with it instead. There are many projects with CUDA applications, and most if not all of them do not need double precision.
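For context, the "estimates get better after a few completed WUs" behaviour comes from the BOINC client's per-project duration correction factor (DCF), which the client nudges toward the observed ratio of actual to estimated runtime after each completed task. A minimal sketch of that idea (the function name, smoothing constant, and exact update rule here are illustrative assumptions, not BOINC's actual code, though the real client does raise the factor quickly and lower it slowly):

```python
def update_dcf(dcf, estimated_hours, actual_hours):
    """Nudge a duration correction factor toward actual/estimated runtime.

    Illustrative sketch only: the real BOINC client uses its own
    asymmetric rules and smoothing constants.
    """
    ratio = actual_hours / estimated_hours
    if ratio > dcf:
        # Underestimates are corrected aggressively, so the client
        # doesn't keep fetching more work than it can finish.
        return ratio
    # Overestimates decay slowly toward the observed ratio.
    return dcf + 0.1 * (ratio - dcf)

# Tasks estimated at 100 h that actually finish in 5 h: after each
# completion the factor drifts down, and future estimates shrink with it.
dcf = 1.0
for _ in range(10):
    dcf = update_dcf(dcf, estimated_hours=100.0, actual_hours=5.0)
    print(f"corrected estimate: {100.0 * dcf:.1f} h")
```

This is why the advice above is to let a handful of WUs complete before judging the estimates: each finished task feeds another correction into the factor.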
Joined: 7 Jun 08 Posts: 464 Credit: 56,639,936 RAC: 0

All very true for a routine production run. You pays your nickel and you takes your chances. However, this time around is a beta test for the new app, so you can't really fault people for cutting off N-Body (or even the whole project) when they feel they don't have the time to spend horsing around with their configuration, or babysitting their host(s) to make sure as many tasks as possible go through. Plus, there hasn't been a peep from the app project team since the beginning of the month, when this run started. Given the run times involved with some types and subtypes of tasks, they should be paying a little more attention to the validation issues.
Joined: 19 Jan 13 Posts: 2 Credit: 152,077 RAC: 0

Thanks for all the helpful replies. At least that lets me know I'm not doing anything wrong, which makes it easier to decide what to do. That's a great link, Mikey; I'll be sure to check out some of the other projects listed there, too.
Joined: 15 Sep 12 Posts: 20 Credit: 105,341,548 RAC: 0

I just aborted two with ridiculous run times, but unfortunately didn't realise they were running in the first place. Will I get any credit for wasting so much CPU time (over 300 hrs!!!)?

http://milkyway.cs.rpi.edu/milkyway/workunit.php?wuid=295130394
http://milkyway.cs.rpi.edu/milkyway/workunit.php?wuid=294433850

Also, both those units are still out there wasting someone else's time. Can you not perform a server-side abort?
©2024 Astroinformatics Group