Message boards :
Number crunching :
Server Updates and Status
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
So what's fair about putting all the people that have spent good money on GPU cards to get an advantage (which is available to everyone) into a league of their own? What about one for people that don't run the optimised client? Not everyone knows how to install it. That's how I read it too, so I was throwing that out there to try to quell the uprising. If we're right about that, and the app does 100x more work, then tasks will get 2800 credit (approx) rather than 28, or 1800 instead of 18. Both are just examples and should not be taken as any kind of official word until things are made more clear, meaning someone asks Travis if that's the plan.
Joined: 9 Nov 07 Posts: 151 Credit: 8,391,608 RAC: 0
That's how I read it too, so I was throwing that out there to try to quell the uprising. OK, now that the alcohol has worn off and I've had some sleep... This is what I really (really) like about the message boards: something can be read/interpreted in two or more ways, yet as soon as it doesn't fit someone else's idea, it's immediately despised and rejected! An OPINION cannot be wrong - it's an opinion. Let's wait and see! Bye bye from this thread, and I'll keep on crunchin'.
Joined: 21 Feb 09 Posts: 180 Credit: 27,806,824 RAC: 0
I have a few machines running - a Q6600, an E6400, a P4 2.8GHz, and today I got a 4850 and a PSU to stick into an X2-4400. I'd prefer the credits I garner from the GPU client to go towards my total credit score for MilkyWay, not a separate GPU table. On a good day my current output is 12k-ish without the GPU. Technology moves on, and so far we haven't seen splits in 'Top Participant' tables just because the GPU is such a bonus to this project. It's not a new concept; it's just Moore's Law getting a bigger bump than normal. Also, take some of the other projects in the BOINCsphere. There are those that run 6-8 different apps, depending on which research group has stuff that needs crunching. We don't see different tables for those. These WUs, whether CPU or GPU, are all under the same umbrella - MilkyWay@Home. We are all here, crunching for MilkyWay@Home. As a result, we all have a MilkyWay@Home score - be it a P2 300MHz, an E6400, 3850s, etc. And I'm all for more work and bigger WUs, Travis. I'm happy to donate my CPU (and now GPU) time to this project.
Joined: 22 Feb 08 Posts: 260 Credit: 57,387,048 RAC: 0
I'm not sure if we can do this having GPU milkyway as just a separate application or not. That's a good question, as BOINC doesn't recognise ATI cards as coprocessors. To the backend, an ATI-GPU-crunched WU must look just like a CPU-crunched WU... mic.
Joined: 12 Oct 07 Posts: 77 Credit: 404,471,187 RAC: 0
That's a good question, as BOINC doesn't recognise ATI cards as coprocessors. That won't matter if they split into 2 different projects. All they need to do is reduce the daily WU/CPU limit down to 500 at both projects. For GPU milkyway there'd be 500 WUs that are 1000 times bigger than current, which will keep GPUs very happy but will probably make them miss the deadline when run on a CPU. Good for GPUs, bad for CPUs. And for CPU milkyway you'd have 500 WUs which will keep CPU machines busy all day, but anyone running a GPU will run out PDQ. Good for CPUs, bad for GPUs. That seems to work both ways.
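The back-of-the-envelope arithmetic behind a 500-per-day limit can be sketched as a quick check. The per-WU run times below are illustrative assumptions for a fast CPU and GPU on the current small WUs, not official project figures:

```python
# Rough throughput check against a daily per-host WU limit.
# Run times are illustrative assumptions, not MilkyWay@Home figures.

def wus_per_day(minutes_per_wu: float, parallel_units: int) -> float:
    """WUs a host can complete in 24 hours."""
    return 24 * 60 / minutes_per_wu * parallel_units

DAILY_LIMIT = 500

cpu_small = wus_per_day(10.6, 8)  # 8-thread CPU on small WUs (~1087/day)
gpu_small = wus_per_day(0.5, 1)   # assumed fast GPU on small WUs (2880/day)

# Both blow past the limit on small WUs, which is why a GPU host
# "runs out PDQ" - and why 1000x-bigger WUs change the picture.
print(f"CPU: {cpu_small:.0f}/day, GPU: {gpu_small:.0f}/day, limit {DAILY_LIMIT}")
```

Making the WUs 1000 times bigger inverts the situation: the same GPU would finish only a few per day (well under any limit), while a CPU core would need days per WU and risk the deadline.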
Joined: 18 Feb 09 Posts: 158 Credit: 110,699,054 RAC: 0
My i7 can crunch more than 500 MW WUs in 24 hours...
Joined: 28 Apr 08 Posts: 1415 Credit: 2,716,428 RAC: 0
That machine really goes, eh! I had a peek at the i7 - it does a WU in about 13 mins. COOL, that's about 900 a day! That's a GOOD machine. Wish I had the $$$ to get my own.
Joined: 12 Oct 07 Posts: 77 Credit: 404,471,187 RAC: 0
My i7 can crunch more than 500 MW WUs in 24 hours... Per core?
Joined: 28 Apr 08 Posts: 1415 Credit: 2,716,428 RAC: 0
My i7 can crunch more than 500 MW WUs in 24 hours... No, that's the total from 8 cores - approx 110 WUs a day per core.
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
My i7 can crunch more than 500 MW WUs in 24 hours... My Penny (using CPUs only) takes 10.6 minutes for a 29.5-CS ps_s22_ WU. That is 135 WUs per core per day, or 540 for my quad. I just need a few more cores and a couple of ATI Radeon HD4890s in CrossFire, and I am sure the settled RAC after 6 weeks would be about 215K. Go away, I was asleep
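The per-core figures quoted in the post above follow directly from the 10.6-minute run time; a minimal sketch of the arithmetic:

```python
# Check the throughput claim: 10.6 minutes per WU works out to
# roughly 135 WUs per core per day, or about 540 on a quad-core.
minutes_per_wu = 10.6
per_core_per_day = 24 * 60 / minutes_per_wu  # 1440 / 10.6 ~= 135.8
quad_per_day = per_core_per_day * 4          # ~= 543

print(round(per_core_per_day), round(quad_per_day))
```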
Joined: 1 Dec 08 Posts: 139 Credit: 8,721,208 RAC: 0
Renata and Neal Chantrill wrote: I'll pop over to seti and tell them that they need to split their tables as they do different calculations too. Excellent point. SETI has two types of Astropulse, plus "Enhanced", yet all the credits are earned on one project.
Joined: 1 Dec 08 Posts: 139 Credit: 8,721,208 RAC: 0
Lord Tedric wrote: I'm not really bothered either way! The majority of users want fairness and I think some sort of level playing field to see how they compare! Well, I think it's basically not possible across projects, and quite possible within the same project. Even if the work is somewhat different, the people developing the app can equalize the credits based on difficulty. As for a "level playing field", how can there be such a thing? For instance, on this project, my Opteron 170 outperforms my P4s, even the Xeons (which have a higher clock speed, to boot). When C2Ds came out, they absolutely annihilated my Opty in terms of performance, especially on the KWSN SETI optimized apps, which take advantage of their large L2 cache. There might be some projects where AMDs can still keep up with C2Qs (though I don't know of any), and I doubt AMD has anything that can run with an i7. My approach (which suits other things I do with my computers) is to have more machines rather than faster ones. Where's the fairness in that? Though it took a while to accumulate enough of them to be able to outrun a single C2D, I did finally do so. Note that while my rank in this project, measured by TC or RAC, is what I consider respectable, it is quite a bit lower when compared to people with the same number of machines.
Joined: 4 Aug 08 Posts: 46 Credit: 8,255,900 RAC: 0
Testing, testing. 1...2...3... Is this thing on? :) -jim
Joined: 28 Apr 08 Posts: 1415 Credit: 2,716,428 RAC: 0
Server broken! Red stuff all over the place! :-(
Joined: 16 Nov 07 Posts: 23 Credit: 4,774,710 RAC: 0
Looks like MW is almost back again... Just the server that "can't attach shared memory"... 5 more minutes maybe?!?!
Joined: 16 Nov 07 Posts: 23 Credit: 4,774,710 RAC: 0
At least the web pages are up :) Progress is progress. How 'bout that, 2 posts in the same second. Nice...
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
I wonder what caused the server outage. It must have been bad for Travis and Dave to need to lose another weekend getting things online. Looks like it's all get up and go. We just need the splitters to give us some work to do. But the demand will suck the moisture from the WUs because of the length of time the servers were away. A lot of us used FreeHAL as a fill-in, as Einstein is down as well, and, in my case, Malaria is still formulating their next stage. As Paul says - live long and crunch! Go away, I was asleep
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
Well, at least this time my other project, Rosetta, stayed up with no problems! I kept on crunching. :D Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 1 Sep 08 Posts: 520 Credit: 302,524,931 RAC: 15
If RPI is concerned about bandwidth, then additional funding might not solve their concern. The approach of creating much larger GPU work units should help a bit with the bandwidth issue, though. The thing is, the combination of optimized CPU applications (and your implementation of much of the optimization in the regular application), plus the hyper-efficient optimized GPU applications, PLUS the doubling of your user population in the past three months, suggests to me that the real issue is a case of being *too* successful. That is sure to have major bandwidth and general server-load issues attached to it.
|
©2024 Astroinformatics Group