Message boards : Number crunching : credit comparison to other projects
Joined: 19 Nov 07 Posts: 29 Credit: 3,353,124 RAC: 0
The better-optimized SETI client uses the SSE instruction sets built into Intel CPUs; SSE gives software developers, and the programmers who optimize apps, more to work with. SSE has been supported since the AMD Athlon XP, SSE2 since the Opteron and Athlon 64, and AMD introduced a subset of SSE3 in revision E (Venice and San Diego) of their Athlon 64 CPUs.

Please use the "Reply" or "Quote" buttons on posts instead of "reply to this thread", so the posts stay linked together ("X is a reply to Y").
Joined: 22 Mar 08 Posts: 90 Credit: 501,728 RAC: 0
OK, the first full day of crunching the new app is done. I dropped from about 9k using milksop's app to about 1.5k with the new one. Yep, I think a little adjustment is in order. I don't expect to make as much as with milksop's app, but I'd like to see about half as much anyway.

A clear conscience is usually the sign of a bad memory
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
Sure enough, the CPCS here is now: average credit per CPU second 0.017187, or 61.8732 cr/hr, which is very close to the observed 1 credit per minute. So, despite the chart at BOINC Combined Statistics claiming that MW is paying at a rate of "2.5x SETI", for Labbie's Q9450 @ 3.36 GHz, SETI is actually the overall "better paying" project. I do not presume to speak for Labbie, but if credits are reduced here, I can see how someone in a similar position would think it's an unfair reduction, since more is already available at SETI.

The problem is in how SETI has chosen to do their optimized applications, keeping them separate from the stock application. The stock application does include some optimizations, but not all of them. If you really want true "cross-project parity", then what needs to be done is to apply the same degree of optimization to all stock applications, baseline credits against those roughly equivalent stock applications, and then encourage all projects to open-source their code so that fully optimized versions are available, and/or to test fully optimized applications in the main project. This means that resources would need to be available on the project side to test those applications. Since SETI is constantly saying that they don't have enough available resources (time or money) to handle that kind of true effort towards "cross-project parity", it is somewhat laughable that David Anderson feels that everyone should reference his project as the "baseline", and in particular that the "baseline" should be the stock application, which is kept artificially low by the aforementioned shortage of resources to do the right thing...

Bottom line: people need to stop believing that the numbers at BOINC Combined Statistics (and at other sites) mean anything other than some composite comparison against some unknown composite class of host. Unless the host lists are identical between projects, that composite value is different for each project. It doesn't mean squat once you start comparing real applications and all the possible variations of issues with the projects (processor penalties, OS penalties, old clients making requests for zero credit, etc., etc., etc.).

@ Travis and Dave - Run your project the way you see fit. Encourage the cross-project parity crusaders to see the errors of their ways and to come up with a better plan.
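The conversion behind those figures is simple arithmetic; here is a minimal Python sketch of it, using only the 0.017187 credit-per-CPU-second figure quoted above (everything else is derived from it):

[code]
# Convert BOINC's "average credit per CPU second" into credits per hour
# and credits per minute.
def credit_rates(credit_per_cpu_second):
    per_hour = credit_per_cpu_second * 3600
    per_minute = credit_per_cpu_second * 60
    return per_hour, per_minute

cr_hr, cr_min = credit_rates(0.017187)
print(f"{cr_hr:.4f} cr/hr, {cr_min:.4f} cr/min")  # 61.8732 cr/hr, 1.0312 cr/min
[/code]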
Joined: 9 Sep 08 Posts: 96 Credit: 336,443,946 RAC: 0
Hear, hear...
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
Unfortunately, this doesn't seem like a very easy problem (or else someone would have come up with a good solution by now). So, in light of that, we'd like to keep our credit in line with other projects: if it's too low we'll raise it, and if it's too high we'll lower it.
Joined: 21 Nov 08 Posts: 90 Credit: 2,601 RAC: 0
There's no single answer to any of this, because it all depends, and whatever seems fair to one group of users will cause screams of outrage from another. As far as I can see, the best approach is to look at a list of projects and push a pin in somewhere along the line.
Joined: 22 Feb 08 Posts: 260 Credit: 57,387,048 RAC: 0
Thanks for making that clear (again...). I think it's better to at least try to keep some kind of parity than to deliberately break it.

mic.
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
I am getting an average of 30/hour with the new WUs. With the old WUs and milksop's app I could get around 100/hour max. This is on a P4 2.66. I think the credits are average or higher for older systems, but they seem low on new, fast ones (especially with the credit cap), which are becoming the majority (if they aren't already).

Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
Just throwing in my thoughts, based on looking at the non-purged results, the time taken for the credit given, and translating these back to MW credit per hour for each of my PCs:

(a) Oldest PC - a dual P3 @ 933 MHz under Win2K Pro. Average time per CPU for the 39.84 CS WU is 13,400 seconds, so credit per hour per CPU = 39.84 CS in 13,400/3600 = 3.72 hours = 10.7 CS per hour per CPU.

(b) Next oldest PC - a dual Prestonia Xeon @ 2.8 GHz under XP SP3 Pro (32 bit). Average time for a 39.84 CS WU = 6,700 seconds, so credit per hour per CPU = 39.84 CS in 6,700/3600 = 1.86 hours = 21.4 CS per hour per CPU.

(c) Oldest Core 2 Quad - a QX6700 @ 3.0 GHz under XP SP3 Pro (32 bit). Average time for a 39.84 CS WU = 3,170 seconds, so credit per hour per CPU = 39.84 CS in 3,170/3600 = 0.88 hours = 45.2 CS per hour per CPU.

(d) Newest Core 2 Quad - a QX9650 @ 3.85 GHz under XP SP3 Pro (32 bit). Average time for a 39.84 CS WU = 2,020 seconds, so credit per hour per CPU = 39.84 CS in 2,020/3600 = 0.56 hours = 71.1 CS per hour per CPU.

So, the question that should be asked is: what is the median computer specification crunching MW, whether AMD or Intel powered? If the median-spec computer can be identified and the credit given by the servers arranged to meet the average given by a range of projects - say 50 CS per hour per CPU - then that should be the fair credit level.

Obviously, as the MW stock client changes and PCs are upgraded, the credit given should be adjusted, because the average computer specification will move. But this could be adjusted once in each 6 to 12 month period.
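All four figures come from the same one-line formula; here is a minimal Python sketch of it, using only the WU credit and average runtimes listed above (the host labels are just shorthand for the machines described):

[code]
# Credit per hour per CPU = WU credit / (average runtime in hours),
# for the average runtimes listed above for a 39.84 CS work unit.
WU_CREDIT = 39.84

avg_runtime_seconds = {
    "dual P3 @ 933 MHz":  13400,
    "dual Xeon @ 2.8 GHz": 6700,
    "QX6700 @ 3.0 GHz":    3170,
    "QX9650 @ 3.85 GHz":   2020,
}

for host, seconds in avg_runtime_seconds.items():
    cs_per_hour = WU_CREDIT / (seconds / 3600)
    print(f"{host:22} {cs_per_hour:5.1f} CS/hour/CPU")
# prints roughly 10.7, 21.4, 45.2 and 71.0 CS/hour/CPU, matching the figures above
[/code]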
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
[quote]Just throwing in my thoughts, based on looking at the non-purged results, the time taken for the credit given, and translating these back to MW credit per hour for each of my PCs[/quote]
Nice range given.

Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
Again, therein lies the rub. As I demonstrated, for Labbie's Q9450, SETI offers higher credit potential than here. However, if you look at speedimic's 2.8 GHz Xeon and then drill down into the SETI and MW project stats for that specific system, you'll discover the following:

SETI - Average credit per CPU second 0.003885
MW@H - Average credit per CPU second 0.014103

So, from two different users, you have two completely different perspectives. As you said, it isn't an "easy problem" to solve. The problem is getting project admins to understand that moving credits up and down isn't a real solution, because it causes just as much "unfairness" as it purports to solve...
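To put those two per-host figures side by side, here is a minimal Python sketch that converts them to credits per hour and takes the ratio; the only inputs are the two numbers quoted above for that Xeon:

[code]
# speedimic's 2.8 GHz Xeon: average credit per CPU second, per project.
seti_cpcs = 0.003885
mw_cpcs = 0.014103

print(f"SETI: {seti_cpcs * 3600:.2f} cr/hr")        # ~13.99 cr/hr
print(f"MW@H: {mw_cpcs * 3600:.2f} cr/hr")          # ~50.77 cr/hr
print(f"MW/SETI ratio: {mw_cpcs / seti_cpcs:.2f}")  # ~3.63 on this host
[/code]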
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
Here is my new solution: Give out candy instead. :P

Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
Rather than the approach I just gave, two posts down, I see the only other way of ensuring a rough cross-project parity is to:

(a) Record the total number of hosts crunching MW at a specific point in time and the total CS output of those hosts.

(b) Let the project run for a good time, say between 4 and 7 days.

(c) Record the number of hosts and the CS output of all of them at the end of the designated time period. Note whether the number of hosts has changed between the start and end of the time slot being measured, and use an arithmetic mean as an approximation. Note the difference between the finishing number of CS and that from the start.

[b]This will give the total number of CS produced by the average number of hosts during that time.[/b]

Now just divide the total number of CS recorded by the number of hosts and by the number of hours run from the start to the end of the time period. This will produce the average CS per hour per host (irrespective of the number of cores or CPUs). That average can then be compared with an equivalent average for other projects (if it is known). This method averages out the host differences, and gets around the point Brian made.
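A minimal Python sketch of that bookkeeping, assuming only that the project can report the number of attached hosts and the total credit granted at two points in time; the Snapshot class, function name and example numbers are all made up for illustration:

[code]
from dataclasses import dataclass

@dataclass
class Snapshot:
    hosts: int           # hosts attached to the project at this point in time
    total_credit: float  # total CS granted by the project so far

def avg_cs_per_host_hour(start: Snapshot, end: Snapshot, hours: float) -> float:
    """Average CS per hour per host over the window, using the arithmetic
    mean of the two host counts as suggested above."""
    mean_hosts = (start.hosts + end.hosts) / 2
    credit_delta = end.total_credit - start.total_credit
    return credit_delta / (mean_hosts * hours)

# Hypothetical 7-day window; all numbers here are invented for illustration.
start = Snapshot(hosts=10000, total_credit=50000000)
end = Snapshot(hosts=11000, total_credit=85000000)
print(f"{avg_cs_per_host_hour(start, end, 7 * 24):.1f} CS/hour/host")  # ~19.8
[/code]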
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
[quote]Here is my new solution:[/quote]
Trick or Treat?
Joined: 4 Oct 08 Posts: 1734 Credit: 64,228,409 RAC: 0
We've all been tricked into lots and lots of posting.
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
[quote]We've all been tricked into lots and lots of posting.[/quote]
Where's Misfit when you need him the most?
Joined: 21 Nov 08 Posts: 90 Credit: 2,601 RAC: 0
The sort that some people will say is delicious, some will say sucks rhino, some will say rots our teeth, some will say causes hyperactivity...... |