Welcome to MilkyWay@home

WU abuse

Message boards : Number crunching : WU abuse

Previous · 1 . . . 5 · 6 · 7 · 8 · 9 · 10 · Next

Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16882 - Posted: 26 Mar 2009, 1:24:34 UTC - in response to Message 16358.  

Pwrguru wrote:
The simple answer would be to just do away with all points....Then we would see how many people would still be here when the dust settles........


Yes, the "simple answer" would be to cut off our noses despite our faces, in complete ignorance of basic human nature, all for the sake of what is supposed to be a moral ideal, but is utterly unattainable in any case.

ID: 16882
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16884 - Posted: 26 Mar 2009, 2:01:59 UTC - in response to Message 16421.  

Debs wrote:
I don't see what is wrong with a carefully thought out script. A script that is written to hammer the server more than it needs to is something else.


I agree totally.

Debs wrote:
I run a script that controls a number of projects, and uploads completed workunits every 40 minutes


Seems reasonable. My fastest box (which isn't all that fast) tends to fail to upload some of its completed WUs. I really need to spend some time upgrading the client, because that is probably the problem. The slower boxes rarely seem to run out of WUs for this project, unless there are basically none being sent out by the server for a while. I do see "stranded", complete WUs here and there, though that seems to occur more frequently with the Windows boxes.

I found it "dry" this evening, with about 6 completed WUs just sitting there, and nothing else to work on because I had all my other projects on that box NNT. I went ahead and allowed tasks on some other projects, which is good, because it's been about an hour and I still don't have any new tasks.


Debs wrote:
If I knew how to detect whether a specific project is waiting to upload or has run out of work, or better still how to tell how many tasks are in my queue waiting to start, I would write the script to check every so often whether a project is running short of work, and I would only connect at that time.


I use boinc_cmd on my Linux server box, which is strictly CLI. It displays all kinds of WU state values and whatnot. My guess is it's just reading some XML files somewhere; that's how this all seems to work.
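That guess about "reading some XML files" can be sketched concretely. This is purely an illustration, not BOINC's actual schema: the element names and the state value below are simplified stand-ins for what the client's client_state.xml really contains, so check them against your own client before relying on anything like this.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for BOINC's client_state.xml. The real file's
# schema is much richer; here we just pretend that <state>5</state>
# means "computed but not yet reported" (an assumption, not the
# documented BOINC state numbering).
SAMPLE = """
<client_state>
  <result><name>wu_1</name><state>5</state></result>
  <result><name>wu_2</name><state>2</state></result>
  <result><name>wu_3</name><state>5</state></result>
</client_state>
"""

def count_finished(xml_text):
    """Count <result> elements whose <state> marks them as finished."""
    root = ET.fromstring(xml_text)
    return sum(1 for r in root.iter("result") if r.findtext("state") == "5")

print(count_finished(SAMPLE))  # prints 2 for this sample
```

A script like the one Debs describes could run a check like this every few minutes and only contact the scheduler (or shell out to the command-line client tool) when the count of queued or finished work crosses some threshold.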

Debs wrote:
And yes, I AM a credit "whore". I just don't have the hardware yet to reach the top 1000 here or in BOINC combined :)


If only. I can't seem to break back into the top 5000 overall, though BOINC All Project Stats says I'm ranked 749th by total credit and 515th by RAC for this project.

And, yes, I volunteer my CPU cycles to projects that I believe in, and collect credits (as intangible as they may be) in return. If that makes me a "credit whore", so be it. If I am, I think I'm in some very good company in that regard.

ID: 16884
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16885 - Posted: 26 Mar 2009, 2:08:36 UTC - in response to Message 16474.  

KWSN imcrazynow wrote:
From what I see the slower systems have no problems running out of work.


I think it's fair to say "very few problems running out of work". It has happened recently, even on one of my slower boxes. I do think that there's definitely some correlation between how fast your machine can process a WU and how busy it's going to stay, in terms of real time.

ID: 16885
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16886 - Posted: 26 Mar 2009, 2:15:23 UTC - in response to Message 16543.  

Brian Silvers wrote:
I think what a lot of people have trouble conceptualizing is that there can be hundreds or even thousands of requests per second. If there are 500 tasks available and 300 requests for an average of 2 tasks per second, some are going to be told that there aren't any available, while others get their 2 that they requested. That's just simple math.

caferace wrote:
Brian, perhaps if you had a system and a current BOINC client that would fall into the parameters you seem to enjoy theorizing about, things might be far more clear to you. As it is, I can tell you that much faster machines than your AMD 3700+ have issues that are exponentially outside your experience with BOINC and MW.


Oh, wise one, please enlighten us with the mighty power of the wisdom borne of your vastly superior hardware!
ID: 16886
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16887 - Posted: 26 Mar 2009, 2:24:28 UTC - in response to Message 16612.  

Brian Silvers wrote:
I tend to doubt just a second hard disk will alleviate enough issues to get it to where people are not clamoring about not having work, but I could be wrong.


Actually, I remember being taught that moving your LDAP data to another spindle could result in a large throughput increase in AD services on a domain controller. I see this situation as at least partially analogous, with a couple of disparate processes beating one drive (or array) to death with seeks all over the platter.

BTW, that almost certainly is a SCSI drive.

As for people "not clamoring about not having work": I don't hold out much hope of that ever happening.

ID: 16887
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16888 - Posted: 26 Mar 2009, 2:30:01 UTC - in response to Message 16763.  

Lord Tedric wrote:
As far as Crunchers are concerned, most projects are about credits, that's why they complain about it!

Here we go again....

ID: 16888
The Gas Giant
Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 16889 - Posted: 26 Mar 2009, 2:30:48 UTC

I just love it when people can't argue with facts and get abusive instead!
ID: 16889
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16890 - Posted: 26 Mar 2009, 2:33:45 UTC - in response to Message 16771.  

Paul D. Buck wrote:
I complain because the credit system is flawed and the payment is inherently unfair ...


It's at times like these that I wish I was a lot better at math, so I could prove it. Intuitively, I don't think "payment" of credits is likely to ever be "fair" any more than anything else can be in this world.

ID: 16890
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16891 - Posted: 26 Mar 2009, 2:36:05 UTC - in response to Message 16778.  

John Galt 007 wrote:
OK, maybe 500 is a bit low...1k per core would get you about 200k credit per day on an i7, or any 8 CPU box. UL1 is doing 193k RAC with the top host. That should keep his PC fed...


Can you explain to me what interest is being served by imposing artificial limits?

ID: 16891
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16892 - Posted: 26 Mar 2009, 2:53:37 UTC - in response to Message 16794.  
Last modified: 26 Mar 2009, 3:24:43 UTC

Paul D. Buck wrote:
I do argue for cross project parity and I do not buy the argument that it is impossible. Well, it is impossible if the "powers that be" don't even want to look at the issue. But there is no technical reason that we could not fix this.


There is a VERY simple and fundamental technical reason that "we could not fix this". Different projects process at different speeds on different CPUs. Some projects run considerably faster on Intel Core2 CPUs (especially optimized apps that can take advantage of their large L2 caches). Some projects run much faster on AMD CPUs. So where's your baseline? What do you use as the basis for your "parity"?

Yes, I know that SETI is the proverbial "500 pound gorilla" here, and even they seem to leave a little slack in this regard, because (I'm guessing) they know how impossible true parity would be to implement.

And all this doesn't even include GPUs. What happens when they have both CUDA and ATI apps for this project? I would literally bet the ranch that, when they do, the WU throughput isn't even going to be close; one is going to flat-out blow the other away (I won't predict which).

Even if they were about even, what's to keep someone from building an i7 box (or even a dual-i7 box) with three Tesla 1060s and a 395 in it? Or four 4870s?

Well, nothing except for the fact that they would never have enough WUs to feed it, LOL. But you get my point: such a box would have astronomical credits/hour, to the extent that it had work to do.

Until you can provide at least a tenable theory as to how your vaunted "parity" could be implemented, how about you stop blaming some conspiratorial "powers that be" for the lack of it?

[edit]
Tell you what. If you can't solve the "problem" how about you:
1) Explain in detail just exactly what "cross project parity" is. That is, if we were to have it, how would we know?
2) Explain to a poor, old, ignorant credit whore like myself why the inequity you perceive (this "unfairness") bothers you so much. Why do you consider it a problem to begin with? If people choose to donate their resources to projects for their own reasons, and one of those reasons (perhaps the ONLY reason) is credit, how is that a problem for you? Would penalizing that person (or even eliminating credit altogether, as has been suggested here) benefit you in any way?

And I have questions for all of you "I'm only doing this for the Science" types: If you really are, and you don't care at all about credit, then why do you care how much (or how little) credit we credit whores receive? Why do you resort to tactics like a whole team threatening to quit a project because it grants "too much credit"? "Too much", according to whom?
[/edit]
ID: 16892
The Gas Giant
Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 16893 - Posted: 26 Mar 2009, 2:55:28 UTC - in response to Message 16890.  

Paul D. Buck wrote:
I complain because the credit system is flawed and the payment is inherently unfair ...


Lloyd M. wrote:
It's at times like these that I wish I was a lot better at math, so I could prove it. Intuitively, I don't think "payment" of credits is likely to ever be "fair" any more than anything else can be in this world.

As a rule I think it's pretty fair.

Over the years I have had various Intel-based platforms available to me. I've always believed that the credit granted as the generational improvements occurred has been in line with what was expected when compared to the older platforms. But I've always wanted more, more, More, MORE MORE MORE I TELL YOU!

Ahem...sorry about that.

Yes, some people have influenced the amount of credit they have been granted in some way... but the quorum system does help limit this manipulation. I just wish BOINC had got the benchmark system working right between Linux and 'doze. I note LHC still tends to grant about the least amount of credit for this reason.

All up, the credit issue is just a side show with bright lights to get people in and crunching. The underlying issue is still the science, but the credits sure do get people in and crunching! For example:

1. Would I be doing MW if the credits were purely in line with other projects? Probably not.

2. Why not? I'm not sure of what we are actually doing (mind you I haven't really looked into it since the credits are so good).

3. Am I still doing other projects? Yes.

4. Why? Because they have good technical merit.

5. Why don't I put all my computer resources to those projects? Because I like the credits this project brings in, and not all the projects I want to crunch have work at the moment.

6. When those projects have work again will I crunch them? Yes. I have BOINC set up with a resource share that should favour those projects once they have work.

7. Will I miss the MW credits once BOINC starts getting work from those projects? No and yes. I will still have MW with some resource share and in any case no other BOINC project has support for an ATI card (yet).

But this is off subject...LOL.

ID: 16893
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16896 - Posted: 26 Mar 2009, 3:11:34 UTC - in response to Message 16893.  

The Gas Giant wrote:
Over the years I have had various Intel-based platforms available to me. I've always believed that the credit granted as the generational improvements occurred has been in line with what was expected when compared to the older platforms. But I've always wanted more, more, More, MORE MORE MORE I TELL YOU!


That brings up another point: how do you establish parity between different generations of CPUs from the same manufacturer? Many apps take advantage of specialized instruction sets (SSE3, etc.) that older CPUs don't have, and sometimes the resulting performance increase is dramatic. What shall we do to enforce "parity"? Refuse to use these specialized instruction sets (which help get more science done), all in the interest of "fairness"? You want to give people running older/less powerful CPUs enough credit that it's worth their electricity to run them (I had a Celeron box that I finally decided wasn't productive enough to warrant the electricity it used and the heat it generated). On the other hand, especially with optimizations, this is going to result in the latest CPUs generating a LOT of credit.

The answer is you don't establish parity because it is impossible to do so.

I guess I'm kind of dense, as I just don't see who this hurts or how it's a problem.

ID: 16896
verstapp
Joined: 26 Jan 09
Posts: 589
Credit: 497,834,261
RAC: 0
Message 16898 - Posted: 26 Mar 2009, 4:38:12 UTC
Last modified: 26 Mar 2009, 4:39:34 UTC

Re: Gas Giant
No credit for anything that won't run on an 8088! None of this new-fangled 8086 nonsense! :)
Or maybe even Z-80...

I'll stop my script just as soon as that PC stops running out of work.
Cheers,

PeterV

ID: 16898
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16900 - Posted: 26 Mar 2009, 4:54:59 UTC - in response to Message 16898.  

verstapp wrote:
Re: Gas Giant
No credit for anything that won't run on an 8088! None of this new-fangled 8086 nonsense! :)
Or maybe even Z-80...


Uhhh, the 8086 predates the 8088. The latter was created because RAM cost so blooming much that it was too costly to have a 16-bit memory bus in machines like the original IBM PC.

And the Z80 was pretty much an 8080 with double the complement of registers, so it could run anything an 8080 could, but you could write code specific to the Z80 to take advantage of all the extra registers.

Interestingly enough, a contemporary machine (the Tandy 2000), which was MS-DOS compatible but not fully PC compatible, had the vastly superior, truly 16-bit 80186 processor, which even had some functions moved from microcode into hardware, allowing them to run faster.
ID: 16900
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16903 - Posted: 26 Mar 2009, 5:58:19 UTC - in response to Message 16889.  

The Gas Giant wrote:
I just love it when people can't argue with facts and get abusive instead!

Yes, I have found that there is a certain sort that resorts to ad hominem attacks.

ID: 16903
The Gas Giant
Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 16905 - Posted: 26 Mar 2009, 6:31:32 UTC - in response to Message 16896.  

The Gas Giant wrote:
Over the years I have had available to me various intel based platforms. I've always believed that the credit granted as the generational improvements occured have been in line what was expected when compared to the older platforms. But I've always wanted more, more, More, MORE MORE MORE I TELL YOU!


Lloyd M. wrote:
That brings up another point: how do you establish parity between different generations of CPUs from the same manufacturer? Many apps take advantage of specialized instruction sets (SSE3, etc.) that older CPUs don't have, and sometimes the resulting performance increase is dramatic. What shall we do to enforce "parity"? Refuse to use these specialized instruction sets (which help get more science done), all in the interest of "fairness"? You want to give people running older/less powerful CPUs enough credit that it's worth their electricity to run them (I had a Celeron box that I finally decided wasn't productive enough to warrant the electricity it used and the heat it generated). On the other hand, especially with optimizations, this is going to result in the latest CPUs generating a LOT of credit.

The answer is you don't establish parity because it is impossible to do so.

I guess I'm kind of dense, as I just don't see who this hurts or how it's a problem.

I think you think I said something I didn't say or imply up there, but it's interesting that you brought up SSE optimisations. I see the SETI SSE3-optimised app heats my CPU more than the optimised MW app does.

Overall, my position is that if a project releases a stock app that grants X credits, and also releases the source code and a 3rd party optimises it, then the credit granted should still be X for the optimised app. It is unimportant by how much the code is optimised, and the credit should not be adjusted or reduced.

Now, if the project then releases another stock app that is partially optimised, then purely on benchmarks or FLOPS counting alone, the credit granted for WUs completed using that app will be proportionally less. Let's call that Y. (This is what LHC@home did a few years ago.) If a 3rd party is still able to release an optimised app that completes WUs faster, the granted credit should still be Y.

My old 3.0GHz P4 with HT is running SETI and MW using optimised apps and can get a RAC of 800 on SETI and 1400 on MW. It was doing a RAC closer to 1400 on SETI before they incorporated optimisations into their stock code. If it did a project that does not have 3rd-party optimised apps (like Malaria Control or LHC@home) it would only do a RAC of around 400. It no longer does projects that do not have a 3rd-party optimised app available.

The point is, I get a pretty similar RAC for different projects on the same machine when using stock project apps. This is how it should be. A faster machine will get a higher RAC, and I have that as well. My quad core also gets pretty similar project RACs when running stock apps, but is proportionally faster than my old P4. This is how it should be.

Project stock applications should give similar credits per hour on the same machine no matter what project you do.

Mind you, it does start to get ugly once a project starts releasing different stock apps optimised for different instruction sets and distributing them as its stock apps. If/when that happens (I know there was talk about it), the granted credit should be based around the worst-performing app they have.

Now we can discuss what I have said or what I hope to have said.

Live long and BOINC!

Paul.
ID: 16905
Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 16910 - Posted: 26 Mar 2009, 8:07:07 UTC - in response to Message 16896.  

Lloyd M. wrote:
The answer is you don't establish parity because it is impossible to do so.

I guess I'm kind of dense, as I just don't see who this hurts or how it's a problem.

Someone else asked this question, and I posted a link to the analysis I did in 2007, which is still available in the UBW. That analysis is a little dated, but it built on all the analysis we had done in prior years, including the BOINC Beta.

That said, the only things that are impossible are those that we decide we are not even going to attempt to do ... things never tried are always impossible.

As to who it hurts? Well, just look at the rhetoric here and tell me that there is not hurt ...

Had we addressed these issues in the BOINC Beta when they were small and manageable, we would not have seen the long-running wars over these topics. In just the recent past, Travis indicated that the only people he felt made sense were those who agreed with him on reducing credit awards. Yet the supposed "pressure" to have MW lower its award could just as easily have been relieved by those projects with below-average awards raising theirs ...

Now, after all the yelling is over about MW being too high, have you seen any "pressure" from the developers and project admins on those projects that are awarding credit at below-average rates? No, you haven't ... because there isn't any ... if awarding too much is so evil, why isn't awarding too little equally evil?

Some of these debates do take on the semblance of arguing the merits of religion, and I am not particularly interested in that ... but the original definition of how credit was supposed to work had at its core the idea / ideal that there would be parity across projects. The use of the synthetic benchmark was supposed to get us there, even though we proved in Beta that the benchmark was neither stable enough nor accurate enough across OSes to be usable. Even then I proposed the use of standardized tasks to make the measurements ...
ID: 16910
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16946 - Posted: 26 Mar 2009, 19:04:06 UTC - in response to Message 16905.  
Last modified: 26 Mar 2009, 19:46:26 UTC

The Gas Giant wrote:
I think you think I said something I didn't say or imply up there,


Probably not. It's somebody else that's advocating for cross-project parity, and other people who are doing this "only for the science".


[snip]

The Gas Giant wrote:
Now if the project then releases another stock app that is partially optimised then purely based on benchmarks or FLOPS counting alone the credit granted for wu's completed utilising that app will be proportionally less. Let's call that Y. (This is what LHC@home did a few years ago). If a 3rd party is still able to release an optimised app that completes it faster, then the granted credit should still be Y.


We can agree to disagree here, or perhaps I don't understand. If the stock app gets some optimizations so it runs faster, the credits have to be lowered accordingly, but if only the third-party optimized app does, then it's OK as it is?

So more efficient software gets penalized (but only if it's the stock app), while more efficient hardware (or hardware made more efficient by enhanced instruction sets) isn't?

My feeling is that work is work. SETI did two "credit devaluations" in short order, and they pretty much lost me after the second one. I was doing decent RAC on an old quad PIII Xeon box before they started fooling with the credits. So now SETI doesn't get many cycles from me.


The Gas Giant wrote:
My old 3.0GHz P4 with HT is running SETI and MW utilising optimised apps and can get a RAC of 800 on SETI and 1400 on MW. It was doing a RAC closer to 1400 on SETI before they incorporated optimisations in their stock code.


...and then devalued the credits granted for the same amount of work


The Gas Giant wrote:
If it did a project that does not have 3rd party optimised apps (like Malaria Control or LHC@home) it would only do a RAC of around 400. It does not do projects that do not have a 3rd party optimised app available anymore.


So much for "parity"

[snip]

The Gas Giant wrote:
Project stock applications should give similar credits per hour on the same machine no matter what project you do.


Not possible across the board. Some projects run a lot faster on AMD than Intel, or the other way around.

The Gas Giant wrote:
Mind you it does start to get ugly once a project starts releasing different stock apps that are optimised for the different instruction sets and distribute that app as their stock app. If/when that happens (I know there was talk about it) the granted credit should be based around the worst performing app they have.


If only. It appears to me that they generally try to achieve "parity" on some of the better performers, and the old boxes be hanged. That way, the faster boxes aren't earning "too much" credit.

The Gas Giant wrote:
Now we can discuss what I have said or what I hope to have said.


I'm sorry I gave that impression. It seems like you have similar views on cross-project parity as I do, and you seem more interested in a kind of intra-project parity, which I agree is totally doable (not necessarily advisable, but at least doable).
ID: 16946
Lloyd M.
Joined: 1 Dec 08
Posts: 139
Credit: 8,721,208
RAC: 0
Message 16947 - Posted: 26 Mar 2009, 19:44:29 UTC - in response to Message 16910.  

Paul D. Buck wrote:
That said, the only things that are impossible are those that we decide we are not even going to attempt to do ... things never tried are always impossible.


Yeah, sure. Try telling that to the alchemists.

Let me put it this way: the resources that would have to be expended (to overcome technical barriers, probably on an ongoing basis) and the political obstacles that would have to be overcome to achieve anything like cross-project parity, as I understand it, would be so prohibitive that the likelihood of its ever being achieved is vanishingly small.

Paul D. Buck wrote:
As to who it hurts? Well, just look at the rhetoric here and tell me that there is not hurt ...


Let me be more precise: where is the injury? As long as there aren't projects granting so much credit as to devalue the whole idea for everyone (a sort of credit inflation), who is being harmed by this?

I'll grant that some people are angry about this. Some have even threatened to take their ball and go to a different project (so to speak) en masse. I guess I'm just too dense to see what they're angry about.

Paul D. Buck wrote:
Had we addressed these issues in BOINC Beta when they were small and manageable we would not have seen the long running wars over these topics.


Assuming you could have made up for the performance disparities between AMD and Intel (depending on the project), somehow been prescient enough to see the coming emergence of GPU processing, and also managed to compensate for the inevitable performance disparity between nVidia and ATI, depending on the project (and, yes, I'm sure that eventually there will be more than one project that supports both).

Yeah, right. If you say so.

Paul D. Buck wrote:
In just the recent past Travis indicated that the only people that he felt made sense were those that agreed with him on reduction of credit awards. Yet, the supposed "pressure" to have MW lower its award could just as easily have been solved by those projects with below average awards raising theirs ...


Except that it is much easier to pressure one high-granting outlier than a bunch of low-granting projects that are somewhat the norm.

Also, how likely is it that an entire team will threaten to quit because the amount of credit granted is too low? As mercenary as some of us are accused of being, I just don't see that happening.

Yet, what did happen on this very project was an entire team threatened to quit because MW was granting "too much" credit.

For my part, if I don't think the credit is worth the electricity I have to spend to get it, I move on. I don't even bother protesting credit being too low. I don't see that as any of my business, anyway. This is a free-will, volunteer relationship, and I can use whatever criteria I see fit to choose which projects I devote what resources to.

Paul D. Buck wrote:
Now, after all the yelling is over about MW being too high have you seen any "pressure" from the developers and project admins for those projects that are awarding credit at below average rates? No, you don't ... because there isn't any ... if awarding too much is so evil, why isn't awarding too little equally evil?


Because the people who try to convince us that too much credit is evil are supposedly doing this "strictly for the science", so credit matters only if there is too much of it (making it appealing to credit whores, who allegedly care only about credit and don't give a hoot about the science). Also, their rationale is that projects granting "too much" credit poach opportunistic credit-whore volunteers from the "morally pure" projects that toe the line of the mighty "500 pound gorilla" project. Of course, this flies in the face of the actual facts, which show no such tendency; some people simply let theory and feelings override the facts.

Paul D. Buck wrote:
Some of these debates do get into the semblance of arguing the merits of religion and I am not particularly interested in that ...


I respect that. I would say it's closer to politics, as I see some of the same traits (sometimes even the same terms, like "fairness") being played out here.

Of course, if anything, this is in some ways even more dangerous ground to tread than religion.

Paul D. Buck wrote:
but, the original definition of how credit was supposed to work had at its core the idea / ideal that there would be parity across projects.

Well, many people considered Communism to be an "ideal", and that proved to be impossible to implement in the real world.

I'm not saying that cross-project parity is a Communist idea, or even a statist idea. I am saying that certain things people hold as ideals are simply unattainable in the real world.

ID: 16947
Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 16950 - Posted: 26 Mar 2009, 20:19:22 UTC

Lloyd,

Not going to quote you as that would get a super long post.

At THIS time, yes, the chances are getting smaller ... but if the project types would wake up and smell the coffee, they would realize that if we fixed this once and for all, they would not have to fiddle so much with credit issues, nor invest the time in the disputes.

Again, my history was in systems engineering, where other seemingly intractable problems were the order of the day, and they were solved, as this could be solved...

As to the harm ... well, the projects and the people ... in one of your posts just above, you indicated that the two credit deflations that occurred in SaH lost you as a participant. Well, welcome to the new world, because the concept of deflation is now being built into the system with Dr. Korpela's magical tuning tool ... and as more and more projects adopt his concept, things will only get worse. The saddest part of all this is that it is being done more or less in the dark of night ... but that merely delays the day of reckoning. Things like this hurt BOINC implicitly ... also, the survey done by UCB indicated that a fair percentage of people quit BOINC over credit issues, including the fact that the system was not understandable ...

As far as the differences go, what most don't realize is that the real underlying problem is that the definitions were bad, and then the measurements were bad, because the terminology and technology were used and applied inappropriately ...

As far as being prescient, yes, I did predict GPU processing because that was the next logical extension of desktop computing. Again, this is what I used to do for a living ... and since 1975 I have owned, built, and used computers. I used to have well over a thousand books on them, the systems, languages, etc ... it was my stock reading. One only had to look at the history of mainframes to see the tide of history.

The anger is about the unfairness. A recent set of experiments has shown that even dogs know when things are unfair ... if they know it, I would think crunchers would be able to figure it out too ...

OK, so why aren't they applying pressure to the ONE worst-awarding project? Here again we are back to that issue of fairness; the developers and project types seem to have this aversion to awards that grow or increase as they should ... almost as if it costs them real money to issue the CS scores.

IF they want to quit ... let them go ... as history is showing, there are more than enough other participants to take up the slack ...

As to your last point: well, since all they have been doing is slapping band-aids on the problem and not addressing the fundamental issues, yes, it is not possible. That is one of the reasons Travis has to invest so much time and energy in this topic. Were it a rationally designed system, there would not be these issues ...

I will also point out that now we have several projects that are issuing "badges" using multiple incompatible definitions ... new controversy coming to a desktop near you...
ID: 16950
©2024 Astroinformatics Group