Please check this host
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
Travis, please don't go to a quorum of 2 if it doesn't affect the science, as it is a waste of CPU time... it seems you have other tools to adjust for the cheaters :) What's a good value for credit, typically, if it's fixed?
Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0
Take an average of the benchmark-based "claimed credit" across different OSes and CPUs, throw out the high and the low, and voilà: credit granted for a given batch of results of that length. As you change the parameters, making work units longer in your genetic search, you will of course have to adjust upwards.
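[Editor's note: a minimal Python sketch of that trimmed-mean idea, assuming the claimed credits for a batch have already been collected; the function name and the fallback rule for small samples are illustrative, not the project's actual validator code.]

```python
def granted_credit(claims):
    """Average the claimed credits after dropping the single highest
    and lowest claims (trimmed mean). Hypothetical sketch only."""
    if len(claims) < 3:
        # Too few claims to trim anything; fall back to a plain mean.
        return sum(claims) / len(claims)
    trimmed = sorted(claims)[1:-1]  # discard the low and high outliers
    return sum(trimmed) / len(trimmed)

# Example: four honest benchmark-based claims plus one inflated one.
print(granted_credit([1.5, 1.7, 1.8, 1.9, 250.0]))  # ~1.8; the 250.0 claim is discarded
```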
Joined: 7 Oct 07 Posts: 4 Credit: 9,887,161 RAC: 0
> That's probably a good idea. The work units are (for the most part) fixed size, so fixed credit might be the way to go. Currently, the amount of work done is based on two things: 1. the size of the volume, and 2. the number of stars.

In my opinion, yes, as with QMC@HOME or Cosmology@Home.
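[Editor's note: if the work really is a function of volume size and star count, the server could estimate credit from those two fields. This sketch assumes a simple linear product; the functional form and the scale constant are guesses for illustration, not the project's formula.]

```python
def estimated_credit(volume_size, n_stars, scale=1.0e-7):
    """Hypothetical server-side credit estimate: work grows with the
    search-volume size and the number of stars evaluated."""
    return scale * volume_size * n_stars

print(estimated_credit(2.0e5, 100))  # -> 2.0 with these made-up inputs
```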
Joined: 7 Oct 07 Posts: 4 Credit: 9,887,161 RAC: 0
> Take an average of the benchmark-based "claimed credit" across different OSes and CPUs...

But exclude cheaters, of course. ;-)
Joined: 10 Nov 07 Posts: 96 Credit: 29,931,027 RAC: 0
> Take an average of the benchmark-based "claimed credit" across different OSes and CPUs, throw out the high and the low, and voilà: credit granted for a given batch of results of that length.

A more representative average could be obtained by filtering out results from clients whose benchmarking methods are known to be unreliable (some of them platform-specific) or user-adjustable, if that would be possible to implement. I would hope that all participants in an alpha-testing project are prepared for all manner of anomalies WRT credit and stats, up to and including retroactive adjustments or cancelled credit (albeit only as emergency measures). Start with conservative estimates, then adjust as may be required to maintain approximate parity with the benchmark-based measures (which, in theory, implies parity with other projects), and nobody should have grounds for complaint.

Rosetta@home seems to have implemented a consensus-over-WUs method, where a running average is kept for repeated runs of the same model (or sufficiently similar ones): the first few hosts to process tasks from a given 'batch' may be granted, shall we say, idiosyncratic amounts of credit, but over time one would expect the grants to converge on a fair value. Obviously this depends on having sufficiently similar batches, and I guess it requires additional fields in the database records.
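[Editor's note: a sketch of that Rosetta@home-style consensus-over-WUs scheme as described in the post. Keeping a running mean of claims per batch does need extra state per batch, as the poster guesses; the class, field names, and update rule here are assumptions, not Rosetta's actual implementation.]

```python
class BatchCreditAverager:
    """Grant each result the running average of claimed credit for its
    batch, so early idiosyncratic claims get diluted as more arrive."""

    def __init__(self):
        self.stats = {}  # batch_id -> (result_count, mean_claimed)

    def grant(self, batch_id, claimed_credit):
        count, mean = self.stats.get(batch_id, (0, 0.0))
        count += 1
        mean += (claimed_credit - mean) / count  # incremental mean update
        self.stats[batch_id] = (count, mean)
        return mean  # converges on a fair value as count grows
```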
Joined: 28 Aug 07 Posts: 49 Credit: 556,559 RAC: 0
Most of the reliable results that I have seen are granted credit of 1.50-1.80 per WU. I would suggest setting it somewhere in this range, or just making the value 2 to be nice and simple.
Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0
> Most of the reliable results that I have seen are granted credit of 1.50-1.80 per WU. I would suggest setting it somewhere in this range, or just making the value 2 to be nice and simple.

Hey Acmefrog, the numbers you suggest look a little low to me; that's why I didn't give any values. If you look at your or my RAC and hosts, the sample size on OS and CPU is way too small... hence my suggestion. Linux, Windows, AMD, Intel, and the BOINC client version are all variables. Do you feel confident that you have a great enough cross-section to give a good number? I don't.
Joined: 10 Nov 07 Posts: 96 Credit: 29,931,027 RAC: 0
> Most of the reliable results that I have seen are granted credit of 1.50-1.80 per WU. I would suggest setting it somewhere in this range, or just making the value 2 to be nice and simple.

I don't know what the variables involved may be (I could be comparing apples to oranges), but that seems very high. My G5's recent tasks have each taken just under 3.5 minutes and earned 0.63 or 0.64 of a cobblestone, for an average production of about 11 CS/h, quite typical for this system on other projects. (I've also had a small number that take double the time, earning double the credit.) At 2 CS/WU this host would be getting about 35 CS/h, much more than it does on SETI@home using a third-party optimized application, let alone on any other project with a stock app.

If the WUs vary that much, by a factor of two, three, or more, a fixed-credit system won't be suitable unless the variations are predictable and can be quickly assessed by the servers, e.g. from the number of stars or time-steps in a simulation.
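[Editor's note: the post's rates follow from simple unit conversion; this tiny helper reproduces its arithmetic, with the 0.635-credit figure taken as the midpoint of the quoted 0.63-0.64 range.]

```python
def credit_per_hour(credit_per_wu, minutes_per_wu):
    """Convert a per-WU grant and runtime into cobblestones per hour."""
    return credit_per_wu * 60.0 / minutes_per_wu

print(credit_per_hour(0.635, 3.5))  # ~10.9 CS/h, the post's "about 11"
print(credit_per_hour(2.0, 3.5))    # ~34.3 CS/h, the post's "about 35"
```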
Joined: 28 Aug 07 Posts: 49 Credit: 556,559 RAC: 0
Looking at yours, you have one host that does crunch at around 2 but others that crunch at a lower rate. The other people's computers that I have looked at seem to depend on how fast a WU was crunched. The variation might be from the data (the number of stars or something), but I think my guess is fairly accurate (not a scientific sampling). Maybe the credit value would have to change depending on the type of WU, but to me the most common value popping up seems to be in the range I mentioned. A credit either way would not bother me. It is the PCs claiming 25-250 credits per WU that bug me.
Joined: 8 Oct 07 Posts: 289 Credit: 3,690,838 RAC: 0
> A credit either way would not bother me. It is the PCs claiming 25-250 credits per WU that bug me.

Yes, I agree.
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
Would it be possible to put a limit on how much credit each computer may claim per day, until a credit value is figured out? How about 6,000 credits (2,000 WUs x 3 credits per WU)? That might be somewhere to start. It would cut down on the 100,000+ RAC for some people.
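[Editor's note: a minimal sketch of that per-host daily ceiling, using the suggested 6,000 = 2,000 WUs x 3 credits figure; the function and its arguments are hypothetical, not BOINC server code.]

```python
def capped_grant(granted_today, claimed, daily_cap=6000.0):
    """Pay out a claim only up to the host's remaining daily allowance."""
    remaining = max(0.0, daily_cap - granted_today)
    return min(claimed, remaining)

# A host that has already earned 5,998 credits today gets at most 2
# more, however large its claim.
print(capped_grant(5998.0, 250.0))  # -> 2.0
```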
Joined: 21 Nov 07 Posts: 52 Credit: 1,756,052 RAC: 0
> Most of the reliable results that I have seen are granted credit of 1.50-1.80 per WU. I would suggest setting it somewhere in this range, or just making the value 2 to be nice and simple.

Odysseus, my G5s claim and receive an average of 21-23 cr/hour with SETI@Home and Einstein@Home. This seems fair and is in line with the faster Windows or Linux machines. Other projects with very little participation from Mac users grant less than that due to badly optimized Mac clients (e.g. Rosetta@Home).
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
Would it also be possible to have a quorum of 1 unless the claimed credit is greater than some threshold, say 5 or 10 (or any number), and then resend until the claimed credit is below it? That would help with cheating and help eliminate the extra work.
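[Editor's note: a sketch of that adaptive-quorum suggestion. Returning None here stands in for "resend the WU to another host"; the threshold value and the rule of granting the smaller claim are illustrative assumptions, not BOINC's actual validator logic.]

```python
THRESHOLD = 10.0  # illustrative cutoff from the post ("5 or 10")

def decide(claims):
    """Accept a lone result whose claim is plausible; if the claim
    exceeds the threshold, require a second result."""
    if len(claims) == 1:
        if claims[0] <= THRESHOLD:
            return claims[0]   # quorum of 1 suffices
        return None            # suspicious claim: resend the WU
    return min(claims)         # with replicas, grant the lower claim
```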
Joined: 24 Oct 07 Posts: 22 Credit: 130,021 RAC: 0
Maybe the best solution would be to blacklist their accounts from all projects. That would hurt "high-score" cheaters most.
Joined: 29 Aug 07 Posts: 115 Credit: 502,662,458 RAC: 3,243
> IIRC, 5.10.1x reports immediately, but 5.10.2x does not.

Versions up to 5.10.13 report results immediately if "Connect to network about every" is set to zero.
Joined: 30 Aug 07 Posts: 2046 Credit: 26,480 RAC: 0
> Most of the reliable results that I have seen are granted credit of 1.50-1.80 per WU. I would suggest setting it somewhere in this range, or just making the value 2 to be nice and simple.

Ironically, I do all my development on a Mac and our Linux/Unix machines at school. The only Windows box I have access to is a much older ThinkPad :P Currently, it seems that Macs are crunching the numbers the fastest; unfortunately, the way credit is being granted right now isn't as good as it should be. Once I get the new validator up and running, this should be a lot better. We're hoping to get access to a 64-bit Windows machine, so having that binary available should speed things up for the Windows users.
Joined: 10 Nov 07 Posts: 96 Credit: 29,931,027 RAC: 0
> My G5s claim and receive an average of 21-23 cr/hour with SETI@Home and Einstein@Home. This seems fair and is in line with the faster Windows or Linux machines. Other projects with very little participation from Mac users grant less than that due to badly optimized Mac clients (e.g. Rosetta@Home).

This G5 has been getting about 16 CS/h on E@h recently, but it's been higher from other batches in this run; the S5R3 WUs seem to vary more by this measure than previous runs' did. IME the E@h apps for PPC have always had above-average productivity. On S@h it gets about 25 CS/h, but that's with a custom, processor-specific app; on S@h Beta it gets about the same as on most other projects, around 11 CS/h. I don't run Rosetta on this system, but judging from Ralph it does about average. (I have noticed that Rosetta doesn't do quite as well as some other projects on my partner's Core2 iMac.)

Anyway, I certainly won't complain if the tasks that take this host 3.5 minutes start earning two credits each; I might even increase the project's resource share. ;)
Joined: 28 Aug 07 Posts: 49 Credit: 556,559 RAC: 0
Two is better than one, in my opinion, but I can live with one as long as people are on a level playing field.