Welcome to MilkyWay@home

Please check this host

Message boards : Number crunching : Please check this host
Profile Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 590 - Posted: 25 Nov 2007, 23:27:55 UTC - in response to Message 588.  

Travis, please don't go to a quorum of 2 if it doesn't affect the science, as it's a waste of CPU time... it seems you have other tools to adjust for the cheaters :)


What's a good value for credit, typically, if it's fixed?
ID: 590
Profile Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 591 - Posted: 25 Nov 2007, 23:41:07 UTC
Last modified: 25 Nov 2007, 23:47:03 UTC

Take an average of the "claimed credit" benchmarks across different OSes and CPUs, throw out the high and the low, and voilà: credit granted for a given batch of results of that length. As you change the parameters, making workunits longer in your genetic search, you will of course have to adjust upwards.
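What's being described here is essentially a trimmed mean. A minimal Python sketch, with made-up claim values for illustration:

```python
def granted_credit(claims):
    """Trimmed mean of claimed credit: drop the single highest and
    lowest claims, then average the rest."""
    if len(claims) < 3:
        return sum(claims) / len(claims)  # too few claims to trim
    trimmed = sorted(claims)[1:-1]
    return sum(trimmed) / len(trimmed)

# Claims gathered across different OS/CPU combinations; the inflated
# 25.0 outlier and the low 0.6 are both discarded before averaging.
print(granted_credit([0.6, 1.7, 1.8, 1.9, 25.0]))  # 1.8
```

Dropping only the extremes keeps one wildly inflated benchmark from dragging the whole batch's grant upward.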
ID: 591
Profile Jenik

Joined: 7 Oct 07
Posts: 4
Credit: 9,887,161
RAC: 0
Message 593 - Posted: 25 Nov 2007, 23:47:52 UTC - in response to Message 586.  

That's probably a good idea. The workunits are (for the most part) a fixed size, so fixed credit might be the way to go. Currently, the amount of work done is based on two things: 1. the size of the volume, and 2. the number of stars.

Between a quorum of 2 and a way of calculating credit not based on BOINC's benchmarks, maybe that will fix the problem?


In my opinion: yes, as have QMC@HOME and Cosmology@Home.
ID: 593
Profile Jenik

Joined: 7 Oct 07
Posts: 4
Credit: 9,887,161
RAC: 0
Message 594 - Posted: 25 Nov 2007, 23:55:40 UTC - in response to Message 591.  

Take an average of the "claimed credit" benchmarks across different OSes and CPUs...

But exclude cheaters, of course. ;-)
ID: 594
Odysseus

Joined: 10 Nov 07
Posts: 96
Credit: 29,931,027
RAC: 0
Message 596 - Posted: 26 Nov 2007, 0:44:49 UTC - in response to Message 591.  

Take an average of the "claimed credit" benchmarks across different OSes and CPUs, throw out the high and the low, and voilà: credit granted for a given batch of results of that length.

A more representative average could be obtained by filtering out results from clients whose benchmarking methods are known to be unreliable (some of these platform-specific) or user-adjustable, if that would be possible to implement.

I would hope that all participants in an alpha-testing project are prepared for all manner of anomalies WRT credit and stats, up to and including retroactive adjustments or cancelled credit (albeit only as emergency measures). Start with conservative estimates, then adjust as may be required to maintain approximate parity with the benchmark-based measures (which, in theory, implies parity with other projects), and nobody should have grounds for complaint.

Rosetta@home seems to have implemented a consensus-over-WUs method, where a running average is kept for repeated runs of the same model (or sufficiently similar ones): the first few hosts to process tasks from a given ‘batch’ may be granted, shall we say, idiosyncratic amounts of credit, but over time one would expect the grants to converge on a fair value. Obviously this depends on having sufficiently similar batches, and I guess it requires additional fields in the database records.
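The consensus-over-WUs idea could be sketched like this (the class and method names are hypothetical, not Rosetta's actual implementation):

```python
class BatchCredit:
    """Running average of claimed credit for one batch of similar WUs.
    Early results are granted roughly their claimed value; later grants
    converge on the batch mean, diluting any one idiosyncratic claim."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def grant(self, claimed):
        # Fold the new claim into the running average and grant that.
        self.total += claimed
        self.count += 1
        return self.total / self.count
```

The first host in a batch gets exactly its claim; an inflated claim arriving later is pulled toward the established average, which is the convergence behaviour described above.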

ID: 596
Profile [B^S] Acmefrog
Joined: 28 Aug 07
Posts: 49
Credit: 556,559
RAC: 0
Message 597 - Posted: 26 Nov 2007, 1:52:08 UTC

Most of the reliable results that I have seen are granted 1.50-1.80 credits per WU. I would suggest making it somewhere in this range, or just making the value 2 to keep it nice and simple.
ID: 597
Profile Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 598 - Posted: 26 Nov 2007, 2:03:17 UTC - in response to Message 597.  

Most of the reliable results that I have seen are granted 1.50-1.80 credits per WU. I would suggest making it somewhere in this range, or just making the value 2 to keep it nice and simple.



Hey Acmefrog, the numbers you suggest look a little low to me; that's why I didn't give any values. If you look at your or my RAC and hosts, the sample size on OS and CPU is way too small, hence my suggestion. Linux, Windows, AMD, Intel, and the BOINC client version are all variables. Do you feel confident that you have a great enough cross-section to give a good number? I don't.
ID: 598
Odysseus

Joined: 10 Nov 07
Posts: 96
Credit: 29,931,027
RAC: 0
Message 599 - Posted: 26 Nov 2007, 4:51:43 UTC - in response to Message 597.  
Last modified: 26 Nov 2007, 4:55:16 UTC

Most of the reliable results that I have seen are granted 1.50-1.80 credits per WU. I would suggest making it somewhere in this range, or just making the value 2 to keep it nice and simple.

I don’t know what the variables involved may be—I could be comparing apples to oranges—but that seems very high. My G5’s recent tasks have each taken just under 3.5 minutes and earned 0.63 or 0.64 of a cobblestone, for an average production of about 11 CS/h, quite typical for this system on other projects. (I’ve also had a small number that take double the time, earning double the credit.) At 2 CS/WU this host would be getting about 35 CS/h, much more than it does on SETI@home using a third-party optimized application, let alone on any other project with a stock app.

If the WUs vary that much—by a factor of two, three, or more—a fixed-credit system won’t be suitable unless the variations are predictable and can be quickly assessed by the servers, e.g. from the number of stars or time-steps in a simulation.
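The per-hour figures quoted can be checked with a one-line conversion:

```python
def cs_per_hour(credit_per_wu, minutes_per_wu):
    # Cobblestones per hour = credit per task / task length in hours.
    return credit_per_wu * 60.0 / minutes_per_wu

print(cs_per_hour(0.63, 3.5))  # about 10.8 CS/h, the "about 11" cited
print(cs_per_hour(2.0, 3.5))   # about 34.3 CS/h at a flat 2 CS/WU
```

So a flat 2 CS/WU really would roughly triple this host's hourly rate, as the post argues.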

ID: 599
Profile [B^S] Acmefrog
Joined: 28 Aug 07
Posts: 49
Credit: 556,559
RAC: 0
Message 600 - Posted: 26 Nov 2007, 6:38:43 UTC - in response to Message 598.  
Last modified: 26 Nov 2007, 6:40:03 UTC


Hey Acmefrog, the numbers you suggest look a little low to me; that's why I didn't give any values. If you look at your or my RAC and hosts, the sample size on OS and CPU is way too small, hence my suggestion. Linux, Windows, AMD, Intel, and the BOINC client version are all variables. Do you feel confident that you have a great enough cross-section to give a good number? I don't.

Looking at yours, you have one host that crunches at around 2 but others that crunch at a lower rate. The other people's computers that I have looked at seem to depend on how fast a WU was crunched. The variation might come from the data (the number of stars or something), but I think my guess is fairly accurate (not a scientific sampling). Maybe the credit value would have to change depending on the type of WU, but to me the most common value popping up is in the range I mentioned. A credit either way would not bother me. It is the PCs that are claiming 25-250 credits per WU that bug me.
ID: 600
Profile Jayargh
Joined: 8 Oct 07
Posts: 289
Credit: 3,690,838
RAC: 0
Message 601 - Posted: 26 Nov 2007, 14:06:53 UTC - in response to Message 600.  

A credit either way would not bother me. It is the PCs that are claiming 25-250 credits per WU that bug me.



Yes, I agree.
ID: 601
Profile banditwolf
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 604 - Posted: 26 Nov 2007, 16:00:15 UTC

Would it be possible to put a limit on how much credit each computer may claim per day until a credit value is figured out? How about 6,000 credits (2,000 WUs × 3 credits per WU)? That might be somewhere to start. It would cut down on the 100,000+ RAC for some people.
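The suggested cap is simple arithmetic; a sketch of how a server might apply it per host (the constants and names are just the poster's suggestion, not project policy):

```python
MAX_WUS_PER_DAY = 2000
CREDIT_PER_WU = 3
DAILY_CAP = MAX_WUS_PER_DAY * CREDIT_PER_WU  # 6,000 credits/day

def capped_grant(host_total_today, claim):
    # Grant no more than the host's remaining daily allowance.
    remaining = max(0, DAILY_CAP - host_total_today)
    return min(claim, remaining)
```

A host that has already banked 6,000 credits today gets nothing more, however large its claims, which is exactly what would flatten a 100,000+ RAC.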
ID: 604
Martin P.

Joined: 21 Nov 07
Posts: 52
Credit: 1,756,052
RAC: 0
Message 605 - Posted: 26 Nov 2007, 16:05:19 UTC - in response to Message 599.  

Most of the reliable results that I have seen are granted 1.50-1.80 credits per WU. I would suggest making it somewhere in this range, or just making the value 2 to keep it nice and simple.

I don’t know what the variables involved may be—I could be comparing apples to oranges—but that seems very high. My G5’s recent tasks have each taken just under 3.5 minutes and earned 0.63 or 0.64 of a cobblestone, for an average production of about 11 CS/h, quite typical for this system on other projects. (I’ve also had a small number that take double the time, earning double the credit.) At 2 CS/WU this host would be getting about 35 CS/h, much more than it does on SETI@home using a third-party optimized application, let alone on any other project with a stock app.

If the WUs vary that much—by a factor of two, three, or more—a fixed-credit system won’t be suitable unless the variations are predictable and can be quickly assessed by the servers, e.g. from the number of stars or time-steps in a simulation.



Odysseus,

My G5s claim and receive an average of 21-23 cr/hour with SETI@Home and Einstein@Home. This seems fair and is in line with the faster Windows or Linux machines. Other projects with very little participation from Mac users grant less than that, due to badly optimized Mac clients (e.g. Rosetta@Home).

ID: 605
Profile banditwolf
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 606 - Posted: 26 Nov 2007, 16:07:17 UTC

Would it also be possible to have a quorum of 1 unless claimed credit > 5 or 10 (or some other number), then resend until credit < 10? That would help with cheating and eliminate extra work.
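The validator rule being proposed could look something like this (the threshold value and callback are hypothetical):

```python
CREDIT_CAP = 10.0  # above this, the claim is treated as suspect

def validate(claimed_credit, resend_workunit):
    """Quorum of 1: accept a modest claim outright; hold a large
    claim and resend the WU so a second host can confirm it."""
    if claimed_credit < CREDIT_CAP:
        return claimed_credit   # granted immediately, no quorum needed
    resend_workunit()           # suspicious claim: request a second result
    return None                 # credit decided after the re-run
```

Most results would then cost only one crunch, with the redundancy of a quorum paid only on the outlier claims.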
ID: 606
Profile agony

Joined: 24 Oct 07
Posts: 22
Credit: 130,021
RAC: 0
Message 608 - Posted: 26 Nov 2007, 17:19:04 UTC

Maybe the best solution would be to blacklist their accounts from all projects. That would hurt "high-score" cheaters the most.
ID: 608
zombie67 [MM]
Joined: 29 Aug 07
Posts: 115
Credit: 502,662,458
RAC: 3,243
Message 610 - Posted: 26 Nov 2007, 17:55:08 UTC - in response to Message 572.  

IIRC, 5.10.1x reports immediately, but 5.10.2x does not.

All up to 5.10.13 report the same day when asking for new work.

5.10.14 and above report after 24 hours, or when requesting more work when the queue is empty, or when done manually and further following the normal rules of contact.

No client reports immediately. That was a command line option in the 4.xx version, long since deprecated. (i.e. no longer in the code)


Clients up to 5.10.13 report results immediately if "Connect to network about every" is set to zero.

ID: 610
Profile Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 613 - Posted: 26 Nov 2007, 18:29:20 UTC - in response to Message 605.  

Most of the reliable results that I have seen are granted 1.50-1.80 credits per WU. I would suggest making it somewhere in this range, or just making the value 2 to keep it nice and simple.

I don’t know what the variables involved may be—I could be comparing apples to oranges—but that seems very high. My G5’s recent tasks have each taken just under 3.5 minutes and earned 0.63 or 0.64 of a cobblestone, for an average production of about 11 CS/h, quite typical for this system on other projects. (I’ve also had a small number that take double the time, earning double the credit.) At 2 CS/WU this host would be getting about 35 CS/h, much more than it does on SETI@home using a third-party optimized application, let alone on any other project with a stock app.

If the WUs vary that much—by a factor of two, three, or more—a fixed-credit system won’t be suitable unless the variations are predictable and can be quickly assessed by the servers, e.g. from the number of stars or time-steps in a simulation.



Odysseus,

My G5s claim and receive an average of 21-23 cr/hour with SETI@Home and Einstein@Home. This seems fair and is in line with the faster Windows or Linux machines. Other projects with very little participation from Mac users grant less than that, due to badly optimized Mac clients (e.g. Rosetta@Home).


Ironically, I do all my development on a Mac and on our Linux/Unix machines at school. The only Windows box I have access to is a much older ThinkPad :P Currently, it seems that Macs are crunching the numbers the fastest; unfortunately, the way credit is being granted right now isn't as good as it should be. Once I get the new validator up and running, this should be a lot better.

We're hoping to get access to a 64-bit Windows machine, so having that binary available should speed things up for the Windows users.
ID: 613
Odysseus

Joined: 10 Nov 07
Posts: 96
Credit: 29,931,027
RAC: 0
Message 625 - Posted: 27 Nov 2007, 3:21:15 UTC - in response to Message 605.  
Last modified: 27 Nov 2007, 3:23:25 UTC

My G5s claim and receive an average of 21-23 cr/hour with SETI@Home and Einstein@Home. This seems fair and is in line with the faster Windows or Linux machines. Other projects with very little participation from Mac users grant less than that, due to badly optimized Mac clients (e.g. Rosetta@Home).

This G5 has been getting about 16 CS/h on E@h recently, but it’s been higher from other batches in this run; the S5R3 WUs seem to vary more by this measure than previous runs’ did. IME the E@h apps for PPC have always had above-average productivity. On S@h it gets about 25 CS/h, but that’s with a custom, processor-specific app; on S@h Beta it gets about the same as most other projects, around 11 CS/h. I don’t run Rosetta on this system, but from Ralph it does about average. (I have noticed that Rosetta doesn’t do quite as well as some other projects on my partner’s Core2 iMac.)

Anyway, I certainly won’t complain if the tasks that take this host 3.5 minutes start earning two credits each—I might even increase the project’s resource share. ;)

ID: 625
Profile [B^S] Acmefrog
Joined: 28 Aug 07
Posts: 49
Credit: 556,559
RAC: 0
Message 636 - Posted: 27 Nov 2007, 7:28:14 UTC

Two is better than one, in my opinion, but I can live with one as long as people are on a level playing field.
ID: 636


©2024 Astroinformatics Group