Welcome to MilkyWay@home

Credit lowering

Brian Silvers

Send message
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 31027 - Posted: 16 Sep 2009, 1:02:43 UTC - in response to Message 31000.  

Top-posting this, because it's no use in going point-by-point....

As I said, you complain about being so readily dismissed, yet your post just sweeps aside nearly everything I said, very quickly and without much thought... Perhaps CP's explanation of how things are tightly grouped together here will help, or perhaps not...

I would suggest that David Anderson isn't the only one who isn't open to new ideas... Instead of babies and bathwater, it seems as though someone is willing to go down with the sinking ship...

Hmm, yes, well ...

FLOP counts ... well, there are problems there too ...

Which FLOP gets counted how?

Just as an example ... the MW tasks are MW tasks are MW tasks ... yet how many FLOPs do they take? The non-optimized CPU application takes "x" FLOPs, the optimized apps take something less, and the GPU apps are in a whole 'nother country ... and you can't necessarily compare apples to peaches there either, because if the apps are truly matched to the GPU they will not necessarily have any relationship to each other ...

All I have been saying is that there is no need to jettison the original design intent. Yes, according to your lights it would be justifiable and perhaps worth it. Perhaps just as many of us disagree with your desire. Just because UCB is too lazy to solve the problem and too self-centered and controlling to let others solve issues does not mean the problem is unsolvable ...

In your approach, no one would be able to trust the system either, and the accounting and conversions would become another nightmare, one as unneeded as a Facebook interface to BOINC...

And just because you don't use the current system (flawed as it is) to measure and contrast progress vis-à-vis the various projects does not mean others do not ... flawed as they are, the stats are still the only tool I have to help me decide allocations of work effort among the projects ... I grant you that the fact that the system is so flawed, and that there is no interest in fixing it, does not help, but it is the only yardstick I have ... so it is the one I will use ...

So, I see the trees of the 50+ projects I contribute to, and I look at the forest even though it is on fire ... so I could suggest that you do the same, looking for the forest instead of only looking at the leaf on one tree ... :)

Another adage that comes to mind is about babies and bath water ... though I have to admit that the bath water has always sounded more attractive to me ...

Cluster Physik

Send message
Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 31029 - Posted: 16 Sep 2009, 1:30:29 UTC - in response to Message 31026.  
Last modified: 16 Sep 2009, 2:22:52 UTC

Tested the CPU version 0.20_X87 against 0.19_SSE on MP2200:

0.19_SSE - 3h 35m
0.20_X87 ~ 7h (stopped after 42m at 10%)

Back to 0.19_SSE
Any chance for a 0.20_SSE that runs in the time range of the 0.19_SSE?

Strange thing, that. I've tested it only on a Phenom up to now, and the 0.20_x87 app is about 19% faster than the old 0.19_SSE variant on this CPU. But I just started it on an AthlonXP 1800+ and the 0.20_x87 really takes twice as long as the 0.19_SSE app. Really strange. The SSE variant shows about the expected scaling between the AXP and the Phenom, but the x87 version completely chokes on an AthlonXP. Don't ask me for a sensible reason. I will see how the Microsoft compiler does compared to the Intel compiler used so far. My ad hoc assumption for now would be that the Intel compiler simply slows down the AthlonXP, but doesn't do so for newer CPUs.

Edit:
Just using the MS compiler proved to be unsuccessful. It is a bit faster than the Intel-compiled version on the AthlonXP (13%, not the missing factor of two) and 5% slower on the Phenom.
But maybe it is just some accuracy option I've chosen that slows it down so much. Relaxing it a bit doesn't reduce the Microsoft-compiled version's runtime on the Phenom by more than 5% or so (the MS compiler is tied with Intel's now), but on the AthlonXP it gets A LOT faster, exactly twice as fast as before. Don't ask me why exactly this happens.
In the end, that version is ending up 13% faster on the AthlonXP than the old 0.19 version, completely in line with all the other 0.20 compilations. The sacrifice is a tiny bit of accuracy. It still generates the exact same output as all the other versions on the short WUs I used to test it on, so it should be okay. But a slight bitter taste remains, as with larger WUs there could be some (minor) deviations to the output of the other 0.20 versions.

But I will probably provide you with the new version, as it should be still significantly better than the 0.19 version. Just let me package and upload the new version.
Profile Paul D. Buck

Send message
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 31034 - Posted: 16 Sep 2009, 3:16:26 UTC - in response to Message 31027.  

Top-posting this, because it's no use in going point-by-point....

As I said, you complain about being so readily dismissed, yet your post just sweeps aside nearly everything I said, very quickly and without much thought... Perhaps CP's explanation of how things are tightly grouped together here will help, or perhaps not...

I would suggest that David Anderson isn't the only one who isn't open to new ideas... Instead of babies and bathwater, it seems as though someone is willing to go down with the sinking ship...

Of course I will defer to CP's expertise... it was my understanding that the block architecture of the two GPU systems was significantly different enough to allow for the generalization I made. In the case of MW I was clearly wrong. Again, the understanding I had was that the way the two systems laid out the data and allowed its manipulation was too different to allow for virtually identical code ...

On another point he indicates a delta of 5%, which is not huge, but it is also not completely insignificant ... but I would still note that unless the code base contains no significant variations in the layout of the operations, there is no way to do a one-to-one comparison of a GPU FLOP to a CPU FLOP ... which was the only point I was trying to make ...

As to sweeping aside your arguments, well, I could make the same claim, but won't ... I actually spent several hours in the bath thinking about this, and prior to this day I have been thinking about the award systems since the days of SETI Classic (I also tried to participate in CPDN Classic as well as a couple of other DC projects of the day). So, how many years do I have to think about a system that I don't think will work before I have thought on it enough? And during the BOINC beta I spent a couple of months doing nothing but thinking about and writing about the benchmarking and credit system and debating these issues ad nauseam ...

As best I can see all you are suggesting is a variation of one MW task equals one credit in isolation from all other projects. There is no significant difference if it is 100 credits or any other number if it is in isolation from all other projects. As far as this being a new idea, well, it isn't ... it is as old as DC where each DC system has its own awards that are completely isolated from all other systems. From the WCG system rooted in UD to Folding@Home, to DIMES, to you name it ...

BOINC was supposed to be different. Not 50 isolated projects with isolated goals and accounting and random assignments of awards ... but one system where the participant could join multiple projects and would have a fair assignment of award for effort provided regardless of the project to which that effort would be provided.

Just as I am wedded to the original design goal that you should be able to run multiple projects on BOINC; yes, I am wedded to a fair, equitable, and cross-project credit system.

Back to part of your new and improved system: your proposal is that the stat sites should be the ones to create the "exchange" rates ... so now we take unconstrained project awards with multiple multipliers, pass the numbers on to multiple stat sites, and each will decide how to report and coalesce the values to come up with cross-project equivalence? How does that make things simpler and easier to understand? Now we also need stat-site correction factors, because sure as there are little green apples the exchange rates will be different across the stat sites ...

But, the long and short of it is that it looks like you are not going to convince me and vice versa ... you think I am too stuck in the sand to consider "new" ideas (when it isn't IMNSHO) and I think you are corrupted by the current BOINC culture where the answer is to run from problems rather than to apply some engineering and solve them ...
Profile Paul D. Buck

Send message
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 31048 - Posted: 16 Sep 2009, 4:36:13 UTC

Actually, I did find the original design proposal/criteria/transition plan.

A portion (without the formatting, not going to spend the time):


Why is SETI@home switching to BOINC?

Several reasons:
BOINC transparently and securely downloads new application versions. This lets us upgrade and extend SETI@home without requiring you to download and install new software. It will make it easy for us to integrate new algorithms, such as analyzing our 8 bit/sample reobservation data, or looking for other types of radio signals such as short broadband pulses.
BOINC has a more flexible data architecture than SETI@home Classic. Data can be transferred to and from multiple servers, and can remain resident on PC disks. In the future, we'll use these capabilities to search for ET signals in a much larger radio frequency range.
BOINC distributes work based on host parameters. Work units requiring 512 MB of RAM, for example, will only be sent to hosts having at least that much RAM. This lets us use BOINC for a wider range of computations than the 'one size fits all' SETI@home Classic.
Other distributed computing projects are also using BOINC, and you can share your computer time among projects of your choosing.
Can I run both versions at once?

No; if you do this, SETI@home/BOINC won't get any CPU time because it runs at a lower priority. You must uninstall SETI@home Classic before running SETI@home/BOINC.

What will happen to my workunit totals?

BOINC projects can have workunits of many different lengths, so BOINC keeps track of your computer's work in terms of actual computation performed rather than number of workunits.

Because of this change, SETI@home accounts will have separate old and new work totals. The old total is the workunit total from the current SETI@home. It won't change, and a section of our web site will show the final leaderboards based on old work totals. New work unit totals will start from zero.


There is a link out to "Computation Credit"


Each project gives you credit for the computations your computers perform for it. BOINC's unit of credit, the Cobblestone, is 1/100 day of CPU time on a reference computer that does

1,000 double-precision MIPS based on the Whetstone benchmark.
1,000 VAX MIPS based on the Dhrystone benchmark.
Eventually, credit may reflect network transfer and disk storage as well as computation.
How credit is determined

When your computer completes a result, BOINC determines an amount of claimed credit in one of two ways:
In general, the claimed credit is the result's CPU time multiplied by the CPU benchmarks as measured by the BOINC software. NOTE: the BOINC software is not optimized for specific processors. Its benchmark numbers may be lower than those produced by other programs.
Some applications determine claimed credit themselves, and report it to BOINC. This would be the case, for example, with applications that use graphics coprocessors or other non-CPU hardware.
Claimed credit is reported to a project when your computer communicates with its server. The granted credit that you receive may be different from the claimed credit, and there may be a delay of a few hours or days before it is granted. This is because some BOINC projects grant credit only after results have been validated.
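The quoted definition boils down to a simple formula: claimed credit = CPU days × (average benchmark MIPS / 1,000) × 100. A minimal Python sketch of that benchmark-times-time rule (illustrative only; the actual BOINC client code differs in its details):

```python
def claimed_credit(cpu_seconds, whetstone_mips, dhrystone_mips):
    """Classic benchmark*time claim: one Cobblestone is 1/100 day of CPU
    on a reference host doing 1,000 double-precision Whetstone MIPS and
    1,000 Dhrystone VAX MIPS."""
    cpu_days = cpu_seconds / 86400.0
    avg_mips = (whetstone_mips + dhrystone_mips) / 2.0
    return cpu_days * (avg_mips / 1000.0) * 100.0
```

So the reference host claims 100 Cobblestones for a full day of CPU time, and a host benchmarking twice as fast claims the same credit in half the time; this is also why unoptimized benchmarks paired with optimized science applications skew the claims being argued about in this thread.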

Recent Average Credit

Projects maintain two counts of granted credit:

Total credit: The total number of Cobblestones performed and granted.
Recent average credit: The average number of Cobblestones per day granted recently. This average decreases by a factor of two every week, according to the algorithm given below.
Both quantities (total and recent average) are maintained for each user, host and team.
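The decay rule quoted above ("decreases by a factor of two every week") describes an exponentially weighted average with a one-week half-life. A sketch of the idea, not the exact server implementation:

```python
import math

HALF_LIFE = 7 * 86400.0  # one week, in seconds


def update_rac(rac, new_credit, dt):
    """Decay the old average by 2^(-dt/week), then fold in the new
    credit expressed as a daily rate over the elapsed interval dt."""
    decay = math.exp(-dt * math.log(2.0) / HALF_LIFE)
    daily_rate = new_credit / dt * 86400.0
    return rac * decay + (1.0 - decay) * daily_rate
```

With no new work, a RAC of 100 falls to 50 after one week; a host granting credit at a steady daily rate converges toward that rate.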


Lastly, though not explicitly stated in either of these, part of much of the glossy advertising was that with UCB doing the "heavy lifting" on the BOINC architecture, projects were supposed to be freed from having to mess with the internals of BOINC so they could concentrate on the science they are trying to do ... yet both MW and Collatz (or others in support of the projects) are spending time trying to get ATI support to work ...

The real problem is not that I don't think on this enough, it keeps me awake at night ... :(
Brian Silvers

Send message
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 31053 - Posted: 16 Sep 2009, 5:30:46 UTC - in response to Message 31034.  


As best I can see all you are suggesting is a variation of one MW task equals one credit in isolation from all other projects.


Not only one MW task in isolation, but 1 SETI, 1 Einstein, 1 of any project. The only tricky ones would be CPDN or Rosetta (user-variable runtime), but the beauty of it is that it is no longer a BOINC function to monitor and balance credit. All credit decisions become the responsibility of each individual project. If one project wants to go wild and offer a googol of credits for one task, that's fine and dandy, because their task no longer has any officially implied equivalence in value to a task in another project.

So long as you keep wanting to make tasks of different projects equivalent to each other, you're not going to "get" what I'm saying... :(


BOINC was supposed to be different. Not 50 isolated projects with isolated goals and accounting and random assignments of awards ... but one system where the participant could join multiple projects and would have a fair assignment of award for effort provided regardless of the project to which that effort would be provided.


We've spent how many years with this system, only to have the same thing happen over and over and over??? There comes a point in time where continually clinging to an "in the beginning" type of ideology is just going to keep things stagnant.


Just as I am wedded to the original design goal that you should be able to run multiple projects on BOINC; yes, I am wedded to a fair, equitable, and cross-project credit system.


:sigh:

You can have that and more, but first you need to let go of trying to force it at the BOINC-wide level. Let each project handle its own credit granting, then let an independent entity, or a group of independent entities, work out exchange rates / normalization tables.
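As a toy illustration of what such an independent entity might compute (the thread leaves the actual mechanism open; the credit-per-host-day metric and the project names here are purely hypothetical):

```python
def exchange_rates(credit_per_host_day, base_project):
    """Normalize each project's average credit granted per host-day
    against a chosen base project, yielding a table where rate[p]
    credits of project p correspond to one base-project credit."""
    base = credit_per_host_day[base_project]
    return {project: granted / base
            for project, granted in credit_per_host_day.items()}
```

For example, `exchange_rates({"SETI": 100.0, "MilkyWay": 400.0}, "SETI")` would say a MilkyWay credit trades at 4:1 against SETI. Different stat sites feeding in different samples would of course produce different tables, which is exactly the correction-factor objection raised in the reply further down.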


Back to part of your new and improved system your proposal is that the stat sites should be the ones that would create the "exchange" rates ... so now we take unconstrained project awards with multiple multipliers pass the numbers on to multiple stat sites that will each decide how to report and coalesce the values to come up with cross-project equivalence? How does that make things simpler and easier to understand?


Shaka, when the walls fell...

The whole point is that if you push it out to the stat sites, the constant bickering is THEIR BABY. The BOINC dev team no longer has to concern themselves with it. The projects will only have to concern themselves with their own internal metrics, not keep track of the minutiae of other projects and whimsical "self-calibrating" floats of cobblestones.

Now we also need stat site correction factors because sure as there are little green apples the exchange rates will be different across the stat sites ...


That's no longer the problem of BOINC or the ******SCIENCE PROJECTS*******

When you figure out that I'm sick and tired of scientists trying to mediate squabbles over intangible "warm fuzzy" credit issues, diverting their time and energy away from actual science, let me know...

-Brian
Profile Paul D. Buck

Send message
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 31063 - Posted: 16 Sep 2009, 11:38:00 UTC - in response to Message 31053.  

That's no longer the problem of BOINC or the ******SCIENCE PROJECTS*******

When you figure out that I'm sick and tired of scientists trying to mediate squabbles over intangible "warm fuzzy" credit issues, diverting their time and energy away from actual science, let me know...

And that is the point you miss. You conflate the UCB team with scientists doing research. Heck, they are not even researching BOINC and how it works ... Though DA has roots in the scientific community, he is no longer acting as a scientist doing research. A critical point you ignore.

UCB as developer of BOINC is not involved in science. Not their job. Their job is to develop BOINC, one part of which is the "warm fuzzy" stuff. I agree it is not the responsibility of the projects to do BOINC development, and it never should have been put on Travis and the other projects. UCB has continually abdicated its responsibility to do robust development, and thus they, the projects, have been forced into the breach.

The main complaint I would have of your solution is that it is not a solution at all but, once again, an abdication of responsibility for a major design element of BOINC, throwing up our hands and begging someone else to clean up the mess ...

In that the credit system is one of the core elements of BOINC it is, has been, and still should be, the responsibility of the BOINC development team.

Following your argument to its logical end the addition of ATI support increases the statistics and that too should be pushed out to the stat sites to develop ... eventually UCB as the developers of BOINC will have no responsibilities at all ... a great comfort to them I am sure ...

Anyway, as I said before, you simply want to avoid the problem, I want UCB to man up and fix it along with all the other problems they have been avoiding lo these many years ...
Brian Silvers

Send message
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 31072 - Posted: 16 Sep 2009, 14:51:46 UTC - in response to Message 31063.  
Last modified: 16 Sep 2009, 15:03:12 UTC

That's no longer the problem of BOINC or the ******SCIENCE PROJECTS*******

When you figure out that I'm sick and tired of scientists trying to mediate squabbles over intangible "warm fuzzy" credit issues, diverting their time and energy away from actual science, let me know...

And that is the point you miss. You conflate the UCB team with scientists doing research. Heck, they are not even researching BOINC and how it works ... Though DA has roots in the scientific community, he is no longer acting as a scientist doing research. A critical point you ignore.


You apparently missed the word "or" in my sentence... The remaining part of the sentence referred to, in the case of this project, Travis, Dave, and anyone else at RPI, not the people at UCB. In the case of Einstein, it would be referencing Bruce Allen, Bernd Machenschalk, etc... Only when it came to SETI would it have anything to do with the people at UCB.

Again, dismissing out of hand...

The projects would only be responsible for their own project. The BOINC developers would only be responsible for developing the BOINC software. It does put a bit more responsibility on the projects, as they would need to allocate more database space to house the credit tables, which would include categories such as "cumulative", "Generation 1", "Generation 2", etc... Each time a new application is released, the project would freeze credit for that generation, reset credit, and start a new generation, including figuring out the normalization factor for the closing generation so that the cumulative bucket goes up appropriately.

After that, all that is needed is for the projects to issue their statistics dumps like they do today, with whatever modifications are needed to give the statistics sites the ability to come up with the exchange rates for both current and cumulative... If one wanted to go really wild and crazy, projects could also issue dumps for each generation. Stat sites could then provide far greater detail on how a particular individual performed within each credit generation, or just go for the cumulative stats. Cumulative would be the direct equivalent to / replacement of the current BOINC-wide value.
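In outline, the per-generation bookkeeping described above might look like this (the field names, and the idea that the project supplies a normalization factor when it freezes a generation, are hypothetical; the posts don't fix a schema):

```python
def new_credit_record():
    """Per-user credit record: a running bucket for the current
    application generation, frozen totals for past generations,
    and a normalized cumulative total."""
    return {"cumulative": 0.0, "generations": [], "current": 0.0}


def grant(record, credit):
    """Grant credit within the current generation."""
    record["current"] += credit


def close_generation(record, normalization):
    """On a new application release: freeze the raw generation total,
    fold it into the cumulative bucket via the project's chosen
    normalization factor, and start the next generation at zero."""
    record["generations"].append(record["current"])
    record["cumulative"] += record["current"] * normalization
    record["current"] = 0.0
```

The periodic statistics dumps would then export both the per-generation buckets and the cumulative figure for the stat sites to work with.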

What I'm saying should actually be a statistician's dream, but you're fighting it tooth and nail...
Profile Paul D. Buck

Send message
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 31073 - Posted: 16 Sep 2009, 15:36:55 UTC - in response to Message 31072.  

You apparently missed the word "or" in my sentence... The remaining part of the sentence referred to, in the case of this project, Travis, Dave, and anyone else at RPI, not the people at UCB. In the case of Einstein, it would be referencing Bruce Allen, Bernd Machenschalk, etc... Only when it came to SETI would it have anything to do with the people at UCB.

Which are the very people I agree should not be in the business of doing credit nonsense. But the point you did not read is that I don't agree that it should absolve DA and Rom and anyone else core to the development of BOINC ... which is not doing any science at all ... these are the people I am referring to ... UCB is not SaH and SaH is not UCB ...

Again, dismissing out of hand...

Which you also keep accusing me of, when I am not dismissing it, or any other statements or arguments, "out of hand". I am pointing out the flaws in your logic. That you don't yet recognize the flaws does not mean they are not there ... there are years of discussion about the one-credit-per-task idea which, I think we agree, is pretty much what this boils down to ... and other than the proponents, this idea has garnered virtually no support, because of the complexities that have to be added on, as you have noted, to have it even begin to make sense historically ...

Face it, the problem started with a flawed design that UCB only made token efforts to correct and then dropped. It has been exacerbated by the deflationary modes put into the code along with the development of GPU computing and multiple core systems. The truth of the matter is that this problem, like many of the BOINC problems could be solved fairly handily if UCB would take their thumb out and go to work in a disciplined manner.

The projects would only be responsible for their own project. The BOINC developers would only be responsible for developing the BOINC software. It does put a bit more responsibility on the projects, as they would need to allocate more database space to house the credit tables, which would include categories such as "cumulative", "Generation 1", "Generation 2", etc... Each time a new application is released, the project would freeze credit for that generation, reset credit, and start a new generation, including figuring out the normalization factor for the closing generation so that the cumulative bucket goes up appropriately.

After that, all that is needed is for the projects to issue their statistics dumps like they do today, with whatever modifications are needed to give the statistics sites the ability to come up with the exchange rates for both current and cumulative... If one wanted to go really wild and crazy, projects could also issue dumps for each generation. Stat sites could then provide far greater detail on how a particular individual performed within each credit generation, or just go for the cumulative stats. Cumulative would be the direct equivalent to / replacement of the current BOINC-wide value.

The design you just explained puts the burden back on the projects: first for more space, second for updates each and every time they change the application. Not an insignificant burden. Instead of centralizing the credit fix, you have just made it a long-term maintenance task for each and every project in isolation, removing one of the benefits of BOINC: centralized development of the core features ...

What I'm saying should actually be a statistician's dream, but you're fighting it tooth and nail...

That I am not a statistician, though I like my credit scores as much as if not more than the next guy, does not mean that I cannot see that this proposal would not make the situation less complicated or more fair. It would, indeed, likely please some that are super hung up on stats, but the point of BOINC is to bring it into the reach of the common guy ... not stat freaks ...

So, yes, I disagree with lots of bad ideas ... sometimes even my own ... and this is a bad idea ... it solves nothing, adds burdens where it should not, and does not make the system simpler or easier to understand and use ...
Brian Silvers

Send message
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 31075 - Posted: 16 Sep 2009, 18:05:34 UTC - in response to Message 31073.  
Last modified: 16 Sep 2009, 18:13:53 UTC

You apparently missed the word "or" in my sentence... The remaining part of the sentence referred to, in the case of this project, Travis, Dave, and anyone else at RPI, not the people at UCB. In the case of Einstein, it would be referencing Bruce Allen, Bernd Machenschalk, etc... Only when it came to SETI would it have anything to do with the people at UCB.

Which are the very people I agree should not be in the business of doing credit nonsense. But the point you did not read is that I don't agree that it should absolve DA and Rom and anyone else core to the development of BOINC ... which is not doing any science at all ... these are the people I am reffering to ... UCB is not SaH and SaH is not UCB ...


DA, Rom, et al, have shown they're incapable of dealing with it.

If a plumber has had 10 attempts to fix a leaky faucet in your home, but hasn't done it, do you insist that they keep coming until they fix it, or do you ask for your money back and hire someone else? There comes a point where you have to go with someone else, IMO...



Which design you just explain puts burden back on the projects. First for more space, second for updates each and every time they change the application. Not an insignificant burden. Instead of centralizing the credit fix you have just made it a long term maintenance task for each and every project in isolation removing one of the benefits of BOINC, centralized development of the core features ...


They already have to worry about recalibrating to random adjustments from SETI. This puts the changes completely in their control, with metrics that they can be certain of; in that regard there is no material change from the current situation as far as the work required, with the plus of an increase in the reliability of the data used for the recalibration. As for the DB space, they are already maintaining a cumulative table. All that would need to be added are the tables for each generation of applications.

What I'm saying should actually be a statistician's dream, but you're fighting it tooth and nail...

In that I am not a statistician, though I like my credit scores as much if not more than the next guy does not mean that I cannot see that this proposal would not help the situation become less complicated and more fair. It would, indeed, likely please some that are super hung up on stats, but the point of BOINC is to bring it into the reach of the common guy ... not stat freaks ...


Credit, and comparing credit across projects, is about people who care about statistics... "The common guy" is either ambivalent to this situation or is upset because of the continual random lowering for the sake of lowering.

Frankly, I like you Paul, I really do, but this is starting to sound like you believe that your ideas and your ideas alone are worthy of consideration...

On that note, I will not have anything further to say to you if you respond...so the last word is yours if you wish to take it...
Profile Paul D. Buck

Send message
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 31081 - Posted: 16 Sep 2009, 21:43:00 UTC - in response to Message 31075.  

Frankly, I like you Paul, I really do, but this is starting to sound like you believe that your ideas and your ideas alone are worthy of consideration...

On that note, I will not have anything further to say to you if you respond...so the last word is yours if you wish to take it...

And I have a high regard for you ...

I am saddened that this is your takeaway ... because I mislike a bad idea of yours, suddenly I am the only source of good ideas? Well, so be it ...

It is not that I am the only one with good ideas, it is that this one is not a good idea or a solution.

Historically there have been several proposals for repairing this issue, and there have been several ideas about fixes that I could support: mine, with the calibration concepts, which addresses more than just the credit problems; the original system, fixed to make it work; and a couple of others that I would have to chase about to nail down the specifics again.

If you do come up with an idea that is worthy of consideration I would be more than happy to support it ... this one just isn't it.

Oh, and I do agree that DA and company should not be trusted with a wet paper bag full of garbage. BUT, like it or not, for the moment DA has a stranglehold on BOINC development. Which is one of the reasons I have been so critical of his "leadership" ... I mean, face it, even if your idea was a good one it too would not be implemented, even if you wrote the code to make it work ... he would not allow it to be incorporated into the baseline ... I considered writing a rebuttal letter to his latest submission for a grant, but I knew he does not have the intellectual honesty to include such in his package, so in the end I did not ... but the best thing for the long-term health of BOINC would be for it to get out from under his grip ... then the projects would have to cooperate in getting a rational development process ...
Len LE/GE

Send message
Joined: 8 Feb 08
Posts: 261
Credit: 104,050,322
RAC: 0
Message 31086 - Posted: 16 Sep 2009, 23:09:03 UTC - in response to Message 31029.  

Tested the CPU version 0.20_X87 against 0.19_SSE on MP2200:

0.19_SSE - 3h 35m
0.20_X87 ~ 7h (stopped after 42m at 10%)

Back to 0.19_SSE
Any chance for a 0.20_SSE that runs in the time range of the 0.19_SSE?


Strange thing, that. I've tested it only on a Phenom up to now, and the 0.20_x87 app is about 19% faster than the old 0.19_SSE variant on that CPU. But I just started it on an AthlonXP 1800+ and the 0.20_x87 really does take twice as long as the 0.19_SSE app. Really strange, that.
...

...
In the end, that version is ending up 13% faster on the AthlonXP than the old 0.19 version, completely in line with all the other 0.20 compilations. The sacrifice is a tiny bit of accuracy. It still generates the exact same output as all the other versions on the short WUs I used to test it on, so it should be okay. But a slight bitter taste remains, as with larger WUs there could be some (minor) deviations to the output of the other 0.20 versions.


Waited for a few finished (and validated) WUs and the faster times are impressive!
On the MP2200 I now see ~16% shorter run times with the 0.20_SSE than with the 0.19_SSE. *thumbs up*
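Since "X% faster" gets used loosely in this thread, here is a minimal sketch of the convention used above: the percentage by which run time drops relative to the old app. The helper name is my own, not anything from the BOINC code.

```python
def run_time_reduction_pct(old_minutes: float, new_minutes: float) -> float:
    """Percentage by which run time dropped going from the old to the new app."""
    return (old_minutes - new_minutes) / old_minutes * 100.0

# 0.19_SSE on the MP2200 took 3h 35m = 215 minutes; a ~16% shorter run
# with 0.20_SSE would come in at about 180.6 minutes.
print(round(run_time_reduction_pct(215.0, 180.6), 1))  # → 16.0
```

Note this is not the same number as "the new app is X% faster" in throughput terms; a 16% reduction in run time corresponds to roughly a 19% increase in WUs per hour.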
ID: 31086
Profile banditwolf
Avatar

Send message
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 31090 - Posted: 17 Sep 2009, 0:24:18 UTC

For any one solution to have an effect, all of the projects (and most likely the Boinc staff) would have to make or add the changes. To me, any change could only be in addition to the current system, since many like/want the credits. It would be nice to know how much actual work was done in comparison to other users. The closest thing now is credits, which can be used within a project for a rough comparison.


Credit lowering: I gave a couple of WUs a try yesterday. My XP P4 using SSE2 went from 80 to 75 minutes, roughly a 6% reduction in run time. It is nice, but doesn't make up for the large drop in credits.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 31090
E B

Send message
Joined: 15 Aug 09
Posts: 7
Credit: 218,896
RAC: 0
Message 31359 - Posted: 23 Sep 2009, 0:20:22 UTC

This is what I see: if I were them, I'd want the biggest, fastest computers out there, and anything that slows the WUs up would get less credit. For example: less than 24-hour internet time, surfing the net, a non-dedicated computer, uptime, etc. They go from 74 to 54 to 39 depending on how much of the machine is theirs to use. It's not the time it takes to do a WU that matters, but the speed at which it gets done. Just my opinion. Thanks.
ID: 31359
Misfit
Avatar

Send message
Joined: 27 Aug 07
Posts: 915
Credit: 1,503,319
RAC: 0
Message 31367 - Posted: 23 Sep 2009, 4:01:54 UTC - in response to Message 30884.  

David Anderson is trying to adhere to something that is fundamentally flawed, and so are you by following him. You people from the project side need to step up and tell him that he's got no clothes on...

-Brian

He sure doesn't.
me@rescam.org
ID: 31367
Profile Mr. Hankey

Send message
Joined: 9 Apr 09
Posts: 10
Credit: 117,669,581
RAC: 0
Message 31368 - Posted: 23 Sep 2009, 5:29:02 UTC - in response to Message 31081.  

Frankly, I like you Paul, I really do, but this is starting to sound like you believe that your ideas and your ideas alone are worthy of consideration...

On that note, I will not have anything further to say to you if you respond...so the last word is yours if you wish to take it...

And I have a high regard for you ...

I am saddened that this is your take away ... because I mislike a bad idea of yours suddenly I am the only source of good ideas? Well, so be it ...

It is not that I am the only one with good ideas, it is that this one is not a good idea or a solution.

Historically there have been several proposals for repairing this issue, and there have been several ideas about fixes that I could support: mine, with the calibration concepts that address more than just the credit problems; the original system, fixed to make it work; and a couple of others that I would have to chase about to nail down the specifics again.

If you do come up with an idea that is worthy of consideration I would be more than happy to support it ... this one just isn't it.

Oh, and I do agree that DA and company should not be trusted with a wet paper bag full of garbage. BUT, like it or not, for the moment DA has a stranglehold on BOINC development, which is one of the reasons I have been so critical of his "leadership" ... I mean, face it, even if your idea was a good one it would not be implemented, even if you wrote the code to make it work ... he would not allow it to be incorporated into the baseline ... I considered writing a rebuttal letter to his latest submission for a grant, but I knew that he does not have the intellectual honesty to include such in his package, so in the end I did not ... the best thing for the long-term health of BOINC would be for it to get out from under his grip ... then the projects would have to cooperate in creating a rational development process ...


So, just to chime in here with a question: what is preventing a fork of the BOINC project itself? If the credit parity and the critical but ignored bugs could be fixed, projects and stat sites could move over to the BBOINC (Better BOINC) project. DA's grip would be gone and he would be free to be king of his BOINC project and its only citizen. There would have to be a compelling benefit to the science projects to reach critical mass, but this would be much like a merging of ideas: a return to the original ideas of BOINC, and a removal of responsibility from DA.
ID: 31368
Profile Paul D. Buck

Send message
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 31369 - Posted: 23 Sep 2009, 8:52:49 UTC - in response to Message 31368.  

So, just to chime in here with a question: what is preventing a fork of the BOINC project itself? If the credit parity and the critical but ignored bugs could be fixed, projects and stat sites could move over to the BBOINC (Better BOINC) project. DA's grip would be gone and he would be free to be king of his BOINC project and its only citizen. There would have to be a compelling benefit to the science projects to reach critical mass, but this would be much like a merging of ideas: a return to the original ideas of BOINC, and a removal of responsibility from DA.

Nothing, and it has been done ...

The problem is that there aren't enough people interested to make this a viable option. I would love to help, but for one I am not a C or even a C++ coder ... for another, do you track BOINC for compatibility, or start to diverge to make for a better system? Both choices have advantages and disadvantages ...

One of the realities and traps we are in is that the actual user base is quite small. Though some 1.7M people have signed up for a BOINC account, there are only about 280,000 active users ... and of that number, most are just interested in, and capable of, running the client ...

I am not saying that it could not be done ... but just as a practical matter you would need 10-20 really good people to make a go of it ... and they would have to be really into it ... it is not just that BOINC is so big, but that there is so much that is messed up ...

And those projects that are heavily invested in BOINC? It would take a powerful argument to cause them to shift. And the likely end result, though this is not necessarily a bad thing, is two systems that would gradually diverge, with people falling into one camp or the other. Much like the Folding@Home fanatics who argue that their system is better than BOINC, and the BOINCers who argue the reverse ...
ID: 31369
Chris S
Avatar

Send message
Joined: 20 Sep 08
Posts: 1391
Credit: 203,563,566
RAC: 0
Message 31370 - Posted: 23 Sep 2009, 10:09:40 UTC

If I remember correctly, Seti Classic started in 1999, Boinc started in 2002, and Seti migrated to Boinc in 2004. Seti proved to Berkeley that the concept of distributed computing utilising the general public was a viable proposition, and the point of Boinc was to consolidate and build upon that, and develop a common infrastructure for any project to use.

Looking at it from that point of view, as an overall umbrella, Boinc has been quite a success. The problem, it seems to me, is that having proved it all works, the developers have lost interest in the fine tuning needed to finish it all off.

ID: 31370
Profile Paul D. Buck

Send message
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 31394 - Posted: 23 Sep 2009, 20:10:38 UTC - in response to Message 31370.  

If I remember correctly, Seti Classic started in 1999, Boinc started in 2002, and Seti migrated to Boinc in 2004. Seti proved to Berkeley that the concept of distributed computing utilising the general public was a viable proposition, and the point of Boinc was to consolidate and build upon that, and develop a common infrastructure for any project to use.

Looking at it from that point of view, as an overall umbrella, Boinc has been quite a success. The problem, it seems to me, is that having proved it all works, the developers have lost interest in the fine tuning needed to finish it all off.

I agree with the first part, especially the point of it being quite a success.

I differ on the last point. I do not think that they have lost interest; it is more that, as gifted amateurs, they managed to get this far and think that their judgement is superior to everyone else's. Some would also say that I am just as bad, thinking that I always know more than everyone else, making me equally arrogant. Which is a fair point, except that we have never really tested that out ...

What we do know is that UCB, or specifically Dr. Anderson as the HMFICC is very willing to take the credit for all the good in BOINC and not at all willing to take the blame for that which is wrong. Nor is he/UCB (however you wish to allocate blame/credit) that willing to take outside advice ... again me aside, he does not even listen well to people like JM VII on the resource scheduler ...

So, I don't think it is a lack of interest in finishing it off ... it is a lack of interest in listening to the participant community or even the project types ... there is a recent example where I suggested a setting for projects like CPDN and Orbit with long running tasks and one of the heavies from CPDN endorsed the idea and DA said no ... so here we are still manually micromanaging the downloading of work from CPDN so that we don't get 10-20 CPDN tasks (I got 8 some time ago) when what we want and should have is only one ...

Like I said, if it was just me they were not listening to, that would be fine ... I could easily live with that ... but they listen to no one ...
ID: 31394
Profile banditwolf
Avatar

Send message
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 31396 - Posted: 23 Sep 2009, 20:21:19 UTC - in response to Message 31394.  

This could be where the needed improvements to the Boinc Manager get made and distributed by outside sources such as Crunch3r and others.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 31396
Chris S
Avatar

Send message
Joined: 20 Sep 08
Posts: 1391
Credit: 203,563,566
RAC: 0
Message 31409 - Posted: 23 Sep 2009, 22:01:15 UTC

This could be where the needed improvements to the Boinc Manager get made and distributed by outside sources such as Crunch3r and others.


Well, if people developed 3rd party versions of the Boinc Manager in the way you suggest, would Berkeley allow them to download work? Would they be blocked?
ID: 31409


©2024 Astroinformatics Group