Message boards :
Number crunching :
Credit lowering
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
Top-posting this, because it's no use going point-by-point... As I said, you complain about being so readily dismissed, yet your post just sweeps aside nearly everything I said, very quickly and without much thought... Perhaps CP's explanation of how things are tightly grouped together here will help, or perhaps not... I would suggest that David Anderson isn't the only one who isn't open to new ideas... Instead of babies and bathwater, it seems as though someone is willing to go down with the sinking ship... Hmm, yes, well ...
Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0
Tested the CPU version 0.20_X87 against 0.19_SSE on MP2200: Strange thing, that. I've tested it only on a Phenom up to now, and the 0.20_x87 app is about 19% faster than the old 0.19_SSE variant on that CPU. But I just started it on an AthlonXP 1800+, and the 0.20_x87 really takes twice as long as the 0.19_SSE app. Really strange. The SSE variant shows about the expected scaling between the AXP and the Phenom, but the x87 version completely chokes on an AthlonXP. Don't ask me for a sensible reason. I will see how the Microsoft compiler does compared to the Intel compiler used so far. My ad hoc assumption for now would be that the Intel compiler simply slows down the AthlonXP, but doesn't do so for newer CPUs. Edit: Just using the MS compiler proved to be unsuccessful. It is a bit faster than the Intel-compiled version on the AthlonXP (13%, not the missing factor of two) and 5% slower on the Phenom. But maybe it is just some accuracy option I've chosen slowing it down so much. Relaxing it a bit doesn't reduce the Microsoft-compiled version's runtime on the Phenom by more than 5% or so (the MS compiler is tied with Intel's now), but on the AthlonXP it gets A LOT faster, exactly twice as fast as before. Don't ask me why exactly this happens. In the end, that version ends up 13% faster on the AthlonXP than the old 0.19 version, completely in line with all the other 0.20 compilations. The sacrifice is a tiny bit of accuracy. It still generates the exact same output as all the other versions on the short WUs I used to test it on, so it should be okay. But a slight bitter taste remains, as with larger WUs there could be some (minor) deviations from the output of the other 0.20 versions. But I will probably provide you with the new version, as it should still be significantly better than the 0.19 version. Just let me package and upload the new version.
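[Editor's note: the accuracy/speed tradeoff described above is the classic effect of relaxed floating-point modes (for instance, an "accuracy option" such as MSVC's /fp:fast versus /fp:precise), which let the compiler reorder operations and change low-order bits of the result. A minimal Python sketch of the same phenomenon, purely illustrative and not MilkyWay code:]

```python
import math

# Illustration only (not MilkyWay code): the order in which floating-point
# additions are performed changes the rounding error. Relaxed compiler modes
# exploit exactly this freedom for speed, which is why output can deviate
# slightly on longer WUs.
vals = [0.1] * 10

naive = sum(vals)          # plain left-to-right IEEE-754 additions
exact = math.fsum(vals)    # correctly rounded (compensated) summation

print(naive == 1.0)        # False: rounding error has crept in
print(exact == 1.0)        # True
```

On short inputs both paths may agree bit-for-bit, which matches the observation above that short test WUs produce identical output while larger WUs might not.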
Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0
Top-posting this, because it's no use going point-by-point... Of course I will defer to CP's expertise... it was my understanding that the block architecture of the two GPU systems was significantly different enough to allow for the generalization I made. In the case of MW I was clearly wrong. Again, my understanding was that the way the two systems laid out the data and allowed its manipulation was too different to allow for virtually identical code ... On another point he indicates a delta of 5%, which is not huge, but it is also not completely insignificant ... but I would still note that unless the code base contains no significant variations in the layout of the operations, there is no way to necessarily do a one-to-one comparison of a GPU FLOP to a CPU FLOP ... which was the only point I was trying to make ... As to sweeping aside your arguments, well, I could make the same claim, but won't ... I actually spent several hours in the bath thinking about this, and prior to this day I have been thinking about award systems since the days of SETI Classic (I actually also tried to participate in CPDN Classic, as well as a couple of other DC projects of the day). So, how many years do I have to think about a system that I don't think will work before I have thought on it enough? And during the BOINC Beta I spent a couple of months doing nothing but thinking about and writing about the benchmarking and credit system and debating these issues ad nauseam ... As best I can see, all you are suggesting is a variation of one MW task equals one credit, in isolation from all other projects. There is no significant difference if it is 100 credits or any other number if it is in isolation from all other projects. As far as this being a new idea, well, it isn't ... it is as old as DC, where each DC system has its own awards that are completely isolated from all other systems. From the WCG system rooted in UD, to Folding@Home, to DIMES, to you name it ...
BOINC was supposed to be different. Not 50 isolated projects with isolated goals and accounting and random assignments of awards ... but one system where the participant could join multiple projects and would have a fair assignment of award for effort provided, regardless of the project to which that effort was provided. Just as I am wedded to the original design goal that you should be able to run multiple projects on BOINC; yes, I am wedded to a fair, equitable, and cross-project credit system. Back to part of your new and improved system: your proposal is that the stat sites should be the ones that would create the "exchange" rates ... so now we take unconstrained project awards with multiple multipliers and pass the numbers on to multiple stat sites that will each decide how to report and coalesce the values to come up with cross-project equivalence? How does that make things simpler and easier to understand? Now we also need stat site correction factors, because sure as there are little green apples the exchange rates will be different across the stat sites ... But, the long and short of it is that it looks like you are not going to convince me, and vice versa ... you think I am too stuck in the sand to consider "new" ideas (when it isn't, IMNSHO) and I think you are corrupted by the current BOINC culture, where the answer is to run from problems rather than to apply some engineering and solve them ...
Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0
Actually, I did find the original design proposal/criteria/transition plan. A portion (without the formatting; I'm not going to spend the time):
There is a link out to "Computation Credit"
Lastly, though not explicitly stated in either of these, part of much of the glossy advertising was that with UCB doing the "heavy lifting" on the BOINC architecture, projects would be freed from having to mess with the internals of BOINC and could concentrate on the science they are trying to do ... yet both MW and Collatz are spending time (or others are, in support of the projects) trying to get ATI support to work ... The real problem is not that I don't think on this enough; it keeps me awake at night ... :(
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
Not only one MW task in isolation, but 1 SETI, 1 Einstein, 1 of any project. The only tricky ones would be CPDN or Rosetta (user-variable runtime), but the beauty of it is that it is no longer a BOINC function to monitor and balance credit. All credit decisions become the responsibility of each individual project. If one project wants to go wild and offer 1 googol credits for 1 task, that's fine and dandy, because their task no longer has any officially implied equivalence in value to a task in another project. So long as you keep wanting to make tasks of different projects equivalent to each other, you're not going to "get" what I'm saying... :(
We've spent how many years with this system, only to have the same thing happen over and over and over??? There comes a point in time where continually clinging onto a "in the beginning"-type of ideology is just going to keep things stagnant.
:sigh: You can have that and more, but first you need to let go of trying to force it at the BOINC-wide level. Let each project handle their own credit granting, then let an independent entity or group of independent entities work out exchange rates / normalization tables.
Shaka, when the walls fell... The whole point is that if you push it out to the stat sites, the constant bickering is THEIR BABY. The BOINC dev team no longer has to concern themselves with it. The projects will only have to concern themselves with their own internal metrics, not keep track of the minutiae of other projects and whimsical "self-calibrating" floats of cobblestones. Now we also need stat site correction factors because sure as there are little green apples the exchange rates will be different across the stat sites ... That's no longer the problem of BOINC or the ******SCIENCE PROJECTS******* When you figure out that I'm sick and tired of scientists trying to mediate squabbles over intangible "warm fuzzy" credit issues, diverting their time and energy away from actual science, let me know... -Brian
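[Editor's note: the division of labor proposed above can be sketched in a few lines; every project name, number, and function below is invented for illustration, not taken from any real statistics-site code. Each project grants whatever credit it likes for a canonical task, and a third-party stats site derives its own exchange rates against a reference project of its choosing:]

```python
# Hypothetical sketch of the stats-site proposal. Projects grant credit in
# isolation; a stats site converts everything into one project's units.
credit_per_task = {        # credit each project grants per canonical task
    "MilkyWay": 100.0,
    "SETI": 25.0,
    "Einstein": 220.0,
}

def exchange_rates(per_task, reference):
    """Rate that converts one credit of each project into reference units."""
    base = per_task[reference]
    return {project: base / granted for project, granted in per_task.items()}

rates = exchange_rates(credit_per_task, "SETI")
# A user's 400 MilkyWay credits expressed in SETI-equivalent units:
normalized = 400.0 * rates["MilkyWay"]   # 400 * (25/100) = 100.0
```

Different stats sites could pick different reference projects or rate formulas, which is exactly the "stat site correction factors" objection raised in the reply above.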
Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0
That's no longer the problem of BOINC or the ******SCIENCE PROJECTS******* And that is the point you miss. You conflate the UCB team with scientists doing research. Heck, they are not even researching BOINC and how it works ... Though DA has roots in the scientific community, he is no longer acting as a scientist doing research. A critical point you ignore. UCB as developer of BOINC is not involved in science. Not their job. Their job is to develop BOINC, one part of which is the "warm fuzzy" stuff. I agree it is not the responsibility of the projects to do BOINC development, and it never should have been put on Travis and the other projects. UCB has continually abdicated their responsibility to do robust development, and thus they, the projects, have been forced into the breach. The main complaint I would have of your solution is that it is not a solution at all but, once again, an abdication of responsibility for a major design element of BOINC, throwing up our hands and begging someone else to clean up the mess ... In that the credit system is one of the core elements of BOINC, it is, has been, and still should be the responsibility of the BOINC development team. Following your argument to its logical end: the addition of ATI support increases the statistics, and that too should be pushed out to the stat sites to develop ... eventually UCB as the developers of BOINC will have no responsibilities at all ... a great comfort to them, I am sure ... Anyway, as I said before, you simply want to avoid the problem; I want UCB to man up and fix it, along with all the other problems they have been avoiding lo these many years ...
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
That's no longer the problem of BOINC or the ******SCIENCE PROJECTS******* You apparently missed the word "or" in my sentence... The remaining part of the sentence referred to, in the case of this project, Travis, Dave, and anyone else at RPI, not the people at UCB. In the case of Einstein, it would be referencing Bruce Allen, Bernd Machenschalk, etc... Only when it came to SETI would it have anything to do with the people at UCB. Again, dismissing out of hand... The projects would only be responsible for their own project. The BOINC developers would only be responsible for developing the BOINC software. It does put a bit more responsibility on the projects, as they would need to allocate more database space to house the credit tables, which would include categories such as "cumulative", "Generation 1", "Generation 2", etc... Each time a new application is released, the project would freeze credit for that generation, reset credit, and start a new generation, including figuring out the normalization factor for the current generation so that the cumulative bucket goes up appropriately. After that, all that is needed is for the projects to issue their statistics dumps like they do today, with whatever modifications are needed to give the statistics sites the ability to come up with the exchange rates for both current and cumulative... If one wanted to go really wild and crazy, projects could also issue dumps for each generation. Stat sites could then provide far greater detail on how a particular individual performed within each credit generation, or just go for the cumulative stats. Cumulative would be the direct equivalent to / replacement of the current BOINC-wide value. What I'm saying should actually be a statistician's dream, but you're fighting it tooth and nail...
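[Editor's note: the per-generation bookkeeping described above could look roughly like the sketch below; the class, field names, and factor value are purely illustrative and do not come from the actual BOINC database schema:]

```python
# Illustrative sketch of the generation scheme: when a project releases a
# new app, it freezes the current generation's credit and folds it into a
# cumulative bucket via a project-chosen normalization factor.
class CreditLedger:
    def __init__(self):
        self.cumulative = 0.0    # the "BOINC-wide equivalent" bucket
        self.generations = []    # frozen (total, factor) per app release
        self.current = 0.0       # credit under the current application

    def grant(self, credit):
        self.current += credit

    def new_generation(self, factor):
        """Freeze the current generation when a new app ships. `factor`
        converts this generation's credits into cumulative units so the
        cumulative bucket rises consistently across releases."""
        self.generations.append((self.current, factor))
        self.cumulative += self.current * factor
        self.current = 0.0

ledger = CreditLedger()
ledger.grant(500.0)                  # credit earned under the old app
ledger.new_generation(factor=0.5)    # new app released; old credit rescaled
ledger.grant(100.0)                  # credit earned under the new app
```

The extra database cost objected to in the reply below is visible here: every release adds a frozen generation row, and the project must choose each normalization factor itself.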
Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0
You apparently missed the word "or" in my sentence... The remaining part of the sentence referred to, in the case of this project, Travis, Dave, and anyone else at RPI, not the people at UCB. In the case of Einstein, it would be referencing Bruce Allen, Bernd Machenschalk, etc... Only when it came to SETI would it have anything to do with the people at UCB. Which are the very people I agree should not be in the business of doing credit nonsense. But the point you did not read is that I don't agree that it should absolve DA and Rom and anyone else core to the development of BOINC ... which is not doing any science at all ... these are the people I am referring to ... UCB is not SaH and SaH is not UCB ... Again, dismissing out of hand... Which you also keep accusing me of, when I am not dismissing it, or any other statements or arguments, "out of hand". I am pointing out the flaws in your logic. That you don't yet recognize the flaws does not mean that they are not there ... there are years of discussion about the one-credit-per-task idea which, I think we agree, is pretty much what this boils down to ... and other than the proponents, this idea has garnered virtually no support because of the complexities that, as you have noted, have to be added on for it to even begin to make sense historically... Face it, the problem started with a flawed design that UCB only made token efforts to correct and then dropped. It has been exacerbated by the deflationary modes put into the code, along with the development of GPU computing and multiple-core systems. The truth of the matter is that this problem, like many of the BOINC problems, could be solved fairly handily if UCB would take their thumb out and go to work in a disciplined manner. The projects would only be responsible for their own project. The BOINC developers would only be responsible for developing the BOINC software.
It does put a bit more responsibility on the projects, as they would need to allocate more database space to house the credit tables, which would include categories such as "cumulative", "Generation 1", "Generation 2", etc... Each time a new application is released, the project would freeze credit for that generation, reset credit, and start a new generation, including figuring out the normalization factor for the current generation so that the cumulative bucket goes up appropriately. Which design, as you just explained, puts the burden back on the projects. First for more space, second for updates each and every time they change the application. Not an insignificant burden. Instead of centralizing the credit fix, you have just made it a long-term maintenance task for each and every project in isolation, removing one of the benefits of BOINC: centralized development of the core features ... What I'm saying should actually be a statistician's dream, but you're fighting it tooth and nail... That I am not a statistician, though I like my credit scores as much as if not more than the next guy, does not mean that I cannot see that this proposal would not make the situation less complicated or more fair. It would, indeed, likely please some that are super hung up on stats, but the point of BOINC is to bring it into the reach of the common guy ... not stat freaks ... So, yes, I disagree with lots of bad ideas ... sometimes even my own ... and this is a bad idea ... it solves nothing, adds burdens where it should not, and does not make the system simpler or easier to understand and use ...
Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0
You apparently missed the word "or" in my sentence... The remaining part of the sentence referred to, in the case of this project, Travis, Dave, and anyone else at RPI, not the people at UCB. In the case of Einstein, it would be referencing Bruce Allen, Bernd Machenschalk, etc... Only when it came to SETI would it have anything to do with the people at UCB. DA, Rom, et al, have shown they're incapable of dealing with it. If a plumber has had 10 attempts to fix a leaky faucet in your home, but hasn't done it, do you insist that they keep coming until they fix it, or do you ask for your money back and hire someone else? There comes a point where you have to go with someone else, IMO...
They already have to worry about recalibrating to random adjustments from SETI. This puts the changes completely in their control, with metrics that they can be certain of; thus, in that regard, there is no material change from the current situation as far as the work required, plus an increase in the reliability of the data used for the recalibration. As for the DB space, they are already maintaining a cumulative table. All that would need to be added are the tables for each generation of applications. What I'm saying should actually be a statistician's dream, but you're fighting it tooth and nail... Credit, and comparing credit across projects, is about people who care about statistics... "The common guy" is either ambivalent to this situation or is upset because of the continual random lowering for the sake of lowering. Frankly, I like you Paul, I really do, but this is starting to sound like you believe that your ideas and your ideas alone are worthy of consideration... On that note, I will not have anything further to say to you if you respond... so the last word is yours if you wish to take it...
Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0
Frankly, I like you Paul, I really do, but this is starting to sound like you believe that your ideas and your ideas alone are worthy of consideration... And I have a high regard for you ... I am saddened that this is your takeaway ... because I mislike a bad idea of yours, suddenly I am the only source of good ideas? Well, so be it ... It is not that I am the only one with good ideas; it is that this one is not a good idea or a solution. Historically there have been several proposals for repairing this issue, and there have been several ideas about fixes that I could support: mine, with the calibration concepts that address more than just the credit problems; the original system, fixed to make it work; and a couple of others that I would have to chase about to nail down the specifics again. If you do come up with an idea that is worthy of consideration, I would be more than happy to support it ... this one just isn't it. Oh, and I do agree that DA and company should not be trusted with a wet paper bag full of garbage. BUT, like it or not, for the moment DA has a stranglehold on BOINC development. Which is one of the reasons I have been so critical of his "leadership" ... I mean, face it, even if your idea were a good one, it too would not be implemented, even if you wrote the code to make it work ... he would not allow it to be incorporated into the baseline ... I asked about and considered writing a rebuttal letter to his latest submission for a grant, but I knew in the end that he does not have the intellectual honesty to include such in his package, so in the end I did not ... but the best thing for the long-term health of BOINC would be for it to get out from under his grip ... then the projects would have to cooperate in setting up a rational development process ...
Joined: 8 Feb 08 Posts: 261 Credit: 104,050,322 RAC: 0
Tested the CPU version 0.20_X87 against 0.19_SSE on MP2200: Waited for a few finished (and validated) WUs, and the faster times are impressive! On the MP2200 I now see ~16% shorter run times with 0.20_SSE than with 0.19_SSE. *thumbs up*
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
For any one solution to have an effect would take all of the projects (and most likely the Boinc staff) making or adding the changes. To me, any change could only be in addition to the current system, since many like/want the credits. It would be nice to know, in comparison, how much actual work was done compared to other users. The closest thing now is the credits, which can be used within a project for a rough comparison. Credit lowering: I gave a couple of WUs a try yesterday. My XP P4 using SSE2 went from 80 to 75 min, about a 6% improvement. It is nice, but doesn't equal the large drop in credits. Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 15 Aug 09 Posts: 7 Credit: 218,896 RAC: 0
This is what I see: if I was them, I'd want the biggest, fastest computer out there, and anything that slows the WUs up gets less credit. Example: less than 24hr internet time, surfing the net, not a dedicated computer, up time, etc. They go from 74, 54, 39 depending on how much of it is theirs to use. The time it takes to do a WU is not what it's about, but the speed it takes to do it. Just my opinion, thanks.
Joined: 27 Aug 07 Posts: 915 Credit: 1,503,319 RAC: 0
David Anderson is trying to adhere to something that is fundamentally flawed, and so are you by following him. You people on the project side need to step up and tell him that he's got no clothes on... He sure doesn't. me@rescam.org
Joined: 9 Apr 09 Posts: 10 Credit: 117,669,581 RAC: 0
Frankly, I like you Paul, I really do, but this is starting to sound like you believe that your ideas and your ideas alone are worthy of consideration... So, just to chime in here with a question: what is preventing a fork of the BOINC project itself? If the credit parity and the critical but ignored bugs could be fixed, projects and stat sites could move over to the BBOINC (Better BOINC) project. The grip of DA would be gone, and he would be free to be king of his BOINC project and its only citizen. I mean, there would have to be a compelling benefit to the science projects to get critical mass. This would be much like a merging of ideas, a return to the original ideas of BOINC, and a removal of responsibility from DA.
Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0
So just to chime in here with a question, what is preventing a Fork of the BOINC project itself? If the credit parity and the critical but ignored bugs could be fixed, projects and stat sites could move over to the BBOINC (Better BOINC) project. The grip of DA would be gone and he would be free to be king of his BOINC project and its only citizen. I mean there would have to be a compelling benefit to the science projects to get critical mass. This would be much like a merging of ideas, return to the original ideas of BOINC and remove responsibility from DA. Nothing, and it has been done ... The problem is that there aren't enough people interested to make this a viable option. I would love to help, but for one I am not a C or even a C++ coder ... for another, do you track BOINC for compatibility, or start to diverge to make a better system? Both choices have advantages and disadvantages ... One of the realities and traps we are in is that the actual user base is quite small. Though some 1.7M people have signed up for a BOINC account, there are only about 280,000 active users ... and of that number most are just interested in and capable of running the client ... I am not saying that it could not be done ... but just as a practical matter you would need 10-20 really good people to make a go of it ... and they would have to be really into it ... it is not just that BOINC is so big, but that there is so much that is messed up ... And those projects that are heavily invested in BOINC? It would take a powerful argument to cause them to shift. And the likely end result, though this is not necessarily a bad thing, is two systems that would gradually diverge, with people falling into one camp or the other. Much like there are Folding@Home fanatics who argue that their thing is better than BOINC, and BOINCers who argue the reverse ...
Joined: 20 Sep 08 Posts: 1391 Credit: 203,563,566 RAC: 0
If I remember correctly, Seti Classic started in 1999, Boinc started in 2002, and Seti migrated to Boinc in 2004. Seti proved to Berkeley that the concept of distributed computing utilising the general public was a viable proposition, and the point of Boinc was to consolidate and build upon that, and develop a common infrastructure for any project to use. Looking at it from that point of view, as an overall umbrella, Boinc has been quite a success. The problem seems to me to be one where, having proved it all works, the developers seem to have lost interest in the fine-tuning needed to finish it all off.
Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0
If I remember correctly, Seti Classic started in 1999, Boinc started in 2002, and Seti migrated to Boinc in 2004. Seti proved to Berkeley that the concept of distributed computing utilising the general public was a viable proposition, and the point of Boinc was to consolidate and build upon that, and develop a common infrastructure for any project to use. I agree with the first part, especially the point of it being quite a success. I differ on the last point. I do not think that they have lost interest; it is more that, as gifted amateurs, they managed to get this far and think that their judgement is superior to everyone else's. Some would also say that I am just as bad, thinking that I always know more than everyone else, making me equally arrogant. Which is a fair point, except that we have never really tested that out ... What we do know is that UCB, or specifically Dr. Anderson as the HMFICC, is very willing to take the credit for all the good in BOINC and not at all willing to take the blame for that which is wrong. Nor is he/UCB (however you wish to allocate blame/credit) that willing to take outside advice ... again, me aside, he does not even listen well to people like JM VII on the resource scheduler ... So, I don't think it is a lack of interest in finishing it off ... it is a lack of interest in listening to the participant community or even the project types ... there is a recent example where I suggested a setting for projects like CPDN and Orbit with long-running tasks, and one of the heavies from CPDN endorsed the idea, and DA said no ... so here we are, still manually micromanaging the downloading of work from CPDN so that we don't get 10-20 CPDN tasks (I got 8 some time ago) when what we want and should have is only one ... Like I said, if it was just me they were not listening to, that would be fine ... I could easily live with that ... but they listen to no one ...
Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0
This could be where the needed improvements in the Boinc Manager will be done and distributed by outside sources such as Crunch3r and others. Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 20 Sep 08 Posts: 1391 Credit: 203,563,566 RAC: 0
This could be where the needed improvements in the Boinc Manager will be done and distributed by outside sources such as Crunch3r and others. Well, if people developed 3rd-party versions of the Boinc Manager in the way you suggest, would Berkeley allow them to download work? Would they be blocked?
©2025 Astroinformatics Group