Credit lowering

Message boards : Number crunching : Credit lowering

Cluster Physik
Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 30912 - Posted: 15 Sep 2009, 1:01:19 UTC - in response to Message 30910.  

I just want to emphasize this:

So what are we doing? We take our stock application, get a general idea of how many flops it will take from the code, (and now after the credit change) we're applying that to the supposedly standard credit multiplier, modified for double precision work.

What do you suggest we should do?
ID: 30912
Profile banditwolf
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 30915 - Posted: 15 Sep 2009, 1:53:17 UTC - in response to Message 30910.  


If we never lowered our credit from when the project first came out, each WU would be at about 20,000 credit. That's pretty ridiculous.


I very much doubt that. Also, the amount of data processed per WU is hundreds of times more than it was initially. Every time a slight or large gain was made in speeding up the app, the credit was chopped within days. Those were all arbitrary credit values at the time. Now the credit is based on a specified amount per calculation that DA told you to use. It can't be just right and too much at the same time. I give it another few months and another credit cut will happen, because too much credit is still being given out.

They're only stupid credits; they are free. It isn't like you lose money giving them out.

One possibility for a better comparison between projects is to base the credits on each calculation done. That would be a better indication of how much work has been done.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 30915
Profile Dan T. Morris
Joined: 17 Mar 08
Posts: 165
Credit: 410,228,216
RAC: 0
Message 30916 - Posted: 15 Sep 2009, 1:57:21 UTC

ID: 30916
Profile Dan T. Morris
Joined: 17 Mar 08
Posts: 165
Credit: 410,228,216
RAC: 0
Message 30917 - Posted: 15 Sep 2009, 2:00:37 UTC - in response to Message 30916.  

196) Message boards : Number crunching : credit change -- 2/16/2009 (Message 12196)
Posted 205 days ago by Profile Travis

I hope GPU won't be penalized. A crunched WU is worth the same credit whether crunched by stock, optimized or GPU. If you can turn 'em around quicker, you get credits quicker, be it because you have a faster machine or a number of machines, or a way of crunching them quicker. And optimized/GPU shouldn't be throttled either, IMHO.



While the credit for the workunits may go up or down as a whole (right now it may be a little low but I want to see how things work out), the GPU app will get the same amount of credit for a workunit as anything else because they're doing the same amount of work. That's going to be our policy on this one, so you don't have to worry about GPUs being specifically targeted for a credit reduction.

The workunits we're running represent a deterministic amount of work, so they will all be awarded the same amount of credit regardless of what's crunching them.
ID: 30917
Profile Dan T. Morris
Joined: 17 Mar 08
Posts: 165
Credit: 410,228,216
RAC: 0
Message 30918 - Posted: 15 Sep 2009, 2:01:25 UTC
Last modified: 15 Sep 2009, 2:15:56 UTC

Methinks the GPU has been targeted... hmm.
ID: 30918
YoDude.SETI.USA [TopGun]
Joined: 29 May 09
Posts: 37
Credit: 34,016,951
RAC: 0
Message 30919 - Posted: 15 Sep 2009, 2:17:15 UTC

Well, I guess I might as well throw in my two cents as well.

My personal feeling is one of... why lower the credit? Why not have all the other projects raise theirs to fall more in line with MW?

I see it somewhat like this...if you have a fast computer with all the bells and whistles, you can crunch a crap load of work in no time at all (figuratively speaking anyway). If you have a slow dog computer, then you get crappy credit for endless hours of work being done. Generally, this is the way it works, which is all good and fine.

What this means to me is that the ratio of credit across all projects should (and likely needs to) be standardized, based on the throughput of WUs versus the amount of time spent on any project. That would level the playing field for everyone on any project.

It also means that, because people are credit hungry, some will leave and some will stay. I think people all have a basic competitive nature, and when they do something, practically anything, that gives them some form of reward, they tend to strive harder and do more to get an edge over others doing the same thing. Tiger Direct knows that I've spent $1200 on a new Core i7 box just for this reason: more credit as fast as possible. There are many people running "farms" of boxes just to try to get to the top of the hill and maybe even take down those Big Guys with a gazillion credits.

My point is that a lot of people will tend to go wherever they can get the most credit, whether or not they think the particular project they abandon or join is worthwhile in any other sense.

This may sound as though people in that competitive mode are heartless towards anything but the credit. Some may see that as true, but guess what?! The people going for the highest-paying credit projects are still doing whatever project they join the favor of crunching to get the work done. And though it may not be MW or SETI, they will find a niche somewhere that makes them happy. With that in mind, it really doesn't matter what project they are doing so long as it makes them happy and the work gets done. The heartless people are the ones that get pissed off over everything and quit BOINC altogether because of something like a lowering of credit that displeases them to no end.

People that are credit hungry (like me) also tend to spend money on faster and more computers to get the job done. This is overall a good thing for everyone. It stimulates the economy a little (computer purchases, the shippers that deliver the computers, and you know the electric company just loves it; ISPs aren't complaining either, and don't forget the projects are getting worked on more and more because of it), so it's all a good thing overall, I think. If nothing else, you get a kickass computer to play with!!!

Personally, I like WUs that go fast and don't take FOREVER to do. This is one of the reasons I chose MW as a place to get some work done. I have another box crunching GPUGrid, but to me those WUs are so painfully long and time-consuming that I can't bear to even look at the manager screen other than to simply make sure the project is still running properly. Recently they nerfed their project, not by lowering the credit, but by making the WUs longer. I was pissed, but I didn't leave the project and the work is still getting done.

MW pisses me off with this, but guess what?...I'm still here :)

At least, that's my two cents worth.


Yo-
ID: 30919
Cluster Physik
Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 30921 - Posted: 15 Sep 2009, 2:56:39 UTC - in response to Message 30915.  
Last modified: 15 Sep 2009, 3:00:08 UTC

If we never lowered our credit from when the project first came out, each WU would be at about 20,000 credit. That's pretty ridiculous.

I very much doubt that.

I don't.
The current ATI app on an HD4870 calculates about 15,000 times (yes, fifteen thousand times) faster than the old 1.22 version (which was already a bit faster than the very first one) running on a 3GHz Core2.
And even the CPU apps got faster by a factor of several hundred. As the CPU apps still pay roughly the same per day as those old ones (a factor of 2 wouldn't matter for the argument), you can work out what the current WUs would be worth in those old credits. 20,000 is quite a good guess, I think (50 credits now, times a factor-400 speedup).

Also, the amount of data processed per WU is hundreds of times more than it was initially. Every time a slight or large gain was made in speeding up the app, the credit was chopped within days.

All reasons that make Travis's figure of about 20,000 "old credits" more likely, don't you think?

I give it another few months and another credit cut will happen

That may happen if/when new optimizations are implemented, reducing the amount of work necessary to process a WU.

One possibility for a better comparison between projects is to base the credits on each calculation done. That would be a better indication of how much work has been done.

That is a really good idea! Why has nobody thought of it before?
Wait! That's exactly how the credits are derived here at MW (and at SETI and GPUGrid, too)!
One FLOP is a floating point operation, i.e. one calculation. For each trillion double precision calculations (1 TeraFLOP = 1,000,000,000,000 floating point operations) you get 5.4 credits here. For each trillion single precision operations at SETI or GPUGrid you get 2.7 credits. It is a really simple system!
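To illustrate the arithmetic (just a sketch; the function, the dictionary keys and the example FLOP count below are invented for illustration, only the 5.4 and 2.7 multipliers come from the figures above):

[code]
# Sketch of the per-FLOP credit figures quoted above: 5.4 credits per 10^12 double
# precision operations here, 2.7 per 10^12 single precision operations at SETI/GPUGrid.
# Purely illustrative; these names and the example task size are not from any project code.
CREDITS_PER_TERAFLOP = {"mw_double": 5.4, "seti_single": 2.7}

def credit_for(flops, scale):
    """Return the credit awarded for a given number of floating point operations."""
    return flops / 1e12 * CREDITS_PER_TERAFLOP[scale]

# A hypothetical task needing 9.26 * 10^12 double precision operations
# would be worth about 50 credits at the MW rate.
print(round(credit_for(9.26e12, "mw_double"), 1))  # -> 50.0
[/code]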
ID: 30921
Profile The Gas Giant
Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 30923 - Posted: 15 Sep 2009, 3:06:26 UTC - in response to Message 30912.  
Last modified: 15 Sep 2009, 3:10:34 UTC

I just want to emphasize this:

So what are we doing? We take our stock application, get a general idea of how many flops it will take from the code, (and now after the credit change) we're applying that to the supposedly standard credit multiplier, modified for double precision work.

What do you suggest we should do?

I think what you are doing is fair enough. MW was overpaying. Cross-project parity really should be an aim of the BOINC devs.

You have no argument from me regarding dropping the multiplier to 5.4, even though it hurts; at the time my RAC was still going to be over 120,000. OMG, I remember the days when getting a RAC of 100 was considered OK, and when I hit 1,000 I thought that was great!

LOL, I remember working for many months to get a total credit of 34,000 on Predictor@home and now I get that in a few hours.

Reality check here please complainers!

Live long and BOINC!

Paul.
ID: 30923
Profile banditwolf
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 30924 - Posted: 15 Sep 2009, 3:10:06 UTC - in response to Message 30921.  

If we never lowered our credit from when the project first came out, each WU would be at about 20,000 credit. That's pretty ridiculous.

I very much doubt that.

I don't.
The current ATI app on an HD4870 calculates about 15,000 times (yes, fifteen thousand times) faster than the old 1.22 version (which was already a bit faster than the very first one) running on a 3GHz Core2.
And even the CPU apps got faster by a factor of several hundred. As the CPU apps still pay roughly the same per day as those old ones (a factor of 2 wouldn't matter for the argument), you can work out what the current WUs would be worth in those old credits. 20,000 is quite a good guess, I think (50 credits now, times a factor-400 speedup).

And do you really think Travis would give 20,000 credits/WU? I think not. That is what I was talking about.

Also, the amount of data processed per WU is hundreds of times more than it was initially. Every time a slight or large gain was made in speeding up the app, the credit was chopped within days.

All reasons that make Travis's figure of about 20,000 "old credits" more likely, don't you think?
It may calculate out to 20k+ for the current WUs with the original credits. And again, do you think Travis would make the current WUs worth 20,000? I think not.

I give it another few months and another credit cut will happen

That may happen if/when new optimizations are implemented, reducing the amount of work necessary to process a WU.

That I don't have a problem with, as long as it is justified. But they keep saying 'too much credit, it won't happen again' and then still lower them more.

One possibility for a better comparison between projects is to base the credits on each calculation done. That would be a better indication of how much work has been done.

That is a really good idea! Why has nobody thought of it before?
Wait! That's exactly how the credits are derived here at MW (and at SETI and GPUGrid, too)!
One FLOP is a floating point operation, i.e. one calculation. For each trillion double precision calculations (1 TeraFLOP = 1,000,000,000,000 floating point operations) you get 5.4 credits here. For each trillion single precision operations at SETI or GPUGrid you get 2.7 credits. It is a really simple system!

[sarcasm]Really?! Gee, I had no idea.[/sarcasm] Why don't the other projects switch to that?
ID: 30924
Profile Dan T. Morris
Joined: 17 Mar 08
Posts: 165
Credit: 410,228,216
RAC: 0
Message 30925 - Posted: 15 Sep 2009, 3:23:50 UTC

The operative words from Travis:

That's going to be our policy on this one, so you don't have to worry about GPUs being specifically targeted for a credit reduction.




But they are being targeted. The faster you make them go, the more the credits will be lowered.



ID: 30925
Profile Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 30927 - Posted: 15 Sep 2009, 3:59:05 UTC - in response to Message 30910.  
Last modified: 15 Sep 2009, 4:00:01 UTC



So what are we doing? We take our stock application, get a general idea of how many flops it will take from the code, (and now after the credit change) we're applying that to the supposedly standard credit multiplier, modified for double precision work.



This actually seems fair enough, but lowering the credit over and over again is not how you gain friends.
When the next truly ATI-supported project comes along that works and is stable, there will be an exodus from your project because of the way we are treated.

I have a PM from you from months back in which you said your credit determination was based on comparing the stock MW app to the stock SETI app.

Now after some months you have gone back and modified that position.

I propose that, stock app to stock app, the credit per hour was nearly the same. Now, however, I can only suppose that SETI will be higher.

But really, why would you want to compare your credit to SETI? Really? Why? Because that is just the way it is done?
Is this a science project or a DA project? Just asking.
ID: 30927
Aurora Borealis
Joined: 13 Sep 09
Posts: 20
Credit: 5,662,415
RAC: 0
Message 30928 - Posted: 15 Sep 2009, 4:36:05 UTC - in response to Message 30927.  
Last modified: 15 Sep 2009, 4:47:29 UTC


I have a PM from you from months back in which you said your credit determination was based on comparing the stock MW app to the stock SETI app.

Now after some months you have gone back and modified that position.

I propose that, stock app to stock app, the credit per hour was nearly the same. Now, however, I can only suppose that SETI will be higher.

But really, why would you want to compare your credit to SETI? Really? Why? Because that is just the way it is done?
Is this a science project or a DA project? Just asking.

I would like to point out that the 'stock' SETI apps spend months being optimized in beta testing before being released into the wild. That does not appear to have been the case here.
Edit: The original multibeam app took 80+ hours to process a mid-range WU in early testing and under 5 hours when released.
ID: 30928
Brian Silvers
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 30929 - Posted: 15 Sep 2009, 4:51:10 UTC - in response to Message 30911.  


The best thing for all of you that head up the various projects to do is to freeze the current credit-system stats and start over with the "1 credit for 1 completed task" methodology, followed by official announcements by all of the projects as well as the BOINC staff that comparing projects is no longer supported by the BOINC platform and that the ONLY rankings that are supported are those within a single project. Each project is its own entity, sharing only a common software framework. From that point on, if BOINCStats, BOINC Combined Statistics, All Project Stats, Knights Who Say Ni, or any of the other stat sites decide that they want to try to make an exchange system, IT IS ON THEM AND NO LONGER ON YOU, THE PROJECT, TO TRY TO KEEP UP WITH...


But not all of our tasks take the same amount of work (compare 1 stream to 2 streams to 3 streams); does 1 credit for 1 completed task really make sense for that?



I think you're majorly missing my point.

If you no longer have to concern yourself with what some other project other than your own does, then you can award whatever you wish to award and it has no bearing on other projects.

The cause of the angst is the comparing of different calculations.

The solution is NOT to continually crank down the credit as David cranks down his over at SETI just so that you all can "match each other" with each lowering that David does. Yes, from time to time he really does just decide that credits "need" to be lowered.

Thus, the issue is that you keep matching a moving target over at SETI. This means that from time to time you will do a credit lowering for no reason other than that David says so. We (those of us who are long-term users) have been through this and have seen it done. You, personally, may not have seen it done. You'll get a nice sales pitch for a Utopian concept of parity, but every time you people that run projects buy into this, you are actually wreaking havoc on the statistical values over the long term, regardless of whether they are too high or too low, simply because they have been changed.

The whole argument supporting "Cross-Project Parity" is that David doesn't want credits to be an overriding reason why one project is selected vs. another project by us end users. What I'm saying is:

If you remove the concept of comparing BOINC-wide credit out of the BOINC team's hands and the hands of you people that run projects, then it no longer becomes your concern.

The only thing you will need to worry about, if your project has tasks with differing amounts of work, is basing the credit award accordingly inside your own project, without having to constantly worry about what another project is doing. For example, if your 3s tasks are 3x the work of the 1s tasks, then a 3s task counts as "3 completed tasks" when it is done. Yes, that's a bit different from "1 credit for 1 completed task", but not by much. It just means that you have a baseline task.

This is the same general concept of "cobblestones" and the reference machine, just without all the entanglements of 40+ projects and different apps / compilation methods, etc...
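As a rough sketch of that baseline idea (purely illustrative; the task names and work figures below are invented, not MilkyWay's actual numbers):

[code]
# Illustrative sketch of intra-project credit based on a baseline task.
# Task names and FLOP figures are hypothetical.
BASELINE_FLOPS = 1.0e12            # work in the baseline task

TASK_FLOPS = {                     # estimated work per task type (hypothetical)
    "1_stream": 1.0e12,
    "2_stream": 2.0e12,
    "3_stream": 3.0e12,
}

def completed_task_units(task_type):
    """A task with 3x the baseline work counts as 3 completed tasks, and so on."""
    return TASK_FLOPS[task_type] / BASELINE_FLOPS

print(completed_task_units("3_stream"))  # -> 3.0
[/code]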

To "fix" the "credit problem", the first step is stopping the comparisons between projects......
ID: 30929
Profile Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 30942 - Posted: 15 Sep 2009, 11:50:18 UTC
Last modified: 15 Sep 2009, 11:52:34 UTC

I honestly don't know what you want from us. People always complain about credit but never offer any suggestions. I personally don't have a solution.

You should be careful with statements like this. Some of us have made suggestions; this was only the latest in a long line that I have made, starting deep in the beta test, when we proved that the benchmark system so loved by UCB was fatally flawed.

And contrary to Brian's thought, there is no inherent reason that cross-project parity could not be made a reality. There is just no will to do so. Just as there is little will in the UCB universe for people to contribute to BOINC unless it is exactly congruent with what UCB (more specifically, DA) wants.

In the development world, BOINC is sometimes compared to Linux as a competing development model, but there is no comparison. In the Linux world there is discussion and debate about the feature sets that will be implemented in new revisions, and then the developers go to work. In BOINC, well, the wishes of the participant community have never been taken into account. Otherwise, some of the long-standing "critical" bugs would have long since been addressed (I think it is bug 6 in Trac, rated critical, that is once again trashing work in project X because project Y is not behaving; I proved that a couple of months ago with Drug Discovery, IBERCIVIS, and a couple of other projects).

All of the issues surrounding credit were hashed out and a number of competing proposals were made to solve the most egregious problems, and we were blown off ...

What is it, 5 years later? We still get blown off ...

In part because the projects don't push back to get this demotivating problem solved.

I don't think Folding@Home has had to change their award system, nor has WCG (which still has its internal "points", though they have added "badges" - though I will note that, because of the lack of UCB leadership, the badges at WCG have no relationship to the badges at YoYo ... but I digress)...

Anyway, the point is that suggestions have been made ... why was no work done? Because DA said he would not make the changes even if we improved the system ... because what we have is good enough ...
ID: 30942
Brian Silvers
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 30963 - Posted: 15 Sep 2009, 15:52:56 UTC - in response to Message 30942.  


And contrary to Brian's thought, there is no inherent reason that cross-project parity could not be made a reality. There is just no will to do so.


My point is that it would be such a major undertaking involving so many people that it is just not worth the effort when a far easier option is to remove the comparisons between projects out of anything that BOINC officially supports.

If you went the way I'm suggesting, the only thing a project has to do is maintain its own internal parity. If it has tasks that differ in the amount of work done, then all it needs to do is make sure there is no incentive to "cherry-pick" tasks.

As for the 3rd party stat sites, the argument from the CPP perspective would be that some stat site would boost the importance of one project over another in an attempt to draw users to that particular project. So what? Unless all of the stat sites went in on it, all you'd get is the bickering that goes on here in the project forums moved to the stat site forums.

What I'm suggesting is ultra-easy for BOINC (DA) and the projects, since the projects would no longer have to worry about exchange metrics. The BOINC development team can move on to fixing bugs in BOINC and/or providing new services (gah...social networking!). The projects can remain focused on the science.

While CPP could be done, the question to ask is: does it need to be done by BOINC and the projects themselves?
ID: 30963
Profile Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 30971 - Posted: 15 Sep 2009, 17:09:02 UTC - in response to Message 30963.  


And contrary to Brian's thought, there is no inherent reason that cross-project parity could not be made a reality. There is just no will to do so.


My point is that it would be such a major undertaking involving so many people that it is just not worth the effort when a far easier option is to remove the comparisons between projects out of anything that BOINC officially supports.

Forgive me for being hung up on original design intent. But cross-project credit parity and accumulation was one of the few original design intents for BOINC. Since the beta site is long gone and I can't find it in any time machine, I cannot demonstrate this well ... but ... one of the main points was to get away from one credit per task, because there are flaws in the historical comparisons there too ...

My first SaH Classic task took 32 hours ... 6 months to a year later I was running faster HW, but the tasks were taking about the same time because they had doubled the processing done per task. Yet each task counted the same ... I think there were two more doublings before I migrated to BOINC ... but ... that was one of the built-in flaws of counting tasks ... so how is that any fairer?

And, even in some of the projects, CPDN for example, there is a spread of task sizes ... do we have to track how many of which model? What of the tasks that are non-deterministic in size?

In this local case, what happens if and when MW changes the model to be more precise, less precise, or to do more or less processing?

This was the reason that the computing effort was to be measured against a standard machine definition, which was one of the things that has been junked by UCB ...

Now we have deflationary models and continual reductions in awards with little effort to address the other issues ...

I do agree that it is now a very hard problem, which is why I tried so hard, lo those many years ago, to get something done ... in part because as a developer I had painted myself into several corners like this ... sadly, DA and the rest of the UCB types and, ahem, apologists are not interested in listening to voices of experience if they are not saying what they want to hear ... sadly, sometimes the voices saying what you most don't want to hear are the voices you should be most attentive to ...

Case in point: I have raised issues with projects only to hear silence ... so I vote with my feet ... does that kill the project? No ... but it is my only option and I use it ...

Personally, I don't even think something like my calibration concept would really be all that hard, and were the work done it would be a drop-in for projects for the most part, and the upside would be that:

a) they would be able to stop fiddling with credit issues, and
b) they would have higher confidence in the systems used to produce their results...

The only hard part is getting DA to stop patting himself on the back and to start addressing the issues that plague BOINC with some engineering rather than hack-and-slash fixes...
ID: 30971
Brian Silvers
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 30976 - Posted: 15 Sep 2009, 18:54:11 UTC - in response to Message 30971.  


And contrary to Brian's thought, there is no inherent reason that cross-project parity could not be made a reality. There is just no will to do so.


My point is that it would be such a major undertaking involving so many people that it is just not worth the effort when a far easier option is to remove the comparisons between projects out of anything that BOINC officially supports.

Forgive me for being hung up on original design intent. But cross-project credit parity and accumulation was one of the few original design intents for BOINC. Since the beta site is long gone and I can't find it in any time machine, I cannot demonstrate this well ... but ... one of the main points was to get away from one credit per task, because there are flaws in the historical comparisons there too ...


You know how you bemoan being dismissed out of hand? Well...........

Too many people are hung up trying to make the "square peg" fit the "round hole", meaning "hung up on original design intent" instead of thinking of other ways...

If you change to single-project-based intra-project parity, all that is needed is a baseline task. If the project has more than one type of task, but the other tasks follow the same general scientific method, all that needs to be done is to adjust the credit reward to a multiple of the baseline's amount of work (FLOP counting is good enough). This way all tasks end up rewarding the same credit for the work performed, making it impossible for people to "cherry-pick". If a particular processor gets it done quicker, fine. If someone makes an optimized application and gets it done quicker, fine.

If a new application is released that has different science / different amounts of work, stats are frozen when the transition to the new application starts, and a new baseline is established. The project can keep a master table with the complete historical data, normalized across however many "credit generations" there may be, to come up with a "since inception" ranking inside that single project. Thus you'd have "current standings" as well as an "overall".

The only thing that needs discouraging is "cherry-picking", which is handled by what I mentioned above (a baseline value and a multiplier). However, if people are still ingenious enough to come up with ways to game the system, the projects should track the number of aborts by each host over a rolling 2-month window and set threshold values such that, if the threshold is passed, the host can no longer pick up work for a month. If multiple hosts from a single user do the same thing, then that user cannot pick up work for a month. One-strike rule. If it happens again, the user is permanently banned, including from the message boards. Sure, someone will create a fake account and come on the forums and rant when it happens, but that could happen today over any number of things ... so it's not really any different from today in that regard.
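A toy sketch of that abort-tracking idea (the window and suspension lengths are the ones suggested above; the threshold value and all names are made up, not anything a project actually runs):

[code]
# Toy sketch of the suggested policy: count a host's aborts over a rolling
# 2-month window and suspend work issue for a month once a threshold is passed.
from datetime import datetime, timedelta

ABORT_WINDOW = timedelta(days=60)   # rolling 2-month window
SUSPENSION = timedelta(days=30)     # no work for a month once tripped
ABORT_THRESHOLD = 50                # hypothetical limit

def work_allowed_after(abort_times, now):
    """Return the time from which the host may fetch work again."""
    recent = [t for t in abort_times if now - t <= ABORT_WINDOW]
    return now if len(recent) < ABORT_THRESHOLD else now + SUSPENSION
[/code]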

Instead of looking at the issue across the entire BOINC landscape, it is much easier handled at the individual project level, so long as it is established from the start that 1 task from "Project A" has no official relationship to 1 task from "Project B". If a 3rd party wishes to try to come up with exchange rate values, that's on them, not on BOINC and not on the project.

In other words, I'm telling you to notice the tree instead of the forest... ;-)
ID: 30976
Profile Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 31000 - Posted: 15 Sep 2009, 20:35:09 UTC

Hmm, yes, well ...

FLOP counts ... well, there are problems there too ...

Which FLOP gets counted how?

Just as an example ... the MW tasks are MW tasks are MW tasks ... yet how many FLOPS does it take? The CPU non-optimized application takes "x" FLOPS, the optimized apps take something less and the GPU apps are in a whole 'nother country ... and you can't necessarily compare apples to peaches there either because if the apps are truly matched to the GPU they will not necessarily have any relationship to each other ...

All I have been saying is that there is no need to jettison the original design intent. Yes, according to your lights it would be justifiable and perhaps worth it. Perhaps just as many of us disagree with your desire. Just because UCB is too lazy to solve the problem and too self-centered and controlling to let others solve issues does not mean the problem is unsolvable ...

In your approach no one would be able to trust the system either, and the accounting and conversions would become another nightmare, as unneeded as a Facebook interface to BOINC...

And just because you don't use the current system (flawed as it is) to measure and contrast progress vis-à-vis the various projects does not mean others do not ... flawed as they are, the stats are still the only tool I have to help me decide how to allocate work effort among the projects ... I grant you that the fact that the system is so flawed, and that there is no interest in fixing it, does not help, but it is the only yardstick I have ... so it is the one I will use ...

So, I see the trees of the 50+ projects I contribute to, and I look at the forest even though it is on fire ... so I could suggest that you do the same: look for the forest instead of only looking at the leaf on one tree ... :)

Another adage that comes to mind is about babies and bath water ... though I have to admit that the bath water has always sounded more attractive to me ...
ID: 31000
Cluster Physik
Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 31016 - Posted: 15 Sep 2009, 22:21:16 UTC - in response to Message 31000.  

I will just answer by explaining some technical details for the case of MW here.

Hmm, yes, well ...

FLOP counts ... well, there are problems there too ...

Which FLOP gets counted how?

My current flops count (look at the output of a 0.20 GPU task; it is different from the number in 0.19 and reflects the changes from 0.19 to 0.20, which mainly consist of using better optimized versions of sqrt, exp and so on) takes a very simple approach. I take the disassembly of the code which is run on the GPU and count the operations there. That means the code as written is passed through the CAL compiler (within the driver), which optimizes everything it can. After that, the optimized machine code is run through a disassembler (StreamKernelAnalyzer, downloadable from AMD) and I simply count the operations in the optimized machine code executed on the GPU.
The counting rules are quite simple: every double precision ADD or MUL is 1 operation, the same as FRACT, LDEXP and some other things I don't remember at the moment. Conversion instructions, integer instructions (for the control flow) and so on are not counted at all. Only simple double precision instructions doing a real operation/calculation are counted, each as one operation.
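A tiny sketch of that counting approach (an illustration only, assuming a plain-text disassembly listing; the double precision mnemonic spellings below are assumptions, not the actual CAL ISA names):

[code]
# Illustration only: tally "simple" double precision operations in a disassembly
# listing, ignoring conversions and integer/control-flow instructions.
# The mnemonics below are assumed spellings, not the real ISA opcode names.
COUNTED_DP_OPS = {"ADD_64", "MUL_64", "FRACT_64", "LDEXP_64"}

def count_dp_ops(disassembly):
    """Count each listed double precision ADD/MUL/FRACT/LDEXP as one operation."""
    total = 0
    for line in disassembly.splitlines():
        for token in line.replace(",", " ").split():
            if token.upper() in COUNTED_DP_OPS:
                total += 1
    return total
[/code]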

Just as an example ... the MW tasks are MW tasks are MW tasks ... yet how many FLOPS does it take? The CPU non-optimized application takes "x" FLOPS, the optimized apps take something less and the GPU apps are in a whole 'nother country ... and you can't necessarily compare apples to peaches there either because if the apps are truly matched to the GPU they will not necessarily have any relationship to each other ...

No, the code between the different versions is very similar. They really use (almost) the same number of operations. Only the stock app is slightly worse (i.e. does more operations), but this "slightly" means maybe 5% or so (maybe even less), nothing spectacular. The faster apps I provide don't take a different approach to the problem that saves a load of operations. They really do the same thing in the same way, if one neglects some minor differences in the 0.20 apps, which actually add a few percent over the stock app to improve accuracy somewhat (those changes may be incorporated into the stock app as well as CUDA once the project has reviewed them; they have known about it for a week now).

The reason my versions are faster is foremost a different compiler that is able to vectorize the hot loops within the code. If you compare the plain Win32 app in the package (using x87 FPU instructions) with the project-supplied stock app, you will see a much smaller difference (no vectorization kicks in).

Most of the changes to the code of the optimized apps are not about saving instructions (that was done a year ago); they are about tricking the vectorizer into vectorizing the loops (I don't use any intrinsics or assembler) without changing them in a way that would influence the results, and after that about getting the branch prediction to predict correctly or not need any prediction at all. Nothing there changes the calculated flops, only how fast they are calculated.

And the same is true for the GPU applications. They generally work with the same algorithm, using exactly the same lookup tables and so on as the CPU versions. The only difference is that for the integration the GPUs evaluate a lot of data points in parallel to get the speedup and to use all the parallel resources of a GPU. The CPU applications do the calculation at one point in space, finish it, combine that result with the previous points, and then start on the next one.

Parallelizing that algorithm is really easy. Both GPU applications (there is no difference between the ATI and CUDA versions in this respect) do those calculations for a lot of points but do not combine them directly (which would serialize the task). They write the results to graphics memory and combine them later in a second step. So you just postpone the combination step, but you still have to do it. You are not saving any operations; you are just trading parallelism against your memory footprint. That's all.
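In rough pseudo-code, the difference looks like this (a schematic sketch, not the actual MW kernels; the function and variable names are invented):

[code]
# Schematic contrast between the CPU-style serial loop and the GPU-style two-step
# scheme described above. evaluate_point() is a dummy stand-in for the per-point work.
def evaluate_point(p):
    return p * p  # placeholder workload

def integrate_serial(points):
    total = 0.0
    for p in points:                 # evaluate and combine one point at a time
        total += evaluate_point(p)
    return total

def integrate_two_step(points):
    # Step 1: evaluate every point independently (what the GPU does in parallel),
    # writing the partial results out instead of combining them immediately.
    partials = [evaluate_point(p) for p in points]
    # Step 2: combine the stored partials. Same number of evaluations and
    # additions as the serial loop; only the ordering and memory use differ.
    return sum(partials)
[/code]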

To sum it up, the flop counts of all application versions are very close together, and all versions use the same general algorithm (exactly the same for the hot loops, where the app spends 99% of its time).
ID: 31016
Len LE/GE
Joined: 8 Feb 08
Posts: 261
Credit: 104,050,322
RAC: 0
Message 31026 - Posted: 16 Sep 2009, 0:51:46 UTC

Tested the CPU version 0.20_X87 against 0.19_SSE on MP2200:

0.19_SSE - 3h 35m
0.20_X87 ~ 7h (stopped after 42m at 10%)

Back to 0.19_SSE
Any chance for a 0.20_SSE that runs in the time range of the 0.19_SSE?
ID: 31026