
Posts by Brian Silvers

41) Message boards : Number crunching : Server Crash November 10 (Message 33345)
Posted 18 Nov 2009 by Brian Silvers
Post:
For those speculating about when the project will be sending work again, I'm of the opinion that this project will replace hard drives and do whatever else is needed to get things going again before Cosmology can even figure out what's caused the BOINC Transitioner to stop...

:sigh:

If it weren't for an actual physical device sitting in space (the Planck spacecraft, orbiting the Sun-Earth L2 Lagrange point), I'd give up on that project...
42) Message boards : Number crunching : Server Crash November 10 (Message 33308)
Posted 16 Nov 2009 by Brian Silvers
Post:
Predictor...well...there were mistakes made on both sides of that issue...

What 'other' side was there? That project's lack of communication truly sucked.


Apparently you were not following the same saga (the hacker saga). Both sides caused the situation to be much worse than what it was originally...
43) Message boards : Number crunching : Server Crash November 10 (Message 33301)
Posted 15 Nov 2009 by Brian Silvers
Post:
Brian, I think you are quite right regarding expectations. You also are quite right in pointing out that there are other projects which are truly 'bad citizens' in the BOINC firmament. Cosmology -- as you noted, or even worse, Predictor.


Cosmology is not a "bad citizen". It's just that they are out of their depth and don't want to pay enough money to get an appropriately talented and dedicated administrator.

Predictor...well...there were mistakes made on both sides of that issue...
44) Message boards : Number crunching : Server Crash November 10 (Message 33299)
Posted 15 Nov 2009 by Brian Silvers
Post:
True enough -- and perhaps it is a case of too-high expectations -- as seemingly no new work and no news update for over three weeks would be OK with you.


That's a hypothetical situation, as that hasn't happened here, but again, those of us participating on a regular basis at Cosmology go through periods of 3-6 months without anything at all coming from the project except for tasks.

To further illustrate the problems there: even just to get those tasks, there are times where you have to babysit the download queue. Some new user freaks out at the amount of time the work takes (in some cases, more than 24 hours of CPU time) or the amount of memory the tasks use (700MB - 1GB per task, so with a quad core you're looking at up to 4GB), issues a project reset, and strands the tasks. Then, when the next person comes along to pick a task up after it hits the timeout, the file is not sitting on the server, so the download gets stuck in retry mode until possibly your whole download queue is full of retries. I've personally had to abort 10 transfers to get 1 task to work on. That's the most severe instance of that issue for me. Most of the time I only have to abort 3-5 transfers to get 1 task...


In my project management days, I learned that setting expectations is important, but if your baseline expectations are low, then I suppose that isn't as much of an issue.


The important thing is setting realistic expectations. Even then though, if a situation exists where the users, which in this case are us, do not tolerate any reevaluations of the estimated timeline, then that is not a good situation. That's what happened in the job I had, where I was told that I was required to work the weekends if I felt that I needed more time. I wasn't the only one told that either. Business users were allowed to change things in their requirements, all the way up to the point of User Acceptance Testing. If it meant that we had to work even harder, well, that was just the breaks...

That's what I see when I see several of the more demanding people here start ranting about how their needs are not being met...
45) Message boards : Number crunching : Server Crash November 10 (Message 33286)
Posted 15 Nov 2009 by Brian Silvers
Post:
Which would, based on your explanation of the process, have been excessively optimistic at that. I'm familiar with the procurement process; in my earlier life I was a lab administrator back at Yale, and later I was in materials management at corporations. My own sense is that if a prospective timeframe were offered (and I think providing that sort of information is a good idea), then something like two or three weeks (but we hope it is less than that) would have avoided setting unrealistic expectations.


If two or three weeks had been said, then this same kind of talk would've happened at 14 days if either nothing had happened or nothing had been perceived to have happened.

If you want to see truly poor management, go look around at Cosmology. There are issues which have not been fixed in over a year. Admins continually bungle SQL scripts. The server crash in February/March of this year still has not been completely "fixed", meaning there are issues newer than the year-old ones, so they're not even back to where they were before the crash. The scientist had some health problems, but not a single person came onto the forum or the main page to even mention it. He then came back, posted a handful of messages, then was gone again for another 5-6 weeks, and just this past week the transitioner failed. That's it... one piece of the BOINC server-side software. They've had 3 business days (Wednesday, Thursday, and Friday), as well as today, to get that going. Based on past experience, it won't get fixed until at least the middle of next week, and possibly not until Thanksgiving or later.


Again, MW has become a project where under performance to expectation has become increasingly the norm -- be it credit handling (where DA appears to have a significant influence), communications (which have become rather thin of late) or server reliability.


Let's see. We have an admission that Dave is no longer participating; Travis said that he is working on his thesis and that he has shown some scientists the basics of what to do; they have said in the past that they don't control the actual physical hardware; Travis just had to go to Spain for the BOINC Workshop; etc, etc, etc... Collatz goes through the same type of server problems, and Jon over there made a post at one point that demonstrated how demanding people are and what it would take (him quitting his job and working full time and then some on the project) just so that certain competitive people could have a hobby...

Maybe, just maybe, your expectations are a bit too high...

46) Message boards : Number crunching : Server Crash November 10 (Message 33282)
Posted 14 Nov 2009 by Brian Silvers
Post:
I think you may have the more realistic assessment here -- certainly the 1 or 2 days optimism NEVER HAD A CHANCE of happening.


The news announcement did not say "The hard drives will be here in 1 or 2 days...". It actually said that they would "hopefully" have the drives in 1 or 2 days.

There is likely a Purchase Order process that has to be followed inside the university. If the PO got signed, it probably was not signed until the 11th, so the order couldn't happen until at least the 11th. After that, it would depend on the availability of the drives from whichever vendor the order was placed with, and on the method of delivery chosen (ground, next day, 2 day, 3 day). An order shipped on the 12th for 2-day delivery via UPS or FedEx would be delivered on Monday, as you have to specify Saturday Delivery with both of them.

After shipment, depending on where the drives came from, the delivery itself could be problematic. USPS was not picking up or delivering packages on the 11th (Veterans Day), so a small backlog would've built up. I do not know about UPS or FedEx. Additionally, parts of the Eastern Seaboard were under threat of flooding and/or strong winds, which could've delayed air/truck transportation times.

Long story made short... there are multiple reasons why ordering hardware can take longer than 1-2 days. Should Travis not have mentioned a timeframe? Most definitely. At most he should've said "as soon as possible", but even that would likely have been translated into 1-7 days... Once any numeric timeframe is stated, people rigidly stick to it and expect it, even if there are genuine reasons why the timeframe slipped.

I saw it as a programmer. If we stated that we thought something would take 3 weeks, but once we got into it we realized it was more complex than we thought and we wanted another 1-2 weeks, we would get told that it had to be done in the original estimate, even if it meant we had to work 12-16 hour days (or longer) and not have the weekend off to make it happen. Yes, I really was told that one time... that if something wasn't done, I should consider the weekend to be regular working days... If it hadn't gotten done by the original estimate, it would not have been a "life or death" scenario, but it was treated as such...
47) Message boards : Number crunching : Cruncher's MW Concerns (Message 33149)
Posted 8 Nov 2009 by Brian Silvers
Post:

Travis already said and promised GPU WUs 100x longer months ago. He said they had more complex data that could be crunched, so yes, it would be scientific.

OK, I missed that. It would certainly make a radical difference.


Yes, it indeed would... That was the whole premise behind MW_GPU. The current tasks are still within the range of CPUs. If they moved you all off to the other project and did the more complex work, they could be getting a LOT more done. If they were concerned about faster turnaround here, they could give you all the 3-stream (3s) tasks as well, leaving the 1 and 2-stream tasks here.

But given the reality of rate of progress here in MW and the possibilities becoming a reality, Travis may as well say that this project is aiming to put the first donkey on Mars by the year 2012. Not meant to be a criticism of Travis, I'm just saying we can make the best of what we have and leave the pie in the sky for when it happens.


So, here's the choice... If the new hardware makes it to where site and work availability are at manageable levels for the project, should the project keep it that way, or increase the workload to yet again get to the point that we're at now, thus requiring even more new hardware?

If the project is happy with not having to babysit the servers as much and happy with the rate that the research is happening, then what exactly gives you the right to demand they do otherwise? Yes, I know that you are providing a service and you can stop providing services at any time that you choose, but to me that is not "being a team player". If the new server goes in and it makes their life easier and they want to keep it that way for a while, then you should be respectful of that.
48) Message boards : Number crunching : Cruncher's MW Concerns (Message 33131)
Posted 7 Nov 2009 by Brian Silvers
Post:
Cruncher's MW Concerns.

Lack of cached WUs on GPUs. With a WU taking 55 seconds or less, for my machine with 2 GPUs in my quady, that gives me just under 15 minutes of WUs cached. It'd be nice to have 30 to 60 WUs cached per GPU.


Sadly this has been a problem with the project since its inception. Due to what we're doing here, our WUs need a somewhat faster turnaround time, so chances are you're not going to be able to queue up too much work.

Also, with the server in its current struggling state, letting people have more WUs in their queue only slows it down further, so it's not something we can really change.

I was talking about the future.....once the new hardware is operational.


Wouldn't it be nice to just not have these continual struggles?

The answer isn't more of the same-size tasks you all get now. The answer still will be longer-running tasks for GPUs. It doesn't matter to me if they get the same credit per second as they do now; they just need to be on the order of 100 times longer...
49) Message boards : Number crunching : Cruncher's MW Concerns (Message 33087)
Posted 5 Nov 2009 by Brian Silvers
Post:
However, I still claim that there were WUs that ran 1.5x longer than others with the same credit grants, but I will never be able to prove it.


Such a defeatist attitude... ;-)

I have 3 reported tasks from my Pentium 4 right now. One took around 7250 seconds, while the other two took around 5100 seconds, all three getting 53.45 credits.

My average credit per day on that system when I stopped processing here a few days ago was around 800.

7250 + (5100 * 2) = 17450 total runtime seconds thus far.

24 * 60 * 60 = 86400 seconds in a day

86400 / 17450 = 4.9513

53.45 * 3 * 4.9513 = 793.94 ~= "around 800"
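Here's the same back-of-the-envelope estimate as a quick Python sketch, for anyone who wants to rerun it with their own numbers (it assumes nothing beyond the three runtimes and the 53.45-credit award above):

```python
# Rough credits/day estimate from three reported Pentium 4 tasks.
runtimes = [7250, 5100, 5100]        # seconds for the three tasks
credit_per_task = 53.45              # all three granted the same credit

total_runtime = sum(runtimes)        # 17450 seconds per batch of three
seconds_per_day = 24 * 60 * 60       # 86400

batches_per_day = seconds_per_day / total_runtime             # ~4.9513
credits_per_day = credit_per_task * len(runtimes) * batches_per_day

print(round(credits_per_day, 2))     # 793.94, i.e. "around 800"
```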

Yes, it may "stink" to get multiple of the longer running tasks, but there are also plenty of the shorter running tasks to average things out over the long term.


And just one small suggestion, before I drop this topic completely. It wouldn't hurt if Travis told us not only that there was an increase in runtime, but also by how much, and for which WUs.


Personally, I think he just meant that they were starting up 3s (3-stream) searches instead of 1- or 2-stream ones, which generally causes the server to be really sluggish. The runtime variation in the tasks has been there for weeks, if not months, so this is a tempest in a teapot...
50) Message boards : Number crunching : No Thankyou (Message 33072)
Posted 5 Nov 2009 by Brian Silvers
Post:
That said though, I don't think a server upgrade alone will suffice... The GPUs need more complex work so that they are not pounding away all the time...

Collatz just increased the task size by 50% to slow the activity there ... not sure if it is working or not, only time will tell ... and by working I mean reducing the server load ... I know the tasks are taking longer, though on my fastest the time only went from 9-10 minutes a task to ~15-17 minutes per ... still, it is an increase ... which will slow my hit rate ...


So, both projects are experiencing either the same or very similar issues. At some point maybe people will understand that projects simply are not prepared to deal with this at this point. They can get mad if they choose, but it won't change the reality of what is happening.

The current tasks are still well within the capability of my 5-7 year old CPUs. They can be handled with relative ease by more modern CPUs.

This is boiling down to a pure supply-vs-demand scenario. Demand is high, but supply is limited/low. When this situation happens, the "cost" is high. The problem is, if people keep ragging on the project just because they can't climb the BOINC leader standings as fast as they want, there is the risk that this project will simply decide there is less overhead in getting an internal cluster and running tasks on it. That stops the noise and expense of having to try to keep people happy... It might not be as fast as utilizing BOINC, but there could be a point where it is "fast enough"...

People should try to keep that in mind...
51) Message boards : Number crunching : No Thankyou (Message 33058)
Posted 5 Nov 2009 by Brian Silvers
Post:

Disk drives should not fail from heat either, or dust, or age ... then again ... they do ...

Even RAID arrays fail ... paradoxically adding more disks to an array to make it "safer" actually reduces the MTBF on the array itself ...


The problem with the "upgrade the server"-only route is that one can only guess at how much of an upgrade is enough. Numerous people here seem to know that this project is available to the entire world, but don't understand the potential impact that has on the number of transactions per second. If only the server is upgraded, without an increase in the complexity of the work being sent to us, then the same number of transactions per second will need to be handled. You also then have to take into consideration any influx of new users putting a strain on the system again...

Personally, I'd like to know the specs of the current system, which is what I asked Travis for, but I figure that may get lost in the thread. At that point a budget amount could be stated, and people who actually do understand server specifications might be able to offer suggestions.

That said though, I don't think a server upgrade alone will suffice... The GPUs need more complex work so that they are not pounding away all the time...
52) Message boards : Number crunching : No Thankyou (Message 33057)
Posted 5 Nov 2009 by Brian Silvers
Post:
I think the last time we had a db crash was a couple months ago?


Coincidentally, it was 1 month, almost to the day...

Could you post the specs for the current system you're using for the server? Those of us who actually understand system administration and server specifications may be able to help make suggestions on improvements...


53) Message boards : Number crunching : Strange things happen (credit) (Message 32997)
Posted 3 Nov 2009 by Brian Silvers
Post:
Communication here has become Predictor-like -- not a good thing.


I notice you do not have Cosmology as one of your projects. "Communication" there pretty much only occurs between the computer systems of volunteer and project. All of the admins have bungled things and then left. The project scientist disappeared for months, then showed back up saying that he had a bad injury and his wife had a kid, then has disappeared for another month now... I can offer sympathy up to a point, but the project appears to be slipping back into the same behavior. They currently have a job posting for a project admin, but only want to pay $15/hr while asking for a lot of skills. If the project suffers another crash, it's doubtful things would be working again for 3-6 months, considering it took them nearly a month to get out of the one in February...


Regarding more complex work units (particularly here for GPU work -- not quite so much for Collatz), that certainly would be one approach. But the root cause (in my view) is the lack of alternative projects for GPU processing.


The "root cause" might be that, but a "contributing factor" is that the credit per unit time is higher here than at Collatz, so that shifts the attraction towards this project.

Many, many months back, Travis made a real effort at communicating as bumps occurred over here, but sadly, that era for this project appears to have passed. So my empathy and respect for this project has dwindled as my frustration with the performance of the project along with the information vacuum have simply moved my feelings toward ire.


If he indeed had the flu, then he may have needed to spend extra time on his studies. Additionally, he was selected to go to Barcelona for the BOINC Workshop, and he can't exactly do as much remotely as he could on-site. On top of that, the user base here has been extremely demanding. I'm not sure it is appropriate to heap everything upon him.

Like Paul Buck said in a post that disappeared with the crash, some people seem to think that the project owes them work. The performance problems are exacerbated by people trying to get that work they think they're owed, especially by utilizing scripts to hit the server more often than their systems normally would. An unwillingness to accept that the tasks being processed are far too easy for their hardware, and a strong resistance to alternatives (longer tasks / a separate project), mean that until the project spends money on hardware upgrades, the performance problems here will continue. At some point, it could become "more hassle than it is worth" from the project's perspective, much like how LHC got tired of being barked at about the replication and of spending a lot of time and energy babysitting the forum, and thus appears to have decided to process anything they need on an internal cluster.

54) Message boards : Number crunching : Strange things happen (credit) (Message 32984)
Posted 3 Nov 2009 by Brian Silvers
Post:

I for one will wait - but I DO agree that some info from a staff member would be MOST APPROPRIATE!!!

Travis - I know that you have a thankless job (I do database and network jobs in my spare time) and I appreciate all your efforts, but please give us some info dude!


It'd be interesting if the disk(s) failed due to high sustained I/O pressure...

Also of interest is that once this project went down, Collatz's web pages also got to the point where things normally are here...

Things simply need to be made more complex for GPUs, both here and at Collatz. I'm not talking about any credit reductions... Credit per unit time can remain the same for the time being. It's just obvious that the I/O load on the servers needs to be reduced...
55) Message boards : Number crunching : credit table 2.0 (Message 32841)
Posted 28 Oct 2009 by Brian Silvers
Post:
Important points from me that you seem to be missing are bolded and underlined.


You want to argue, not learn.
You simply do not want to go over what I am trying to explain; you want to talk about everything but that. Anything that could possibly show that, no matter what, you are right and everyone else is wrong.


Actually, you have a misunderstanding of what I originally said, and thus you're attempting to talk about things that I wasn't... Why should I talk about something that I wasn't trying to say? Why should I be belittled for not talking about something that I wasn't trying to say?


Point A)
The simple fact is this...
If system A) can do 100 work units a day.
If system B) can do 1000 work units a day.
If System C) can do 3000 work units a day.
which system is going to get more work done?
This is the fundamental premise that you ignore.


If you read my post that starts out with the 7 iterations of 4000 tasks...and truly try to understand it, what I've been trying to tell you is that over the course of 4 years I've had numerous deflations in the amount of credit I earn.

I was never talking about this project and this project only when I said that I find it impossible to believe that you have done more work in 7 days than my systems have done in 4 years.

The BOINC-wide standings that you post in your SIGNATURE GRAPHIC are what I was talking about. You gained the same amount of total credit, yes, CREDIT, from this project in 7 days as I have been able to obtain from two systems and 5 projects in 4 years.

Over time, the deflationary cycle reduced the total amount of credit that I was able to obtain. The BOINC-wide standings are credit-based, not workunit-based, thus 1.3 million credits obtained by you may or may not represent the same work as 1.3 million credits obtained by me. The uncertainty is due to the lack of a standard and to the continual deflation over time, of which I've experienced far more than your system has in this single one-week timeframe.

Since I kept getting smaller amounts of credit for either the same or more complex work, the total amount of credit I have obtained is less than it would've been if the deflation hadn't happened.

This project's credit award is still higher than the historical high of any project that I have participated in. As such, your credit awards from running the GPU application are higher than anything I earned on those projects. Even though you complete more tasks, and that is not in dispute, there is also a gap between your per-task award and the per-task award for everything I completed on the other projects, with the gap being larger for my more recent work and smaller for my oldest work due to the deflation involved.

My point is that it is much easier for you to overtake my total credit by processing tasks here, because you haven't had the deflation penalty and because the tasks here award more than the historical high of the other projects, and we won't even talk about the average historical award of the other projects...

It does not matter to me if you want to talk about "workunits".

THAT IS NOT WHAT I WAS TALKING ABOUT!

...and like Will Smith said in "Men In Black", I'd appreciate it if you eased up off of my back about it...


However computer B) did those units in 1/5 of the time. Duh I am not stupid, computer A) did more work. However the time differential is in computer B)'s favor and it will surpass the amount of work done by computer A) in a fraction of the time. Also gaining more credit than A) in the process.
This is what you so carefully ignore. While computer B) may have got more credit per work unit, computer B) WILL complete more of them.


As I said in my other post, future potential is not justification for credit rewards.

This point you're trying to make, though, is driven by your mistaken understanding of what I originally said. If you do not believe me when I tell you that you're barking up the wrong proverbial tree, then there's nothing that can be said that will change your tone and demeanor... and thus we would need to agree to disagree...
56) Message boards : Number crunching : credit table 2.0 (Message 32837)
Posted 28 Oct 2009 by Brian Silvers
Post:
7 iterations of 4000 tasks that award declining amounts of credit in Project A. Each task takes 15 seconds for a total runtime of 1000 minutes for each iteration ("day").

4000 * 30 = 120000
4000 * 26 = 104000
4000 * 23 = 92000
4000 * 21 = 84000
4000 * 20 = 80000
4000 * 19 = 76000
4000 * 18 = 72000

120000 + 104000 + 92000 + 84000 + 80000 + 76000 + 72000 = 628000

628000 credits for 28000 tasks completed. Average credit per task = 22.42857


7 iterations of 3000 tasks that award a fixed amount of credit initially 1/3rd larger than in the first example in Project B. Each task takes 5 seconds for a total runtime of 250 minutes for each iteration ("day").

3000 * 40 * 7 = 840000

840000 credits for 21000 tasks completed. Average credit per task = 40.00000
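The same two hypothetical projects in a few lines of Python, so the totals above can be reproduced (all figures are the made-up ones from this example, not real project numbers):

```python
# Project A: 7 "days" of 4000 tasks each, with the per-task award deflating.
awards_a = [30, 26, 23, 21, 20, 19, 18]
tasks_per_day_a = 4000
credits_a = sum(award * tasks_per_day_a for award in awards_a)  # 628000
tasks_a = tasks_per_day_a * len(awards_a)                       # 28000

# Project B: 7 "days" of 3000 tasks each at a fixed 40-credit award.
credits_b = 40 * 3000 * 7    # 840000
tasks_b = 3000 * 7           # 21000

print(credits_a, round(credits_a / tasks_a, 5))   # 628000 22.42857
print(credits_b, round(credits_b / tasks_b, 5))   # 840000 40.0
```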


The built-in deflation of credit awarded over time makes it impossible to state with absolute certainty that a system with 840000 credits has actually done more work than a system with 628000 credits. Yet the BOINC-wide leader boards, which include all projects whether or not one wants to talk about other projects, unanimously indicate that the system with 840000 credits ranks higher than the system with 628000 credits.

The unknown / undefined value that determines which system should be ranked higher than the other is the total number of operations performed. A higher number of operations by the second system would justify it being ranked higher in the standings; a short code sketch of this rule follows the list below.

  • If the first system performed 1 million operations and the second system performed 1 million operations, then they should be ranked the same.
  • If the first system performed 1 million operations and the second system performed 2 million operations, then the second system should be ranked higher.
  • If the first system performed 100 thousand operations and the second system performed 1 million operations, then the second system should be ranked higher.
  • If the first system performed 1 million operations and the second system performed 300 thousand operations, then the first system should be ranked higher.
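Here is that rule as a minimal Python sketch (the operation counts are the hypothetical ones from the list above, nothing more):

```python
def compare_rank(ops_first: int, ops_second: int) -> str:
    """Rank two systems purely by total operations performed.

    Note that runtime never appears here; only completed operations count.
    """
    if ops_first == ops_second:
        return "ranked the same"
    return "second ranked higher" if ops_second > ops_first else "first ranked higher"

print(compare_rank(1_000_000, 1_000_000))  # ranked the same
print(compare_rank(1_000_000, 2_000_000))  # second ranked higher
print(compare_rank(100_000, 1_000_000))    # second ranked higher
print(compare_rank(1_000_000, 300_000))    # first ranked higher
```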



You will note that the time component never enters the equation.

In that last example, if the second system had run for the same 1000 minutes, it would've done 4 times as much, and thus would've performed 1.2 million operations. However, unrealized potential for work cannot be factored in. To do so would be equivalent to asking your employer to pay you for 40 hours when you only worked 10. Payment for work comes upon completion of the work, with the exception of CPDN, which has its own justifiable reason for a pay-as-you-go process due to the extreme length of their tasks. The "future potential" of a system does not earn it credits, nor should it, as that would merely lead to more people trying to coerce projects into giving them credit without the slightest intention of doing the work.

If at some point a technique to reduce the total number of operations needed to complete a task is implemented, and that technique can be shared across all architectures, the credit per task should drop by the same ratio. This would keep the credit per operation the same and would retain the ability to discern which systems should be ranked higher or lower than other systems.
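As a hypothetical worked example of that rescaling (the operation counts and the 50-credit award are invented for illustration, not real MilkyWay numbers): if an optimization halves the operations per task, the per-task award should be halved as well, leaving credit per operation unchanged.

```python
# Invented figures for illustration only.
old_ops, old_credit = 1_000_000, 50.0   # operations and credit per task today
new_ops = 500_000                       # optimized task needs half the operations

# Scale the per-task award by the same ratio as the operation count...
new_credit = old_credit * (new_ops / old_ops)   # 25.0 credits per task

# ...so the credit granted per operation stays constant.
assert old_credit / old_ops == new_credit / new_ops
```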

57) Message boards : Number crunching : credit table 2.0 (Message 32836)
Posted 28 Oct 2009 by Brian Silvers
Post:
You are full of it; it has been proven, and since you do not care to actually debate or even attempt to put any understanding into this, I can call you what you are... and what everyone else thinks you are. A whiny complaining little...


OK Admins, enough is enough. Childish name-calling is not suitable for an adult conversation. I agree this thread should be locked.


It's ok. I am going to reply to him one more time, and the thread does not have to be locked unless he lobs more insults, something which I am not doing to him...

Actually, I have two posts, one reply to him, one to the thread in general.
58) Message boards : Number crunching : credit table 2.0 (Message 32815)
Posted 27 Oct 2009 by Brian Silvers
Post:
If computer A) does 275,124 work units in a particular time frame (the actual amount is hamudgen, or meaningless).
If computer B) does 275,124 work units in half the time.

Which computer did more work?

Just because it took longer for one computer to reach the same amount of completed units, does that mean it did more work?

This was and is my entire point. A point that you continuously ignored.
All things being equal, a fast computer will do more work than a slower one. Is this correct?


This is my final post on this subject. I wasn't going to post anything at all, but it is clear that you have some major misunderstandings about the BOINC credit system, how it works, and what the Cross Project Parity fanatics look at to justify their views.

In the scenario above, both computers did the same amount of work. They both did 275,124 work units.

BOINC-wide standings, though, are not based on the total number of work units. With SETI Classic, they were. With BOINC, they are not. It does not matter if you don't want to talk about other projects, as the people who will be bringing this to you won't respect your wishes. If the CPP crowd knew that the basis for the charts they use to compare projects was distorted as badly as it is, they'd be all up in Travis's email demanding further cuts.

What you are not understanding is that each change that has happened across the many years, across any of the projects, impacts the net worth of a work unit that was processed during a different credit granting era/scheme/epoch.

I stated that I found it impossible that your system here has done more work in what amounts to 7-8 days (it was 2.5x for 21 days) than what my systems had done in 4 years on other projects. Regardless of whether you want to talk about other projects, again, David Anderson isn't going to respect your "because I said it's not allowed to be talked about" decree.

What happens each time a project lowers credit per unit time is that new users have to do more work to attain the same total credit as users who processed under the older, higher credit rate. This potentially sets up a situation where older users not only have a "head start" over newer users; the newer users also carry a handicap.

If you know accounting terms, the way things have happened with David Anderson at the helm mandating these reductions is that the Net Future Value of work is always less than the Net Present Value. Likewise, the Net Present Value is less than the Net Past Value (if such a thing existed). It is a continually deflationary cycle. That is what has happened, whether you want to talk about it without restrictions or not.

The prime example I gave was Cosmology. My average credit per unit time there is barely above the BOINC benchmark * time method. I sometimes get 420 credits for 22-28 hours of work. Cross Project Parity fanatics got ahold of that project. We were told, in a nutshell, that the "excessive credits of the past make up for the low credits now".


This was why I went through and did the work units per day/month/year comparison. I totally ignored and removed the credit argument from that equation; you kept trying to put it back in using cobblestones and comparing percentages of whatever in your replies.


...because that's the way the BOINC credit and ranking system really works. It is pointless to talk about work units done when the ranking system doesn't rank people on work units done. It ranks people on credits obtained. As I said, the credit trend is continually deflationary, not static. The cut that hit anyone still using app version 0.19 here was pure deflation. They are still doing the same work with the same application, just getting less for it.

Again, you cannot just ignore the pieces that you want to ignore and talk about the pieces that you want to talk about. David Anderson isn't going to honor that... and it would behoove you to figure out a real counter-argument to what he and the CPP fanatics will say, rather than just "well, I don't want to talk about that".


I am not focusing on any one particular thing that you have said; I am being very fair, very open, and supporting my statements with evidence... or at the very least realistic examples.


The problem is that you dismiss the realistic examples I've brought in as bunk because you don't want to talk about them. David Anderson, John McLeod VII, and a host of other people are not going to have one bit of respect for "I don't want to talk about that". They are the ones who will make the decision and implement it. Telling them "you can't talk about that" is just going to get you laughed at and ignored.

Adios...
59) Message boards : Number crunching : Website slow for anyone else? (Message 32755)
Posted 26 Oct 2009 by Brian Silvers
Post:
Slow as a slug race.

Will this ever be fixed properly?


It requires the GPU users to not be hitting the server as much, so until / unless the project gets serious about making tasks take longer for GPUs, we will continue to have the slow web page response times...
60) Message boards : Number crunching : More and more failures to connect to server- deja vu (Message 32616)
Posted 21 Oct 2009 by Brian Silvers
Post:
it's no good, we need more ATI-assisted projects ;)
So we can overload them all xD

It's about time someone had the idea of taking distributed processing a step further and having distributed servers. I'm sure that many would volunteer to use their powerful systems to receive WUs and issue new ones, perhaps being given credits for WUs received, stored, and moved to the MW server when it is better able to cope, while dispensing new WUs in the meantime.


That would never work, because people are too demanding, too fickle, and too unpredictable, not to mention that it would be a logistical nightmare. People would demand more and more credit, or we'd have the CPP people stepping in and saying there was too much credit being issued. People would get irritated with a project and decide they weren't going to do it anymore. You'd also have too few of the people who signed up actually online and able to serve work at any given time. After all that, you'd have to make sure that all tasks were synchronized across multiple systems, especially for a project that depends on incoming work to generate new work. Results on various "distributed servers" could be stale and no longer needed.

Beyond even those issues, you'd have issues of security. That area of the system would have to be encrypted, with access limited to the people at the PROJECT. If the donor of the system had access to it, the science and/or user results could be compromised. There'd have to be monstrous audit trails to keep everything tracked. There'd also need to be some sort of mandatory enforcement of antivirus programs and signatures being kept up to date.

Then you'd have to get down to actual physical specs. A mandatory requirement would be a battery backup with significant runtime. Next, you'd need those systems set up with at least RAID 5 to guard against disk failure. The donor would have to be responsible for performing a full backup, probably every day, with hourly incrementals. Redundant power supplies would also be required. Finally, you'd need a broadband connection that provided significant up/down speeds and allowed server-type traffic.

Once all is said and done, the hassle of dealing with the general public, the possibility of users compromising the project's security or data integrity, and other significant risks to the project, along with significant costs for the donor, would make this a non-starter. Even if technology improved drastically, you'd still have the bane of system admins, the users... or in this case the donors... to contend with, and as I said, they're demanding, fickle, and unpredictable...

Best solution == longer tasks for GPUs :-)


