Problem with tiny cache in MW

therealjcool

Joined: 5 Oct 09
Posts: 22
Credit: 22,661,352
RAC: 0
Message 32169 - Posted: 8 Oct 2009, 23:51:14 UTC

Errr.. just to make sure I got this straight - if the Servers or my internet connection go down, I will run out of work in 20-40 minutes depending on the machine?

Wow.. that is an extremely short timespan. I run all my crunchers 24/7, and all the CPUs have at least 3 days of WCG work queued up, most have 5 or 7 days... I couldn't stand the thought of them ever idling.

Since I'm used to my ATI GPUs idling, I won't mind as much, I guess :D

Paul D. Buck

Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 32183 - Posted: 9 Oct 2009, 4:45:27 UTC - in response to Message 32169.  

Errr.. just to make sure I got this straight - if the Servers or my internet connection go down, I will run out of work in 20-40 minutes depending on the machine?

Unless you also connect to Collatz ... of course, if you want some semblance of Resource Share being obeyed you will also have to run with 0.1 days cache ...

Sadly, UCB still thinks that strict FIFO on GPU tasks makes sense ... they imposed that rule to solve some turbulence in which tasks get selected to run, which was actually the consequence of some bugs (now seemingly fixed) and some design issues they are working diligently to ignore ...

It is not a hugely noticed problem yet because almost everyone is running pretty much a single project on their GPUs and doesn't notice the imbalances when they run a second project ...

ExtraTerrestrial Apes

Joined: 1 Sep 08
Posts: 204
Credit: 219,354,537
RAC: 0
Message 32206 - Posted: 9 Oct 2009, 20:32:16 UTC - in response to Message 32167.  

Wow, now I start to understand your problem. Let's get back to the beginning:

1. Fact: the server is struggling under the current load as we've got too many WUs passing in and out

Possible solutions:
- make the server beefier -> dumb & costly -> nope
- make the WU handling more efficient -> that'd be a huge task for the BOINC devs
- reduce the number of WUs -> yes

2. How can we reduce the number of WUs and keep the amount of work done at least constant? Only by putting more work into each WU.

3. As you stated correctly there's a limit to how large they could make the WUs as eventually they need some results back to generate new ones.
-> they may have to use this option in the next couple of weeks, which is totally fine as currently they're getting tasks back more than quick enough

4. What if they go to the longest tasks feasible and there are still too many WUs? In this case they need to give larger tasks to fast hosts and smaller ones to slower hosts.

4.1 One way to achieve this is the separate project for GPUs. As I said at least 2 times (I think) that's totally fine with me. But they have to do it. And that's the problem. It was planned in spring and has not been done up to now. I don't know the reason, but I know there is some reason.

And I suppose this reason is not going to change anytime soon. I'd be happy to be wrong here, but then there's not much point in me shouting for the separate project - you're doing enough shouting already.. in a positive way ;)

4.2 However, in case the separate project is not going to happen I have shown an alternative way of realizing point 4. It does require some careful modifications in the server software, but once done I think it does offer some advantage over the separate project approach.

Had I understood your problem with my posts sooner, I'd have stated this more clearly before: I just don't believe they're going for the separate project, so I want to present an alternative. It's an answer to the OP's "Can we please have another system that works?"; I don't claim it to be the only answer.

5. And finally, once we realize point (4) in the proper way, there'll be enough work for everyone (hint: it's easier with my system). At that point the me/you talk will be completely irrelevant. How to get there? That's what I want to think and talk about. I don't think that's too short-sighted.

BTW: everything I wrote here is totally unaffected by credits. Either way the project team will decide that.

MrS
Scanning for our furry friends since Jan 2002

Brian Silvers

Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 32208 - Posted: 9 Oct 2009, 21:27:13 UTC - in response to Message 32206.  

Wow, now I start to understand your problem.


Let's make it clear: I do not have a problem.

You, as a GPU user, have a "problem". I put that in quotes because it's difficult for me to consider it to be a real problem when we're talking about the highest amount of credit ever seen from a BOINC project and what seems to be a goal to just get more of said credit, but we'll call it a real problem in need of a solution for the sake of argument.

I am taking exception to your desire to form a "solution" to your problem by attempting to make your problem my problem and everyone else's problem...


1. Fact: the server is struggling under the current load as we've got too many WUs passing in and out


Not exactly. The types of searches they have going right at this moment seem to be just below the point that causes things to break down. From what I can see, without metrics, the way things are right now is roughly where the point of peak performance lies...as viewed by total scientific output. It may not set records for volunteers that want as many credits as they can get, but this level of activity at least keeps the server responsive and work flowing... Might it be increased in efficiency more? Perhaps....or perhaps not... Again, I don't have the raw metrics to go on, only my past experience as a system admin as well as the observations that the server doesn't seem to be sluggish.


Possible solutions:
- make the server beefier -> dumb & costly -> nope
- make the WU handling more efficient -> that'd be a huge task for the BOINC devs
- reduce the number of WUs -> yes


I am commenting as I go. At this point, I'd agree that the total number of tasks could stand to be reduced, but not by trying to take from the bottom to feed to the top... Let's see where you go...


2. How can we reduce the number of WUs and keep the amount of work done at least constant? Only by putting more work into each WU.

3. As you stated correctly there's a limit to how large they could make the WUs as eventually they need some results back to generate new ones.
-> they may have to use this option in the next couple of weeks, which is totally fine as currently they're getting tasks back more than quick enough

4. What if they go to the longest tasks feasible and there are still too many WUs? In this case they need to give larger tasks to fast hosts and smaller ones to slower hosts.

4.1 One way to achieve this is the separate project for GPUs. As I said at least 2 times (I think) that's totally fine with me. But they have to do it. And that's the problem. It was planned in spring and has not been done up to now. I don't know the reason, but I know there is some reason.


Possibly because CUDA was more difficult to get done than anticipated? Also because server-side support wasn't there for ATI? Not sure. At any rate, that is the best thing to pursue if it becomes as bad as it did earlier this year, not some idea of taking from the bottom to give to the top...

If there was a separate project:

  • GPU users could have that all to themselves.
  • The CPU side of things would no longer have to have the scientists be cautious not to make the searches too small and cause the GPUs to totally overwhelm the server.
  • Your idea of "fast" vs. "slow" hosts could be employed by the CPU side to give the 3 stream tasks to faster processors and the 1 stream tasks to slower processors, improving the CPU side of things.

4.2 However, in case the separate project is not going to happen I have shown an alternative way of realizing point 4. It does require some careful modifications in the server software, but once done I think it does offer some advantage over the separate project approach.


At this point we start disagreeing.

The 3 stream tasks are handled by aging Pentium 4 systems within 2 hours with the SSE2-optimized application. A better alternative to trying to do a "grab" for the top users would be to develop a CPU feature detection wrapper like the one Einstein@Home uses, then bring the optimized application fully in-house and provide the optimized applications to all CPU users as the "stock application". There'd need to be x87, SSE, and SSE2 versions, and possibly an SSE3 version (I'm using one currently). That would improve performance for the CPU users, if you are still convinced that their throughput is to blame for the ills of the project.
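
(For illustration, a minimal sketch in C of the kind of CPU feature detection such a wrapper might perform. It assumes GCC on x86, which provides __builtin_cpu_supports(); the science-app binary names are hypothetical, and a real wrapper would also have to launch the chosen application and relay BOINC's state.)

/* Minimal sketch of an Einstein@Home-style feature-detection wrapper.
 * Assumes GCC on x86; the binary names below are made up. */
#include <stdio.h>

static const char *pick_app(void)
{
    if (__builtin_cpu_supports("sse3"))
        return "milkyway_sse3";
    if (__builtin_cpu_supports("sse2"))
        return "milkyway_sse2";
    if (__builtin_cpu_supports("sse"))
        return "milkyway_sse";
    return "milkyway_x87";   /* plain x87 fallback for everything else */
}

int main(void)
{
    __builtin_cpu_init();    /* initialize the CPU feature data */
    printf("selected science app: %s\n", pick_app());
    /* A real wrapper would exec this binary and pass its exit status back to BOINC. */
    return 0;
}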

If they don't do a separate project, don't bring the optimized application in-house, and the project is at peak efficiency right now, which the evidence suggests is the case, then those of you with GPUs need to just deal with it. No project promises that you will have a constant stream of work, 24x7x365. The desire to take from those who have very little to begin with and give to those who already have a lot is flat-out crass, arrogant, selfish, etc, etc, etc... The only way I could ever consider supporting this type of proposal is if the project itself said it was the best thing in the world. After they did that, though, I'd promptly detach and move elsewhere...

Paul D. Buck

Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 32216 - Posted: 10 Oct 2009, 3:55:31 UTC

The suggestion that the GPU consumption of tasks here can be solved with a GPU-only project with longer tasks has one hole in it... I already have GPUs that are quite content with the tasks here ... and unless they change the tasks here to make them incompatible with the currently available software, you are just as likely to see the same people here pulling tasks to feed their GPUs while at the same time running over at the GPU side...

Or to put it another way, your solution may not work ... that is the problem with open source projects ... those pesky people out there may do things that you don't expect with your code.

Brian Silvers

Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 32224 - Posted: 10 Oct 2009, 9:29:51 UTC - in response to Message 32216.  
Last modified: 10 Oct 2009, 9:37:45 UTC

The suggestion that the GPU consumption of tasks here can be solved with a GPU-only project with longer tasks has one hole in it... I already have GPUs that are quite content with the tasks here ... and unless they change the tasks here to make them incompatible with the currently available software, you are just as likely to see the same people here pulling tasks to feed their GPUs while at the same time running over at the GPU side...

Or to put it another way, your solution may not work ... that is the problem with open source projects ... those pesky people out there may do things that you don't expect with your code.


While all this back and forth was going on, I thought, somewhat sarcastically, of a project called "HelloWorld@Home". The general concept would be to create a project that simply wrote out the words "Hello World" in a text file and submitted the text file to the project servers. It would then issue 1 million BOINC credits for each submission.

The project would be limited to CPU users only. How? Use of an AES-256 encryption key pair. The correct app would check for various things, like processor feature specs and other test parameters and submit them along with the text file. Some things tested for would be the amount of time to do a particular mathematical operation. Many such calculations would be embedded in the code, and the results all sent back to the server. The server would handle encryption key verification as well as a rotating check of the calculations. If the chosen calculation was out of a tolerance range, that task would be declared invalid and no credit awarded... Those are just some ideas I kicked around briefly. They could be expanded upon...
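
(A purely hypothetical sketch of that timing check in C: the workload, iteration count, and tolerance window are invented for the example, and in a real project the range check would live on the server, keyed to the reported CPU.)

/* Hypothetical timing-based sanity check, as sketched above.
 * All constants here are illustration values, not project parameters. */
#include <math.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long iterations = 50000000L;      /* the "particular mathematical operation" */
    double sum = 0.0;

    clock_t start = clock();
    for (long i = 1; i <= iterations; i++)
        sum += sqrt((double)i);
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* The client would report both values; a server-side check might then
     * compare the elapsed time against a tolerance window for that CPU. */
    const double expected_min = 0.05, expected_max = 20.0;   /* made-up tolerances */
    int valid = (elapsed >= expected_min && elapsed <= expected_max);

    printf("checksum=%.3f elapsed=%.3fs -> %s\n",
           sum, elapsed, valid ? "accepted" : "declared invalid, no credit");
    return 0;
}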

Paul, you're talking to someone who has some system admin and security background. Yep, open source is a bear, but it is not totally insecure. Enough safeguards could be built in to identify people who want to cheat.

Alternatively, all one really has to do is make the GPU project credits more attractive than the CPU project, thus the only incentive for a GPU user to attempt to process CPU tasks is if the GPU project was down.

As a system admin, I'd run periodic queries specifically looking for people that wanted to cheat that way, and if found, David Braun would look kind compared to me... :-)

Finally, yeah, sure, it's possible that people will do such a thing (cheat), but that is not an excuse for doing nothing. Also, the vast majority will be respectful and move on. An incentive would be issuing nothing but the short-running tasks for a little while. Yep, it would cause this server to have problems, but those interested in credits would find themselves losing ground by refusing to change...

Incentives... That's all you need.... Incentives... ;-)

Beyond

Joined: 15 Jul 08
Posts: 383
Credit: 729,293,740
RAC: 0
Message 32448 - Posted: 16 Oct 2009, 21:41:32 UTC

MW acknowledges a maximum of 8 cores, so it hands out not more than 48 WUs at a time. The issues arising from this and possible solutions are currently being discussed in thread "Problem with tiny cache in MW" .. though I guess it's a tough read.

MrS

You rang? I think it belongs in this thread :-)

To continue the saga, as the poster mentions, the per-core WU limit is a problem. Yesterday, while diagnosing why my MW WUs were failing (it turns out there are bad WUs being sent out), I moved one of my ATI cards from a 4-core XP32 box to a 2-core Win7 64-bit box. Now I have 2 machines hammering the server every minute (when MW is running) trying to fill a queue that takes a total of 5 to 8 minutes to run depending on WU size. How many thousands of machines are doing the same? No wonder the servers here are so bogged down. Not a very smart way to manage resources IMO.

Brian Silvers

Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 32450 - Posted: 16 Oct 2009, 22:18:49 UTC - in response to Message 32448.  
Last modified: 16 Oct 2009, 22:22:22 UTC

Now I have 2 machines hammering the server every minute (when MW is running) trying to fill a queue that takes a total of 5 to 8 minutes to run depending on WU size. How many thousands of machines are doing the same? No wonder the servers here are so bogged down. Not a very smart way to manage resources IMO.


The issue is and will continue to be the fact that the tasks being processed by GPUs were not originally designed to be processed by GPUs. They were designed to be processed by CPUs. The ATI app was designed by a 3rd party person. It took the same tasks that were being done by CPUs and allowed them to be run on GPUs.

Let me repeat: The 3rd party application that was developed allowed tasks that were originally intended to be processed by CPUs to be able to be run / processed on suitable ATI GPUs.

Why is this important? Because the project was technically then being run "out of specifications", aka "overclocked". I overclock my CPU. It has its risks and rewards. If you push any CPU, IC, or other electronic device hard enough, you will eventually find its breaking point. This is where products like Prime95, Orthos, SuperPI, MemTest86+, etc... came into the forefront as tools to test overclock stability.

The bad WUs / short WUs are making this problem crop up again. Time and time again the server-side infrastructure shows that it can't handle being pushed any harder, yet that's what you appear to want. Instead of looking so short-term, try looking at a longer-term solution... It has all of the same benefits that you appear to want, it just doesn't get done as quickly.

The answer to your problem was and continues to be either larger tasks for GPUs to process on this same server, or an entirely new server and new project for GPUs to process tasks that are more complex.

The other "solution" that has been floated is to attempt to take tasks away from certain people and give them to others. The reality of that is that it would at most only bring fleeting moments of "relief", or could actually cause things to get worse, as the "give an inch, take a mile" rule would come into play. People would then be unhappy about 30 minute caches and would want an hour. At some point in time after that, some people would be unhappy about having an hour of cache and would want 3 hours, then 6, then a day, etc, etc, etc... People with 38xx series ATI cards would be villainized as the "slow users", taking the place of those of us with CPUs. When the 58xx series cards get out there in larger numbers, the 38xx and 43xx series, as well as possibly the 47xx series, would be proclaimed to be "slow", etc, etc, etc...

Does that sound like a solution to you? Wouldn't it be better to have a real solution instead of the computer equivalent of "class warfare" / "economic redistribution of wealth"??? Really, that's what you're proposing with the "take from others" approach, except it equates to "take from the poor to give to the rich". That eventually doesn't work, because "the poor" just don't have enough to be able to boost "the rich" for any extended period of time, so people will keep wanting to broaden the definition of "poor" so that there are more people to be plundered in favor of the few...

[P3D] Crashtest

Joined: 8 Jan 09
Posts: 58
Credit: 53,161,741
RAC: 0
Message 32451 - Posted: 16 Oct 2009, 22:28:13 UTC

We need larger WUs and a bigger cache:

I tried an ultra-low-voltage single-core CPU with an OC'd 4870X2:

6 WUs done in 3 min 12 sec ...

on Collatz: 120 WUs done in 19 h 56 min

GalaxyIce

Joined: 6 Apr 08
Posts: 2018
Credit: 100,142,856
RAC: 0
Message 32453 - Posted: 16 Oct 2009, 22:38:11 UTC - in response to Message 32450.  
Last modified: 16 Oct 2009, 22:42:42 UTC

They were designed to be processed by CPUs.

No they weren't. Rensselaer Polytechnic Institute developed a research project called Milkyway@Home which required the processing of data by computer or whatever means, calculators, atomic clocks or counting on fingers - whatever it takes to feed their research.

Your analysis of the whole thing being concentrated on CPUs as if that was the end in itself is quite wrong. Let's get those resources properly and smartly managed. My GPUs are straining for MORE! because they want to smartly and resourcefully contribute to Milkyway@Home!



Yea, why not change the world?


Brian Silvers

Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 32464 - Posted: 17 Oct 2009, 2:02:05 UTC - in response to Message 32453.  
Last modified: 17 Oct 2009, 3:01:58 UTC

They were designed to be processed by CPUs.

No they weren't. Rensselaer Polytechnic Institute developed a research project called Milkyway@Home which required the processing of data by computer or whatever means, calculators, atomic clocks or counting on fingers - whatever it takes to feed their research.


Since I seem to be taken to task by you, again, over another quibble about wording:


About MilkyWay@home
The goal of Milkyway@Home is to use the BOINC platform to harness volunteered computing resources in creating a highly accurate three dimensional model of the Milky Way galaxy using data gathered by the Sloan Digital Sky Survey. This project enables research in both astroinformatics and computer science.
In computer science, the project is investigating different optimization methods which are resilient to the fault-prone, heterogeneous and asynchronous nature of Internet computing; such as evolutionary and genetic algorithms, as well as asynchronous Newton methods. While in astroinformatics, Milkyway@Home is generating highly accurate three dimensional models of the Sagittarius stream, which provides knowledge about how the Milky Way galaxy was formed and how tidal tails are created when galaxies merge.

MilkyWay@Home is a joint effort between Rensselaer Polytechnic Institute's departments of Computer Science and Physics, Applied Physics and Astronomy. Feel free to contact us via our forums, or email astro [at] cs [dot] rpi [dot] edu.


That was from the front page of the project. While one could argue that "computing" could still be done on fingers, toes, an abacus, a calculator, or whatever, I think that if one wanted to be so silly as to spin the term "computing" to mean that vs. what we all have been using to process tasks with, a computer, one might also get hyper-offended about the comment about this being computing done "on the cheap".

Further, from the request for donations forum posting:

One of our users re-wrote our code so that it worked on a GPU, and showed us how much faster our code would run if we had a GPU version. This sparked activity on our side to make this available to everyone. In a way, we are learning a whole new and interactive way to do computing for science projects that was pioneered by the SETI@home application. But as a scientific community we are only beginning to learn how to use the tremendous resources that are available.


Isn't it time to move past attempts at crafty wordsmanship and quibbles over minutiae, or even misunderstanding / misinterpretation of what someone meant?

If it is time to move on past that, then maybe you could see that larger tasks for GPUs would help a lot more than increasing the amount of the smaller tasks.

So, if you are still going to go for wordsmanship rather than a productive conversation, then I'm tired of trying to out-joust you in a wordsmanship contest...so if that's what you want, you "win"... The choice is yours to make...

GalaxyIce

Joined: 6 Apr 08
Posts: 2018
Credit: 100,142,856
RAC: 0
Message 32481 - Posted: 17 Oct 2009, 8:44:09 UTC

No, you really don't get it, do you? It's not quibble, quibble, quibble. It's crunch, crunch, crunch!



Beyond

Joined: 15 Jul 08
Posts: 383
Credit: 729,293,740
RAC: 0
Message 32782 - Posted: 26 Oct 2009, 16:32:11 UTC

Related question: a quad with 1 ATI card is limited to a 24 WU cache. If another ATI card is added is the cache doubled to 48 WUs or is the machine still limited to 24?

Labbie

Joined: 29 Aug 07
Posts: 327
Credit: 116,463,193
RAC: 0
Message 32790 - Posted: 26 Oct 2009, 19:09:56 UTC - in response to Message 32782.  

Related question: a quad with 1 ATI card is limited to a 24 WU cache. If another ATI card is added is the cache doubled to 48 WUs or is the machine still limited to 24?


Still limited to 24. :(


Calm Chaos Forum...Join Calm Chaos Now

banditwolf

Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 32791 - Posted: 26 Oct 2009, 19:10:26 UTC - in response to Message 32782.  

Related question: a quad with 1 ATI card is limited to a 24 WU cache. If another ATI card is added is the cache doubled to 48 WUs or is the machine still limited to 24?

No, it's still limited to 24. The cache is based on the number of CPU cores the computer has: each core gets 6 WUs, so dual = 12, quad = 24, and so on.
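
(Purely as an illustration of the rule described in this thread, 6 WUs per CPU core with at most 8 cores counted per the post quoted earlier, here is a small C sketch; the figures come from the thread, not from project documentation.)

/* Per-host WU limit as described in this thread: 6 WUs per CPU core,
 * with at most 8 cores counted. Extra GPUs do not raise the limit. */
#include <stdio.h>

static int wu_cache_limit(int cpu_cores)
{
    const int per_core = 6;
    const int max_cores = 8;
    if (cpu_cores > max_cores)
        cpu_cores = max_cores;
    return cpu_cores * per_core;
}

int main(void)
{
    printf("dual core: %d WUs\n", wu_cache_limit(2));   /* 12 */
    printf("quad core: %d WUs\n", wu_cache_limit(4));   /* 24 */
    printf("16 cores:  %d WUs\n", wu_cache_limit(16));  /* capped at 48 */
    return 0;
}
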
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.

Beyond

Joined: 15 Jul 08
Posts: 383
Credit: 729,293,740
RAC: 0
Message 32803 - Posted: 26 Oct 2009, 21:31:42 UTC

Thanks for the answers. So if I add another 4770 to my dual core, it'll have a total cache of 5 minutes, assuming the "long" WUs. Great :-(

Beyond

Joined: 15 Jul 08
Posts: 383
Credit: 729,293,740
RAC: 0
Message 33672 - Posted: 24 Nov 2009, 21:23:23 UTC

Any admin feedback on this subject yet? Thanks!

The Gas Giant

Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 33746 - Posted: 26 Nov 2009, 9:19:54 UTC

The project is addicted to the fast turn around times that GPU crunching gives them. I don't think they care if GPU crunchers can't cache many wu's.

Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 33747 - Posted: 26 Nov 2009, 9:26:39 UTC - in response to Message 33746.  

The project is addicted to the fast turn around times that GPU crunching gives them. I don't think they care if GPU crunchers can't cache many wu's.


I know the volunteers at our project for the most part care about credit. I know there are a few of you out there who are really interested in the science we're doing, but it sadly seems that for some reason there are quite a few people who are interested in the mythical credit :P

For our project to do the science we're doing, we need fast turnaround times on workunits. I hope there's still the sticky describing why we really need that. On top of this, we've found that when we increase the workunit cache, the number of WUs out in the system increases to a point where the database can't handle its queries fast enough (because a lot of it is dependent on the result table). So right now, this project needs a low cache, partly because of the server and partly because of the scientific needs of the project.

If you guys appreciate the science we're doing here, then we hope you'll put up with having a small WU cache. Part of what's nice about BOINC is that you can also be part of other projects that will fill out your cache when you don't have work from us.

This isn't to say that we're not working to try and make the situation better so you can have a longer cache time (because we are); it's just that these things take time. Sadly, it's typically more time than people are used to dealing with, especially since I'm really the only one developing the server code, and I only just finished my PhD, which took up a lot of my time.

Anyways, I'm hoping that this spring we'll be able to deal with a lot of these issues and I'll be able to train someone to do what I've been doing -- and even more, and hopefully do a better job at it. At the very worst I'll find a job and keep doing this part time, because I really am interested in the research behind this project and don't want to see it stop.

The Gas Giant

Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 33749 - Posted: 26 Nov 2009, 10:06:01 UTC - in response to Message 33747.  

The project is addicted to the fast turn around times that GPU crunching gives them. I don't think they care if GPU crunchers can't cache many wu's.



For our project to do the science we're doing, we need fast turnaround times on workunits. I hope there's still the sticky describing why we really need that. On top of this, we've found that when we increase the workunit cache, the number of WUs out in the system increases to a point where the database can't handle its queries fast enough (because a lot of it is dependent on the result table). So right now, this project needs a low cache, partly because of the server and partly because of the scientific needs of the project.

If you guys appreciate the science we're doing here, then we hope you'll put up with having a small WU cache. Part of what's nice about BOINC is that you can also be part of other projects that will fill out your cache when you don't have work from us.


But you seem to forget that CPU crunching gets you the same cache and yet takes 50 times longer to complete a WU. So your argument tends to fall in a heap at that...