Welcome to MilkyWay@home

Posts by [KWSN]John Galt 007

21) Message boards : Number crunching : ATI GPU app 0.19f fixes the ps_sgr_208_3s errors (Message 27393)
Posted 9 Jul 2009 by [KWSN]John Galt 007
Post:
22) Message boards : Number crunching : More Invalid wu's (Message 26384)
Posted 24 Jun 2009 by [KWSN]John Galt 007
Post:
9 out of 1000 for me...8 on 4850s and 1 on a 3850...
23) Message boards : Number crunching : 4850 vs 4870 (Message 26025)
Posted 19 Jun 2009 by [KWSN]John Galt 007
Post:
A 3850 requires one 6-pin connector, as does a 4850, but the 4850 draws more watts...found that out the hard way...smoked a 400W 25A single-rail PSU this morning after installing the 4850 last night...
24) Message boards : Number crunching : strange message (Message 25834)
Posted 17 Jun 2009 by [KWSN]John Galt 007
Post:
This is a typical message if the computer isn't on all the time, or if it has been shut down for a few days. I dual boot with Ubuntu 64-bit for PrimeGrid PSP Sieve challenges, and when I go back to XP I always get that message. You can edit the client_state.xml file to change that number to something like 0.95, or leave the PC on and crunching for a bit and it will all work out in the end.
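For reference, the number being edited here is most likely the per-project duration correction factor in BOINC's client_state.xml. A minimal sketch of the relevant fragment (element names from the BOINC client state file; values illustrative, and edit only with BOINC stopped):

```xml
<!-- Fragment of client_state.xml (illustrative values). BOINC scales its
     estimated task runtimes by duration_correction_factor; after downtime
     the factor can drift far above 1.0, inflating estimates and triggering
     "won't finish in time" messages. Resetting it near 0.95 restores sane
     predictions, which then self-correct as tasks complete. -->
<project>
    <master_url>http://milkyway.cs.rpi.edu/milkyway/</master_url>
    <duration_correction_factor>0.950000</duration_correction_factor>
</project>
```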
25) Message boards : Number crunching : new joking WU-size/type ps_new_11 and ps_new_13 (Message 25018)
Posted 11 Jun 2009 by [KWSN]John Galt 007
Post:
Thanks...with 4 seconds on a 4850, there is no way the server could keep up...
26) Message boards : Number crunching : Compute Errors (Message 25011)
Posted 11 Jun 2009 by [KWSN]John Galt 007
Post:
Imcrazy,

This happens quite a bit on hosts that are shared with other projects.

It seems that the shorter the other projects' WUs, the more MW hangs.

I believe it has something to do with the way BOINC handles debt.

I think you mentioned you are also crunching PrimeGrid and AQUA. The shorter WUs will suspend your MW WUs until your short-term/long-term debt is cleared.

The new multi-threaded AQUA app can play havoc with the ATI app, since an AQUA WU now wants to use multiple CPUs and will occasionally put MW in suspend mode.

To test this, when you see MW hung up, just suspend the other projects; MW should take off and start crunching again without you having to reset or reboot your box.


Thanks, Kevin...a good explanation, since I am running PG on my i7 with the GPU doing MW, and I see the PSP Sieve WUs jumping into EDF mode, even though the due date is 7 days off and I have a 0.5-day cache.
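The debt mechanics Kevin describes can be sketched in a few lines. This is a deliberately simplified model of BOINC-style short-term debt accounting, not the actual client source; the project names, function names, and numbers are illustrative:

```python
# Simplified sketch of BOINC-style short-term debt scheduling (hypothetical
# code, not the real client). Each project accrues debt in proportion to its
# resource share and pays it down when it actually gets CPU time; the
# scheduler prefers the highest-debt project, so a project fed many short
# tasks can repeatedly preempt another until the debts even out.

def update_debts(debts, shares, ran, dt):
    """Advance each project's short-term debt across one interval.

    debts  -- current debt per project, in seconds
    shares -- resource share per project
    ran    -- name of the project that actually got the CPU for dt seconds
    dt     -- length of the interval in seconds
    """
    total = sum(shares.values())
    new = {}
    for p in debts:
        expected = dt * shares[p] / total   # fair share of the interval
        actual = dt if p == ran else 0.0    # time the project actually got
        new[p] = debts[p] + expected - actual
    return new

def pick_next(debts):
    """The scheduler runs whichever project is owed the most time."""
    return max(debts, key=debts.get)
```

With two equal-share projects, letting one run for an interval leaves the other with positive debt, so it gets picked next; that back-and-forth is roughly what shows up as MW being suspended while the short WUs burn down their debt.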
27) Message boards : Number crunching : Compute Errors (Message 24971)
Posted 11 Jun 2009 by [KWSN]John Galt 007
Post:
Yes, I am using the latest version, 0.19f. I was able to play around with it a little more last night. It seems that absolutely everything is running high priority for some reason. I made no changes to my BOINC preferences either. I also found that if I suspend my other project (PrimeGrid) everything starts back up. That will, however, have a negative impact on PG. None of this started happening until a recent Windows update. I'm very much open to suggestions on how to correct it. I thought I might try to reinstall 0.19f as soon as I get a chance, in case something got messed up with the update. If that doesn't work, maybe reinstalling BOINC. The two systems are running 6.4.7.


I have seen that with my 4850 in my i7. It seems like the WU hangs at some point, either from the CPU getting overloaded (all MW tasks 'running' but only 3 crunching) or BOINC trying to do task switching.
28) Message boards : Number crunching : Compute Errors (Message 24591)
Posted 8 Jun 2009 by [KWSN]John Galt 007
Post:
These are the ones I could get before insta purge took care of them.

Host 39176 GPU
ps_sgr_208_2s_2_1637698_1244467419
ps_sgr_208_2s_2_1637697_1244467419
ps_sgr_208_2s_2_1637695_1244467419_0
ps_sgr_208_2s_2_1624691_1244465356_0
ps_sgr_208_2s_2_1615716_1244463950
ps_sgr_208_2s_2_1615702_1244463950_0

Host 60779 GPU
ps_sgr_208_2s_2_1641829_1244468066
ps_sgr_208_2s_2_1628695_1244465990
ps_sgr_208_2s_2_1628692_1244465990
ps_sgr_208_2s_2_1623219_1244465117
ps_sgr_235_2s_1_1572923_1244457187
ps_sgr_208_2s_2_308822_1244201732

Host 39247 CPU
ps_sgr_208_2s_2_1598226_1244461206


All but one of my 0 credits have come on the 208_2s_2 WUs as well...too bad instapurge will get them shortly...I didn't see any in the most recent results...
29) Message boards : Number crunching : Compute Errors (Message 24400)
Posted 6 Jun 2009 by [KWSN]John Galt 007
Post:
I have had a few over 4 different 3850 cards, but am not really worrying about it...probably over 99% have completed successfully...
30) Message boards : Number crunching : ATI GPU app 0.19f fixes the ps_sgr_208_3s errors (Message 24299)
Posted 5 Jun 2009 by [KWSN]John Galt 007
Post:
Thanks, CP...

Running both 32 and 64 bit...both working great!!!
31) Message boards : Number crunching : Compute Errors (Message 24266)
Posted 5 Jun 2009 by [KWSN]John Galt 007
Post:
Hello MW@Home,

These *_3s_* runs are 3 stream runs that I started. I will tell Travis that there is a problem with them on the GPUs but not the CPUs and abort said run asap.

Sorry for the inconvenience,
John Vickers


No probs...

Once CP gets the ATI app sorted out, we will burn thru these like nothing...

And thanks for posting...
32) Message boards : Number crunching : Compute Errors (Message 24259)
Posted 5 Jun 2009 by [KWSN]John Galt 007
Post:
The current workaround is to abort all the _3s_ WUs, close and restart BOINC, and hope you get some _1s_ or _2s_ WUs next time. It's horribly manual and requires keeping an eye on BOINC, which explains all the requests/demands in this thread for Travis to fix it.

Come on guys, deep breaths :)
The problem isn't with the workunits, they're fine, so Travis can't fix it.
It's a bug in Cluster Physik's GPU application, as he has already pointed out.
Give him a chance and he'll sort it.


The real problem is that if a WU gets sent out to 2 GPU clients, and both abort it, the WU dies from too many errors, so the project suffers.

Just my $0.02.....
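For what it's worth, the abort-them-all workaround above can be partly scripted. Assuming you pull the task names from `boinccmd --get_tasks` (a real BOINC command-line tool), a tiny helper to pick out the offending _3s_ units might look like this; the function itself is hypothetical, and each selected name would then be fed to `boinccmd --task <project_url> <name> abort`:

```python
# Hypothetical helper sketching the manual workaround: given a list of
# task names (e.g. parsed from `boinccmd --get_tasks` output), return the
# ones whose workunit names contain the problematic "_3s_" marker so they
# can be aborted in one pass instead of by hand in the BOINC Manager.

def tasks_to_abort(task_names, bad_marker="_3s_"):
    """Return the task names containing the problematic marker."""
    return [name for name in task_names if bad_marker in name]
```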
33) Message boards : Number crunching : Why is it so hard to get work? (Message 23450)
Posted 26 May 2009 by [KWSN]John Galt 007
Post:
As I pointed out in this message and to Travis...

To stop the scripters hitting the project so hard, you could increase the minimum time between host contacts at the server end. LHC@home increased theirs to just over 15 minutes...maybe you could try 2 minutes and see what happens. I believe it is a simple server-side setting.


That's what Bill was suggesting, and he got shot down for it... Personally, I don't think 2 minutes is long enough; it needs to be at least 10. I'm sure 10 will cause a huge amount of complaining that someone might go 3-5 minutes without being able to get anything, although I'm not sure how that's different from the current situation. But then again, I view this from the perspective of solving a problem, not through the eyes of a competitor in a competition.



And something I posted got shot down as well...

If you limit the # of WUs per core, there should be plenty for all. 5k per CPU is a bit much, even if you run only CPUs...
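Both suggestions map onto standard knobs in the BOINC server software's project config.xml. A sketch with illustrative values (option names are from the BOINC server configuration; the numbers are made up for the example):

```xml
<!-- Fragment of a BOINC project's config.xml (illustrative values).
     min_sendwork_interval enforces a minimum gap, in seconds, between
     scheduler requests from the same host, blunting rapid-fire scripted
     requests; max_wus_in_progress caps how many unfinished jobs a host
     may hold per CPU, so no one host can drain the queue. -->
<config>
    <min_sendwork_interval>600</min_sendwork_interval>
    <max_wus_in_progress>6</max_wus_in_progress>
</config>
```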
34) Message boards : Number crunching : 4 * GPU Slots! Oh My! (Message 22721)
Posted 19 May 2009 by [KWSN]John Galt 007
Post:
I think Vyper has a 4x GTX295 running at SETI...first person to get over 50k RAC and the top computer. But SETI has their weekly downtime, so I can't be sure that is what he is running.
35) Message boards : Number crunching : 4 * GPU Slots! Oh My! (Message 22710)
Posted 19 May 2009 by [KWSN]John Galt 007
Post:
Hehehehe, read, guys: it's simply impossible to run more than 4 cards. The maker says to use it in 3-way SLI OR 4-way CrossFire, and that's it.

The fact that it has 7 slots is simply not for 7 cards >.< Those extra slots let you move cards to other slots if other parts or the case are in the way.

And I agree with Verstapp, the PCI bus seems to be the bottleneck with multi-GPU boards. I have seen only one system with 3 dual-GPU double-slot cards, and believe me, that's a cramped system ;)
To this day I have not seen any 4-way dual-GPU double-slot solution built in a case :D
They run the whole shebang in the open air, and that guy also had a freezer on his CPU, since it was a very highly overclocked Intel i7 920.

I am not sure if the 7 slots are all capable of running at 16x, but probably some slots are 8x or less for compatibility reasons.

My current motherboard is also capable of running 4 cards, but only at 8x speed. Not that it matters much whether it's 16x or 8x, until someone donates me a couple of GTX295 cards :D
But I wonder if it really matters; I haven't seen any performance gain or speed increase between 8x- and 16x-slotted machines, so I really think the PCI buses are the bottleneck. Does anyone have info about tests of what speed is actually reached when using high-performance cards? Or are we stuck with the theoretical speeds provided by the manufacturers?

I have already seen a PSU delivering 1.5 kW, but it's a big monster needing twice the space of a normal power supply. I thought it was a Zippy.



GPUGrid Lab machine...

Motherboard:
MSI K9A2 Platinum / AMD 790 FX, 4x PCI-E 16x
CPU:
AMD Phenom X4 9950, 2.60 GHz, 4 GB RAM
GPUs:
4x NVIDIA GTX 280
Power supply:
Thermaltake Toughpower 1500W, with 4 PCI-E 8-pin and 4 PCI-E 6-pin power cables.


So it is possible...
36) Message boards : Number crunching : never getting more than one task (Message 19937)
Posted 22 Apr 2009 by [KWSN]John Galt 007
Post:
Thanks, Alinator...it seems like the times that are recorded as the last contact coincide with a "Message from server: Project has no jobs available" line in the messages tab, whereas all other attempts are normal black text...

Back to crunching...
37) Message boards : Number crunching : never getting more than one task (Message 19827)
Posted 21 Apr 2009 by [KWSN]John Galt 007
Post:
One thing I have noticed is that sometimes my PC will ask for work, get none, and then go into backoff. However, when I check the PC in my computers list in my MW account, it shows that it hasn't connected in a few hours. One shows a last contact at 15:19 UTC, but it did do a work request since then. Maybe there is a backoff on the server side that we don't know about, since the upgrade of the server software...
38) Message boards : Number crunching : Milestones (Message 19635)
Posted 20 Apr 2009 by [KWSN]John Galt 007
Post:
Got my 6th project with over 1 million...
39) Message boards : Number crunching : MilkyWay_GPU (Message 18835)
Posted 15 Apr 2009 by [KWSN]John Galt 007
Post:
Haha yes, it's actually completely ready, and I think Travis has the apps up. We're just working on getting the web server to redirect the page correctly.


Dude....awesome...

40) Message boards : Number crunching : Milestones (Message 18680)
Posted 14 Apr 2009 by [KWSN]John Galt 007
Post:
Now hit 2 million...but am going to pull my CPU cores off for a while...got some other projects that need attending to...



©2020 Astroinformatics Group