Welcome to MilkyWay@home

Posts by Gill..

1) Message boards : MilkyWay@home Science : MilkyWay@Home Progress Report (Old) (Message 43489)
Posted 5 Nov 2010 by Gill..
Post:
A classic demonstration of throwing spaghetti against the wall to see what sticks. It is good some of it has stuck so there is a path to better sticking. I have been involved in math modelling since 1967 and know what spaghetti throwing means. No blame.

I have to think a lot on the basic explanation and approach you give, but as some things have stuck it has to be on the right track. But I question that it has stuck. You still give visual examples of what appears to be happening. Does the data match the predictions without knowing the distribution of dark matter? The images cannot be correct without knowing the distribution of DM.

I guess I am still missing the basic point of showing such images while not knowing the distribution of DM but assuming there is none -- or is that my error?

I am not expecting any answers now. I have not spent enough time on your post to say anything credible about it.


You are not alone, fellow user, in looking for an update. Just posting to bump, in hopes of getting some attention before another 6 months rolls around without a post.

And yes, screen savers are the last thing needed and a waste (although one would probably be cool).
2) Message boards : Number crunching : It is fermi? How do you think? (Message 38664)
Posted 13 Apr 2010 by Gill..
Post:
Don't freak out when the 470 shows up on my credit tomorrow either.......

Why should anyone freak out when you start using a cut-down, artificially limited card (for double precision), which is probably no faster than, or even slower than, an HD4770? I guess the performance on other projects (using SP or integers, like Collatz) may be more interesting ;)


As I'm finding out! And to think I sold two 4770s for this onslaught... that's OK - my 5850s upstairs are still rolling, primarily on Milky.

Regardless, I do want to at least try some on the 470 to give a fair comparison to its ATI brethren upstairs. The primary driver for the purchase was indeed the projects that are solely CUDA. However, it should be noted that Milky's maturity from a programming standpoint is outstanding, in that it provides such a nice level playing field for both brands.

So with that in mind, I tried searching to see what I could do. My first couple of tasks hit an immediate compute error (similar to the ATI ones after a CCC install), and I of course stopped them immediately.

I've looked in the DATA folder and it doesn't have the old familiar app_info... and I'll readily admit I'm a CUDA noob. So are the 470/480 totally out for now, or is there anything I can do? Are there similar tweak methods I can use on these CUDA apps, or is it more locked down?

Are there any modified executables with the compute capability requirement tweaked to allow 2.0?

Feel free to use me as a guinea pig for Fermi if you like, I like this project!

Thanks in advance.

Gas Giant - Kylie-PC is me... it's my daughter's custom watercooled HTPC I built her. That's the above-referenced 470. It is indeed real. Took 4 days to get from CT to MA (it was scanned in transit as of 4 am Friday). Got it at noon today.

Edit 2 - No way I could pass up opening the package and crunching on it. Value be darned - I bet I could still get my full $349 back in 6 months if I wanted to... or pretty close to it... there will be tons of crappy cards - but this one is in the sweet spot, and will be rare for quite some time, I bet (like the 5850 was for months)...
3) Message boards : Number crunching : It is fermi? How do you think? (Message 38444)
Posted 9 Apr 2010 by Gill..
Post:
Lol, this must have been why I was flagged. To all wondering about my increase:

Sold my 4770s, as well as my 4870. Got a couple of 5850s. 5850s + 2 monitors + CCC 10.3 = disaster.

Plus, these are new Sapphires (non-reference)... so no quick flash, no Afterburner. I edited the BIOS, then flashed... hence all my mysterious clients (2 computers in reality).

I've updated to .23 as they requested, and I've updated my team as well.

Do catch the offenders, but please remember, everyone (calm down, David, no need to threaten pulling funds; keep your donations to yourself) - this IS for science - so let Travis do what he needs to do.

They will sort it out..

Don't freak out when the 470 shows up on my credit tomorrow either.......
4) Message boards : Number crunching : IMPORTANT! Nvidia's 400 series crippled by Nvidia (Message 38443)
Posted 9 Apr 2010 by Gill..
Post:
Noted 2 weeks ago. I wonder if anyone is willing to own up to crunching with one here?


My 470 is in CT on the way to MA as we speak!

I'm torn on how I should set it up; I'm thinking the two 5850s upstairs and the 470 downstairs.

If it's loud, the wife will make me put it upstairs. I'll do a comparison and definitely post.

Either way, getting it for $349 - I couldn't pass it up. Right place, right time..... I'm sure some Nvidia-phile would love it if it doesn't compare....

I'll do SETI, GPUGRID, and Milky - sound good? I'm so psyched I can't sleep.
5) Message boards : Number crunching : Cruncher's MW Concerns (Message 33242)
Posted 13 Nov 2009 by Gill..
Post:
Cruncher's MW Concerns.

Lack of cached WUs on GPUs. With a WU taking 55 seconds or less, for my machine with 2 GPUs in my quady, that gives me just under 15 minutes of WUs cached. It would be nice to have 30 to 60 WUs cached per GPU.


Sadly this has been a problem with the project since its inception. Due to what we're doing here, our WUs need a somewhat faster turnaround time, so chances are you're not going to be able to queue up too much work.

Also, with the server in its current struggling state, letting people have more WUs in their queue only slows it down further, so it's not something we can really change.


Well, on the first point, that doesn't entirely address the situation at hand. A WU that can take 1.5 or 2 hours on the CPU application can take substantially less time on a GPU.... Fast turnaround is one thing, but expecting results back every 30 seconds (12-second completion times) would hardly reduce the load on the servers. At least on the networking end, it would mean chatty boxes constantly hammering the server with requests for new work and a constant stream of uploads and reported results. All those additional requests then have to be handled, and they occur more and more often... In fact, a larger queue for these people, paired with a slightly more aggressive backoff algorithm, could help with some types of resource load problems.

The problem is that addressing it would require (1) estimating a device's average completion time, and then (2) instead of restricting the cache to some fixed number of WUs (which assumes all computing devices are equal), restricting it by a time factor, so that faster devices get a larger queue. The only problem that introduces is that a single BOINC client could pull WUs for both the CPU and the GPU, and if it shows up as a single computer ID it gets hairy to say "OK, the number of GPU WUs allowed to be outstanding at a time should be one thing, and the number of CPU WUs much smaller." That would require a way to distinguish between them from the scheduler's standpoint, which means more information than a simple computer ID plus whatever benchmark stats got uploaded under the assumption of one compute device of uniform performance.

Allowing 30 minutes or 1 hour of GPU tasks to sit on a box at a time wouldn't delay return times any more than allowing one task on a CPU that takes longer than that to crunch, even running exclusively. Also, the CPU can run a greater total number of projects (in scheduled runtime) than GPU-only projects can. But it does add complexity, if the goal is to get results back in a reasonable time frame, when WU completion times can vary between seconds on the one hand and hours on the other. The two extremes present rather different problems: results that aren't returned for days on one side, and on the other, results reported so often, with a constant stream of downloads, that the server is practically hammered with uploads, downloads, and requests in unrelenting fashion.

There's a balance to be struck between faster turnaround (and the attendant faster WU generation) and the server load that comes from clients contacting the server all the time without some form of break before the next request hits the network pipe, at least on the face of it. But with such wild variation in completion times, and given that a single box could have one of each device turning in results, the box simply has no single performance constant; time to completion is what one really wants to get at, but the differences in speed between devices get in the way.
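To make the time-based quota idea above concrete, here is a minimal sketch in Python. This is not MilkyWay@home's actual scheduler logic; the function name, the 30-minute target, and the 60-WU cap (echoing the "30 to 60 WUs per GPU" wish earlier in the thread) are all hypothetical, purely for illustration.

# Rough sketch of a time-based work quota: instead of a fixed per-device WU
# limit, size the queue so each device holds roughly target_cache_s seconds
# of work. All names and limits here are made up for illustration.

def wu_quota(avg_completion_s: float, target_cache_s: float = 1800.0,
             min_wus: int = 1, max_wus: int = 60) -> int:
    """Number of unreturned WUs to allow a device, sized by time, not count."""
    if avg_completion_s <= 0:          # no history yet: be conservative
        return min_wus
    quota = int(target_cache_s // avg_completion_s)
    return max(min_wus, min(quota, max_wus))

# A GPU finishing WUs in ~55 s gets a much deeper queue than a CPU that
# needs ~2 h per WU, yet both hold roughly the same wall-clock backlog.
print(wu_quota(55))        # 32 WUs  (about 30 minutes of work)
print(wu_quota(2 * 3600))  # 1 WU    (a single task already exceeds the window)

The hard part, as the post notes, is the bookkeeping: the scheduler would need a per-device average completion time, with the CPU and the GPU of the same host tracked separately, rather than one figure per computer ID.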


Yeah, what this dude said. I think someone's taking them for a ride with these "vibrating hard drives".....Seriously...construction??

Two servers, one for CPU tasks and one for GPU tasks... with the work formulated differently so the data gets back to the project in the required times (as desired by the project), while keeping everyone else happy with WU times and reducing server load! (My suggestion would be no VMing them either - give them healthy CPUs and platforms to work with, in addition to the suggestions above.)

Crunchy crunchy, pair of dragons now...

4890 joined the fam with my unlocked 550BE
4870 downstairs on the 940 BE heating my living room.

Don't let Collatz get all the credits. Plus, I just got some dudes to join with some heavy-hitting equipment (a 295 and a 5870)... and then 2 days later we hit another huge outage...

All my BOINC lobbying takes hits... (I've got tough skin, don't worry)

But the RPI team is still the best in the business!!

The efficiency of these ATI cards is the envy of all the distributed computing projects.... keep up the good work, and thanks.

PS Good luck on your dissertation!
6) Message boards : Number crunching : More and more failures to connect to server- deja vu (Message 32566)
Posted 20 Oct 2009 by Gill..
Post:
So the project servers ARE down? They've been down since late last night. And, if you're wondering - I got a number of computation errors right before.... which is NOT normal....

NICE on the 5870 - you'll have to let me know how it goes... I'm almost at 2M in a couple-week period with my 4870...

Got the GD70 with 4 PCIe slots.... ohhh, imagine those stacked with 5870s (never could I afford that)...

but maybe 5850s over time!

Also, everyone on my overclocking site says x8/x8 lanes don't affect this type of crunching - but they're going off Folding numbers... is that true here too??

My board will do x16/x16 or x8/x8/x8/x8.

blam!
7) Message boards : Number crunching : BOINC 6.10.12 fixes Ati issues? (6.10.13 has been released as well) (Message 32221)
Posted 10 Oct 2009 by Gill..
Post:
A question in general about the MW & Collatz apps. Since v2, Collatz has been more or less stable on my machines (both HD 4770 / WinXP32 / v9.8 no CCC / BOINC v6.10.13). MW runs fine when the machines are left as dedicated crunchers, but if I try to do anything else while MW is running (like open Windows Explorer or run Notepad, sometimes even just bring BOINC Manager into focus) I'll get an immediate video crash. Given buggy ATI drivers, what could be the difference that allows Collatz to run alongside other apps but causes MW to crash? Possibly that MW is using double precision?

I really have no idea. Both applications access the GPU in exactly the same way. The only difference is what gets calculated. And I don't have those problems when using Catalyst 8.12. ATI apparently changed something in the drivers, which has broken them for MW on XP. Vista and Win7 drivers appear not to be affected, and strangely it doesn't happen with Collatz either. I have a WinXP machine running Collatz with Catalyst 9.9 (and a 790GX chipset IGP needing just 11,000 seconds on average ;) without any problems.

I'm not entirely convinced that the CAL bugs introduced in ATI drivers earlier this year are fully exterminated. After I switched from BOINC 6.4.7 to 6.10.13, I tried both Cat 8.12 and 9.9 for a while on an HD4870, everything else unchanged.

Cat 8.12: rock solid up to 790 MHz.
Cat 9.9: barely stable at stock; VPU crash guaranteed anywhere beyond 750 MHz.


830 MHz GPU, 1050 MHz memory clocks on a 4870. CCC 9.9, BOINC 6.10.6. Rock solid for days now at 85% CPU (all 4 cores). I run one MilkyWay WU at a time at an average of about 43 seconds. I like the 1-second pause of juice going to the chip that keeps it at 76 C with 85% fan... keeps the room comfortable!

RAM usage is 3.7 GB out of 8, with the memory running at 800 MHz. Vista x64. Would love a Linux version... I get 30% better CPU benchmarks in Ubuntu.
8) Message boards : Number crunching : server crash (July 29) (Message 31844)
Posted 2 Oct 2009 by Gill..
Post:
Here is what I'm showing, like everyone else - been like this all night.

Don't knock the dudes - they're trying...

Think of the bright side: you're giving your GPU a break, and your electricity bill might be a bit lower this month....


10/2/2009 2:24:17 AM Milkyway@home update requested by user
10/2/2009 2:24:18 AM Milkyway@home Sending scheduler request: Requested by user.
10/2/2009 2:24:18 AM Milkyway@home Reporting 24 completed tasks, requesting new tasks for GPU
10/2/2009 2:24:23 AM Milkyway@home Scheduler request completed: got 0 new tasks
10/2/2009 2:24:23 AM Milkyway@home Message from server: Project is temporarily shut down for maintenance
9) Message boards : Number crunching : 3rd.in - optimized apps (Message 31485)
Posted 25 Sep 2009 by Gill..
Post:
BOINC 6.10.6 and CCC 9.9. Was tough to get it going, but once I did - holy hot... well, you know.

4870 1 GB, 940 BE @ 3.6 GHz, in Vista x64.

The smaller of the workunits takes me 45 seconds on average - not kidding.

Running very smoothly for the most part. I've throttled back to 92% on the 4 cores for BOINC in total.

Overclocked or not (only 2 days or so of testing, mind you - first time running the 4870 with MilkyWay)... I got what appears to be a system restart - I'm thinking it's definitely the GPU. Windows comes back as normal - BOINC starts but doesn't connect to the client... and that's it until I notice; I restart the client and off it goes, running like a madman..... Oh, and the video card settings are back to stock..

Could have been too hot in the room - this thing runs a steady 77 C or so (the GPU, not the room itself), but the weather should cool off - and I'm leaving the window open tonight.

Going stock 750/900 as I head to bed - but definitely 100% fan.

As for computation errors, I've only noticed a couple altogether. There may be more I'm missing - but I've monitored it pretty much whenever I'm not at work over the last two days... I've only seen a couple (not counting the mess when I first tried it - I blew away my first 24 tasks like an idiot...).

I'm so utterly impressed with this, it's not even funny.

To the guy with the 3870X2: dude, I see a 5800 series in your future... the money you'll save on the electric bill over a 6-month period will justify the investment - even my 4870 is probably more efficient.

Anyone know when a Linux client will be available? I'd like to get back to my extra 30% CPU optimization....

Second question - should I just keep using 6.10.6 and CCC 9.9 and push on? Or give it one more night and then try rolling back to 8.12 (I'll cry... I don't know if I can actually go through with that).

Oh yeah, I forgot the best part about this optimized app: 5% CPU usage on one core?? That is soooooooo awesome - the primary reason I abandoned Folding@home. I wanted all 4 cores running 4 tasks, plus 1 GPU task...

Now I do....crunching nirvana...
10) Message boards : Number crunching : HD5870 (Message 31433)
Posted 24 Sep 2009 by Gill..
Post:
These are certainly going to keep some houses warm this winter. With an idle power draw of 27W, rising to 188W under full load, these bricks will draw enough power to dim and flicker street lighting even if you had the PSUs to run them.



Exactly my plan... my wife freaked out when she walked into my home office tonight - I got the optimized app working last night! Yaya... It was 25 degrees hotter in the room..

So I'm moving the 4870 to the HTPC I'm building for downstairs, and hopefully a 5800 series goes in my main rig....

Video of the optimized app running them in 43 seconds at 825 core / 975 memory clocks:

http://www.youtube.com/watch?v=vNMp3e7jpMU
11) Message boards : Number crunching : Temporary Availability of the ATI application through BOINC (Message 31329)
Posted 22 Sep 2009 by Gill..
Post:
For Debian/Ubuntu you can get BOINC client 6.4.5 here:
GetDeb

Collatz/Milkyway GPU apps run on it fine.

That would surprise me. It's Windows only for the GPU applications so far.


Thank you; so I'm not crazy - I checked all those links... there's no GPU app for Linux anywhere, right? I use Linux because it beat my Vista by 30% on CPU-only...

But if I can use the GPU over in Vista, I'll switch over to that and download everything needed - that's all I was asking or getting at.

Thanks... I'll check it out tomorrow.

Got all these fancy animations now in Ubuntu with 9.9 that I didn't have with the stock drivers..

Back to Vista, also CCC 9.9 - and I'll test this out...

12) Message boards : Number crunching : Temporary Availability of the ATI application through BOINC (Message 31230)
Posted 20 Sep 2009 by Gill..
Post:
So, it says I'm on BOINC 6.2.18 for Linux....

Just updated to CCC 9.9 in Linux (my weekend project - almost broke my install last time I tried... but I didn't need the new drivers before, with BOINC being the only thing I ran on it previously).

The newest Linux version of BOINC is 6.6.36 (Linux x64)... the link on that page for GPUs says I need 6.1.0 for ATI... which I thought wasn't out yet...

Can't find the MilkyWay app..

It (the BOINC client) has never recognized my card (4870 1 GB)... I'm dying for it to..

So am I on a different version schedule due to Linux? I just want the thing, badly.

I'm building an HTPC to cool the downstairs this winter, hopefully quad-coring it along with the GPU..

Just give me the app...let me burn my house down please. I'll be a guinea pig if you want to release it on my machine...

So how do I get this thing.... and should I update the BOINC client, or does it not really matter and I should just wait for Ubuntu to auto-update it for me?

Second question... should I just give up and get an Nvidia card for crunching? The thing is... 800 stream processors... come on... I don't want to give that up - I'd rather let it heat my house while saving the world (albeit slightly expensively).

©2024 Astroinformatics Group