Welcome to MilkyWay@home

Server Updates and Status

Message boards : Number crunching : Server Updates and Status
verstapp
Joined: 26 Jan 09
Posts: 589
Credit: 497,834,261
RAC: 0
Message 17159 - Posted: 30 Mar 2009, 11:07:16 UTC

Climate Prediction stayed up. It often has weekend outages, but this time they happened not to coincide with MilkyWay's. Plus, with its slightly longer WUs [multi-thousand hours on my o/c'ed Nehalems] we could stand a couple of days without contacting a server.
Here, on the other hand, my last-generation Radeon 3800s run out of work after an hour, which means I got a lot of Folding done over the weekend...

So... either longer WUs or more WUs per download please.

Cheers,

PeterV

.
BarryAZ
Joined: 1 Sep 08
Posts: 520
Credit: 302,524,931
RAC: 2
Message 17166 - Posted: 30 Mar 2009, 17:39:45 UTC - in response to Message 17159.  

But Einstein crashed (and is still down three days later) even though it has longer work units (8 hours or more).

Longer work units will help with the load issues (both server and I/O), but that will probably require some design work. Actually, it sounds like that is on tap for the GPU work; it just might take a little more doing. I suspect the other issue is relative resources -- my sense is that MilkyWay has fewer design/support folks than Climate or Einstein (or SETI, or Rosetta, or fill in the blank).

I think the plan to move the GPU work to a parallel stream and make its work units much longer is major good news. Hopefully MilkyWay has adequate resources to juggle that. It is certainly good news for us poor CPU-centric types who tend to fall to the bottom of the available-workunit food chain these days.


> Climate Prediction stayed up. It often has weekend outages, but this time they happened not to coincide with MilkyWay's. Plus, with its slightly longer WUs [multi-thousand hours on my o/c'ed Nehalems] we could stand a couple of days without contacting a server.
> Here, on the other hand, my last-generation Radeon 3800s run out of work after an hour, which means I got a lot of Folding done over the weekend...
>
> So... either longer WUs or more WUs per download please.


Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 17167 - Posted: 30 Mar 2009, 17:58:59 UTC

On the other hand, with the source being open, what is to prevent the continued development and use of the GPUs on the shorter tasks?

The only way to decisively move the GPU worker to the GPU work stream is to make sure that that work is always available, and that it pays more ... or ... you will have longer GPU tasks, but the GPUs will still be drawing from the CPU task pool ...
Brickhead
Joined: 20 Mar 08
Posts: 108
Credit: 2,607,924,860
RAC: 0
Message 17169 - Posted: 30 Mar 2009, 19:02:32 UTC - in response to Message 17167.  
Last modified: 30 Mar 2009, 19:04:01 UTC

> On the other hand, with the source being open, what is to prevent the continued development and use of the GPUs on the shorter tasks?
>
> The only way to decisively move the GPU worker to the GPU work stream is to make sure that that work is always available, and that it pays more ... or ... you will have longer GPU tasks, but the GPUs will still be drawing from the CPU task pool ...

Well, apparently not the *only* way, as suggested in an earlier post: http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=756&nowrap=true#17038
Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 17178 - Posted: 30 Mar 2009, 21:55:08 UTC - in response to Message 17169.  

>> On the other hand, with the source being open, what is to prevent the continued development and use of the GPUs on the shorter tasks?
>>
>> The only way to decisively move the GPU worker to the GPU work stream is to make sure that that work is always available, and that it pays more ... or ... you will have longer GPU tasks, but the GPUs will still be drawing from the CPU task pool ...
>
> Well, apparently not the *only* way, as suggested in an earlier post: http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=756&nowrap=true#17038


Well, almost but not quite.

Six months or a year from now, depending on how fast the applications are written and released, we could see several projects using ATI cards, and support for ATI GPUs is promised on a quick turnaround. With that in mind, what is to stop me from "tuning" my GPUs to pull the maximum from the CPU project using a GPU-compiled version (like the one we currently have) on multiple systems? Though that "solution" would reduce the pull, it would not necessarily deter me from continuing to pull the maximum allowed.

Which is my point: you have to make it so that it makes no sense at all to run the GPU application against the CPU-type tasks ... or the temptation will still be there ... or change the internal signatures so only non-GPU applications can work, or change ... or or or ...

My point is that with open source there is good news and bad ... in this case, the thought is to reduce the server load by creating a GPU arena with longer-running tasks ... but nothing proposed so far prevents me from pulling tasks from the CPU side and running them on a GPU ... which is exactly the behaviour the proposal is trying to stop ...
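
Purely as an illustration of the kind of gating being argued about here, a hypothetical sketch in Python (this is not BOINC scheduler code; the function and marker strings are invented for illustration): the server looks at whatever application the host reports and declines CPU-pool work when it looks like a GPU build.

```python
# Hypothetical sketch only -- not the real BOINC scheduler.
# Idea: refuse tasks from the CPU pool to hosts whose reported (possibly
# self-compiled) science app advertises itself as a GPU build.

GPU_MARKERS = ("gpu", "ati", "cuda")

def grant_cpu_work(reported_app_version: str) -> bool:
    """Return True if this host should be sent tasks from the CPU pool."""
    version = reported_app_version.lower()
    return not any(marker in version for marker in GPU_MARKERS)

print(grant_cpu_work("milkyway 0.19 (windows_x86)"))        # True  -> CPU work OK
print(grant_cpu_work("milkyway 0.19 ati_gpu (optimised)"))  # False -> GPU arena only
```

Of course, as the post says, with the source open a client can report whatever string it likes, which is exactly why a check like this is not enough on its own and the credit incentive still matters.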
Zanth
Joined: 18 Feb 09
Posts: 158
Credit: 110,699,054
RAC: 0
Message 17180 - Posted: 30 Mar 2009, 23:05:53 UTC - in response to Message 17087.  

>> My i7 can crunch more than 500 MW WUs in 24 hours...
>
> Per core?



Oh, sorry, I thought you meant like 500 total, or 500 per machine, not per core.
Thamir Ghaslan
Joined: 31 Mar 08
Posts: 61
Credit: 18,325,284
RAC: 0
Message 17196 - Posted: 31 Mar 2009, 6:39:20 UTC - in response to Message 17180.  

Has anyone been lucky enough in the past to get the full 5,000-task quota in a day?

Just averaging here, and I'm not sure how correct my calcs are, but my 4870X2 can do one task in 30 GPU seconds, forgetting CPU time and wall-clock time!

5000 * 30 seconds = 150,000 seconds.
150,000 / 60 = 2,500 minutes
2,500 / 60 = 41.6 hours.

Give or take two days for 5,000 tasks, and possibly one day if the 4870X2 uses both of its GPUs.

And give or take 130,000 to 150,000 credits for these 5,000 tasks!
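
For anyone who wants to redo that arithmetic with their own numbers, here is the same calculation as a small Python sketch (the 30 s/task and 5,000-task quota are the figures from this post; treating the 4870X2 as two GPUs is the same assumption made above):

```python
# Time needed to burn through the daily task quota on one host.

SECONDS_PER_TASK = 30     # observed GPU time per MilkyWay task
DAILY_QUOTA = 5_000       # per-day task limit quoted above

def hours_to_exhaust_quota(seconds_per_task, quota, gpus=1):
    """Hours of GPU time needed to run `quota` tasks across `gpus` devices."""
    return quota * seconds_per_task / gpus / 3600

print(hours_to_exhaust_quota(SECONDS_PER_TASK, DAILY_QUOTA))           # ~41.7 h on one GPU
print(hours_to_exhaust_quota(SECONDS_PER_TASK, DAILY_QUOTA, gpus=2))   # ~20.8 h using both halves of a 4870X2
```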
jedirock
Joined: 8 Nov 08
Posts: 178
Credit: 6,140,854
RAC: 0
Message 17197 - Posted: 31 Mar 2009, 6:54:53 UTC - in response to Message 17196.  

> 5000 * 30 seconds = 150,000 seconds.
> 150,000 / 60 = 2,500 minutes
> 2,500 / 60 = 41.6 hours.

That's also for a single-core processor; the daily quota is granted per core. For a dual-core, double it. For a quad-core, quadruple it. For an i7 (8 logical cores), 8x (octuple?). You get the picture.
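
If the daily quota really is counted per core, as suggested, the earlier sketch scales directly (the exact quota rule lives on the server; this just multiplies the numbers out):

```python
# Scale the per-core daily quota by the number of visible CPU cores.

SECONDS_PER_TASK = 30
PER_CORE_QUOTA = 5_000

for cores in (1, 2, 4, 8):                      # single, dual, quad, i7 with HT
    quota = PER_CORE_QUOTA * cores
    hours = quota * SECONDS_PER_TASK / 3600
    print(f"{cores} core(s): {quota:>6} tasks/day -> {hours:6.1f} h of single-GPU time to use them all")
```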
verstapp
Joined: 26 Jan 09
Posts: 589
Credit: 497,834,261
RAC: 0
Message 17203 - Posted: 31 Mar 2009, 9:54:49 UTC
Last modified: 31 Mar 2009, 9:57:50 UTC

5,000 rocks, easily - each of my 3800s gets about 17,000/day.
5,000 WUs, never. Though I'd get a lot closer than I am at the moment if, when one of my PCs asked for WUs, it actually got some. :)

Hopefully the bigger WU plan will help with this.

Sorry for the triple post. Not only is this board very slow (I just kept clicking Submit until it was finally accepted), it also doesn't allow me to delete my own posts.
Cheers,

PeterV

.
Thamir Ghaslan
Joined: 31 Mar 08
Posts: 61
Credit: 18,325,284
RAC: 0
Message 17207 - Posted: 31 Mar 2009, 12:12:25 UTC - in response to Message 17203.  

> 5,000 rocks, easily - each of my 3800s gets about 17,000/day.
> 5,000 WUs, never. Though I'd get a lot closer than I am at the moment if, when one of my PCs asked for WUs, it actually got some. :)
>
> Hopefully the bigger WU plan will help with this.
>
> Sorry for the triple post. Not only is this board very slow (I just kept clicking Submit until it was finally accepted), it also doesn't allow me to delete my own posts.


According to: http://boincstats.com/stats/user_graph.php?pr=milkyway&id=3578

these are my best five days:

Date                 Credit
2009-03-15 15:59:18  38,811
2009-03-16 16:01:47  29,782
2009-03-27 16:02:31  23,771
2009-03-25 16:01:12  17,095
2009-03-14 16:00:58  15,774

Averaging across low- and high-credit work units, that's around 1,400 to 1,500 tasks, which took about 6 hours of GPU time.
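
As a rough cross-check, those figures hang together with the 30 s/task and the 130,000-150,000 credits per 5,000 tasks numbers from the earlier post, if both GPUs of the 4870X2 were working (a back-of-envelope sketch; the per-task credit value is the low end of that earlier estimate, not a measured number):

```python
# Back-of-envelope check of the best day above.

best_day_credit = 38_811
credits_per_task = 130_000 / 5_000   # ~26, low end of the earlier 130k-150k per 5,000 tasks estimate
seconds_per_task = 30
gpus = 2                              # both halves of the 4870X2

tasks = best_day_credit / credits_per_task
gpu_hours = tasks * seconds_per_task / gpus / 3600
print(f"~{tasks:.0f} tasks in ~{gpu_hours:.1f} h of GPU time")   # ~1493 tasks in ~6.2 h
```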
borandi
Joined: 21 Feb 09
Posts: 180
Credit: 27,806,824
RAC: 0
Message 17211 - Posted: 31 Mar 2009, 14:31:51 UTC

My 4850 (OC'ed to 680/1050 from 625/1000) churned out 20k credits in 9 hours, which makes ~53k/day if it keeps churning at that speed (that said, it only just got work again after an hour without any). 53k/day at ~30 credits/WU works out to around 1,700 WUs/day.
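
The same arithmetic, written out (the ~30 credits/WU figure is the one assumed in the post):

```python
# Extrapolate a 9-hour run to a full day, then convert credits to work units.

credits_earned, hours_run = 20_000, 9
credits_per_wu = 30

credits_per_day = credits_earned * 24 / hours_run
wus_per_day = credits_per_day / credits_per_wu
print(f"~{credits_per_day:,.0f} credits/day, ~{wus_per_day:,.0f} WUs/day")   # ~53,333 credits/day, ~1,778 WUs/day
```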
Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 17226 - Posted: 31 Mar 2009, 23:04:41 UTC - in response to Message 17211.  

Just a heads up on our situation: we should be meeting with lab staff this week to set up the milkyway_gpu project (it'll be at http://milkyway.cs.rpi.edu/milkyway_gpu and perhaps milkyway_gpu.cs.rpi.edu).

Once that's up and running, I should have a preliminary application and code out there shortly after for people to use with it.
caferace
Joined: 4 Aug 08
Posts: 46
Credit: 8,255,900
RAC: 0
Message 17231 - Posted: 1 Apr 2009, 0:25:40 UTC - in response to Message 17226.  

> Just a heads up on our situation: we should be meeting with lab staff this week to set up the milkyway_gpu project (it'll be at http://milkyway.cs.rpi.edu/milkyway_gpu and perhaps milkyway_gpu.cs.rpi.edu).
>
> Once that's up and running, I should have a preliminary application and code out there shortly after for people to use with it.

Excellent. Looking forward to some looooong (like 30+ minute clock time) GPU wu's. :)

-jim
Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 17232 - Posted: 1 Apr 2009, 0:31:18 UTC - in response to Message 17226.  

> Just a heads up on our situation: we should be meeting with lab staff this week to set up the milkyway_gpu project (it'll be at http://milkyway.cs.rpi.edu/milkyway_gpu and perhaps milkyway_gpu.cs.rpi.edu).
>
> Once that's up and running, I should have a preliminary application and code out there shortly after for people to use with it.



Any statement on how the project will run afterwards? Will there be one project or two, for those of us that run both CPU and GPU apps?
From the URL change it appears that we will have to attach to a different project?

How are the stats going to be held -- as a single project, or as two?

.
Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 17239 - Posted: 1 Apr 2009, 2:15:53 UTC - in response to Message 17232.  

We'll be running them as separate projects, with separate top lists and all that.

I think this is the best approach so we can have lists of our best CPU and our best GPU crunchers. It will also easily allow users with both GPUs and CPUs to connect to both and (hopefully) run both at the same time -- at least that's our goal.
The Gas Giant
Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 17240 - Posted: 1 Apr 2009, 2:33:10 UTC

Will BOINC recognise the GPU, which it currently doesn't?

Will it be able to schedule 4 MW CPU tasks on my quad as well as 1 MW task on my GPU? I know that is a goal.

Will people be able to work around the CPU project and use their GPU on it via a 3rd-party hack?

Do you need alpha/beta testers?
Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 17241 - Posted: 1 Apr 2009, 2:38:57 UTC - in response to Message 17240.  

> Will BOINC recognise the GPU, which it currently doesn't?
>
> Will it be able to schedule 4 MW CPU tasks on my quad as well as 1 MW task on my GPU? I know that is a goal.

That's the goal, but I think it might have to be implemented by the BOINC people (as opposed to us), because it will involve some changes to the BOINC client.

> Will people be able to work around the CPU project and use their GPU on it via a 3rd-party hack?

I hope no one would do this. The whole point is so that everyone can have work available. There shouldn't be any difference in the work-crunched-to-credit ratio between projects, so I think it will be in everyone's best interest for the GPU applications to run on the milkyway_gpu project. Once we make the swap over, as some additional incentive, we won't be awarding credit here for work crunched by the GPU applications.

> Do you need alpha/beta testers?

I'm sure we will :) Need to get the site up and running first though, lol.

Kevint
Joined: 22 Nov 07
Posts: 285
Credit: 1,076,786,368
RAC: 0
Message 17245 - Posted: 1 Apr 2009, 4:09:35 UTC - in response to Message 17239.  

> We'll be running them as separate projects, with separate top lists and all that.
>
> I think this is the best approach so we can have lists of our best CPU and our best GPU crunchers. It will also easily allow users with both GPUs and CPUs to connect to both and (hopefully) run both at the same time -- at least that's our goal.



Are you sick of these questions yet?


How are you going to handle the current credit database? Freeze it and start over with the new projects, or move existing credit over? If the latter, how are you going to determine what gets moved to which project? E.g., I have approx. 25M on CPU and 25M on GPU.



.