MilkyWay_GPU

Simplex0
Message 19982 - Posted: 22 Apr 2009, 9:37:02 UTC

"
April 12, 2009
So I've been working on making the new project specifically for GPUs. This is will alleviate the no work situation because we will be giving much longer workunits to GPU clients. However, I've discovered a bug in a python script that was made when compiling. I need labstaff to give me permission to overwrite the file, but am not sure they work on weekends. So hopefully by early next week we can have a new project up!
"


This was 10 days ago. What's up? are you reinventing the wheel?
It would be nice with an update.
Alberto Tanikawa
Message 19997 - Posted: 22 Apr 2009, 13:05:13 UTC - in response to Message 19833.  

Martin Chartrand

I know it's "outdated"; I didn't ask about that. I asked if the project will SUPPORT it.

I went with the 8800GT because my mobo looks like it only supports a single PCI-E card.


The 8800GT should be supported once MilkyWay adds CUDA support; it is on the BOINC/NVIDIA list of GPUs that support CUDA. And as previously stated, you can put a larger card in: it will just take up more room inside your computer. A double-wide card doesn't need two PCI-E slots to connect to the motherboard; it only occupies the physical space of two single-slot GPUs.
SATAN
Message 20003 - Posted: 22 Apr 2009, 13:53:57 UTC - in response to Message 19982.  

This was ten days ago. What's up? Are you reinventing the wheel?
An update would be nice.


At least Travis and Dave provide decent updates on what is going on, which is far better than some of the other, bigger projects.
Mars rules this confectionery war!
Simplex0
Message 20005 - Posted: 22 Apr 2009, 14:53:58 UTC - in response to Message 20003.  

At least Travis and Dave provide decent updates on what is going on, which is far better than some of the other, bigger projects.


It's not under 'NEWS' on the front page, so where is it?
borandi
Message 20006 - Posted: 22 Apr 2009, 14:56:02 UTC - in response to Message 19997.  

Martin Chartrand

I know it's "outdated"; I didn't ask about that. I asked if the project will SUPPORT it.

I went with the 8800GT because my mobo looks like it only supports a single PCI-E card.


The 8800GT should be supported once MilkyWay adds CUDA support; it is on the BOINC/NVIDIA list of GPUs that support CUDA. And as previously stated, you can put a larger card in: it will just take up more room inside your computer. A double-wide card doesn't need two PCI-E slots to connect to the motherboard; it only occupies the physical space of two single-slot GPUs.


Only cards with CUDA compute capability 1.3 offer double-precision calculations (see CUDA Programming Guide 2.1, Appendix A, page 81). That means the GTX 260 and above, the Tesla S1070/C1060, the Quadro Plex 2200 D2, and the Quadro FX 5800.

CUDA has revisions like most hardware/software, so it's a case of making sure you have the right stuff. There may be a way to emulate double-precision calculations, but it would come with a big performance hit.
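
A minimal sketch of how a host program can check this itself with the standard CUDA runtime API (cudaGetDeviceCount and cudaGetDeviceProperties; the output wording here is just illustrative):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("No CUDA-capable device found.\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* Compute capability 1.3 or higher is needed for native double precision. */
            int has_double = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
            printf("Device %d: %s (compute %d.%d), double precision: %s\n",
                   i, prop.name, prop.major, prop.minor, has_double ? "yes" : "no");
        }
        return 0;
    }

An 8800GT reports compute capability 1.1 (and the 8800GTX's G80 reports 1.0), so both would print "no" here even though they happily run single-precision CUDA work such as SETI@home.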


banditwolf
Message 20014 - Posted: 22 Apr 2009, 15:47:46 UTC - in response to Message 20003.  

This was ten days ago. What's up? Are you reinventing the wheel?
An update would be nice.


At least Travis and Dave provide decent updates on what is going on, which is far better than some of the other, bigger projects.


It seems to come in spurts.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
John Clark
Message 20015 - Posted: 22 Apr 2009, 15:48:38 UTC
Last modified: 22 Apr 2009, 15:49:43 UTC

With this CUDA version of MW coming along sometime soon (?), the decision to install an ATI HD card in my quad is being put off. ATM it looks like a toss-up between a working system with an HD4850, or getting an NVIDIA GTX265 or some such for a system yet to be implemented.

Ho! Hum!

If only we knew the best way to jump?
Go away, I was asleep


Bruce
Message 20019 - Posted: 22 Apr 2009, 15:55:16 UTC - in response to Message 20015.  
Last modified: 22 Apr 2009, 15:56:35 UTC

I've got a 9500GT, and I don't think it's going to make the grade, so I'll have to wait till fall to get a better computer. There would be too much involved in upgrading what I have (a new power supply, etc.) for a GTX 265; I only have a 300-watt supply now.
The 9500 runs SETI Beta really well, though.
zpm
Message 20021 - Posted: 22 Apr 2009, 16:13:00 UTC - in response to Message 20019.  

Build your own system like I plan to... an i7 system for me (looks like $1,600 US right now).
John Clark
Message 20022 - Posted: 22 Apr 2009, 16:13:06 UTC

Both my newer PCs have 3 PCI-E slots (2 x16) on their mobos, and, fortunately, I have a 450W PSU in one and a 550W PSU in the other. So the 120W additional draw of an HD4850 or 4870 can be accommodated OK. Not sure what a GTX265+ needs, but I think I will be OK there as well.
Go away, I was asleep


BarryAZ
Message 20026 - Posted: 22 Apr 2009, 16:33:32 UTC - in response to Message 20022.  

It depends on how real that 450W rating is. With a top-line 450W PSU, that should be OK.
Make sure they have the appropriate power connectors as well, and of course that the case configuration can handle both the heat and the size of the video card. The 48xx-series cards do generate a fair amount of heat, and they tend not only to be 'double-wides' but are also rather long, and can block access to connectors on the front of the MB.

I have a 450W PSU in one and a 550W PSU in the other. So the 120W additional draw of an HD4850 or 4870 can be accommodated OK. Not sure what a GTX265+ needs, but I think I will be OK there as well.


[HWU]Flotta Stellare - Starfle...
Message 20045 - Posted: 22 Apr 2009, 19:17:31 UTC - in response to Message 19982.  

"
April 12, 2009
So I've been working on making the new project specifically for GPUs. This is will alleviate the no work situation because we will be giving much longer workunits to GPU clients. However, I've discovered a bug in a python script that was made when compiling. I need labstaff to give me permission to overwrite the file, but am not sure they work on weekends. So hopefully by early next week we can have a new project up!
"


This was 10 days ago. What's up? are you reinventing the wheel?
It would be nice with an update.

http://milkyway.cs.rpi.edu/milkyway_gpu/ the website is now online and running and with a news of 16 april
Labbie
Message 20061 - Posted: 22 Apr 2009, 20:26:42 UTC - in response to Message 20045.  

"
April 12, 2009
So I've been working on making the new project specifically for GPUs. This is will alleviate the no work situation because we will be giving much longer workunits to GPU clients. However, I've discovered a bug in a python script that was made when compiling. I need labstaff to give me permission to overwrite the file, but am not sure they work on weekends. So hopefully by early next week we can have a new project up!
"


This was 10 days ago. What's up? are you reinventing the wheel?
It would be nice with an update.

http://milkyway.cs.rpi.edu/milkyway_gpu/ the website is now online and running and with a news of 16 april


And that is the last update we have gotten.



Calm Chaos Forum...Join Calm Chaos Now
Martin Chartrand
Message 20077 - Posted: 22 Apr 2009, 21:33:19 UTC - in response to Message 20063.  

Yes! Some of those cards are extremely power-hungry.
I have a computer downstairs that is solely for crunching and for testing new applications on a virtual machine.
I have a 4870 in it and a Thermaltake 650-watt PSU. It is more than enough.
I would think 500 watts would be the minimum requirement; 600 watts would be enough.
And those cards are big.
On my gaming console I have a GTX285 and, again, a Thermaltake 650-watt PSU.
It also crunches.
My GTX285 is for sale if anyone is interested (box and everything). I just got it a month ago, but I would like to acquire the GTX295.
I paid $550 Canadian for it.
I also have an 8800GTX for sale that used to be in this gaming console.

A note here...

I saw in the "table" of compatible GPUs (in another forum) that the 8800GTX has a G80 core, making it impossible to crunch with CUDA technology.
But about a week before I changed it, I was crunching 5 WUs through SETI@home:
4 on the processor and 1 on CUDA.
Can anyone explain this?
My gaming console specs can be found @ www.getnatural.ca/syssum.jpg
Also check out how my Thermaltake V1 fan keeps my CPU cool under 100% load @
www.getnatural.ca/boinc.jpg
Only the GPU crunching @ www.getnatural.ca/gpu.jpg
Pic 1 of crunching console @ www.getnatural.ca/IMG00003.jpg
Pic 2 of crunching console @ www.getnatural.ca/IMG00004.jpg
Pic 1 of gaming console @ www.getnatural.ca/IMG00005.jpg
Pic 2 of gaming console @ www.getnatural.ca/IMG00006.jpg
Pic 3 of gaming console @ www.getnatural.ca/IMG00007.jpg (note how close the GTX285 is to the RAM)
Pic of me playing racquetball at the Canadian Racquetball Championship last year @ www.getnatural.ca/Racquetball.jpg (so now there's a picture to go with my post :))

Martin Chartrand
Simplex0
Message 20080 - Posted: 22 Apr 2009, 21:47:48 UTC - in response to Message 20045.  


The website at http://milkyway.cs.rpi.edu/milkyway_gpu/ is now online and running, with a news item dated 16 April.


Hmm... yes. A lot of info there regarding the new GPU apps. ;)
ExtraTerrestrial Apes
Message 20082 - Posted: 22 Apr 2009, 21:58:58 UTC - in response to Message 19623.  

Something to consider: a 4870 at 30 s/WU and ~100 W needs 3 kJ to crunch a WU. My Phenom 9850 with the optimized app needs about 20 min/WU. At ~100 W that's 120 kJ, forty times as much energy. Do you still think it's a good idea to run MW on CPUs?

MrS

Encourage the project to throw out everyone except people who have GPUs and only have a single project... that's your agenda, right?

Even if Mr. Spadge's numbers are a bit flawed (a quad will complete 4 WUs in that time, and most quad-cores consume a bit less than an HD4870, so the advantage is not 40x but only a factor of 8 or so), it is very true that GPUs are far more economical to use. One simply wastes energy with a CPU here.
Frankly, CPUs can be put to better use at other projects, where GPUs can't help. Really.


See, that's what rubs me the wrong way about "GPU guys", as claimed by other people. This is the same schtick that came out of the other threads. Toss the people with CPUs out so there are more workunits for those of us "in the club"...

That kind of attitude just isn't happening in reverse. The project offered a totally new project for GPUs. There is a segment of GPU users who seem to accept the project's idea of having GPUs do NEW WORK that is more sensitive and could not be done by CPUs in sufficient time, but there is another segment that doesn't like the perception that they have to start over with their stats.


Hi Brian,

I think you're quite wrong about my "agenda". I know the project staff is never going to do what I suggested in the first place, so this doesn't really matter. My point is to convince people that running MW on CPUs is much less efficient than running it on GPUs. Cluster Physik said it perfectly; no need to repeat it.

You assume the only reason we're saying this is because we have GPUs and want more credit. That's wrong. I'd still say CPU-MW is a waste now, even if I didn't have a GPU. Buying this GPU was about personal gain (and for my team), but this discussion is not personal. It's about the bigger scheme of things.

We'll be getting MW-GPU now. It may still take many weeks, but in a finite time we should get enough WUs for everyone. There's no point arguing about the few current WUs.

Let's assume GPU-Grid ported their client to CPUs. It would work easily, but they'd be much slower than GPUs. Now imagine that, due to some coincidence, even the "slow" CPU version would get you more credits per unit time than any other BOINC project. Would you run it? Withdraw your CPUs from other projects, which cannot be done by GPUs?

No one except project staff would be allowed to forbid you to do so. No one could blame you for going the route of maximum personal gain. But would it be right to do so? Efficient, for the benefit of science? I don't think so. That's what I want people to think about.

As I said some time ago, this isn't about "science". The project has made the determination that both tasks are worthwhile ventures. It is time the "GPU guys" accept that...


You forget one thing: the GPUs could do the new and the old searches more efficiently than CPUs.

MrS

(PS: sorry for forgetting about the other 3 cores... now that was dumb)
Scanning for our furry friends since Jan 2002
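
To make the corrected arithmetic above explicit (using the round ~100 W figures assumed in the posts), energy per workunit is just power times time, with the quad's draw spread over four WUs running in parallel:

\[
E_{\text{GPU}} = 100\,\mathrm{W} \times 30\,\mathrm{s} = 3\,\mathrm{kJ/WU}, \qquad
E_{\text{CPU}} = \frac{100\,\mathrm{W} \times 1200\,\mathrm{s}}{4\,\text{WUs}} = 30\,\mathrm{kJ/WU},
\]

so the ratio is about 10x, and somewhat less if the quad actually draws under 100 W, which is where the "factor of 8 or so" comes from.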
Paul D. Buck
Message 20089 - Posted: 22 Apr 2009, 22:34:00 UTC - in response to Message 20082.  

You forget one thing: the GPUs could do the new and the old searches more efficiently than CPUs.

You missed the flip side of the coin: capacity.

If I use only the definition of efficiency you have noted, then you are correct that GPU processing is more efficient. But I have an idle core... is it not wise to put it to use, even if that is not the most efficient way to get the job done?

I have both CPUs and GPUs, and as projects come online with GPU work I will be allocating those GPU resources to obtain the "best" mix of work that pleases me. But I am not going to withhold my CPUs from this project just because I have a GPU that can beat them... maybe I'll adjust the proportion, or the resource share...

But there is not always a simple answer as to the most efficient way to get from here to there.
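
For anyone unfamiliar with the knob Paul mentions: BOINC's per-project resource share divides a host's computing resources in proportion to each attached project's share. A quick worked example (the share values here are made up for illustration):

\[
\text{share}_{\text{MW}} = 300, \quad \text{share}_{\text{other}} = 100
\;\Rightarrow\;
\frac{300}{300+100} = 75\% \text{ of resources to MilkyWay, } 25\% \text{ to the other project.}
\]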