Message boards : Number crunching : MilkyWay_GPU
Joined: 11 Nov 07 · Posts: 232 · Credit: 178,229,009 · RAC: 0
" April 12, 2009 So I've been working on making the new project specifically for GPUs. This is will alleviate the no work situation because we will be giving much longer workunits to GPU clients. However, I've discovered a bug in a python script that was made when compiling. I need labstaff to give me permission to overwrite the file, but am not sure they work on weekends. So hopefully by early next week we can have a new project up! " This was 10 days ago. What's up? are you reinventing the wheel? It would be nice with an update. |
Joined: 29 Jan 09 · Posts: 32 · Credit: 1,962,668 · RAC: 0
Martin Chartrand: The 8800GT should be supported once MilkyWay adds CUDA support; the 8800GT is on the BOINC/NVIDIA list of GPUs that support CUDA. And as previously stated, you can put in a larger GPU; it will just take up more room inside your computer. A dual-slot card does not require two PCI-E slots to connect to the motherboard; it only takes up the same physical space as two single-slot GPUs would.
Joined: 27 Feb 09 · Posts: 45 · Credit: 305,963 · RAC: 0
"This was 10 days ago. What's up? Are you reinventing the wheel?"

At least Travis and Dave provide decent updates on what is going on, which is far better than some of the other, bigger projects.

Mars rules this confectionery war!
Joined: 11 Nov 07 · Posts: 232 · Credit: 178,229,009 · RAC: 0
"At least Travis and Dave provide decent updates on what is going on, which is far better than some of the other, bigger projects."

It's not under 'NEWS' on the front page, so where is it?
Joined: 21 Feb 09 · Posts: 180 · Credit: 27,806,824 · RAC: 0
Martin Chartrand: Only cards with CUDA Compute Capability 1.3 offer double-precision calculations (from the CUDA Programming Guide 2.1, Appendix A, page 81). That means the GTX 260 and up, the Tesla S1070/C1060, the Quadro Plex 2200 D2, and the Quadro FX 5800. CUDA has revisions like most hardware/software, so it's a case of making sure you have the right stuff. There may be a way to emulate double-precision calculations, but it'll be a big performance hit.
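If you want to verify what a particular card actually reports before committing it to a project, the standard CUDA runtime device query will tell you. A minimal sketch (generic CUDA API usage, nothing MilkyWay-specific; assumes the CUDA toolkit is installed):

```c
// query_dp.cu - list CUDA devices and flag hardware double-precision support.
// Build with: nvcc -o query_dp query_dp.cu
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable devices found.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Compute capability 1.3 is the first revision with
        // hardware double precision (GT200-class and later).
        int has_dp = (prop.major > 1) ||
                     (prop.major == 1 && prop.minor >= 3);
        printf("Device %d: %s, compute %d.%d, double precision: %s\n",
               i, prop.name, prop.major, prop.minor,
               has_dp ? "yes" : "no");
    }
    return 0;
}
```

A G80/G92-era card like the 8800 series will report compute 1.0 or 1.1 here: fine for single-precision CUDA projects, but not for double-precision work.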
Joined: 12 Nov 07 · Posts: 2425 · Credit: 524,164 · RAC: 0
"This was 10 days ago. What's up? Are you reinventing the wheel?"

It seems to come in spurts.

Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 4 Oct 08 · Posts: 1734 · Credit: 64,228,409 · RAC: 0
With this CUDA version of MW coming along some time soon (?), the decision to install an ATI HD card in my quad is being put off. ATM it looks like a toss-up between a working system with an HD4850, or an nVidia GTX265 or some such for a system yet to be built. Ho! Hum! If only we knew the best way to jump!

Go away, I was asleep
Joined: 28 Apr 08 · Posts: 1415 · Credit: 2,716,428 · RAC: 0
I've got a 9500GT, and I don't think it's going to make the grade, so I'll have to wait till fall to get a better computer. There would be too much involved in upgrading what I have (new power supply, etc.) for a GTX 265; I only have a 300 watt supply now. The 9500 runs SETI Beta really well, though.
Joined: 27 Feb 09 · Posts: 41 · Credit: 123,828 · RAC: 0
Build your own system like I plan to... an i7 system for me (looks like $1,600 US right now).
Joined: 4 Oct 08 · Posts: 1734 · Credit: 64,228,409 · RAC: 0
Both my newer PCs have 3 PCI-E slots (2 x 16) on their MoBos, and fortunately I have a 450W PSU in one and a 550W PSU in the other, so the 120W additional draw of an HD4850 or 4870 can be accommodated OK. Not sure what a GTX265+ needs, but I think I will be OK there as well.

Go away, I was asleep
Joined: 1 Sep 08 · Posts: 520 · Credit: 302,538,504 · RAC: 6
"I have a 450W PSU in one and a 550W PSU in the other, so the 120W additional draw of an HD4850 or 4870 can be accommodated OK. Not sure what a GTX265+ needs, but I think I will be OK there as well."

It depends on how real that 450W rating is; with a top-line 450W PSU, that should be OK. Make sure they have the appropriate power connector as well, and of course that the case configuration can handle both the heat and the size of the video card. The 48xx series cards generate a fair amount of heat, and they tend not only to be 'double wides' but are also rather long and can block access to connectors on the front of the MB.
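For anyone doing this kind of back-of-the-envelope PSU check, a tiny sketch of the arithmetic; the component draws below are illustrative assumptions for a quad-core box, not measured figures:

```c
// psu_headroom.c - rough PSU headroom estimate (illustrative numbers only).
#include <stdio.h>

int main(void)
{
    /* Assumed (not measured) draws in watts. */
    int cpu    = 125;  /* quad-core under full load */
    int board  = 60;   /* motherboard, RAM, fans    */
    int drives = 30;   /* disks and optical         */
    int gpu    = 120;  /* the extra draw quoted above for an HD4850/4870 */
    int psu    = 450;  /* rated PSU output          */

    int load = cpu + board + drives + gpu;
    printf("Estimated load: %dW of %dW (%.0f%%)\n",
           load, psu, 100.0 * load / psu);
    /* A common rule of thumb is to stay under ~80% of the rating,
       and that assumes the rating is honest in the first place. */
    printf("80%% ceiling: %dW\n", psu * 8 / 10);
    return 0;
}
```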
Joined: 22 Feb 09 · Posts: 6 · Credit: 25,439,032 · RAC: 0
" http://milkyway.cs.rpi.edu/milkyway_gpu/ the website is now online and running and with a news of 16 april |
Joined: 29 Aug 07 · Posts: 327 · Credit: 116,463,193 · RAC: 0
" And that is the last update we have gotten. ![]() Calm Chaos Forum...Join Calm Chaos Now |
Joined: 25 Mar 09 · Posts: 65 · Credit: 53,099,671 · RAC: 0
Yes! Some of those cards are extremely hungry for power. I have a computer downstairs that is solely for crunching and testing new applications on a virtual computer. I have a 4870 in it and a Thermaltake 650 watt PSU. It is more than enough. I would think 500 watts would be the minimum requirement; 600 watts would be enough. And those cards are big.

On my gaming console I have a GTX285 and again a Thermaltake 650 watt PSU. It also crunches. My GTX285 is for sale if anyone is interested (box and everything). I just got it a month ago, but I would like to acquire the GTX295. I paid $550 Canadian for it. I also have an 8800GTX for sale that used to be in this gaming console.

A note here... I saw in the "table" of compatible GPUs (in another forum) that the 8800GTX has a G80 core, making it impossible to crunch with CUDA technology. But about a week before I changed it, I was crunching 5 WUs through SETI@home: 4 on the processor and 1 on CUDA. Can anyone explain this?

My gaming console specs can be found @ www.getnatural.ca/syssum.jpg
As well, check out how my Thermaltake V1 fan keeps my CPU cool under 100% load @ www.getnatural.ca/boinc.jpg
Only the GPU crunching @ www.getnatural.ca/gpu.jpg
Pic 1 of crunching console @ www.getnatural.ca/IMG00003.jpg
Pic 2 of crunching console @ www.getnatural.ca/IMG00004.jpg
Pic 1 of gaming console @ www.getnatural.ca/IMG00005.jpg
Pic 2 of gaming console @ www.getnatural.ca/IMG00006.jpg
Pic 3 of gaming console @ www.getnatural.ca/IMG00007.jpg (note how close the GTX285 is to the RAM)
Pic of me playing racquetball at the Canadian Racquetball Championship last year @ www.getnatural.ca/Racquetball.jpg (so now there's a picture to my post :))

Martin Chartrand
Joined: 11 Nov 07 · Posts: 232 · Credit: 178,229,009 · RAC: 0
Hmm... yes. A lot of info there regarding the new GPU apps. ;)
Joined: 1 Sep 08 · Posts: 204 · Credit: 219,354,537 · RAC: 0
"Something to consider: a 4870 at 30 s/WU and ~100 W needs 3 kJ to crunch a WU. My Phenom 9850 with the optimized app needs about 20 min/WU. At ~100 W that's 120 kJ, forty times as much energy. Do you still think it's a good idea to run MW on CPUs?"

Hi Brian, I think you're quite wrong about my "agenda". I know the project staff are never going to do what I suggested in the first place, so this doesn't really matter. My point is to convince people that running MW on CPUs is much less efficient than running it on GPUs. Cluster Physik said it perfectly; no need to repeat it again.

You assume the only reason we're saying this is that we have GPUs and want more credit. That's wrong. I'd still say CPU MW is a waste now, even if I didn't have a GPU. Buying this GPU was about personal gain (and for my team), but this discussion is not personal; it's about the bigger scheme of things.

We'll be getting MW GPU now. It may still take many weeks, but in a finite time we should have enough WUs for everyone, so there's no point arguing about the few current WUs.

Let's assume GPU-Grid ported their client to CPUs. It would work easily, but the CPUs would be much slower than the GPUs. Now imagine that, due to some coincidence, even the "slow" CPU version got you more credits per unit time than any other BOINC project. Would you run it? Withdraw your CPUs from other projects, ones that GPUs cannot take over? No one except the project staff would be allowed to forbid you from doing so, and no one could blame you for going the route of maximum personal gain. But would it be right to do so? Efficient, for the benefit of science? I don't think so. That's what I want people to think about.

"As I said some time ago, this isn't about 'science'. The project has made the determination that both tasks are worthwhile ventures. It is time the 'GPU guys' accept that..."

You forget one thing: the GPUs could do the new and the old searches more efficiently than CPUs.

MrS

(PS: sorry for forgetting about the other 3 cores... now that was dumb)

Scanning for our furry friends since Jan 2002
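The energy arithmetic in that comparison is easy to verify; a quick sketch using the figures quoted above (30 s versus 20 min per WU at roughly 100 W either way; those are the poster's estimates, not measurements):

```c
// wu_energy.c - energy per workunit from the figures quoted above.
#include <stdio.h>

int main(void)
{
    double watts       = 100.0;        /* rough draw for either device   */
    double gpu_seconds = 30.0;         /* HD4870: ~30 s per WU (quoted)  */
    double cpu_seconds = 20.0 * 60.0;  /* Phenom 9850: ~20 min per WU    */

    double gpu_kj = watts * gpu_seconds / 1000.0;  /* energy = power x time */
    double cpu_kj = watts * cpu_seconds / 1000.0;

    printf("GPU: %.0f kJ/WU, CPU: %.0f kJ/WU, ratio: %.0fx\n",
           gpu_kj, cpu_kj, cpu_kj / gpu_kj);
    /* -> GPU: 3 kJ/WU, CPU: 120 kJ/WU, ratio: 40x */
    return 0;
}
```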
Joined: 12 Apr 08 · Posts: 621 · Credit: 161,934,067 · RAC: 0
"You forget one thing: the GPUs could do the new and the old searches more efficiently than CPUs."

You missed the flip side of the coin: capacity. If I am running by the definition of efficiency you have noted, then you are correct that GPU processing is more efficient. But I have an idle core... is it not wise to put it to use, even if it is not the most efficient way to get the job done?

I have both CPUs and GPUs, and as projects come online with GPU work I will be allocating those GPU resources to obtain the "best" mix of work that pleases me. But I am not going to withhold my CPUs from this project just because I have a GPU that can beat them... maybe adjust the proportion, or the resource share...

But it is not always a simple answer as to what is the most efficient way to get from here to there.