Welcome to MilkyWay@home

GPU app teaser



Message boards : Application Code Discussion : GPU app teaser

Previous · 1 · 2 · 3 · 4 · 5 · 6 · 7 · 8 . . . 19 · Next

banditwolf

Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 10915 - Posted: 15 Feb 2009, 17:42:52 UTC

Probably a silly question, but why can these particular graphics cards do their own work for projects while older cards can't?
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 10915
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 10929 - Posted: 15 Feb 2009, 21:22:20 UTC - in response to Message 10915.  

Probably a silly question, but why can these particular graphics cards do their own work for projects while older cards can't?

Because the older cards can only do single precision calculations (32-bit), and the HD38x0 and HD48x0 are the only ones (besides Nvidia's GTX 2xx series) that can handle double precision (64-bit).
ID: 10929
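To make the precision gap concrete, here is a minimal Python sketch (illustrative only, not MilkyWay@home code): the same sequential running total, carried once in 32-bit floats and once in 64-bit.

```python
# Illustrative sketch of why double precision hardware matters:
# sequentially summing 0.1 a million times in float32 vs float64.
import numpy as np

n = 1_000_000
vals32 = np.full(n, 0.1, dtype=np.float32)
vals64 = np.full(n, 0.1, dtype=np.float64)

# cumsum accumulates one element at a time in the array's own precision,
# mimicking a running total kept on the GPU
sum32 = vals32.cumsum()[-1]   # drifts well away from the exact 100000
sum64 = vals64.cumsum()[-1]   # stays extremely close to 100000

err32 = abs(sum32 - 100000.0) / 100000.0
err64 = abs(sum64 - 100000.0) / 100000.0
print(f"float32 relative error: {err32:.2e}")
print(f"float64 relative error: {err64:.2e}")
```

The single precision total is off by orders of magnitude more than the double precision one, which is the kind of gap the strict result limits on the test WUs cannot tolerate.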
Cori

Joined: 27 Aug 07
Posts: 647
Credit: 27,592,547
RAC: 0
Message 10931 - Posted: 15 Feb 2009, 21:30:31 UTC - in response to Message 10839.  

I reinstalled BOINC this morning with default values, and now all seems to work; some freezes on the screen, but some tuning is needed, I suppose. Thanks for your support

Isn't the default the protected mode?...

No, by default protected mode is off; you have to choose the "Protected application execution" option separately.
Lovely greetings, Cori
ID: 10931
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 10932 - Posted: 15 Feb 2009, 21:42:34 UTC - in response to Message 10931.  

I reinstalled BOINC this morning with default values, and now all seems to work; some freezes on the screen, but some tuning is needed, I suppose. Thanks for your support

Isn't the default the protected mode?...

No, by default protected mode is off; you have to choose the "Protected application execution" option separately.

Oh, I guess they have changed it because of the CUDA apps for SETI *lol*
ID: 10932
Cori

Joined: 27 Aug 07
Posts: 647
Credit: 27,592,547
RAC: 0
Message 10933 - Posted: 15 Feb 2009, 22:09:36 UTC - in response to Message 10932.  

I reinstalled BOINC this morning with default values, and now all seems to work; some freezes on the screen, but some tuning is needed, I suppose. Thanks for your support

Isn't the default the protected mode?...

No, by default protected mode is off; you have to choose the "Protected application execution" option separately.

Oh, I guess they have changed it because of the CUDA apps for SETI *lol*

Hehe, I think it was disabled by default from the very beginning.
I tried it once when it was newly added, and it didn't convince me too much. *grin*
So I was glad I didn't have to un-check that option every time I upgraded BOINC. :-D
Lovely greetings, Cori
ID: 10933
Neal Chantrill

Joined: 17 Jan 09
Posts: 98
Credit: 72,182,367
RAC: 0
Message 10934 - Posted: 15 Feb 2009, 22:28:01 UTC

Sorry to sound stupid, but is a 9700 All-In-Wonder too old to use this application?

Thanks in advance.

According to GPU-Z the GPU is an R300
ID: 10934
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 10937 - Posted: 15 Feb 2009, 22:35:14 UTC - in response to Message 10934.  

Sorry to sound stupid, but is a 9700 All-In-Wonder too old to use this application?

Thanks in advance.

According to GPU-Z the GPU is an R300

Sorry, but it's some years too old.
ID: 10937
banditwolf

Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 10944 - Posted: 15 Feb 2009, 22:43:47 UTC - in response to Message 10929.  

Probably a silly question, but why can these particular graphics cards do their own work for projects while older cards can't?

Because the older cards can only do single precision calculations (32-bit), and the HD38x0 and HD48x0 are the only ones (besides Nvidia's GTX 2xx series) that can handle double precision (64-bit).


Isn't that what PCs do? But older cards could do single-precision calculations then. Why not add support for those? There would be plenty of them, since these new cards aren't that old.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 10944
Neal Chantrill

Joined: 17 Jan 09
Posts: 98
Credit: 72,182,367
RAC: 0
Message 10945 - Posted: 15 Feb 2009, 22:45:36 UTC - in response to Message 10937.  

Sorry to sound stupid, but is a 9700 All-In-Wonder too old to use this application?

Thanks in advance.

According to GPU-Z the GPU is an R300

Sorry, but it's some years too old.



No worries. Thanks for the swift reply.
ID: 10945
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 10949 - Posted: 15 Feb 2009, 22:54:01 UTC - in response to Message 10944.  

Probably a silly question, but why can these particular graphics cards do their own work for projects while older cards can't?

Because the older cards can only do single precision calculations (32-bit), and the HD38x0 and HD48x0 are the only ones (besides Nvidia's GTX 2xx series) that can handle double precision (64-bit).


Isn't that what PCs do? But older cards could do single-precision calculations then. Why not add support for those? There would be plenty of them, since these new cards aren't that old.

This is about floating point precision. Travis has set quite strict limits for the results of the test WUs. You can't reach them with single precision calculations on older cards (the really old ones support only 16- or 24-bit FP). It may be possible to get there with some kind of software emulation, but that would be a lot of effort which would be wasted in the future as more and more cards get double precision support. Furthermore, it would most likely be as slow ;) as, or even slower than, doing the computations on a CPU.
ID: 10949
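For the curious: the "software emulation" mentioned here usually means double-single arithmetic, where one higher-precision value is carried as a pair of native floats. A minimal Python sketch of its core building block, Knuth's TwoSum, using float32 to stand in for the GPU's native precision (the function name and setup are my own illustration, not anything from MilkyWay@home code):

```python
# Sketch of the building block behind "software emulation" of extra precision:
# TwoSum returns the rounded sum AND its exact rounding error, so two 32-bit
# floats (hi, lo) can together act as one higher-precision value.
import numpy as np

def two_sum(a: np.float32, b: np.float32):
    """Return (s, e) such that s = fl(a + b) and a + b = s + e exactly."""
    s = np.float32(a + b)
    bp = np.float32(s - a)                      # the part of b that made it into s
    e = np.float32((a - (s - bp)) + (b - bp))   # the part that was rounded away
    return s, e

# Adding a tiny term to 1.0 loses it entirely in plain float32...
s, e = two_sum(np.float32(1.0), np.float32(1e-8))
print(s)  # the small term vanished from the rounded sum
print(e)  # ...but TwoSum recovered it in the error term
```

Every emulated operation costs several native ones, which is why, as noted above, it can easily end up no faster than just using the CPU.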
banditwolf

Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 10951 - Posted: 15 Feb 2009, 23:20:01 UTC - in response to Message 10949.  


This is about floating point precision. Travis has set quite strict limits for the results of the test WUs. You can't reach them with single precision calculations on older cards (the really old ones support only 16- or 24-bit FP). It may be possible to get there with some kind of software emulation, but that would be a lot of effort which would be wasted in the future as more and more cards get double precision support. Furthermore, it would most likely be as slow ;) as, or even slower than, doing the computations on a CPU.


Ok. I guess if it was worth it, other projects would be using it for the 'older' cards.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 10951
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 10954 - Posted: 15 Feb 2009, 23:24:34 UTC - in response to Message 10951.  

Ok. I guess if it was worth it, other projects would be using it for the 'older' cards.

Don't forget that for most other projects (like SETI or GPUGrid) single precision is enough. That is why you can also use slightly older cards there. MW is just more demanding in this specific area.
ID: 10954
banditwolf

Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 10955 - Posted: 15 Feb 2009, 23:27:08 UTC - in response to Message 10954.  

Ok. I guess if it was worth it, other projects would be using it for the 'older' cards.

Don't forget that for most other projects (like SETI or GPUGrid) single precision is enough. That is why you can also use slightly older cards there. MW is just more demanding in this specific area.


I haven't seen much mention of their use until recently, though I haven't checked the other projects' boards.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
ID: 10955
Daniel

Joined: 25 Nov 07
Posts: 25
Credit: 54,276,968
RAC: 2,064
Message 10971 - Posted: 16 Feb 2009, 3:03:39 UTC

Running pretty well on my HD4830, but I would like it to be running 2 WUs instead of the 8 or so it's running currently. I have played with the app_info file and so far have had no success. Right now avg_ncpus is set to 0.2 and max_ncpus to 5 (quad-core Intel). Resource share on MW is set to 10% and it still continues to run 10 WUs in parallel. Any suggestions?
ID: 10971
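For anyone else tuning this: avg_ncpus and max_ncpus live in the app_version section of app_info.xml in the project directory. A minimal sketch of the shape of that file, assuming the BOINC anonymous-platform format of the era; the executable name and version number below are placeholders, not the actual ones from this thread:

```xml
<app_info>
    <app>
        <name>milkyway</name>
    </app>
    <file_info>
        <name>milkyway_gpu.exe</name>  <!-- placeholder executable name -->
        <executable/>
    </file_info>
    <app_version>
        <app_name>milkyway</app_name>
        <version_num>17</version_num>      <!-- placeholder version -->
        <avg_ncpus>0.2</avg_ncpus>         <!-- CPU fraction BOINC budgets per task -->
        <max_ncpus>5</max_ncpus>
        <file_ref>
            <file_name>milkyway_gpu.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

Raising avg_ncpus makes each GPU task count for more of the CPU budget, so BOINC schedules fewer of them at once, which is the lever suggested in the reply below by Cluster Physik's advice to increase it.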
Daniel

Joined: 25 Nov 07
Posts: 25
Credit: 54,276,968
RAC: 2,064
Message 10973 - Posted: 16 Feb 2009, 3:41:56 UTC - in response to Message 10971.  

Running pretty well on my HD4830, but I would like it to be running 2 WUs instead of the 8 or so it's running currently. I have played with the app_info file and so far have had no success. Right now avg_ncpus is set to 0.2 and max_ncpus to 5 (quad-core Intel). Resource share on MW is set to 10% and it still continues to run 10 WUs in parallel. Any suggestions?


I think I got it, but man it is killing my screen response time!
ID: 10973
Cluster Physik

Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 10976 - Posted: 16 Feb 2009, 4:03:07 UTC - in response to Message 10971.  

Running pretty well on my HD4830, but I would like it to be running 2 WUs instead of the 8 or so it's running currently. I have played with the app_info file and so far have had no success. Right now avg_ncpus is set to 0.2 and max_ncpus to 5 (quad-core Intel). Resource share on MW is set to 10% and it still continues to run 10 WUs in parallel. Any suggestions?

Increase avg_ncpus and run another project at the same time.

The screen response is far better if you close GPU-Z and the Catalyst Control Center (and probably some other tools with monitoring functions). It appears there is some interaction with the monitoring code in these. Some tools are known to run fine in parallel, though; AFAIK ATI Tray Tools and Everest do.
ID: 10976
Daniel

Joined: 25 Nov 07
Posts: 25
Credit: 54,276,968
RAC: 2,064
Message 10981 - Posted: 16 Feb 2009, 5:18:24 UTC
Last modified: 16 Feb 2009, 5:19:08 UTC

I am running another project, and I switched the resource share to 40% here at MW and bumped avg_ncpus up to 0.5. I closed CCC and it seemed to help a bit, more towards an acceptable level.

Also, is there going to be a new version for .18?
ID: 10981
Exar Kun [HoloNet]

Joined: 12 Nov 08
Posts: 26
Credit: 1,519,179
RAC: 2
Message 11007 - Posted: 16 Feb 2009, 13:18:07 UTC

avg_ncpus set to 0.1

max_ncpus set to 3 (Core 2 Duo)

Seems to work fine; the MilkyWay units are still completed in 8 or 10 seconds. It's the same time as when I used 0.5 cores. Is that "normal"?

Too bad we have the 1000-workunit-per-CPU limit; is it possible to send a request somewhere to remove this limit when using an optimized app? This limit is now obsolete when you can calculate 3 or 4 times more units in a day with an optimized app. The only reason I'm moving a computer to MilkyWay is to help with your excellent work on ATI graphics cards, but the credits are not very interesting ^^

Thank you for this app, anyway ^^
Star Wars BOINC Team



ID: 11007
Travis
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist

Joined: 30 Aug 07
Posts: 2046
Credit: 26,480
RAC: 0
Message 11008 - Posted: 16 Feb 2009, 13:24:00 UTC - in response to Message 11007.  

avg_ncpus set to 0.1

max_ncpus set to 3 (Core 2 Duo)

Seems to work fine; the MilkyWay units are still completed in 8 or 10 seconds. It's the same time as when I used 0.5 cores. Is that "normal"?

Too bad we have the 1000-workunit-per-CPU limit; is it possible to send a request somewhere to remove this limit when using an optimized app? This limit is now obsolete when you can calculate 3 or 4 times more units in a day with an optimized app. The only reason I'm moving a computer to MilkyWay is to help with your excellent work on ATI graphics cards, but the credits are not very interesting ^^

Thank you for this app, anyway ^^


I'm working on something to remove the credit limit, which should go live in the next couple of days. Also, I can raise the workunit-per-CPU limit. What would be a good value?
ID: 11008
Honza

Joined: 28 Aug 07
Posts: 31
Credit: 86,152,236
RAC: 0
Message 11017 - Posted: 16 Feb 2009, 14:33:29 UTC - in response to Message 11008.  

Also, I can raise the workunit-per-CPU limit. What would be a good value?

3600*24/9 ≈ 9,600, so up to ~10K WUs per day on an HD4870.
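Spelled out, that back-of-envelope estimate is just (a trivial sketch; the 9-second WU time is Honza's figure above):

```python
# Back-of-envelope throughput: one WU every ~9 s, around the clock.
seconds_per_day = 3600 * 24          # 86400 seconds in a day
wus_per_day = seconds_per_day // 9   # ~9 s per WU on an HD4870
print(wus_per_day)  # 9600, i.e. roughly 10K WUs per day
```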

Too bad BOINC is still far from ready for GPUs.
I would have suggested raising the WU limit only for hosts with GPUs, and distributing WUs with pretty short deadlines (or extra-large ones) to such hosts...
BOINC Project specifications and hardware requirements
ID: 11017

©2019 Astroinformatics Group