Welcome to MilkyWay@home

MilkyWay_GPU - Almost There!



Message boards : Number crunching : MilkyWay_GPU - Almost There!

uBronan
Joined: 9 Feb 09
Posts: 166
Credit: 27,520,813
RAC: 0
Message 22841 - Posted: 20 May 2009, 22:55:56 UTC

What I mean is that everybody is expecting C.P. to make an application for the ATI cards, but what you all forget is that it may no longer be possible to turn it into an ATI application if it's CUDA-based.
Until now, none of the CUDA-based applications have been rewritten for ATI at all >.<
So saying that someone will make it is kind of wishful thinking; it might not even be possible.

It's new, it's relatively fast... my new bicycle
Emanuel
Joined: 18 Nov 07
Posts: 280
Credit: 2,442,757
RAC: 0
Message 22845 - Posted: 20 May 2009, 23:49:49 UTC

Just how exactly is Travis supposed to develop an AMD/ATI CAL application when he only has an nVidia card to work with? I think you're all forgetting the main reason for the delay: getting the app working on single precision cards. Now that it's working, I don't see any reason client-side why Cluster Physik wouldn't be able to update his CAL application for the new code. As long as CPU applications are allowed to run on the GPU project (thus allowing the application to select your ATI card), you should be fine.
banditwolf
Joined: 12 Nov 07
Posts: 2425
Credit: 524,164
RAC: 0
Message 22846 - Posted: 20 May 2009, 23:54:11 UTC - in response to Message 22845.  

Just how exactly is Travis supposed to develop an AMD/ATI CAL application when he only has an nVidia card to work with?

The main reason MW is using ATI now is that Cluster helped make it, not MW. The project didn't even have decent software for development until recently. My thought is that the project went with developing the CUDA app because they got a card, and they are hoping Cluster will do the ATI version.
Doesn't expecting the unexpected make the unexpected the expected?
If it makes sense, DON'T do it.
Westsail and *Pyxey*
Joined: 22 Mar 08
Posts: 65
Credit: 15,715,071
RAC: 0
Message 22848 - Posted: 20 May 2009, 23:59:34 UTC

gimme....gimme...gimme.. =P

CUDA Host ;)
Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 22857 - Posted: 21 May 2009, 1:40:02 UTC - in response to Message 22835.  

I assume that both CUDA and the "CPU" project in the end process the same data and are therefore working towards the same goal and the MW@home GPU project is not going to be loaded with completely different stuff.

Brian, I think, addressed the exterior part of the BOINC system and GPU processing. Having spent the better part of a month pawing through the innards of the Resource Scheduler (the code that actually decides to fetch work and launch it), I can tell you that they missed a bet that Nick Alvarez and others suggested back when 6.6.20 was just a gleam in the distant future... we strongly suggested that the implementation of the code in this area be modularized and made, um, simple word, generic...

What that would have meant is that adding ATI or OpenCL to the way things work would be trivial ...

It won't be ...

The Resource Scheduler, such as it is, is the usual rat's nest of code beloved by C programmers, with lots of breaks to leap out of loops and return statements in the middle of the module... all the bad things that one should learn in programming school not to do...

So, for example, even if someone wanted to take a stab at adding ATI support, they would have to add all the code themselves... you cannot just take a generic model, instantiate it, and have it configure itself to support the new resource class. Had they done that, ATI and OpenCL would be slam dunks... as it is, the concept of CUDA is enshrined in very specific code, in variable names, in class names, in every nook and cranny...

And so, those of us who had hoped to see a smooth and rapid inclusion of other resource classes... well... it's not going to happen.

I would even make the point that some of the issues that Richard Haslegrove and I have been documenting may never have arisen had a different approach been taken.

Oh, and as a homework assignment, look up loose vs. tight coupling. Much of BOINC has very tight coupling between modules... and functions... and events...
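[Editor's note: the generic resource-class scheme described above could look roughly like this sketch. All names here (ResourceClass, REGISTRY, fetch_work_needed) are hypothetical illustrations, not actual BOINC scheduler code.]

```python
# Hypothetical sketch of a "generic resource class": instead of hard-coding
# CUDA into the scheduler, each coprocessor type registers itself and the
# scheduler iterates over the registry. Not actual BOINC code.

class ResourceClass:
    def __init__(self, name, count):
        self.name = name    # e.g. "CPU", "CUDA", "ATI", "OpenCL"
        self.count = count  # number of devices of this type
        self.queue = []     # tasks currently assigned to this resource

    def saturated(self):
        # True when every device already has a task
        return len(self.queue) >= self.count

REGISTRY = {}

def register(resource):
    REGISTRY[resource.name] = resource

def fetch_work_needed():
    """Request work for every under-committed resource, whatever its type."""
    return [r.name for r in REGISTRY.values() if not r.saturated()]

register(ResourceClass("CPU", 4))
register(ResourceClass("CUDA", 1))
register(ResourceClass("ATI", 1))  # a new class is one registration, no scheduler edits

print(fetch_work_needed())  # ['CPU', 'CUDA', 'ATI']
```

With this shape, adding ATI or OpenCL touches only the registration, never the scheduling loop; that is the loose coupling Paul is pointing at.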
Brian Silvers
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 22858 - Posted: 21 May 2009, 2:49:26 UTC - in response to Message 22857.  


What that would have meant is that adding ATI or OpenCL to the way things work would be trivial ...

It won't be ...


Now the real question people here need to be asking is not why Dave and Travis are such evil boneheads for not doing ATI first, but why the BOINC development team didn't follow the standard practices they teach you in school: modular code, proper exits, proper class instantiation and destruction, etc., etc.

Where I used to work, there was a data engine that was supposed to handle the flow of execution from one module to the next. It had numbered exits and entrances to other modules; you were supposed to write your code, give it an entrance and an exit, and use the tool to flow to the correct spot. Done that way, you could use the language's built-in runtime step-mode debugger. Naturally, people felt it was too complicated, so they just made deep calls into other modules without going through the framework. So, to do real debugging, you had to launch the standard debugger (which had issues where even the debugger itself would periodically crash), or take your best guess at where things were headed and put in popup messages like "1", "2", "3", and the like so that you had markers of where you were, because that way you also had no capability of watching the variables.

Additionally, one might also ask how many roadblocks, if any, nVidia intentionally put up to make life harder for Stream code...
uBronan
Joined: 9 Feb 09
Posts: 166
Credit: 27,520,813
RAC: 0
Message 22860 - Posted: 21 May 2009, 3:06:43 UTC

As usual, well spoken, guys :D
I never seem to be able to express myself clearly, but that's to be expected from a stupid Dutchman.
It's new, it's relatively fast... my new bicycle
Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 22861 - Posted: 21 May 2009, 3:27:14 UTC - in response to Message 22860.  

As usual, well spoken, guys :D
I never seem to be able to express myself clearly, but that's to be expected from a stupid Dutchman.

Smarter than me ... I can't hardly speak English ... much less a second language...
Paul D. Buck
Joined: 12 Apr 08
Posts: 621
Credit: 161,934,067
RAC: 0
Message 22862 - Posted: 21 May 2009, 3:39:21 UTC - in response to Message 22858.  

Additionally, one might also ask how many roadblocks, if any, nVidia intentionally put up to make life harder for Stream code...

Um, well, in the areas I am thinking of, it's not an issue. Not even a consideration. This is where the demand for tasks is matched to the available computing resources... I did not even touch on work fetch, where similar issues abound... though I am under duress on the mailing lists for some of my assertions. Not that I am always right... but at least I am not wedded to my assumptions...

I can't top your story, I don't think, but when I worked on the OTH-B radar I worked on a software modification where we changed the look angle and the allowed range. In the process, almost 50% of my changes were corrections to comments and "D" lines (debug print statements in the FORTRAN code, commented out with the letter D instead of C; compile with "debug" and those lines are compiled in, activating them, much like setting the options in cc_config in BOINC)...

Anyway, the GE guys kept giving me flak for updating the baseline with corrected "D" lines... I kept pointing out that the next time I had to debug and needed information calculated in that module, it would be kind of stupid to have to correct the debug print lines a second or third time around. Amazing to me how uncommon common sense seems to be... :)

Anyway, I for one don't fault the project people for much of what befalls us... the fault is in the rickety design that has not improved over time. Note I said the design... yes, bugs are fixed and features are added... but the design encapsulates many poor choices, and every month new ones are added... probably why I am anathema in most places: I keep pointing out that the Emperor seems to be showing his Wee-Willy-Whatsis to the crowd.
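[Editor's note: for readers who never met FORTRAN "D" lines, the idea maps onto any language: debug statements stay in the source permanently, and a single switch activates them. A minimal Python analogue; the function names here are made up for illustration.]

```python
# Rough Python analogue of FORTRAN "D" lines: debug output lives in the
# source, inert until a single switch turns it on (like compiling with
# the debug option). Names are illustrative, not from any real project.

DEBUG = False  # flip to True, like recompiling FORTRAN with "debug"

def dprint(*args):
    """Stands in for a 'D' line: prints only when DEBUG is set."""
    if DEBUG:
        print(*args)

def look_angle(azimuth, elevation):
    # Hypothetical computation; the point is the permanent debug hooks.
    dprint("inputs:", azimuth, elevation)  # kept accurate so it is useful next time
    result = (azimuth % 360, max(0.0, min(90.0, elevation)))
    dprint("clamped:", result)
    return result

print(look_angle(370, 95.0))  # (10, 90.0); the dprint lines cost nothing while off
```

Paul's point is exactly why the `dprint` arguments must be kept correct as the code changes: stale debug lines have to be re-fixed every time you need them.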
Dan T. Morris
Joined: 17 Mar 08
Posts: 165
Credit: 410,228,216
RAC: 0
Message 22863 - Posted: 21 May 2009, 4:22:43 UTC

Has the admin really looked at how many folks have spent thousands of dollars on ATI cards? I for one would not want to be on the receiving end if ATI doesn't get supported...

Good luck.

DD,

SATAN
Joined: 27 Feb 09
Posts: 45
Credit: 305,963
RAC: 0
Message 22868 - Posted: 21 May 2009, 5:43:24 UTC - in response to Message 22863.  

Has the admin really looked at how many folks have spent thousands of dollars on ATI cards? I for one would not want to be on the receiving end if ATI doesn't get supported...

Good luck.

DD,


Why should they? Nobody asked people to go and spend money on an app that wasn't officially supported by the BOINC system. As far as I'm aware, neither Travis nor Dave has gone around holding guns to people's heads making them buy GPUs to crunch with. People did this of their own accord.

I'm saying this as someone who has ordered a new 4870. The app wasn't official; that's the risk people took.

Mars rules this confectionery war!
verstapp
Joined: 26 Jan 09
Posts: 589
Credit: 497,834,261
RAC: 0
Message 22871 - Posted: 21 May 2009, 7:16:54 UTC

So we just stay anonymous...
Cheers,

PeterV

Cluster Physik
Joined: 26 Jul 08
Posts: 627
Credit: 94,940,203
RAC: 0
Message 22873 - Posted: 21 May 2009, 9:29:37 UTC - in response to Message 22841.  

What I mean is that everybody is expecting C.P. to make an application for the ATI cards, but what you all forget is that it may no longer be possible to turn it into an ATI application if it's CUDA-based.
Until now, none of the CUDA-based applications have been rewritten for ATI at all >.<
So saying that someone will make it is kind of wishful thinking; it might not even be possible.


As Travis has now dropped the double precision requirement, it will probably be even easier to port the code to ATI (double precision support is a bit awkward with ATI's Stream SDK). In the simplest case one can take the CUDA kernels and turn them into Brook+ just by changing the declarations, as both are just C code. The kernels are invoked a bit differently, but one can handle that.
One has to be a bit careful with the order of summing up the values, especially now with single precision. The problem is that different GPUs can use different sequences, so one has to test the effects. Furthermore, afaik NV GPUs are sometimes a bit less precise than ATI for divides (only the last bit is affected, if at all).
But I will see when the code is released; hopefully that will happen today. From Sunday on I will be away at a conference for 5 days.

PS:
Expect a massive speedup with single precision ;)
I plan to do the first version with the old Stream SDK 1.3 (the same as for the current 0.19e), as I know that version's bugs and it should run on all computers which run the ATI app now. After that I will switch to SDK 1.4, which requires Catalyst 9.2 or newer (if I get the newest Catalyst drivers running under XP64).
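[Editor's note: the order-of-summation concern is easy to demonstrate, and a standard remedy is compensated summation. This is a generic sketch using Neumaier's variant, not the actual MilkyWay kernel code; Python floats are double precision, but the effect is the same one described above for single precision on the GPU.]

```python
# Why summation order matters in floating point, and one standard fix.
# Generic illustration (Neumaier's compensated summation), not MilkyWay code.
import math

def neumaier_sum(values):
    """Compensated summation: carries the rounding error of each add along."""
    total = 0.0
    comp = 0.0  # accumulated low-order bits lost to rounding
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            comp += (total - t) + v   # low bits of v were lost in the add
        else:
            comp += (v - t) + total   # low bits of total were lost instead
        total = t
    return total + comp

vals = [1.0, 1e16, -1e16]
print(sum(vals))            # 0.0  -- the 1.0 is swallowed by 1e16
print(sum(reversed(vals)))  # 1.0  -- same numbers, different order
print(neumaier_sum(vals))   # 1.0  -- order no longer matters
print(math.fsum(vals))      # 1.0  -- correctly rounded reference
```

Two GPUs that merely reduce their partial sums in a different sequence can legitimately disagree in the last bits, which is why validating results across the CPU, CUDA, and ATI apps needs a tolerance rather than exact equality.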
The Gas Giant
Joined: 24 Dec 07
Posts: 1947
Credit: 240,884,648
RAC: 0
Message 22874 - Posted: 21 May 2009, 9:57:38 UTC

Great work Cluster Physik. I look forward to reaping the benefits of your outstanding effort.

It's good to hear that modifying the CUDA app for use on an ATI card is not as big a deal as I thought it was. I just hope Travis will let us ATI folks use the anonymous platform in the GPU project.
verstapp
Joined: 26 Jan 09
Posts: 589
Credit: 497,834,261
RAC: 0
Message 22877 - Posted: 21 May 2009, 10:38:26 UTC

>One has to be a bit careful with the order of summing up the values
Perhaps time to re-read that old numerical analysis book...
>If I get the newest Catalyst drivers running under XP64
I'd be happy to get them running under XP32.
Thanks for all your work, Cluster. Don't stop now. :)
Cheers,

PeterV

Brian Silvers
Joined: 21 Aug 08
Posts: 625
Credit: 558,425
RAC: 0
Message 22880 - Posted: 21 May 2009, 11:09:19 UTC - in response to Message 22874.  

Great work Cluster Physik. I look forward to reaping the benefits of your outstanding effort.

It's good to hear that modifying the CUDA app for use on an ATI card is not as big a deal as I thought it was. I just hope Travis will let us ATI folks use the anonymous platform in the GPU project.


So, with that being said by CP, would all of you who are up in arms over ATI not being released first try to understand that this is the order in which it had to be done?

The only "gotcha" I see is some muttering about app_info.xml (Anonymous Platform) support in newer BOINC clients, but I'm a bit fuzzy on that issue, so it may not be an evil plot; and even if it were an evil plot to crush ATI support out of BOINC, it would be perpetrated by BOINC, not the individual projects, so try to aim the flaming at the right people... ;-)
Exar Kun [HoloNet]
Joined: 12 Nov 08
Posts: 26
Credit: 1,542,686
RAC: 0
Message 22888 - Posted: 21 May 2009, 12:53:18 UTC

My personal internet access is down for a few days, so I haven't been able to try it; so I'll ask directly: is the CUDA project operational or not? Has anyone tried with a small graphics card like an 8600M GS, 8600 GT, etc.?
Star Wars BOINC Team



JAMC
Joined: 9 Sep 08
Posts: 96
Credit: 336,443,946
RAC: 0
Message 22889 - Posted: 21 May 2009, 12:55:18 UTC

Thanks for staying involved with the ATI app CP... :)
Dataman
Joined: 5 Sep 08
Posts: 27
Credit: 245,439,808
RAC: 0
Message 22892 - Posted: 21 May 2009, 13:35:54 UTC - in response to Message 22888.  
Last modified: 21 May 2009, 13:37:03 UTC

My personal internet access is down for a few days, so I haven't been able to try it; so I'll ask directly: is the CUDA project operational or not? Has anyone tried with a small graphics card like an 8600M GS, 8600 GT, etc.?


I tried on an 8600GT.
"5/21/2009 6:31:51 AM Milkyway@home Message from server: No work available"

borandi
Joined: 21 Feb 09
Posts: 180
Credit: 27,806,824
RAC: 0
Message 22894 - Posted: 21 May 2009, 13:40:32 UTC - in response to Message 22873.  

As Travis has now dropped the double precision requirement


Score :) Now I can use the 4670s I have :)


©2023 Astroinformatics Group