Message boards :
Number crunching :
MilkyWay_GPU - Almost There!
Send message Joined: 9 Feb 09 Posts: 166 Credit: 27,520,813 RAC: 0 |
What I mean is that everybody is referring to C.P. making an application for the ATI cards, but what you all forget is that it may no longer be possible to turn it into an ATI application if it's CUDA-based. Until now, none of the CUDA-based applications have been rewritten for ATI at all >.< So saying that someone will make it is kind of wishful thinking; it might not even be possible. It's new, it's relatively fast... my new bicycle |
Send message Joined: 18 Nov 07 Posts: 280 Credit: 2,442,757 RAC: 0 |
Just how exactly is Travis supposed to develop an AMD/ATI CAL application when he only has an nVidia card to work with? I think you're all forgetting the main reason for the delay: getting the app working on single precision cards. Now that it's working, I don't see any reason client-side why Cluster Physik wouldn't be able to update his CAL application for the new code. As long as CPU applications are allowed to run on the GPU project (thus allowing the application to select your ATI card), you should be fine. |
Send message Joined: 12 Nov 07 Posts: 2425 Credit: 524,164 RAC: 0 |
Just how exactly is Travis supposed to develop an AMD/ATI CAL application when he only has an nVidia card to work with? The main reason MW is using ATI now is that Cluster Physik helped make it, not MW. The project didn't even have decent software for development until recently. My thought is that the project went with developing the CUDA app because they got a card, and they are hoping Cluster will do the ATI version. Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it. |
Send message Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0 |
I assume that both CUDA and the "CPU" project in the end process the same data and are therefore working towards the same goal, and that the MW@home GPU project is not going to be loaded with completely different stuff.

Brian, I think, addressed the exterior part of the BOINC system and GPU processing. Having spent the better part of a month pawing through the innards of the Resource Scheduler that actually decides to get work and launch it, I can tell you that they missed a bet that Nick Alvarez and others suggested back when 6.6.20 was just a gleam in the distant future: we strongly suggested that the implementation of the code in this area be modularized and made, um, simple word, generic. That would have meant that adding ATI or OpenCL support would be trivial. It won't be. The Resource Scheduler, such as it is, is the usual rat's nest of code beloved by C programmers, with lots of breaks to leap out of loops and return statements in the middle of the module... all the bad things that one should learn in programmer's school not to do.

So, for example, even if someone wanted to take a stab at adding ATI support, you would have to add all the code yourself; you cannot just take a generic model, instantiate it, and have it configure itself to support the new resource class. Had they done that, ATI and OpenCL would be slam dunks. As it is, the concept of CUDA is enshrined in very specific code, in variable names, in class names, in every nook and cranny. And so, those of us who had hoped we would see a smooth and rapid inclusion of other resource classes... well, it's not going to happen. I would even make the point that some of the issues Richard Haslegrove and I have been documenting might never have arisen had a different approach been taken.

Oh, and as a homework assignment, look up loose vs. tight coupling. Much of BOINC has very tight coupling between modules... and functions... and events... |
Send message Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0 |
Now the real question people here need to be asking is not why Dave and Travis are such evil people and boneheads for not doing ATI first, but why the BOINC development team didn't follow the standard things they teach you in school: modular code, proper exits, proper class instantiation and destruction, etc., etc., etc.

Where I used to work there was a data engine that was supposed to handle the flow of code execution from one module to the next. It had numbered exits and entrances to other modules, and what you were supposed to do was create your code, give it an entrance and an exit, and use the tool to flow to the correct spot. If it was done that way, you could use the language's built-in runtime step-mode debugger. Naturally, people felt that was too complicated, so they just did deep calls to other modules without going through the framework. So, to do real debugging, you'd have to launch the standard debugger, which had issues where even the debugger itself would periodically crash... or take your best guess at where things were headed and put in popup messages like "1", "2", "3", and the like so that you had markers of where you were, because doing it that way you also had no capability of watching the variables.

Additionally, one might also ask how many roadblocks, if any, nVidia intentionally put up to make life harder for Stream code... |
Send message Joined: 9 Feb 09 Posts: 166 Credit: 27,520,813 RAC: 0 |
As usual, well spoken guys :D I seem unable to speak clear language, but that's to be expected from a stupid Dutchman. It's new, it's relatively fast... my new bicycle |
Send message Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0 |
As usual well spoken guys :D Smarter than me ... I can't hardly speak English ... much less a second language... |
Send message Joined: 12 Apr 08 Posts: 621 Credit: 161,934,067 RAC: 0 |
Additionally, one might also ask how many roadblocks, if any, nVidia intentionally put up to make life harder for Stream code...

Um, well, in the areas I am thinking of, that's not an issue. Not even a consideration. It is where the demand for tasks is matched to the available computing resources... I did not even touch on the work fetch, where similar issues abound... though I am under duress on the mailing lists for some of my assertions. Not that I am always right... but at least I am not wedded to my assumptions...

I can't top your story, I don't think, but when I worked on the OTH-B radar I worked on a software modification where we changed the look angle and the allowed range. In that process, almost 50% of my changes were to correct comments and "D" lines (debug print statements commented out in the FORTRAN code with the letter D instead of C; when you compiled with "debug", those lines would be compiled in, activating them, much like setting the options in cc_config in BOINC). Anyway, the GE guys kept giving me flak for updating the baseline with corrected "D" lines... I kept pointing out that the next time I had to debug and needed information calculated in that module, it was kinda stupid to have to correct the debug print lines a second or third time around. Amazing to me how uncommon common sense seems to be... :)

Anyway, I for one fault not the project types for much of what befalls us... the fault is in the rickety design that has not improved over time. Note I said the design... yes, bugs are fixed, features added... but the design encapsulates many poor choices, and every month new ones are added... probably why I am anathema in most places, because I keep pointing out that the Emperor seems to be showing his Wee-Willy-Whatsis to the crowd. |
Send message Joined: 17 Mar 08 Posts: 165 Credit: 410,228,216 RAC: 0 |
Has the admin really looked at how many folks have spent thousands of dollars on ATI cards? I for one would not want to be on the receiving end if they don't get ATI supported. Good luck. DD
Send message Joined: 27 Feb 09 Posts: 45 Credit: 305,963 RAC: 0 |
Has the admin really looked at how many folks have spent thousands of dollars on ATI cards? I for one would not want to be on the receiving end if they don't get ATI supported. Why should they? Nobody asked people to go and spend money on an app that wasn't officially supported by the BOINC system. As far as I'm aware, neither Travis nor Dave has gone around holding guns to people's heads making them buy GPUs to crunch with. People did this of their own accord. I'm saying this as someone who had ordered a new 4870. The app wasn't official; that's the risk people took. Mars rules this confectionery war! |
Send message Joined: 26 Jul 08 Posts: 627 Credit: 94,940,203 RAC: 0 |
What I mean is that everybody is referring to C.P. making an application for the ATI cards, but what you all forget is that it may no longer be possible to turn it into an ATI application if it's CUDA-based. As Travis has now dropped the double precision requirement, it will probably be even easier to port the code to ATI (double precision support is a bit awkward with ATI's Stream SDK). In the simplest case one can take the CUDA kernels and make them Brook+ by just changing the declarations, as both are just C code. They are called a bit differently, but one can handle that. One has to be a bit careful with the order of summing up the values, especially now with single precision. The problem is that different GPUs can use different sequences, so one has to test the effects. Furthermore, afaik NV GPUs are sometimes a bit less precise for divides than ATI (only the last bit is affected, if at all). But I will see when the code is released; hopefully that will happen today. From Sunday on I will be away at a conference for 5 days. PS: Expect a massive speedup with single precision ;) I plan to do the first version with the old Stream SDK 1.3 (the same as for the current 0.19e), as I know the bugs of that version and it should run on all computers which run the ATI app now. After that I will switch to SDK 1.4, which requires Catalyst 9.2 or newer (if I get the newest Catalyst drivers running under XP64). |
Send message Joined: 24 Dec 07 Posts: 1947 Credit: 240,884,648 RAC: 0 |
Great work Cluster Physik. I look forward to reaping the benefits of your outstanding effort. It's good to hear that modifying the CUDA app for use by an ATI card is not as big a deal as I thought it was. I just hope Travis will let us ATI folks use the anon platform in the GPU project. |
Send message Joined: 26 Jan 09 Posts: 589 Credit: 497,834,261 RAC: 0 |
>One has to be a bit careful with the order of summing up the values Perhaps time to re-read that old numerical analysis book... >If I get the newest Catalyst drivers running under XP64 I'd be happy to get them running under XP32. Thanks for all your work, Cluster. Don't stop now. :) Cheers, PeterV . |
Send message Joined: 21 Aug 08 Posts: 625 Credit: 558,425 RAC: 0 |
Great work Cluster Physik. I look forward to reaping the benefits of your outstanding effort. So, with that being said by CP, would all of you who are up in arms over ATI not being released first try to understand that this is the order in which it had to be done? The only "gotcha" I see is some muttering about app_info.xml (Anonymous Platform) support in newer BOINC clients, but I'm a bit fuzzy on that issue, so it may not be an evil plot; and even if it were an evil plot to crush ATI support out of BOINC, it would be perpetrated by BOINC, not the individual projects, so try to aim the flaming at the right people... ;-) |
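[Editor's note] For readers fuzzy on the same point: the Anonymous Platform mechanism means placing an `app_info.xml` file in the project's directory so the BOINC client runs a user-supplied binary (such as Cluster Physik's ATI app) instead of a server-supplied one. A minimal sketch of the file's shape; the application name, file name, and version number below are illustrative guesses, not the actual MilkyWay files:

```xml
<app_info>
    <app>
        <name>milkyway</name>
    </app>
    <file_info>
        <name>milkyway_0.19_ATI.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>milkyway</app_name>
        <version_num>19</version_num>
        <file_ref>
            <file_name>milkyway_0.19_ATI.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

How newer clients treat such a file for GPU work is the client-side behavior the post above is pointing at: it is decided by BOINC, not by the individual project.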
Send message Joined: 12 Nov 08 Posts: 26 Credit: 1,542,686 RAC: 0 |
My personal internet access is down for a few days so I haven't tried it, so I'll ask the question(s) directly: is the CUDA project operational or not? Has anyone tried with a small graphics card like an 8600M GS, 8600 GT, etc.? Star Wars BOINC Team |
Send message Joined: 9 Sep 08 Posts: 96 Credit: 336,443,946 RAC: 0 |
Thanks for staying involved with the ATI app CP... :) |
Send message Joined: 5 Sep 08 Posts: 28 Credit: 245,585,043 RAC: 0 |
My personal internet access is down for a few days so I haven't tried it, so I'll ask the question(s) directly: is the CUDA project operational or not? Has anyone tried with a small graphics card like an 8600M GS, 8600 GT, etc.? I tried on an 8600GT. "5/21/2009 6:31:51 AM Milkyway@home Message from server: No work available"
©2024 Astroinformatics Group