Welcome to MilkyWay@home

Does the CPU app & GPU app use the same floating-point format?

Message boards : Number crunching : Does the CPU app & GPU app use the same floating-point format?

Simplex0
Joined: 11 Nov 07
Posts: 232
Credit: 178,229,009
RAC: 0
Message 50419 - Posted: 27 Jul 2011, 18:58:22 UTC
Last modified: 27 Jul 2011, 19:00:12 UTC

Just wondering if the CPU application uses the same floating-point format as the GPU application?

Or is the CPU calculation done using extended precision while the GPU calculation uses the double-precision floating-point format?
ID: 50419
Matt Arsenault
Volunteer moderator
Project developer
Project tester
Project scientist

Joined: 8 May 10
Posts: 576
Credit: 15,979,383
RAC: 0
Message 50424 - Posted: 28 Jul 2011, 2:36:06 UTC - in response to Message 50419.  

Just wondering if the CPU application uses the same floating-point format as the GPU application?

Or is the CPU calculation done using extended precision while the GPU calculation uses the double-precision floating-point format?
No. Extended precision is a bad feature of the x87 FPU, and I've been avoiding it like the plague. Not only is the x87 FPU slower than using SSE2, it adds inconsistency: the final result depends on whether intermediate values are kept in a register or spilled to memory, so you get slightly different results between different compilers and between debug and optimized builds, which makes the code more difficult to work with and test. When the x87 does have to be used (on 32-bit x86 systems, except for the critical part, which uses SSE2+ when available), the application currently turns extended precision off explicitly.
ID: 50424


©2024 Astroinformatics Group