LDBL_MAX -1.#QNAN0e+000 with MinGW?

I tried running this with both Eclipse (CDT) + MinGW and Cygwin + GCC:

[code lang="c"]
#include <stdio.h>
#include <float.h>

int main(void) {
    puts("The range of ");
    printf("\tlong double is [%Le, %Le]∪[%Le, %Le]\n",
           -LDBL_MAX, -LDBL_MIN, LDBL_MIN, LDBL_MAX);
    return 0;
}
[/code]

but got different results:

  • In eclipse(CDT)+MinGW
    The range of
    long double is [-1.#QNAN0e+000, 3.237810e-319]∪[6.953674e-310, 0.000000e+000]
  • In Cygwin+GCC
    The range of
    long double is [-1.189731e+4932, -3.362103e-4932]∪[3.362103e-4932, 1.189731e+4932]

This is weird. I googled it and only found this thread: http://www.thescripts.com/forum/thread498535.html

I know LDBL_MAX is machine-dependent, but why does it differ on the same machine? I suspect the problem is with MinGW. Does anyone have any idea?