Have you ever noticed that the CLR behaves a little oddly when handling floating-point numbers? Here is an example:
static void Main(string[] args)
{
    float f = float.MaxValue;
    float fSum = f + f;
    Console.WriteLine(fSum);
    double dSum = f + f;
    Console.WriteLine(dSum);
    Console.ReadLine();
}
The output is:
+∞
6.80564693277058E+38
fSum is the sum of two float.MaxValue values, so +∞ is the expected result. dSum is the very same sum of two float.MaxValue values, yet its result is a finite double equal to exactly twice float.MaxValue: 6.80564693277058E+38.
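The magnitude itself is easy to check: twice float.MaxValue overflows a float but fits comfortably in a double. A quick Python sketch (using ctypes.c_float to emulate storing into a float32 slot; the constant is float.MaxValue):

```python
import ctypes
import math

# float.MaxValue, which is also exactly representable as a float64
F32_MAX = 3.4028234663852886e38

d_sum = F32_MAX + F32_MAX            # computed in float64: finite, ~6.8056e+38
f_sum = ctypes.c_float(d_sum).value  # narrowed to float32: overflows

print(d_sum)
print(f_sum)  # inf
```

So the finite double result is only possible if the addition itself happened at more than float32 precision.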
Let's look at the compiled IL:
.method private hidebysig static void Main(string[] args) cil managed
{
    .entrypoint
    .maxstack 2
    .locals init (
        [0] float32 f,
        [1] float32 fSum,
        [2] float64 dSum)
    L_0000: nop
    L_0001: ldc.r4 3.402823E+38
    L_0006: stloc.0
    L_0007: ldloc.0
    L_0008: ldloc.0
    L_0009: add
    L_000a: stloc.1
    L_000b: ldloc.1
    L_000c: call void [mscorlib]System.Console::WriteLine(float32)
    L_0011: nop
    L_0012: ldloc.0
    L_0013: ldloc.0
    L_0014: add
    L_0015: conv.r8
    L_0016: stloc.2
    L_0017: ldloc.2
    L_0018: call void [mscorlib]System.Console::WriteLine(float64)
    L_001d: nop
    L_001e: call string [mscorlib]System.Console::ReadLine()
    L_0023: pop
    L_0024: ret
}
Here float32 is C#'s float and float64 is double. Instructions L_0007 through L_000a perform fSum = f + f, and L_0012 through L_0016 perform dSum = f + f. The only difference is an extra conv.r8 (convert to double) after the add. That makes things look even stranger rather than clearer: if the add happens first and the conversion afterwards, then converting float's positive infinity to double should still yield infinity, yet the resulting double is exactly twice float.MaxValue.
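The two IL sequences can be mimicked outside the CLR. A sketch in Python, where ctypes.c_float stands in for a float32 store (stloc into a float32 local) and the CLR's internal floating-point type is modeled as float64, which matches ordinary x86:

```python
import ctypes
import math

def to_f32(x):
    # Emulate storing a value into a float32 slot: round to float32,
    # overflowing to +inf when the value is out of range.
    return ctypes.c_float(x).value

f = 3.4028234663852886e38  # ldc.r4 float.MaxValue

# fSum = f + f : the add happens on the evaluation stack at (at least)
# float64 precision, then stloc.1 narrows the finite sum to float32 -> +inf.
f_sum = to_f32(f + f)

# dSum = f + f : the same add, but conv.r8 merely keeps the already-finite
# float64 value; no narrowing to float32 ever occurs.
d_sum = float(f + f)

print(f_sum)  # inf
print(d_sum)
```

The difference is purely where the narrowing happens: fSum narrows the sum to float32, dSum never does.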
Just as I was thoroughly puzzled, I found that Microsoft's document, the Common Language Infrastructure (CLI) specification, mentions a type I had never seen before: F, described as a "native size floating-point number (internal to VES, not user visible)". The description itself is odd: why would there be an F floating-point type reserved for the VES that never shows up in IL or in any .NET language?
Reading further into the document, I found a key passage:
Storage for floating-point numbers (statics, array elements, and fields of classes) are of fixed size. The supported storage sizes are float32 and float64. Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type.
The internal floating-point type mentioned here is precisely F. So even where the IL says float32 or float64, any value that is not a static field, an array element, or a field of a class is actually this internal F type. The precision of F depends on the platform; on ordinary x86 it is guaranteed to be at least that of float64. Therefore, on the evaluation stack, whether a value is declared float32 or float64, its actual precision is at least float64 — unless a range check has already been forced, i.e.:
static void Main(string[] args)
{
    float f = float.MaxValue;
    float fSum = f + f;
    Console.WriteLine(fSum);
    double dSum = (float)(f + f);
    Console.WriteLine(dSum);
    Console.ReadLine();
}
This time dSum also prints positive infinity: the cast to float forces a range check, the out-of-range sum is flagged as positive infinity, and converting float's infinity to double is still infinity.
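The effect of the (float) cast, which compiles to a conv.r4 before the value is widened back to double, can be sketched the same way (again modeling the float32 narrowing with ctypes.c_float):

```python
import ctypes
import math

def to_f32(x):
    # Emulate conv.r4: narrow to float32, overflowing to +inf if out of range.
    return ctypes.c_float(x).value

f = 3.4028234663852886e38  # float.MaxValue

# (float)(f + f): the narrowing to float32 overflows to +inf,
# and widening infinity back to float64 leaves it infinite.
d_sum = float(to_f32(f + f))
print(d_sum)  # inf
```

Once the value has been squeezed through float32, the information that the sum was finite at higher precision is gone for good.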