Because there's no reliable difference there. The truncation happens in bits (the machine's binary representation), NOT in decimal digits. A difference of 1 in the final decimal digit typically falls below the last bit that actually gets stored, so both values round to the same binary representation.
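For example, here's a minimal sketch in Python (assuming the usual 64-bit IEEE 754 doubles); the specific literals are just illustrative, but the effect is the point:

```python
# Two decimal literals differing by 1 in the 17th significant digit:
a = 0.10000000000000000
b = 0.10000000000000001

print(a == b)       # True  -- both round to the same nearest representable double
print(a.hex())      # 0x1.999999999999ap-4
print(b.hex())      # 0x1.999999999999ap-4  -- identical bit pattern
print(f"{a:.25f}")  # 0.1000000000000000055511151 -- the value that is actually stored
```

The last digit of the source text never survives into the stored bits, so comparing on it can't work.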
I suggest you study IEEE floating point a bit. Also, study why computers do not generally store anything like full precision for real numbers. (Hint: you *cannot* store arbitrary real numbers in finite space. Only rationals are guaranteed to be storable at full precision, since a rational fits in a finite numerator/denominator pair; irrationals require infinite space, unless you have a very clever symbolic representation that expresses the value in terms of some operation like sin(x) or ln(x).)
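As a rough illustration of that hint (again Python, and again just a sketch): a rational like 1/3 can be kept exactly as two integers, but its binary floating-point form is necessarily truncated.

```python
from fractions import Fraction

exact = Fraction(1, 3)   # stored exactly as numerator/denominator -- finite space, full precision
approx = 1 / 3           # 64-bit double: only the nearest representable binary value

print(exact)             # 1/3
print(f"{approx:.20f}")  # 0.33333333333333331483 -- the truncation is visible
print(Fraction(approx))  # 6004799503160661/18014398509481984, the rational the double really holds
```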