It's not a Python-specific thing, it's the way computers store floating point numbers in memory. They are not exact values, only approximations up to a certain precision (32-bit or 64-bit). For more detail, check the IEEE 754 standard.
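A quick way to see the approximation for yourself, using plain Python string formatting:

print(f"{0.1:.20f}")  # 0.10000000000000000555, the value actually stored
print(f"{0.2:.20f}")  # 0.20000000000000001110
print(0.1 + 0.2)      # 0.30000000000000004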
in javascript: if ((0.1 + 0.2).toFixed(1) === (0.3).toFixed(1)) { console.log("true"); } else { console.log("false"); } (note: toFixed needs a digit count and returns a string; without the parentheses you'd just be comparing the toFixed method with itself, which is always true)
Don't overcomplicate it, we use the round() function when decimal points come into the picture.
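A minimal sketch of that approach (rounding to 1 decimal place here is just an illustrative choice):

if round(0.1 + 0.2, 1) == round(0.3, 1):
    print(True)
else:
    print(False)

round() returns a float in Python, so the comparison stays numeric.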
Ultimate fact: if a question in an interview looks simple, the answer you should give is the opposite of your first instinct 😂
Hey guys, don't rely on round() alone: in Python it returns a float, not a string, but if you pick the wrong number of digits it can still hide real differences at higher precision. Best way to do it:

Method 1 (works in most programming languages):

epsilon = 1e-9
if abs((0.1 + 0.2) - 0.3) < epsilon:
    print(True)
else:
    print(False)

Method 2 (Python):

import math
if math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9):
    print(True)
else:
    print(False)
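One caveat with method 2: math.isclose uses a relative tolerance by default (abs_tol defaults to 0.0), so comparing a value against exactly 0.0 will always come out False unless you also pass an explicit abs_tol, e.g. math.isclose(x, 0.0, abs_tol=1e-9).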
Brother, that was so much fun, thank you for this question ❤
It's not a Python problem. It's a computer science problem.
In Java you can do this: if (0.1f + 0.2f == 0.3f) System.out.println("True"); else System.out.println("False"); and it actually prints True, because with 32-bit floats the rounded sum happens to land exactly on 0.3f (with double it would print False).
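If you want to reproduce that single-precision behaviour in Python, here is a small sketch using the standard struct module; the helper name to_f32 is just made up for this example:

import struct

def to_f32(x):
    # pack as a 32-bit IEEE 754 float and unpack again,
    # which rounds the 64-bit value to single precision
    return struct.unpack('f', struct.pack('f', x))[0]

total = to_f32(to_f32(0.1) + to_f32(0.2))  # sum rounded to 32 bits
print(total == to_f32(0.3))                # True, matching the Java float result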
use this method (note: math.isclose checks closeness within a tolerance, not exact equality):

import math

def check(a, b, c):
    return math.isclose(a + b, c)

print(check(0.1, 0.2, 0.3))
if 0.1 + 0.2 == 0.30000000000000004:
    print("True")
else:
    print("False")

This would work, though it hard-codes the exact rounding error, so it only holds for these particular numbers.
from decimal import Decimal

if Decimal('0.1') + Decimal('0.2') == Decimal('0.3'):
    print('True')
else:
    print('False')
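Worth noting about the Decimal approach: constructing from the string '0.1' is what makes it exact; Decimal(0.1), built from the float literal, would inherit the float's representation error.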
In the C language too, float + float gives a float value, with the same limited precision.
this is not a Python question but a basic CS question: adding two floating point numbers can introduce a rounding error. So 0.1 + 0.2 evaluates to 0.30000000000000004, where the trailing digits are deterministic (they come from the binary representations of 0.1 and 0.2, not from randomness), and that value is not equal to 0.3
round(0.1 + 0.2, 2) == 0.3 will give True! It's just about floating point numbers being stored with limited precision
This happens because floating-point arithmetic in Python (and most other programming languages) can't represent certain decimal numbers exactly. When you add 0.1 and 0.2, the result is slightly more than 0.3 (i.e., 0.30000000000000004), so the comparison 0.1 + 0.2 == 0.3 returns False
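You can inspect the exact values behind this with the standard fractions module, which shows the exact binary fraction a float stores:

from fractions import Fraction

print(Fraction(0.1))  # 3602879701896397/36028797018963968, not exactly 1/10
print(Fraction(0.1) + Fraction(0.2) == Fraction(0.3))  # False: the stored values differ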
What a mind-blowing fact you've shared!
Simply put, a Python float carries about 15 to 17 significant decimal digits of precision, so 0.100000000000000 and 0.100000000000001 are different values even though they are both approximately 0.1
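A quick sketch to check that precision claim (math.nextafter needs Python 3.9+):

import sys
import math

print(sys.float_info.dig)        # 15: decimal digits a float is guaranteed to preserve
print(math.nextafter(0.1, 1.0))  # 0.10000000000000002, the next representable float above 0.1

So 0.100000000000000 and 0.100000000000001 really do land on different floats.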
It should be (0.1 + 0.2 >= 0.3), which works here because the rounding error happens to be positive, or round to 1 decimal
it's 0.30000000000000004, I found it using Lua. Code:

if 0.1 + 0.2 == 0.3 then
    print("it's correct!")
else
    print("nahh it's false")
end

local a = 0.1 + 0.2
-- plain print(a) would display just 0.3 (Lua's default "%.14g" format),
-- so format with more digits to see the full value
print(string.format("it's actually %.17g", a))