To define a floating-point value, include a decimal point and at least one digit after the decimal point.
Here are some examples:
let floatNum1 = 1.1;
let floatNum2 = 0.1;
let floatNum3 = .1;   // valid, but not recommended
When there is no digit after the decimal point, the number is treated as an integer. Likewise, if the number being represented is a whole number, such as 1.0, it is converted into an integer.
let floatNum1 = 1.;    // missing digit after decimal - interpreted as integer 1
let floatNum2 = 10.0;  // whole number - interpreted as integer 10
JavaScript floating-point values can also be represented using e-notation.
E-notation indicates a number that should be multiplied by 10 raised to a given power.
In JavaScript, e-notation consists of a number (integer or floating-point), followed by an uppercase or lowercase letter E, followed by the power of 10 to multiply by.
let floatNum = 3.125e7; // equal to 31250000
In this example, floatNum is equal to 31,250,000.
E-notation can be used to represent very small numbers, for example 3e-17.
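A quick way to confirm this is to compare the e-notation form against the fully written-out value (the values here are chosen purely for illustration):

```javascript
// 3e-17 means 3 multiplied by 10 raised to the -17th power
let smallNum = 3e-17;
console.log(smallNum === 0.00000000000000003); // true - same value, shorter to write
```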
By default, JavaScript converts any floating-point value with at least six zeros after the decimal point into e-notation.
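You can observe this default conversion by looking at the string form of such values; this sketch assumes a standard-conforming engine:

```javascript
// Six zeros after the decimal point: converted to e-notation
console.log((0.0000003).toString()); // "3e-7"

// Only five zeros: kept in plain decimal form
console.log((0.000003).toString());  // "0.000003"
```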
Floating-point values are accurate up to 17 decimal places but are less accurate in arithmetic computations than whole numbers.
For instance, adding 0.1 and 0.2 yields 0.30000000000000004 instead of 0.3.
These rounding errors make it difficult to test for specific floating-point values.
let a = 0.1;
let b = 0.2;
if (a + b == 0.3) {  // false, avoid!
  console.log("You got 0.3.");
}
Here, the sum of two numbers is tested to see if it's equal to 0.3.
We should never test for specific floating-point values.
The rounding errors are a side effect of how floating-point arithmetic works in IEEE 754-based numbers; they are not unique to JavaScript.
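One common workaround (a sketch, not the only approach) is to compare against a small tolerance instead of testing for an exact value; Number.EPSILON, the difference between 1 and the next representable floating-point number, is often used as that tolerance:

```javascript
// Compare floating-point values within a tolerance rather than exactly.
// The helper name nearlyEqual is our own; it is not a built-in.
function nearlyEqual(x, y, tolerance = Number.EPSILON) {
  return Math.abs(x - y) < tolerance;
}

let a = 0.1;
let b = 0.2;
console.log(a + b === 0.3);           // false - exact comparison fails
console.log(nearlyEqual(a + b, 0.3)); // true  - tolerance comparison succeeds
```

This tolerance works for values near 1; for numbers of very different magnitudes, a scaled tolerance is usually more appropriate.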