Imagine needing to handle very small or large numbers, such as the thickness of a cell membrane or the distance between galaxies. Writing out all the zeros is impractical, not to mention error-prone. This is where scientific notation becomes exceedingly useful.
Scientific notation is a way to express numbers that are too big or too small to be conveniently written in decimal form. It is used by scientists and engineers to make computations simpler and clearer.
How to Express a Decimal in Scientific Notation
Take a typical decimal number that is less than one, such as 0.009834. To express this number in scientific notation, follow these steps:
- Move the decimal point to obtain a number that is at least 1 and less than 10. For 0.009834, this is 9.834.
- Count how many places you moved the decimal point to get that number. Here, we moved it three places to the right.
- Attach a power of ten that indicates the number of places you've moved the decimal point. Since we've moved it to the right, we use a negative exponent.
- Combine the two to express 0.009834 as 9.834 times a power of ten. Ultimately, you write 0.009834 as \(9.834 \cdot 10^{-3}\).
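The steps above can be sketched in code. This is a minimal illustration, not a production formatter; the helper name `to_scientific` is hypothetical, and it assumes a positive, nonzero input.

```python
import math

def to_scientific(x: float) -> str:
    """Express a positive decimal in scientific notation (illustrative sketch)."""
    # The exponent is the power of ten: floor(log10) counts how many
    # places the decimal point must move (negative means moving right).
    exponent = math.floor(math.log10(abs(x)))
    # Dividing by that power of ten yields a coefficient in [1, 10).
    coefficient = x / 10 ** exponent
    return f"{coefficient:g} x 10^{exponent}"

print(to_scientific(0.009834))  # 9.834 x 10^-3
```

For the example from the text, 0.009834 gives an exponent of \(-3\) and a coefficient of 9.834, matching \(9.834 \cdot 10^{-3}\). Note that Python's built-in formatting (`f"{0.009834:e}"`) produces a similar result directly.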