*The mother of one of my students asked me for advice on how many significant figures her son should give in his physics exams. Since it was raining, I decided to answer her question very fully . . .*

So far as **Maths** is concerned, the ruling is that you should give three significant figures unless the question specifies otherwise. In general this is interpreted to mean that an answer is acceptable if it is *capable* of being rounded to three significant figures, even if it hasn’t actually been so rounded.

Thus if the question asks for the area of a circle of radius 1, then

π, which is exact, is OK

3.14 is OK, since it has been rounded to three significant figures

3.14159 is also OK, since it *could* be rounded to 3.14, even though it hasn’t been

but 3.1 is not OK, since it *cannot* be rounded to 3.14
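The Maths rule above can be sketched in code. This is only an illustrative sketch – `round_sig` and `acceptable` are made-up helper names, not anything taken from a mark scheme:

```python
import math

def round_sig(x, sig=3):
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

def acceptable(answer, exact, sig=3):
    """An answer is acceptable if it is capable of being rounded to the
    same value as the exact answer rounded to `sig` significant figures."""
    return round_sig(answer, sig) == round_sig(exact, sig)

area = math.pi  # exact area of a circle of radius 1
print(acceptable(3.14, area))     # True: already rounded to 3 s.f.
print(acceptable(3.14159, area))  # True: could be rounded to 3.14
print(acceptable(3.1, area))      # False: cannot be rounded to 3.14
```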

So far as **Physics** is concerned, the ruling is that you cannot give more significant figures in your answer than were given to you in the question, since you cannot magic up accuracy where it didn’t already exist.

No circle has a radius of *exactly* 1m. If I say a circle has radius 1m, I may mean that it has a radius of 1m *to the nearest cm*, so that its actual radius could be anywhere between 0.995m and 1.005m, giving an area somewhere between 3.11... and 3.17... square metres. Since these two values only agree to one significant figure, I should strictly speaking only give the area as 3 square metres.
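The interval above is easy to verify directly – a small sketch, with the bounds following from the "nearest cm" assumption:

```python
import math

# Radius quoted as 1m "to the nearest cm": the true value lies
# somewhere between 0.995m and 1.005m.
r_lo, r_hi = 0.995, 1.005
a_lo = math.pi * r_lo**2
a_hi = math.pi * r_hi**2
print(f"area between {a_lo:.4f} and {a_hi:.4f} square metres")
# The two bounds agree only in their first significant figure
# (both round to 3), so strictly only "3 square metres" is justified.
```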

Examiners don’t expect this kind of analysis, but it does illustrate the importance of not giving more significant figures in an answer than were available in the data used to calculate it.

Unfortunately, physics examiners don’t seem particularly consistent (even in the same question!) in how they deal with this. The key words appear to be “giving your answer to an appropriate degree of accuracy.” When these words appear in a question (and they don’t always appear) then you need to take particular care not to give too many significant figures.

The difficulty is that it’s not always obvious how accurately information in a question has been given. If you write “1m”, the convention is that this is correct to the nearest whole number. If you’d measured it to the nearest centimetre and found that it came out to 1m, you would write it as 1.00m.

The problem with writing “1m” is that, although this implies you’ve measured to the nearest whole number, it doesn’t tell you how many significant figures you’re working to – 1m has only one significant figure to offer. You might have another measurement of 12m in the same set of data, also presumably measured to the nearest whole number, yet it has *two* significant figures. So if you have a rectangle with sides 1m and 12m, how many significant figures are you using? In a sense you’ve contradicted yourself: if the answer is “two significant figures” then you should have written 1.0m, but that would imply you’d measured that side more accurately (to the nearest 10cm) than the other. And the answer can’t be “one significant figure”, because then you should have written 10m rather than 12m.

Don’t even get me started on 10m! Is that one significant figure (because the actual measurement was 12m and you rounded it) or two significant figures (because the actual measurement was 10.2m and you rounded it)? You can’t tell without someone making it explicit.
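The ambiguity can be made concrete with a little string inspection. This is a rough sketch only – `sig_figs` is a made-up helper, real conventions vary, and scientific notation avoids the problem entirely:

```python
def sig_figs(s):
    """Count the significant figures implied by how a measurement is
    written. Returns None when the notation is ambiguous (trailing
    zeros with no decimal point, as in "10")."""
    digits = s.lstrip("+-")
    if "." in digits:
        # A decimal point makes trailing zeros significant: "1.00" has 3.
        return len(digits.replace(".", "").lstrip("0"))
    stripped = digits.lstrip("0")
    if stripped.endswith("0"):
        return None  # "10" could be 1 or 2 significant figures
    return len(stripped)

print(sig_figs("1"))     # 1
print(sig_figs("1.00"))  # 3
print(sig_figs("12"))    # 2
print(sig_figs("10"))    # None - ambiguous without more information
```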

So the contradictory advice is: in Maths it’s usually OK to give too many, but in Physics it’s sometimes wrong to give too many. In Maths it’s usually wrong to give too few, but in Physics it’s better to give too few than too many – unless you get carried away and give only one significant figure, even though one significant figure may be all that you can honestly claim in the real world.

The resolution of the apparent inconsistency is that mathematicians argue that all of their numbers are exact, so that rounding is more for convenience than anything else. Indeed, pure mathematicians always prefer exact answers where possible – for example they'd rather give the area of the circle as π – even though such answers are impractical in the real world.

Physicists argue that no measurement is exact, so that rounding is important so as not to overstate what we really know.

For mathematicians, there exists such a thing as a circle and it has an exact area. For physicists, there is no such thing as a circle, since you cannot construct a perfect one.

The physicists’ approach is actually important in experimental physics. If you take two measurements and use each in the same calculation, you may end up with two (slightly) different answers. That difference may be telling you that something interesting is happening physically. Or it may simply be because the two initial measurements differed. Even two apparently similar objects will have slightly varying dimensions: not all pound coins are *exactly* the same, for example. If you are careful with how you deal with significant figures, you can distinguish between these two cases – natural variation and interesting physical phenomenon – and so you can tell if the difference in your calculations is indicative of something physically significant.

The role of **Statistics** in all this is to help you decide if – for example – two averages are different because of the natural variation in any set of measurements, or because the sizes of the underlying quantities you are measuring are, in fact, different from each other. (One-pound coins vary slightly in mass, but they have a very different mass from two-pound coins.) We call the latter a “significant difference”. Unhelpfully, significant differences are not necessarily significant in practical terms. *Statistical* significance merely indicates that two things do not have the same value; but the difference between them may be trivial and not of *practical* importance.

For example, it may be that there is a statistically significant difference between the average global temperature measured in 1995 and the average global temperature measured in 2010. (The BBC reported a rise of 0.19°C, describing it as “significant”.) There will certainly be *a* difference because each average will necessarily come from a finite number of measurements, and these will necessarily vary because everything varies, up and down, over time. Statisticians may establish that the difference is *statistically* significant, but all that this will mean is that the true average global temperatures were not the same. (The averages we calculated from our measurements will merely be estimates of the true averages, since to find these we would need infinitely much perfect data.) Of course, the true average global temperatures may indeed not be the same – they may differ by 0.19°C, and to statisticians that is a difference. In practical terms, this difference may only have a negligible effect on the biological, chemical and physical processes on the planet, and so it would not be regarded as *practically* significant.
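A toy simulation makes the distinction vivid. The numbers here are entirely made up (not real temperature data), and the assumption is two large sets of readings whose true means differ by just 0.19°C:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical readings: two large samples whose true means
# differ by only 0.19 degrees C, with a spread of 0.5 degrees.
n = 10_000
a = [random.gauss(14.00, 0.5) for _ in range(n)]
b = [random.gauss(14.19, 0.5) for _ in range(n)]

diff = statistics.mean(b) - statistics.mean(a)

# Welch's t statistic for the difference of two sample means.
t = diff / math.sqrt(statistics.variance(a) / n
                     + statistics.variance(b) / n)
print(f"difference = {diff:.3f} degrees, t = {t:.1f}")
# |t| far exceeds the usual threshold of about 2, so the difference is
# highly *statistically* significant - yet it is still a tiny difference,
# which may or may not matter *practically*.
```

With enough data, almost any real difference, however small, becomes statistically significant; whether it matters is a separate, practical question.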