One thing that struck me when reading this chapter was the comment made about the Boltzmann distribution. At the bottom of page 85 it says the Boltzmann distribution is exact. Normally when I read an equation in a textbook, I assume it is quite accurate, and that in an ideal world it would be exact. But I'd also assume that if I were to make some measurements, they wouldn't agree with the theory perfectly, due mainly to measurement error, but also to small effects that the theory glosses over.
Furthermore, Nelson makes this statement but then does very little to justify it. He mentions that the distribution can be derived from very general principles, to be detailed in a later chapter, but that alone doesn't explain why it is exact: I can derive many things that are only approximations.
To be clear, I do believe that the Boltzmann distribution is exact; I'm not disputing that. I just found the comment a bit peculiar, as it is a strong claim made without much evidence to back it up. I would like to know how we can be so sure about the accuracy of this law.
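For concreteness, here is a small sketch of my own (not from the book, with made-up energy levels) of the discrete Boltzmann distribution P(E_j) = exp(-E_j / k_B T) / Z, where Z is the partition function:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_probs(energies_joules, temperature_kelvin):
    """Return P(E_j) = exp(-E_j / k_B T) / Z for each level E_j."""
    beta = 1.0 / (k_B * temperature_kelvin)
    weights = [math.exp(-beta * E) for E in energies_joules]
    Z = sum(weights)  # partition function normalises the distribution
    return [w / Z for w in weights]

# Two hypothetical levels separated by exactly one k_B*T at 300 K:
T = 300.0
levels = [0.0, k_B * T]
probs = boltzmann_probs(levels, T)
print(probs)  # the ratio probs[0]/probs[1] is e ≈ 2.718
```

Note the probabilities themselves are dimensionless here, since the energy levels are discrete; the exactness question is about whether this functional form, not some correction to it, is what nature follows.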
On a slightly different topic: I have used probability distributions before, and for a discrete distribution I would attach no units to my probabilities, while for a continuous one I'd use inverse measurement units, but I had never realised that the different kinds of distributions carry different units.
It wasn't mentioned specifically, but I suspect that a probability distribution over two continuous random variables would be a two-dimensional probability density, with units of inverse unit A times inverse unit B. But would a joint distribution over one continuous and one discrete random variable just have units of inverse A? Is a joint distribution over one continuous and one discrete random variable even possible?
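On that last question: mixed joint distributions do exist, and a Gaussian mixture is a standard example (this sketch is my own, not from the book; the weights and parameters are made up). The joint object p(x, k) = P(k) · p(x | k) is a probability in the discrete index k and a density in the continuous variable x, so it carries units of inverse x only, and summing over k while integrating over x gives 1:

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density; carries units of 1/x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Joint distribution over a discrete component k and a continuous value x:
weights = [0.3, 0.7]              # dimensionless P(k), summing to 1
mus, sigmas = [0.0, 5.0], [1.0, 2.0]

def joint(x, k):
    """p(x, k) = P(k) * p(x | k); units of inverse x only."""
    return weights[k] * normal_pdf(x, mus[k], sigmas[k])

# Check normalisation: sum over k, integrate over x (crude Riemann sum).
dx = 0.01
total = sum(joint(i * dx - 20.0, k)
            for k in range(2) for i in range(4000)) * dx
print(total)  # ≈ 1.0
```

So the units work out just as guessed: the discrete index contributes a dimensionless factor, and only the continuous variable contributes inverse units.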