Saturday, August 7, 2010

With Great Power Comes Great Responsibility

There was one comment that both puzzled and annoyed me in Chapter three. At the very beginning of the chapter, Nelson says that not being able to predict the actual measured values of something is a blessing in disguise. He explains this by saying that no one would want a list of the positions and velocities of all the air molecules in a room, as this would be an absurd amount of useless information.

But I disagree with him here. Granted, measuring these values for the air molecules in a room would be particularly boring, but if you could apply the same idea to a useful situation, such as the folding of a protein, it would be very valuable. As for the comment about the amount of information, I thought that was what computers were for; maybe I'm just lazy though.

I understand that his comment was probably an interesting way of introducing probability distributions, but I still think that, given the option, both biologists and physicists would opt for the ability to make accurate predictions rather than probabilistic estimates.


  1. I think probabilistic estimates would be just fine. Even if we had data describing the initial state of every particle in a protein, probabilistic predictions are the best we could hope for. The problem is not so much the storage of a large amount of data; it is the time taken to do any calculations with it, and the accuracy of those calculations.

    A problem in physics that receives much attention is the 3-body problem: given 3 particles, try to analytically predict their motion under gravity (or, if they are charged particles, electromagnetic forces). Currently, there is no general analytical solution. This means that if we want to predict the motions of the particles, we must use numerical techniques that only approximate their motion.
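    The flavour of those numerical techniques can be shown in a few lines. This is a purely illustrative sketch (made-up masses, velocities and time step, and a crude forward-Euler update rather than anything a real simulation would use); the small approximation error made at every step is the point:

```python
import math

G = 1.0     # gravitational constant in arbitrary illustrative units
dt = 0.001  # time step; smaller steps reduce, but never remove, the error

# Three bodies: [x, y, vx, vy, mass] (hypothetical values)
bodies = [
    [0.0, 0.0, 0.0, 0.1, 1.0],
    [1.0, 0.0, 0.0, -0.5, 0.5],
    [0.0, 1.0, 0.5, 0.0, 0.5],
]

def step(bodies, dt):
    # Sum the gravitational acceleration on each body from the others.
    acc = [[0.0, 0.0] for _ in bodies]
    for i, bi in enumerate(bodies):
        for j, bj in enumerate(bodies):
            if i == j:
                continue
            dx, dy = bj[0] - bi[0], bj[1] - bi[1]
            r = math.hypot(dx, dy)
            a = G * bj[4] / r**3
            acc[i][0] += a * dx
            acc[i][1] += a * dy
    # Forward-Euler update: each step introduces a small error,
    # and over many steps those errors compound.
    for b, (ax, ay) in zip(bodies, acc):
        b[0] += b[2] * dt
        b[1] += b[3] * dt
        b[2] += ax * dt
        b[3] += ay * dt

for _ in range(1000):
    step(bodies, dt)
```

    Real n-body codes use more careful integrators and tiny time steps, but the underlying situation is the same: the trajectories are only ever approximated.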

    Now extend this problem to a protein. If we can't accurately predict the motion of 3 particles, how can we hope to accurately predict the motions of 3,000 atoms in a protein? Or 10,000 particles, if we include the water molecules which are necessary for it to fold?

    So even if we did have perfectly accurate initial data on the positions and velocities of the atoms in a protein, we would lose the accuracy immediately if we tried to do any calculations with it.
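    This loss of accuracy can be demonstrated with a far simpler system than a protein. Here is a small sketch using the chaotic logistic map (the starting value and iteration count are chosen purely for illustration): two runs whose initial conditions differ by only one part in a billion quickly stop agreeing.

```python
# Two runs of the chaotic logistic map whose starting points differ
# by one part in a billion.  The difference grows roughly
# exponentially, so perfectly stored initial data stops being useful
# after a few dozen steps.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9
max_diff = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_diff = max(max_diff, abs(a - b))

print(max_diff)  # vastly larger than the initial 1e-9 separation
```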

    I think scientists are pretty accustomed to making predictions with data that comes from a probabilistic estimate. Any measurement made has intrinsic errors; many theories make good estimates but do not include every little effect; and very, very few mathematical problems have analytical solutions. In the remaining cases, we have to settle for approximate numerical solutions.

  2. I agree with Mitch on this. As he said, the difficulty is by no means in storing the data; it is in making effective use of it. Think about doing calculations with, say, 10,000 particles: you need detailed programming and parameters (such as the boundaries of the environment), and the calculation has to run a simulation between each step-wise motion of the particles. The computer has to work out where each particle is headed, which particle or wall it will interact with next, and what direction and speed it will leave with, and performing all those calculations across a particular time frame of the system would take a considerable amount of time. If you look at the simulations that websites provide as examples of gas motion, they only ever use around 10 molecules, and they don't let the clip run for long.

    The other important point to make, in relation to using this information for protein folding, is that alongside the momentum-based interactions there are other types of interactions involved, such as steric hindrance, hydrophobic/hydrophilic interactions, van der Waals forces, electrostatic interactions, hydrogen bonds and so on, which would all have to be included in the calculations and make the calculation-time problem even more problematic.
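    The point about calculation time can be put in rough numbers: with a naive all-pairs approach, every step has to consider every pair of particles, so the work grows quadratically with the particle count. A small sketch (the particle counts are just examples):

```python
# Number of pairwise interaction checks a naive simulation step needs:
# every particle against every other, i.e. n*(n-1)/2 pairs.
def pair_checks(n_particles):
    return n_particles * (n_particles - 1) // 2

for n in (10, 100, 10_000):
    print(n, pair_checks(n))  # 45, 4950, and 49995000 checks
```

    Going from the 10-molecule website demo to a 10,000-particle protein-plus-water system multiplies the per-step work by about a million, before any of the extra interaction types are even included.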

    On the knowledge of a particle's velocity and position, though: is that even possible? If I remember correctly, the Heisenberg uncertainty principle prevents us from knowing a particle's velocity and position at the same time. I've always had some slight doubt about this principle when it comes to working with just a few particles, but with something like a protein it certainly seems impossible to know that information for every particle.

  3. I'm not too up to date on quantum physics, but I'll see if I can explain the Heisenberg uncertainty principle. Basically, we can't see atoms. The only way we can detect the position of an atom (or an electron or some other sub-atomic particle) is by it hitting one of our instruments (or by something else hitting it and hitting one of our instruments). But by taking the measurement, we change the particle's speed or position. We can never measure its position and speed at the same time, and we cannot measure either of those without changing at least one of those measurements.

    But this is a very unsatisfying explanation, so let me make it a little tighter. Obviously, as I mentioned above, scientists can never hope to measure anything perfectly accurately, so a principle that says we can never measure position and momentum exactly seems to be nothing new. What is different about this principle is that the certainty in one measurement is inversely related to the certainty in the other. At ordinary accuracies, improving a position measurement doesn't usually affect the accuracy of the momentum measurement. But as the product of the two uncertainties approaches ħ/2 (on the order of 10^-34 J·s), any further gain in position accuracy must come at the cost of accuracy in momentum.

    The uncertainty principle is not just a vague statement about uncertainty. It is a mathematically representable principle: Δx·Δp ≥ ħ/2. It was not fit retrospectively to existing data; it was discovered as a consequence of representing particles as a wave function. So if Heisenberg's uncertainty principle is disproven, then Schrödinger's wave function model of particles is also flawed.
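    The bound is also easy to evaluate numerically. A quick sketch, assuming an illustrative position uncertainty of about one ångström (roughly the size of an atom):

```python
# Minimum momentum uncertainty implied by delta_x * delta_p >= hbar/2.
hbar = 1.054571817e-34  # reduced Planck constant, J*s (CODATA value)

def min_delta_p(delta_x):
    return hbar / (2.0 * delta_x)

delta_x = 1e-10  # ~1 angstrom, an illustrative atomic length scale
print(min_delta_p(delta_x))  # about 5.3e-25 kg*m/s
```

    Tiny as that number looks, for a single light atom it corresponds to a non-negligible velocity spread, which is why the principle matters at atomic scales and is invisible at everyday ones.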

    I suppose the point I am trying to make (not as convincingly as I hoped) is that Heisenberg's uncertainty principle is as valid for one particle as it is for many.

    I also wanted to say that the uncertainty principle does not apply just to a particle's position and momentum. It also applies to other pairs of measurements of a particle, such as energy and time, and angular position and angular momentum.

  4. The uncertainty principle is often taught as something inherently and fundamentally quantum mechanical; it is often not mentioned that there is also a CLASSICAL uncertainty principle. In fact there is: the Cramér-Rao lower bound on the variance of an estimator. Here's a wikipedia article:

    There has been interesting work establishing that the Schrödinger equation can be derived variationally within estimation theory. See, for example:

  5. As an aside, Alan and Megan are good people to talk to about this, since they are actually in the business of simulating the atomistic motion of e.g. proteins.