Tuesday, August 31, 2010

The Statistical Postulate

The Statistical Postulate (Idea 6.4) states that:

"When an isolated system is left alone long enough, it evolves to thermal equilibrium. Equilibrium is not one particular microstate. Rather, it's that probability distribution of microstates having the greatest possible disorder allowed by the physical constraints on the system."

Many of the ideas in thermodynamics leave you wondering how anything exciting can happen once a system is at equilibrium. However, if you study a subsystem such as a single molecule, the fluctuations that occur sample a distribution of states, some of which will be high in energy.

This leaves us with the idea that, at equilibrium, the total energy of an isolated system does not fluctuate because there are no more inputs into the system. However, energy can still be transferred within and between subsystems at equilibrium.

Monday, August 30, 2010

2 Apples, a Peach and a Pear. Measuring the disorder of a sequence.

(p197 of the text.)

M, possibilities within the pool that you are drawing from = 3 (there are three different things I can draw: apple, peach or pear)
N, length of the sequence = 4
Ω, total possible sequences = N! / (N1!N2!N3!)
Where N1, number of apples in pool = 2
N2, number of peaches in pool = 1
N3, number of pears in pool = 1

Therefore Ω = 4! / 2!1!1! = 12
List of possibilities (AABC , AACB, ABAC, ACAB, ABCA, ACBA, CABA, BACA, BCAA, CBAA,CAAB, BAAC) = 12
I, number of bits = K[ln N! - Σj ln Nj!] = K ln Ω = ln(12)/ln(2) ≈ 3.59 bits (with K = 1/ln 2)

It can be seen that a sequence of 2 apples, a peach and a pear is not perfectly disordered: a perfectly disordered sequence of N = 4 draws from M = 3 equally likely possibilities would carry N log2 M = 4 log2 3 ≈ 6.34 bits, and 3.59 < 6.34.
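As a quick check of the arithmetic above, here is a minimal Python sketch (the symbols and variable names are just illustrative):

```python
from math import factorial, log2

# Multiplicity of a sequence of N = 4 draws: 2 apples, 1 peach, 1 pear
N, counts = 4, [2, 1, 1]
omega = factorial(N)
for n in counts:
    omega //= factorial(n)   # Omega = N! / (N1! N2! N3!) = 12

print(omega)                 # 12
print(log2(omega))           # I = log2(Omega) ~ 3.59 bits
print(N * log2(3))           # perfectly disordered: N log2 M ~ 6.34 bits
```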

Sunday, August 29, 2010

Entropic forces

Chapter 6 considers ways in which a system can be 'open'. In Section 6.5.2 the effect of an external mechanical force is considered: a rod of length L and area A is pushed into the system with fext applied along L, and the free energy change depends only on the -TS term.

The entropy here comes from the Sackur-Tetrode formula. We find that N, T, A and the total energy are all constant, which means the free energy change depends only on L, through the volume V = AL. The force that causes this change is found from f = T(dS/dL); for an ideal gas this gives f = NkBT/L, which is just the pressure of the gas times the area.
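As a rough numerical sketch of that ideal-gas entropic force (the values of N, T and L below are arbitrary illustrations, not from the text):

```python
kB = 1.381e-23            # Boltzmann constant, J/K

N, T, L = 1e20, 300, 0.1  # molecules, kelvin, metres (illustrative)
f = N * kB * T / L        # entropic force f = T dS/dL = N kB T / L
print(f)                  # ~4 N for these numbers
```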

Remembering that when the free energy is minimised no more work can be extracted, we can find the maximum possible extractable work to be F - Fmin.

This force will be important when considering the apparent forces that cause proteins to fold and emphasises the usefulness of the concept of free energy.

The following section discusses the process of free energy transduction (of which this was one example) and states that the most efficient transduction happens when the energy is released in small, controlled steps. A good example of this is the electron transport chain in ATP synthesis.

Zeroth Law

"Two boxes of gas in thermal equilibrium are most likely to divide their energy in a way the equalizes their temperature"

This fact has been known to me since high school, but I had never actually heard it referred to as the Zeroth Law of Thermodynamics. (By the way, does anyone know exactly why they called it the Zeroth Law, as opposed to the Fourth Law? The closest reasonings I can think of are that its importance precedes the well-known three and their order couldn't be rearranged, or that it's been labelled in the same way that computer data is, starting at position 0.)

So, despite this being quite an easy concept to grasp, it did open up a series of discussions in the text, so I felt it wise to bring it up in a blog post. The fundamental definition of temperature (the focus of part of our homework) is T = (dS/dE)^-1.
From the text, "we define temperature abstractly as the quantity that comes to equal values when two subsystems exchanging energy come to equilibrium." This is where the equation came from, and it makes sense: for two systems at a point of zero net energy flux, we would expect a shared, well-defined temperature.

Moving on from here, the text led us into the ideas of extensive quantities (such as entropy, which doubles if the system is doubled, halves if the system is halved, etc.) and intensive properties (such as temperature, which remains the same even if the system is doubled or halved).

My last point about temperature equilibrium, however, is that size matters (that's right: anyone who says size doesn't matter obviously hasn't come across thermal equilibrium before). It's important to remember that although two objects of different temperature will come to equilibrium, the final temperature is not the plain average of the two; it is the average weighted by their respective sizes (heat capacities). So if you had two of those medical packs that you can either heat up or freeze, and you put them together after heating one and cooling the other, they would settle near the average of their temperatures. However, if you took the heated pack and placed it on an iceberg, the hot pack would cool down to a temperature almost that of the iceberg, and the temperature of the iceberg would be raised only minimally.
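A minimal sketch of that weighted average, assuming simple heat exchange with constant heat capacities and no phase changes (all numbers illustrative):

```python
def equilibrium_T(C1, T1, C2, T2):
    """Final shared temperature, weighted by heat capacities (J/K)."""
    return (C1 * T1 + C2 * T2) / (C1 + C2)

# Two identical gel packs: the plain average, 305 K
print(equilibrium_T(1.0, 330, 1.0, 280))
# Hot pack on an iceberg: the iceberg barely budges, ~270 K
print(equilibrium_T(1.0, 330, 1e6, 270))
```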

Disorder and Predictability

"The disorder in a sequence reflects its probability"

This was a good quote from the text to focus on, to help understand disorder/entropy better. As the text states, the predictability of the weather is higher because each state recalls the previous state, and as a result the disorder is lower. Likewise, the disorder of flipping a coin is higher, as the predictability is lower: each flip does not recall the previous state.
Just as a side note though: while I do see this as a good example for understanding the relationship between disorder and predictability, in a real-life context, if a person has a consistent style of flipping a coin, we might be able to infer higher predictability for the next flip once they start to show consistent trends.

The next point brought up by the text was the idea of correlated and uncorrelated events and how their collective amounts of disorder are related. For correlated events (such as reading lecture notes and listening to the Lectopia recording of the same notes), one event can predict the other, so the collective disorder cannot be equal to the sum of the two separate events. However, for uncorrelated events (such as throwing a textbook out the window in frustration at failing an exam to see how far it goes, and downing various types of shots just because it's a Friday night), because neither event can predict the other, the collective disorder WILL be equal to the sum of the disorder of the separate events.

We started off with the amount of disorder per message being:
I = N log2(M)
which, after applying a number of conditions listed on pages 196-197, brought us to Shannon's formula:
I/N = -K Σ(j=1 to M) Pj ln Pj
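A minimal sketch of Shannon's formula with K = 1/ln 2, so the answer comes out in bits (the example distributions are mine):

```python
from math import log

def bits_per_symbol(probs):
    """Shannon's formula I/N = -K * sum_j P_j ln P_j, with K = 1/ln 2."""
    K = 1 / log(2)
    return -K * sum(p * log(p) for p in probs if p > 0)

print(bits_per_symbol([1/3, 1/3, 1/3]))    # ~1.585 = log2(3): maximal disorder
print(bits_per_symbol([0.5, 0.25, 0.25]))  # 1.5 bits: more predictable
```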

Friday, August 27, 2010


For Equation 6.6 Nelson briefly mentions a very important constant: the Planck constant. The text states vaguely that it comes from quantum mechanics and that it is used in the Sackur-Tetrode formula because it has the correct units. Here is the brief but real story.

When the young scientist Max Planck was working on developing a formula for predicting black-body radiation, he happened across the concept of quantised energy. He found that heat radiation was produced in multiples of a discrete and very small number. That number is now known as the Planck constant, and it has the value ~6.63 × 10^-34 J·s.
Planck derived the equation E = hν by assuming that radiated heat was produced by atoms oscillating in harmonic motion between discrete energies. The difference between the energies was characterised by the frequency ν and the energy constant h. Planck's work was clear evidence against the previous assumption that energy was a continuous variable. His work led to Einstein's proposal of light existing as photons as well as Schrödinger's quantum wave mechanics.
The Planck constant also applies to energies expressed in terms of angular frequency. The reduced Planck constant (or h-bar) is equal to h/2π and is used in the equation E = ħω.

For more details see:
Radiation Physics for Medical Physicists
Planck Constant Wiki

Week 5 Discussion Session Notes

This week we were graced by the presence of Alan Mark, and the session was run by Ross. It started off with the topic of water, in particular how the equations of liquid motion seem inaccurate on a macroscopic level, and how CGI water seems unrealistic because of this issue.
We then proceeded to talk about the existence of non-Newtonian fluids and how that phenomenon arises. Non-Newtonian behaviour comes from a non-linear relationship between shear stress and strain rate, making it impossible to assign a constant viscosity to the system. It was also mentioned that these fluids are more heterogeneous in nature.
Moving on from there, it was reiterated that a small Reynolds number gives laminar flow, while a big Reynolds number (greater than or equal to about 1000) gives turbulent flow. It was agreed that the Reynolds number and viscous friction equations were the most important ones from the chapter.
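As a sanity check on those regimes, a quick sketch of the Reynolds number Re = ρvL/η (the swimmer and bacterium numbers are rough, illustrative values):

```python
def reynolds(rho, v, L, eta):
    """Re = rho * v * L / eta (dimensionless)."""
    return rho * v * L / eta

# In water: rho ~ 1000 kg/m^3, eta ~ 1e-3 Pa.s
print(reynolds(1000, 1.0, 1.0, 1e-3))     # swimmer: ~1e6, turbulent
print(reynolds(1000, 30e-6, 1e-6, 1e-3))  # bacterium: ~3e-5, laminar
```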
Next up were discussions of bacterial motion, and how it is a case of directed motion vs Brownian motion. Note that Brownian motion still exists even in laminar flow.
In the diagram of laminar water flowing around the circle, the point was raised that as the water collides with the circle, tiny amounts of turbulent flow could be occurring at the points of collision, due to the force of the water hitting the circle.
A plane, despite its size, achieves laminar flow, and doesn't require turbulent flow to take off from the ground. When a plane shakes and the pilots call out "uhh, we're experiencing some turbulence", that's the plane experiencing turbulent flow. On a side note, a helicopter is unable to travel faster than the speed of sound, or else it will break.

Up next, a good portion of the session was spent discussing the ideas of reciprocal and periodic motion, flagella, cilia and other movement related topics.
The flapper design works only if it is flexible (like a diver's flipper).
During reciprocal motion, Alan told us that laminar or turbulent flow shouldn't matter due to the random motion of surrounding particles and their random collisions with the bacterium or cilia, and that it just so happens that they exist in laminar flow.
While discussing the flagellum and its motor, it was revealed that the motor has two directions of rotation. One direction causes the cell's flagella to cluster into a bundle and work together, while the other causes them to fly apart and the cell to tumble. This run-and-tumble technique is how bacteria travel more effectively and competitively to their food source, since the dominant means of gathering food is diffusion.

Next up, and a charming reminder for Heather: the mucus released from one's lungs comes from the stationary cells that line the lung walls and collect waste material within the lungs.
Finally, we talked about the torque on unravelling DNA, how there was worry that the force of unwinding DNA would cause it to break up in a viscous liquid (not true), and how biological movies are inaccurate, i.e. "walkers".

Remember: look at Purcell's "Life at Low Reynolds Number".

S(E) = kB ln Ω(E)

One equation I liked from this chapter is S(E) = kB ln Ω(E). This equation relates the number of possible states available to a particular system (Ω) to a physically measurable quantity (entropy, S). I think there is something very profound here. Because the number of states available to most physical systems we encounter is so huge, it is hard to imagine different systems having more or fewer available states. So the idea of being able to quantify the number of states available to a system, and to measure it by measuring the entropy of the system, really impresses me.

This equation can be used to get an idea of the number of microstates available to a system. Not only has the natural log of the number of microstates been taken, but the result is then multiplied by a constant with an order of magnitude of 10^-23. If we rearrange this equation to get Ω by itself, we find Ω(E) = exp(S(E)/kB). A system with entropy 1 J/K would have approximately e^(7.24×10^22) microstates, which is bigger than any calculator I can find will calculate. To give a comparison, exp(7.24×10^3) ≈ 10^3144.

So is 1 J/K a ridiculous amount of entropy? Let's do some rough calculations. We know that T = (dS/dE)^-1 ≈ (∆S/∆E)^-1 = ∆E/∆S, so ∆S = ∆E/T. Consider a beaker of one litre of water (i.e. 1000 g). An energy change of 4.18 kJ increases the temperature by 1 K; this applies to liquid water, and is constant enough across the liquid range for this rough investigation. If we say the water is at 300 K (27 °C), then the change in entropy for this system is ∆S = 4.18 kJ/300 K ≈ 14 J/K. So entropy of 1 J/K is not a ridiculous amount of entropy.

This is only a rough calculation, but it shows that 1 J/K is, on the macro scale, a pretty small amount of entropy. However, it translates to an absolutely mind-blowing number of microstates for the system. This also explains why worrying about statistical fluctuations on the macro scale is a foolish exercise.
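Expressing that unimaginable number as a power of ten is a one-liner:

```python
from math import e, log10

kB = 1.381e-23              # J/K
S = 1.0                     # entropy, J/K

exponent = S / kB           # Omega = exp(S / kB) = e^(7.2e22)
print(exponent * log10(e))  # ~3.1e22: Omega is 10 to this power
```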

Heat engines

Before I make my post, I'd like to draw our attention to the 'Volunteer Needed' post made by Ross. I meant to bring this up at the meeting on Wednesday, but I forgot. I will be at university working on my BIPH3000 project on Thursday afternoon, so I will be around at the time of this meeting. I would rather continue working on my project at that time, but if it is not convenient for anyone else to attend, I will volunteer to attend this meeting on our behalf. Let me know if any of you would rather do this, though.

This chapter is complex. I think I'd understand the derivations better if all the variables for a specific measurement, like entropy, were made explicit, and proper partial derivatives were taken; I have trouble understanding what equations that look like dS = dq/T mean. However, there was one part I found quite easy to understand: the comparison of the biosphere to a heat engine (Section 3.5.4). A heat engine is a physical system that converts heat energy to mechanical energy. This concept sounds heretical in the context of the first chapter, which suggests that heat is the lowest form of energy: energy can change forms, but each time it does, some of it is inevitably lost as heat. So a heat engine seems to suggest there is a way to convert this heat energy back into useful energy.

However, heat engines can exist. While heat is being converted to a useful form of energy, something is being irreplaceably lost: the order of the system. For a heat engine to work there must be two thermal reservoirs at different temperatures. Whatever the action of the motor, the cycle has four necessary stages. Initially, heat energy from the warmer reservoir is applied to another component. This component absorbs the heat energy, increasing its entropy, and can then be used to perform work. For the cycle to be repeatable, the component must be cooled down again, so it is removed from contact with the warm reservoir and placed in thermal contact with the cooler reservoir. Decreasing the temperature decreases the entropy, so the component's entropy and temperature are restored to the original state. The final step is moving the component from the cool reservoir back to the warm one.

During this cycle, heat is transferred from the warm reservoir to the cooler one. This is where the irreversible exchange occurs. Once enough heat energy has moved from the warmer reservoir to the cooler one to make the temperatures of the reservoirs equal, there is no way to return the heat to the warmer reservoir from the cooler one (without performing work from outside the system). So what was learnt in the first chapter is not violated: heat can be made to do some useful work, but some order is lost (in this case, the original ordered distribution of heat energy).
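The standard bound on how much of the transferred heat can become work in such a cycle is the Carnot efficiency, 1 - Tc/Th (the textbook result for an ideal reversible engine; the reservoir temperatures below are illustrative):

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum fraction of heat convertible to work: 1 - Tc/Th (kelvin)."""
    return 1 - T_cold / T_hot

print(carnot_efficiency(600, 300))    # 0.5 for steam-engine-like reservoirs
print(carnot_efficiency(5800, 300))   # ~0.95 for the Sun's surface vs Earth
```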

I liked that this section also compared the heat engine to the biosphere. I don't think the analogy is exactly true, as it is not just heat energy that the sun radiates, and it is especially not heat energy that excites the electrons in chlorophyll. But the comparison does make it easier to understand how the sun can (potentially) cause order in the rest of the solar system, as the distribution of solar energy in the solar system is at very low entropy.

Wednesday, August 25, 2010

Next assignment - due Friday September 3

Nelson chapter 6

Your turn 6A, 6B, and 6D
Questions 6.6 and 6.7

Cilia vs Flagella

I was having another read over the chapter before class, and I remembered that there was one section that I had planned to write a post on, but hadn't, which is the difference between flagella and cilia.
From biology lessons, we know that there are a number of differences that exist between eukaryotic cells and prokaryotic cells, and one of these is the difference between appendages for motion.
Most eukaryotic cells use cilia: 5-10 µm long, 200 nm in diameter, and whip-like. Non-stationary cells use them for motion, while stationary ones use them to sweep food past or pump fluid. There are internal motors which allow for cross-motion of the cilia, and the motion is periodic. The propulsion is generated from a result of the Reynolds number (statement 5.15 in the textbook). On another note, the power stroke moves the cilium perpendicular to its axis, and the recovery stroke parallel to it. This asymmetry means only part of the power stroke's motion is undone by the recovery stroke, allowing an overall net displacement after each stroke cycle.
Bacteria, on the other hand, use flagella, which unlike cilia are not whip-like and loose but rigid twisted rods only 20 nm thick. Propulsion is attained through rotary motion driven by the flagellar motor, just like a boat motor.

Tuesday, August 24, 2010

DNA and viscous drag

One of the themes in Biological Physics has been: how does a complex molecule such as DNA maintain its function under constant bombardment by thermal forces? One of the consequences of this thermal motion at the nanometre scale is the viscosity experienced at low Reynolds numbers.

Section 5.3.5 considers the viscous drag at the DNA replication fork. During DNA replication the double helix must be unwound, which raises the question: how significant is the viscous drag caused by DNA rotation?

Rotational drag on the DNA is expressed as a torque:
τ = -C ω η R² L, with units of N·m
where C is a constant, ω the rotation rate, η the viscosity, R the radius and L the length of the DNA.

To find the frictional work done per turn, multiply the torque by the angle of one full revolution, 2π:
W_frict = 2πC ω η R² L

DNA polymerase turns the DNA at a frequency of about 600 Hz; plugging this into the equation gives the energy needed to turn the DNA through one rotation, a value of ~4.7×10^-17 J·m^-1 times the length. Compare this to the energy of ATP hydrolysis, ~8.2×10^-20 J. Considering that the length of the DNA molecule being turned is tiny, the energy from ATP is easily enough to overcome this viscous force.
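Using the numbers quoted above, a rough sketch of the comparison (the 10 nm length is an arbitrary illustration):

```python
W_per_turn_per_metre = 4.7e-17   # J per revolution, per metre of DNA (above)
E_ATP = 8.2e-20                  # J released by one ATP hydrolysis (above)

L_dna = 10e-9                          # say 10 nm of DNA being rotated
W_turn = W_per_turn_per_metre * L_dna  # ~4.7e-25 J per turn
print(W_turn / E_ATP)                  # ~6e-6 ATP per turn: negligible
```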

Monday, August 23, 2010

Section 5.3.1 Swimming and Pumping

"Suppose you flap paddle, then bring it back to its original position by the same path. You then look around and discover that you have made no net progress."
I never actually liked that example of what an organism without a motor-like function would experience when trying to move. This is just a side note before I move on to the actual section of the text, but the no-net-motion claim seems to rely on the organism stopping between each motion. When I've gone swimming and tried consecutive forward and backward strokes, I've never undergone zero net motion unless I allowed myself to stop between the strokes: the first stroke moved me forward, the second slowed me down and moved me slightly back, the third moved me further forward than where I was after the first, and the cycle continued. (The resolution, of course, is that a human swimmer is at high Reynolds number, where inertia carries you between strokes; a bacterium at low Reynolds number has no such inertia, which is why reciprocal motion gets it nowhere.)

Ok, now on to the section of the chapter.
As I read it, the backwards-and-forwards zero-net-motion assumption had been made, and it actually led scientists to the discovery of how bacterial flagella work. The discovery came about as many theories have, starting as a heretical idea. However, Berg and Anderson's rotary-motion theory was proven by Silverman and Simon, who used mutant E. coli missing most of their flagella: when they anchored the flagellar stumps to a cover slip, the bacteria themselves started rotating. I can certainly agree with the text when it says that the flagellar motor is a marvel of nanotechnology.

It is worth mentioning, though, that the text goes on to say that E. coli does in fact follow the stop-and-go movement pattern (that is, it stops moving before it makes its next stroke), which indicates to me that turning while moving isn't an easy task for the bacterium.

Finally, the last part of this blog: the uses of movement by the bacterium are foraging, where the cell constantly "tastes" the environment and moves towards the highest concentration of food; attack, where the cell accelerates to grab its food before it escapes; and escape, where the cell high-tails itself out of danger.

Non-Newtonian Fluids

Refer to equation 5.4 on page 164, the viscous force in a Newtonian fluid, planar geometry.

For a fluid to be considered Newtonian, the force must be proportional to the velocity gradient as in that equation. When a fluid doesn't follow this principle, we are required to call it a non-Newtonian fluid. I'm sure we've all heard of this phenomenon before (the easiest natural example is a mixture of corn flour and water). What is really cool about non-Newtonian fluids is the way force acts upon the system.
The application of a small force leaves us in a laminar-flow regime, where we can move the liquid around as a liquid. When we move up to a large force, however, we don't get turbulent flow; instead (for a shear-thickening fluid like this one) the mixture behaves like a solid.
In a non-Newtonian fluid there is a non-linear relation between shear stress and strain rate, and a single viscosity coefficient cannot be defined.
Common examples of these fluids are oobleck (cornflour and water), glurch (borax and white glue), ketchup, shampoo, paint, blood and silicone polymer suspensions.
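The textbook doesn't give a model for this, but a standard one is the Ostwald-de Waele power-law fluid, where stress grows as a power n of the strain rate; n < 1 is shear-thinning (ketchup, paint) and n > 1 shear-thickening (oobleck). A sketch with made-up coefficients:

```python
def power_law_stress(K, n, strain_rate):
    """Ostwald-de Waele model: stress = K * (strain rate)^n.
    n = 1 recovers a Newtonian fluid with viscosity K."""
    return K * strain_rate ** n

for rate in (0.1, 1.0, 10.0):
    print(rate,
          power_law_stress(1.0, 1.0, rate),   # Newtonian
          power_law_stress(1.0, 0.5, rate),   # ketchup-like, thins
          power_law_stress(1.0, 2.0, rate))   # oobleck-like, thickens
```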

Vascular Networks

Section 5.3.4 discusses vascular networks. It states that large organisms, unlike bacteria, cannot rely on diffusion to feed themselves. This is due to the relation which describes when "stirring" (anything other than diffusion) is favourable, eqn 5.16: v > D/d, where v is the velocity of stirring, D is the diffusion constant and d is the distance. In larger organisms the ratio D/d is small (due to the large increase in d).

The section starts by modelling the vascular system in a simplified situation: steady (zero acceleration), laminar (frictional forces dominate) flow of a Newtonian fluid through a straight cylindrical pipe of radius R, and derives the velocity as a function of the radial distance r from the centre of the pipe. Unfortunately I couldn't follow the derivation, but the section ends with the Hagen-Poiseuille relation, which describes the volume flow rate: Q = πR⁴p/(8ηL), where L is the length of the pipe and η is the viscosity. The general form is Q = p/Z, where p is the pressure and Z = 8ηL/(πR⁴) is the hydrodynamic resistance. This lets us realise an important point: due to the R⁴ term, the resistance falls rapidly as the radius grows. This can explain why the vascular system of humans is not just one straight cylindrical pipe: a vessel can dilate slightly and thus quickly increase the volume flow rate.
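Since the R⁴ dependence is the punchline, here is a minimal sketch of it (the vessel dimensions and pressure drop are made up):

```python
from math import pi

def flow_rate(R, p, L, eta):
    """Hagen-Poiseuille: Q = pi * R^4 * p / (8 * eta * L)."""
    return pi * R**4 * p / (8 * eta * L)

# Same pressure drop, 10% wider vessel:
Q1 = flow_rate(1.0e-3, 100.0, 0.01, 3e-3)
Q2 = flow_rate(1.1e-3, 100.0, 0.01, 3e-3)
print(Q2 / Q1)   # 1.1^4 ~ 1.46: 46% more flow for 10% dilation
```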

Volunteer needed

We are about to arrange another Student-Staff Liaison Committee meeting this year -- for Semester 2, 2010, and we need student representatives (volunteers).
Students need to nominate their representatives (2 for first year courses, and 1 for each second, third, and fourth year courses) by August 27.
The purpose of the meeting is to provide a mechanism for student suggestions and proposals for changes to course structures, course content, prerequisites, timetables, assessment requirements, presentation modes etc. to be communicated to teaching staff for possible implementation.
Responsibilities of the representative are: (1) to collect student suggestions and proposals (if any) for changes to course content, prerequisites, timetables, assessment requirements, presentations etc., and (2) to present them at the meeting with the liaison committee for possible implementation.
The meeting dates and times are:
1st year courses: 5pm Wed, 1 Sep 2010.
2nd, 3rd, and 4th year students: 5pm Thu, 2 Sep 2010.
The meeting will run in the Interaction Room 424 (Physics Annex) for about 1 hour, and drinks and nibbles will be served free before the meeting.

Sunday, August 22, 2010

Time Reversal (p170)

So I’m trying to work out the concepts of time-reversibility and time-reversal invariance in a descriptive visual sense.

Working from the idea of the two plates proposed in the textbook: for a time-reversal invariant process you can use the example of rubber between the plates. If you gave the top plate a push to the right, the rubber would spring it back to the left, and the same motion occurs in reverse, with the rubber 'supplying' the initial push to the right.

However, if you have a viscous fluid between the plates, the process is no longer time-reversal invariant. If you push the top plate to the right, it moves to the right and stops as the movement is dissipated by friction. In reverse, you can't use the heat that was formed by the friction to propel the plate back to the left; instead you must supply the force yourself.

Is this accurate? And what happens when you have a liquid in the turbulent-flow regime between the plates? Would it also be irreversible, just with a longer deceleration time due to inertia?

Navier-Stokes Equation

As Ross encourages us to make more quantitative posts, I’ve decided to make my second post this week about the Navier-Stokes equation. The text book doesn’t mention the equation explicitly, as it involves rather complicated calculus. But I’ll try to explain it in simple terms. The equation looks like:

ρ(∂v/∂t + v·∇v) = f - ∇p + ∇·T

where ρ is the density of the liquid, v is the flow velocity, p is the pressure, f is the force per unit volume acting on the liquid, and T is the stress tensor. (A tensor is the general term for objects like vectors and matrices: a vector is a rank-1 tensor and a matrix is a rank-2 tensor.) ∇ is a differential operator; in three dimensions, ∇u = (∂u/∂x, ∂u/∂y, ∂u/∂z), a vector that points in the direction of maximum slope in u.

This equation is a non-linear partial differential equation. The solutions to such equations are not numerical answers but functions which satisfy the equation. It takes significant mathematical skill to handle equations like these, which is, I suppose, why this equation is not covered in detail in Nelson. To make things more complicated, like the n-body problem I mentioned in a comment on an earlier post, this is another equation with no known general analytical solution.

Despite the Navier-Stokes equation being very difficult to solve, we can still understand what it means. First, I’m going to verify it passes dimensional analysis.
ρ has units kg/m³ and v has units m/s. Therefore ρ(∂v/∂t) has units kg/m³ × m/s² = N/m³. ∇v has units (m/s)/m = 1/s, so ρ(v·∇v) also has units kg/m³ × m/s × 1/s = N/m³. Pressure has units N/m², so ∇p has units N/m³. Page 172 tells us stress is force per unit area, so stress has the same units as pressure, and the units of ∇·T must be N/m³ as well. Finally, force per unit volume has units N/m³ by definition. So this equation is dimensionally correct; each term has units N/m³.
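To mechanise that check, here is a small sketch representing each quantity by its (kg, m, s) exponents; the names are mine, and N/m³ is kg·m⁻²·s⁻²:

```python
def mul(*dims):
    """Multiply quantities by adding their (kg, m, s) exponents."""
    return tuple(map(sum, zip(*dims)))

RHO = (1, -3, 0)   # density, kg/m^3
V = (0, 1, -1)     # velocity, m/s
DDT = (0, 0, -1)   # time derivative
DEL = (0, -1, 0)   # spatial derivative
P = (1, -1, -2)    # pressure / stress, N/m^2

terms = [mul(RHO, V, DDT), mul(RHO, V, DEL, V), mul(DEL, P), mul(DEL, P)]
print(terms)                                 # all (1, -2, -2) = N/m^3
assert all(t == (1, -2, -2) for t in terms)
```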

Consider multiplying every term by a small volume. The right side of the equation then describes the forces acting on that volume of liquid. The density term becomes the mass of the volume, and the remaining factor on the left side describes, in a complicated way, the rate of change of the flow velocity, which is effectively the acceleration of the flow. So you can see that the Navier-Stokes equation is really just Newton's second law of motion, F = ma, applied to a continuous field of matter.

Saturday, August 21, 2010

Do all liquids have a turbulent regime?

This chapter insists that there is no intrinsic length scale for liquid. Thus liquid can appear extremely viscous to objects on the nano scale. But I think we still need to be careful about the generalisation that a liquid can appear to have whatever viscosity we want just by changing the forces acting on the objects moving in the liquid. My hesitation comes from the fact that there are certain other limitations on the behaviour of liquids.

Imagine a liquid with an exceptionally high viscosity, where the critical force is a few kilonewtons. For particles moving under forces slightly lower than this, in the regime dominated by friction, motion will cease very shortly after the force applied to them ceases. All the kinetic energy of the particle would heat up the environment, as the energy is being lost to friction. Now, what if this heat was enough to cause the liquid around the particle to begin to boil? This liquid would never have a regime dominated by inertia.

This hypothetical liquid would have viscosity orders of magnitude higher than corn syrup, and I don’t think it reflects any liquid found in nature*. This example is not a situation that would come up very often, but it illustrates the flaw in thinking that any liquid can be made to appear to have any viscosity just by changing the forces acting on the particles in it.

*Pitch has a viscosity of about 200 MPa·s, 40 million times more viscous than corn syrup. If pitch counts as a liquid, then this may be an example of a liquid with no turbulent regime. Pitch shatters when it is hit with a hammer, which may indicate what happens to a liquid without a turbulent regime if it encounters forces that would lie in that regime.

Viscous Critical Force (p165)

The type of liquid flow is situation-dependent. This explains why a bacterium in water experiences laminar flow whereas we don’t. It is a consequence of the amount of force exerted by us or the bacterium.
Looking at equation 5.5 we can understand why.

f_crit (viscous critical force) = η²/ρm
η has units Pa·s
ρm has units kg·m⁻³

We look at the effect of applied forces with the dimensionless ratio F(applied)/f(crit)
F/f(crit) = Fρm / η^2
For a large ratio you can see that the behaviour is dominated by the density and therefore by inertial effects (proportional to mass). The result is turbulent flow.
For a small ratio it is the viscosity, and thereby friction, that dominates. Result: laminar flow.
Obviously, if you apply enough force you can also create turbulent flow in a very viscous liquid, which is one way a bacterium could feed: using a burst of force to create turbulent flow to capture its food. Whoever said strength isn't everything obviously wasn't a bacterium.
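Plugging in water gives the point directly (the two applied forces below are rough, illustrative guesses):

```python
eta = 1.0e-3    # viscosity of water, Pa.s
rho_m = 1.0e3   # mass density of water, kg/m^3

f_crit = eta**2 / rho_m    # ~1e-9 N: about one nanonewton
print(f_crit)
print(100.0 / f_crit)      # human swimmer, ~100 N: ratio ~1e11, turbulent
print(1e-13 / f_crit)      # bacterium, ~0.1 pN thrust: ratio ~1e-4, laminar
```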

Friday, August 20, 2010

Hey Everyone

I thought everyone might like to see the video we saw last year in BIPH2000, which illustrates the experiment on page 162. The video is at
http://video.google.com/videoplay?docid=-4535320633959087386#

I love this video.

Thursday, August 19, 2010

Maximum Caliber

Here is a link to the article I mentioned about the fundamental principles from which Fick's laws can be derived. It's written by Ken Dill, who also wrote one of the supplementary textbooks. It is easy to follow, and discusses the principle in terms of a simple model where the transport of fleas between two dogs is considered:

http://dx.doi.org/10.1119/1.2142789

Wednesday, August 18, 2010

Weekly Discussion (Wk4)

Today’s discussion began by working out some of the confusion surrounding Question 4.2 regarding probabilities and viral genomes and of course a short discussion regarding Mythbusters and mirrors.

Moving on to actual topics, we looked into the solution of Equation 4.22 regarding relaxation of concentration jumps. It was stated that any time a quantity's rate of change is proportional to the amount physically present, the solution is an exponential function; examples are radioactive decay and bacterial culture growth. The solution ∆c(t) = ∆c(0)e^(-t/τ) can be used to calculate concentration differences over time, where the concentration jump is an initial condition. The derivation of this solution was supplied by Mitch; if you really want a copy of it, let me know. Berg's textbook was also mentioned, as it has an interesting calculation of the number of surface receptors required for oxygen saturation in a cell.
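A minimal sketch of that solution (the relaxation time τ is chosen arbitrarily):

```python
from math import exp

def delta_c(t, dc0, tau):
    """Relaxation of a concentration jump: dc(t) = dc(0) * exp(-t / tau)."""
    return dc0 * exp(-t / tau)

tau = 2.0                            # relaxation time, seconds
for t in (0.0, 2.0, 4.0, 10.0):
    print(t, delta_c(t, 1.0, tau))   # falls by a factor e every tau
```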

We next discussed Figure 4.11b and how a uniform concentration gradient could be maintained (wouldn't it eventually equilibrate?). The answer was no, provided you have a source-and-sink setup: for example, a single bacterium (sink) in a lake (an effectively infinite oxygen source). In this type of arrangement a constant concentration gradient gives a constant flux. It was stated that entropic forces follow probability, and an article was mentioned regarding the derivation of Fick's law and the statistics behind it.

We considered the restraints of dihedral angles, atomic repulsion, etc. on proteins when modelling polymers as a random walk, what would happen if you added more constraints, and whether a random walk is a good model.
Matt described how it is possible to model a polymer with a random walk, as the atoms themselves 'jitter' independently, leading to the random movements of a walk. However, Mitch stated that a random walk does not account for electrostatics within the polymer, and would thus underestimate how quickly two same-charge atoms would move apart. Seth somewhat settled the debate by stating that adding more constraints changes the distribution, because you know more about the system. A random walk assumes probability is the only thing that matters, whereas a protein will interact with itself; thus protein folding is not accurately modelled by a plain random walk. However, you can use a random walk on a funnelled landscape to be more accurate, though as Calvin pointed out this introduces problems with local minima. If protein folding were an unbiased random walk, we would be back at Levinthal's paradox. This is avoided in cells because they employ mechanisms to ensure fast, correct folding (such as the addition of sugar molecules to the protein sequence to influence local folds). These mechanisms in turn make secondary structure prediction even harder than just going from sequence to protein. Interestingly, many proteins show two-state folding, in that they are either unfolded or folded.

During the protein discussion we diverged into how the Boltzmann distribution is used mainly in terms of temperature and not energy (a dirty trick of thermodynamic reasoning), explained by the following:
P(E) = e^(-βE)/Z, the probability of a state with energy E
Z = Σ e^(-βE), the sum over all possible states (the partition function)
β = 1/(kB T), so it is easy to think of Z as a function Z(β)
It is also evident when thinking of equilibrated systems: if A is in equilibrium with B, what is equal is the temperature.
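A minimal sketch of these three formulas for a two-state system (the energy gap of one kBT is illustrative):

```python
from math import exp

kB = 1.381e-23                       # J/K

def boltzmann_probs(energies, T):
    """P(E) = exp(-beta*E) / Z, beta = 1/(kB*T), Z the partition function."""
    beta = 1 / (kB * T)
    weights = [exp(-beta * E) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

# Two states one kBT apart at 300 K:
print(boltzmann_probs([0.0, kB * 300], 300))   # ~[0.73, 0.27]
```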

We were introduced to another member of the committee, Ross, and some points of the course profile were cleared up. It was decided that the assignment marking for Chapter 1 went to Mitch, Chapter 3 to Matt and Chapter 4 to Heather. The scientific paper discussions will be held in the last week of semester, and your paper selection needs to be verified beforehand. In preparation for the final exam we will be given a copy of last year's. The problems will be quantitative, involving calculation questions rather than short-answer topics. Also, the blog posts can be less about philosophical questions and more about pointing out important equations and concepts from the text.
Friction of table vs water.

It was decided (after much debate, placing of hands flat on the table and mention of cold weather, metal poles and tongues) that it is not necessarily due to van der Waals interactions but mainly to the fact that the table is rigid, meaning that self-diffusion in the table is zero, which leads to higher friction.

How long is a long run?
To explain this, the central limit theorem was introduced:
Fractional error = 1/√N
Therefore for 10 runs, 1/√10 ≈ 30% error.
The gas law has this built in, as you are dealing with 10²³ atoms; however, you can't talk about individuals very accurately.

And to end today's long report, remember: fluctuations matter in small systems; that is what biophysics is about.

Blog posts

It is great everyone is contributing so much.

Don't feel that your posts need to be highly original or profound. Simple summaries of material from the text are helpful. For example, look at some of the posts I wrote on my blog about the text. Writing these posts helped me focus on the essential features, and hopefully they are helpful to others.

Also, it is good to keep emphasizing quantitative aspects of what we are learning (especially equations and graphs of experimental data). This is what distinguishes biological physics from biochemistry.

For reference, it will be good to start adding labels to posts, e.g.,
chapter number, diffusion, physics vs. biology, random walk, DNA, ...

Protein Folding and Random Walks

First, I replied to a post by Matt below. Then, I noticed that Calvin had invoked a similar theme. The issue: protein folding and random walks.

Protein folding is not an entirely "random" walk. If it were, then there would be no resolution to Levinthal's "paradox". However, this does not mean that the concept has no use for protein folding.

Modern thinking on the issue is that protein folding can be thought of as a random walk on a "funnelled" landscape: see e.g. the following 1997 perspective by Ken Dill:

http://www.nature.com/nsmb/journal/v4/n1/abs/nsb0197-10.html


A slightly more technical paper on the resolution of Levinthal's paradox has been written by Zwanzig, Szabo and Bagchi
http://www.pnas.org/cgi/content/abstract/89/1/20

The Death of Vitalism

Nelson brings up vitalism in chapter 4, and suggests that Einstein's theory of Brownian motion was a deciding blow against the idea that biological material had a "vital force".

This was only one blow of many. Previously, I had heard that another severe blow to the idea was dealt by Wöhler's total synthesis of the urea molecule. Obviously, urea is a biological material (it is the main form of nitrogenous waste in humans). When Wöhler showed you could synthesize it from non-living starting materials, this argued against a "vital force".

However, it can also be argued that the impact of Wöhler's work is overestimated; there are articles on this in the Journal of Chemical Education:
http://dx.doi.org/10.1021/ed041p452
http://dx.doi.org/10.1021/ed042p394

Random Walks - some thoughts.

Having failed to get to sleep tonight, I decided just to think about this chapter, and the applications of random walks, and if I could think of any interesting phenomena to mention. And then this first one came to me.
Say you have two different chemicals which react with each other, in volumes such that they will both completely react. When you mix them together, I want to ask: how long will it really take for the two to react completely? If you think about it, we associate reaction completion with the point when we can no longer see, hear or smell any changes going on, when our measuring equipment no longer detects any significant readings. BUT, if our molecules are undergoing Brownian motion, just how long could the last molecules of each chemical play cat and mouse with each other?
Second thought of the night.
I was reading over the polymer section, and I remember having learnt about the radius of gyration last semester in a physical chemistry course, when we were studying polymers adsorbed to a surface. Although we also learnt Brownian motion, it wasn't a concept they explicitly linked to the radius of gyration (they linked it more to physical and chemical properties). Having now read the chapter, it starts to make more sense how it works, and how I can extend that knowledge to, say, the folding of a protein.
Third and final thought.
Having just written that second thought, a third thought came to me, which is that Brownian motion can actually have order to it, such as when a protein is folding. When a protein folds, all the atoms making it up are obviously still experiencing Brownian-type motions, but due to the specific situation, that motion becomes controlled, like a dog on a leash. Marvellous what our universe holds in its mysteries for us of what it can achieve.

Tuesday, August 17, 2010

Biological polymers and probability distributions

The section on the random walks of polymers got me thinking about protein structure. If you assume that any unit in a polymer chain can take any position in space at length L away from the preceding unit, the conformation of the polymer can be modelled as a random walk; the same can be said when self-avoidance is taken into account (Fig. 4.8). However, we know from the Ramachandran plot for proteins that only certain dihedral angles between amino acids are allowed. My question is: how many constraints can we put on the position of one monomer relative to another and still find the random walk model of conformation useful?

The regions in the Ramachandran plot correspond to units of secondary structure, e.g. alpha helices and beta sheets. Can these units of secondary structure undergo random walks as a whole, and how much of an effect does the jostling of the constrained monomers (within the secondary structure) have on the thermal motion?

Smart Biological Architecture

This post is in relation to a point made in problem 4.3
"A few cells in your body are about a meter long. These are the neurons running from your spinal cord to your feet." This statement, as you'll have read while looking at the problem, is in relation to the idea that the body has developed an efficient transport system to move proteins, organelles etc to the peripheries of the body, over the conventional diffusion process, so as to create a more efficient system. If we were to think about the process of diffusion across a meter, then we certainly know that it would take a rather long time.
e.g. if we assume the diffusion down the leg lasted for 1m exactly, and the viscosity for the sake of this equation was the same as water (=1*10-3), and assume 1-dimensional motion, then it would take 500 seconds, which when you compare it to how fast axons can act, it really is a massive difference.
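Here is the estimate as a sketch; note that with D ≈ 10⁻⁹ m²/s, a time of 500 seconds actually corresponds to diffusing about a millimetre, not a metre:

```python
def diffusion_time(x, D):
    """One-dimensional diffusion time: t = x^2 / (2 * D)."""
    return x**2 / (2 * D)

D = 1e-9                           # m^2/s, small molecule in water
for x in (1e-6, 1e-3, 1.0):        # a bacterium, a millimetre, a leg
    print(x, diffusion_time(x, D)) # 5e-4 s, 500 s, 5e8 s (~16 years)
```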

Having started on this point, it would also be good to provide some other examples. The one I'll mention is the movement of oxygen around the body. There are some insects and other small animals which actually move oxygen around their bodies by diffusion alone. However, we humans have a much more advanced method of moving air around: defined blood channels (vessels, arteries, capillaries) which move the oxygen around in a quicker, more efficient way. Again, looking at the problem discussion above, if diffusion takes years to move something from the top of the leg to the bottom, imagine trying to move something around the whole body that way, especially something as crucial as oxygen.

Physicists vs Biologists

I've been noticing a recurring theme in science and in this book, whereby a notion initiated by a biologist has either had to be proven and explained by a physicist, or has been blown out of the water by physicists. I give two examples.
1) The chapter focus question: the biologists are going out of their minds, wondering how they could possibly make any useful predictions about the nanoworld when that same world is full of complete randomness. Here steps in the physicist saying, "no worries: if we know the collective activity, we can make predictions about the collective movement... the individuals by themselves aren't necessary here".

2) Though the biologists got a step up on the physicists with Robert Brown discovering the phenomenon of Brownian motion, it was Einstein who solved the problem and who slaved over the estimation of Avogadro's number.

Obviously I don't mean to be entirely paying out the biologists right now, but it is true that a lot of what we know in biology has been influenced largely by physicists like Einstein, who decided that solving a biologist's problem (or possibly shaming the biologist) was more important than finishing his own thesis.

Read this before wednesday meeting

Please make sure you read (or re-read) the course profile (esp. the assessment) before tomorrow. I will go over it in the meeting just so everyone is clear on what is required.

Homework Deadline Extended

I have discussed this issue and others with Ross. To be fair, since I told you what the homework was on Monday, you should have until Friday this week to turn it in.

Monday, August 16, 2010

Distinguishing Randomness from Chaos

I for one always get confused when trying to distinguish the difference between randomness and chaos. So the following is an attempt to understand these.

A chaotic system is a complex nonlinear system that is highly sensitive to initial conditions. These systems are not random or disordered but are ruled by deterministic behaviour. Simplified models of such systems generate erroneous results, as initial errors accumulate during iteration.
So... in other words, 'chaotic' systems are just highly complex and cannot be modelled accurately using simplified models or approximate initial data. They are simply called chaotic because, at the time, scientists couldn't deal with such complexity and assumed the systems were unpredictable.

In contrast, the random walks we’ve been looking at this week are not individually predictable but are associated with a probability distribution. However using this distribution, predictions can be made about the properties of a collection of random walks.

Random walks can explain a lot.

The simple idea of taking seemingly random events and grouping them to create a predictable distribution is genius. I always knew Einstein was awesome. It’s got me thinking about other day-to-day processes which operate around this principle/diffusive behaviour, or don’t as the case may be.
As it had been a rather wet week, my first thought was rain (I love rain). The pattern droplets make on concrete seems like the perfect image of a diffusion process at a boundary. Perhaps, using the right experimental setup, one could measure the permeability of concrete using the diffusion model. Although this may be an interesting concept, it probably wouldn't be very practical, and I'm sure road-workers have their own faster, more accurate method. My second thought was about something else I love: tea. Obviously the teabag is the perfect membrane model, and it would be relatively easy to determine its permeability (thus leading to the careful creation of the perfect cup of tea. Delicious!).
After running out of further interesting ideas and a quick Google check, it seems random walks are useful for a number of things including population genetics, studying the human brain, the share market and much more (they're even used in art).

Chap. 4 Homework

The problem set for Chap. 4 is: 4.2 (a-c only), 4.3, 4.7. Sorry for the delay...

Supplemental Reading - "Random Walks in Biology" by Howard C. Berg

It is a good idea to mention a nice place to go for more reading on the issues invoked in Chap. 4 of Nelson (and continuing through the next few chapters). "Random Walks in Biology" by Howard C. Berg is a concise and well-written classic. It is easy to read and less than 150 pages in length. Best of all, it is a Dover edition: you can get it on Amazon for about $30 (only $20 for a Kindle edition).

Sunday, August 15, 2010

Dimensional analysis and natural units

I found a blog post that discusses dimensional analysis and introduces the concept of natural units - that is setting numerical constants to 1. This is supposed to make conceptualising equations easier and give a more physical meaning to them. I think this method would make doing the maths much easier when problem solving.

What do you guys think?

Link: http://telescoper.wordpress.com/2010/03/05/the-joy-of-natural-units/

Free Lunch

I think it is pretty amazing that cells have evolved to take advantage of the thermal motion of molecules. As Nelson says on page 128, diffusion is free. Cellular organisms can transport molecules across themselves by diffusion, which costs them no energy at all.

In some cases, where there is an abundance of a particular substrate, bacteria can survive by consuming all of the substrate in close proximity, then waiting for more substrate to diffuse back to them, rather than moving themselves to a higher concentration of substrate. So it seems, at least in the bacterial world, there is such a thing as a free lunch.

It makes me wonder though, what if molecules were bigger, or if thermal motion was weaker, in such a way that diffusion wasn’t an efficient way for molecules to be transported in life processes? In section 4.6.2 we calculated an upper limit on the metabolism of a bacterium. What if atoms were bigger relative to the diffusion constant, such that the minimum size of a bacterium was too big for oxygen metabolism to occur at a significant rate?

These kind of thought experiments can never be tested, but it is interesting to wonder at how likely life as we know it is. Furthermore, what even more complex and incredible forms of life are not and will never be in existence because the values of universal constants are the way they are?

Saturday, August 14, 2010

Random Walks

I found the discussion on random walks quite interesting. It is such a simple concept; I can’t believe it took Einstein to realise it is a satisfactory explanation for Brownian motion. I think this demonstrates the lack of statistical intuition that most people, even scientists, suffer from.

What I think people fail to appreciate is that in a process that consists of independent steps, the steps are genuinely independent of each other. Each step has absolutely no idea what the outcomes of the events preceding it were.
Say a person flips a fair coin five times, and gets tails every time. If we were asked to predict what the result of the next flip is, at least part of us would be tempted to say a heads is more likely. An average person might justify this thought as the coin “being due for a heads”. However, we know that the result of the next flip is completely independent of the previous flips. The coin does not know that it’s “due for a heads”. It has the same chance as it always did of resulting in a heads when flipped.
Assuming that the next trial in a series of independent events is more likely to take a value that brings the running average back towards the expected mean, rather than following the known probabilities of the event, is called the Monte Carlo fallacy (also known as the gambler's fallacy). When this fallacy is avoided, it is much easier to accept that random motions can result in a visible net displacement, if the observer waits long enough.
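A quick simulation illustrates this: each ±1 step is unpredictable, yet the root-mean-square displacement after N steps reliably comes out near √N:

```python
import random

def displacement(n_steps):
    """Net displacement after n independent +/-1 steps."""
    return sum(random.choice((-1, 1)) for _ in range(n_steps))

walks = [displacement(10_000) for _ in range(1000)]
mean = sum(walks) / len(walks)
rms = (sum(x * x for x in walks) / len(walks)) ** 0.5
print(mean, rms)   # mean ~ 0, rms ~ 100 = sqrt(10000)
```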

What I found even more interesting is that the random walk has structure on all length scales. The idea of an object with structure on all length scales is impressive; it is like a fractal. And like a fractal, whose total perimeter can be calculated, the length of a random walk can also be calculated. It seems necessary that the path of a molecule should have structure on all length scales, though. Consider a single molecule in a very large sample of water, like a lake or an ocean, or an air molecule in an atmosphere: given enough time, the particle could be anywhere in the mixture. Its position should not be restricted to a subsection of the space, because there is nothing to enforce that restriction. So the path traced by the particle has structure even on the scale of the medium it is in. There should be no restriction on the length scale of the path of a particle, other than the size of the medium that the particle is in.

I suspect that the concept of the random walk has not been fully explored, and I think that this idea will be used again in future chapters.

Meeting Summary 13th of August

Here is a summary of the things we talked about in our meeting on Friday. Unfortunately my notes are not in chronological order, so the notes probably won’t reflect the order you remember things from the meeting.

We started off by discussing assignment question 3.2, which led into a discussion about the Boltzmann distribution. In the Boltzmann distribution, the statistic E/kBT is sufficient to describe the distribution. This means knowing the statistic, such as the average kinetic energy of the particles in a room, means knowing everything about the distribution of that statistic.

If a distribution requires another statistic to describe it, then that second statistic is a second moment. The second moment is usually the variance. A Gaussian distribution has a second moment, since it depends on the statistic (x-μ)²/σ², where μ is the mean and σ is the standard deviation (σ² is the variance). A distribution can have more moments, like a measure of the symmetry about the mean.

We compared classical and quantum mechanics, and discussed the classical uncertainty principle. The Schrödinger equation can be derived from information theory. Quantum mechanics looks like classical mechanics but with probabilistic objects rather than definite ones.

We discussed the feasibility and flaws in ‘black box’ thought experiments (as described in section 3.2.2), and performing them in reality.

We discussed what it might feel like to experience no external pressure, and consequently what space would feel like. The Russian cosmonaut-in-training who experienced very low pressures was mentioned, as was the episode of Mythbusters which tested whether a diver in an old-time diving suit would be crushed into his helmet if the tube equalising the pressure were blocked.

Finally, we discussed the information entropy of a DNA nucleotide, and whether the heat produced by a computer is due to the loss and gain of information entropy. This also led to a discussion of optical computers.

It was confirmed that we are having an exam at the end of the semester, and the meeting next week will be at the normal time, for 2 hours again.

Thursday, August 12, 2010

Stop the dance part two

I've watched one of the lectures on statistical mechanics Seth posted earlier and found it interesting. It got me thinking about information and entropy (again). I think I am slowly getting my head around it.

The lectures gave a general overview of many of the thermodynamic properties and how they relate to systems and probabilities. However, the most interesting part for me was when temperature was introduced. Normally, when thinking about thermodynamics or any chemical or biological system, one of the first things we think of is temperature (along with pressure and density). However, temperature is not strictly defined: it is, loosely, the average kinetic energy of the molecules in the system; it is an emergent property. The lecture gave it a new definition, the derivative of energy with respect to entropy (dE/dS, or rather the partial derivative). Both energy and entropy are properties that can be measured for single molecules.

This led to a discussion of Landauer's principle, which gives the minimum energy released as heat when one bit of information is "destroyed": E ≥ kT ln 2. I assume the 2 comes from the fact that in information theory a value can (usually) be either 0 or 1. So perhaps the minimum energy that can be released as heat when one base pair is "destroyed" is E ≥ kT ln 4, since there are four possible bases. I wonder if this is the correct way of thinking about it?
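Putting numbers on that guess (room-ish temperature; whether a base pair really counts as ln 4 of erased information is exactly the open question above):

```python
from math import log

kB, T = 1.381e-23, 300     # J/K and kelvin

E_bit = kB * T * log(2)    # Landauer limit for erasing one bit (2 states)
E_bp = kB * T * log(4)     # the guess above for one base pair (4 states)
print(E_bit, E_bp)         # ~2.9e-21 J and ~5.7e-21 J: ln 4 = 2 ln 2
```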

Wednesday, August 11, 2010

Another Crucial Idea of Ideal Gases

http://www.download3k.com/Install-Nasanbat-Namsrai-Ideal-Gas-in-3D.html

In my search for an interactive program for ideal gases, I came across this nice little program, which demonstrates another distribution central to ideal gases: the Maxwell distribution, which shows how many molecules are moving at each speed. The peak of the distribution is of course shifted right and lowered as you increase the temperature (the molecules gain kinetic energy and the speeds spread out), and the peak is raised as you put in more molecules (the curve counts molecules, so adding more scales it up).
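For anyone who wants to play with the curve itself, here is a little sketch of my own of the Maxwell speed distribution (assuming nitrogen as the example gas):

import math

kB = 1.38e-23  # J/K
m = 4.65e-26   # kg, mass of one N2 molecule (my assumed example gas)

def maxwell(v, T):
    # Maxwell speed distribution f(v); integrates to 1 over all speeds
    a = m / (2 * math.pi * kB * T)
    return 4 * math.pi * a ** 1.5 * v ** 2 * math.exp(-m * v ** 2 / (2 * kB * T))

for T in (100.0, 300.0, 900.0):
    v_peak = math.sqrt(2 * kB * T / m)  # most probable speed
    print(T, v_peak, maxwell(v_peak, T))
# the peak moves right and its height drops as T increases, just as the app shows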

This app also shows the relation between pressure, temperature and volume, with isobaric, isothermal and isochoric conditions which you can place on the system.

Recombination YouTube

http://www.youtube.com/watch?v=SAqGKWz109M
I was googling some of the concepts from this chapter to find some links to interactive websites or video clips, and I came across this nice little video on recombination. It doesn't go into too much depth on the phenomenon, but it does raise an interesting point about what happens during recombination.

After the two strands have crossed over, there is actually a point where the two chromosomes are disconnected from each other, so as to form two new ones, via either a vertical or a horizontal cut. What's interesting to note is that the chromosomes require the vertical connections to be severed in order to perform correct recombination, yet the video alludes to the possibility of the horizontal connections being severed instead, enabling even more elaborate chromosomes to be formed (though they aren't recombinants of the two genes, as the video explains). It would be interesting to see whether this random phenomenon definitely occurs (the video never said definitively whether it does or doesn't), and if it does, how common/random it is.

Sunday, August 8, 2010

Stop the dance

I thought problem 3.3 would make a good discussion for the blog. The problem states that even after altering the temperature - extreme cooling followed by slow heating - the hereditary information of the virus is not lost. This tells us that the information is not stored in the form of heat, and that it is not significantly degraded over the temperature range 0-310 K, or in terms of average kinetic energy: (3/2)kBT = 0 to 6.4 x 10^-21 J (6.4 pN nm).
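Checking that number (my own arithmetic):

kB = 1.38e-23  # Boltzmann constant, J/K

for T in (0.0, 310.0):
    avg_ke = 1.5 * kB * T  # average kinetic energy per molecule
    print(T, avg_ke, avg_ke / 1e-21)  # in J, then in pN nm (1 pN nm = 1e-21 J)
# at 310 K this gives ~6.4e-21 J, i.e. ~6.4 pN nm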

So for the information to be destroyed, the chemical bonds (potential energy) between the nucleotides must be broken (energy transduction to heat); this would tend to happen with greater probability above 6.4 pN nm. Is there a way you can describe the information content in terms of the potential energy between the nucleotides?

Saturday, August 7, 2010

Did You Feel That?

I was reading through the chapter readings this morning and got to the section on the motion of gas particles, specifically the part saying that a gas particle moves at an average speed of about 500 m/s. Obviously this is an incredibly fast speed, especially for something that exists at such an incredibly small size. But something struck me after reading that: we are constantly being hit by these particles, yet never feel them (which I can quickly justify by the fact that a particle is nothing compared to a human - you couldn't even compare a particle hitting a human to a bug hitting a windscreen). The point I want to make is this: given that we know we're constantly being hit, and it never feels like it, what happens if we stop being hit by these particles? What if we put ourselves in, say, a vacuum, with no particles to hit us? Would it feel different?
The answer would presumably come down to how many particles are in fact hitting us at any given moment, or the average number we are always in contact with, but I thought this would be an interesting thing to put out there.
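For a rough feel of the numbers (my own estimate using the standard kinetic-theory wall-flux formula, flux = n v_mean / 4, with assumed room conditions):

import math

kB = 1.38e-23  # J/K
T = 300.0      # K
P = 101325.0   # Pa, atmospheric pressure
m = 4.8e-26    # kg, average mass of an air molecule (assumed)

n = P / (kB * T)                                # number density, ~2.4e25 per m^3
v_mean = math.sqrt(8 * kB * T / (math.pi * m))  # mean speed, ~470 m/s
flux = n * v_mean / 4                           # impacts per m^2 per second

print(flux)         # ~3e27 hits per square metre per second
print(flux * 1e-4)  # ~3e23 hits per square centimetre of skin per second

So the average number is astronomically large, which is presumably why we feel only the smooth average (pressure) and never the individual hits.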

Measuring the Molecular Velocity Distribution

Just a quick question (or few).
It’s in regard to the experiment described in 3.2.2 where the distribution of molecular velocities is measured using a box of gas with a pinhole, a series of rotating discs and a detector.

It says that only molecules with the selected speed can pass through to the detector. What happens to the molecules that escape through the pinhole but aren’t at the selected speed? Do they bounce straight back into the box?
I would assume that in order for this to happen, the pinhole would have to be small enough that molecules only pass through perpendicular to the discs. But wouldn't that restrict the 'randomness' of the experiment? Or do they run the experiment over a long enough time to accommodate this?
Alternatively, are the molecules simply lost into the vacuum if they are not at the correct velocity? And if that does happen, wouldn't it also skew the obtained results?
I must be missing something.
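While puzzling over this I sketched the selection condition itself (my own toy numbers, not from the book). With two discs a distance d apart, slots offset by angle phi, spinning at angular velocity omega, a molecule gets through both slots only if it crosses d in the time the discs turn by phi - or by phi plus a whole number of turns, which is why real selectors use several discs to kill those slower 'harmonics':

import math

d = 0.5                    # m, disc separation (assumed)
phi = math.radians(10)     # slot offset (assumed)
omega = 2 * math.pi * 100  # rad/s, i.e. 100 revolutions per second (assumed)

# pass condition: omega * (d / v) = phi + 2*pi*k for k = 0, 1, 2, ...
for k in range(4):
    v = omega * d / (phi + 2 * math.pi * k)
    print(k, v)  # the intended speed (k = 0) plus slower speeds that also sneak through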

With Great Power Comes Great Responsibility

There was one comment that both puzzled and annoyed me in Chapter 3. At the very beginning of the chapter Nelson says that not being able to predict the actual measured values of something is a blessing in disguise. He then explains this by saying that no-one would want a list of the positions and velocities of the air molecules in a room, as this would be an absurd amount of useless information.

But I disagree with him here. Granted, measuring these values for air molecules in a room would be particularly boring, but if you could do the same for useful situations, such as the folding of a protein, it would be very valuable. As for the comment about the amount of information - I thought that was what computers were for; maybe I'm just lazy though.

I understand that his comment was probably an interesting way of introducing probability distributions, but I still think that, given the option, both biologists and physicists would opt for the ability to make accurate predictions rather than probabilistic estimates.

Friday, August 6, 2010

Boltzmann and Joint Probability Distributions

One thing that struck me when reading this chapter was the comment made about the Boltzmann distribution. At the bottom of page 85 it says the Boltzmann distribution is exact. Normally when I read an equation in a textbook I assume it is quite accurate, and that in an ideal world the equation would be exact. But I'd also assume that if I were to make some measurements, they wouldn't agree with the theory perfectly - due mainly to my measurements, but also to some small effects that the theory glosses over.
Furthermore, Nelson makes this statement but then does barely anything to justify it. It is mentioned that the distribution can be derived from very general principles, which will be detailed in a later chapter, but that doesn't explain why it is exact. I can derive many things that are just approximations.

Now, I believe that the Boltzmann distribution is exact; I'm not disputing that. I just feel that the comment was a bit peculiar - a strong claim made without much evidence shown to back it up. I would like to know how we can be so sure about the accuracy of this law.

On a slightly different topic: I have used probability distributions before - with a discrete distribution I would put no units on my probabilities, and with a continuous one I'd use inverse measurement units - but I'd never realised that the different kinds of distributions had different units.

It wasn't mentioned specifically, but I suspect that the probability distribution over two continuous random variables would be a two-dimensional probability density, with units of inverse unit A times inverse unit B. But would the distribution over one continuous and one discrete random variable have units of just inverse A? Is a joint distribution over one continuous and one discrete random variable even possible?
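It is possible - such 'mixed' joint distributions turn up all the time (in mixture models, for instance). Here is a minimal sketch of my own: the joint object p(n, x) = P(n) f(x|n) carries units of inverse x only, because the discrete factor is a pure number, and it normalises by summing over n and integrating over x:

import math

# discrete n in {0, 1} with probabilities P[n]; x given n is Gaussian (made-up parameters)
P = [0.3, 0.7]                     # dimensionless
params = [(0.0, 1.0), (5.0, 2.0)]  # (mu, sigma) for each n

def gauss(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def joint(n, x):
    # p(n, x): units are 1 / [units of x]
    mu, sigma = params[n]
    return P[n] * gauss(x, mu, sigma)

# crude Riemann-sum normalisation check: sum over n, integrate over x
total = sum(sum(joint(n, -20.0 + 0.01 * i) * 0.01 for i in range(4000)) for n in (0, 1))
print(total)  # ~1.0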

Thursday, August 5, 2010

Hereditary Molecule

Like the first chapter, the third chapter is quite entertaining and easy to read. However, also like the first chapter, it seems to jump around between topics a bit. But I can see the underlying connection between the sections: that the molecular world must be understood with statistical laws, because of the thermal motion of everything in it.

I really enjoyed reading section 3.3, which describes the history and development of the field of genetics. I had heard of many of the scientists mentioned, but I had never read their stories in chronological order, so I had failed to realise until now how each one affected the others. I also didn't realise how much physics (and how many physicists) helped deduce the hereditary material. Because it is what I was taught growing up, it now seems so obvious that the hereditary material in cells is a macromolecule. But it seems that at a number of points in history, if the scientists hadn't been so lucky, it would have taken even a few decades more to establish that DNA is the hereditary molecule. It makes me wonder which things we don't understand now we will look back on and feel foolish for not realising straight away.

We know now, though, that DNA is not the only mechanism of inheritance. I read a few parts of the second chapter that caught my eye, and one part discussed other sources of hereditary information. Naturally DNA is the only source that is both necessary and (almost) sufficient for producing a daughter cell, but the others can pass on other characteristics of the parent. When a cell splits, the daughter cell takes some of the cytoplasm with it. This includes some of the proteins and other macromolecules in the cytoplasm. This is necessary for the daughter cell to survive, since DNA with no other biomolecular infrastructure, like RNA polymerases, cannot produce a living cell. However, it also means that if there are unwanted molecules in the cytoplasm, they will be passed on to the next generation. While a particular unwanted molecule may not accumulate to a level that is detrimental to a cell during its own life, its daughter cells will start out with a higher concentration of these molecules. Some self-propagating protein aggregates are thought to be heritable through exactly this cytoplasmic route, independently of the DNA sequence. It is also for this reason that identical twins are even more identical than clones, since identical twins originate from the same cytoplasm.

Cell differentiation is another inherited characteristic that is not encoded by DNA. Once a stem cell differentiates into a particular cell type, all the offspring of that cell will be that kind of cell, despite having the same DNA as the original. More details can be found in Nelson, Section 2.3.4' (Track 2).

Homework this week

Questions 3.1-3.4 of Nelson. Enjoy!

Wednesday, August 4, 2010

Simulation of brownian motion and diffusion

With regard to the questions that came up under the friction post, this is a nice simulation which helps visualise what happens.
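In the same spirit, here is a bare-bones random-walk sketch of my own (not the linked simulation) showing the diffusive signature <x^2> = 2Dt in one dimension:

import random

L = 1e-8   # step length, m (assumed)
dt = 1e-6  # time per step, s (assumed)
D = L ** 2 / (2 * dt)  # diffusion coefficient implied by this walk

walkers = [0.0] * 5000
for step in range(1000):
    walkers = [x + random.choice((-L, L)) for x in walkers]

t = 1000 * dt
msd = sum(x * x for x in walkers) / len(walkers)  # mean squared displacement
print(msd, 2 * D * t)  # the two numbers should roughly agree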

A new twist on the chicken and the egg

I was talking to Alan after I came back from our meeting and he posed a question to me, which I'm now going to pass on to you guys: which has the higher free energy, the chicken or the egg?

Meeting Summary Aug. 4 2010

OK, here's what I have scribbled down by way of "minutes". BTW, I'm going to farm this job out next week, so one of you should step up or I will assign it (to someone other than Mitchell, who already got kudos for setting the blog up).

Big Question: Would plants be here without us? Maybe, in the sense that if we were suddenly erased, they might stay alive. However, if we (i.e. animals) were never here, would plants have evolved? There are reasons to think not. Good question.

We discussed several appearances of "detail" in biology - particularly with regard to hereditary disorders. Sickle-cell anemia is an example of a genetic distribution that does not conform to the Hardy-Weinberg distribution (the resting distribution for Mendelian single-locus heredity in the absence of selection). The reason is that heterozygotes are malaria-resistant. Other examples of bad genes also appear, and in many cases can be traced back to "bottleneck" effects - the legacy of past times when the population became very small. Megan put forward a name, the "Six Eves Hypothesis", and mentioned that sub-Saharan Africans have more genetic diversity than non-Africans, and that humans have less diversity than other animals. This indicates that bottlenecks were important.

Incidentally, this is also probably why Tay-Sachs disease is more common in European Jews, who have definitely been through population bottlenecks and whose rate of gene transfer with outside populations is restricted by the details of how Judaism is passed down (through the mother). This is likely most acute in the Orthodox population, where conversion may not be recognised at all.
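Going back to the Hardy-Weinberg point: here is a toy sketch of my own (made-up fitness numbers, not real malaria data) showing how heterozygote advantage parks an allele at an intermediate frequency, p* = t/(s + t) for homozygote fitness costs s and t:

# genotype fitnesses: AA = 1 - s, Aa = 1, aa = 1 - t (assumed values)
s, t = 0.2, 0.1

p = 0.9  # starting frequency of allele A
for gen in range(200):
    q = 1 - p
    # Hardy-Weinberg genotype frequencies weighted by fitness
    w = p * p * (1 - s) + 2 * p * q + q * q * (1 - t)  # mean fitness
    p = (p * p * (1 - s) + p * q) / w                  # frequency of A next generation

print(p, t / (s + t))  # both ~0.333: selection holds the allele mid-range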

The above issues were connected to the biologist/physicist dichotomy, because these are all cases where past details matter. (Incidentally, a good word to associate with this sort of phenomenon is "non-Markovian". I'll explain that later.)

Another example of where "physicist-like" thinking goes wrong is Levinthal's paradox. Protein sequences are not chosen from a random heteropolymer-like distribution, but have been selected for foldability.

Reindeer feet don't freeze; neither do arctic fish. The reason is lipid composition, which is interesting because lipids aren't hereditary material.

Equilibrium is the state which is maximally non-committal with respect to the information not contained in a given set of constraints. It is TRANSITIVE. This implies that to know everything about one part of a system at equilibrium, you have to know everything about the entire system. The only way that a part of the universe can be at absolute zero is if the whole universe is at absolute zero. The statement of equilibrium spreads what you don't know over everything in the equilibrium.
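This is also where the Boltzmann distribution comes from: maximising entropy subject only to a fixed mean energy gives p_i proportional to exp(-beta E_i), with beta set by the constraint. A numerical sketch of my own (made-up energy levels), finding beta by bisection:

import math

E = [0.0, 1.0, 2.0, 5.0]  # made-up energy levels, arbitrary units
E_mean = 1.2              # the only constraint we impose (assumed)

def mean_energy(beta):
    w = [math.exp(-beta * e) for e in E]
    return sum(e * wi for e, wi in zip(E, w)) / sum(w)

lo, hi = 1e-6, 50.0  # mean_energy decreases as beta grows, so bisect
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_energy(mid) > E_mean:
        lo = mid
    else:
        hi = mid

beta = (lo + hi) / 2
Z = sum(math.exp(-beta * e) for e in E)
print(beta, [math.exp(-beta * e) / Z for e in E])
# the maximally non-committal distribution with that mean energy is exactly Boltzmann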

We agreed to meet Friday 11-1, but it turns out we can't, because the room isn't free until noon. I propose that we start at noon and only meet for an hour.

Tuesday, August 3, 2010

Web Trawling for Physics Fun

So I’ve been searching the net for interesting discussion topics since it feels like we’ve already maxed out our first chapter. After much surfing I found a website that had some interesting challenges and thought experiments.

http://www.phy.duke.edu/~hsg/physics-challenges/challenges.html

I’ve picked a few that relate somewhat to our recent chapter and thought it would be fun to see what everyone thinks.

1. Can One Boil Water With Boiling Water?
A pot of water is brought to a steady boil on a stove and then a thin plastic cup of room-temperature water is suspended in the boiling water so that no part of the cup touches the pot (see the figure on the site). Will the water in the cup start to boil if you wait long enough?
My thinking is yes - the heat should still conduct through the thin plastic cup into the water.

2. Does a neutrally buoyant balloon rise or fall as the temperature increases?
Consider a balloon filled with helium gas and then weighted so that it remains motionless in the center of a sealed box of air at room temperature and atmospheric pressure. If the box is slowly and uniformly warmed so that the temperature everywhere inside increases by a small amount, determine whether the balloon will rise, fall, or remain in the same place.

As a guess, the balloon would remain in the same place. Although the increase in thermal energy of the gas would cause the number of collisions to increase, the collisions would be happening on all sides of the balloon, generating no net movement. (I've put a quick ideal-gas check of this guess below, after the three challenges.)

3. Equilibration of Two Birthday Balloons
Consider two identical spherical birthday balloons, one of which is inflated to 2/3 its maximum diameter and the other inflated to 1/3 its maximum diameter. What happens when the openings of the two balloons are connected to each other by a straw so that air can flow back and forth between the two balloons?

The easy answer would be that they equilibrate and stop there, but I'm thinking the elasticity of the 2/3 balloon would cause the air to move from it to the 1/3 balloon, and that it would continue to 'donate' its air until it was empty. My basis for that is that the initial rush of air would cause the 1/3 balloon to expand more than required and create a partial vacuum, which in turn sucks in more air. Maybe...

I'm probably wrong on all three counts, but let me know what you think.
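For number 2, here is the quick ideal-gas check I mentioned above (my own sketch, treating the envelope as weightless and perfectly floppy, and neglecting the balloon's small volume when computing the box pressure - including it gives the same conclusion). The sealed box fixes the air density, and the balloon's volume turns out not to change, so neither does the buoyant force:

R = 8.314     # J/(mol K)
n_air = 40.0  # mol of air in the box (assumed)
n_he = 0.1    # mol of helium in the balloon (assumed)
V_box = 1.0   # m^3, rigid and sealed

for T in (293.0, 303.0):
    P = n_air * R * T / V_box     # box pressure rises with T (constant volume)
    V_balloon = n_he * R * T / P  # balloon sits at the box pressure
    print(T, P, V_balloon)        # V_balloon = n_he * V_box / n_air, independent of T

If that's right, the balloon really does stay put, though for a cleaner reason than my collision argument.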

Also, I was looking at TED.com and found a video of Richard Feynman (who won the Nobel Prize for his work on quantum electrodynamics) discussing how the 'jiggling' of atoms can explain many things. Check it out:
http://www.ted.com/talks/richard_feynman.html

Friction

The relation of the viscous friction coefficient and the diffusion coefficient to thermal energy is a very cool example of gaining deeper insight by playing around with formulae. I would like to know how the energy is transferred from the moving particle to the fluid. My guess is that the random movements of the particle and the random movements of the fluid are somehow coupled and share the same vibrational modes; if correct, this makes sense to me. However, I wonder how this slows the particle down - i.e. how is the kinetic energy of a single particle degraded into heat?
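The formula tying these together is the Einstein relation, D = kB T / zeta, with zeta = 6 pi eta R for a sphere (Stokes). A quick sketch of my own for an assumed 1-micron bead in water:

import math

kB = 1.38e-23  # J/K
T = 300.0      # K
eta = 1.0e-3   # Pa s, viscosity of water
R = 0.5e-6     # m, bead radius (assumed example)

zeta = 6 * math.pi * eta * R  # Stokes viscous friction coefficient
D = kB * T / zeta             # Einstein relation

print(zeta, D)  # D ~ 4e-13 m^2/s: such a bead diffuses about a micron per second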

On the "quality" of energy

Some interesting concepts were raised in the first chapter of Nelson. Most interesting to me was the relation between total energy, free energy, entropy and the conversion between forms of energy. He refers to the "quality" of energy, and seems to suggest that this quality is related to entropy and to the ease with which energy can be transferred from one form to another.

This raises some questions:

Which forms of energy are "better" than others? Is there a hierarchy of energy?
How do you measure the entropy of photons?
If heat is the lowest form of energy, how can it be so readily used in power stations?

It seems to me that talking in terms of "quality" alone is too vague a concept.
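On the power-station question specifically: heat can only do work by flowing between two temperatures, and the usable fraction is capped by the Carnot factor 1 - T_cold/T_hot, which is one concrete way of putting a number on "quality". A quick sketch (my own, with assumed typical temperatures):

def carnot_fraction(T_hot, T_cold):
    # maximum fraction of heat at T_hot convertible to work, rejecting at T_cold
    return 1.0 - T_cold / T_hot

print(carnot_fraction(850.0, 300.0))  # ~0.65 for a hot steam plant (assumed temps)
print(carnot_fraction(310.0, 300.0))  # ~0.03: body-temperature heat is very low "quality"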

Are We Redundant?

We all know the age-old process: plants capture the sun's energy and create chemical energy, which animals then consume to generate waste, and so on. On top of this, we've just read that it is actually order, or energy 'quality', that is consumed in a cyclic process where energy is radiated in, used, then radiated out (see Figure 1.2).

My question is, if the Earth consisted of only plants, would the 'biosphere' still be maintained?

One could argue that we evolved simply to soak up the excess 'ordered' energy that plants produce; that we are in fact responsible for maintaining the Earth as a stable cyclic system. We consume plant products and generate more radiative heat that escapes the Earth. In doing so we not only maintain the carbon dioxide supply that plants rely on, but also prevent the planet from becoming first an overgrown jungle and then a dead rock depleted of the supplies necessary for life.

On the other hand, we could be considered parasites that consume the energy wealth of plants, which without our interference would be perfectly able to maintain the ordered biosphere of the planet.

Of course, considering egotism, the former option would be preferable. Considering evolution, however, plants would doubtless have become equipped to complete this energy cycle themselves if animals did not exist.

But it comes down to whether this energy cycle is necessary for life to be maintained. Are we actually completing a vital and elegant process, or are we providing a redundant service? Would life on Earth still be balanced without the consumption of plant matter and the production of animal waste?