“If you give me two free parameters, I can describe an elephant. If you give me three, I can make him wiggle his tail.” – Eugene Wigner

I think this is supposed to say that two free parameters are enough to describe most situations; any more are unnecessary and don’t actually allow for a more complicated model. This is a good point: if a data set only has two degrees of freedom, then giving the model more degrees of freedom doesn’t increase its accuracy. With too many degrees of freedom you start modelling effects that aren’t really part of the underlying phenomenon, such as experimental errors.
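As a rough illustration of this (a hypothetical sketch, with made-up synthetic data rather than anything from the discussion above), we can generate data from a two-parameter linear law plus measurement noise, then fit it with both a two-parameter model and a heavily over-parameterised one. The flexible fit hugs the data more closely precisely because it is modelling the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
# Data generated from a two-parameter (linear) law plus measurement noise
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)

# A model with the "right" number of free parameters (slope and intercept)...
fit_simple = np.polyfit(x, y, 1)
# ...and one with far too many free parameters for 10 data points
fit_flexible = np.polyfit(x, y, 7)

resid_simple = np.sum((y - np.polyval(fit_simple, x)) ** 2)
resid_flexible = np.sum((y - np.polyval(fit_flexible, x)) ** 2)
print(resid_simple, resid_flexible)
```

The degree-7 polynomial always achieves a smaller residual on the data it was fit to, but the extra wiggles track the random errors, not the linear law that actually generated the points.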
Any number of equations and models could be found to describe a given set of data; this section summarised what makes some more interesting than others. If the data set is exceptionally accurate, and a simple model with few parameters can be fit to it within its low uncertainty, then that model can suggest a relationship in the data. What the parameters physically mean can then be deduced.
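To sketch this first approach (again with invented synthetic data, purely for illustration): suppose we had accurate pendulum timings and found that a one-parameter model T = a·√L fits them well. Asking what the fitted parameter a physically means leads to the interpretation a = 2π/√g, from which g can be deduced:

```python
import numpy as np

rng = np.random.default_rng(1)
L = np.linspace(0.2, 1.0, 8)  # pendulum lengths (m), hypothetical data
# "Measured" periods (s): generated from T = 2*pi*sqrt(L/g) plus small noise
T = 2 * np.pi * np.sqrt(L / 9.81) + rng.normal(0.0, 0.005, L.size)

# Least-squares fit of the one-parameter model T = a*sqrt(L)
s = np.sqrt(L)
a = np.sum(T * s) / np.sum(s * s)

# Deducing what the parameter means: if T = 2*pi*sqrt(L/g), then g = (2*pi/a)**2
g_inferred = (2 * np.pi / a) ** 2
print(g_inferred)
```

The fit itself knows nothing about gravity; the physical meaning of a is deduced afterwards, which is exactly the order of reasoning this first approach takes.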
The opposite approach can also be taken: a model can be built from physical principles and then applied to the data set it attempts to describe. The model is successful if the parameters it predicts match those described by the data.
I personally think this second approach is better, as it reveals the physical principles underlying the data. However, when investigating a completely new phenomenon, where the processes behind the model are unknown, the first approach can be preferable: it gives a scientist attempting to understand the phenomenon a foundation to build a model from.
Does anyone have any other ideas about which scientific model construction process is better?