Team:Paris/Modeling/Histoire du modele
From 2008.igem.org
Revision as of 02:11, 5 October 2008

= Introduction =

Why did we come up with two models? Indeed, this might be an interesting question… but is it the relevant one? We should rather question the choice of a single model! We shall describe here the story of our model, and show why it appeared absolutely essential to us to build this dual approach, in which both models interact with each other and beget constructive and purposeful exchanges with the wet lab.

= Why is a double model an absolutely necessary base to work with? =

As in industry, where one is asked to propose various technical solutions while developing a project, we decided to propose two models in the mathematical description process. Indeed, with a single mathematical model, the description and results obtained are most often biased by the assumptions that ground the model.

Furthermore, one does not build a model solely to transpose biological information into the abstract formalism of mathematicians. A model has to be designed according to the information one wants to get from it.

Last but not least, the more precise a model is, the more parameters are involved. Straightforward arithmetic makes the trade-off clear: adding equations seems to give a better interpretation of reality, but with every added parameter one loses accuracy. What is the optimal equilibrium?

Hence the need to choose approximations adapted to the information we want to get: which effects do we decide to neglect, and what degree of precision do we need? Thus, part of the model lies in understanding the choices that have been made. We shall hereby draw a parallel with the history of physics. A first model was built, often called the classical theory. Then it was discovered that this theory was not sufficient to describe every phenomenon. A new theory, quantum physics, was then developed, in which the "old" effects still found their place. In a way, this is the train of thought we wished to follow. For example, let us consider the FIFO subsystem. In the BOB approach, the key point developed was the summed effect of FlhDC and FliA over FliL, FlgA and FlhB. This aspect appears as well, alongside the description of chemical effects, in the APE model. Just as one chooses between classical and quantum physics depending on what one wishes to prove, we choose between our two models depending on the question at hand.

We therefore concentrated on choosing two relevant models that would be complementary, since they fulfill different goals. Both give different purposeful pieces of information about our biological system.
= What are the respective goals fulfilled? =
== BOB: based on bibliography approach ==

Due to time constraints, we needed to quickly establish a firm ground on which we could work, so as to understand how our biological system could behave and to give direction to the lab. We therefore needed a model for which we had a good idea of the parameters involved, and that would enable us to understand the dynamics involved, as well as the respective influences of the different genes of the cascade.
== APE: A Parameter Estimation Approach ==
== comparison: what model should I choose in which case? ==

It is no mystery that a mathematician's pet hate is determining the parameters he wishes to use. As we saw throughout the previous explanations, when one decides to go deeper into a mathematical translation of reality, one automatically adds new parameters. Suppose, for example, that each parameter is determined with a 10% error: what is the error made when there are three times more parameters? We directly understand that an optimization question lies beneath this phenomenon.
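The question above can be illustrated with a minimal Monte-Carlo sketch. The numbers here are hypothetical (parameters around 1.0, a purely multiplicative toy output, not our actual cascade): the point is only that with a 10% relative error on every parameter, the spread of the model output grows as parameters are added.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

def output_spread(n_params):
    # Draw each parameter around 1.0 with a 10% standard deviation,
    # combine them multiplicatively, and measure the relative spread
    # of the resulting model output.
    params = rng.normal(1.0, 0.10, size=(n_samples, n_params))
    outputs = params.prod(axis=1)
    return outputs.std() / outputs.mean()

print(output_spread(3))  # spread with 3 parameters
print(output_spread(9))  # spread with three times more parameters: noticeably larger
```

For small errors the relative spread grows roughly like the square root of the number of parameters, which is exactly the optimization tension described above.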
Akaike (followed by others) devised a criterion that penalizes a model when its error grows, and penalizes it as well when its number of parameters grows. We obtained interesting results (link to akaike page) when applying it to our system, but the results in themselves are only the hidden part of the iceberg! We can use these criteria to choose which model is more relevant depending on the data we have at our disposal. The tip of the iceberg stands there: we can choose our model depending on what we intend to do!
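As a sketch of how such a criterion works (on synthetic data of our own choosing, not the team's actual fits): for a least-squares fit with Gaussian residuals, Akaike's criterion can be written AIC = n·ln(RSS/n) + 2k, where RSS is the residual sum of squares, n the number of data points, and k the number of parameters. The model with the lower AIC is preferred.

```python
import numpy as np

def aic(rss, n, k):
    """Akaike Information Criterion for a least-squares fit
    with Gaussian residuals: n*ln(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
data = 2.0 * t + 1.0 + rng.normal(0, 1.0, t.size)  # synthetic, truly linear observations

# Model 1: straight line (k = 2 parameters)
p1 = np.polyfit(t, data, 1)
rss1 = np.sum((data - np.polyval(p1, t)) ** 2)

# Model 2: cubic polynomial (k = 4 parameters) -- more equations, more parameters
p2 = np.polyfit(t, data, 3)
rss2 = np.sum((data - np.polyval(p2, t)) ** 2)

aic1 = aic(rss1, t.size, 2)
aic2 = aic(rss2, t.size, 4)
# The cubic always achieves a smaller (or equal) RSS, since the line is
# nested inside it, but the 2k penalty can still make the simpler model win.
print(aic1, aic2)
```

This captures the trade-off in the text: adding parameters always reduces the raw fitting error, and the criterion decides whether that reduction is worth the extra parameters.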