Team:Paris/Modeling/Histoire du modele
== Introduction ==

Why did we come up with two models? This might indeed seem like an interesting question... but perhaps it is the wrong one: one should rather question the choice of a single model! We shall describe here the story of our model and show why it appeared essential to us to build this dual approach, in which both models interact with each other and generate constructive, purposeful exchanges with the wet lab. Why is a dual model such a necessary base to work with?
== What are the respective goals fulfilled? ==
== BOB: based on bibliography approach ==

Due to time constraints, we needed to quickly establish firm ground on which we could work, so as to understand how our biological system could behave and to give directions to the lab. We therefore needed a model for which we had a good idea of the parameters involved, and that would enable us to understand the dynamics at play as well as the respective influences of the different genes of the cascade.

That is why we turned to the literature for models. This provided us with coherent values for our parameters and allowed us to build a sound line of reasoning. The most concrete use of this can be seen in the way we ordered the genes FliL, FlgA and FlhB (link to the other page). We found an interesting model of these interactions in [ref biblio]. Whether it could be adapted directly to our system was of little interest, since its parameters had been obtained under conditions that might differ from ours. What we could use, however, was the fact that FlhDC and FliA influence the three class two genes in different ways (see the BOB page). This understanding helped us decide the order of the FIFO genes, since it gave us firmly established arguments for what we believed to be the best order.

Furthermore, this model enabled us to identify which steps of the project had little chance of being realized and which had a greater chance of success. This was of the utmost importance for the strategy of the project: knowing that our time was not infinite, it helped us set our goals and priorities.

Last but not least, it is important to understand our thought process. Rather than trying to describe in detail the biological processes occurring in a gene cascade, we took an engineer's view: gene A induces the expression of gene B. Thus, strange as it may seem, finding quantitative parameters in this context enabled us to build qualitative, yet useful, reasoning. We had no real interest in checking whether the oscillation period would be one hour or one hour and a half; rather, we wanted an idea of how we could biologically facilitate the oscillations.
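To make this engineer's view concrete, here is a minimal sketch of the kind of phenomenological equation it leads to, written with a standard Hill-type activation term; the symbols (β, the maximal production rate; K, the activation threshold; n, the Hill coefficient; γ, the degradation rate) are illustrative placeholders rather than the actual values used in BOB:

<math>\frac{d[B]}{dt} \;=\; \beta\,\frac{[A]^n}{K^n + [A]^n} \;-\; \gamma\,[B]</math>

Reading the cascade this way, each arrow "A induces B" only requires a handful of parameters taken from the literature, which is enough to reason qualitatively about the relative responses of the class two genes.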
== APE: A Parameter Estimation Approach ==
== Comparison: which model should I choose in which case? ==

It is no mystery that a mathematician's pet hate is determining the parameters he wishes to use. As we saw throughout the previous explanations, whenever one decides to go deeper into the mathematical translation of reality, one automatically adds new parameters. If, for example, determining a single parameter introduces a 10% error, what is the error made when there are three times as many parameters? We immediately see that an optimization question lies beneath this phenomenon.
Akaike (followed by others) devised a criterion that penalizes a model when its error grows, and penalizes it as well when its number of parameters grows. We obtained interesting results when applying it to our system (link to the Akaike page), but those results are only the hidden part of the iceberg! We can use such criteria to choose which model is more relevant depending on the data at our disposal. The tip of the iceberg stands there: we can choose our model depending on what we intend to do!
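For reference, the standard form of Akaike's criterion (a textbook formula, not a result specific to our system) balances exactly these two effects, with k the number of parameters and <math>\hat{L}</math> the maximized likelihood of the model:

<math>\mathrm{AIC} \;=\; 2k \;-\; 2\ln\hat{L}</math>

Among candidate models fitted to the same data, the one with the lowest AIC offers the best compromise between goodness of fit and number of parameters, which is precisely the trade-off described above.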