Team:Newcastle University/Original Aims

Original Aims

The standard practice in biology today, synthetic or not, is to treat bioinformatics like any other tool for achieving an end. A problem is identified, and the biologist works out a possible avenue of exploration. Sometimes this involves bioinformatics tools, such as BLAST searches or phylogenetic trees. Once the method of exploration is established, the biologist rarely goes back to bioinformatics approaches to analyze her results.

The concept of parallel evolution treats bioinformatics not only as a tool in biology, but as a viable, though limited, method of exploring a problem in its own right. Wet-lab biology is expensive in terms of time, money, and manpower. A single bioinformatician can test the same situation with no equipment other than a computer, and can run many iterations of the same experiment within seconds rather than the weeks a wet lab may take.

However, bioinformatics is only as good as its simulations. The field expands daily with new information, all of which must be incorporated into a simulation for it to give useful results. Much of bioinformatics is backed by hard data from wet labs. Running an experiment 1000 times is only useful if you know what all the variables are, and what they should be.

The most sensible approach to these different yet complementary methods is to play one off against the other, taking advantage of each method's strengths while minimizing its limitations.

Our modelling focused on Computational Intelligence (CI) approaches. We used neural networks and evolutionary computation to design our system.

[Figure: Overview.GIF, an overview of our complete system.]

Our aim was to develop a strain of Bacillus subtilis with the ability to detect a range of extracellularly secreted quorum-sensing peptides, and to indicate, through reporter genes linked to the receptor-ligand cascade, which of these are present in its growth medium. Our B. subtilis carries a range of engineered genes that enable detection of these peptides. The issue we had to overcome is that we have more quorum-sensing peptides to detect than fluorescent proteins to report their presence.

We had three layers of complexity that fit into the neural network structure, and we also wanted the potential user to be able to specify both the inputs to and the outputs from the network. The plan can be summed up as follows:
  • The input layer to the neural network would be represented by the two-component genes, activated by the peptides from the four Gram-positive pathogens. Each peptide would represent a node. We shall refer to them as parts.
  • The output layer would represent three fluorescent proteins: GFP, YFP and mCherry.
  • The hidden layer would represent an assortment of different transcription factors from the different pathogenic bacteria.
  • The user would specify the inputs and the outputs; i.e., in the presence of a given peptide, a given fluorescent protein would light up.

This was the starting point from which the construct was designed.
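
To make this mapping concrete, the sketch below shows how such a three-layer network could be simulated in software. It is illustrative only: the peptide labels, weights and threshold behaviour are assumptions made for the example, not measured parameters of our construct.

  import numpy as np

  # Illustrative three-layer network: peptide sensors -> transcription factors -> reporters.
  # Layer roles follow the plan above; weights and thresholds are placeholder assumptions.
  PEPTIDES = ["peptide_1", "peptide_2", "peptide_3", "peptide_4"]   # input layer (two-component sensors)
  REPORTERS = ["GFP", "YFP", "mCherry"]                             # output layer (fluorescent proteins)
  N_HIDDEN = 4                                                      # hidden layer (transcription factors)

  rng = np.random.default_rng(0)
  w_in = rng.uniform(-1, 1, size=(N_HIDDEN, len(PEPTIDES)))    # sensor -> transcription factor weights
  w_out = rng.uniform(-1, 1, size=(len(REPORTERS), N_HIDDEN))  # transcription factor -> reporter weights

  def step(x):
      # Threshold activation: each node is simply "on" or "off", loosely mimicking promoter switching.
      return (x > 0.5).astype(float)

  def respond(peptides_present):
      # Which reporters would light up for a given set of detected peptides?
      x = np.array([1.0 if p in peptides_present else 0.0 for p in PEPTIDES])
      hidden = step(w_in @ x)
      output = step(w_out @ hidden)
      return [r for r, on in zip(REPORTERS, output) if on]

  print(respond({"peptide_1", "peptide_3"}))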

Drylab Approach

An important part of our approach is bioinformatics. We will produce a workbench that will incorporate a parts repository, constraints repository and an evolutionary algorithm (EA). The EA will take input from the parts repository and constraints repository to evolve a neural network simulation. The fittest model will be used to generate a DNA sequence which will implement the neural network in vivo. This DNA sequence will be synthesized and cloned into the B. subtilis chassis. One of our outcomes will be a range of neural network node BioBrick devices which can be combined to form the in vivo neural network.
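
As a rough sketch of how the workbench's evolutionary loop could behave, the example below evolves a small Boolean network against a user-specified input/output table. The part names, fitness function and algorithm settings are illustrative assumptions, not the actual repositories or parameters used by the workbench.

  import random

  # Hypothetical sketch of the workbench loop: an evolutionary algorithm chooses hidden-layer
  # "parts" (transcription factors) and Boolean wiring so that the simulated network reproduces
  # a user-specified truth table. All names and settings here are illustrative assumptions.
  PARTS = ["tf_A", "tf_B", "tf_C", "tf_D", "tf_E"]   # stand-in for the parts repository
  N_INPUTS, N_OUTPUTS, N_HIDDEN = 4, 3, 3

  # User-specified behaviour: peptide pattern (4 inputs) -> desired reporter pattern (3 outputs)
  TARGET = {
      (1, 0, 0, 0): (1, 0, 0),
      (0, 1, 0, 0): (0, 1, 0),
      (0, 0, 1, 0): (0, 0, 1),
      (0, 0, 0, 1): (1, 1, 0),
  }

  def random_genome():
      # A genome is a choice of hidden parts plus Boolean wiring between the layers.
      return {
          "parts": random.sample(PARTS, N_HIDDEN),
          "w_in": [[random.randint(0, 1) for _ in range(N_INPUTS)] for _ in range(N_HIDDEN)],
          "w_out": [[random.randint(0, 1) for _ in range(N_HIDDEN)] for _ in range(N_OUTPUTS)],
      }

  def simulate(genome, inputs):
      hidden = [int(any(w and x for w, x in zip(row, inputs))) for row in genome["w_in"]]
      return tuple(int(any(w and h for w, h in zip(row, hidden))) for row in genome["w_out"])

  def fitness(genome):
      # Number of rows of the target truth table the simulated network reproduces.
      return sum(simulate(genome, ins) == outs for ins, outs in TARGET.items())

  def mutate(genome):
      # Crude mutation: re-randomise the genome but keep one wiring layer from the parent.
      child = random_genome()
      keep = random.choice(["w_in", "w_out"])
      child[keep] = [row[:] for row in genome[keep]]
      return child

  population = [random_genome() for _ in range(50)]
  for generation in range(200):
      population.sort(key=fitness, reverse=True)
      if fitness(population[0]) == len(TARGET):
          break
      population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(40)]

  population.sort(key=fitness, reverse=True)
  best = population[0]
  print("best fitness:", fitness(best), "/", len(TARGET), "using parts:", best["parts"])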


Wetlab Approach

Newcastle's 2008 iGEM team is aiming to transform plasmids grown in Escherichia coli into Bacillus subtilis (a Gram-positive bacterium) and to integrate them stably into the B. subtilis chromosomal DNA. By doing this, we hope to create an organism with the ability to detect a range of extracellularly secreted quorum-sensing peptides, and to indicate, through reporter genes linked to the receptor-ligand cascade, which of these are present in its growth medium.

A visual output was chosen because it allows rapid detection of pathogens without the need for specialist equipment. The aim was to have different fluorescent protein outputs corresponding to the presence of different pathogenic bacteria, and to different combinations of these bacteria. In line with the neural network concept of the evolutionary algorithm, we wanted to map numerous inputs to a limited set of outputs. Four pathogens can be detected; we wanted a distinct output for each of these and for each combination of them.
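
A quick back-of-the-envelope count shows why this mapping is the hard part, assuming each fluorescent protein is read as a simple on/off signal (an assumption made for this estimate; graded intensities would give more states):

  from itertools import combinations

  n_pathogens = 4   # pathogens the sensors can detect
  n_reporters = 3   # GFP, YFP, mCherry, each read as on/off (assumption for this estimate)

  input_states = 2 ** n_pathogens    # every subset of pathogens, including "none present" = 16
  output_states = 2 ** n_reporters   # every on/off pattern of the three reporters = 8

  print(f"{input_states} pathogen combinations vs {output_states} distinguishable reporter patterns")
  for k in range(n_pathogens + 1):
      n = len(list(combinations(range(n_pathogens), k)))
      print(f"  {k} pathogen(s) present: {n} combination(s)")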