Team:ETH Zurich/Tools/Automated Wiki


Motivation

At the beginning of our project planning phase, we thought about how to deal with the wiki. One major shortcoming showed up when we browsed the wikis of previous iGEM competitions: a lot of them simply didn't work anymore.

The problem is that large parts of the old wikis are stored on external servers - e.g. JavaScript menus that are stored on students' private homepages.
A good example of this is last year's ETH team wiki:

[Image: Javascript.JPG - screenshot of last year's wiki source, showing the navigation script included from an external server]

As you can see, the .js file is stored on an external server. As soon as this file is gone, the whole wiki page stops working, because the navigation is gone. And if you browse through wikis of even earlier years, you can see that this has already happened to a lot of pages.

This year's situation

When we started to think about how to design the wiki, it turned out that the people responsible for this wiki were aware of the situation: embedding external content into a wiki page did not seem to work anymore.

While this solved the problem of missing content, it gave rise to a new problem. Wiki syntax is very simple and therefore easy to handle. While this is a benefit when editing pages, it comes with a lack of design flexibility - or with a lot of work if you want a nice design anyway. The wiki syntax makes it harder to separate design from content, as is the case for our wiki page right now. Since a lot of people are responsible for the content and only a few work on the design, we agreed on editing pages like this:

<!-- PUT THE PAGE CONTENT AFTER THIS LINE. THANKS :) -->
your text goes here
<!-- PUT THE PAGE CONTENT BEFORE THIS LINE. THANKS :) -->
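
As an illustration of how a tool can make use of these markers (a minimal C# sketch, not part of the original tool), the editable content can be pulled out of a page source with a simple regular expression:

  using System;
  using System.Text.RegularExpressions;

  class ContentExtractor
  {
      // The marker lines shown above, as they appear in the page source.
      const string StartMark = "<!-- PUT THE PAGE CONTENT AFTER THIS LINE. THANKS :) -->";
      const string EndMark   = "<!-- PUT THE PAGE CONTENT BEFORE THIS LINE. THANKS :) -->";

      // Returns the editable content between the two markers, or null if absent.
      public static string Extract(string pageSource)
      {
          Match m = Regex.Match(pageSource,
              Regex.Escape(StartMark) + "(.*?)" + Regex.Escape(EndMark),
              RegexOptions.Singleline);
          return m.Success ? m.Groups[1].Value.Trim() : null;
      }
  }

This way, everything outside the markers can be regenerated by the design team without touching anybody's content.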



How to overcome this situation

While the reasons why external content should not be allowed on the wiki were obvious, we still didn't want to settle for "just" using the regular wiki editing. Therefore, we had the idea to use some kind of software to overcome the shortcomings listed above. We wanted to introduce a "middle man" that performs editing tasks for the user and creates a separation of content and syntax, while keeping wiki-only syntax on the MIT wiki site.

The automated wiki

Software / Programming Language

The automated wiki is written in C# as a web application that runs on an IIS server.
It uses MySQL as its database.
The software can be divided into three major parts:

User Interface

[Screenshot: page editing with the automated wiki]
The user interface uses the [http://www.fckeditor.net/demo FCKeditor tool] as an input interface. You can test it [http://www.fckeditor.net/demo here] and see what it can do.

A short list of benefits compared to regular wiki editing:

  • Regular users don't have to deal with any wiki syntax.
  • "What you see is what you get" editor:
    • Drag and drop available
    • Image editing in place (drag and drop or via properties dialog)
    • Formatting via pointing and clicking
  • Pasting and converting text from Word or HTML pages!
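
For reference, a minimal sketch of how the editor can be embedded in an ASP.NET page, assuming the FCKeditor.Net server control (FredCK.FCKeditorV2); the page structure, paths and IDs here are illustrative, not taken from the tool:

  <%@ Page Language="C#" %>
  <%@ Register Assembly="FredCK.FCKeditorV2" Namespace="FredCK.FCKeditorV2" TagPrefix="FCKeditorV2" %>
  <html>
    <body>
      <form runat="server">
        <!-- WYSIWYG input field; its Value property holds the HTML the user edits -->
        <FCKeditorV2:FCKeditor ID="PageEditor" runat="server"
            BasePath="~/fckeditor/" Height="400px" />
      </form>
    </body>
  </html>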


[Screenshot: selecting/uploading images with the automated wiki]
[Screenshot: drag-and-drop editing/resizing of images with the automated wiki]


Parsing Sites

At this point we stopped developing the automated wiki. This part is mainly responsible for adding the layout to the text.
To achieve this, a layout - a regular HTML page containing various parsing tags - is read from the database.
For every parsing tag, the program then inserts the correct page content.
That way, the menu or a layout image that is placed on every page can be edited in one place and then updated on every other page.

Our plan was to also parse the text for keywords (e.g. papers) and automatically create the correct links on the wiki.
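
The tool itself is in the download at the bottom of this page; as a minimal sketch of the idea (the <!--##NAME##--> tag syntax is made up for illustration), the parsing step boils down to a search-and-replace over the layout page:

  using System.Collections.Generic;

  class LayoutParser
  {
      // Fills a layout page with content: every parsing tag of the
      // (hypothetical) form <!--##NAME##--> is replaced by the fragment
      // stored under NAME. The real tool reads the layout and the
      // fragments from the MySQL database.
      public static string Render(string layout,
                                  IDictionary<string, string> fragments)
      {
          foreach (KeyValuePair<string, string> kv in fragments)
              layout = layout.Replace("<!--##" + kv.Key + "##-->", kv.Value);
          return layout;
      }
  }

Since the menu is just one such fragment, editing it once in the database changes it on every page the next time the pages are generated.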

Writing into MIT Wiki

[Screenshot: menu generated from the MySQL database]
Finally, after all the pages have been generated, those that have changed are updated on the MIT wiki.

To do this, the application simply mimics a regular user edit: it logs in with a regular user account and edits the pages one by one.
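
As a rough sketch of such a robot edit against a stock MediaWiki (this is not the code from the download; the URL and the wpName/wpPassword form fields assume the standard MediaWiki login and edit forms of that time):

  using System;
  using System.IO;
  using System.Net;
  using System.Text;

  class WikiBot
  {
      readonly CookieContainer cookies = new CookieContainer();

      // POSTs a urlencoded form and returns the response body,
      // keeping the session cookies between calls.
      string Post(string url, string form)
      {
          HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
          req.Method = "POST";
          req.ContentType = "application/x-www-form-urlencoded";
          req.CookieContainer = cookies;
          byte[] data = Encoding.UTF8.GetBytes(form);
          using (Stream s = req.GetRequestStream())
              s.Write(data, 0, data.Length);
          using (StreamReader r = new StreamReader(
                     req.GetResponse().GetResponseStream()))
              return r.ReadToEnd();
      }

      // Logs in through the normal MediaWiki login form, as a browser would.
      public void Login(string indexPhp, string user, string pass)
      {
          Post(indexPhp + "?title=Special:Userlogin&action=submitlogin&type=login",
               "wpName=" + Uri.EscapeDataString(user) +
               "&wpPassword=" + Uri.EscapeDataString(pass) +
               "&wpLoginattempt=Log+in");
      }

      // Saving a page works the same way: request the page with action=edit,
      // scrape the hidden wpEditToken and wpEdittime fields from the form,
      // then POST the new text as wpTextbox1 to action=submit -
      // one such round trip per changed page.
  }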

To see an example of a page generated with the automated wiki, click here.

Why we didn't use it in the end

There were two reasons why we didn't use the automated wiki in the end:

  • Within the team, not everyone felt comfortable with the idea of needing an extra tool to edit the wiki, because it also introduces an additional source of errors and complications.
  • For some reason, the MIT wiki team decided to loosen the allowed syntax again, so now we are back in the situation where everything - from HTML to JavaScript and even Flash - is allowed. With these changes, the main reason why we introduced the automated wiki in the first place was gone...

Download

Media:ETH Auto wiki.zip - Keep in mind that the development of this tool stopped in the middle of the project. However, you might be able to use some of its ideas or parts.