How to build a website?

We are about to start redesigning our library website. At the same time I want to rework the technical basis of the site.

Currently the site is static html. I want to separate presentation from data, as well as make it easy to update for the staff responsible for the data. The kind of thing I’m thinking about is that the staff who change the opening hours on the notice board should also change them on the web. However, I don’t think they should need to know html to do this.

I’m also interested in the possibility of some new technologies such as blogs and wikis. Obviously a blog might be great for news (with an RSS feed, of course), and a wiki might be a way of providing faq-type answers to our users (and even getting their input to the faqs).
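Just to make the RSS idea concrete, a minimal news feed might look something like the following (the channel details and the item are invented for illustration):

```xml
<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Library News</title>
    <link>http://library.example.ac.uk/news/</link>
    <description>News from the library</description>
    <item>
      <title>Revised opening hours over the vacation</title>
      <link>http://library.example.ac.uk/news/vacation-hours</link>
      <description>The library will close at 17:00 from Monday.</description>
    </item>
  </channel>
</rss>
```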

Anyway, none of this is really rocket science, but I’ve got some limitations and questions.

Firstly – the limitations. I really need to stick within the technology infrastructure supported by the College. This is basically MS based, with the library web pages sitting on W2K server, using IIS. Other library systems (LMS, federated search, link server) all run on UNIX and Apache.

Secondly, I’m not sure if we should use xml and xslt to achieve the aim of separating data from presentation, just use xhtml and css, or do a database-driven site – or use a database backend to support either of the other methods.
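To illustrate the xml/xslt option: the opening hours could live in a simple xml file that staff edit, with an xslt stylesheet (maintained separately) turning it into xhtml. The file names and markup below are just one possible sketch:

```xml
<!-- hours.xml: the data, editable without knowing any html -->
<openinghours>
  <day name="Monday" open="09:00" close="21:00"/>
  <day name="Saturday" open="10:00" close="17:00"/>
</openinghours>

<!-- hours.xsl: the presentation, kept separate from the data -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/openinghours">
    <table>
      <xsl:for-each select="day">
        <tr>
          <td><xsl:value-of select="@name"/></td>
          <td><xsl:value-of select="@open"/>–<xsl:value-of select="@close"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>
```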

Finally, I’ve come to realise that I’ve always thought of the library web presence as having several components: The OPAC; The Resource Discovery Tool; the Website. However, it seems clear that visitors only perceive a single web presence, and don’t understand why the ‘opac’ is separate to information about our opening hours. Part of the challenge is going to be integrating our ‘applications’ into our website in a reasonably seamless way, but without overwhelming the user with too many options at once.

Although we are in the early stages of the design – looking at stakeholders and content needs – and I want any technical solution to support those needs rather than drive them, I am eager to get some of these issues sorted out now.

Discussion board standards

While taking part in the VLEs: Beyond The Fringe… And Into The Mainstream event, it occurred to me that the discussion group software could be better from the ‘reading’ point of view.

In fact, had the discussion group had an RSS feed, I could have read the postings in a much more convenient fashion, and kept up with the 4 separate discussions that were going on.

I’d gone further than this, though. RSS obviously doesn’t quite serve the needs of bulletin boards (threading, sequencing etc.), but surely it wouldn’t be difficult to define an xml output format for discussion groups rather than html, and a simple messaging format to allow posting as well as reading posts.
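As a sketch of what that might look like: a discussion posting could be carried as a feed item, with an extra element supplying the threading information plain RSS lacks. The thread: namespace and element here are invented purely for illustration:

```xml
<rss version="2.0"
     xmlns:thread="http://example.org/ns/thread"> <!-- hypothetical namespace -->
  <channel>
    <title>Module discussion board</title>
    <link>http://vle.example.ac.uk/board/</link>
    <description>Postings from the module discussion board</description>
    <item>
      <title>Re: Assessment deadlines</title>
      <link>http://vle.example.ac.uk/board/msg/124</link>
      <guid>http://vle.example.ac.uk/board/msg/124</guid>
      <!-- invented element: points at the posting this one replies to -->
      <thread:in-reply-to ref="http://vle.example.ac.uk/board/msg/120"/>
    </item>
  </channel>
</rss>
```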

It’s just occurred to me that, of course, existing news readers do this – so why are e-learning systems not delivering standard bulletin board formats so I can ‘subscribe’ in my news reader? On the other hand, does discussion board software from outside the e-learning sphere support this? What are the problems?

It suddenly seemed clear to me that if in the future (as some people suggest) learners become more picky about where they do qualifications, and buy courses online from a variety of sources, they will need some way to ‘aggregate’ their courses in a single environment (rather than the current practice, where each institution runs its own ‘learning environment’, and a user taking courses from two institutions has to interact with two learning environments).

Since there is also talk of ‘exploding’ the VLE/LMS into its component parts, a discussion board system which is readable by a standard news reader seems like a sensible idea. I’m just wondering how complex it needs to get… Perhaps bulletin board software supporting RSS is a better idea? I’ve gone round in a circle on this – it obviously needs more thought and research.

‘Personal’ researcher

The introduction of federated search engines (e.g. MetaLib) seems to open up an opportunity for some kind of ‘automatic researcher’. I’m thinking of a piece of software that would do sequential searches on a variety of sources, and put together a ‘reading list’ of relevant references.

Just to describe how this might work:

The researcher puts together a list of keywords, and defines a starting point (e.g. a list of databases).
The federated search engine does a search on the databases specified by the researcher.
From the results, the software could compile a number of different search ‘facets’ and then continue searching on these. These facets could be, for example, subject words not specified in the original list, and author names.
Alternatively it could do something like find all the papers which cite, or are cited by, a paper retrieved by the original search (a rough sketch of the whole loop follows below).
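To pin the idea down, here is a rough sketch of that loop in Python. The federated_search and extract_facets functions are hypothetical stand-ins for whatever interface a federated search engine actually exposes; nothing here is based on a real product’s API:

```python
def federated_search(query, databases):
    """Hypothetical stand-in: cross-search the given databases and
    return a list of (relevance-ranked) records."""
    return []  # placeholder so the sketch runs


def extract_facets(records, original_keywords):
    """Hypothetical stand-in: pull subject words and author names out of
    the retrieved records, skipping anything already in the original list."""
    return []  # placeholder so the sketch runs


def automatic_researcher(keywords, databases, rounds=2, max_refs=50):
    """Build a 'reading list' by searching, extracting facets from the
    results, and searching again on those facets."""
    reading_list = []
    queries = [keywords]
    for _ in range(rounds):
        next_queries = []
        for query in queries:
            records = federated_search(query, databases)
            reading_list.extend(records)
            # Each unseen subject word or author name becomes a
            # follow-up search in the next round.
            next_queries.extend(extract_facets(records, keywords))
        queries = next_queries
        if len(reading_list) >= max_refs:
            break
    return reading_list[:max_refs]
```

Citation chasing (finding everything that cites, or is cited by, a retrieved paper) would slot in as just another way of generating the next round of queries.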

The effectiveness of this kind of functionality would depend on the databases available for cross-searching, how effectively the results can be ‘relevance’ ranked, and how much structure there is in the retrieved records (the more context available for the retrieved records, the better, I guess).

In combination with a link server (e.g. SFX) and the local library catalogue, you can even see this being able to prioritise the material that is easily available to the researcher…
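The ‘easily available’ check could piggyback on the OpenURL links a link server like SFX already resolves. A request of roughly this shape (wrapped here for readability; the hostname and all the metadata values are invented) asks the resolver what services it has for a given article:

```
http://sfx.example.ac.uk/sfx?sid=metalib:search
    &genre=article&issn=1234-5678&date=2003
    &volume=12&issue=3&spage=45
    &atitle=Some+Retrieved+Paper
```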

I think all the pieces are actually already in place for this, but the functionality isn’t quite there yet. I wonder if anyone would be interested in funding a bit of research in this area… a couple of months’ work with a federated search engine supplier should really be enough to get this up and running.

(I’ve used the Ex Libris products as examples here, just because these are the ones I am familiar with, so I can kind of see how it could work using them. I’m sure similar products from other vendors could do the same kind of thing.)