Presenting Telstar

A few years ago at the Hay-on-Wye literary festival I went to see Lawrence Lessig present on copyright law (I know how to have a good time!). It was a transformational experience – not in my view of copyright and intellectual property (although he had very interesting things to say about that), but in my understanding of how you could use PowerPoint to illustrate a speech. As you can see from my later comment on the eFoundations blog, I was left both amazed and jealous. If you want to see a version of this presentation by Lessig (which is well worth it for the content alone), you can watch his TED talk.

I think I was an OK presenter, and I don’t think I was particularly guilty of just reading out the slides – but I would definitely say my slides tended to be text- and bullet-point-heavy. To illustrate, this is a reasonably typical presentation from that time:

Lessig’s example really made me want to change how I approached using slides. Going back to my desk and browsing the web, I came across the Presentation Zen blog, and from there Garr Reynolds’ tips on using slides. On the latter site I remember particularly being struck by the example under tip 2 (Limit bullet points and text), where the point the presenter wants to communicate is “72% of part-time workers in Japan are women” (I have no idea if this is true, by the way). The immediate impact of a slide that simply had the characters 72% on it in a huge font was something I really noticed. This led to my style evolving, and you can hopefully see the difference in a more recent presentation I did on ‘Resource Discovery Infrastructure’.

I’m definitely happier with this latter set of slides, but there are some issues. Without me actually talking, the second set of slides has a lot less meaning than the first. I’ve also found that I sometimes stretch for a visual metaphor and end up with pictures that only tangentially relate to what I’m saying (I find signposts particularly flexible as a visual metaphor). In some cases the pictures became just something to look at while I talked.

So, when I had the opportunity to present a paper on the project I’m currently working on (Telstar) at ALT-C, and they actually mentioned Lawrence Lessig in their ‘guidelines for speakers’, I decided I wanted to try something slightly more ambitious (actually the guidelines for speakers wound me up a bit, since they included a suggested limit of 6 slides for a 12 minute talk – this may have influenced what happened next).

I really wanted a slideshow that would punctuate my talk, give emphasis to the things I wanted to say, catch the attention of the audience, and let me try out a few things I’d had floating around my head for a while. So I went to town. I ended up with 159 slides to deliver in 12 minutes (it actually took me more like 10 minutes on the day).

The whole process of putting the slideshow together was extremely frustrating and took a long time – for a 12 minute talk it took several days to put the presentation together, while writing the talk took no more than half that. PowerPoint is simply not designed to work well in this way – all kinds of things frustrated me:

  • an integration with Flickr would be nice for a start
  • the ability to standardise a size and position for inserted pictures
  • positioning guides when dragging elements around the slide (Keynote has had these for years, and I think the latest version of PowerPoint does as well)
  • basic things like the ability to give a title to a slide (so it shows in the outline view) without having to actually add text to the slide itself
  • a much better ‘notes’ editing interface

I also realised how closely I was going to have to script the talk. This isn’t how I’ve normally worked. Although I’d have a script for rehearsal, by the time I spoke I would be down to basic notes and would extemporise around these. That works if you have a basic ‘point per slide’ approach – but not when you have slides that are intended (for example) to display the word you are saying at the exact moment you say it; in that instance, if you use a synonym, the whole effect is lost (or mislaid).

So, after I’d got my script and my slides, I started to rehearse. Again, syncing the slides so closely to what I was saying was an issue – I had to get it exactly right. I had a look at various ‘presenter’ programs available for the iPhone, thinking this could help, and came across some ‘autocue’ apps. I tried one of these and, after a bit of a struggle, got the text of my talk loaded (with the word [click] marking each point where I was to move the slides on). The autocue worked well, although I found having to control the speed, pause it etc. could be distracting – so I had to play around with the speed, and put in extra spacing, to make it as close to my natural pace of delivery as possible.

I recorded myself giving the presentation so I could load it on my iPod, listen to it, and rehearse along with it in the car. (I started recording myself presenting a few years ago and find it really helpful in showing up the places where I don’t actually know what I’m saying.)

Finally I was ready, and I gave the presentation to a polite audience in Manchester. How did it go? I’m not sure – I got some good questions, which I guess is a good sign. However, I did feel the tightly scripted talk, delivered with autocue, resulted in a much less relaxed and engaging presentation style – I didn’t really feel I connected with the audience, as I was too busy worrying about getting all the words right, making sure the autocue didn’t run away with me, and clicking the mouse in all the right places! If you were there, I’d be interested in some honest feedback – was it all too much? Did it come across that I was reading a script? What did you think? (I hope, at least, I managed to avoid falling foul of Sarah Horrigan’s 10 Powerpoint Commandments – although it may have been bad in several other ways.)

I knew that when I came to put this presentation online it would be completely useless without the accompanying narration – so I decided I should record a version of the talk, with slides, to put online. This was a complete nightmare! First I tried the built-in function in PowerPoint to ‘record a narration’. Unfortunately, when you do this, PowerPoint ignores any automatic slide timings you have set – which were essential to some of the effects I wanted to achieve.

I then decided I’d do an ‘enhanced podcast’ – basically a podcast with pictures. I used GarageBand (on a Mac) to record my narration while running the PowerPoint on a separate machine. Once I’d done this, I exported all the slides from PowerPoint to JPEG, imported them into GarageBand, and synced them to the narration by hand. This worked well, and I was really happy – right up until the point I realised GarageBand had automatically cropped all the images to a square, losing bits of the slides, including some of the branding I absolutely had to have on there. So that was another 2 hours down the drain.

I then thought about using ‘screen capture’ software to capture the slideshow while it played on the screen, and my narration at the same time. The first one I tried couldn’t keep up with the rapidly changing slides, and the second crashed almost before I started.

I finally decided that iMovie would be the easiest option – I’d re-record the narration with GarageBand, and use iMovie’s ability to import stills and use them instead of video, syncing their duration with the narration track. It took several attempts (not least because the shortest time iMovie will display any image seems to be 0.2s, and I had some images timed to display for only 0.1s – I eventually had to give up on this and settle for 0.2s per image, which means there is a slightly long pause at one point in the presentation).

Overall I’m much more pleased with this recorded version than with the live performance – which I think lacked any ‘performance’. The autocue application worked really well when sitting in front of a computer talking into the microphone. There are still some issues – you may notice some interference on the track, which comes from my mobile phone interacting with some speakers I forgot to turn off. However, I think it works well, and as a video, rather than a ‘slidecast’, it is more portable and distributable. It’s on YouTube, and there is also a downloadable version you can use on your PC or your portable device.

Finally, once I’d put the video on YouTube, I was able to add closed captioning (using the free CaptionTube app – which is not bug-free) – and here, having the script written out was very helpful, and it wasn’t too difficult to add the subtitles (although I do worry whether some of them are on the screen just a bit too briefly).
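For anyone who hasn’t looked at a caption file, the format itself is very simple. This is a made-up snippet in the SubRip (.srt) style that YouTube accepts – the timings and wording here are invented, just to show the shape of the thing:

    1
    00:00:01,000 --> 00:00:04,500
    Welcome to this presentation about the Telstar project.

    2
    00:00:04,500 --> 00:00:06,200
    First, some background.

Each caption’s on-screen duration is right there in the file, which is also why it’s so easy to end up with some captions flashing past too briefly.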

Would I do it again? I suspect I was a little guilty this time of putting style before substance – I’m pleased with the video output, but I felt the live presentation left something to be desired. Perhaps if I’d known the script better, and hadn’t been relying on the autocue to keep me to it, it might have been better. But I guess it isn’t surprising that something that works on screen is going to be different to something that works on stage.

I think the other thing I’ve realised is that although my PowerPoint may be prettier, I’m probably still just an OK presenter. If I’ve got good content I do an OK job. Perhaps what I need is to look at how I present – my writing, and what you might call my stage presence – after all, if I get that right, who is going to care about the slides?

Anyway, after all that, here it is – if you are interested…

I’d be interested to hear what you think …

Twitter – a walk in the park?

This week I’ve been at the ALT-C conference in Manchester. One of the most interesting and thought-provoking talks I went to was by David White (@daveowhite) from Oxford, who talked about the concept of visitors and residents in the context of technology and online tools.

The work David and colleagues have done (the ISTHMUS project) suggests that, moving on from Prensky’s idea of ‘digital natives and immigrants’ (which David said had sadly been boiled down in popular thought to ‘old people just can’t do stuff’ – even if that wasn’t exactly what Prensky said), it is useful to think in terms of visitors and residents.

Residents are those who live parts of their life online – their presence is persistent over time, even when they aren’t logged in. On the other hand, visitors tend to log on, complete a task, and then log off, leaving no particular trace of their identity.

The Resident/Visitor concept isn’t meant to be a binary one – it is a continuum – we all display some level of both types of behaviour. Also, it may be that you are more ‘resident’ in some areas of your life or in some online environments, but more a ‘visitor’ in others.

I think the most powerful analogy David drew was to illustrate ‘resident’ behaviour as people milling around and picnicking in a park. They were ‘inhabiting’ the space – not solving a particular problem, or doing a particular task. It might be that they would talk to others, learn stuff, experience stuff etc., but this probably wasn’t their motivation in going to the park.

On the other hand, a visitor would treat an online environment in a much more functional manner – like a toolbox – going there to do a particular thing, and then getting out.

David suggested that some online environments were more ‘residential’ than others – perhaps Twitter and Second Life both being examples – and that approaching these as a ‘visitor’ wasn’t likely to be a successful strategy. That wasn’t to pass judgement on the use or not of these tools – there’s nothing to say you have to use them.

David also noted that moving formal education into a residential environment isn’t always easy – you can’t just turn up in a pub as a teacher and start teaching people (even if those same people are your students in a formal setting) – and the same is true online. An example was the different attitudes of two groups of students to their tutors when working in Second Life: in the first case the tutor had worked continually in SL with the students, and had successfully established their authority in the space; in the second, a tutor had only ‘popped in’ to SL occasionally and tried to act with the same authority – which grated on the students.

At the heart of the presentation was the thesis that we need to look much more at the motivations and behaviours of people, not focus on the technology – a concept that David and others are trying to frame – currently under the phrase ‘post-technical’. Ian Truelove has done quite a good post on what post-technical is about.

Another point made was that setting up ‘residential’ environments can be extremely cheap – something to bear in mind both when planning what to do and when deciding what your measures of ‘success’ are – think about the value you get in terms of your investment.

The points David made came back to me in a session this morning on Digital Identity (run by Frances Bell, Josie Fraser, James Clay and Helen Keegan). I joined a group discussing Twitter, and some of the questions were about ‘how can I use Twitter in my teaching/education’. For me, a definite ‘resident’ on Twitter, this felt like an incongruous question. I started to think about it a bit more and realised that there are ‘tool’-like aspects to Twitter:

  • Publication platform (albeit in a very restrictive format)
  • Ability to publish easily from mobile devices (with or without internet access)
  • Ability to repurpose outputs via RSS

This probably needs breaking down a bit more. But you can see that if you wanted to create a ‘news channel’ that you could easily update from anywhere, you could use Twitter, and push an RSS version of the stream to a web page etc. In this way, you can exploit the tool like aspects of Twitter – a very ‘visitor’ approach.
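To make that concrete, here is a minimal sketch of the sort of thing I mean. It assumes a per-user RSS feed URL (the address below is purely illustrative), and uses the Python feedparser library to turn the stream into an HTML list you could push to a web page:

    import feedparser  # third-party library for parsing RSS/Atom feeds

    # Illustrative address - substitute the real RSS URL for the account
    FEED_URL = "http://twitter.com/statuses/user_timeline/example_user.rss"

    def feed_to_html(url):
        """Fetch an RSS feed and render its entries as an HTML list."""
        feed = feedparser.parse(url)
        items = "\n".join("<li>%s</li>" % entry.title for entry in feed.entries)
        return "<ul>\n%s\n</ul>" % items

    print(feed_to_html(FEED_URL))

Note there is nothing Twitter-specific in there – which rather reinforces the point that it’s the network, not the tool, that makes Twitter distinctive.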

However, I’d also say that if you want to do this kind of thing, there are probably better platforms than Twitter (or at least, equally good platforms) – perhaps the WordPress Microblog plugin that Joss Winn mentioned in his session on WordPress (another very interesting session).

For me, the strength of Twitter in particular is the network I’ve built up there (something reinforced by the conference, as I met some of my Twitter contacts for the first time – such as @HallyMk1, who has posted a great reflection on the conference – although I should declare an interest: he says nice things about me). I can’t see that you can exploit this side of Twitter without accepting the need to become ‘resident’ to some degree. Of course, part of the issue then becomes whether there is any way you can exploit this type of informal environment for formal learning – my instinct is that this would be very difficult – but what you can do is facilitate for the community both informal learning and access to formal learning.

As an aside, one of the things that also came out of the Digital Identities session was that even ‘visitors’ have an online life – sometimes one they aren’t aware of – as friends/family/strangers post pictures of them (or write about them). We all leave traces online, even if we don’t behave as residents.

The final thread I want to pull on here is a phrase that was used and debated (especially, I think, in the F-ALT sessions): “it’s not about the technology”. This was certainly part of the point that David White made – that people’s motivations are much more important than any particular technology they use to achieve their goals. He made the point that people who don’t use Twitter don’t avoid it because they aren’t capable, or don’t understand – they just don’t have the motivation to use it.

Martin Weller has posted on this and I think I agree with him when he says “I guess it depends on where you are coming from” – and I think the reason that the phrase got debated so much is that the audience at ALT-C is coming from many different places.

I’m guilty of liking the ‘shiny shiny’ stuff as much as any other iPhone-owning geek – but the thing that interests me in this context is the likely impact on education (or, more broadly to be honest, society) – I’m not in the position of being immediately concerned about how Twitter or iPhones or whatever else should be used in the classroom.

I do think we need to keep an eye on how technology continues to change, because a very few technologies impact society to the extent that our answers need to change – but the question remains the same regardless: how are we going to (need to) change the way we educate to deal with the demands and requirements of society in the 21st century?

SCHoMS

I’m at a session of SCHoMS (Standing Council of Heads of Media Services) about recording lectures this morning. Aside from some technical problems delaying the start (and some amount of schadenfreude in watching AV salesmen struggle with the technology), there were some interesting presentations – just brief summaries here.

Mediasite

Mediasite is a product from Sonic Foundry (now the only product from Sonic Foundry).

Allows recording, storage, management and delivery of lectures/sessions. Integrates with Crestron panels. Also has an API for VLE or other integration.

Captures all VGA outputs – so video and data. Can put in ‘bookmarks’ to link the video to the data display at that time – so you can easily jump through the presentation to each slide and its linked video (but this requires a manual process during the presentation to sync the two together).

Now concentrating on the content management aspects, especially search and retrieval – currently supports search of any text content, including OCRing any text shown under visualisers and document cameras. They are expecting to launch phonetic searching in the next few weeks, so that any mentions of words in video or audio files are also picked up.

Anycast

Anycast Station is an all-in-one live content producer. It takes feeds from cameras or data. Can control up to 16 cameras with presets. Feeds can be mixed etc. on the fly, or, in conjunction with hard disks attached to the back of the unit, re-mixed in post-production.

Looks like a nice piece of kit, but do we have the expertise and staff resource to use it? However, the presenter is talking about kitting out a studio with Anycast HD cameras, Anycast lighting and the Anycast station for 60k–70k – which sounds quite cheap.

Impact Marcom

Uses Windows technology to deliver cost-effective solutions. In terms of product, they are offering something similar to Mediasite above, but arguing that it can be done more cost-effectively by using Windows Media server and player technology.

Essentially Impact Marcom are not selling a product, but rather offering a consultancy package to achieve the same result. It doesn’t look like they have the same kind of content management to offer as Mediasite, but this could be a lower-budget way of achieving the recording and streaming – they estimate about 10k for equipment if you don’t already have appropriate servers and encoders, then something in the region of 3 days’ training.

Identity Management and Learning 2.0

Just reading Andy’s post of the same title. I think you could argue (if you wanted to play Devil’s Advocate, or are particularly partial to arguing) that Athens has actually been a bad thing, in that it has been too effective, and has held back investment in other (perhaps more institutionally based) authentication/authorisation solutions in the UK. I’ve always wondered why solutions like EZproxy have much higher take-up in the US than in the UK – and Athens is surely the answer?
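For anyone who hasn’t seen it, part of EZproxy’s appeal is how little configuration a resource needs. A single database stanza in its config file – this one entirely invented for illustration – is enough to proxy a resource for off-campus users:

    Title Example Journal
    URL http://www.example-journal.com/
    Domain example-journal.com

Compare that with the effort involved in standing up a full federated access management infrastructure, and the difference in take-up starts to make sense.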

On the Shib front, although it is clearly where we are going with JISC at the moment, I can’t help but feel that we really ought to be seeing demand driven from somewhere other than library resources. For access to library resources in the UK HE sector, Shib seems like overkill – it certainly goes way beyond anything we need to do in terms of controlling access to this type of resource at the moment.

Shibboleth was originally championed by the Grid computing contingent in JISC, but this seems to have disappeared a bit recently – or I’ve just stopped paying attention. For example the ESP-GRID project http://www.jisc.ac.uk/whatwedo/programmes/programme_middleware/project_espgridjuly04.aspx – this was meant to report in March 2006, but the project website seems empty.

Anyway, based on some work I’m currently involved with, looking at e-learning across three institutions, I can see some potential for Shib – at least in the next few years. Here, you can imagine Shib being used to allow access to relevant resources depending on your role in each organisation. I don’t think the ‘personal learning environment’ will be realised fully in the next five years, so there is some time yet for federated authentication/authorisation to be of use.
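To give a flavour of what ‘access depending on your role’ might look like, a Shibboleth identity provider typically releases attributes such as eduPersonScopedAffiliation, and the resource decides what to show based on the value. A rough, illustrative SAML fragment (the institution name is invented) might carry something like:

    <saml:AttributeStatement>
      <saml:Attribute FriendlyName="eduPersonScopedAffiliation"
                      Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.9">
        <saml:AttributeValue>staff@institution-a.ac.uk</saml:AttributeValue>
      </saml:Attribute>
    </saml:AttributeStatement>

The same person might be asserted as student@institution-b.ac.uk by their second institution – which is exactly the cross-organisational role information a three-institution e-learning setup would need.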

Also, there is a question – will ‘personalised’ mean not hosted? Perhaps HE institutions will be the providers of personalised learning portals (i.e. the environment is personalised, and perhaps transportable, but provided by a single institution) which will allow consumption of relevant material from learning objects etc. across a federation – then something like Shibboleth might make perfect sense.

Just to go back to an earlier comment of Andy’s – that he was worried blogs might stifle discussion. I started to leave this posting as a comment on the eFoundations blog, but ended up blogging it instead. The problem with this is that it’s a hell of a lot harder to follow a discussion when it stretches across several blogs than when it is focussed on a single blog.

eLearning 2.0 – Plus ça change?

Tony Karrer asks on http://elearningtech.blogspot.com, “Is eLearning 2.0 Meaningful?”

I’ve been involved in eLearning in one way or another since about 2000, and I’m not sure that eLearning 2.0 is any different from the kind of talk around at that point – ‘let’s have less shovelware’, or ‘guide on the side, rather than sage on the stage’. These calls for learner-centred approaches to teaching seem similar to the concepts behind eLearning 2.0.

At a recent talk (blogged below) Matthew Pittinsky from Blackboard suggested that eLearning 2.0 was about (among other things) social networks. This, I would suggest, is nothing new – surely the concept of a school, college or university is about a community of learning, where social networks form, and you learn from your peers as well as from your teacher or tutor.

Matthew asked where the scholarly equivalent of Facebook or Furl was – but the truth is that academics have long shared information within their communities via papers, books and conferences. In the virtual world, email is now a mainstay of the academic community.

So – what is different about eLearning 2.0? To some extent I believe that eLearning 2.0 is simply a continuation of what went before – we have to continue to press for learner centred teaching, which engages the students in a dialogue with their tutor and their peers. If we can use ‘eLearning 2.0’ to sell this, and get the community engaged in the Web 2.0 tools that can support it, then that’s great.

Discussion board standards

While taking part in ‘VLEs: Beyond The Fringe… And Into The Mainstream’ it occurred to me that the discussion group software could be better from the ‘reading’ point of view.

In fact, had the discussion group had an RSS feed, I could have read the postings in a much more convenient fashion, and kept up with the 4 separate discussions that were going on.

I’d go further than this, though. RSS obviously doesn’t quite serve the needs of bulletin boards (threading, sequencing etc.), but surely it wouldn’t be difficult to define discussion groups as XML output rather than HTML, and to have a simple messaging format for posting as well as reading posts.
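To sketch what I mean, a feed entry could carry its thread position explicitly – something along the lines of the Atom Threading Extensions, where an in-reply-to element points at the parent post. All the identifiers here are invented:

    <entry xmlns="http://www.w3.org/2005/Atom"
           xmlns:thr="http://purl.org/syndication/thread/1.0">
      <id>tag:forum.example.ac.uk,2006:msg-42</id>
      <title>Re: Week 3 discussion</title>
      <thr:in-reply-to ref="tag:forum.example.ac.uk,2006:msg-41"/>
      <content type="text">Reply text goes here...</content>
    </entry>

A news reader that understood the in-reply-to element could rebuild the threading that a plain RSS feed loses.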

It’s just occurred to me that, of course, existing news readers do this – so why are e-learning systems not delivering standard bulletin board formats so I can ‘subscribe’ in my news reader? On the other hand, does discussion board software from outside the e-learning sphere support this? What are the problems?

It suddenly seemed clear to me that if, in the future (as some people suggest), learners become more picky about where they do qualifications, and buy courses online from a variety of sources, they will need some way to ‘aggregate’ their courses in a single environment (rather than the current practice, where each institution runs its own ‘learning environment’, so a learner taking courses from two institutions has to interact with two learning environments).

Since there is also talk of ‘exploding’ the VLE/LMS into its component parts, a discussion board system which is readable by a standard news reader seems like a sensible idea. I’m just wondering how complex it needs to get… Perhaps bulletin board software supporting RSS is a better idea? I’ve gone round in a circle on this – it obviously needs more thought and research.

To LMS or not to LMS

I’ve become increasingly unconvinced about the benefits of LMSs – such as Blackboard and WebCT. Basically these environments seem to put unnecessary restrictions on how material is made available, and how it is accessed, without adding much benefit.

It’s interesting that these pieces of software are called ‘Learning Management Systems’. In the UK, the idea of the ‘Virtual or Managed Learning Environment’ took off, and there is still a tendency to refer to LMSs as VLEs or MLEs. This, for me, is to miss an important distinction. Blackboard, WebCT and the like are correctly called ‘Learning Management Systems’, as they somehow try to ‘manage’ the learning material. I’m not sure this is helpful, certainly not in the context of UK Higher Education.

So, I believe we should strive to create a virtual or managed ‘learning environment’, but we don’t need an LMS to do so. This should also make it easier to integrate library resources into the material, as there are no artificial barriers to doing this, and you aren’t tied into one particular technology.

So what do we need from a VLE? At the moment our needs are pretty simple:

  • Web space for courses (ideally with the ability to restrict viewing privileges to the students on the course – though this may not be necessary in all cases)
  • Discussion group/bulletin board software
  • Email lists for courses
  • Ease of publishing/uploading material

I’d like to be able to provide tools for easy content creation by academics. Weblog software would seem ideal for this purpose – but I’m not sure about supporting this (if we were to install Movable Type or something). Possibly Microsoft’s SharePoint software would be worth investigating. Otherwise, perhaps we just need to treat this as another area where web content management software is needed.