Bouncing, Chunking and Squirrelling

I’ve made some brief notes on this talk by Lynn Silipigni Connaway (OCLC) about the behaviours of digital information seekers. I have to admit I found lots to disagree with in this presentation – but food for thought as well!

[Update: See comment from Dr Connaway below with some further information about the work she was summarising in this presentation – including the very important point that the themes she covered were common themes from the 12 studies that were reviewed rather than her opinions – sorry if this isn’t clear from the notes]


Lynn carried out a JISC funded analysis of 12 user behaviour studies conducted in the US and UK, all published within the last 5 years. 5 of the studies came out of OCLC, and others from JISC, and the RIN User Behaviour Project.  A brief summary of this is available at http://www.jisc.ac.uk/publications/reports/2010/digitalinformationseekers.aspx

Essentially users want access to digital content. Convenience dictates choice between physical and virtual library. Even in situations where the difference is very small – e.g. walking over to a reference desk, or sitting at a desk in the library and asking the question via a virtual service – the users will do the latter because it is more convenient.

Found users spent very little time using the content (I think this means while they are in ‘seeking’ mode) – they ‘squirrel’ downloads – grabbing quick chunks of information. They tend to visit a resource for just a few minutes, and tend to use very basic search. There was no evidence that more advanced searching was needed.

Tended to use snippets from e-books, viewing only a few pages, using Google-like interfaces. Used ‘Power Browsing’ rather than doing more finessed searches. Users really valued ‘human resources’ – liked face to face (e.g. with librarians) [not sure how this works alongside previous statement about convenience?]

Users tended to associate libraries with collections of books, but on the other hand felt that the more digital content the better.

Tended to find ‘faculty’ praise physical collection – and when asked what they wanted, they said ‘wine & beer & easy chairs’!

Electronic databases not perceived as library sources – although there is an awareness that the University pays for access to content.

Users frustrated with locating and accessing full-text copies.

Found information literacy skills were lacking – they have not kept pace with digital literacy. Researchers are generally self-taught and have (often misplaced) confidence in their skills. In general, people stick with what is familiar. Found that doctoral students take cues from their professors/supervisors – they will do what they see their ‘seniors’ doing – and this is probably what will get passed on in turn.

Found that the more familiar people were with a subject area, the broader they will be in their searching – they don’t want to miss anything, and they trust their judgement over those who might index the resources. Those less familiar with an area, will be more specific with their searching.

Found people often turned to general search engines to get overview of an area.

Users:

  • Value database and other online sources
  • Do not understand what resources are available in libraries
  • Cannot distinguish between databases held by a library and other online sources
  • Find library OPACs difficult to use

Search behaviours vary by discipline

Desire a seamless process from Discovery to Delivery. Sciences are most satisfied; Social Science and Arts & Humanities have serious gaps – it is particularly difficult to find:

  • Foreign Language materials
  • Multi-author materials
  • Journal backfiles

Inadequately catalogued resources result in underuse

Library ownership of sources important – “where can I get this?”

Differences exist between the catalogue data quality priorities of users and librarians.

‘One size fits no one’

Conclusions

  • Simpler searches & power browsing
  • Squirrelling of downloads
  • Natural language
  • Convenience very important
  • Human resource valued
  • D2D of full-text digital content desired
  • Transparency of ranking results
  • Evaluative information included in catalog
  • More robust metadata

Implications for librarians:

  • Serve different constituencies
  • Adapt to changing user behaviours – look at 12 year-olds now
  • Offer service in multiple formats
  • Provide seamless access to digital resources
  • Better branding/marketing of our services …

Implications for library systems:

  • Build on and integrate search engine features
  • Provide search help at time of need – e.g. Chat and IM help during search
  • Adopt user-centered development approach

What does this mean for libraries?

  • Keep talking
  • Keep moving – and we need to move faster
  • Keep the gates open – make it easier to get to stuff
  • Keep it simpler

4 thoughts on “Bouncing, Chunking and Squirrelling”

  1. Interesting. Any idea what is meant by “Inadequately catalogued resources”? Presumably that means missing/wrong basic information rather than inconsistent rule interpretations, but there’s also a lot of grey areas. E.g., are subjects useful?

    “Differences exist between the catalogue data quality priorities of users and librarians.” A similar question too of what these priorities mean. I would love to have thought that this was researched thoroughly before RDA was written. I suppose FRBR did and looked at the big picture to some extent.

  2. I think ‘inadequately catalogued’ meant that the way we catalogue stuff didn’t fit with the way that users searched for stuff – one quote given from a student was “When I search the library catalogue, I first think of a word that I’d never use, and then base my search around that – for example if I wanted to search for stuff on cars, I’d think what is a word for ‘car’ that I’d never use – I know ‘automobile’ – and then I’ll find stuff”

  3. Thanks Owen, an excellent recap of my presentation, but I would like to emphasize that these are not my opinions but the common themes identified in the 12 reports that were analyzed. I also do not share the opinions indicated in all of those themes – they were findings of others in addition to two of our studies – so I’d welcome your own views.

    My OCLC colleague, Dr. Timothy Dickey, and I both read and analyzed the studies using the content analysis methodology, counting the occurrences of concepts and themes. We only reported the common and most prevalent themes, as well as the contradictory themes. As stated in my presentation, some of the 12 studies did not report sample sizes nor were all of the demographics of the subjects the same in the 12 studies. Context and situation are very important when studying user behaviours. However, many of the studies ignored or did not address context or situation.

    In response to some of the other comments/questions:
    “Inadequately catalogued” meant several things. Some users would like more metadata, such as table of contents with titles and authors for collections, more natural language subject headings, etc. “Transparent ranking” means that the users want to know how the library catalog is ranking their results. They believe that Google ranking is more transparent/clearer to understand than how catalogs rank. As mentioned in my presentation, this is interesting, since many catalogs allow users to determine how to rank retrievals.

  4. I’ve not had the opportunity to do more than glance through the report for JISC by Lynn Silipigni Connaway and Timothy J Dickey of OCLC Research referenced from the link you give – http://www.jisc.ac.uk/publications/reports/2010/digitalinformationseekers.aspx – but a few comments on meta-analysis of user experience studies.

    First, there is always a lot of value to be had by standing back and trying to summarise what empirical studies can tell us beyond our own experience. As I mentioned when we met yesterday, I can recall some meta-analysis of classroom size and pupil achievement studies [many and varied and some just plain poor] and in my student days the classic was of smoking and lung cancer studies [never true experiments but with increasing clinical and methodological rigour].

    Inevitably these studies have to do at least two things, and then a third. First, what is the substantive summary message in these studies – do they say the same thing or not? Second, how confident can we be about that/those summary message(s)?

    Following on is a third matter – just how well or badly were those studies designed and commissioned? A delicate matter, but some brutality is sometimes needed. This is especially the case if those studies do not allow a third party to judge the inevitable limitations of design and commission.

    Perhaps a new blog post – here or elsewhere – should take up the task of amplifying these three things, especially the third, as too many user experience studies lack the methodological rigour we expect in other scholarly literature. Perhaps we should prompt the authors to produce a short sharp methodological piece – I’m sure an RSS (as in Royal Statistical Society) group could be persuaded to host a talk – that was less discreet than “Evidence produced by multiple studies is limited by the common problem that some studies have small sample sizes and purposive samples”. There also needs to be some independent evaluation of ‘log analyses’ – that is, the large-scale analysis of Web logs and logs of usage from publishers and other service providers.

    I’ll end this meandering by amplifying the authors’ conclusion that “Many of the findings presented in this meta-analysis could be used as hypotheses for subsequent testing and generalization; therefore, the next logical step is to further explore and quantify these findings by conducting large, random-sample online and interview surveys.” This is another way of saying that we have had lots and lots of exploratory investigations, but we now need to be a bit more clinical and have a lot more methodological rigour – that includes the essential requirement that a third party (statistician) could judge the adequacy of design and commission.

    just my tuppence …
