This session is being moderated by Alan Thomson from the THE – not sure how this is going to work, it’s a big panel (all the speakers from today = 10).
Alan comparing research assessment to a high-stakes poker game…
So, Alan is going to take questions, and then put them to the panel for discussion…
Q: (Ruth Jenkins, Nottingham) Are the funding councils going to do a deal to get access to the underlying data from Web of Science and/or Scopus?
A: HEFCE would like to do this, although complicated – more than one version of the data (from the publishers directly, or with intermediaries such as Evidence Ltd or Leiden who have customised versions of the data)
In Australia, there have been some consortium deals, and some national deals for data.
Thomson Reuters are seeing the need for access to some of the underlying data, and are looking at how they can support this (using APIs, via system integration etc.) – want to make the right financial and technical arrangements to make a system work.
Follow-up question about the pros and cons of using Google Scholar to generate citation data – directed at Les Carr from Southampton, as they have done some work on this. As part of his dept’s response to the RAE they looked at citation data and how this could be used to show that their dept was the ‘best’. They believe they need to take information from many different sources – including Google Scholar.
Charles Oppenheim comments that there have been studies on the quality of data from Google Scholar vs other systems, and they generally find that the quality of data from Google Scholar is not good enough for it to give reliable information.
Q: What is the scope of the HEFCE pilot (what disciplines? all staff?)
A: HEFCE want to start off with the widest possible data set – so envisage pilot will be as wide a scope as possible. For the pilot they want institutions to tell them about all staff. However, if not enough institutions can manage this, they will look at different approaches.
Q: (Rosarie Coughlan, NUI Galway) What work has been done in bibliometrics in social sciences and humanities?
A: Anthony from Leiden says you need to look at the detail – for example, psychology can really be regarded as a ‘science’ from the point of view of bibliometrics, but sociology cannot. He suggests that as long as the coverage is above zero you should try bibliometrics, as they have found that there can be some.
Jonathan from Evidence Ltd is less sanguine about this – the key thing for him is that researchers need to be involved in defining the measures that are relevant to them. Evidence Ltd. did some work in this area – they asked those assessing bids etc. how they assessed whether a piece of research was ‘excellent’ or not. They found the factors weren’t that different from those in the natural sciences. However, Evidence Ltd. found that this did not mean the same measures were applicable.
Jonathan is concerned that the DIUS announcement starts ‘metrics creep’. Although Jonathan believes that at a high level there will be good correlations, you need a great deal of confidence in the system. If you undermine confidence in the judgements, then you get serious problems.
Linda from Australia: they found that when they introduced book and book-chapter citations, historians and political scientists suddenly had a picture that they recognised, so they had more confidence.
Q: (Me!) What behavioural effects will the introduction of bibliometrics have?
A: Anthony from Leiden says, there will be behavioural effects, but we don’t know what – this needs careful assessment. There are likely to be different behavioural effects in short term and long term behaviours.
Anthony believes that changes will only be significant enough to impact on the validity of bibliometrics in the longer term, and believes that researchers won’t let it go that far as it impacts so much on the fundamentals of scholarly communication.
A comment from Hywel from Leicester (backed up by King’s) that more junior researchers leave their names off early papers, so that they can cite them later (presumably to avoid self-citation? I didn’t understand this). Anthony from Leiden defends the adjustment for self-citation again, but maintains you should keep self-citation as a separate factor, so you can see when it is low or high and needs further investigation. However, again, the issue here is not that it makes a difference, but that people are already changing their behaviour because they think it will have an effect.
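Anthony’s idea of reporting self-citation as a separate, visible factor (rather than silently correcting for it) could look roughly like this – a hypothetical sketch, with invented data structures, counting a citation as a self-citation if the citing paper shares at least one author with the cited paper:

```python
def self_citation_rate(paper_authors, citing_author_sets):
    """Fraction of citations sharing at least one author with the cited paper.

    paper_authors: set of author names on the cited paper.
    citing_author_sets: list of author-name sets, one per citing paper.
    """
    if not citing_author_sets:
        return 0.0
    self_cites = sum(
        1 for authors in citing_author_sets
        if authors & paper_authors  # non-empty intersection = self-citation
    )
    return self_cites / len(citing_author_sets)

# Invented example: 2 of 4 citing papers share an author with the cited paper.
rate = self_citation_rate(
    {"Smith", "Jones"},
    [{"Smith", "Lee"}, {"Patel"}, {"Jones"}, {"Wu", "Chen"}],
)
print(rate)  # -> 0.5
```

Keeping this rate as a separate, reported number (instead of just subtracting it) is what lets an assessor see when it is unusually high or low and worth a closer look.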
I added to the question, mentioning the research indicating a possible link between Open Access availability and increased citation. James from Thomson says that his personal research (not the Thomson view) shows that OA leads to an accelerated citation rate, but not an increased one over the long term – but research on this is at an early stage, and we are only just seeing high-quality research in this area.
Q: Question about why there has been no mention of the ‘h factor’ today?
A: Anthony makes the point that there is no discipline-specific adjustment in the h factor, and there is a strong correlation between career age and h factor, so it is not a good measure to use.
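For reference, the ‘h factor’ (h-index) is simple to compute: a researcher has index h if h of their papers have at least h citations each. A minimal sketch (citation counts invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Invented citation counts for five papers:
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

The career-age correlation Anthony mentions falls straight out of the definition: h can never exceed the number of papers published, so it can only grow over a career and never shrink.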
Q: (THE) Will Arts, Humanities and Social Sciences be included in pilot?
A: HEFCE want to include all disciplines where there is moderate citation coverage. They will be asking institutions if they would like to include those areas with less coverage.
Q: (Research Fortnight) The USA rejected bringing bibliometrics into assessment after resistance from the scientific community. Is there still the possibility that the REF could not use bibliometrics?
A: Graeme from HEFCE slightly avoiding the question, but feels that the sector accepts that a system with a metrics based component can work (I think that ‘metrics based’ gives quite a lot of wriggle room here)
Q: How is negative citation going to work? How do you ‘attack’ a piece of work without adding to its citation rate?
A: Linda says that the effects of these edge cases disappear in large amounts of data. Also, a panel of experts is needed to make sure that any exceptions are caught.
Jonathan noting that citation analysis has been used to assess ‘Impact’ rather than ‘Quality’ for a reason. Impact is an apposite word.
Another comment that if we are making a selection of what work is being put forward for the REF, then universities are very unlikely to put forward papers that are being cited for all the wrong reasons.
Q: I didn’t quite get this one, but it was about a ‘centralised database’ vs ‘local’ (department-based) databases.
A: You have to persuade people via the quality of the central offering.
Follow-on comment about costs – it’s OK for research-geared institutions like King’s (and of course, for that matter, my own, Imperial) – but what about institutions that have not focused on research or don’t have the same level of resources?
A comment from the floor – a university cannot afford not to have a centralised system as part of their basic infrastructure.
Q: A question about inter-disciplinary areas of work – how are these affected?
A: Linda says that essentially all subjects turn out to be inter-disciplinary in terms of where they publish. You need to analyse this (looking at individual articles and where they were published) and then aggregate at the unit level – and it’s not that difficult to do.
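Linda’s article-level-then-aggregate approach can be sketched roughly as follows: normalise each article’s citation count against an average for the field it was actually published in, then average those ratios over the unit. This is only an illustrative sketch – the field baseline figures below are invented:

```python
# Invented world-average citation counts per field, for illustration only.
FIELD_BASELINE = {"sociology": 2.0, "psychology": 8.0, "history": 1.0}

def unit_score(articles):
    """Mean of per-article citation counts divided by their field baselines.

    articles: list of (field_name, citation_count) pairs.
    """
    ratios = [cites / FIELD_BASELINE[field] for field, cites in articles]
    return sum(ratios) / len(ratios)

# An inter-disciplinary unit publishing across three fields:
articles = [("sociology", 4), ("psychology", 8), ("history", 3)]
print(unit_score(articles))  # -> 2.0 (twice the field-expected rate)
```

The point of normalising per article, rather than per department, is that a unit whose papers land in several fields is compared against the right baseline for each paper – which is why inter-disciplinarity is “not that difficult” to handle at this level.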