Discovery Summit: Paul Walk keynote

Paul talking about Open and Closed – not licensing or access, but about ‘open world assumption’ vs ‘closed world assumption’

Paul describes characteristics of ‘open world’:

  • Incomplete information
  • Schema-less data
  • Web technologies – http; html5; rdf
  • Platform independence; scales well; cross-context discovery potential

Closed world characteristics:

  • Complete information
  • Schema-based data; Records
  • Web tech – http delivering to native apps
  • Performance; contextualised discovery; quality; curation

Need to decide when to apply each of these approaches – strengths and weaknesses

Web still best available foundation of what we are doing, but still need to manage resources; quality etc.

Quote from Leslie Lamport “a distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable”

As a developer why should I trust your API – that it will work, that it will continue to work – if you don’t use it yourself as the service owner? See Paul’s blog post on this.

APIs are not best thought of as machine-to-machine interfaces. APIs are interfaces for developers! Talk to developers who are likely to use your API. Developer is to API as ‘user’ is to UI.

Yesterday Paul hosted a meeting for developers to get their point of view [which I was fortunate enough to attend]. Some things that came out of this:

  • please don’t build elaborate APIs which do not allow us to see all of the data or its extent 
  • Offering an API which delivers incomplete data is usually self-defeating – that is, don’t hold data back because you are worried about its quality

Introducing this afternoon’s sessions:

Emerging technologies – Graph based data (see work by Facebook, Google, BBC etc.)

Reasons for aggregation – to avoid systems/network latency; showcase; ‘web scale’ concentration; …

Data quality issues – concern about data quality can prevent release of data (which consumers don’t like); but poor data quality erodes trust and can affect reputation; reconciling these things is a major challenge

 

Discovery Summit Keynote: Maura Marx

Maura Marx is from the Digital Public Library of America.

Interesting comment from Maura in this session on ‘Are we failing users?’ – she said the DPLA “think about Developers as users a lot” – something that also came up at the Summit pre-event focussed on developers yesterday. I think this is a really important thing for libraries/archives/museums to take on board.

Maura giving some background to the creation of the DPLA – looking to create “an open, distributed network of comprehensive online resources”. DPLA have been working on how to make something useful and sustainable. Workstreams for technical, legal, economic, governance, users. Spent time recruiting people from cultural heritage as well as education, law, government, technology etc.

DPLA turned down money to digitise a load of content – this is not what the DPLA is about – the aim is to change the way libraries work, not to create a pile of digital stuff.

Lots of meetings and workshops – very open process – which is challenging. DPLA will be ‘launched’ April 18-19  2013. DPLA is about a platform not a ‘portal’ or destination. Encouraging people to build on DPLA platform – getting beyond perception of DPLA as ‘portal’ is a struggle.

DPLA work on metadata built on Europeana work. DPLA statements on metadata:

  • The DPLA asserts that metadata are not copyrightable, and that applying a license to them is not necessary.
  • To the extent that the law determines a copyright interest exists, a CC0 license applies.
  • The DPLA asserts no new rights over metadata at the DPLA level

Digital Hubs pilot project – “taking first steps to bring together existing US digital library infrastructure into a sustainable national digital library system” – focusing on provision in 7 regions

DPLA has strong emphasis on building Community.

Discovery Summit 2013 – a foreword

I’m at the British Library for the next couple of days for the JISC/BL Discovery Summit. This is an event that brings together work from the last 4 years, which started with the snappily named “Resource Discovery Task Force” – a group asked to consider how ‘resource discovery’ (finding resources in libraries, archives, museums etc.) could be improved for UK HE researchers and others.

I’m facilitating various sessions at the event, but I’ll try to blog some sessions as well. I thought I would start by publishing a presentation I did for the first meeting of the Resource Discovery Task Force back in November 2008. I was asked to consider “What if we were starting from scratch”. This was written 5 years ago, and I’ve left the text as it was – so changes in direction and thought in the last five years are not part of this presentation – but I think I still believe in the fundamentals of what I said here, and I think the last 5 years’ work on ‘Discovery’ has borne this out.

What if we were starting from scratch

What if we got rid of the services, standards, methods and mechanisms that currently make up our resource discovery infrastructure? In 10 minutes!

How far back do we go? To the very first ‘library catalogues’ – written in cuneiform? Or possibly just as far as the Library at Alexandria (3rd century BC). In a comment on my blog Joy Palmer (MIMAS) said “how far do we get to go back for a clean slate here? couple of hundred years? before the emergence of bureacratic cataloguing and classification practices? probably not;-)”

Well, I’m not going to go back that far. In fact, I’m going to start by going back just as far as 1931, when Shiyali Ramamrita Ranganathan published his ‘5 Laws of Library Science’.

In a recent blog post Lorcan Dempsey reflects that although there have been numerous attempts to ‘update’ the 5 laws, they are not particularly convincing, and that this is perhaps because the laws as they stand continue to capture the fundamental challenges of what we do.

All 5 laws are worth exploring, and several of them touch on the intimate relation between resource discovery and access to those resources – an aspect of the infrastructure I believe is vital, but one I’m not going to consider in this presentation. Focusing solely on the ‘Resource Discovery Infrastructure’, I think that the last two laws are particularly relevant.

The 4th Law – Save the Time of the User.

Ranganathan was concerned with shelf arrangements, recent acquisitions shelves, signposting and shelf labelling. In one example he notes that the activity “may look like a great expense when considered from the isolated library-point-of-view, … can be seen to be really economical from the larger community-point-of-view”. He also related saving the time of the user to saving the time of the staff – what is efficient for the user is efficient for the library.

One of the major time sinks for a user in our current Resource Discovery environment is the number of places you have to look for stuff.

The 5th Law – A Library is a growing organism – or perhaps, because in this context there is more than one library, libraries are growing organisms. This is the law that Lorcan Dempsey specifically commented on in his blog post, considering how we need to think about effective service in a networked environment. Ranganathan commented “an organisation which may be suitable for a small library may completely fail when the library grows big”. Ranganathan asked of the library catalogue “let us examine what form the Fifth Law would recommend for it”. He suggests that libraries pioneered the use of loose-leaf binding to update catalogues, and then the use of cards – with one entry per card – describing the card index as “another epoch making contribution of the library profession to the business world in general”.

I believe that these principles outlined by Ranganathan can inform how we should design a Resource Discovery Infrastructure that serves resource explorers. Before I come back to the question of ‘starting from scratch’, I think it may be useful to reflect briefly on how we have got to where we are today.

I want to briefly go back further, to the end of the 19th Century, when Melvil Dewey was involved in the introduction of a standardised form of catalogue card, and catalogue card cabinet (the fact that Dewey also set up the company that sold the cards …

… and the cabinets …

… and even special machines to type the cards may have had an influence in this!)

The Library of Congress started to publish its catalogue records on these standard-sized cards, and by this method could distribute them to other libraries.

This was so successful that Charles Cutter, who had produced a seminal work on building a printed dictionary catalogue, quickly had to revise it to take account of the card catalogue. By the time the 4th edition of Cutter’s work was published, he prefaced it by saying “any new library would be very foolish not to make its catalog mainly of them [LoC cards]”

This is, I believe, the start of our current Resource Discovery infrastructure – and we have been stuck in this mould ever since. Although we now have computerised catalogues, we have never got away from the idea that these records are physical objects – discrete things that are copied for local use. It is for this reason that, when we try to follow Ranganathan’s 4th law to ‘save the time of the reader’, we try to put these copies back together – and encounter problems of inconsistency and duplication. So what do we need to do differently?

One tempting scenario is to think that we should stop ‘copying the cards’, and have a single ‘catalogue’ – it wouldn’t be trying to bring together disparate records – it would be the only copy. When I posted the question on my blog “What if we started from scratch”, one respondent said: “would there be a ‘master’ UK catalogue of books … The trad library catalogue would just be a ‘view’ of this”.

However, I do not believe this is realistic – we don’t have this kind of control over resources and resource discovery systems – we would always have things springing up outside the ‘big store’. Essentially this approach ignores Ranganathan’s 5th law – the library is a growing organism.

To move beyond the card catalogue, we need to look to a more recent figure – Tim Berners-Lee. I believe that if Ranganathan were to look at the problem now, he would recognise that HTML and the web represented the next step that the Resource Discovery Infrastructure needed to take to go beyond the card catalogue – but for some reason librarians and archivists have not been at the forefront of adopting this as an approach to Resource Discovery, and have instead treated it merely as a new medium with which they need to integrate their existing infrastructure.

When Tim Berners-Lee first described the requirement for a ‘global hypertext system’, he said “Information systems start small and grow. They also start isolated and then merge. A new system must allow existing systems to be linked together without requiring any central control or coordination.” – I think the first part of this statement is a restating of Ranganathan’s 5th law, for a networked age.

I would contend that the Web is the most effective system for disseminating information yet conceived (we might consider the brain an even more effective networked information system?).

What does all this tell us about starting from scratch?

We need to build a linked environment.

The way I think of it is each catalogue record should be a hypertext document. This can both link, and be linked to. When a library adds an item to its catalogue, it can do this by creating a new hypertext catalogue record, OR by linking to an existing one.

As many libraries do this, some records receive more inward links. Not all inward links will necessarily go to the same record – perhaps two or more key ‘nodes’ will appear in the network for a single bibliographic item – but this doesn’t matter; it is ultimately self-correcting, and self-correctable – after all, you only need one connection between the two parts of the network for it to be clear that all the connected items are the same.

It also opens up the possibility of explicitly recording that ‘these two things are the same’

I’d stress this doesn’t have to be the WWW – just a linked environment.

You would need to start thinking not just in terms of ‘catalogues’ – what your library has – but also in terms of ‘indexes’ – what do you want to index? To index your library, you would crawl all the records in your library, AND all the ones you link to – you could take this further, and grab extra information from others if you wanted – but the records that are most linked to would be the more ‘authoritative’, allowing records from small, but specialist, suppliers to be more authoritative than those from the traditional ‘authorities’.

A Union catalogue could be built by doing a wider crawl – and the ultimate Union by crawling everything. Yes, there would be duplication, but this would be on a smaller scale than currently – and the more links you add, the less duplication there is (more data)

Although I’ve approached this in what is perhaps a Library-centric way, a linked environment allows links to anywhere – so you can link to (and index) resources outside your domain, building an index to serve your users. Standards might be needed – but in some ways links would provide aspects of this, and crosswalks could be built from practice rather than theory. Diversity would be embraced and used.

The idea of a linked resource discovery infrastructure requires us to change the way we think about our ‘catalogues’ (whether that be library, archives, journal indexes or other). We’ve treated these as isolated instances – if you have one sheep, you have a pet – you can focus all your attention on it and care for it, but if you put it out to fend for itself, it is going to struggle.

We need to think more like shepherds – treating our resource descriptions as a flock – putting it out to graze, and gathering it back in (via indexing) when we need to.

I’m convinced that Ranganathan would see a linked environment as the next step – the next ‘epoch making contribution’ – he may not have been able to anticipate the information revolution that computers would bring, but he said “What further stages of evolution are in store for this Growing Organism  – the library – we can only wait and see… who knows that a day may not come … when the dissemination of knowledge, which is the vital function of libraries, will be realised by libraries even by means other than those of the printed book”. There is no doubt in my mind that he would have embraced the opportunity offered by a linked information environment  – and this is what we need to do now.

Introduction to APIs

[UPDATE October 2014: Following changes to the BNB platform and supported APIs, this tutorial no longer works as described below. An updated version of this exercise is now available at http://www.meanboyfriend.com/overdue_ideas/2014/10/using-an-api-hands-on-exercise/]

On Wednesday this week (6th Feb 2013) I spent a day at the British Library in London talking to curators about data and the web. The workshop was a full day and we covered a lot of ground  – from HTML to simple mashups to Linked Data. One of the things I wanted to do during the day was to get people to use an API – to understand what challenges this presented, what sort of questions you might have as a developer using that API, what sort of things you should think about when creating an API, and hopefully to start to get a feel for what opportunities are created by providing an API to resources.

Since we had a busy day, I only had an hour to get people working with an API for the first time, so I wanted to do something:

  • Simple
  • Relevant to the audience (Curators at the British Library)
  • Requiring no local installation of software
  • Requiring no existing knowledge of programming etc.
  • That produced a tangible outcome in an hour

The result was the two exercises below. We got through exercise 1 in the hour (some people may have gone further but as far as I know everyone completed exercise 1) and so I don’t know how well exercise 2 works – but I’d be very interested in feedback if anyone gives it a go. The exercises use the British National Bibliography as the data source:

Exercise 1: Using an API for the first time

Introduction

In this exercise you are going to use a Google Spreadsheet to retrieve records from an API to BNB, and display the results.

The API you are going to use simply allows you to submit some search terms and get a list of results in a format called RSS. You are going to use a Spreadsheet to submit a search to the API, and display the results.

Understanding the API

The API you are going to use is an interface to the BNB. Before you can start working with the API, you need to understand how it works. To do this, first go to:

http://bnb.data.bl.uk/search

You will see a search form:

BNB search form

Enter a search term in the first box (‘Search store:’) and press ‘Search’. What you see next will depend on your browser. If you are using Google Chrome you will probably see something like this:

BNB RSS

If you are using Internet Explorer or Firefox you will see something more like:

BNB RSS in Firefox

At the moment this doesn’t matter – right now we are interested in the URL, not the display.

Look carefully at the URL – see how the search terms you typed in are included in the URL. The example I used is:

http://bnb.data.bl.uk/items?query=the+social+life+of+information&max=10&offset=0&sort=&xsl=&content-type=

The first part of the URL is the address of the API. Everything after the ‘?’ makes up the ‘parameters’, which form the input to the API. There are six parameters listed, and each one consists of the parameter name, followed by an ‘=’ sign, then a value.

The URL and parameters break down like this:

  • http://bnb.data.bl.uk/items – the address of the API
  • query=the+social+life+of+information – the ‘query’ parameter, containing the search terms submitted
  • max=10 – the ‘max’ parameter, set to ‘10’. This means the API will return a maximum of 10 records. You can experiment with changing this to get more or fewer results at a time.
  • offset=0 – the ‘offset’ parameter, which tells the API which record should be the first one included in the results. It is set to ‘0’, meaning the API will start with the very first record.
  • sort=&xsl=&content-type= – other parameters. You can see that these reflect the other parts of the form at http://bnb.data.bl.uk/search. The parameters are:

  • sort
  • xsl
  • content-type

None of these are set, and they will not be used in this exercise.
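As an aside (purely an illustration, using the same example search as above), ‘max’ and ‘offset’ together give you paging through results: keeping max=10 and changing the offset to 10 asks for the second ‘page’ of ten results, starting from the eleventh record:

http://bnb.data.bl.uk/items?query=the+social+life+of+information&max=10&offset=10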

 

Going Further

If you want to find out more about the API being used here, including all of the search parameters, documentation is available at:

http://docs.api.talis.com/platform-api/full-text-searching

The output of the API is displayed in the browser – this is an RSS feed – it would plug into any standard RSS reader like Google Reader (http://reader.google.com). The BBC have a brief explanation of what an RSS feed is (follow the link). It is also valid XML. The reason browsers display it differently (as noted above) is that some browsers recognise it as an RSS feed and try to display it nicely, while others don’t.

If you are using a browser that displays the ‘nice’ version, you can right-click on the page and use the ‘View Source’ option to see the XML that is underneath this.

While the XML is not the nicest thing to look at, it should be possible to find lines that look something like:

<item rdf:about="http://bnb.data.bl.uk/id/resource/011380365">
<title>The social life of information / J. S. Brown</title>
<link>http://bnb.data.bl.uk/id/resource/011380365</link>

Each result the API returns is called an ‘item’. Each ‘item’ at minimum will have a ‘title’ and a ‘link’. In this case the link is to more information about the item.

The key things you need to know to work with this API are:

  • The address of the API
  • The parameters that the API accepts as input
  • The format the API provides as output

Now you’ve got this information, you are ready to start using the API.

Using the API

To use the API, you are going to use a Google Spreadsheet. Go to http://drive.google.com and log in to your Google account, then create a new Google Spreadsheet.

The first thing to do is build the API call (the query you are going to submit to the API).

First some labels:

  • In cell A1 enter the text ‘API Address’
  • In cell A2 enter the text ‘Search terms’
  • In cell A3 enter the text ‘Maximum results’
  • In cell A4 enter the text ‘Offset’
  • In cell A5 enter ‘Search URL’
  • In cell A6 enter ‘Results’

Now, based on the information we were able to obtain by understanding the API, we can fill values into column B as follows:

  • In cell B1 enter the address of the API (see the table above if you’ve forgotten what this is)
  • In cell B2 enter a simple, one word search
  • In cell B3 enter the maximum number of results you want to get (10 is a good starting point)
  • In cell B4 enter ‘0’ (zero) to display from the first result onwards

The first four rows of the spreadsheet should look something like (with your own keyword in B2):

Spreadsheet 1
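In case the screenshot does not display for you, here is a plain-text sketch of the same thing (using a hypothetical one-word search, ‘knitting’):

A1: API Address        B1: http://bnb.data.bl.uk/items
A2: Search terms       B2: knitting
A3: Maximum results    B3: 10
A4: Offset             B4: 0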

You now have all the parameters you need to build the API call. To do this, you want to create a URL very similar to the one you saw when you explored the API above. You can do this using a handy spreadsheet function called ‘concatenate’, which allows you to combine the contents of a number of spreadsheet cells with other text.

In Cell B5 type the following formula:

=concatenate(B1,"?query=",B2,"&max=",B3,"&offset=",B4)

This joins the contents of cells B1, B2, B3 and B4 with the text included in inverted commas in the formula. N.B. Depending on the locale settings in Google Docs, it is sometimes necessary to use semicolons in place of the commas in the formula above.
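For example (with hypothetical cell values), if B1 contains http://bnb.data.bl.uk/items, B2 contains knitting, B3 contains 10 and B4 contains 0, the formula in B5 produces:

http://bnb.data.bl.uk/items?query=knitting&max=10&offset=0

which has exactly the same shape as the URL you examined in ‘Understanding the API’ above.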

Once you have entered this formula and pressed enter your spreadsheet should look something like:

Spreadsheet 2

The final step is to send this query, and retrieve and display the results. This is where the fact that the API returns results as an RSS feed comes in extremely useful. Google Spreadsheets has a special function for retrieving and displaying RSS feeds.

To use this, in Cell B6 type the following formula:

=importFeed(B5)

Because Google Spreadsheets knows what an RSS feed is, and understands that it will contain one or more ‘items’ with a ‘title’ and a ‘link’, it will do the rest for us. Hit enter, and see the results.
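Two asides here. First, with the plain =importFeed(B5) formula the results spill into the cells below and to the right of B6 – the item URLs end up in column C, which is what Exercise 2 builds on. Second, importFeed accepts optional arguments (this is standard Google Sheets behaviour rather than anything specific to the BNB) – IMPORTFEED(url, [query], [headers], [num_items]) – so a formula such as the following should display just the titles of the first five items:

=importFeed(B5, "items title", FALSE, 5)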

Congratulations! You have built an API query, and displayed the results.

You have:

  • Explored an API for the BNB
  • Seen how you can ‘call’ the API by adding some parameters to a URL
  • Understood how the API returns results in RSS format
  • Used this knowledge to build a Google Spreadsheet which searches BNB and displays the results

Going Further

  • Try varying the values in Cells B3 and B4. Can you see how you could use these together to make a ‘page’ of results?
  • Try changing the search term in Cell B2. What happens if you use multiple words? Do you know why?

HINT: Look at the URL created in Cell B5 – can you see what’s wrong? Try doing a multi-word search using the search form at http://bnb.data.bl.uk/search and look at the URL produced – what’s the difference?

Can you work out how to avoid the multi-word search problem? Have a look at the ‘substitute’ function documented on this page https://support.google.com/drive/bin/static.py?hl=en&topic=25273&page=table.cs
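If you want to check your working, here is one possible solution (assuming the same cell layout as above): spaces in the search terms need to become ‘+’ signs, just as they do in the URL produced by the search form, and ‘substitute’ can make that change inside the formula from Cell B5:

=concatenate(B1,"?query=",substitute(B2," ","+"),"&max=",B3,"&offset=",B4)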

If you want to know more about the ‘importFeed’ function, have a look at the documentation at http://support.google.com/drive/bin/answer.py?hl=en&answer=155181

Exercise 2: More API – the full record

Introduction

In Exercise 1, you explored a search API for the BNB, and displayed the results. However, this minimal information (a result title and a URL) may not tell you a lot about the resource. In this exercise you will see how to retrieve a ‘full record’ and display some of that information.

Exploring the full record data

The ‘full record’ display is at the end of the URLs retrieved from the BNB in Exercise 1 above. Click on one of these URLs (or copy/paste into your browser). If possible pick a URL that looks like it is a bibliographic record describing a book, rather than a subject heading or name authority.

An example URL is http://bnb.data.bl.uk/id/resource/010712074

Following this URL will show a page similar to this:

BNB full record html

This screen displays the information about this item which is available via the BNB API as an HTML page. Note that the URL of the page in the browser address bar is different to the one you clicked on. In the example given here the original URL was:

http://bnb.data.bl.uk/id/resource/010712074

while the address in the browser bar is:

http://bnb.data.bl.uk/doc/resource/010712074

You will be able to take advantage of the equivalence of these two URLs later in this exercise.

While the HTML display works well for humans, it is not always easy to automatically extract data from HTML. In this case the same information is available in a number of different formats, listed at the top right-hand side of the display. The options are:

  • rdf
  • ttl
  • json
  • xml
  • html

The default view in a browser is the ‘html’ version. Offering access to the data in a variety of formats gives choice to anyone working with the API. Both ‘json’ and ‘xml’ are widely used by developers, with ‘json’ often being praised for its simplicity. However, the choice of format can depend on experience, the required outcome, and external constraints such as the programming language or tool being used.
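For example, for the record used above the alternative representations sit at URLs like these (the ‘.xml’ form is shown in the next section; the others are listed here on the assumption that they follow the same pattern):

http://bnb.data.bl.uk/doc/resource/010712074.rdf
http://bnb.data.bl.uk/doc/resource/010712074.ttl
http://bnb.data.bl.uk/doc/resource/010712074.json
http://bnb.data.bl.uk/doc/resource/010712074.xml
http://bnb.data.bl.uk/doc/resource/010712074 (the default html view)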

Google Spreadsheet has some built in functions for reading XML, so for this exercise the XML format is the easiest one to use.

XML for BNB items

To see what the XML version of the data looks like, click on the ‘xml’ link at the top right. Note the URL looks like:

http://bnb.data.bl.uk/doc/resource/010712074.xml

This is the same as the URL we saw for the HTML version above, but with the addition of ‘.xml’

XML is a way of structuring data in a hierarchical way – one way of thinking about it is as a series of folders, each of which can contain further folders. In XML terminology, these are ‘elements’ and each element can contain a value, or further elements (not both). If you look at an XML file, the elements are denoted by tags – that is the element name in angle brackets – just as in HTML. Every XML document must have a single root element that contains the rest of the XML.

Going Further

To learn more about XML, how it is structured and how it can be used see this tutorial from IBM: http://www.ibm.com/developerworks/xml/tutorials/xmlintro/

  • Can you guess another URL which would also get you the XML version of the BNB record?
  • Look at the URL in the spreadsheet and compare it to the URL you actually arrive at if you follow the link.

The XML returned by the BNB API has a ‘result’ element as the root element. The diagram below partially illustrates the structure of the XML.

BNB XML Structure
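If the diagram does not display for you, a partial plain-text sketch of the structure (based only on the elements used in this exercise – the real record contains many more) is:

<result>
  <primaryTopic>
    <title>…</title>
    …
  </primaryTopic>
  …
</result>

The isbn10 element used later in this exercise also sits somewhere within this tree.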

To extract data from the XML we have to ‘parse’ it – that is, tell a computer how to extract data from this structure. One way of doing this is using ‘XPath’. XPath is a way of writing down a route to data in an XML document.

The simplest type of XPath expression is to list all the elements that are in the ‘path’ to the data you want to extract using a ‘/’ to separate the list of elements. This is similar to how ‘paths’ to documents are listed in a file system.

In the document structure above, the XPath to the title is:

/result/primaryTopic/title

You can use a shorthand of ‘//’ at the start of an XPath expression to mean ‘any path’ and so in this case you could simply write ‘//title’ without needing to express all the container elements.

Going Further

  • What would the XPath be for the ISBN-10 in this example?
  • Why might you sometimes prefer to write the path out in full rather than use the shorthand ‘//’ for ‘any path’? Can you think of any possible undesired side effects?

Find out more about XPath in this tutorial: http://zvon.org/comp/r/tut-XPath_1.html

Using the API

Now you know how to get structured data for a BNB item, and the structure of the XML used, you can extend the Google Spreadsheet you created in Exercise 1 to display more detailed data for the item.

Google Spreadsheets has a function called ‘importXML’ which can be used to import XML, and then use XPath to extract the relevant data. In order to use this you need to know the location of the XML to import, and the XPath expression you want to use.

In Exercise 1 you should have finished with a list of URLs in column C. These URLs can be used to get an HTML version of the record. To get an XML version of the same item, you simply need to add ‘.xml’ to the end of the URL.
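For example (using the record from earlier in this exercise), if a cell in column C contained http://bnb.data.bl.uk/id/resource/010712074, then adding ‘.xml’ gives:

http://bnb.data.bl.uk/id/resource/010712074.xml

which, because of the equivalence of the ‘id’ and ‘doc’ URLs noted earlier, delivers the same XML as http://bnb.data.bl.uk/doc/resource/010712074.xml.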

The XPath expression you can use is ‘//isbn10’. This will find all the isbn10 elements in the XML.

With these two bits of information you are ready to use the ‘importXML’ function. In Cell D6, type the formula:

=importXml(concatenate(C6,".xml"),"//isbn10")

This creates the correct URL with the ‘concatenate’ function, retrieves the XML document, and uses the XPath ‘//isbn10’ to get the content of the element – the 10-digit ISBN. N.B. Depending on the locale settings in Google Docs, it is sometimes necessary to use semicolons in place of the commas in the formula above.
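As a variation (a sketch that assumes the same record structure as above), the same approach with a different XPath expression pulls back the title rather than the ISBN – for example, in Cell E6:

=importXml(concatenate(C6,".xml"),"//title")

Note that ‘//title’ may match more than one element if the record contains several titles, in which case the results will spill into more than one cell.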

Congratulations! You have used the BNB API to retrieve XML and extract and display information from it.

You have:

  • Understood the URLs you can use to retrieve a full record from the BNB
  • Understood the XML used to represent the BNB record
  • Written a basic XPath expression to extract information from the BNB record

Going Further

  • How would you amend the formula to display the publication information?
  • Now you have an ISBN for a BNB item, can you think of other online resources you could link to or use to further enhance the display?
  • How would you go about bringing in an additional source of data?

To see one example of how this spreadsheet could be developed further see https://docs.google.com/spreadsheet/ccc?key=0ArKBuBr9wWc3dEE1OXVHX2U2YTkyaHJxWjI2WTFWLUE&usp=sharing

  • What new source has been added to the spreadsheet?
  • What functions have been used to retrieve and display the new data?
  • Why is the formula used more complex than the examples in the two exercises above?