CULIB - Cambridge University Libraries Information Bulletin

ISSN 0307-7284    Edited by Kathryn McKee, Mary Kattuman, Lyn Bailey and Kate Arhel

Issue 77, Michaelmas 2015: Cataloguing and classification

CONTENTS

Editorial
Linked data and cataloguing
RDA post-implementation: has it made cataloguing better? The example of film
Metadata games: engaging with our digital collections
Cataloguing medieval manuscripts: some reflections
"Zedifying" Shakespeare: home-grown classification at the English Faculty Library
People


EDITORIAL

This issue looks at cataloguing and classification. Cataloguing rules and standards are always being tweaked and expanded, but the last few years have seen some huge changes as RDA became the norm for many libraries, including Cambridge. More changes are on the horizon, and Thomas Meehan provides us with an introduction to Linked Data and what it might mean for the MARC format. Colin Higgins reviews the implementation of RDA and discusses whether it has solved all the issues we had with AACR2, especially in relation to the cataloguing of film.

However, cataloguing doesn't have to be all back-office work: a team in Edinburgh have been working on a crowdsourcing tagging game to help them improve the descriptions of their image collections. The results enrich their metadata and, at the same time, engage a wide range of people and improve access to their collections.

In the twenty-first century, how do you go about describing a large private manuscript collection? This was the dilemma facing Suzanne Reynolds as she set about work on the collection at Holkham Hall, and she describes the interesting solutions she developed.

Finally, not forgetting classification, Helen Murphy from the English Faculty Library describes the advantages and disadvantages of an in-house classification scheme when it needs to change. This is not so easy when the section in question covers Shakespeare.



LINKED DATA AND CATALOGUING

Cataloguers are currently living in interesting times, with new cataloguing rules, new material types, new discovery software, and new workflows as vendor-supplied records and shelf-ready schemes become more common. Whereas MARC – used to encode catalogue records on computer systems – has so far withstood all these upheavals, linked data is on course not only to displace it, but to profoundly change cataloguing practices, software, and perhaps even cataloguing's scope.

What is Linked Data?

If the web we are familiar with is a web of documents for largely human consumption, linked data attempts to build a web of data for computers to work with instead. Whereas an HTML page – such as a library catalogue display – is easy for people to read and understand as long as they know the language it is written in, computers find it much harder to extract meaning from it. For instance, which table or paragraph element represents the title of a book, which is the ISBN, which elements are people, and what role did each person play in relation to which book? Linked data attempts to build a semantic web with real machine-readable meaning.

Linked data is not a single technical standard as such, but in 2006 Tim Berners-Lee – who was, of course, instrumental in founding the web itself – laid down some basic principles:

  1. Use URIs as names for things.
  2. Use HTTP URIs so that people can look up those names.
  3. When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL).
  4. Include links to other URIs so that they can discover more things.

An HTTP URI is really just a familiar web URL used not just as an address for finding a document, but as an identifier for something. Whereas http://en.wikipedia.org/wiki/Evelyn_Waugh is the location of a Wikipedia article about Evelyn Waugh, the URI http://bnb.data.bl.uk/id/person/WaughEvelyn1903-1966 identifies the author himself, and was created by the British National Bibliography (BNB) for that purpose.

When we look up the Wikipedia article, we expect something in HTML that our browser can understand, but Berners-Lee's third principle suggests providing something a computer can better read and understand, such as RDF. The BNB has done just that, so a computer looking up the author's URI will see, amongst a great deal more, the following piece of data in RDF:

<http://bnb.data.bl.uk/id/person/WaughEvelyn1903-1966>
  <http://www.bl.uk/schemas/bibliographic/blterms#hasCreated>
    <http://bnb.data.bl.uk/id/resource/006984277> .

This is a single statement with three parts, called a triple. RDF is all written in such triples. The triple above consists of three URIs: the first one identifies Evelyn Waugh; the third one identifies a 1928 edition of Decline and Fall; and the second one specifies the relationship between the author and the book, i.e. that Evelyn Waugh created it. Visiting any of these URIs reveals more information such as names, further works, dates, and publishers, including more links to fulfil Berners-Lee's final principle. Providing links is fundamental to both the web of documents and the web of data. What's more, if you point your browser at any of those URIs (simply remove the angle brackets and type them in) you will be given information humans can read too.
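To make this concrete, here is a minimal sketch in Python using the rdflib library. It fetches the data behind the author's URI, exactly as a linked data application would, and prints every triple it finds; whether the BNB service still answers at this address is an assumption.

  # A sketch, not a recipe: fetch the RDF behind the BNB's URI for
  # Evelyn Waugh and print each (subject, predicate, object) triple.
  # Assumes the BNB linked data service is still online at this address.
  from rdflib import Graph

  g = Graph()
  # rdflib retrieves a machine-readable format (e.g. RDF/XML) over HTTP,
  # following Berners-Lee's third principle.
  g.parse("http://bnb.data.bl.uk/id/person/WaughEvelyn1903-1966")

  for subject, predicate, obj in g:
      print(subject, predicate, obj)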

Linked Data and Libraries

Linked data by no means originated in libraries and is used by many non-library organisations, including Ordnance Survey, the BBC, and the DBpedia project, which publishes a linked data version of Wikipedia. However, libraries are now keenly involved in creating it. The linked data version of the BNB was first published in 2011, the same year Cambridge University Library experimented with converting a large amount of MARC data and releasing it as linked data. The following year, OCLC started experimenting with publishing linked data embedded within web pages using schema.org, a vocabulary designed primarily by and for search engines such as Google and Bing. It can look over-simplified for library purposes but obviously has powerful backing, and OCLC is also working to extend it for library use. A number of authority files have been openly published as linked data, including OCLC's VIAF and the Library of Congress's name and subject authorities.
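To give a flavour of what embedded schema.org data looks like, here is a sketch that builds the kind of JSON-LD a catalogue page might carry; the book details below are invented for illustration, not taken from OCLC's output.

  # Illustrative sketch: the kind of schema.org description a catalogue
  # page might embed as JSON-LD. All values here are invented.
  import json

  book = {
      "@context": "http://schema.org",
      "@type": "Book",
      "name": "Decline and Fall",
      "author": {"@type": "Person", "name": "Evelyn Waugh"},
      "datePublished": "1928",
  }
  # The output would sit inside a <script type="application/ld+json">
  # element in the page's HTML, where search engines can read it.
  print(json.dumps(book, indent=2))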

A number of libraries have gone further and tried to use linked data at the core of their systems, most notably the National Library of Sweden and the Oslo Public Library. However, the vast majority of libraries are still using MARC for their cataloguing input, storage, and exchange, something that many feel has to change.

MARC Must Die!

Libraries have been using one form or other of MARC since the 1960s to store catalogue records on computers. Even many librarians not fortunate enough to be cataloguers will be familiar with its system of numbered fields, indicators, and single-letter subfields. MARC has been enormously successful, especially in sharing records, but has changed little in that time, despite the rise of web searching and display for which it and, to be fair, the cataloguing rules, were never designed. In 2002, Roy Tennant of OCLC wrote an article suggesting that:

The problems with MARC are serious and extensive, which is why a number of us are increasingly convinced that MARC has outlived its usefulness.

The problems with MARC can be summarised as its reliance on text strings to identify things like people and places, its close marriage to cataloguing rules that depend on specific punctuation for display, and the fact that it is used only in libraries. When RDA was introduced in 2011, the same year the BNB started publishing linked data, the US testing committee specified that work must start on a replacement for MARC. In consequence, the Library of Congress is working on a linked data-based system called BIBFRAME (short for the Bibliographic Framework Initiative).

BIBFRAME

BIBFRAME was initially developed by the Library of Congress with the consulting firm Zepheira. It rather notoriously simplifies the FRBR model on which RDA is based, partly in an attempt to widen its remit beyond traditional cataloguing. It has also drawn criticism for seeking self-sufficiency in its vocabulary: although re-using others' vocabularies is seen as good practice in linked data, BIBFRAME chooses not to do so. For instance, an RDF vocabulary called FOAF was initially developed for internet users to describe themselves, but has developed into a rich way to describe people. Libraries also need to describe people (e.g. authors), so it can make sense to re-use that work rather than starting again. The BNB uses FOAF in its data, amongst a number of other common vocabularies, including Dublin Core. This in turn makes it easier for non-librarians familiar with FOAF or Dublin Core to understand the BNB data and build applications that use it.
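A sketch of what such re-use looks like in practice follows, again using Python's rdflib, which bundles the FOAF and Dublin Core namespaces. The property choices here are mine for illustration; the BNB's actual data is richer.

  # Sketch of vocabulary re-use: describing an author and a book with
  # FOAF and Dublin Core terms rather than invented properties.
  from rdflib import Graph, URIRef, Literal
  from rdflib.namespace import FOAF, DCTERMS

  g = Graph()
  author = URIRef("http://bnb.data.bl.uk/id/person/WaughEvelyn1903-1966")
  book = URIRef("http://bnb.data.bl.uk/id/resource/006984277")

  g.add((author, FOAF.name, Literal("Evelyn Waugh")))
  g.add((book, DCTERMS.creator, author))
  g.add((book, DCTERMS.title, Literal("Decline and Fall")))

  # Serialise as Turtle, a compact text format for RDF triples.
  print(g.serialize(format="turtle"))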

It is not certain when BIBFRAME will be completed. It is still under development, and discussion papers continue to be published. Despite the Library of Congress's centralised approach and BIBFRAME's unfinished state, there are now also several offshoots, including from the US National Library of Medicine. Zepheira has gone on to independently develop a "lite" version of BIBFRAME, set up initiatives to promote the visibility of library data, and offer training and conversion to a number of US libraries.

The Future

Where libraries have the opportunity and daring, linked data, even if experimental, has already arrived; for widespread day-to-day activity, however, it will likely take much more time as systems, software, and processes need to slowly change. Linked data is likely to leave library catalogue data and standards looking far more fragmented than the current uniform world of MARC records. But it will also become less isolated and more closely integrated with the rest of the world and other parts of the library: a URI shared between a catalogue, a repository, and Wikipedia on a common web framework raises enormous and exciting possibilities.

Selected References

Berners-Lee, Tim. Linked Data: Design Issues
http://www.w3.org/DesignIssues/LinkedData

British Library. Linked Open Data
http://www.bl.uk/bibliographic/datafree.html#lod

European Library. Case Study: Cambridge University Library delivers linked open data and enrichments.
http://www.theeuropeanlibrary.org/tel4/newsitem/5450

Library of Congress. Bibliographic Framework Initiative.
http://www.loc.gov/bibframe/

OCLC. OCLC adds Linked Data to WorldCat.org.
https://www.oclc.org/news/releases/2012/201238.en.html

Rekkavik, Asgeir. RDF Linked data cataloguing at Oslo Public Library.
http://digital.deichman.no/blog/2014/07/06/rdf-linked-data-cataloguing-at-oslo-public-library/

Tennant, Roy. MARC Must Die.
http://lj.libraryjournal.com/2002/10/ljarchives/marc-must-die/

Thomas Meehan
Head of Current Cataloguing
Senate House Hub
University College London





RDA POST-IMPLEMENTATION: HAS IT MADE CATALOGUING BETTER? THE EXAMPLE OF FILM

In 1998, the Library Association, the American Library Association, and the Canadian Library Association published the proceedings of a recent conference. Only 53 people had participated in The International Conference on the Principles & Future Development of AACR, but its influence has been profound. That same year, the long-gestated Functional Requirements for Bibliographic Records: Final Report (FRBR) was signed off by the International Federation of Library Associations and Institutions (IFLA). FRBR aimed to conceptualize bibliographical description using a model borrowed from software engineering.

From these two publications, a new cataloguing standard would be forged: Resource Description & Access (RDA). It responded to a perceived dissatisfaction with the second edition of the Anglo-American Cataloguing Rules (AACR2), offering a model whose analytical rigour could be expressed in flowcharts. With its unfamiliar vocabulary, promiscuous attitude to rules, confusing structure, and philosophical abstractions, RDA has been the biggest change to cataloguing since the computerization of libraries.

It can hardly be a coincidence that 1998 was also the year when the dot-com boom started to look like a bubble. Twelve months earlier, two PhD students at Stanford University had registered the domain name Google.com. In 1998, they incorporated a new company whose goal was 'to organize the world's information and make it universally accessible and useful.' That year, too, one of my university lecturers turned up late for our epistemology class. 'I'm worried I've done something terrible,' he confessed. 'I've just bought a book from the internet. There's this thing called Amazon…'

RDA was thus a standard born of desperation, frustration, pseudo-philosophy, and organizational momentum. It has seen more discussion, more planning, more testing, and more revision than any cataloguing guide in history. And it's a genuinely radical departure from everything which preceded it. The lineage from Antonio Panizzi's Ninety-One Cataloguing Rules (1841) to AACR2r (2002) is relatively direct. With RDA, it is broken; RDA stands on its own foundation.

Two years after its implementation by the Library of Congress, and then, domino-like, by most of the Anglophone national and research libraries, it's worth asking how far the goals of RDA's devisers have been met. Have the fundamental rule revisions, thought to be necessary for what RDA calls the 'digital world and an expanding universe of metadata users', improved cataloguing? I have an interest in the cataloguing of films on optical disc. So I thought it might be interesting to survey RDA, post-implementation, through the lens of film.

Standards of bibliographic description have always struggled with audio-visual material. When Charles Ammi Cutter published his Rules for a Printed Dictionary Catalog in 1876, his first 'Object' was 'To enable a person to find a book' (my emphasis). The Anglo-American Cataloguing Rules (1967) covering 'Non-book materials' were largely drawn from the Library of Congress's own in-house guide, and subjected to only limited outside scrutiny. Though AACR2 was developed partly to accommodate non-book materials, it also tried to describe these materials according to broadly similar rules, often overlooking film's technical complexity.

RDA claims to have 'the scope needed to support comprehensive coverage of all types of content and media' (RDA 0.3.1). So how well does it help cataloguers describe the most common A/V object in our libraries – films on DVD?

That depends. RDA offers both too much advice and too little. It's less prescriptive than earlier rulebooks – it permits, and even encourages, the application of local rules, and defers to the judgement of the cataloguer. But because it's been written by committee, and tries to be comprehensive, it undermines its own principles. Panizzi's Rules were five pages long. Cutter's ran to 165 pages, AACR to 400, and AACR2 to 640. So far, the unfinished RDA runs to 1,056 pages in its print version. The extra content baffles and frustrates more often than it clarifies. Let me give two examples (there are many others).

Say you want to find out who to record in your 'statement of responsibility' (usually, those whose creative input into the resource has been greatest). It's easy when cataloguing a book – with some caveats, you transcribe the author or editor whose name appears on the title page. But with a film?

It's an important question, since the statement of responsibility appears next to the title in a catalogue display. AACR2 advised recording those with 'a major role in creating a film (e.g., as producer, director, animator)'. It provided four examples, which also gave the names of a writer and, confusingly, two editors. It's an evasive guideline, but it trusts cataloguers know what they're cataloguing, and know what their users need.

RDA goes much further. It instructs cataloguers to transcribe a statement which relates 'to the identification and/or function of any persons, families, or corporate bodies responsible for the creation of, or contributing to the realization of, the intellectual or artistic content of the resource.' The verbosity is typical of RDA; unlike the codes written by Panizzi, Cutter, Lubetzky and Gorman, RDA lacks both style and elegance.

Among the examples, only one seems to describe a film. But unlike AACR2's illustrations, it doesn't give the title. All we get is: 'directed and produced by the Beatles.' Fans of the Fab Four may realize that the Beatles didn't actually direct and produce any of their films. So it's impossible to map the example to a real-world resource. In other words, we can't watch the film to see how the rule has been applied.

It gets worse. The text then advises that those 'who have contributed to the artistic and/or technical production' don't belong in a statement of responsibility, but elsewhere in the record. Who, one wonders, remains to be transcribed?

Where AACR2 is concise and clear, RDA is wordy and confusing. Where AACR2 trusted you to understand what you were cataloguing, RDA holds your hand. But all too often it's like one of those overconfident guides who shepherd tour-groups around Cambridge Colleges, talking all sorts of nonsense. One gets the suspicion that the authors of RDA didn't actually know much about film. That limited knowledge turns out to be a dangerous thing.

One example: a film's aspect ratio, the ratio of the horizontal width of the image to its height, is an important component of its style. Wider formats (usually 1.85:1 or 2.39:1) offer different artistic and technical possibilities. Narrower formats are often chosen by filmmakers to evoke the feel of Hollywood's Golden Age, since before 1953, nearly every film was shot at 1.37:1, the so-called 'Academy ratio'.

RDA's rules governing aspect ratio are particularly unhelpful and misleading. They require cataloguers to choose between three terms, which are mathematically neat, but otherwise unknown to cinema. 'Full screen' must be used for ratios of less than 1.5:1, 'wide screen' (note the space) for ratios of 1.5:1 and above, and 'mixed' 'for resources that include multiple aspect ratios within the same work'.
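Indeed, the rule is so mechanical that it can be written out in a few lines of code. A sketch follows; the function name and the list-based handling of 'mixed' are mine, the terms and the 1.5:1 threshold are RDA's as quoted above.

  # RDA's three aspect ratio terms reduce to a simple threshold test.
  # Function name and handling of 'mixed' are mine for illustration.
  def rda_aspect_ratio_term(ratios):
      """Map the aspect ratios found in a resource to RDA's prescribed term."""
      if len(set(ratios)) > 1:
          return "mixed"
      return "wide screen" if ratios[0] >= 1.5 else "full screen"

  print(rda_aspect_ratio_term([2.39]))        # 'wide screen'
  print(rda_aspect_ratio_term([1.37]))        # 'full screen' (Academy ratio)
  print(rda_aspect_ratio_term([1.37, 1.85]))  # 'mixed'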

The Statement of International Cataloguing Principles, published by IFLA in 2009, advises cataloguers to draw their descriptive vocabulary from the discipline they're describing. It's been a core principle of cataloguing for most of the past century, but is ignored by RDA here. It wouldn't have taken long to check a Hollywood handbook (perhaps the American Society of Cinematographers' manual), and see that 'widescreen' is a single word, used to describe any film wider than the Academy ratio. RDA's arbitrariness risks making our records look silly to those familiar with the standard vocabulary. Incidentally, AACR2 was much less prescriptive, asking only that a cataloguer's transcription of aspect ratio be succinct.

These complaints may seem minor, but as we know, cataloguing lives or dies upon things as minuscule as the placing of a full stop. The two flaws I've outlined are two among many. For instance, RDA does little to help describe DVD regional encoding (which makes discs bought in one jurisdiction unplayable in another) or audio playback characteristics; continues to recommend the use of the odd word 'videodisc'; and fails to include directors in its list of 'creators' – recording them is thus entirely optional.

I don't doubt that RDA has done a great service to the cataloguing community. AACR2 descriptions take place mostly at the 'manifestation' level, to use the terminology of FRBR. That is, they permit description of an object's physical characteristics. RDA opens up the possibility of describing at the 'work' and 'expression' levels, which deal with intellectual or artistic activity and content. The accompanying new MARC21 fields which enable the coding of film's technical features have been a boon to A/V cataloguers. But in attempting to be all things to all people, RDA has also burdened and baffled us in equal measure.

Colin Higgins
Librarian
St Catharine's College

[Author of Cataloging and Managing Film and Video Collections (American Library Association, 2015). He blogs on cataloguing and classification at https://cutterslaststand.wordpress.com/]





METADATA GAMES: ENGAGING WITH OUR DIGITAL COLLECTIONS

Metadata game screenshot

For the last year, we've been working on the development of a crowdsourcing tagging game to improve the description of our digital image collections and to engage with the Library's diverse community of users. The game harnesses the knowledge, enthusiasm and competitiveness of 'the crowd' to tackle a challenge facing libraries and heritage organisations across the world: how to make large, particularly legacy, digital collections accessible and discoverable through local databases and search engines. Many of our collections suffer from poor descriptive metadata: this game enables users to add tags to our images, gaining points for the number and quality of tags added, and competing with others for a top score. As well as providing us with valuable data, this project has allowed us to further develop our engagement with existing users, and to reach out to other groups who had not previously accessed the collections.

Our initial motivation was a functional one: to improve the descriptive metadata of our digital images in order to make the collections more discoverable online. However, as the project progressed, our motivations evolved. We saw the potential of the game not only as a tool for engaging with our existing users, but for bringing the collections to a new audience.

We have approximately 25,000 high quality digital images (http://images.is.ed.ac.uk/), a collection which has been developed through a combination of funded project work, documentary photography and requests from readers in the Centre for Research Collections (CRC). As a result, the levels of descriptive metadata vary dramatically: project-funded digital images, such as the recently digitised Roslin Glass Slides collection, tend to have excellent metadata because cataloguing is generally included within the project scope, whereas digitisation requests from the reading room often only have very basic descriptions. The information for these images comes when the reader fills out a photography request form: they enter information about the item from which the image came, such as the title, author or shelfmark, but tend not to include any information to describe the contents of the image itself. We lack a professional cataloguer for our digital images, so the often patchy information provided by the reader is the only metadata we have for this collection. We have attempted to overcome this issue by employing interns and volunteers to enhance our data, but the sheer size of the collection means their work can only cover a small number of images.

As a result, it is often very difficult for users, and for search engines, to discover our digital image collections. If, for example, a user wanted to find all the images of turtles we have in the database, they would type the word 'turtle' into the search box, and all the images containing the word 'turtle' somewhere in their metadata would be returned. However, one of our best images of a turtle, from the Natural History of Carolina, Florida and the Bahama Islands, would not be among them, because the word doesn't actually appear anywhere in the record. Our project seeks to overcome this by crowdsourcing descriptive tags and re-uploading them to the database, thereby ensuring all of our images can, at a bare minimum, be discovered at a local level.

Turtle from the Natural History of Carolina, Florida and the Bahama Islands
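The sketch below shows the problem and the fix with invented records (the shelfmark is made up): a simple keyword search over the formal metadata alone misses the turtle, but finds it once a crowdsourced tag is folded into the searchable text.

  # Invented records illustrating the discovery problem: a keyword
  # search over formal metadata alone misses the turtle image.
  records = [
      {"title": "Natural History of Carolina, Florida and the Bahama Islands",
       "shelfmark": "Untagged.1", "tags": []},
  ]

  def search(records, term):
      term = term.lower()
      return [r for r in records
              if term in " ".join([r["title"], r["shelfmark"]] + r["tags"]).lower()]

  print(len(search(records, "turtle")))  # 0: nothing in the formal metadata

  records[0]["tags"].append("turtle")    # a crowdsourced tag is added
  print(len(search(records, "turtle")))  # 1: the image is now discoverable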

Over the past year we have seen a logical progression as we aim, ultimately, to develop a self-contained platform which will require minimal staff input and involve the entry of tags by a willing, engaged and diverse audience. We initially held internal staff sessions, from which we gained feedback and made alterations, and then took the game to the student population during our 'Pop Up Library', where students were encouraged with lollipops, coffee vouchers and the opportunity to beat their friends to a top score. After the success of these sessions in the Main Library, we took the game to new locations on the University campus, then held a session in Edinburgh City Council's Central Library, a seminal moment as the game was taken off campus for the first time. Between these events, we launched the game publicly on our library labs site (http://librarylabs.ed.ac.uk/). In the year since, we have harvested 9,537 tags from 166 users of a diverse range of ages and backgrounds, including academics, students, members of staff and school pupils.

Metadata Games on tour

We have found that different people have different motivations for taking part in the game. Some like to participate in order to get the highest score, while others enjoy the addictive nature of tagging. Some players like to use the game as a tool to explore the collections, while others require additional incentives (such as coffee and sweets) to participate. We have also developed a version of the game geared towards users with more specialist knowledge of a given collection: this game includes additional fields which allow the player to input more technical and advanced information such as transcriptions, translations and details about the creator or location of the image. Having this additional version has enabled us to provide for different levels of ability.

While working on our own game, we have created a strong relationship with Tiltfactor (http://www.tiltfactor.org/), a design lab based at Dartmouth College which researches and develops games and has an interest in the role of play in investigating issues and ideas. In December 2014 we added around 2,500 of our images to their Zen Tag game: this has provided us with 9,000 additional tags and, perhaps more importantly, widened access to our collections among user groups in the US and across the world. We are now in discussions with Tiltfactor about how we can continue to collaborate in order to further develop a metadata game plugin that can be used by other institutions.

So far we have integrated 1,337 tags into our online collections portal (http://collections.ed.ac.uk), meaning hundreds of our digital images are now more discoverable online. In displaying these, we decided to differentiate between formal metadata (author, shelfmark etc.) and the crowdsourced tags in order to make it clear that the latter are a complement to, rather than a substitute for, existing professionally created metadata.

We have built a quality control element into the game to ensure that the tags which are re-uploaded to the system are suitable. Part of the game requires players to assess the quality of other players' tags – only those which have been deemed 'good' a certain number of times and pass a staff assessment are then uploaded to the database. We also have a list of words to remove before uploading: stop words, inappropriate content, numbers, and words selected for removal by collection curators.
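A minimal sketch of that pipeline follows; the vote threshold and the word lists are invented for illustration, and the real game's values may differ.

  # Sketch of the quality control step: a tag is uploaded only if enough
  # players rated it 'good', staff approve it, and it survives the
  # removal lists. Threshold and lists are invented for illustration.
  GOOD_VOTES_NEEDED = 3
  STOP_WORDS = {"the", "and", "of"}
  CURATOR_REMOVALS = {"slide", "photograph"}

  def approve(tag, good_votes, staff_approved):
      word = tag.lower().strip()
      if good_votes < GOOD_VOTES_NEEDED or not staff_approved:
          return False
      if word in STOP_WORDS or word in CURATOR_REMOVALS or word.isdigit():
          return False
      return True

  print(approve("turtle", good_votes=4, staff_approved=True))  # True
  print(approve("1837", good_votes=5, staff_approved=True))    # False: a number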

The project has generated a good amount of publicity for the library, helped us improve the description of our images and enabled us to develop a strong collaborative relationship with Tiltfactor at Dartmouth. We feel the greatest impact, though, has been in engaging users and raising awareness of our collections among people who previously had no relationship with them. Our main intention now is to continue to develop the game, and to launch it formally so that students and other interested parties can continue to enjoy learning more about our collections and contributing to our crowdsourcing activities. We are also looking at expanding the types of collections involved in the platform and investigating different types of games for teaching, focussed primarily on engagement and the development of students' library skills.

See more on our blog: http://libraryblogs.is.ed.ac.uk/librarylabs/

Gavin Willshaw, Digital Curator, Library and University Collections, University of Edinburgh
Claire Knowles, Library Digital Development Manager, Library and University Collections, University of Edinburgh
Scott Renton, Digital Developer, Library and University Collections, University of Edinburgh

Images reproduced by permission of the authors





CATALOGUING MEDIEVAL MANUSCRIPTS: SOME REFLECTIONS

Holkham Hall, MS 345, fol. 1r, Livy, Ab urbe condita (Milan, second quarter of the fifteenth century)

In late May this year, two very heavy boxes arrived at the back door of the Fitzwilliam Museum. I had been awaiting them with some trepidation, for they contained the results of my ten (or more) years of work on the manuscript collection at Holkham Hall. To put it in historical context, these boxes contained the first successful attempt in over two hundred years to publish a scholarly catalogue of one of the most important collections of medieval manuscripts still in private hands. Would the catalogue do them justice?

A brief outline of the collection is probably in order. Just under a third of the five hundred and seventy manuscripts still at Holkham belonged to Sir Edward Coke (1552-1634), Attorney General to Elizabeth I, Chief Justice to James I, and leading parliamentarian in the years leading up to the Civil War. They reflect his interests in the law, in literature, heraldry, and religion, and are mainly of northern European origin. The greater part was acquired by Sir Edward's eventual successor, Thomas Coke, 1st Earl of Leicester (1697-1759), principally through a series of large-scale purchases on his Grand Tour (1712-1718), and includes examples of the whole range of European book production from the twelfth to the eighteenth century. Several lavishly illuminated manuscripts were acquired by Thomas William Coke (better known as Coke of Norfolk), 1st Earl of the 2nd Creation (1754-1842), who also commissioned the programme of restoration and rearrangement that gave the Holkham manuscript library its present form.

Though a small number of treasures had been widely exhibited, most of the Holkham manuscripts were known only by the brief details given by Seymour de Ricci in his Handlist of the Manuscripts in the Library of the Earl of Leicester at Holkham Hall (1932). De Ricci admitted that he had 'hardly been able to do more than skim the surface', and had based his work not upon the manuscripts themselves but upon the unpublished descriptions drawn up by William Roscoe (1753-1831) and Frederic Madden (1801-1873) in the first abortive attempt to catalogue the manuscripts in the 1820s. In the twentieth century, some of the most sumptuous illuminated manuscripts were surveyed by Leon Dorez (1908); in the 1980s, Jeremy Griffiths began work on a comprehensive catalogue of the collection that was brought to a sudden halt by his early death in 1997.

The case for a full scholarly catalogue was overwhelming. The wealth of the collection was widely acknowledged, and de Ricci's handlist was skeletal, outdated, and in many cases, seriously mistaken. My aims were therefore straightforward: to identify the textual contents accurately and fully; to date the manuscripts as far as possible to within a quarter of a century, and to localise them as precisely as possible, using palaeographical, art-historical, textual and other evidence; and to suggest the potential significance of an individual manuscript for future research.

If the aims were straightforward, the means of achieving them were perhaps less so! Anyone who has consulted manuscript catalogues will know that while the fundamental categories of information are fairly uniform (contents, structure, script, illumination, provenance, bibliography), nomenclature, arrangement, and level of detail are most certainly not. So, the first step was to decide on a form of entry that had enough flexibility to respond to the complexities of the manuscripts, and could answer the requirements of this particular collection. In the case of the Holkham manuscripts, whose details were barely known, and whose location in a remotely situated private house makes access difficult, I designed the form of entry to allow for as much detail as possible.

A crucial step in this process was the selection of a model – a model to be tweaked and adapted, of course, but a point of reference all the same. After surveying the range of catalogues produced in Britain, Europe and America in the second half of the twentieth century, I chose Andrew Watson's Descriptive Catalogue of the Medieval Manuscripts of All Souls College, Oxford (1997) as my guide. Not an easy act to follow, but a publication that gave me, as a manuscripts scholar, what I needed from a catalogue. In particular, Watson's practice of locating additional comments (distinguished by a smaller typeface) within each section, rather than pulling them together into a single section of Comments, allowed me to describe these under-explored books fully.

With Watson always in mind, I produced for each manuscript (or multi-volume set) a single entry with a Heading of manuscript number, author and title, language, place of production, date of production, material support (parchment or paper), and secundo folio. Some catalogues have abandoned giving the first words of the 'second folio', but I had managed to identify Holkham Hall, MS 344 in the personal library of Pope Benedict XIII at Avignon in the late fourteenth century on that basis, so was certainly convinced of its value!

The first section in the body of the entry details the textual Contents. As the Holkham library is not easily accessible, I decided to include the incipits and explicits for each individual text, as they are found in the manuscript (explaining the principles I used in producing these transcriptions in the Introduction). For each text, I cited modern critical editions with page and line numbers, and where I had not been able to identify texts, I included a brief indication of content.

Next came Structure, where I described the physical make-up of the manuscript as it is constituted today, noting any evidence of alterations to the original structure. In essence, I followed the descriptive and collational principles established by Neil Ker's Medieval Manuscripts in British Libraries, supplemented by more recent work on humanistic manuscripts in Albert Derolez's Codicologie des manuscrits en écriture humanistique sur parchemin (1984). For paper manuscripts, I described any visible watermarks and indicated any matches in Briquet's survey.

So fraught is the debate over palaeographical nomenclature that some catalogues, particularly in Italy, have now abandoned this element of a manuscript description. I decided to retain a section on Hands in order to give a general indication of the type of script, but for details of individual hands, I referred the reader to illustrations which were, for the most part, embedded within the text. Where I could identify the hand of an individual scribe, or where the hand was hard to categorize, I gave a more detailed description. Given the growing scholarly interest in reading, use, and ownership, and the fact that many Holkham manuscripts were glossed by identifiable humanist readers, I decided to list and analyse significant Annotation separately in this section.

The description of Decoration is ordered hierarchically, from full-page miniatures, through the various categories of initials, down to line-fillers and the flourishing of rubrics or catchwords. Binding outlines the date, material, colour, and any ornamental or heraldic elements, but as many of the Holkham manuscripts were rebound in a well-documented campaign from 1813 to 1821, I took the opportunity to give the relevant entry from the binder's account book (Holkham Hall, MS 748), with his description and price.

The section on History and Ownership presents in summary form all the evidence for the origin and subsequent history of the manuscript in chronological order up to the present day, as follows: 'Copied in Florence in 1423 (scribal colophon, fol. 121v)', and so on. The Bibliography aims to give a complete record of all significant references to the manuscript in print. It does not include the works cited in the main body of the entry that do not mention the manuscript itself; these are to be found in the main Bibliography of Works Cited at the end of the volume.

Finally, as the manuscripts have been subject to dispersals and re-arrangement since de Ricci's time, I included a Concordance of Shelfmarks listing de Ricci numbers with the current location of each manuscript, along with additions to de Ricci's list, in the hope that this would become the standard point of reference for the Holkham manuscript collection. Whilst not the most exciting element to compile, I suspect this section may be one of the most heavily used parts of the catalogue.

By way of conclusion, I would like to venture some thoughts on producing a printed catalogue of manuscripts in the early twenty-first century.

  • Ideally, manuscripts should be catalogued by an individual or team dedicated to that work and to that work only. The fundamental duties of accuracy and consistency across entries will be achieved more quickly and efficiently by those for whom the catalogue is the sole focus.
  • It is inevitable that the catalogue will evolve, and entries researched and written early in the cataloguing process will need correcting, enhancing, and supplementing. Most manuscript collections are formed of smaller groups of books with common provenances, which may only be revealed in the process of cataloguing. The dialogue between different manuscripts in a collection is one of the most important things to acknowledge and build in to the entries.
  • Do not be afraid to overturn received wisdom, no matter how appealing the attribution or prestigious the provenance. Our scholarly forebears did remarkable work on manuscripts, but scholarship in the many fields on which a manuscript cataloguer draws has changed immeasurably. Moreover, access to electronic manuscript catalogues and to digital reproductions has revolutionised the amount of information at our disposal for the accurate identification of texts, scribes, artists and owners.
  • Signal, date, and categorise according to an agreed typology the annotations in a manuscript. This wealth of information has been ignored or glossed (apologies) over for too long.
  • Illustrate as much as you can afford and as much as the publisher will allow. If at all possible (and with digital typesetting it is eminently possible), embed images within entries to achieve the maximum economy in your descriptions, and the maximum information for the reader.

It is for others to judge whether the Holkham catalogue achieves its aims. What I can say is that cataloguing manuscripts is painstaking work, at times frustrating, even overwhelming – but it is immensely rewarding and the greatest learning experience one could hope for.

Suzanne Reynolds
Assistant Keeper of Manuscripts and Printed Books
Fitzwilliam Museum
Email: scr42@cam.ac.uk

Image reproduced by permission of the Earl of Leicester and the Trustees of the Holkham Estate.





"ZEDIFYING" SHAKESPEARE: HOME-GROWN CLASSIFICATION AT THE ENGLISH FACULTY LIBRARY

We have a dilemma. Our Shakespeare collection has outgrown its shelf space, and we need to move part of it. We've identified a trolley load of books which we can sensibly enough shift to another part of the library. The trouble is that these books are classified as E 34 SHA, and they'll be going into Z. And the structure of 'E' (for English) and 'Z' (for Reference, disappointingly) are completely different. We've discussed it to death (ten minutes or so, which is roughly the same), and there's only one thing for it: we'll have to hack the system. We'll create a brand new number, and that number will be Z 999 SHA. It won't particularly make sense in terms of how 'Z' is currently structured, and it'll be an anomaly across the whole system. But our lovely library users will actually be able to find the books, so in this particular battle of theory vs practice, the latter wins hands down.

Therein lies one of the two main benefits of a home-grown scheme: to an extent, you can make it up as you go along. Our postcolonial collection is getting bigger practically every day, and it needs more numbers. Visual culture, graphic novels, eco-criticism – we're on the case; there's a new number for them all. Of course you can take it too far. There has to be at least a semblance of underlying logic or the whole thing stops making sense at all, and it becomes borderline impossible to assign classmarks without thinking about it for far longer than is desirable. But a home-grown scheme means you can tailor it to your collection, no matter what directions that collection is moving in.

A devil's advocate would say you can do that with established schemes too, and of course these carry the benefit that they may feel familiar to library users. But because of that you may feel some obligation to 'make it fit'; here we're prepared for the fact that, at least in some areas of the library, we're flying (a little) by the seat of our pants.

For us, though, the principal benefit of our home-grown scheme is that it works so well with the undergraduate Tripos. We're arranged chronologically by region – and that's how our students are taught. (Let's assume this is deliberate.) In practice, though, it means our students tend to be able to use one part of the library per term, which makes locating things for them far, far easier, especially when they're reasonably new to Cambridge.

If we could start again from scratch, would we? There'd be talk of adding decimal points to existing classmarks. There are some sections that could do with a twenty-first-century reorganisation, and we might consider getting rid of our geographical divisions. But I doubt we'd go much further than that. It isn't just that classification in general can be about selecting the least rubbish option; it's more a case of, in the words of esteemed philosopher Ms K. Minogue, 'better the devil you know'. And, let's face it, because of the devil we know we can now officially claim to have an 'Emergency Shakespeare' section, which can be nothing less than a victory all round.

Helen Murphy
Deputy Librarian, English Faculty Library





PEOPLE

The UL welcomed Andy Priestner, formerly at the Judge Business School library. Andy will be managing the FutureLib project which aims to deliver innovative new library services across the entire University.

Jenny Grewcock has joined the Libraries Connect Programme as Business Analyst. She will be working to establish the requirements of the programme as well as liaising closely with potential providers. Paul Taylor-Crush has taken up the post of Data Specialist in the LMS Project, and Wendy Stacey from English Cataloguing moved to Periodicals to take his place. English Cataloguing welcomed back Agnieszka Kurzeja, and are happy to have Graham Levene and Clare Shortman join them. The department bid farewell to Richard Johnson, who will now be working in the office of the new Cambridge MP, Daniel Zeichner.

European Cataloguing bid farewell to Lorne Noble, and welcomed Federica Signoriello as their new library assistant.

Hannah Bond has joined the Reader Services Desk team. She comes to the UL having worked in a number of college libraries.

Danielle Spittle left Legal Deposit, as she is moving to Sheringham.

Rare Books Department bid farewell to Laura Nuvoloni, the Incunabula Cataloguer, who for the past five years has been working on the library's incunabula collection, creating specialist records and making them available and searchable online.

The Operations Team bid farewell to Charlotte Ross who had been the PA to the Librarian and the Deputy Librarian. They lost Jack Kelly, their Operations Services Co-ordinator. Ewa Dedza, Sonia Krajciova and Sam Laister have joined the team as Operations Co-ordinators.

The Digital Library Programme welcomed Rekha Rajan as Software Developer. Digital Services bid farewell to their Desktop Support officer Chris Bray-Allen, and Peter Heimer, a member of their Senior Operations Team. They welcomed back Lauren Cadwallader, the Open Access Publishing Officer. The Digital Content Unit were sorry to lose Mark Scudder, their Digital Camera Operator. The library has a new management accountant – Ben Perks will be working closely with the Librarian, Deputy Librarian and the new Leadership Team.

The Office of Scholarly Communications has gained two new Research Repository Assistants – Kennedy Ikpe and Patrick Linton. They have also gained two new Administrative Assistants for their Open Access Project – Dr. Joyce Heckman and Mathew Wright. Dr. Marta Teperek has joined the team as the University's Research Data Facilitator. They bid farewell to Nick Dodd.

Jasmina Makljenovic formerly of the UL, has now joined the Medical Library.

Congratulations to Jill Whitelock on the birth of her son Arthur Peter Ralley, born on the 2nd of February, and also to Janet Davis whose son Jonathan Peter Davis was born on the 15th of April.

The UL is proud to announce that Claire Sewell has been chosen to participate in the CILIP Leadership Programme, which is designed to create additional leadership capacity within the profession. She is one of only twenty-one in the country selected for this programme.

Ann Leonard, one of our longest-serving members, has retired after 45 years of service. She joined the library in 1970, and soon moved to the Official Publications department, from which she retired as Deputy Head. She intends to devote more time to horses and gardening in her retirement.

Neil Parsons and Ashley Hurrell both from Building Services, and Rosina Rusin from the Bindery, have all retired. Derek Hardinge, Principal Building Services Technician also retired after 12 years of service in the UL.

Two long familiar faces left the Reference Department recently. Neil Hudson retired after over 43 years largely within the confines of the West Room, first in the Periodicals Department and later in Reference. From 1997 onwards Neil had sole charge of the Periodicals Enquiries Desk, helping countless readers unravel the many mysteries of serial publications and much more besides. He will now be free to pursue his passion for trains, including taking the controls of a large diesel locomotive on the Nene Valley Railway in September. Neil is not entirely lost to the University. He can still be seen on parade around the Senate House in his role as Vice Marshal.

Ann Toseland followed Neil into retirement after more than thirty-five years in Official Publications (repeatedly), Inter-Library Loans and the Map Room. Ann is perhaps best known for her final and longest-lasting post, initiating readers into the dark arts of the Microfilm Reading Room in her capacity as Superintendent. Ann looks forward to devoting more time to her horses. Both Ann and Neil have received a number of warm tributes from readers and are already sorely missed.

Dr Patrick Zutshi retired as Keeper of Manuscripts and University Archives this summer after 28 years in the UL. He had joined in 1987 as Keeper of University Archives. In 1990, on the retirement of Mr Owen, the then Keeper of Manuscripts, he was appointed Keeper of the newly combined departments. When he took over, the archives of the Royal Greenwich Observatory had just arrived. His initiative led to the creation of a curatorship for the scientific collections. Looking ahead from a time when all the catalogues were hard copy, typescript or even hand-written and the photographic formats were bromide or microfilm, he had the foresight to embrace modern technologies. His colleagues commented on his good humour, understanding and sympathy which has benefitted them all. While he was at the helm, the University Archives grew by over 500%. Beyond this, he is a leading historian of papal administration and has edited the series entitled 'The history of the University of Cambridge: texts and studies'. He leaves behind a strong and resourceful department of the library.

We say goodbye to Marilyn Glanfield who is retiring after 21 years working in libraries in Cambridge, the last twelve as the Librarian in the Centre of African Studies. Marilyn is planning a move to Lincoln to be nearer some of her grandchildren and is looking forward to a new place to explore and make new friends with more freedom and unscheduled time for grandchildren, travel, and old and new interests.

We wish all our former colleagues all the very best in retirement.

After 30 years as the Divinity Librarian, Petà Dunstan has left. She will be greatly missed, especially by the other Arts and Humanities librarians. But she is not going far, as she remains the Fellow Librarian at St. Edmund's College. The new Divinity Librarian is Clemens Gresser, who is switching from the Economics Library.

Rebecca Blunk is the new Librarian at Churchill College. Becky comes to Cambridge from UCS Ipswich where she has been Academic Liaison Librarian. She takes over from Mary Kendall, who is retreating into oblivion after an undisclosed number of decades in the post. We all wish Mary the very best for her retirement.

We also wish Jan Waller, Assistant Librarian at Murray Edwards, well in her retirement.

Trish Howard is leaving Pembroke College Library after 25 years as the part-time library assistant. Pembroke now has a full-time Assistant Librarian, Natalie Kent, who came from the Inner Temple Library, and a new Graduate Trainee Librarian, Matilda Watson, who has invigilated at Trinity College Library in the past.

After 28 years at Girton, Frances Gandy retired from the post of Librarian and Curator on 30 September, although she continues as Graduate Tutor for Science for another year and as a Life Fellow. The Assistant Librarian, Jenny Blackhurst, is Acting Librarian for 2015-16, and the Archivist, Hannah Westall, takes on the Curatorship. It is hoped that the Library Assistants, Helen Shearing and Helen Grieve, will be joined imminently by a part-time Library Assistant and a part-time Archives Assistant.

Elizabeth Bradshaw retired from the Cambridge Colleges Conservation Consortium at Corpus Christi at Easter. We wish her well in her retirement. Sylvia Steven has been appointed as the new Conservator. Sylvia has been working for Trinity College Library, Dublin since October 2014 and prior to that for two years at the National Archives of Ireland.

James Smith was appointed Assistant College Librarian at Christ's College in April 2015. He replaces Charlotte Byrne, who has taken up a new position as Open Source Researcher at the Foreign and Commonwealth Office. James's promotion created a vacancy in the post of Senior Library Assistant, which has been filled by Charlotte Hoare. Charlotte undertook her graduate traineeship at St John's College, Cambridge, prior to being appointed Library Assistant at the English Faculty Library. In other news, congratulations are extended to Christ's outgoing graduate trainee, Eleanor Wale, who has been awarded a bursary by the Stationers' Company to allow her to undertake the MA in Library and Information Studies at UCL in 2015-16, and to the College Librarian, Amelie Roper, for the publication of her recent article, 'Poor Man's Music? The Production of Song Pamphlets and Broadsheets in Sixteenth-Century Augsburg' in Brill's Specialist Markets in the Early Modern Book World. Finally, Christ's is looking forward to welcoming Nicholas Butler as the new Graduate Trainee in September 2015. Nicholas graduated with a BA in Classics from Jesus College, Cambridge, in 2014.

It's that time of year when Graduate Trainees come and go. At St John's, Richard Sellens is replaced by Felicity French. From 1 October she will be joined by Eleanor Swire who takes up a new 9-month trainee post supported by a benefactor. Eleanor will split her time between the Old Library and the Archives. Newnham's 2015-16 trainee will be Anne O'Neill. At Classics the new trainee is Charlie Barranu, who has just completed her MPhil at the History Faculty and has also worked previously in the libraries at Corpus.


