Digital Preservation of Video Games

When I was a Junior Fellow for the Library of Congress in 2015, one of the best days of the summer was getting to tour the Packard Campus for Audio-Visual Conservation in Culpeper, Virginia. At one point during our visit, our guide showed us a small stack of games – they looked like PlayStation/Xbox games – and described the Library’s preservation activities for videogames. I can’t say for sure, but I doubt the Library of Congress (or any other major institution) thought about collecting and preserving games back in, say, the 1980s. It took archivists a while to realize that this new kind of media would be culturally valuable, and by then, obsolescence had already set in. McDonough et al. (2010) emphasize the importance of early intervention from preservationists, before a game’s software and/or hardware becomes obsolete (p. 5). I wonder what “new media” the archival community is neglecting as we speak? I bet we could be doing more to preserve memes. Just saying!

I am wildly impressed (and a little intimidated) by the digital preservation of videogames because they are complex audio-visual artifacts with specific hosting requirements. McDonough et al. (2010), Dawson (2017), and Sköld (2018) each discuss the unique challenges of providing access to games.

Here are some of the many obstacles:

  • Obsolescence: The software and hardware supporting videogames quickly become obsolete (McDonough et al., p. 5). Have a look at Wikipedia’s list of home video game consoles. Acquiring consoles can be expensive, and it takes a lot of expert knowledge to maintain and repair them. This point reminds me of the video we watched for class, “Preserving Digital Art: How Will It Survive?”, and its description of the labor involved in constantly migrating and updating software to make sure digital files don’t rot.
  • Preserving interactivity: Because modern games are released on a variety of platforms and in multiple formats, archives must be able to support many different environments. Moreover, games are increasingly released online with no physical component (Lee et al.). The lazy approach would be to archive videos of streamed games, but then the game’s interactivity wouldn’t be preserved (Dawson). Games are meant to be played!
  • Copyright: The Digital Millennium Copyright Act prohibits the copying of videogames, so archivists must approach companies to secure rights to make preservation copies (McDonough et al., p. 6). I would venture to guess that companies that make money from selling videogames aren’t in a hurry to make them universally accessible.
  • Metadata: A brave new world of metadata. The Seattle Interactive Media Museum and the University of Washington Information School’s GAMER (GAme MEtadata Research) Group created a metadata schema for video games that contains more than forty-six elements (Lee et al.). Some of the more unique ones include: franchise/universe, special hardware, controls, and number of players. Some of this information can be hard to find for digital games, especially indie games by smaller creators.
  • Preserving gamer culture: Sköld (2018) discusses the concept of an “expanded notion” of a video game which includes “its game culture, experiences, play, and community activity” (p. 134). These aspects of games are especially relevant in the competitive gaming scene, in which games like Dota, Rocket League, and League of Legends have their own esports organizations, attract huge fanbases, and generate a multitude of recorded events relevant to the game.
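To make the metadata challenge above more concrete, here is a rough sketch in Python of what a partial record using a few of the elements mentioned (franchise/universe, special hardware, controls, number of players) might look like. The field names and values are my own illustration, not the actual GAMER schema, and the completeness check mirrors the problem of missing information for smaller indie titles:

```python
# A hypothetical partial metadata record illustrating a few of the
# elements discussed above. Field names and values are invented for
# illustration -- this is not the official GAMER schema.
game_record = {
    "title": "Example Quest",                   # invented title
    "franchise_universe": "Example Saga",       # franchise/universe
    "special_hardware": ["motion controller"],  # special hardware
    "controls": ["gamepad", "touch"],           # controls
    "number_of_players": {"min": 1, "max": 4},  # number of players
    "platform": ["PC", "PlayStation 4"],
}

def missing_fields(record, required):
    """Return required elements absent or empty in a record -- the kind
    of gap that is common for indie games by smaller creators."""
    return [f for f in required if not record.get(f)]

required = ["title", "franchise_universe", "release_date"]
print(missing_fields(game_record, required))  # ['release_date']
```

A cataloger (or a script run over a whole collection) could use a check like this to flag records that need more research before the game is ingested.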

Videogames, whether vintage or modern, are meaningful to our society and culture and of great technological value. It is important to fight the obsolescence of videogames because preserved games serve as important educational tools for game developers and are important pieces of cultural memory (Sköld, p. 137). McDonough et al. outline strategies for the long-term digital preservation of games. I think the most impactful steps archivists can take are to seek out collaboration with the gaming community through crowdsourcing initiatives and to develop relationships with gaming companies to overcome legal obstacles to preservation (McDonough et al., p. 7). As an archivist, I would love to make it possible for someone decades from now to play a game they haven’t played in years and feel like a kid again.


Dawson, George (30 November 2017). “Digital Preservation: Video Games.” University of North Texas Libraries: Digital Humanities.

Google (30 May 2017). “Preserving Digital Art: How Will It Survive?” [Video].

Lee, Jin Ha, Rachel Ivy Clarke, and Andrew Perti (15 January 2014). “Metadata for Digitally Distributed Video Games at the Seattle Interactive Media Museum.” MW2014: Museums and the Web 2014.

McDonough, Jerome P. et al. (31 August 2010). “Preserving Virtual Worlds Final Report.” Illinois Digital Environment for Access to Learning and Scholarship.

Owens, Trevor (26 September 2012). “Yes, The Library of Congress Has Video Games: An Interview with David Gibson.” Library of Congress: The Signal.

Sköld, Olle (January 2018). “Understanding the ‘Expanded Notion’ of Videogames as Archival Objects: A Review of Priorities, Methods, and Conceptions.” Journal of the Association for Information Science and Technology, vol. 69, issue 1, pp. 134-145.


Participatory Design and Digital Libraries

In the process of UX design, usability testing does not typically occur until the later stages of development when a prototype is complete. Participatory design, aka “cooperative design,” is a method that involves users from a project’s inception. Elizarova et al. define participatory design as “an approach to design that invites all stakeholders (e.g. customers, employees, partners, citizens, consumers) into the design process as a means of better understanding, meeting, and sometimes preempting their needs.”  

Participatory design helps developers define a target audience before starting a project, since that target audience will be involved from the beginning. Early involvement from the community “informs how designers focus their efforts, and the ideas users propose serve as actionable inspiration for the solutions created” (Elizarova et al.). Participatory design methods help us design with our community rather than for our community, and that collaboration gives users a sense of ownership over the final product.

What does participatory design look like? Designers may use activities that elicit ideas from the users, like asking them to draw a mockup of what they would want the homepage to look like, or to sketch out a rough diagram of the information architecture. We can ask what users “would love to use in the ‘perfect world’ scenario while also asking them to explain why they built their perfect software or product in that particular way” (Anic). Wood and Kompare describe what participatory design looked like at a large academic library during a website overhaul. Based on early contributions from the academic community, they addressed challenges early, within their budget and timeline. The successes of participatory design were a “shared vision and vocabulary,” “better communication with stakeholders,” and “increased user advocacy.”

I think that participatory design is well suited to GLAMs because their communities are central to their mission, and because collaboration is already an important theme and aspect of the work they do. Libraries and archives often serve a specific community, whether it be a local town, a school or university, a specific scholastic interest, ideological groups, or marginalized populations. Community archives are becoming more and more popular. Participatory design fits in with the grassroots spirit exemplified by The Baltimore Uprising 2015 Archive Project, which sources its content from the target community. It also reminds me of conversations we had in our Research Methods course about the importance of working closely with communities to both conduct research and implement findings. There is a shift in cultural heritage institutions that seeks to involve the user and make them feel like the driving factor behind our content and services.

Two challenges concerning participatory design are time and money. Ah, time and money, you’re always the problem. Early and ongoing participation from users demands more of their time and more of our time. This is of little concern to me, since tackling challenges early will save a lot more time later on. If you create a core part of a website that you later find out users dislike, changing that fundamental part will force you to undo a lot of work and make many cascading changes to the design. Also, a confusing design will end up wasting the time of librarians when they have to explain the site to users. In terms of money, users may make requests which are outside of the project’s scope or budget. We can negotiate with users to find solutions that fit the scope and still work for them.

Another possible challenge is that users may not know how to articulate their ideas because they don’t have the UX vocabulary or design knowledge to describe what they envision. Designers can’t expect users to immediately discuss “information architecture” or human-computer interaction, but we can easily have conversations in layman’s terms about what looks good or bad and what feels intuitive or clunky from their perspective. Designers act as translators for users, making their ideas come to life.

Anic, Ines (4 November 2015). “Participatory Design: What is it, and what makes it so great?” UX Passion.

Elizarova, Olga, Jen Briselli, and Kimberly Dowd (14 December 2017). “Participatory Design: what it is, what it isn’t and how it actually works.” UX Magazine, no. 1695.

Wood, Tara M. and Cate Kompare (30 January 2017). “Participatory Design Methods for Collaboration and Communication.” Code4lib Journal issue 35.

Fast and slow digitization

You rarely hear of a library or archive without a backlog of items that need to be cataloged or digitized. With limited funding, staff, and time, and with acquisitions increasing, libraries have had to either ramp up the speed and efficiency of their digitization efforts or be much choosier about what gets digitized. This dilemma is the focus of literature like Erway and Schaffner’s (2007) “Shifting Gears: Gearing Up to Get into the Flow,” which argues that users are better served by a larger quantity of digitized collections than by higher-quality digitization of fewer items (p. 2). They acknowledge that this principle may not apply in every situation (p. 3). The speed and level of detail in digitization can and should vary depending on the value of the items, the volume of documents, and most importantly, the way researchers will work with the digitized item.

The level of detail depends on the kind of access that will be demanded. For items in high demand, librarians can create digital editions that are like facsimiles, accompanied by expert knowledge, interpretation, and tools like transcription. A slower digitization process is important for items like valuable and in-demand manuscripts that are difficult to read without transcription.

Faithfully replicating manuscripts requires some text encoding skills – specifically, XML markup encoded according to TEI guidelines. The “NINCH Guide to Good Practice in the Digital Representation and Management of Cultural Heritage Materials” (2002-2003) describes how basic TEI encoding “can be applied nearly automatically using scripts, but detailed encoding requires additional staff, training, and time” (pp. 86-87).
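To show what that difference looks like in practice, here is a minimal sketch in Python contrasting a basic, script-friendly TEI-style fragment with a detailed one that records an author’s deletion and insertion. The fragments are simplified illustrations of my own, not conformant TEI documents, and the passage itself is invented:

```python
import xml.etree.ElementTree as ET

# Basic, script-friendly encoding: just line breaks and paragraph
# structure. (A simplified TEI-style fragment, not a full document.)
basic = "<p><lb/>My dearest Cassandra,<lb/>I write in haste...</p>"

# Detailed encoding of the same invented passage: a deletion and an
# addition, the kind of markup used in diplomatic transcriptions.
# This level of detail typically has to be added by trained encoders.
detailed = (
    "<p><lb/>My <del rend='strikethrough'>dear</del>"
    "<add place='above'>dearest</add> Cassandra,"
    "<lb/>I write in haste...</p>"
)

for label, xml in [("basic", basic), ("detailed", detailed)]:
    elem = ET.fromstring(xml)
    # Count editorial markup elements -- zero in the basic version.
    edits = elem.findall(".//del") + elem.findall(".//add")
    print(label, "editorial elements:", len(edits))
```

The basic version could plausibly be generated by a script from page images and OCR, while every `<del>` and `<add>` in the detailed version represents a human judgment about the manuscript – which is exactly why detailed encoding costs so much staff time.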

I want to walk through some examples of TEI projects to show how they differ from basic digitization.

#1 : Electronic Beowulf


This online edition of Beowulf provides full translations and helps with definitions and grammar. The website brings together multiple restorations and editions of the text, allowing scholars to make comparisons between all of them. It is the work of editor Kevin Kiernan, a Beowulf scholar, and software engineer Emil Iacob.

#2 : Emily Dickinson Archive


Harvard and Amherst, which both own parts of Dickinson’s archive, collaborated to make this website. Transcriptions on this site show how Dickinson’s editors made changes to her work, like standardizing the arrangement of words on the pages. This site lets users contribute to transcriptions by typing or uploading TEI-encoded documents. I love seeing how libraries use crowdsourcing to engage users – for example, check out the British Library’s crowdsourcing projects.

#3 : Jane Austen’s Fiction Manuscripts


This project is the work of English scholar Kathryn Sutherland and digital humanities scholar Elena Pierazzo. The manuscripts come from the Bodleian Library in Oxford, the British Library in London, the Pierpont Morgan Library in New York, King’s College in Cambridge, and private ownership. The site’s page images give a good example of diplomatic transcription; you can see how the transcriptions reflect Austen’s edits. Sutherland and Pierazzo actually wrote a whole article about their experience making this project, entitled “The Author’s Hand: From Page to Screen.”

These projects almost always seem to be collaborative, dividing up the work involved and frequently bringing together related items from around the world. Some projects are the work of a library or archive, but many are completed by scholars, with libraries just providing copies of the items. To me, that raises the question: should libraries and archives take on making elaborate digital editions of certain manuscripts, or should that work be left to scholars who may know more about the texts? There is a lot of opportunity for collaboration, and librarians have a lot of technical support to offer.

Speed and efficiency are important for ensuring access to more of an institution’s collections. For certain items that are popular and in high demand, this kind of attention to detail can produce wonderful research tools. I am also intrigued by the potential of crowdsourcing. I wonder if libraries could let users generate transcriptions for a wider range of items. For A/V materials, users could generate captions. The ultimate question is, should libraries and archives involve themselves in more projects like these, or are they too time-consuming? I think, realistically, that most institutions don’t have the resources to commit to these big projects, but they could reach out to and collaborate with scholars who undertake them. My main point is that libraries can’t always take a cookie-cutter approach to digitization. Some items need just basic detail, while others deserve more description to ensure access.


Thanks for reading!



Erway, R. and Schaffner, J. (2007). “Shifting Gears: Gearing Up to Get into the Flow.” OCLC Programs and Research.

The Humanities Advanced Technology and Information Institute, University of Glasgow, and the National Initiative for a Networked Cultural Heritage (2002-2003). “The NINCH Guide to Good Practice in the Digital Representation and Management of Cultural Heritage Materials.” National Initiative for a Networked Cultural Heritage.

Sutherland, Kathryn and Pierazzo, Elena (2012). “The Author’s Hand: From Page to Screen.” In Marilyn Deegan and Willard McCarty (eds.), Collaborative Research in the Digital Humanities. Farnham, Surrey: Ashgate Publishing Group, pp. 191-212.