
Disambiguating people and places in dirty historical data

One challenge that everyone working on early modern spatial data and/or the mobility of historical agents has to face is the disambiguation of person and place names in dirty data. In many cases, data are dirty because they come from very different sources, preserved by different institutions and archived following divergent conventions. If we cannot access the original sources at all and have to rely on later archival transcripts or summaries, the situation becomes even more complicated.

Both in my islands project and in the DigiKAR geohumanities project, I am regularly confronted with these problems and would like to share my ongoing quest for solutions in this blog post. This means that my post will be updated whenever I come across new findings, and I hope this will lead to a lively exchange with fellow researchers.

Task 1: extracting data from unstructured or semi-structured sources

The first issue I am going to cover is how colleagues in the DigiKAR project and I are trying to transform semi-structured spatial and biographic data from scans of early modern sources and 20th-century typewritten archival overviews into an EXCEL table for meaningful data analysis.

a) Transcribing and analysing the 18th-century state calendars

Early modern printed sources that at least follow some pattern are the so-called “state calendars” (“Staatskalender”) published annually in the Electorate of Mainz in the 18th century. These sources contain short biographic entries on clerics and other officials working for the princely administration in the given year, including their titles. As these sources are printed in Fraktur, testing and refining OCR models for this particular text type is vital for making the data machine-readable in the first place.

Example of transcription issues caused by multi-column layouts in Transkribus Lite

In late 2021 and early 2022, we added sample volumes to the browser-based OCR tool Transkribus Lite and experimented with public transcription models. The first challenge was to enhance the quality of the ingested images, as the original scans were based on microfiches or microfilms and were thus of low resolution. I wrote several Python scripts using the Python Imaging Library (PIL) to increase the contrast and adjust the image noise. While a contrast increase by factor 4 proved too drastic, a contrast increase by factor 2 combined with a noise reduction using a median blur improved our OCR results considerably.

Cover of the 1740 state calendar in the original (right) and with contrast enhanced by factor 4 (left)
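For anyone who wants to reproduce this kind of pre-processing, the following is a minimal sketch using Pillow, the maintained fork of the Python Imaging Library; the file names are placeholders, and the exact parameters may need tuning for other scans:

from PIL import Image, ImageEnhance, ImageFilter

# Minimal sketch of the pre-processing described above; file names are placeholders.
img = Image.open("staatskalender_scan.jpg").convert("L")   # greyscale version of the scan

# contrast enhancement by factor 2 (factor 4 proved too drastic in our tests)
img = ImageEnhance.Contrast(img).enhance(2.0)

# noise reduction with a median blur
img = img.filter(ImageFilter.MedianFilter(size=3))

img.save("staatskalender_enhanced.png")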

One OCR model curated by the Transkribus team proved very stable in transcribing text from standard pages with just one column of text and horizontal text alignment. Many pages in the state calendars, however, are printed in multi-column formats, which required a lot of manual adjustment of the layout recognition. Moreover, the information given in the state calendars varies from volume to volume, which is why content sections are hard to define. The function of the people listed, for instance, usually appears as a headline underneath which names are given without further specification. In order to read this information via script, all function sections would need to be carefully marked up in XML.

Title page of the 1740 state calendar with median blur noise reduction and factor 2 contrast enhancement (best solution for automated transcription in Transkribus Lite)

Also, there is no recurring punctuation that could serve as a delimiter between names. The most common textual feature that separates names is, in fact, the address “Titl. Herr” (often misread as „Tirl. Herr“). We still need to discuss whether using this as an item delimiter in a Python script makes sense. At the moment, we are simply using the OCRed state calendars as a searchable reference to back up biographic information found elsewhere.
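Should we decide to use the address as a delimiter after all, the splitting itself would be simple; the following sketch only illustrates the idea, assumes a hypothetical input file, and also catches the frequent misreading:

import re

# Hypothetical sketch: split an OCRed state calendar page on "Titl. Herr"
# (also catching the misreading "Tirl. Herr"); the input file name is invented.
with open("staatskalender_1740_ocr.txt", encoding="utf-8") as f:
    text = f.read()

entries = [e.strip() for e in re.split(r"Ti[tr]l\.\s*Herr", text) if e.strip()]
print(len(entries), "candidate entries")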

b) Working with 20th-century summaries of university registers

The archival transcripts of the Mainz university registers (“Universitätsmatrikeln”), typewritten in the 20th century, are easier to read with OCR technology, and misinterpretations of German special characters (“Umlaute”) can be cleaned automatically. This is why we have decided to work on them first. After converting the PDF files provided by the archive to .txt format, we performed some basic pre-processing to correct OCR errors and to introduce the #NAME and #SOURCE delimiters, which separate the person name and the source citations (at the end of each entry) from the biographic information given. The biographic information is mostly structured with semicolons between events, which we can thus read as individual items of a list with Python. Moreover, the transcripts of the university registers contain hints that people might be identical with others, using „ein …“, „—ein“, „—Ein“, „. Ein“ or „. ein“ to denote this additional information. When reading the registers with Python, an #IDENTITY separator is thus needed as well.

All scripts I have used to split .txt files by several delimiters (including sequences of uppercase letters) have been published in the DigiKAR GitHub repository.
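To illustrate the splitting logic (this is not one of the published scripts), a single pre-processed register entry could be broken down roughly as follows; the sample entry and the exact placement of the delimiters are assumptions:

# Illustration only, not one of the published DigiKAR scripts;
# the sample entry and the placement of the delimiters are assumptions.
entry = (
    "Philipp ACKER (Agricola) #NAME Mainz Magister artium; 1551-1555 Dompfarrer; "
    "+ 15.(16.)3.1572, 45 Jahre alt #SOURCE Praetorius, Professoren, S.93; NDB 1, S.103"
)

name, rest = entry.split("#NAME", 1)
bio, source = rest.split("#SOURCE", 1)

# the biographic part is mostly structured with semicolons between events
events = [e.strip() for e in bio.split(";") if e.strip()]

factoids = [[1, i, name.strip().split(" ", 1), event, source.strip()]
            for i, event in enumerate(events)]
for row in factoids:
    print(row)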

In order to be able to refine and analyse the event information further, one script combines person names, individual events and relevant source information as follows:

PERSON NO. 1 : Philipp ACKER (Agricola)
Number of recorded events: 9
[1, 0, ['Philipp', ' ACKER (Agricola)'], ' Mainz Magister artium', ' Praetorius, Professoren, S.93; Verzeichnis theol. Fak., S.l; Heim, S.17; Dürr, S.55; NDB 1, S.103; Severus PM, S.8; Knodt, Commentatio II, S.31, 35, 69; Simon Bagen, Kurmainzer Staatsmann des 16. Jahrhunderts, in: Der Katholik 78,1, 1898, S.166 Anm. 1; A.Ph. Brück, Die Mainzer Dompfarrer des 16. Jahrhunderts, in: Archiv für mittelrheinische Kirchengeschichte 12, 1960, S.152 f.; Arens, Inschr., Nr. 1286 und 1287 \n']
[1, 1, ['Philipp', ' ACKER (Agricola)'], ' 1551-1555 Dompfarrer', ' Praetorius, Professoren, S.93; Verzeichnis theol. Fak., S.l; Heim, S.17; Dürr, S.55; NDB 1, S.103; Severus PM, S.8; Knodt, Commentatio II, S.31, 35, 69; Simon Bagen, Kurmainzer Staatsmann des 16. Jahrhunderts, in: Der Katholik 78,1, 1898, S.166 Anm. 1; A.Ph. Brück, Die Mainzer Dompfarrer des 16. Jahrhunderts, in: Archiv für mittelrheinische Kirchengeschichte 12, 1960, S.152 f.; Arens, Inschr., Nr. 1286 und 1287 \n']
[...]
[1, 7, ['Philipp', ' ACKER (Agricola)'], ' 1559 Kanoniker von Liebfrauen, 1560 Dekan von St. Peter', ' Praetorius, Professoren, S.93; Verzeichnis theol. Fak., S.l; Heim, S.17; Dürr, S.55; NDB 1, S.103; Severus PM, S.8; Knodt, Commentatio II, S.31, 35, 69; Simon Bagen, Kurmainzer Staatsmann des 16. Jahrhunderts, in: Der Katholik 78,1, 1898, S.166 Anm. 1; A.Ph. Brück, Die Mainzer Dompfarrer des 16. Jahrhunderts, in: Archiv für mittelrheinische Kirchengeschichte 12, 1960, S.152 f.; Arens, Inschr., Nr. 1286 und 1287 \n']
[1, 8, ['Philipp', ' ACKER (Agricola)'], ' + 15.(16.)3.1572, 45 Jahre alt ', ' Praetorius, Professoren, S.93; Verzeichnis theol. Fak., S.l; Heim, S.17; Dürr, S.55; NDB 1, S.103; Severus PM, S.8; Knodt, Commentatio II, S.31, 35, 69; Simon Bagen, Kurmainzer Staatsmann des 16. Jahrhunderts, in: Der Katholik 78,1, 1898, S.166 Anm. 1; A.Ph. Brück, Die Mainzer Dompfarrer des 16. Jahrhunderts, in: Archiv für mittelrheinische Kirchengeschichte 12, 1960, S.152 f.; Arens, Inschr., Nr. 1286 und 1287 \n']

The ultimate aim is to differentiate the information even more and to write each event to a new row in an EXCEL table that also distinguishes dates / time frames in which events took place. This workflow is inspired by the “factoid prosopography” popularised by digital humanities researchers at King’s College London and is, inter alia, explained in my German-language contribution to Historikertag 2021.

This differentiation process will include the following (automated) steps:

  • Recognise dates and write “start” / “end” dates to separate EXCEL columns (see the sketch after this list). Vaguer “before” and “after” datings will need to be fixed manually.
  • Extract information on related persons such as fathers (usually written as V: NAME in the source) and write them to the rel_pers column of the EXCEL file.
  • Store rel_pers information (e.g. on someone’s role as a parent or teacher) also as a proper event in EXCEL, highlighting the related persons’ own agency. [This analysis is best performed on a highly differentiated EXCEL table rather than on the original source.]
  • Use an ontology of the most common biographic events (provided by Florian Stabel, JGU Mainz) to identify event types and write those to a separate EXCEL column.
  • Automatically duplicate entries if more than one event is mentioned in the source excerpt and write a unique EXCEL row for each.
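As mentioned in the first bullet point, the date recognition could start from a simple pattern match before writing “start” and “end” values to separate columns; in this sketch, the column names and sample events are assumptions, and vaguer datings would still be flagged for manual review:

import re
import pandas as pd

# Minimal sketch of the date-recognition step; column names and sample events
# are assumptions, and vague datings still require manual review.
def extract_dates(event: str):
    years = re.findall(r"\b1[3-8]\d{2}\b", event)   # plausible early modern years
    if not years:
        return None, None
    return years[0], years[-1]

events = [
    ("Philipp ACKER (Agricola)", "1551-1555 Dompfarrer"),
    ("Philipp ACKER (Agricola)", "1559 Kanoniker von Liebfrauen"),
]

rows = [{"person": p, "event": e, "date_start": start, "date_end": end}
        for p, e in events for start, end in [extract_dates(e)]]

pd.DataFrame(rows).to_excel("factoid_events.xlsx", index=False)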

Data structured in this way can be used to reconstruct individual biographies (especially professional careers) as well as political and institutional developments in the early modern Electorate of Mainz. In the DigiKAR project, we are interested in how professional mobility shaped spaces and which centres of (political) interaction mattered most in different phases.

Task 2: comparing data and disambiguating names

The second important goal will be making various EXCEL tables communicate with each other to compare and evaluate the collected data. For this purpose, I will use the pandas package in Python and elaborate on the test scripts for analysing multiple EXCEL files which I have already shared in the DigiKAR GitHub repository.
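In its simplest form, making several EXCEL tables “communicate” means loading them into pandas DataFrames and joining them on a shared key; a minimal sketch with invented file and column names:

from pathlib import Path
import pandas as pd

# Minimal sketch; file and column names are invented.
frames = [pd.read_excel(path) for path in Path("excel_exports").glob("*.xlsx")]
events = pd.concat(frames, ignore_index=True)

people = pd.read_excel("people_master_list.xlsx")        # contains person_id and name
merged = events.merge(people, on="name", how="left")     # exact name matches only
print(merged[merged["person_id"].isna()].head())         # rows that still need disambiguation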

As part of this comparison, we will need a more advanced script that can suggest the most likely person and place name matches, possibly based on the cosine similarity of n-grams (short sequences of characters or words). The detection of the most similar words is well supported in Gensim (Word2Vec).
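For comparatively short name strings, one accessible variant of this idea is cosine similarity over character n-grams; the sketch below uses scikit-learn’s TfidfVectorizer rather than Gensim, and the names are invented:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Sketch using character n-grams instead of word embeddings; names are invented.
known = ["Philipp Acker", "Philippus Agricola", "Johann Weber"]
candidates = ["Philipp Ackher", "Joann Weeber"]

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
matrix = vectorizer.fit_transform(known + candidates)

similarities = cosine_similarity(matrix[len(known):], matrix[:len(known)])
for candidate, row in zip(candidates, similarities):
    best = row.argmax()
    print(candidate, "->", known[best], round(float(row[best]), 2))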

However, our challenge will be that both the number of middle names given and the spelling of all names can vary. In many cases, names have also been abbreviated. Aristocrats, clerics, and other officials are also known by personal names and titles alike. Continuously updated ontology lists might help us track all variants, but it will be difficult to decide what the threshold for suggesting a match ought to be.

Some form of “fuzzy matching” (as proposed by Chris van den Berg) will be necessary (cf. the blog posts by Mala Deep: “Surprisingly Effective Way To Name Matching In Python“, 30 June 2020, and Chris Moffitt: “Python Tools for Record Linking and Fuzzy Matching“, 18 February 2020). Two commonly used matching algorithms are the Levenshtein and Jaro-Winkler distances. In Python, the Record Linkage Toolkit could help us apply those, but early modern data often require extra steps.
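A minimal sketch of how the Record Linkage Toolkit (the recordlinkage package) could be applied to two small name lists; the toy data and the threshold are assumptions:

import pandas as pd
import recordlinkage

# Toy data; real DigiKAR tables are much larger and messier.
df_a = pd.DataFrame({"name": ["Philipp Acker", "Johann Weber"]})
df_b = pd.DataFrame({"name": ["Philipp Ackher", "Philippus Agricola", "Joann Weeber"]})

indexer = recordlinkage.Index()
indexer.full()                                    # compare every pair (fine for small lists)
pairs = indexer.index(df_a, df_b)

compare = recordlinkage.Compare()
compare.string("name", "name", method="jarowinkler", threshold=0.85, label="name_sim")
features = compare.compute(pairs, df_a, df_b)

print(features[features["name_sim"] == 1.0])      # candidate matches above the threshold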

a) The challenge of disambiguating people

One problem we anticipated due to the frequency of certain Christian names in our data is that fuzzy matching without some form of hierarchisation of strings (based on a differentiation of surnames, aristocratic titles and first names) would result in too many false positives. Testing a Levenshtein distance ratio of 80 with the fuzzywuzzy package in Python, we indeed got a lot of false positives of relatives who not only had surnames and/or titles in common but also shared at least one Christian name. Passing certain Christian names on through the generations was an established practice in early modern families, which makes it necessary to allow some human intervention when disambiguating names. Therefore, I intend to apply a 3-step matching process:

1) A Python script will read (normalised) name strings from our master people list, in which the persons we have identified so far are manually assigned a unique person ID.

2) The script then tries to match each name string from the people list to name variants occurring in our factoid list of place-time-agent events.

3) The supposedly identical names are then shown to the user in a clickable list (preferably displayed through a PySimpleGUI interface) from which actual matches can be selected. The script then adds the unique person ID to all relevant place-time-agent events. In the case of hitherto neglected persons, their names can be added to a separate spreadsheet for further checks. They will eventually be added to the people list, and the script can be run again.
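A stripped-down sketch of steps 1 and 2, using fuzzywuzzy to propose candidates that a human then confirms (here via a console prompt rather than a PySimpleGUI window); the file and column names are assumptions:

import pandas as pd
from fuzzywuzzy import fuzz, process

# Stripped-down sketch of steps 1 and 2; file and column names are assumptions,
# and the confirmation step uses a console prompt instead of PySimpleGUI.
people = pd.read_excel("people_master_list.xlsx")    # columns: person_id, name
factoids = pd.read_excel("factoid_events.xlsx")      # column: name (as found in the sources)

for _, person in people.iterrows():
    candidate = process.extractOne(person["name"], factoids["name"].tolist(),
                                   scorer=fuzz.token_sort_ratio, score_cutoff=80)
    if candidate is None:
        continue
    match, score = candidate
    answer = input(f"Is '{match}' the same person as '{person['name']}' (score {score})? [y/n] ")
    if answer.lower() == "y":
        factoids.loc[factoids["name"] == match, "person_id"] = person["person_id"]

factoids.to_excel("factoid_events_with_ids.xlsx", index=False)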

Moving forward, the script will need to be improved to use all captured name variants as the basis for assigning person IDs to newly added place-time-agent events, too. It will most likely be necessary to create a separate script. Ideally, however, that script can also be triggered from the same graphical user interface.

b) Disambiguating place names

The challenges of disambiguating place names are similar as far as the number of early modern spelling variants is concerned. In addition, however, we also need to take into account that modern source transcripts such as the university registers from Mainz often use prefixes and suffixes to further define the places’ location or political affiliation. Not all of these prefixes and suffixes were already common in the early modern period. Prefixes and suffixes for place names found in the university registers are:

  • St.: abbreviation of “Sankt” (“saint”), often found in places of pilgrimage as well as in places with prominent monasteries or churches
  • Bad: meaning “bath”, used for bathing resorts
  • bei: meaning “near” (indicating location)
  • (b. …): abbreviation of “bei”, meaning “near” (indicating location)
  • in: indicating relation to a larger territory
  • (in …): indicating relation to a larger territory

A script using these pre- and suffixes to read more than just one token per place name will be needed.
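A first idea for such a script could be a regular expression that treats the known prefixes and locators as part of a multi-token place name; the pattern and the sample strings below are only assumptions:

import re

# Hypothetical sketch: keep known prefixes/locators attached to the place name
# instead of splitting on whitespace; the sample strings are invented.
PREFIXES = r"(?:St\.|Bad)"
LOCATORS = r"(?:bei|b\.|in)"

PLACE_PATTERN = re.compile(
    rf"(?:{PREFIXES}\s+)?[A-ZÄÖÜ][\wäöüß-]+(?:\s+\(?{LOCATORS}\)?\s+[A-ZÄÖÜ][\wäöüß-]+\)?)?"
)

for raw in ["St. Goar", "Bad Kreuznach", "Höchst bei Frankfurt", "Lohr (in Franken)"]:
    match = PLACE_PATTERN.search(raw)
    print(raw, "->", match.group(0) if match else None)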

We also intend to link place names we capture from our data to existing databases of (historical) place names such as Geonames or the GND authority data. The data dump of German-language place names in Geonames has 190,759 entries including duplicates, which we can possibly compare to our own list of places in early modern Mainz. Our Mainz places list is based on the place names which our intern Nele Zunker is currently extracting from the 1646 Topographia of the Electorate of Mainz. We are using the World Historical Gazetteer spreadsheet template to collect these place names and categorise them. Places found in the various sources analysed can be added later.
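As a starting point for that comparison, the Geonames dump for Germany (a tab-separated file without a header row) can be loaded into pandas and joined against our own place list on exact names before any fuzzy matching is applied; the column layout follows the Geonames readme, and the file and column names on our side are assumptions:

import pandas as pd

# Sketch: load the Geonames dump for Germany and join it against our own place list.
# The column layout follows the Geonames readme; "DE.txt", "mainz_places.xlsx" and
# the column "place_name" are assumptions.
geonames_columns = [
    "geonameid", "name", "asciiname", "alternatenames", "latitude", "longitude",
    "feature_class", "feature_code", "country_code", "cc2", "admin1", "admin2",
    "admin3", "admin4", "population", "elevation", "dem", "timezone", "modification_date",
]
geonames = pd.read_csv("DE.txt", sep="\t", names=geonames_columns, low_memory=False)

places = pd.read_excel("mainz_places.xlsx")
hits = places.merge(geonames[["geonameid", "name", "latitude", "longitude"]],
                    left_on="place_name", right_on="name", how="left")
print(hits[["place_name", "geonameid"]].head())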

All scripts will be published in due course.

Watch out for updates and get in touch if you would like to brainstorm about similar problems.



Monika Barget

Monika Barget is an assistant professor in History at Maastricht University and co-coordinator of the DigiKAR geohumanities project at IEG Mainz. Her research interests include spatial history, digital mapping, political history and media.
