Adding DigiKAR data to FactGrid — chances and challenges
Why FactGrid?
The FactGrid database for historical research is a project of the Gotha Research Centre, operated by the data lab of the University of Erfurt. With the support of Wikimedia Germany, the project uses a MediaWiki with the "Wikibase" extension known from Wikidata to make data from different projects publicly available and searchable. Advantages of using FactGrid for data collected in our DigiKAR geohumanities project between 2021 and 2024 are that data in FactGrid can be retrieved independently of our own project infrastructure, and that the Query Service provided by FactGrid allows users to selectively extract information with GUI-supported SPARQL queries and to export the results in formats such as CSV and JSON. This can be an advantage over the Python scripts for data exploration which we ourselves provide, as executing them requires some coding skills and a suitable coding environment.
Making data available in FactGrid does not necessarily mean, however, that they are interoperable with similar data from other projects. I will cover this limitation in the last section of this post. First of all, I would like to describe our efforts to check whether people whose biographical information we have collected in the Kurmainz work package of DigiKAR are already part of FactGrid, and how we have tried to map our Kurmainz table columns to properties defined in the FactGrid Directory of Properties. I would also like to note that this blog post is very much a work in progress and will be updated several times.
Challenges of querying data in FactGrid
The first challenge we encountered when trying to identify possible matches between early modern persons in FactGrid and in our own data was that the Wikidata properties, which I have often used in the past, are not the same as the FactGrid properties, although the two systems use the same software components. In order to build a query that would retrieve all names, birth dates, birth places, death dates and places of death for people already in FactGrid, I had to consult the Directory of Properties and check all relevant properties that are currently linked with person items.
The second challenge was that my attempt to retrieve all the person data for people born between 1500 and 1800 hit the rate limit, which is why I had to query the data in batches of 50 years each. Sven Dittmar (JGU Mainz) and I then proceeded to compare the person names with likely matches in our own data. This was difficult, too, as many first names in FactGrid are abbreviated and the number of first names given can vary both in FactGrid and in our own data.
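For readers who want to reproduce this kind of batched retrieval, the following minimal Python sketch shows the general pattern, assuming that https://database.factgrid.de/sparql is the public SPARQL endpoint. The property and item IDs in the query (P2 for "instance of", Q7 for "human", P77 for "date of birth") are assumptions that must be verified against the FactGrid Directory of Properties before running it:

import time

from SPARQLWrapper import SPARQLWrapper, JSON

# FactGrid's public SPARQL endpoint (assumed here; see the FactGrid wiki)
sparql = SPARQLWrapper("https://database.factgrid.de/sparql")
sparql.setReturnFormat(JSON)

# IDs below are assumptions to be checked against the Directory of
# Properties: P2 = instance of, Q7 = human, P77 = date of birth.
QUERY_TEMPLATE = """
SELECT ?person ?personLabel ?birthDate WHERE {{
  ?person wdt:P2 wd:Q7 ;
          wdt:P77 ?birthDate .
  FILTER(YEAR(?birthDate) >= {start} && YEAR(?birthDate) < {end})
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""

# Query in batches of 50 years to stay below the rate limit
for start in range(1500, 1800, 50):
    sparql.setQuery(QUERY_TEMPLATE.format(start=start, end=start + 50))
    results = sparql.query().convert()
    print(f"{start}-{start + 49}: {len(results['results']['bindings'])} rows")
    time.sleep(2)  # be polite to the endpoint between batches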
Comparing the export of early modern person entries to the DigiKAR person list
So what we had to do was write a Python script that allowed for some fuzzy matching. Our own DigiKAR person list is an Excel file in which we also assign each person a unique project ID. To compare entries from this Excel file with matching person names in the multiple CSV files exported from FactGrid, we used the pandas library for the manipulation of tabular data. On the Maastricht DSRI, where we run Python in a ready-made Jupyter notebook environment hosted by my university, the pandas library is pre-installed, but users can also install it using pip if the package is not found:
pip install pandas
The Excel file containing the person names we wanted to match holds the relevant information in the person_name column, but we also wanted to display the matching IDs from the pers_ID column, so these two columns had to be linked in our script. We created a dictionary (person_dict) to map our person names unambiguously to their corresponding pers_ID and to make the lookup process for successful matches faster.
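In pandas, such a lookup dictionary can be built in two lines. A minimal sketch (the file name is an assumption, and reading .xlsx files additionally requires the openpyxl package):

import pandas as pd

# Read the DigiKAR person list (file name assumed for illustration)
persons = pd.read_excel("DigiKAR_person_list.xlsx")

# Map each person name to its unique project ID for fast lookups
person_dict = dict(zip(persons["person_name"], persons["pers_ID"]))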
We also put all the CSV files exported from FactGrid into one folder so that we could easily iterate over all the files. In each CSV file, the person name is contained in the itemLabel column. The first version of our script searched for matches of complete strings, disregarding capitalization. To make the matching case-insensitive, we used the .lower() string method to convert all names to lowercase letters:
item_label = row['itemLabel'].lower()
This preliminary matching retrieved 30 possible name matches. The script appended the relevant row data from the CSV file, alongside the matching name and pers_ID from the Excel file, to a matches list and wrote all results to a new file, which we called FactGrid_person-matches.csv.
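A minimal sketch of what this first, exact-matching version might look like; the folder name is an assumption, and further columns from the FactGrid exports (birthDate, placeOfBirth etc.) can be appended to the dictionary in the same way:

import os

import pandas as pd

matches = []

# Lower-cased lookup table so that comparisons ignore capitalization
lookup = {name.lower(): (name, pers_id) for name, pers_id in person_dict.items()}

# Iterate over all FactGrid CSV exports collected in one folder (name assumed)
for filename in os.listdir("factgrid_exports"):
    if not filename.endswith(".csv"):
        continue
    df = pd.read_csv(os.path.join("factgrid_exports", filename))
    for _, row in df.iterrows():
        item_label = row['itemLabel'].lower()
        if item_label in lookup:
            person_name, pers_id = lookup[item_label]
            matches.append({
                'file': filename,
                'item': row['item'],
                'itemLabel': row['itemLabel'],
                'matching_name': row['itemLabel'],
                'person_id': pers_id,
                'person_name': person_name,
            })

# Write all matches to a new CSV file
pd.DataFrame(matches).to_csv("FactGrid_person-matches.csv", index=False)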
We then tried to find matches where name components may have been omitted. One way to identify such possible matches is to split the name string from our Excel file into individual tokens and to set a numeric threshold for how many of those tokens also need to be present in the CSV name strings from FactGrid. I first experimented with a threshold of three common tokens, which resulted in the following “else” condition that I added to the existing script:
# check if three or more tokens are the same
else:
    names = item_label.split(" ")
    for n, pers_id in person_dict.items():
        # lower-case the stored name to keep the comparison case-insensitive
        n_tokens = n.lower().split(" ")
        common_tokens = set(names).intersection(n_tokens)
        if len(common_tokens) >= 3:
            matches.append({
                'file': filename,
                'item': row['item'],
                'itemLabel': row['itemLabel'],
                'birthDate': row['birthDate'],
                'deathDate': row['deathDate'],
                'placeOfBirth': row['placeOfBirth'],
                'placeOfBirthLabel': row['placeOfBirthLabel'],
                'placeOfDeath': row['placeOfDeath'],
                'placeOfDeathLabel': row['placeOfDeathLabel'],
                'matching_name': row['itemLabel'],
                'person_id': pers_id,
                'person_name': n,
            })
            break  # stop after the first match
This “else” clause increased the results to 1178 alleged matches, but when checking the results data frame, we found that most of them were false positives, simply because some combinations of first names are very common (e.g. “Friedrich Wilhelm Karl”) or because several aristocratic surnames consist of three tokens by themselves (e.g. “von der Becke”). Setting the threshold for token matching to 4 reduced the output to 121 lines of data, which is a more realistic number for manual data checks.
If you wish to reuse this code for your own purposes, you can find it in the DigiKAR repository on GitHub.
Challenges of adding new data to FactGrid
As the number of person matches between our data and existing FactGrid projects is relatively small, we realised that we would have to create new items for the vast majority of our persons from scratch. This brought us back to taking a careful look at the person properties already registered in the FactGrid directory. Our first observation was that the research history of the existing data is clearly visible in some of the very specific ways in which properties such as “matriculation numbers” are currently used:
Matriculation number (wd:P669): “use this statement as a qualifier on P430 Grand Lodge statements to state the registry number”
Many properties in FactGrid obviously relate to people's activities in Masonic lodges because early modern German Freemasons seem to form one of the largest data sets added in the past. Property use in this context will potentially collide with some of our own use cases. While we also have individual Freemasons in our data set, most historical persons in the DigiKAR project are university members and/or clerics and other professionals in the service of the Kurmainz administration. As a consequence, our “matriculation numbers” tend to relate to academic studies. We will also need to be able to assign different types of membership in religious orders.
Under the “religion” property, FactGrid currently lists different religious movements, including “Christianity”, as subclasses, but also Christian churches and denominations without hierarchical differentiation. While some churches are linked to higher-level entities such as Eastern Orthodoxy, such links are not consistently present in the data. These inconsistencies currently make queries complicated and go against the ideal of linked data. We have also seen many (seemingly unregulated) plain text fields in data sets on FactGrid and wonder if we should provide our own narrative descriptions alongside more structured data at all.
In order to learn from other projects preparing data for FactGrid integration, we have made contact with colleagues from the mainzed digital humanities community, especially archaeologists from Hochschule Mainz. Francisca Klemmstein and Timo Homburg have pointed out to us that although projects largely implement their own modeling ideas, which makes it difficult to find datasets on related topics, there are also efforts to establish minimal standards, such as basic requirements for places.
Moreover, Dr. Olaf Simons (FactGrid / Gotha Research Center) is bringing together various projects that are working on similar topics to achieve better coordination and shared modeling approaches. One such work in progress is defining Modeled Associations in the “Mapping Koblenz” project, to which student assistants contribute.
Another challenge, which researchers should ideally tackle collaboratively, is that all geodata in FactGrid must currently be created in the EPSG:4326 WGS 84 coordinate reference system (for more information on EPSG, see the EPSG Geodetic Parameter Dataset entry in Wikipedia). Individual researchers may therefore have to put additional work into previously collected geodata to meet this requirement. Timo Homburg informed us that the relevant property is https://database.factgrid.de/wiki/Property:P1035, which is equivalent to the property geo:asWKT from the GeoSPARQL standard. One may thus assume that it is possible to define a literal in the format "<http://www.opengis.net/def/crs/EPSG/0/4326> Point(33.95 -83.38)" in FactGrid, where the URL in the literal specifies the applied coordinate reference system (CRS). However, the FactGrid query interface, which offers a map view, cannot currently handle this URI: geoinformation that is not given in the required format cannot currently be rendered by the FactGrid visualisation tool. To better understand how FactGrid queries geodata and displays them on maps, I recommend looking at some of the official sample queries that generate spatial visualisations, such as a query to show the map of (archival) documents listed on FactGrid.
The SPARQL query has the following structure and finds 4195 results in 2081 ms:
#defaultView:Map
SELECT ?item ?itemLabel ?Ort ?OrtLabel ?Geokoordinaten WHERE {
  SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". }
  ?item wdt:P2 wd:Q10671.      # items of the relevant class (documents)
  ?item wdt:P95 ?Ort.          # the place (?Ort) linked to each item
  ?Ort wdt:P48 ?Geokoordinaten. # the coordinates of that place
  ?item wdt:P97 wd:Q10677.     # an additional filter on the items
}
Instead of displaying the results in table format, FactGrid renders a zoomable map because “map” was selected as the default view (see screenshot on the left). If this immediate rendering fails because of an unknown geodata format, one workaround suggested by Timo Homburg could be to create an external (static) website, using Leaflet or a similar web mapping library, that displays such non-WGS84 information as a map (queried from FactGrid via JavaScript with SPARQL). Kai Christian Bruhn recommends the image reprojection and warping utility “gdalwarp” as an effective tool to transform geodata into the format which FactGrid can directly display in a geovisualisation.
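While gdalwarp targets (raster) image data, individual point coordinates can also be reprojected in Python with the pyproj library. A minimal sketch, with the source CRS (EPSG:31467, Gauss-Krüger zone 3) and the coordinates chosen purely for illustration:

from pyproj import Transformer

# Transformer from an example source CRS (EPSG:31467, Gauss-Krüger zone 3)
# to WGS 84 (EPSG:4326); always_xy=True keeps (x/longitude, y/latitude) order.
transformer = Transformer.from_crs("EPSG:31467", "EPSG:4326", always_xy=True)

easting, northing = 3480000.0, 5540000.0  # example Gauss-Krüger coordinates
lon, lat = transformer.transform(easting, northing)

# Format the result as a WKT-style point literal as discussed above
print(f"Point({lon:.5f} {lat:.5f})")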
As soon as we have new insights on providing DigiKAR data via FactGrid, I will update this blog post to keep you informed, ideally pointing you to properties relevant to our project.