Everyday coding I: Creating a book index with Python
Most academics are not only researchers but also teachers and administrators. Teaching and administration are tasks which often come together and pose (mathematical) problems that scripts may be able to handle for us. Acting from necessity, I have decided to start an “everyday coding” series and propose script-based solutions for some common administrative problems. The first edition presents the creation of a book index with Python.
Publishing academic books also requires the creation of indices. While some authors or volume editors have funding that pays someone else for this work, many of us must submit our own carefully crafted index to the publisher. Especially in the case of multi-author books, identifying keywords that matter across chapters and extracting all the page numbers can be very challenging. This is why I have decided to automate the process with the help of Python packages.
My workflow developed for an edited volume can be broken down into 4 steps:
1) First, I needed a package to read the publisher’s proof (a PDF document) as plain text, ideally excluding footnotes, page headers or footers, and bibliographies. I experimented with both PyPDF2 and pdftotext.
2) Then, I wrote a script (relying on Python’s NLTK package and based on a tutorial by Abder-Rahman Ali published in December 2016) that identified unique words in our proof while excluding common English-language stop words. The same script wrote the identified tokens to a CSV table, indicating the number of occurrences in a separate column so that their importance across the entire book could be better assessed. A minimal sketch of these first two steps is given after this list.
3) The third step was to manually review the keywords, delete unnecessary ones, and map the ones to be kept to more general index words. In the case of the book which I co-edited with David de Boer and Malte Griesse, different ways of describing early modern confessional conflict needed to be mapped to the overarching term “confessional wars” to keep the number of final index words manageable.
The following CSV table contains several mapping examples from our book project:
4) Once we had completed the keyword mapping, another Python script was required to go through the cleaned list of words, find their page numbers in the PDF file, identify the overarching term for each word, and write all page numbers for those top-level terms to an alphabetically sorted Excel file.
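As announced above, here is a minimal sketch of the first two steps (reading the proof and extracting keyword frequencies). It is not the exact script from my repository: the file names proof.pdf and keywords.csv are placeholders, and the NLTK “punkt” and “stopwords” data must be downloaded once beforehand.

import csv
import pdftotext
from collections import Counter
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# read the publisher's proof page by page and join it into one string
with open('proof.pdf', 'rb') as f:
    full_text = "\n".join(pdftotext.PDF(f))

# tokenise, keep purely alphabetic words, and drop common English stop words
stops = set(stopwords.words('english'))
tokens = [t.lower() for t in word_tokenize(full_text) if t.isalpha()]
counts = Counter(t for t in tokens if t not in stops)

# write each unique token and its number of occurrences to a CSV table
with open('keywords.csv', 'w', encoding='utf-8', newline='') as out:
    writer = csv.writer(out)
    for token, count in counts.most_common():
        writer.writerow([token, count])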
While the first two steps were straightforward, writing the script for creating the final Excel table (step 4) proved more complicated than I had anticipated. I spent quite some time searching coding forums for solutions.
The two major problems I needed to solve were:
1) Finding the best Python package to read my PDF file:
I started with PyPDF2 (see my script on GitHub) but found that it missed too many occurrences of my keywords, so I tested other packages and was sufficiently pleased with pdftotext.
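As a rough illustration of how such a comparison can be made (this is not the original script), the snippet below extracts the first page with both packages and spot-checks whether an example keyword survives each extraction. Recent versions of PyPDF2 expose PdfReader and extract_text(); my 2021 script on GitHub still used the older PdfFileReader interface.

import pdftotext
from PyPDF2 import PdfReader

# extract the first page with PyPDF2
reader = PdfReader('proof.pdf')
pypdf2_page = reader.pages[0].extract_text() or ''

# extract the same page with pdftotext
with open('proof.pdf', 'rb') as f:
    pdftotext_page = pdftotext.PDF(f)[0]

# spot-check whether an example keyword survives each extraction
print('PyPDF2:   ', 'confessional' in pypdf2_page.lower())
print('pdftotext:', 'confessional' in pdftotext_page.lower())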
2) Handling the keyword–page relations as Python dictionaries, replacing most of the dictionary keys while preserving their values:
When reading all the original keywords we were interested in from the PDF file, I obtained a long dictionary of lists similar to this example:
{'Person A': [125], 'Person B': [95, 97], 'Event A': [11], 'Event B': [115, 118, 132, 134], 'Place A': [116], 'Place B': [181, 182, 264, 266, 267, 285]}
The numbers in the lists represent the relevant page numbers found in the proof. My desired output, however, was a merged dictionary of lists with new keys based on our semantic mapping:
{'Person AB': [ 95, 97, 125], 'Event AB': [11, 115, 118, 132, 134], 'Place AB': [116, 181, 182, 264, 266, 267, 285]}
My first attempts resulted in either losing many values, a “'dict' object is not callable” error, or many duplicate page numbers in the final dictionaries. A recommendation on Stack Overflow was to use the chain function from itertools to flatten and sort my lists. But by then, I had already found a solution: de-duplicating and sorting each list in my final dictionary of lists only when writing the key–value pairs to CSV:
# write each dictionary entry (index term and its page list) to one row in a new .CSV file
import csv

with open('C:\\#####\\BRILL_book.csv', 'w', encoding="utf-8", newline='') as x:
    writer = csv.writer(x)
    for key, value in page_dict.items():
        writer.writerow([key, sorted(set(value))])  # sort and de-duplicate the page list
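For context, here is a minimal sketch of how a dictionary like page_dict might be built in the first place: the proof is read page by page with pdftotext, each keyword is looked up, and the page numbers are collected under the overarching index term. The term_map dictionary and the file name proof.pdf are hypothetical placeholders, and an offset may have to be added if the PDF page order differs from the printed pagination.

import pdftotext

# hypothetical mapping from reviewed keywords to overarching index terms
term_map = {'Person A': 'Person AB', 'Person B': 'Person AB',
            'Event A': 'Event AB', 'Event B': 'Event AB',
            'Place A': 'Place AB', 'Place B': 'Place AB'}

page_dict = {}
with open('proof.pdf', 'rb') as f:
    for page_number, page_text in enumerate(pdftotext.PDF(f), start=1):
        for keyword, index_term in term_map.items():
            if keyword in page_text:
                # collect the page under the overarching term; duplicates are
                # removed later with sorted(set(...)) when writing the CSV
                page_dict.setdefault(index_term, []).append(page_number)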
I have now made both scripts for index-building available in my “Digital History” GitHub repository. I hope that these scripts will help other researchers – particularly those in the humanities – create book indices at a low cost and with minimal investment of time.
