From print to LOD, part II: Extracting structure

As explained in the previous post, we want to transform the Repertorio Terminologico per la Schedatura delle Sculture dell’Arte Gandharica from a printed book to a SKOS vocabulary that we can use in a linked data context. In this post, we’ll describe our efforts and experience with the first steps: OCR’ing the Repertorio and extracting its content in structured form.

A typical page of the Repertorio has a quasi-tabular structure: there are three columns (figures, Italian description, English description), and each row describes one major concept together with its facets and parts. These can be distinguished by their labels: main concepts use numerical labels (1, 2, …), facets use second-level enumerations (1.1, 1.2, …), and parts use alphabetical labels (a, b, …). Our idea was to preserve this outer structure and then use regular expressions to parse the structure of each block.
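
To make this concrete, here is a minimal Python sketch of the three label patterns. The exact spacing and punctuation vary slightly across the book, so these expressions are illustrative rather than definitive:

import re

# Illustrative patterns for the three label types; actual spacing and
# punctuation vary slightly across the Repertorio.
MAIN = re.compile(r"^(\d+)\s*-\s*(.+)$")     # main concept, e.g. "1 - Statua [...]"
FACET = re.compile(r"^(\d+\.\d+):\s*(.+)$")  # facet, e.g. "1.1: [faccia ant. ...]"
PART = re.compile(r"^([a-z]):\s*(.+)$")      # part, e.g. "a: base"

def classify(line):
    """Return (kind, label, text) for a single line,
    or None if it does not start a concept."""
    for kind, pattern in (("main", MAIN), ("facet", FACET), ("part", PART)):
        match = pattern.match(line.strip())
        if match:
            return kind, match.group(1), match.group(2)
    return None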

The structure of the Repertorio (scan of a typical page)

The OCR itself is not an issue for this kind of modern printed material. The main challenge was finding a suitable approach for maintaining as much of the page structure as possible so that I would not have to reconstruct it from the raw OCR results. Initial experiments with Tesseract showed that it sometimes maintains at least the column structure, but sometimes it doesn’t. And the illustrations are a real problem, because not only their labels but also various parts of the line drawings themselves are recognized as characters that spoil the output.

1 - Statua [con sola 1 - Statue [front-view
veduta frontale] statue]
1.1: [faccia ant. (=fronte)] | 1.1: [front face]
1.2: [faccia post. (=retro)] | 1.2: [back face]
a: base a: base
b: figura b: figure
c: faccia ant. c: front face
1.2
: 2 - Statua a tutto tondo 2 - Statue in the round
—— 2.1: [faccia ant. (=fronte)] | 2.1: [front face]
2.2: [faccia post. (=retro)] | 2.2: [back face]
cH
a
me
3 - Statua a figure 3 - Statue with back-to-
attergate back figures
=>

Example output from Tesseract OCR. For this page, Italian and English labels are output on the same line, while on other pages columns are recognized and Italian and English appear as blocks one below the other.

I looked into several layout analysis engines such as the recently announced LayoutParser, Transkribus, and eynollah. Initially, I tried to follow two distinct approaches: either treat the structure as a table with one row per main concept, or focus on the column structure.

  • LayoutParser claims support for tabular structures, but its tutorial was rather disappointing. For actually parsing table cells (instead of just recognizing that a structure is a table), it suggests a fairly basic row clustering approach and manual column identification.
  • Transkribus’ table processing allows marking up table structures and even exporting them as Excel sheets. But it offers only limited support for recognizing them automatically:
    • It can make use of the external tool nomacs to find table columns from a table template (although that section seems to have been removed from the guide). In my experiments, it recognized the column separators quite reliably, but not the top and bottom boundaries of the table, probably because there are no visual separators there. And post-correction would not be significantly less work than manual markup, especially since it only recognizes columns, not rows.
    • For automatic column identification, it suggests using its trainable layout analysis module P2PaLA, but this appears to work on TextRegions, not TableRegions, so one would lose the Excel export capabilities (and would still have to split columns into cells).
  • eynollah does not support parsing table structures, but it is quite capable of recognizing columns, and a nice feature is that it can automatically extract recognized graphics regions as individual image files. However, it is really slow, especially if you don’t have a capable GPU to speed up the recognition process.

Since treating this as an actual table does not seem feasible with current tools (and I admit it is a borderline case, very different from, e.g., fully bordered data tables), I decided to fall back on recognizing only the column structure and proceeding from there.

Rather by chance, I noticed that in addition to its default layout analysis method “CITlab Advanced,” Transkribus also has a method called “Printed Block Detection.” That method turned out to do a remarkable job of recognizing each “cell” of the tabular structure as a separate text region, and of recognizing the illustrations as graphic regions.

Output from Printed Block Detection in Transkribus. Cells are recognized well. The main issues are superfluous separators recognized from the line drawings, and that the illustrations are recognized as a single block.

This proved to be a useful starting point for the systematic extraction of the page content. After some experimentation, I ended up with this workflow:

Step 1: Preprocessing

In order to improve the quality of the recognition process, I processed all page images with ScanTailor Advanced. Since most pages are slightly rotated and show the bookbinding, this should ensure a more uniform input. To be honest, I am not completely sure this is necessary for Transkribus, as most examples in its documentation show unprocessed manuscript pages. But it should not hurt, and it will be useful when we extract the illustrations.

Step 2: Layout Analysis

After importing the cleaned pages into Transkribus, automatic layout analysis is performed using the “Printed Block Detection” method. The result still needs some minor cleanup and labeling to support the extraction process:

  1. Delete superfluous graphic elements, text regions (we don’t want figure labels as separate text), and all separators (we don’t need them at all, and parts of the line drawings are often falsely classified as separators).
  2. Split graphics and text regions. In the end, we want one graphic, one Italian text region, and one English text region per main concept.
  3. Label main headings as heading, Italian regions as column-italiano, and English regions as column-english.
  4. Fix the reading order; it should always be Graphic – Italian – English.

Step 3: OCR

Since the “Printed Block Detection” method only recognizes regions, not lines, we run line segmentation with the “CITlab Advanced” method. This time, we have to uncheck the “Find Text Regions” option so that it only finds lines inside the regions we already recognized. Afterward, we run text recognition with the model “Transkribus print 0.3,” which supports a variety of languages.

The recognition quality is quite good, but the results need a little proofreading. In particular, the closing square brackets that are used heavily in the Repertorio are often omitted, as are the pipe characters that serve as language separators in the headings.
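
Since missing brackets are easy to detect mechanically, a simple consistency check can speed up the proofreading. This is a hypothetical helper for post-processing the exported text, not part of the Transkribus workflow itself:

def flag_unbalanced_brackets(lines):
    """Yield (line number, line) for lines whose square brackets do not
    balance -- the most frequent recognition error in our material."""
    for number, line in enumerate(lines, start=1):
        if line.count("[") != line.count("]"):
            yield number, line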

Step 4: PAGE XML Export

Transkribus can export its data in PAGE XML format. This representation contains all the information we need: The recognized regions, their labels, and their recognized text content. This then serves as the input for the last step, where we build a structured representation of the vocabulary from the OCR’d text.
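
To sketch what the script has to work with: the regions, their structure labels, and their text can be read with a few lines of lxml. This is a minimal sketch, assuming the PAGE 2013-07-15 schema and Transkribus’ convention of storing structure labels in the custom attribute; adjust the namespace if your export differs:

import re
from lxml import etree

# Namespace of the PAGE 2013-07-15 schema used by Transkribus exports;
# adjust it if your export uses a different schema version.
NS = {"p": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"}

def text_regions(page_xml_path):
    """Yield (structure_type, text) for each text region on a page.
    GraphicRegion elements can be read in the same way for the figures."""
    tree = etree.parse(page_xml_path)
    for region in tree.iterfind(".//p:TextRegion", namespaces=NS):
        # Transkribus stores structure labels in the 'custom' attribute,
        # e.g. 'readingOrder {index:2;} structure {type:column-english;}'.
        match = re.search(r"structure\s*\{type:([^;}]+);?\}",
                          region.get("custom", ""))
        structure_type = match.group(1) if match else None
        lines = [u.text or ""
                 for line in region.iterfind(".//p:TextLine", namespaces=NS)
                 for u in line.iterfind("p:TextEquiv/p:Unicode", namespaces=NS)]
        yield structure_type, "\n".join(lines)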

Step 5: Vocabulary Extraction

Based on the page structure information and the Repertorio’s uniform representation of its concepts, I built a structured vocabulary using a custom Python script. At this point, my main goal is to capture the hierarchy in a tree structure. The basic idea is to do this in three steps:

  1. First, we reconstruct the original tabular form of each page. Since we know that each row consists of a graphic element, a text region with the label column-italiano, and finally one labeled column-english, we can build a table-like structure by looking for these three elements.
  2. Second, we go through each of these rows and parse the structure of each cell. Since each concept and sub-concept is enumerated with a numerical or alphabetical label, we can use regular expressions to parse them (see the sketch after this list).
  3. Finally, we merge the Italian and English cells – because we want a unified thesaurus, just with multilingual labels for each concept – and build the tree.
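
Here is a condensed sketch of steps 2 and 3, reusing the hypothetical classify() helper from the first sketch above. The Node class and the merging rules are simplifications of what the notebook actually does; in particular, wrapped continuation lines are simply skipped here:

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    it: str                                    # Italian label text
    en: str                                    # English label text
    children: list = field(default_factory=list)

def parse_cell(text):
    """Parse one cell into an ordered {label: (kind, text)} mapping."""
    parsed = {}
    for line in text.splitlines():
        result = classify(line)                # from the earlier sketch
        if result:
            kind, label, body = result
            parsed[label] = (kind, body)
        # lines that don't start a concept (wrapped text) are skipped here
    return parsed

def merge_row(italian_cell, english_cell, parent):
    """Merge the two language versions of one row into the tree.

    Concepts are matched via their labels ('1', '1.1', 'a', ...), which
    are identical in both columns; the main concept opens the row, and
    its facets and parts attach to it as children.
    """
    italian, english = parse_cell(italian_cell), parse_cell(english_cell)
    current = parent
    for label, (kind, it_text) in italian.items():
        node = Node(label, it_text, english.get(label, ("", ""))[1])
        if kind == "main":
            parent.children.append(node)
            current = node
        else:
            current.children.append(node)
    return parent

Applied to every row of a page, with a node for the page heading as parent, this yields the tree shown below.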

The code is available as a Jupyter Notebook. From the example page shown above, we get this output:

Vocab
└── (1.3.2) la statua
    │       statue
    ├── (1) Statua [con sola veduta frontale]
    │   │   Statue [front-view statue)
    │   ├── (1.1) faccia ant. (=fronte)
    │   │         front facel
    │   ├── (1.2) faccia post. (=retro)
    │   │         back face
    │   ├── (a) base
    │   │       base
    │   ├── (b) figura
    │   │       figure
    │   └── (c) faccia ant.
    │           front face
    ├── (2) Statua a tutto tondo
    │   │   Statue in the round
    │   ├── (2.1) faccia ant. (=fronte)
    │   │         front face
    │   └── (2.2) faccia post. (=retro)
    │             back facel
    └── (3) Statua a figure atter gate
            Statue with back-to- back figures

Outlook

The current result is a proof of concept: It is possible to extract the vocabulary structure from the Repertorio using the workflow outlined above. And while the process can be largely automated, doing this for the complete terminology still requires a fair amount of post-correction and proofreading. This will keep us busy for a while.

Also, at this point we ignore the illustrations, but it would be nice to be able to extract them as well. And then we will need to replace the ad-hoc tree with actual SKOS concept relations. But from the results so far, I am confident that we will manage to get there.


Update, 3 May 2021: Explained the last step in slightly more detail.

