Manx da capo
By Paul Flo Williams
Over lockdown, I was revisiting some old projects and taking the opportunity to see if they would benefit from a fresh pair of eyes and new learning over the last decade.
Manx was one such: a catalogue of old computer manuals that I created 20 years ago and worked on until 2009, at which point some major changes in my life meant I could no longer sustain the effort to maintain what had always been a single-person task. Although I’d had plans to allow multiple editors to add content, the tooling was always the least exciting part of the project and only two of us had ever maintained it.
The Perl source for the application and the database had been dumped and distributed, and I mostly forgot about it. It was rewritten in PHP by someone else and placed online.
Over a decade on, I had started work on a new, large art-history database (a catalogue raisonné for the artist A. C. Michael) and had learned a lot about improved database queries, application structure, testing and templating of web applications as well as, crucially, backend tooling. So I took a look at upgrading Manx, to satisfy my own curiosity.
This is still a work in progress and I’ll be writing more about this over the coming months. The significant changes so far, not all user-visible, are:
- Citations have been added, so that publications mentioned by related documents or catalogues are explicitly marked. This should answer questions about why a given document is in the catalogue at all.
- Some documents incorporate others. For example, print sets are frequently assemblies of lower-level parts. These relationships are now explicit in the database.
- Pages are generated using a template engine. Apart from the benefit of separating functionality from presentation, the discipline of retrieving all the relevant information from the database before “pouring” it into a template for display allows the information to be made available in other forms too. The same backend that produces HTML can now also emit JSON extracts, which power the command-line tools I use for updating the database.
- The back-end spidering software now provides a much faster way of updating websites that contain scanned documents.
- Different versions of documents can now be grouped in the database to provide a clearer view of supersessions.
- The number of documents in the database has climbed by 13,000 to over 35,000, with another 7,000 references to online scans.
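The template-and-JSON arrangement described above can be sketched roughly like this. This is a minimal illustration, not the actual Manx code (which was Perl, later PHP): the function names and record fields here are invented, and the “database query” is a stub. The point is only the shape of the design, where one retrieval step feeds two interchangeable presentation steps.

```python
import json
from string import Template

def fetch_document(doc_id):
    # Stand-in for a database query; the real application would
    # look the record up by id. Fields are hypothetical.
    return {"id": doc_id, "title": "PDP-11 Processor Handbook", "year": 1973}

# A trivial template; a real template engine would be richer,
# but the separation of data from presentation is the same.
HTML_TEMPLATE = Template("<h1>$title</h1><p>Published $year</p>")

def render_html(doc):
    # "Pour" the already-retrieved data into a template for display.
    return HTML_TEMPLATE.substitute(doc)

def render_json(doc):
    # The same data, serialized instead for command-line tooling.
    return json.dumps(doc)

doc = fetch_document(42)
print(render_html(doc))
print(render_json(doc))
```

Because all the data is gathered before any rendering happens, adding a third output format (RSS, CSV, whatever the tooling needs) is just another small render function over the same record.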
I have spoken to the person who took on the job of maintaining the database many years ago. I haven’t yet worked out how we can collaborate now that these new features exist, but it is not completely off my radar.