EpiDoc: News and Views

http://planet.atlantides.org/epidoc

Tom Elliott (tom.elliott@nyu.edu)

This feed aggregator is part of the Planet Atlantides constellation. Its current content is available in multiple webfeed formats, including Atom, RSS/RDF and RSS 1.0. The subscription list is also available in OPML and as a FOAF Roll. All content is assumed to be the intellectual property of the originators unless they indicate otherwise.

June 19, 2017

Stoa

OEDUc: EDH and Pelagios NER working group

Participants:  Orla Murphy, Sarah Middle, Simona Stoyanova, Núria Garcia Casacuberta

Report: https://github.com/EpiDoc/OEDUc/wiki/EDH-and-Pelagios-NER

The EDH and Pelagios NER working group was part of the Open Epigraphic Data Unconference held on 15 May 2017. Our aim was to use Named Entity Recognition (NER) on the text of inscriptions from the Epigraphic Database Heidelberg (EDH) to identify placenames, which could then be linked to their equivalent terms in the Pleiades gazetteer and thereby integrated with Pelagios Commons.

Data about each inscription, along with the inscription text itself, is stored in one XML file per inscription. In order to perform NER, we therefore first had to extract the inscription text from each XML file (contained within <ab></ab> tags), then strip out any markup from the inscription to leave plain text. There are various Python libraries for processing XML, but most of these turned out to be a bit too complex for what we were trying to do, or simply returned the identifier of the <ab> element rather than the text it contained.

Eventually, we found the Python library Beautiful Soup, which converts an XML document to structured text, from which you can identify your desired element and strip out the markup to convert its contents to plain text. It is a simple and elegant solution: only eight lines of code suffice to extract and convert the inscription text from one specific file. The next step is to create a script that automatically iterates through all files in a particular folder, producing a directory of new files that contain only the plain text of the inscriptions.
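
For illustration, here is a minimal sketch of that approach in Python, assuming EpiDoc files that keep the inscription text inside <ab> elements; the folder names are invented for the example, and the script in the OEDUc repository may differ in detail.

    from pathlib import Path
    from bs4 import BeautifulSoup  # Beautiful Soup 4; lxml is needed for the "xml" parser

    def extract_plain_text(xml_path):
        """Return the plain text of every <ab> element in one EpiDoc file."""
        soup = BeautifulSoup(Path(xml_path).read_text(encoding="utf-8"), "xml")
        # get_text() strips all nested markup, leaving only the text content
        return "\n".join(ab.get_text(" ", strip=True) for ab in soup.find_all("ab"))

    # Iterate over a folder of XML files, writing one plain-text file per inscription
    Path("edh_txt").mkdir(exist_ok=True)
    for xml_file in Path("edh_xml").glob("*.xml"):
        Path("edh_txt", xml_file.stem + ".txt").write_text(
            extract_plain_text(xml_file), encoding="utf-8")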

Once we have a plain text file for each inscription, we can begin the process of named entity extraction. We decided to follow the methods and instructions shown in the two Sunoikisis DC classes on Named Entity Extraction:

https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-I

https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-II

Here is a short outline of the steps this might involve when the work is taken up in the future (a minimal code sketch of the baseline step follows the outline).

  1. Extraction
    1. Split text into tokens, make a Python list
    2. Create a baseline
      1. cycle through each token of the text
      2. if the token starts with a capital letter it’s a named entity (only one type, i.e. Entity)
    3. Classical Language Toolkit (CLTK)
      1. for each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities
      2. Compare to baseline
    4. Natural Language Toolkit (NLTK)
      1. Stanford NER Tagger for Italian works well with Latin
      2. Differentiates between kinds of entities (place, person, organization, or none of the above), making it more granular than CLTK
      3. Compare to both baseline and CLTK lists
  2. Classification
    1. Part-Of-Speech (POS) tagging – a precondition for any further advanced operation on a text; provides information on the word class (noun, verb etc.); TreeTagger
    2. Chunking – sub-dividing a section of text into phrases and/or meaningful constituents (which may include 1 or more text tokens); export to IOB notation
    3. Computing entity frequency
  3. Disambiguation
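
As promised above, here is a minimal sketch of the baseline step (1.2) in Python, assuming the plain-text files produced earlier; the file name is only an example, and the CLTK and NLTK results would then be compared against this list.

    def baseline_entities(text):
        tokens = text.split()  # step 1.1: split the text into tokens (a Python list)
        # step 1.2: any token starting with a capital letter counts as a
        # named entity of the single generic type "Entity"
        return [t for t in tokens if t[:1].isupper()]

    with open("edh_txt/HD000001.txt", encoding="utf-8") as f:
        print(baseline_entities(f.read()))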

Although we didn’t make as much progress as we would have liked, we have achieved our aim of creating a script to prepare individual files for NER processing, and have therefore laid the groundwork for future developments in this area. We hope to build on this work to successfully apply NER to the inscription texts in the EDH in order to make them more widely accessible to researchers and to facilitate their connection to other, similar resources, like Pelagios.

June 13, 2017

Stoa

OEDUc: Images and Image metadata working group

Participants: Sarah Middle, Angie Lumezeanu, Simona Stoyanova
Report: https://github.com/EpiDoc/OEDUc/wiki/Images-and-image-metadata

 

The Images and Image Metadata working group met at the London meeting of the Open Epigraphic Data Unconference on 15 May 2017, and discussed the issues of copyright, metadata formats, image extraction and licence transparency in the Epigraphik Fotothek Heidelberg, the database which contains images and metadata relating to nearly forty thousand Roman inscriptions from collections around the world. Were the EDH to lose its funding and the website its support, one of the biggest and most useful digital epigraphy projects would start to disintegrate. While its data is available for download, its usability would be greatly compromised. Thus, this working group focused on issues pertaining to the EDH image collection. The materials we worked with are the JPG images as seen on the website, and the image metadata files which are available as XML and JSON data dumps on the EDH data download page.

The EDH Photographic Database index page states: “The digital image material of the Photographic Database is with a few exceptions directly accessible. Hitherto it had been the policy that pictures with unclear utilization rights were presented only as thumbnail images. In 2012 as a result of ever increasing requests from the scientific community and with the support of the Heidelberg Academy of the Sciences this policy has been changed. The approval of the institutions which house the monuments and their inscriptions is assumed for the non commercial use for research purposes (otherwise permission should be sought). Rights beyond those just mentioned may not be assumed and require special permission of the photographer and the museum.”

During a discussion with Frank Grieshaber we found out that the information in this paragraph is only available on that webpage, with no individual licence details in the metadata records of the images, either in the XML or the JSON data dumps. It would be useful for this information to be included in the records, though it is not clear how to accomplish this efficiently for each photograph, since all photographers would need to be contacted first. Currently, the rights information in the XML records reads “Rights Reserved – Free Access on Epigraphischen Fotothek Heidelberg”, which presumably points to the “research purposes” part of the statement on the EDH website.

All other components of the EDH – inscriptions, bibliography, geography and people RDF – have been released under a Creative Commons Attribution-ShareAlike 3.0 Unported license, which allows for their reuse and repurposing, thus ensuring their sustainability. The images, however, will be the first thing to disappear once the project ends. With unclear licensing and the impossibility of contacting every single photographer, some of whom are no longer alive and others of whom might not wish to waive their rights, data reuse becomes particularly problematic.

One possible way of figuring out the copyright of individual images is to check the reciprocal links to the photographic archive of the partner institutions who provided the images, and then read through their own licence information. However, these links are only visible from the HTML and not present in the XML records.

Given that the image metadata in the XML files is relatively detailed and already in place, we decided to focus on the task of image extraction for research purposes, which is covered by the general licensing of the EDH image databank. We prepared a Python script for batch download of the entire image databank, available on the OEDUc GitHub repo. Each image has a unique identifier which is the same as its filename and the final string of its URL. This means that when an inscription has more than one photograph, each one has its individual record and URI, which allows for complete coverage and efficient harvesting. The images are numbered sequentially, and in the case of a missing image the process skips that entry and continues on to the next one. Since the databank includes some 37,530 images, the script pauses for 30 seconds after every 200 files to avoid a timeout. We don’t have access to the high-resolution TIFF images, so this script downloads the JPGs from the HTML records.
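
In outline, the download logic looks something like the following sketch; the URL template here is a stand-in rather than the real EDH address pattern, for which see the actual script in the OEDUc GitHub repo.

    import time
    import requests

    # Hypothetical URL template standing in for the real EDH image addresses
    URL = "https://edh-www.adw.uni-heidelberg.de/fotos/F{:06d}.jpg"

    for n in range(1, 37531):
        resp = requests.get(URL.format(n))
        if resp.status_code != 200:
            continue  # missing image: skip this entry and move on to the next
        with open("images/F{:06d}.jpg".format(n), "wb") as fh:
            fh.write(resp.content)
        if n % 200 == 0:
            time.sleep(30)  # pause after every 200 files to avoid a timeout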

The EDH images included in the EAGLE MediaWiki are all under an open licence and link back to the EDH databank. A task for the future will be to compare the two lists to get a sense of the EAGLE coverage of EDH images and feed their licensing information back into the EDH image records. One issue is the lack of file-naming conventions in EAGLE, where some photographs carry a publication citation (CIL_III_14216,_8.JPG, AE_1957,_266_1.JPG), others a random name (DR_11.jpg), and others a descriptive filename which may contain an EDH reference (Roman_Inscription_in_Aleppo,_Museum,_Syria_(EDH_-_F009848).jpeg). Matching these to the EDH databank will have to be done by cross-referencing the publication citations either in the filename or in the image record.
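
Where a filename does embed an EDH reference, a simple pattern match can recover it; a sketch, with the regular expression written around the example filename above:

    import re

    # Matches an embedded EDH image reference such as "EDH_-_F009848"
    EDH_REF = re.compile(r"EDH[_\s]*-[_\s]*(F\d{6})")

    def edh_id(filename):
        m = EDH_REF.search(filename)
        return m.group(1) if m else None

    print(edh_id("Roman_Inscription_in_Aleppo,_Museum,_Syria_(EDH_-_F009848).jpeg"))
    # prints: F009848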

A further future task could be to embed the image metadata into the image itself. The EAGLE MediaWiki images already have Exif data (added automatically by the camera), but it might be useful to add descriptive and copyright information internally, following the IPTC data set standard (e.g. title, subject, photographer, rights etc.). This would help bring the inscription file, image record and image itself back together in the event of data scattering after the end of the project. Currently, links exist only between the inscription files and image records. Embedding at least the HD number of the inscription directly into the image metadata would allow us to gradually bring the resources back together, following changes in copyright and licensing.
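
One way to do the embedding, sketched here on the assumption that the ExifTool command-line utility is installed, with invented example values for the tags:

    import subprocess

    def embed_iptc(image_path, hd_number, photographer, rights):
        """Write basic IPTC fields into the image file itself."""
        subprocess.run([
            "exiftool",
            "-IPTC:ObjectName=" + hd_number,    # e.g. the EDH HD number
            "-IPTC:By-line=" + photographer,    # photographer credit
            "-IPTC:CopyrightNotice=" + rights,  # rights statement
            "-overwrite_original",
            image_path,
        ], check=True)

    embed_iptc("images/F009848.jpg", "HD000001", "Photographer Name",
               "Rights Reserved - Free Access on Epigraphische Fotothek Heidelberg")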

Out of the three tasks we set out to discuss, one turned out to be impractical, one we accomplished and published the code for, and one remains to be worked on in the future. Ascertaining the copyright status of all images is practically impossible, so all future experiments will be done on the EDH images in EAGLE MediaWiki. The script for extracting JPGs from the HTML is available on the OEDUc GitHub repo. We have drafted a plan for embedding metadata into the images, following the IPTC standard.

June 07, 2017

Stoa

Open Epigraphic Data Unconference report

Last month, a dozen or so scholars met in London (and were joined by a similar number via remote video-conference) to discuss and work on the open data produced by the Epigraphic Database Heidelberg. (See call and description.)

Over the course of the day seven working groups were formed, two of which completed their briefs within the day, while the other five will lead to ongoing work and discussion. Fuller reports from the individual groups will follow here shortly; in the meantime, here is a short summary of the activities, along with links to the pages in the Wiki of the OEDUc Github repository.

Useful links:

  1. All interested colleagues are welcome to join the discussion group: https://groups.google.com/forum/#!forum/oeduc
  2. Code, documentation, and other notes are collected in the Github repository: https://github.com/EpiDoc/OEDUc

1. Disambiguating EDH person RDF
(Gabriel Bodard, Núria García Casacuberta, Tom Gheldof, Rada Varga)
We discussed and broadly specced out a couple of steps in the process for disambiguating PIR references for inscriptions in EDH that contain multiple personal names, for linking together person references that cite the same PIR entry, and for using Trismegistos data to further disambiguate EDH persons. We haven’t written any actual code to implement this yet, but we expect a few Python scripts would do the trick.

2. Epigraphic ontology
(Hugh Cayless, Paula Granados, Tim Hill, Thomas Kollatz, Franco Luciani, Emilia Mataix, Orla Murphy, Charlotte Tupman, Valeria Vitale, Franziska Weise)
This group discussed the various ontologies available for encoding epigraphic information (LAWDI, Nomisma, EAGLE Vocabularies) and ideas for filling the gaps between them. This is a long-standing desideratum of the EpiDoc community, and will be an ongoing discussion (perhaps the most important of the workshop).

3. Images and image metadata
(Angie Lumezeanu, Sarah Middle, Simona Stoyanova)
This group attempted to write scripts to track down copyright information on images in EDH (too complicated, though EAGLE may have more of this) and to download images and metadata (scripts in Github), and explored the possibility of embedding metadata in the images in IPTC format (in progress).

4. EDH and SNAP:DRGN mapping
(Rada Varga, Scott Vanderbilt, Gabriel Bodard, Tim Hill, Hugh Cayless, Elli Mylonas, Franziska Weise, Frank Grieshaber)
In this group we reviewed the status of the SNAP:DRGN recommendations for person-data in RDF, and then looked in detail at the person list exported from the EDH data. A list of suggestions for improving this data was produced for EDH to consider. This task was considered complete. (Although Frank may have feedback or questions for us later.)

5. EDH and Pelagios NER
(Orla Murphy, Sarah Middle, Simona Stoyanova, Núria Garcia Casacuberta, Thomas Kollatz)
This group explored the possibility of running machine named entity extraction on the Latin texts of the EDH inscriptions, in two stages: extracting plain text from the XML (code in Github); applying CLTK/NLTK scripts to identify entities (in progress).

6. EDH and Pelagios location disambiguation
(Paula Granados, Valeria Vitale, Franco Luciani, Angie Lumezeanu, Thomas Kollatz, Hugh Cayless, Tim Hill)
This group aimed to work on disambiguating location information in the EDH data export, for example making links between Geonames place identifiers, TMGeo places, Wikidata and Pleiades identifiers, via the Pelagios gazetteer or other linking mechanisms. A pathway for resolution was identified, but work is still ongoing.

7. Exist-db mashup application
(Pietro Liuzzo)
This task, which Dr Liuzzo carried out alone, since his network connection didn’t allow him to join any of the discussion groups on the day, was to create an implementation of existing code for displaying and editing epigraphic editions (using Exist-db, Leiden+, etc.) and offer a demonstration interface by which the EDH data could be served up to the public and contributions and improvements invited. (A preview “epigraphy.info” perhaps?)

May 03, 2017

Current Epigraphy

Open Epigraphic Data Unconference, London, May 15, 2017

Open Epigraphic Data Unconference
10:00–17:00, May 15, 2017, Institute of Classical Studies

This one-day workshop, or “unconference,” brings together scholars, historians and data scientists with a shared interest in classical epigraphic data. The event involves no speakers or set programme of presentations, but rather a loose agenda, to be further refined in advance or on the day, which is to use, exploit, transform and “mash-up” with other sources the Open Data recently made available by the Epigraphic Database Heidelberg under a Creative Commons license. Both present and remote participants with programming and data-processing experience, and those with an interest in discussing and planning data manipulation and aggregation at a higher level, are welcomed.

Places at the event in London are limited; please contact <gabriel.bodard@sas.ac.uk> if you would like to register to attend.

There will also be a Google Hangout opened on the day, for participants who are not able to attend in person. We hope this event will only be the beginning of a longer conversation and project to exploit and disseminate this invaluable epigraphic dataset.

May 02, 2017

Stoa

Open Epigraphic Data Unconference, London, May 15, 2017

Open Epigraphic Data Unconference
10:00–17:00, May 15, 2017, Institute of Classical Studies

This one-day workshop, or “unconference,” brings together scholars, historians and data scientists with a shared interest in classical epigraphic data. The event involves no speakers or set programme of presentations, but rather a loose agenda, to be further refined in advance or on the day, which is to use, exploit, transform and “mash-up” with other sources the Open Data recently made available by the Epigraphic Database Heidelberg under a Creative Commons license. Both present and remote participants with programming and data-processing experience, and those with an interest in discussing and planning data manipulation and aggregation at a higher level, are welcomed.

Places at the event in London are limited; please contact <gabriel.bodard@sas.ac.uk> if you would like to register to attend.

There will also be a Google Hangout opened on the day, for participants who are not able to attend in person. We hope this event will only be the beginning of a longer conversation and project to exploit and disseminate this invaluable epigraphic dataset.

April 26, 2017

Current Epigraphy

Inscriptions of Chersonesos and Tyras launch, KCL, May 11, 2017

The Department of Classics at King’s College London and the team of
IOSPE: Ancient Inscriptions of the Northern Black Sea
iospe.kcl.ac.uk

request the pleasure of your company at the launch of two new digital collections of Greek and Latin inscriptions from the northern Black Sea region:

Inscriptions of Chersonesos
and
Inscriptions of Tyras

Speakers include: Askold Ivantchik (Moscow/Bordeaux), Igor Makarov
(Moscow), Irene Polinskaya (London), Gabriel Bodard (London),
Jonathan Prag (Oxford), Riet van Bremen (London), Georgy Kantor (Oxford)

17.30-19.00, Thursday 11 May 2017
Harvard Lecture Theatre
Bush House, King’s College London
30 Aldwych, London WC2B 4BG

Doors open and refreshments 17:00
Wine reception 19:00

Please join us for the occasion!

The project is funded by the A.G. Leventis Foundation

April 19, 2017

Current Epigraphy

Summer School in Advanced Tools for Digital Humanities and IT

The event is organized by the Centre for Excellence in the Humanities at the University of Sofia, Bulgaria, with lecturers and trainers from the School of Advanced Study, University of London, and from Carnegie Mellon University, Pittsburgh, USA.

The event will take place in September 2017 in a mountain retreat near Sofia, Bulgaria (location tbc). The school will offer the following teaching modules:

  •     Linked Spatial Data, Geo-annotation, Visualisation and Information Systems (Geography and Topography) – with Valeria Vitale and Gabriel Bodard (School of Advanced Study, University of London);
  •     Python for data extraction, enriching and cataloguing – with Simona Stoyanova and Gabriel Bodard (School of Advanced Study, University of London);
  •     EpiDoc and TEI markup, use of vocabularies, and web delivery (including external URI use, XSLT customization, and entity normalization) – with Simona Stoyanova and Gabriel Bodard (School of Advanced Study, University of London);
  •     Big Data and Information Extraction – with Dimitar Birov (University of Sofia) and Eduardo Miranda (Carnegie Mellon University, Pittsburgh).

The event will also include a round table on current trends and future developments of Digital Humanities in South-East Europe.

The event will take place between 7 and 11 September. The participation fee is 50 euros. If you are interested in the Summer School, please send a curriculum vitae and a motivation letter stating your main areas of interest and expertise, the projects on which you are currently working, and which module(s) are most relevant for your work and why you would like to attend them. Applications should be sent to dhsummerschool@uni-sofia.bg no later than 1 June 2017.

The organizing team

Assoc. Prof. Dimitar Birov, University of Sofia, Dr. Dimitar Iliev, University of Sofia, Dr. Maria Baramova, University of Sofia, Dobromir Dobrev, University of Sofia

March 22, 2017

Stoa

Research Fellows: Latinization of the north-western provinces

Posted on behalf of Alex Mullen (to whom enquiries should be addressed):

I should like to draw your attention to the advertisement for 2 Research Fellows for the 5-year ERC project: the Latinization of the North-Western Provinces: Sociolinguistics, Epigraphy and Archaeology (LatinNow).

The RFs will be based at the Centre for the Study of Ancient Documents, University of Oxford, and will start, at the earliest, in September 2017. The positions will be for 3 years, with the possibility of extension.

Although the RFs will be located in Oxford, their contracts will be with the project host, the University of Nottingham, so applications must be made via the Nottingham online system:

https://www.nottingham.ac.uk/jobs/currentvacancies/ref/ART002017

Please note that the panel requires basic details to be filled in online and a CV and covering letter to be uploaded (apologies, the generic application system is not clear on what needs to be uploaded). The deadline for applications is the 14th April.

If you would like further information, please do not hesitate to contact the Principal Investigator, Dr Alex Mullen.

March 20, 2017

Current Epigraphy

Research Fellows: Latinization of the north-western provinces

I should like to draw your attention to the advertisement for 2 Research Fellows for the 5-year ERC project: the Latinization of the North-Western Provinces: Sociolinguistics, Epigraphy and Archaeology (LatinNow).

The RFs will be based at the Centre for the Study of Ancient Documents, University of Oxford, and will start, at the earliest, in September 2017. The positions will be for 3 years, with the possibility of extension.

Although the RFs will be located in Oxford, their contracts will be with the project host, the University of Nottingham, so applications must be made via the Nottingham online system:

https://www.nottingham.ac.uk/jobs/currentvacancies/ref/ART002017

Please note that the panel requires basic details to be filled in online and a CV and covering letter to be uploaded (apologies, the generic application system is not clear on what needs to be uploaded). The deadline for applications is the 14th April.

If you would like further information, please do not hesitate to contact the Principal Investigator, Dr Alex Mullen.

March 14, 2017

Current Epigraphy

EpiDoc training workshop, Athens, May 2017

Call for Participation

A four-day training workshop on “EpiDoc” will be held in Athens (Greece), from Tuesday, 2 May to Friday, 5 May 2017, at the Academy of Athens. The workshop is organized by the Academy of Athens within the framework of the DARIAH-EU project “Humanities at Scale”.

The training workshop will be devoted to the digital editing of epigraphic and papyrological texts, focusing on the encoding of inscriptions, papyri and other ancient documents. EpiDoc (epidoc.sf.net) is a community of practice and guidance for using TEI XML for the encoding of inscriptions, papyri and other ancient texts. It has been used to publish digital projects including Inscriptions of Aphrodisias and Tripolitania, Duke Databank of Documentary Papyri, Digital Corpus of Literary Papyri, and EAGLE Europeana Project. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions, identifying and linking to external person and place authorities, and project workflow and management.

Instructors: Elli Mylonas and Simona Stoyanova.

The four-day workshop will be divided into five sections:

  • Section 1: Encoding epigraphic and other texts: Basic EpiDoc, using OxygenXML, transforming text with XSL for proofreading and display.
  • Section 2: Metadata: Encoding the history and description of the textual support.
  • Section 3: Advanced Features (Apparatus criticus, verse, complex texts).
  • Section 4: Text encoding projects: organization, roles, workflows.
  • Section 5: Vocabularies and Analysis: indexing, names and places, controlled vocabularies.

The workshop will include ample time for hands on practice, questions, discussion of individual projects, and the option to learn about topics that are of special interest to participants.

The workshop will be conducted in English and participation is free.

The workshop will assume knowledge of epigraphy or papyrology; Greek, Latin or another ancient language; and the Leiden Conventions. No technical skills are required, and scholars of all levels, from students to professors, are welcome.

Participants should bring their own laptops. It is also strongly recommended that participants prepare in advance a mini corpus of texts from their field of scientific interest.

Registration

Please fill in the application form by 10 April 2017 at the following address:

https://goo.gl/forms/0Xaf8umatP8oJaCf1

Because seats are limited, there will be a selection among applicants. Applicants will be notified by email.

Dates:

2-5/5/2017, 9:00-17:00

Organisation:

Academy of Athens

Project DARIAH-EU – Humanities at Scale

Location:

Academy of Athens – Main Building, East Hall
Panepistimiou 28,
10679 Athens
Greece

For additional information, please contact: gchrysovitsanos@academyofathens.gr

Readings:

The first three items provide a good overview of Digital Epigraphy and EpiDoc. We recommend that you read those first.

February 24, 2017

Current Epigraphy

Engineer Position in Bordeaux (Papyri, Inscriptions, etc.) for the project PATRIMONIVM

The European funding scheme ERC Starting Grant rewards the most innovative research projects led by young researchers in all scientific areas. Among those selected for funding in the 2016 call, the project PATRIMONIVM, hosted by the University Bordeaux Montaigne, aims to produce the first global study of the economic, social and political role of the properties of Roman emperors, using a complete documentary base of all relevant sources. The project lasts five years and will involve nine historians and a web engineer responsible for the database. The documentary system of PATRIMONIVM is one of the most ambitious features of the project, not only because of the number and variety of the data (epigraphic, papyrological and literary sources, prosopographical data, archaeological descriptions, images, georeferenced data, bibliographic references), but also because of the implementation of the latest XML standards for the digital presentation of ancient sources. These features make PATRIMONIVM one of the leading digital humanities projects at the international level.

The engineer responsible for the documentary system is one of the most important members of PATRIMONIVM’s research team. She/he will work in close coordination with the Principal Investigator and collaborate with the other team members. She/he will participate in the scientific programme of the project and contribute to its visibility through participation in conferences and workshops on the digital humanities in France and abroad. She/he will be part of the project for its entire duration: full time during the first three years, part time for the remaining months.

http://ausonius.u-bordeaux-montaigne.fr/presentation/recrutements

January 23, 2017

Current Epigraphy

EpiDoc training workshop, London, April 2017

We invite applications to participate in a training workshop on digital editing of papyrological and epigraphic texts, at the Institute of Classical Studies, London, April 3–7, 2017. The workshop will be taught by Gabriel Bodard and Lucia Vannini (ICS) and Simona Stoyanova (KCL). There will be no charge for the workshop, but participants should arrange their own travel and accommodation.

EpiDoc: Ancient Documents in XML

EpiDoc (epidoc.sf.net) is a community of practice and guidance for using TEI XML for the encoding of inscriptions, papyri and other ancient texts. It has been used to publish digital projects including Inscriptions of Aphrodisias and Tripolitania, Duke Databank of Documentary Papyri, Digital Corpus of Literary Papyri, and EAGLE Europeana Project. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions, identifying and linking to external person and place authorities, and use of the online Papyrological Editor tool.

The workshop will assume knowledge of papyrology or epigraphy; Greek, Latin or another ancient language; and the Leiden Conventions. No technical skills are required, and scholars of all levels, from students to professors, are welcome. To apply, please email gabriel.bodard@sas.ac.uk with a brief description of your background and reason for application, by February 14, 2017.

(Revised to bring back deadline for applications to Feb 14th.)

January 17, 2017

Stoa

EpiDoc training workshop, London, April 2017

We invite applications to participate in a training workshop on digital editing of papyrological and epigraphic texts, at the Institute of Classical Studies, London, April 3–7, 2017. The workshop will be taught by Gabriel Bodard and Lucia Vannini (ICS) and Simona Stoyanova (KCL). There will be no charge for the workshop, but participants should arrange their own travel and accommodation.

EpiDoc: Ancient Documents in XML

EpiDoc (epidoc.sf.net) is a community of practice and guidance for using TEI XML for the encoding of inscriptions, papyri and other ancient texts. It has been used to publish digital projects including Inscriptions of Aphrodisias and Tripolitania, Duke Databank of Documentary Papyri, Digital Corpus of Literary Papyri, and EAGLE Europeana Project. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions, identifying and linking to external person and place authorities, and use of the online Papyrological Editor tool.

The workshop will assume knowledge of papyrology or epigraphy; Greek, Latin or another ancient language; and the Leiden Conventions. No technical skills are required, and scholars of all levels, from students to professors, are welcome. To apply, please email gabriel.bodard@sas.ac.uk with a brief description of your background and reason for application, by February 14, 2017.

(Revised to bring back deadline for applications to Feb 14th.)

November 10, 2016

Current Epigraphy

Digital Epigraphy am Scheideweg? / Digital Epigraphy at a crossroads?

“Error message: Server not found”: If everything remains as it is now, the familiar click on EDH’s Internet address (www.epigraphische-datenbank-heidelberg.de) will in four years generate exactly this feedback – after a total of 34 years’ work on EDH and 23 years’ availability online. The reason: … read more … .

September 05, 2016

Current Epigraphy

EpiDoc Workshop 2016 – Bologna

12-14 September 2016

Alma Mater Studiorum Università di Bologna

 Dipartimento di Storia Culture Civiltà, sezione di Storia Antica

Sala Celio – 5th floor – Via Zamboni 38

Alice Bencivenni, Marta Fogagnolo (Università di Bologna)

Giuditta Mirizio (Università di Bologna, Universität Heidelberg)

Irene Vagionakis (Università Ca’ Foscari Venezia)

Monday 12 September (09:00-18:00)

09:00 Introduction to the course and presentations
09:30 Introduction to EpiDoc and XML

11:30 Introduction to the structure of an epigraphic critical edition
11:45 The EpiDoc guidelines: basic structure of an EpiDoc file; descriptive and historical data

14:00 The EpiDoc guidelines: transcription of the text
15:30 Complete markup examples
16:00 How to use Oxygen
16:30 EpiDoc exercises

Tuesday 13 September (09:00-18:00)

09:00 XSL transformation
10:00 The EpiDoc guidelines: indexing

11:30 EpiDoc exercises

13:30 Papyri.info; Leiden+, Leiden+ Help
14:30 Exercises

Wednesday 14 September (09:00-17:30)

09:00 Epifacs
10:00 EpiDoc: a closer look at selected aspects of EpiDoc tagging

11:30 Exercises on topics of choice

14:00 EpiDoc Workshop Blog and Markup List
14:30 EAGLE apps
15:30 Exercises on topics of choice
17:00 Feedback session

April 19, 2016

Horothesia (Tom Elliott)

Stable Orbits or Clear Air Turbulence: Capacity, Scale, and Use Cases in Geospatial Antiquity


I delivered the following talk on 8 April 2016 at the Mapping the Past: GIS Approaches to Ancient History conference at the University of North Carolina at Chapel Hill. Update (19 April 2016): video is now available on YouTube, courtesy of the Ancient World Mapping Center.

How many of you are familiar with Jo Guldi's on-line essay on the "Spatial Turn" in western scholarship? I highly recommend it. It was published in 2011 as a framing narrative for the Spatial Humanities website, a publication of the Scholars' Lab at the University of Virginia. The website was intended partly to serve as a record of the NEH-funded Institute for Enabling Geospatial Scholarship. That Institute, organized in a series of three thematic sessions, was hosted by the Scholars' Lab in 2009 and 2010. The essay begins as follows:
“Landscape turns” and “spatial turns” are referred to throughout the academic disciplines, often with reference to GIS and the neogeography revolution ... By “turning” we propose a backwards glance at the reasons why travelers from so many disciplines came to be here, fixated upon landscape, together. For the broader questions of landscape – worldview, palimpsest, the commons and community, panopticism and territoriality — are older than GIS, their stories rooted in the foundations of the modern disciplines. These terms have their origin in a historic conversation about land use and agency.
Professor Guldi's essay takes us on a tour through the halls of the Academy, making stops in a variety of departments, including Anthropology, Literature, Sociology, and History. She traces the intellectual innovations and responses -- prompted in no small part by the study and critique of the modern nation state -- that iteratively gave rise to many of the research questions and methods that concern us at this conference. I don't think it would be a stretch to say that not only this conference but its direct antecedents and siblings -- the Ancient World Mapping Center and its projects, the Barrington Atlas and its inheritors -- are all symptoms of the spatial turn.

So what's the point of my talk this evening? Frankly, I want to ask: to what degree do we know what we're doing? I mean, for example, is spatial practice a subfield? Is it a methodology?  It clearly spans chairs in the Academy. But does it answer -- better or uniquely? -- a particular kind of research question? Is spatial inquiry a standard competency in the humanities, or should it remain the domain of specialists? Does it inform or demand a specialized pedagogy? Within ancient studies in particular, have we placed spatially informed scholarship into a stable orbit that we can describe and maintain, or are we still bumping and bouncing around in an unruly atmosphere, trying to decide whether and where to land?

Some will recognize in this framework questions -- or should we say anxieties -- that are also very much alive for the digital humanities. The two domains are not disjoint. Spatial analysis and visualization are core DH activities. The fact that the Scholars' Lab proposed and the NEH Office of Digital Humanities funded the Geospatial Institute I mentioned earlier underscores this point.

So, when it comes to spatial analysis and visualization, what are our primary objects of interest? "Location" has to be listed as number one, right? Location, and relative location, are important because they are variables in almost every equation we could care about. Humans are physical beings, and almost all of our technology and interaction -- even in the digital age -- are both enabled and constrained by physical factors that vary not only in time, but also in three-dimensional space. If we can locate people, places, and things in space -- absolutely or relatively -- then we can open our spatial toolkit. Our opportunities to explore become even richer when we can access the way ancient people located themselves, each other, places, and things in space: the rhetoric and language they used to describe and depict those locations.

The connections between places and between places and other things are also important. The related things can be of any imaginable type: objects, dates, events, people, themes. We can express and investigate these relationships with a variety of spatial and non-spatial information structures: directed graphs and networks for example. There are digital tools and methods at our disposal for working with these mental constructs too, and we'll touch on a couple of examples in a minute. But I'd like the research questions, rather than the methods, to lead the discussion.

When looking at both built and exploited natural landscapes, we are often interested in the functions humans impart to space and place. These observations apply not only to physical environments, but also to their descriptions in literature and their depictions in art and cartography. And so spatial function is also about spatial rhetoric, performance, audience, and reception.

Allow me a brief example: the sanctuary of Artemis Limnatis at Volimnos in the Taygetos mountains (cf. Koursoumis 2014; Elliott 2004, 74-79 no. 10). Its location is demonstrated today only by scattered architectural, artistic, and epigraphic remains, but epigraphic and literary testimony make it clear that it was just one of several such sanctuaries that operated at various periods and places in the Peloponnese.  Was this ancient place of worship located in a beautiful spot, evocative of the divine? Surely it was! But it -- and its homonymous siblings -- also existed to claim, mark, guard, consecrate, and celebrate political and economic assertions about the land it overlooked. Consequently, the sanctuary was a locus of civic pride for the Messenians and the Spartans, such that -- from legendary times down to at least the reign of Vespasian -- it occasioned both bloodshed and elite competition for the favor of imperial powers. Given the goddess's epithet (she is Artemis Of The Borders), the sanctuary's location, and its history of contentiousness, I don't think we're surprised that a writer like Tacitus should take notice of delegations from both sides arriving in Rome to argue for and against the most recent outcome in the struggle for control of the sanctuary. I can't help but imagine him smirking as he drops it into the text of his Annals (4.43), entirely in indirect discourse and deliberately ambiguous of course about whether the delegations appeared before the emperor or the Senate. It must have given him a grim sort of satisfaction to be able to record a notable interaction between Greece and Rome during the reign of Tiberius that also served as a metaphor for the estrangement of emperor and senate, of new power and old prerogatives.

Epigraphic and literary analysis can give us insight into issues of spatial function, and so can computational methods. The two approaches are complementary, sometimes informing, supporting, and extending each other, other times filling in gaps the other method leaves open. Let's spend some time looking more closely at the computational aspects of spatial scholarship.

A couple of weeks ago, I got to spend some time talking to Lisa Mignone at Brown about her innovative work on the visibility of temples at Rome with respect to the valley of the Tiber and the approaches to the city. Can anyone doubt that, among the factors at play in the ancient siting and subsequent experience of such major structures, there's a visual expression of power and control at work? Mutatis mutandis, you can feel something like it today if you get the chance to walk the Tiber at length. Or, even if you just go out and contemplate the sight lines to the monuments and buildings of McCorkle Place here on the UNC campus. To be sure, in any such analysis there is a major role for the mind of the researcher ... in interpretation, evaluation, narration, and argument, and that researcher will need to be informed as much as possible by the history, archaeology, and literature of the place. But, depending on scale and the alterations that a landscape has undergone over time, there is also the essential place of viewshed analysis. Viewsheds are determined by assessing the visibility of every point in an area from a particular point of interest. Can I see the University arboretum from the north-facing windows of the Ancient World Mapping Center on the 5th floor of Davis Library? Yes, the arboretum is in the Center's viewshed. Well, certain parts of it anyway. Can I see the Pit from there? No. Mercifully, the Pit is not in the Center's viewshed.
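
For readers who have not met the mechanics before, a toy line-of-sight test over a small elevation grid conveys the core idea; a real viewshed runs this kind of check from one viewpoint to every cell of a digital elevation model inside GIS software, and everything below is invented for illustration.

    import numpy as np

    def visible(dem, viewer, target, eye_height=1.7):
        """True if the target cell can be seen from the viewer cell over grid dem."""
        (r0, c0), (r1, c1) = viewer, target
        h0 = dem[r0, c0] + eye_height
        steps = max(abs(r1 - r0), abs(c1 - c0))
        for i in range(1, steps):
            t = i / steps
            r, c = round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))
            sight_line = h0 + t * (dem[r1, c1] - h0)  # height of the sight line here
            if dem[r, c] > sight_line:                # terrain blocks the view
                return False
        return True

    dem = np.array([[10, 11, 40, 11, 10],
                    [10, 10, 10, 10, 10]])
    print(visible(dem, (1, 0), (1, 4)))  # True: flat ground, nothing in the way
    print(visible(dem, (0, 0), (0, 4)))  # False: the 40 m cell blocks the line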

In one methodological respect, Professor Mignone's work is not new. Viewshed analysis has been widely used for years in archaeological and historical study, at levels ranging from the house to the public square to the civic territory and beyond. I doubt anyone could enumerate all the published studies without a massive amount of bibliographical work. Perhaps the most well known -- if you'll permit an excursion outside the domain of ancient studies -- is Anne Kelly Knowles' work (with multiple collaborators) on the Battle of Gettysburg. What could the commanders see and when could they see it? There's a fascinating, interactive treatment of the data and its implications published on the website of Smithsonian Magazine.

Off the top of my head, I can point to a couple of other examples in ancient studies. Though their mention will only scratch the surface of the full body of work, I think they are both useful examples. There's Andrew Sherratt's 2004 treatment of Mycenae, which explores the site's visual and topographical advantages in an accessible, online form. It makes use of cartographic illustration and accessible text to make its points about strategically and economically interesting features of the site.

I also recall a poster by James Newhard and several collaborators that was presented at the 2012 meeting of the Archaeological Institute of America. It reported on the use of viewshed analysis and other methods as part of an integrated approach to identifying Byzantine defensive systems in North Central Anatolia. The idea here was that the presence of a certain kind of viewshed -- one offering an advantage for surveillance of strategically useful landscape elements like passes and valleys -- might lend credence to the identification of ambiguous archaeological remains as fortifications. Viewshed analysis is not just revelatory, but can also be used for predictive and taxonomic tasks.

In our very own conference, we'll hear from Morgan Di Rodi and Maria Kopsachelli about their use of viewshed analysis and other techniques to refine understanding of multiple archaeological sites in northwest Greece. So we'll get to see viewsheds in action!

Like most forms of computational spatial analysis, viewshed work is most rigorously and uniformly accomplished with GIS software, supplied with appropriately scaled location and elevation data. To do it reliably by hand for most interesting cases would be impossible. These dependencies on software and data, and the know-how to use them effectively, should draw our attention to some important facts. First of all, assembling the prerequisites of non-trivial spatial analysis is challenging and time consuming. More than once, I've heard Prof. Knowles say that something like ninety percent of the time and effort in a historical GIS project goes into data collection and preparation.  Just as we depend on the labor of librarians, editors, philologists, Renaissance humanists, medieval copyists, and their allies for our ability to leverage the ancient literary tradition for scholarly work, so too we depend on the labor of mathematicians, geographers, programmers, surveyors, and their allies for the data and computational artifice we need to conduct viewshed analysis. This inescapable debt -- or, if you prefer, this vast interdisciplinary investment in our work -- is a topic to which I'd like to return at the end of the talk.

Before we turn our gaze to other methods, I'd like to talk briefly about other kinds of sheds. Watershed analysis -- the business of calculating the entire area drained and supplied by a particular water system -- is a well established method of physical geography and the inspiration for the name viewshed. It has value for cultural, economic, and historical study too, and so should remain on our spatial radar. In fact, Melissa Huber's talk on the Roman water infrastructure under Claudius will showcase this very method.

Among Sarah Bond's current research ideas is a "smells" map of ancient Rome. Where in the streets of ancient Rome would you have encountered the odors of a bakery, a latrine, or a fullonica? And -- God help you -- what would it have smelled like? Will it be possible at some point to integrate airflow and prevailing wind models with urban topography and location data to calculate "smellsheds" or "nosescapes" for particular installations and industries? I sure hope so! Sound sheds ought to be another interesting possibility; here we ought to look to the work of people like Jeff Veitch, who is investigating acoustics and architecture at Ostia, and the Virtual Paul's Cross project at North Carolina State.

Every bit as interesting as what the ancients could see, and from where they could see it, is the question of how they saw things in space and how they described them. Our curiosity about ancient geographic mental models and worldview drives us to ask questions like ones Richard Talbert has been asking: did the people living in a Roman province think of themselves as "of the province" in the way modern Americans think of themselves as being North Carolinians or Michiganders? Were the Roman provinces strictly administrative in nature, or did they contribute to personal or corporate identity in some way? Though not a field that has to be plowed only with computers, questions of ancient worldview do sometimes yield to computational approaches.

Consider, for example, the work of Elton Barker and colleagues under the rubric of the Hestia project. Here's how they describe it:
Using a digital text of Herodotus’s Histories, Hestia uses web-mapping technologies such as GIS, Google Earth and the Narrative TimeMap to investigate the cultural geography of the ancient world through the eyes of one of its first witnesses. 
In Hestia, word collocation -- a mainstay of computational text analysis -- is brought together and integrated with location-based measures to interrogate not only the spatial proximity of places mentioned by Herodotus, but also the textual proximities of those place references. With these keys, the Hestia team opens the door to Herodotus' geomind and that of the culture he lived in: what combinations of actual location, historical events, cultural assumptions, and literary agenda shape the mention of places in his narrative?

Hestia is not alone in exploring this particular frontier. Tomorrow we'll hear from Ryan Horne about his collaborative work on the Big Ancient Mediterranean project. Among its pioneering aspects is the incorporation of data about more than the collocation of placenames in primary sources and the relationships of the referenced places with each other. BAM also scrutinizes personal names and the historical persons to whom they refer. Who is mentioned with whom where? What can we learn from exploring the networks of connection that radiate from such intersections?

The introduction of a temporal axis into geospatial calculation and visualization is also usually necessary and instructive in spatial ancient studies, even if it still proves to be more challenging in standard GIS software than one might like. Amanda Coles has taken on that challenge, and will be telling us more about what it's helped her learn about the interplay between warfare, colonial foundations, road building, and the Roman elites during the Republic.

Viewsheds, worldviews, and temporality, oh my!

How about spatial economies? How close were sources of production to their markets? How close in terms of distance? How close in terms of travel time? How close in terms of cost to move goods?

Maybe we are interested in urban logistics. How quickly could you empty the Colosseum? How much bread could you distribute to how many people in a particular amount of time at a particular place? What were the constraints and capacities for transport of the raw materials? What do the answers to such questions reveal about the practicality, ubiquity, purpose, social reach, and costs of communal activities in the public space? How do these conclusions compare with the experiences and critiques voiced in ancient sources?

How long would it take a legion to move from one place to another in a particular landscape? What happens when we compare the effects of landscape on travel time with the built architecture of the limes or the information we can glean about unit deployment patterns from military documents like the Vindolanda tablets or the ostraca from Bu Njem?

The computational methods involved in these sorts of investigations have wonderful names, and like the others we've discussed, require spatial algorithms realized in specialized software. Consider cost surfaces: for a particular unit of area on the ground, what is the cost in time or effort to pass through it? Consider network cost models: for specific paths between nodal points, what is the cost of transit? Consider least cost path analysis: given a cost surface or network model, what is the cheapest path available between two points?
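
To make the terms concrete, here is a toy least cost path over an invented cost surface, computed with Dijkstra's algorithm via the networkx library; this is the textbook technique in miniature, not the machinery behind any particular project named here.

    import networkx as nx

    # A tiny cost surface: the effort required to enter each cell
    cost = [[1, 1, 5, 5],
            [1, 2, 5, 5],
            [1, 1, 1, 5],
            [5, 5, 1, 1]]

    G = nx.DiGraph()
    rows, cols = len(cost), len(cost[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # moving into a neighbouring cell costs that cell's value
                    G.add_edge((r, c), (nr, nc), weight=cost[nr][nc])

    # The cheapest route from the top-left corner to the bottom-right corner
    print(nx.shortest_path(G, (0, 0), (3, 3), weight="weight"))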

Many classicists will have used Orbis: The Stanford Geospatial Network Model of the Roman World. The Orbis team, assembled by Walter Scheidel, has produced an online environment in which one can query a network model of travel costs between key nodal points in the Roman world, varying such parameters as time of year and mode of transport. This model, and its digital modes of access, bring us to another vantage point. How close were two places in the Roman world, not as the crow flies, not in terms of miles along the road, but as the boat sailed or the feet walked.

Barbora Weissova is going to talk to us tomorrow about her work in and around Nicaea. Among her results, she will discuss another application of Least Cost Path Analysis: predicting the most likely route for a lost ancient roadway.

It's not just about travel, transport, and cost. Distribution patterns are of interest too, often combined with ceramic analysis, or various forms of isotopic or metallurgical testing, to assess the origin, dissemination, and implications of ancient objects found in the landscape. Inscriptions, coins, portable antiquities, architectural and artistic styles, pottery, all have been used in such studies. Corey Ellithorpe is going to give us a taste of this approach in numismatics by unpacking the relationship between Roman imperial ideology and regional distribution patterns of coins.

I'd like to pause here for just a moment and express my hope that you'll agree with the following assessment. I think we are in for an intellectual feast tomorrow. I think we should congratulate the organizers of the conference for such an interesting, and representative, array of papers and presentations. That there is on offer such a tempting smorgasbord is also, of course, all to the credit of the presenters and their collaborators. And surely it must be a factor as we consider the ubiquity and disciplinarity of spatial applications in ancient studies.

Assiduous students of the conference program will notice that I have neglected yet to mention a couple of the papers. Fear not, for they feature in the next section of my talk, which is -- to borrow a phrase from Meghan Trainor and Kevin Kadish -- all about that data.

So, conference presenters, would you agree with the dictum I've attributed to Anne Knowles? Does data collection and preparation take up a huge chunk of your time?

Spatial data, particularly spatial data for ancient studies, doesn't normally grow on trees, come in a jar, or sit on a shelf. The ingredients have to be gathered and cleaned, combined and cooked. And then you have to take care of it, transport it, keep track of it, and refashion it to fit your software and your questions. Sometimes you have to start over, hunt down additional ingredients, or try a new recipe. This sort of iterative work -- the cyclic remaking of the experimental apparatus and materials -- is absolutely fundamental to spatially informed research in ancient studies.

If you were hoping I'd grind an axe somewhere in this talk, you're in luck. It's axe grinding time.

There is absolutely no question in my mind that the collection and curation of data is part and parcel of research. It is a research activity. It has research outcomes. You can't answer questions without it. If you aren't surfacing your work on data curation in your CV, or if you're discounting someone else's work on data curation in decisions about hiring, tenure, and promotion, then I've got an old Bob Dylan song I'd like to play for you.

  • Archive and publish your datasets. 
  • Treat them as publications in your CV. 
  • Write a review of someone else's published dataset and try to get it published. 
  • Document your data curation process in articles and conference presentations.

Right. Axes down.

So, where does our data come from? Sometimes we can get some of it in prepared form, even if subsequent selection and reformatting is required. For some areas and scales, modern topography and elevation can be had in various raster and vector formats. Some specialized datasets exist that can be used as a springboard for some tasks. It's here that the Pleiades project, which I direct, seeks to contribute. By digitizing not the maps from the Barrington Atlas, but the places and placenames referenced on those maps and in the map-by-map directory, we created a digital dataset with potential for wide reuse. By wrapping it in a software framework that facilitates display, basic cartographic visualization, and collaborative updates, we broke out of the constraints of scale and cartographic economy imposed by the paper atlas format. Pleiades now knows many more places than the Barrington did, most of these outside the cultures with which the Atlas was concerned. More precise coordinates are coming in too, as are more placename variants and bibliography. All of this data is built for reuse. You can collect it piece by piece from the web interface or download it in a number of formats. You can even write programs to talk directly to Pleiades for you, requesting and receiving data in a computationally actionable form. The AWMC has data for reuse too, including historical coastlines and rivers and map base materials. It's all downloadable in GIS-friendly formats.
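
For example, each Pleiades place record can be fetched as JSON by appending /json to its URL; a minimal sketch (place 579885 is Athenae):

    import requests

    # Fetch the JSON representation of a single Pleiades place
    place = requests.get("https://pleiades.stoa.org/places/579885/json").json()
    print(place["title"])
    for location in place.get("locations", []):
        print(location.get("geometry"))  # coordinate geometry contributed for this place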

But Pleiades and the AWMC only help for some things. It's telling that only a couple of the projects represented at this conference made use of Pleiades data. That's not because Pleiades is bad or because the authors didn't know about Pleiades or the Center. It's because the questions they're asking require data that Pleiades is not designed to provide.

It's proof of the point I'd like to underline: usually -- because your research question is unique in some way, otherwise you wouldn't be pursuing it -- you're going to have to get your hands dirty with data collection.

But before we get dirty, I'm obliged to point out that, although Pleiades has received significant, periodic support from the National Endowment for the Humanities since 2006, the views, findings, conclusions, or recommendations expressed in this lecture do not necessarily reflect those of the National Endowment for the Humanities.

We've already touched on the presence of spatial language in literature. For some studies, the placenames, placeful descriptions, and narratives of space found in both primary and secondary sources constitute raw data we'd like to use. Identifying and extracting such data is usually a non-trivial task, and may involve a combination of manual and computational techniques, the latter depending on the size and tractability of the textual corpus in question and drawing on established methods in natural language processing and named entity recognition. It's here we may encounter "geoparsing" as a term of art. Many digital textual projects and collections are doing geoparsing: individual epigraphic and papyrological publications using the Text Encoding Initiative and EpiDoc Guidelines; the Perseus Digital Library; the Pelagios Commons by way of its Recogito platform. The China Historical GIS is built up entirely from textual sources, tracking each placename and each assertion of administrative hierarchy back to its testimony.

For your project, you may be able to find geoparsed digital texts that serve your needs, or you may need to do the work yourself. Either way, some transformation on the results of geoparsing is likely to be necessary to make them useful in the context of your research question and associated apparatus.

Relevant here is Micah Myers's conference paper. He is going to bring together for us the analysis and visualization of travel as narrated in literature. I gather from his abstract that he'll not only show us a case study of the process, but also discuss the inner workings of the on-line publication that has been developed to disseminate the work.

Geophysical and archaeological survey may supply your needs. Perhaps you'll have to do fieldwork yourself, or perhaps you can collect the information you need from prior publications or get access to archival records and excavation databases. Maybe you'll get lucky and find a dataset that's been published into OpenContext, the British Archaeology Data Service, or tDAR: the Digital Archaeological Record. But using this data requires constant vigilance, especially when it was collected for some purpose other than your own. What were the sampling criteria? What sorts of material were intentionally ignored? What circumstances attended collection and post-processing?

Sometimes the location data we need comes not from a single survey or excavation, but from a large number of possibly heterogeneous sources. This will be the case for many spatial studies that involve small finds, inscriptions, coins, and the like. Fortunately, many of the major documentary, numismatic, and archaeological databases are working toward the inclusion of uniform geographic information in their database records. This development, which exploits the unique identifying numbers that Pleiades bestows on each ancient place, was first championed by Leif Isaksen, Elton Barker, and Rainer Simon of the Pelagios Commons project. If you get data from a project like the Heidelberg Epigraphic Databank, papyri.info, the Arachne database of the German Archaeological Institute, the Online Coins of the Roman Empire, or the Perseus Digital Library, you can count on being able to join it easily with Pleiades data and that of other Pelagios partners. Hopefully this will save some of us some time in days to come.
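As a toy illustration of why those shared identifiers matter, the following Python sketch joins two invented record sets on Pleiades URIs. The field names and values are made up; real exports differ from project to project, but the join logic is the same.

    # Two hypothetical datasets keyed to the same Pleiades URIs.
    inscriptions = [
        {"id": "HD000001", "findspot": "https://pleiades.stoa.org/places/423025"},
        {"id": "HD000002", "findspot": "https://pleiades.stoa.org/places/579885"},
    ]
    coordinates = {
        "https://pleiades.stoa.org/places/423025": (12.486, 41.891),
        "https://pleiades.stoa.org/places/579885": (23.726, 37.976),
    }

    # Attach coordinates to each inscription record via the shared URI.
    for rec in inscriptions:
        rec["lonlat"] = coordinates.get(rec["findspot"])

    print(inscriptions)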

Sometimes what's important from a prior survey will come to us primarily through maps and plans. Historical maps may also carry information we'd like to extract and interpret. There's a whole raft of techniques associated with the scanning, georegistration, and georectification (or warping) of maps so that they can be layered and subjected to feature tracing (or extraction) in GIS software. Some historic cartofacts -- one thinks of the Peutinger map and medieval mappae mundi as examples -- are so out of step with our expectations of cartesian uniformity that these techniques don't work. Recourse in such cases may be had to first digitizing features of interest in the cartesian plane of the image itself, assigning spatial locations to features later on the basis of other data. Digitizing and vectorizing plans and maps resulting from multiple excavations in order to build up a comprehensive archaeological map of a region or site also necessitates not only the use of GIS software but the application of careful data management practices for handling and preserving a collection of digital files that can quickly grow huge.
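For a back-of-the-envelope sense of what georegistration involves, here is a Python sketch that estimates an affine transform from scanned-map pixel coordinates to geographic coordinates using ground control points. The control points are invented, and real workflows use GIS tooling and often non-affine warps; this shows only the underlying idea.

    import numpy as np

    # Ground control points: (pixel_x, pixel_y) -> (lon, lat), invented values.
    pixels = np.array([[100.0, 200.0], [800.0, 220.0], [450.0, 900.0]])
    geo    = np.array([[12.40, 41.95], [12.60, 41.94], [12.50, 41.80]])

    # Solve geo = [px, py, 1] @ A for the 3x2 affine matrix A by least squares.
    design = np.hstack([pixels, np.ones((len(pixels), 1))])
    A, *_ = np.linalg.lstsq(design, geo, rcond=None)

    def pixel_to_geo(px, py):
        """Map a pixel location on the scanned map to lon/lat."""
        return np.array([px, py, 1.0]) @ A

    print(pixel_to_geo(500, 500))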

We'll get insight into just such an effort tomorrow when Tim Shea reports on Duke's "Digital Athens Project".

Let's not forget remote sensing! In RS we use sensors -- devices that gather information in various sections of the electro-magnetic spectrum or that detect change in local physical phenomena. We mount these sensors on platforms that let us take whatever point of view is necessary to achieve the resolution, scale, and scope of interest: satellites, airplanes, drones, balloons, wagons, sleds, boats, human hands. The sensors capture emitted and reflected light in the visible, infrared, and ultraviolet wavelengths or magnetic or electrical fields. They emit and detect the return of laser light, radio frequency energy, microwaves, millimeter waves, and, especially underwater, sound waves. Specialized software is used to analyze and convert such data for various purposes, often into rasterized intensity or distance values that can be visualized by assigning brightness and color scales to the values in the raster grid. Multiple images are often mosaicked together to form continuous images of a landscape or 360 degree seamless panoramas.
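As a small illustration of that last step, the Python sketch below normalizes a grid of raw values and renders it with a color scale; the "sensor" array is synthetic stand-in data, not a real measurement.

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic stand-in for a grid of sensor readings.
    raster = np.random.gamma(shape=2.0, scale=50.0, size=(256, 256))

    # Stretch values to the 0-1 range so a brightness/color scale can be applied.
    norm = (raster - raster.min()) / (raster.max() - raster.min())

    plt.imshow(norm, cmap="viridis")
    plt.colorbar(label="normalized intensity")
    plt.title("Rasterized sensor values (synthetic)")
    plt.show()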

Remotely sensed data facilitate the detection and interpretation of landforms, vegetation patterns, and physical change over time, revealing or enhancing understanding of built structures and exploited landscapes, as well as their conservation. This is the sort of work that Sarah Parcak has been popularizing, but it too has decades of practice behind it. In 1990, Tom Sever's dissertation reported on a remote-sensing analysis of the Anasazi road system, revealing a component of the built landscape that was not only invisible on the ground, but that demonstrated the Anasazi were far more willing than even the Romans to create arrow-straight roads in defiance of topographical impediments. More recently, Prof. Sever and his NASA colleague Daniel Irwin have been using RS data for parts of Guatemala, Honduras, and Mexico to distinguish between vegetation that thrives in alkaline soils and vegetation that doesn't. Because of the Mayan penchant for coating monumental structures with significant quantities of lime plaster, this data has proved remarkably effective in locating previously unknown structures beneath the forest canopy. The results seem likely to overturn prevailing estimates of the extent of Mayan urbanism, illustrating a landscape far more cleared and built upon than heretofore proposed (cf. Sever 2003).

Given the passion with which I've already spoken about the care and feeding of data, you'll be unsurprised to learn that I'm really looking forward to Nevio Danelon's presentation tomorrow on the capture and curation of remotely sensed data in a digital workflow management system designed to support visualization processes.

I think it's worth noting that both Professor Parcak's recent collaborative work on a possible Viking settlement in Newfoundland and Prof. Sever's dissertation represent a certain standard in the application of remote sensing to archaeology. RS analysis is tried or adopted for most archaeological survey and excavation undertaken today. The choice of sensors, platforms, and analytical methods will of course vary in response to landscape conditions, expected archaeological remains, and the realities of budget, time, and know-how.

Similarly common, I think, in archaeological projects is the consideration of geophysical, alluvial, and climatic features and changes in the study area. The data supporting such considerations will come from the kinds of sources we've already discussed, and will have to be managed in appropriate ways. But it's in this area -- ancient climate and landscape change -- that I think ancient studies has a major deficit in both procedure and data. Computational, predictive modeling of ancient climate, landscape, and ground cover has made no more than tentative and patchy inroads on the way we think about and map the ancient historical landscape. That's a deficit that needs addressing in an interdisciplinary and more comprehensive way.

I'd be remiss if, before moving on to conclusions, I kept the focus so narrowly on research questions and methods that we missed the opportunity to talk about pedagogy, public engagement, outreach, and cultural heritage preservation. Spatial practice in the humanities is increasingly deeply involved in such areas. The Ancient World Mapping Center's Antiquity À-la-carte website enables users to create and refine custom maps from Pleiades and other data that can then be cited, downloaded, and reused. It facilitates the creation of map tests, student projects, and maps to accompany conference presentations and paper submissions.

Meanwhile, governments, NGOs, and academics alike are bringing the full spectrum of spatial methods to bear as they try to prevent damage to cultural heritage sites through assessment, awareness, and intervention. The American Schools of Oriental Research conducts damage assessments and site monitoring with funding in part from the US State Department. The U.S. Committee of the Blue Shield works with academics to prepare geospatial datasets that are offered to the Department of Defense to enhance compliance with the 1954 Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict.

These are critical undertakings as well, and should be considered an integral part of our spatial antiquity practice.

So, how should we gather up the threads of this discussion so we can move on to the more substantive parts of the conference?

I'd like to conclude as I began, by recommending an essay. In this case, I'm thinking of Bethany Nowviskie's recent essay on "capacity and care" in the digital humanities. Bethany is the former director of UVA's Scholars' Lab. She now serves as Director of the Digital Library Federation at the Council on Library and Information Resources. I had the great good fortune to hear Bethany deliver a version of this essay as a keynote talk at a project directors' meeting hosted by the NEH Office of Digital Humanities in Washington in September of last year. You can find the essay version on her personal website.

Bethany thinks the humanities must expand their capacity in order not only to survive the 21st century, but to contribute usefully to its grand challenges. To cope with increasing amounts of, and needs for, data of every kind. To move gracefully in analysis and regard from large scales to small ones, and to connect analysis at both levels. To address audiences and serve students in an expanding array of modes. To collaborate across disciplines and heal the structurally weakening divisions that exist between faculty and "alternative academics", even as the entire edifice of faculty promotion and tenure threatens to shatter around us.

What is Bethany's prescription? An ethic of care. She defines an ethic of care as "a set of practices", borrowing the following quotation from the political scientist Joan Tronto:
a species of [collective] activity that includes everything we do to maintain, continue, and repair our world, so that we can live in it as well as possible.
I think our practice of spatial humanities in ancient studies is just such a collective activity. We don't have to turn around much to know that we are cradled in the arms and buoyed up on the shoulders of a vast cohort, stretching back through time and out across the globe. Creating data and handing it on. Debugging and optimizing algorithms. Critiquing ideas and sharpening analytical tools.

The vast majority of projects on the conference schedule, or that I could think of to mention in my talk, are explicitly and immediately collaborative.

And we can look around this room and see like-minded colleagues galore. Mentors. Helpers. Friends. Comforters. Makers. Guardians.

And we have been building the community infrastructure we need to carry on caring about each other and about the work we do to explain the human past to the human present and to preserve that understanding for the human future. We have centers and conferences and special interest groups and training sessions. We involve undergraduates in research and work with interested people from outside the academy. We have increasingly useful datasets and increasingly interconnected information systems. Will all these things persist? No, but we get to play a big role in deciding what and when and why.

So if there's a stable orbit to be found, I think it's in continuing to work together and to do so mindfully, acknowledging our debts to each other and repaying them in kind.

I'm reminded of a conversation I had with Scott Madry, back in the early aughts when we were just getting the Mapping Center rolling and Pleiades was just an idea. As many of you know, Scott, together with Carole Crumley and numerous other collaborators here at UNC and beyond, has been running a multidimensional research project in Burgundy since the 1970s. At one time or another the Burgundy Historical Landscapes project has conducted most of the kinds of studies I've mentioned tonight, all the while husbanding a vast and growing store of spatial and other data across a daunting array of systems and formats.

I think the conversation I'm remembering with Scott took place after he'd spent a couple of hours teaching undergraduate students in my seminar on Roman roads and land travel how to do orthophoto analysis the old-fashioned way: with stereo prints and stereoscopes. He was having them do the Sarah Parcak thing: looking for crop marks and other indications of potentially buried physical culture. After the students had gone, Scott and I were commiserating about the challenges of maintaining and funding long-running research projects. I was sympathetic, but I know now that I really didn't understand those challenges then. Scott did, and I remember what he said: "We were standing on that hill in Burgundy twenty years ago, and as we looked around I said to Carole: 'somehow, we are going to figure out what happened here, no matter how long it takes.'"

That's what I'm talking about.

March 13, 2016

Current Epigraphy

EAGLE Storytelling App available on wordpress.org

The EAGLE Storytelling App is a WordPress plugin that allows users to write blog posts, news items, stories and narratives that cite and embed content from various web repositories related to the Ancient World (such as Pelagios, the iDAI.gazetteer, Finds.org and many more).

The web app is available on the EAGLE project's official website. Users can create an EAGLE account, start writing their epigraphy-related narratives right away, and publish them on the Stories page.

[Image: the interface to insert epigraphy-related content]

And now, epigraphers who want to experiment with the application can also install it easily on their own WordPress-powered site from the official plugin repository!

The application is designed to work within the EAGLE user-dedicated ecosystem (the search engine and the EAGLE collection of inscriptions and images), but it’s easily customizable: new plugins to parse and embed content from various sources can be implemented with minimal effort.

Currently, the EAGLE Storytelling App supports content from the repositories mentioned above, including Pelagios, the iDAI.gazetteer and Finds.org.

What’s more, we provide an “EpiDoc generic reader” that can transform any EpiDoc-compliant XML file into a human-readable edition, with formatted text, images and the accompanying metadata.
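To give a flavor of the kind of transformation such a reader performs, here is a minimal Python sketch that loads an EpiDoc file with lxml and prints the edition text as plain, human-readable lines. The file name is hypothetical, and the real EAGLE reader does far more (apparatus, images, formatting).

    from lxml import etree

    TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

    tree = etree.parse("inscription.xml")  # hypothetical EpiDoc file

    # The edition text conventionally lives in <div type="edition">.
    for ab in tree.xpath('//tei:div[@type="edition"]//tei:ab', namespaces=TEI_NS):
        text = " ".join(" ".join(ab.itertext()).split())  # flatten markup, tidy spaces
        print(text)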

[Image: a map displaying the locations cited in the EAGLE stories]

Embedded content is visualized as a compact interactive box that can be expanded by the users or (in the case of maps inserted from the iDAI.gazetteer and Pleiades) navigated. Users can also visualize the object in its original webpage or launch a query to retrieve all other stories that embed the same item.

The app makes it very easy for authors to insert content from the supported repositories. A simple interface allows users to search the supported websites from the “Add Media” menu of WordPress. Alternatively, authors can use the native search pages on the supported sites and then copy-paste the URL into the search interface of the app, or even directly into the story editor!

If you want to know more about our application, visit our FAQ page or simply browse some of the stories that our users have published.


Francesco Mambrini, Philipp Franck

March 04, 2016

Current Epigraphy

EpiDoc at Summer School in Digital Humanities (Sep 2016, Hissar, Bulgaria)

The Centre for Excellence in the Humanities at the University of Sofia, Bulgaria, is organizing a Summer School in Digital Humanities jointly with an international team of lecturers and researchers in the field. The Summer School will take place on 05-10 September 2016 and is targeted at historians, archaeologists, classical scholars, philologists, museum and conservation workers, linguists, researchers in translation and reception studies, specialists in cultural heritage and cultural management, textual critics and other humanities scholars with little to moderate IT skills who would like to enhance their competences. The Summer School will provide four introductory modules on the following topics:

  • Text encoding and interchange by Gabriel Bodard, University of London, and Simona Stoyanova, King’s College London: TEI, EpiDoc XML (http://epidoc.sourceforge.net/), marking up of epigraphic monuments, authority lists, linked open data for toponymy and prosopography: SNAP:DRGN (http://snapdrgn.net/), Pelagios (http://pelagios-project.blogspot.bg/), Pleiades (http://pleiades.stoa.org/).
  • Text and image annotation and alignment by Simona Stoyanova, King’s College London, and Polina Yordanova, University of Sofia: SoSOL Perseids tools (http://perseids.org), Arethusa grammatical annotation and treebanking of texts, Alpheios text and translation alignment, text/image alignment tools.
  • Geographical Information Systems and Neogeography by Maria Baramova, University of Sofia, and Valeria Vitale, King’s College London: Historical GIS, interactive map layers with historical information, using GeoNames (http://www.geonames.org/) and geospatial data, Recogito tool for Pelagios.
  • 3D Imaging and Modelling for Cultural Heritage by Valeria Vitale, King’s College London: photogrammetry, digital modelling of indoor and outdoor objects of cultural heritage, Meshmixer (http://www.meshmixer.com/), Sketchup (http://www.sketchup.com/) and others.

The school is open for applications from MA and PhD students, postdoctoral and early-career researchers from all humanities disciplines, and employees in the field of cultural heritage. Applicants should send a CV and a motivation statement clarifying their specific needs and expressing interest in one or more of the modules no later than 15.05.2016. Places are limited; you will be notified of your acceptance within 10 working days after the application deadline. Transfer from Sofia to Hissar and back, accommodation, and meal expenses during the Summer School are covered by the organizers. Five scholarships of 250 euro will be awarded by the organizing committee to the participants whose work and motivation are deemed the most relevant and important.

The participation fee is 40 euro. It covers coffee breaks, the social programme and materials for the participants.

Please submit your applications to dimitar.illiev@gmail.com.

ORGANISING COMMITTEE
Assoc. Prof. Dimitar Birov (Department of Informatics, University of Sofia)
Dr. Maria Baramova (Department of Balkan History, University of Sofia)
Dr. Dimitar Iliev (Department of Classics, University of Sofia)
Mirela Hadjieva (Centre for Excellence in the Humanities, University of Sofia)
Dobromir Dobrev (Centre for Excellence in the Humanities, University of Sofia)
Kristina Ferdinandova (Centre for Excellence in the Humanities, University of Sofia)

February 24, 2016

Stoa

EpiDoc Workshop, London, April 11-15, 2016

We invite applications for a 5-day training workshop on digital editing of epigraphic and papyrological texts, to be held in the Institute of Classical Studies, University of London, April 11-15, 2016. The workshop will be taught by Gabriel Bodard (ICS), Simona Stoyanova (KCL) and Pietro Liuzzo (Heidelberg / Hamburg). There will be no charge for the workshop, but participants should arrange their own travel and accommodation.

EpiDoc (epidoc.sf.net) is a community of practice that maintains guidelines for using TEI XML to encode inscriptions, papyri and other ancient texts. It has been used to publish digital projects including the Inscriptions of Aphrodisias, the Inscriptions of Roman Tripolitania, the Duke Databank of Documentary Papyri, the Digital Corpus of Literary Papyri, and the EAGLE Europeana Project. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions in TEI, identifying and linking to external person and place authorities, and using the online Papyrological Editor and Perseids platforms.

No technical skills are required, but a working knowledge of Greek, Latin or another ancient language, of epigraphy or papyrology, and of the Leiden Conventions will be assumed. The workshop is open to participants at all levels, from graduate students to professors and professionals.

To apply for a place on this workshop, please email pietro.liuzzo@zaw.uni-heidelberg.de by 6th March 2016 with a brief description of your reason for interest, summarising your relevant background and experience. Please use as the subject of your email “[EPIDOC LONDON 2016] application <yourname>”.