EpiDoc: News and Views

http://planet.atlantides.org/epidoc

Tom Elliott (tom.elliott@nyu.edu)

This feed aggregator is part of the Planet Atlantides constellation. Its current content is available in multiple webfeed formats, including Atom, RSS/RDF and RSS 1.0. The subscription list is also available in OPML and as a FOAF Roll. All content is assumed to be the intellectual property of the originators unless they indicate otherwise.

October 06, 2017

Current Epigraphy

Visible Words workshop (Brown, Oct 6-7, 2017)

Posted for John Bodel:

Visible Words: Digital Epigraphy in a Global Perspective
An international workshop at Brown University, Providence, R.I.
John D. Rockefeller Library, 6-7 October 2017

This workshop, which is free and open to the public, will bring together experts in the epigraphic cultures of different languages and script traditions from the ancient Mediterranean and Near East, South and East Asia, and Mesoamerica who are also involved in creating or developing digital editions and databases of inscriptions. Over a day and a half of short presentations and group round-table discussions, participants will explore shared interests and challenges.

The workshop will be immediately preceded by an EpiDoc workshop (5-6 October) designed to introduce the basics of the EpiDoc editing system; the EpiDoc workshop is also free and open to the public, but space in it is limited and advance registration is required. For further information about both workshops, see: https://www.brown.edu/academics/classics/visible-words-workshop

September 18, 2017

Current Epigraphy

EpiDoc Workshop — Brown University, Oct. 5-6 2017

Posted for Scott DiGiulio:

We are pleased to announce a 1.5 day Introduction to EpiDoc workshop at Brown University on Oct. 5-6, to be held in conjunction with the conference “Visible Words: Epigraphy in a Global Perspective,” taking place Oct. 6-7 (for more information on the conference itself, please see https://www.brown.edu/academics/classics/visible-words-workshop).

This workshop will provide an introduction to the EpiDoc schema for editing epigraphic and papyrological texts. EpiDoc (epidoc.sf.net) is a community of practice as well as a specialized customization of the XML schema developed by the Text Encoding Initiative (TEI) for the encoding of inscriptions, papyri and other ancient texts. It has been used to publish digital epigraphic projects including the Inscriptions of Aphrodisias (http://insaph.kcl.ac.uk), the US Epigraphy Project (http://usepigraphy.brown.edu), the Duke Databank of Documentary Papyri (http://papyri.info/), the Digital Corpus of Literary Papyri (https://wiki.digitalclassicist.org/Digital_Corpus_of_Literary_Papyri), and many more. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions, identifying and linking to external person and place authorities, and project workflow and management.

Draft Schedule: http://goo.gl/VNNhBE

The workshop is limited to 25 participants, so please fill out the application form: https://goo.gl/forms/Qxf8lnNiwuZJQZd02.

Instructors will include Scott DiGiulio (Mississippi State University), Elli Mylonas (Brown University), Hugh Cayless (Duke University), Tom Elliott (NYU), and others.

September 03, 2017

Current Epigraphy

Collection of Greek Ritual Norms (CGRN)

reposted from Classicist and MARKUP list

Collection of Greek Ritual Norms (abbreviated CGRN):

http://cgrn.ulg.ac.be/

The collection contains 222 inscriptions belonging to the category of so-called “sacred laws”, for which we have preferred the designation “ritual norms” (on this subject, see the programmatic introductory article here: http://kernos.revues.org/2115). The texts included in the collection so far concern the themes of sacrifice and purification. Each inscription is presented in an up-to-date published edition (occasionally, a new edition is offered), with information about its context, essential bibliography, French and English translations, and a detailed commentary. All of the files have been encoded in TEI XML EpiDoc, are fully searchable, and may be used and downloaded in open access.

The project behind the development of the CGRN, financed by the Fonds pour la recherche scientifique (F.R.S.-FNRS, Belgium) at the University of Liège, is still ongoing. Updates to the website will be made on an annual basis. Additionally, a print-on-demand edition will soon become available on the website. The printing, sale and distribution of the work will be undertaken by Éditions De Boccard.

We are very interested in receiving your feedback about the CGRN at the following address: cgrn@ulg.ac.be.

Vinciane Pirenne-Delforge, Jan-Mathieu Carbon, Saskia Peels

August 30, 2017

Current Epigraphy

Digital Edition of IGCyr and GVCyr

The Inscriptions of Greek Cyrenaica and Greek Verse Inscriptions of Cyrenaica have been published and are now available at https://igcyr.unibo.it/

Dobias-Lalou, Catherine. Inscriptions of Greek Cyrenaica in collaboration with Alice Bencivenni, Hugues Berthelot, with help from Simona Antolini, Silvia Maria Marengo, and Emilio Rosamilia; Dobias-Lalou, Catherine. Greek Verse Inscriptions of Cyrenaica in collaboration with Alice Bencivenni, with help from Joyce M. Reynolds and Charlotte Roueché. Bologna: CRR-MM, Alma Mater Studiorum Università di Bologna, 2017. ISBN 9788898010684, http://doi.org/10.6092/UNIBO/IGCYRGVCYR.

The corpus, edited in TEI EpiDoc, is available in English, French, Italian and Arabic. It contains almost 1000 inscriptions, including numerous new texts, with metadata, images, translations in English, Italian and French, apparatus, commentary and bibliography.

Texts and descriptions can be searched and browsed through a number of indexes.

Images are also separately available at http://amshistorica.unibo.it/epigrafi

August 08, 2017

Horothesia (Tom Elliott)

Batch XML validation at the command line

Updated: 8 August, 2017 to reflect changes in the installation pattern for jing.

Against a RelaxNG schema. I had help figuring this out from Hugh and Ryan at DC3:

$ find {searchpath} -name "*.xml" -print | parallel --tag jing {relaxngpath}
The find command hunts down all files ending with ".xml" in the directory tree under searchpath. The parallel command takes that list of files and fires off (in parallel) a jing validation run for each of them. The --tag option passed to parallel ensures we get the name of the file prefixed to each error message. In my experience this turns out to be much faster than running each jing call in sequence, e.g. with the -exec primary in find.

As I'm running on a Mac, I had to install GNU Parallel and the Jing RelaxNG Validator. That's what Homebrew is for:
$ brew install jing-trang
$ brew install parallel
NB: you may have to install an older version of Java before you can get the jing-trang formula to work in Homebrew (e.g., brew install java6).

What's the context, you ask? I have lots of reasons to want to be able to do this. The proximal cause was batch-validating all the EpiDoc XML files for the inscriptions that are included in the Corpus of Campā Inscriptions before regenerating the site for an update today. I wanted to see quickly if there were any encoding errors in the XML that might blow up the XSL transforms we use to generate the site. So, what I actually ran was:
$ curl -O http://www.stoa.org/epidoc/schema/latest/tei-epidoc.rng
$ find ./texts/xml -name '*.xml' -print | parallel --tag jing tei-epidoc.rng
Thanks to everybody who built all these tools!


August 02, 2017

Stoa

OEDUc: Exist-db mashup application

Exist-db mashup application working group

This working group developed a demo app built with exist-db, a native XML database which uses XQuery.

The app is ugly, but it was built in a bit less than two days (the day of the unconference and a bit of the following day) by reusing various existing bits and pieces. It draws on different data sources, using different methods, to bring together useful resources for an epigraphic corpus, and it works for most of the examples we wanted to support. This was possible because exist-db makes it easy, and because all the pieces were already available (exist-db, the XSLT, the data, etc.).

Code, without data, has been copied to https://github.com/EpiDoc/OEDUc.

The app, loaded with data from the July EDH data dumps, is accessible at http://betamasaheft.aai.uni-hamburg.de:8080/exist/apps/OEDUc/

Preliminary tweaks to the data included:

  • Adding an @xml:id to the text element to speed up retrieval of items in exist-db (the XQuery doing this is in the AddIdToTextElement.xql file).
  • Note that there are no Pleiades ids in the EDH XML (or in any EAGLE dataset), but there are Trismegistos Geo IDs! This is because during the EAGLE project the plan was to gather all places of provenance in Trismegistos GEO and map them to Pleiades later. This mapping was started using Wikidata mix’n’match but is far from complete and is currently in need of updating.

The features

  • In the list view you can select an item. Each item can be edited normally (create, update, delete)
  • The editor that updates files reproduces, in simple XSLT, part of the Leiden+ logic and conventions, so that you can enter new data or update existing data. It validates the data against the tei-epidoc.rng schema after performing the changes; the plan is to have it validate before the real changes are made.
  • The search simply searches a number of indexed elements; it is not a full-text index. Range indexes are also set to speed up the queries, besides the other indexes shipped with exist-db.
  • You can create a new entry with the Leiden+-like editor and save it. It will first be validated, and if it is not valid you are pointed to the problems. There was not enough time to add the vocabularies and update the editor.
  • Once you view an item you will find, in admittedly ugly tables, a first section with metadata, the text, some additional information on persons, and a map.
  • The text exploits some of the parameters of the EpiDoc Stylesheets. You can change the desired value, hit change, and see the different output.
  • The ids of corresponding inscriptions are pulled from the EAGLE ids API here in Hamburg, using Trismegistos data. This app will hopefully soon be moved to Trismegistos itself.
  • The EDH id is instead used to query the EDH data API and get the information about persons, which is printed below the text.
  • For each element with a @ref in the XML files you will find the name of the element and a link to the value, e.g. a link to the EAGLE vocabularies.
  • If the value is a TM Geo ID, the id is used to query the Wikidata SPARQL endpoint and retrieve coordinates and the corresponding Pleiades id (where these are present). The same logic could be used for VIAF, Geonames, etc. This is done via an HTTP request directly in the XQuery powering the app (see the Python sketch after this list).
  • The Pleiades id thus retrieved (which could certainly be obtained in other ways) is then used in JavaScript to query Pelagios and print the map below (taken from the hello-world example in the Pelagios repository).
  • At http://betamasaheft.aai.uni-hamburg.de/api/OEDUc/places/all and http://betamasaheft.aai.uni-hamburg.de/api/OEDUc/places/all/void, two RESTXQ functions provide the TTL files for Pelagios (though not yet a dump, as required, although this can be done). The places annotations cover, at the moment, only the first 20 entries. See rest.xql.
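To illustrate the Wikidata lookup described above (the app itself does this in XQuery), here is a minimal Python sketch, assuming the Wikidata properties P1958 (Trismegistos Geo ID), P625 (coordinates) and P1584 (Pleiades ID); the TM Geo ID used in the call is purely illustrative:

    import requests

    def lookup_tm_geo(tm_geo_id):
        # Find the Wikidata item carrying this Trismegistos Geo ID (P1958),
        # then fetch its coordinates (P625) and Pleiades id (P1584) if present.
        query = """
            SELECT ?place ?coord ?pleiades WHERE {
              ?place wdt:P1958 "%s" .
              OPTIONAL { ?place wdt:P625 ?coord . }
              OPTIONAL { ?place wdt:P1584 ?pleiades . }
            }""" % tm_geo_id
        r = requests.get("https://query.wikidata.org/sparql",
                         params={"query": query, "format": "json"})
        r.raise_for_status()
        return r.json()["results"]["bindings"]

    print(lookup_tm_geo("169"))  # illustrative TM Geo ID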

Future tasks

For the purpose of having a sample app that helps people get started with their own projects and see some of the possibilities at work, besides making the app a bit nicer, it would be useful to add the following:

  • Add more data from the EDH API, especially from edh_geography_uri, which Frank has added and which holds the URI of the geographic data; appending .json to this URI returns the JSON data for the place of finding, which includes an edh_province_uri with the data about the province (see the sketch after this list).
  • Validate before submitting
  • Add more support for parameters in the EpiDoc example XSLT (e.g. for the Zotero bibliography contained in div[@type='bibliography'])
  • Improve the upconversion and the editor with more, and more precise, matchings
  • Provide functionality to use xpath to search the data
  • Add advanced search capabilities to filter results by id, content provider, etc.
  • Add images support
  • Include all EAGLE data (currently only the EDH dump data is in, but the system scales nicely)
  • Include queries to the EAGLE MediaWiki of translations (API currently unavailable)
  • Show related items based on any of the values
  • Include in the editor the possibility to tag named entities
  • Sync the EpiDoc XSLT repository and the EAGLE vocabularies with a webhook
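As a sketch of the first future task above: the report says that appending .json to edh_geography_uri returns the JSON data for the place of finding, including an edh_province_uri key. A minimal Python fetch under those assumptions (field names are taken from the report, not verified against the live API) might look like this:

    import requests

    def place_of_finding(edh_geography_uri):
        # Per the report, appending .json to the geography URI returns
        # JSON data about the place of finding.
        r = requests.get(edh_geography_uri + ".json")
        r.raise_for_status()
        data = r.json()
        # The response is said to include an "edh_province_uri" key
        # pointing to data about the province.
        return data.get("edh_province_uri")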

June 19, 2017

Stoa

OEDUc: EDH and Pelagios NER working group

Participants:  Orla Murphy, Sarah Middle, Simona Stoyanova, Núria Garcia Casacuberta

Report: https://github.com/EpiDoc/OEDUc/wiki/EDH-and-Pelagios-NER

The EDH and Pelagios NER working group was part of the Open Epigraphic Data Unconference held on 15 May 2017. Our aim was to use Named Entity Recognition (NER) on the text of inscriptions from the Epigraphic Database Heidelberg (EDH) to identify placenames, which could then be linked to their equivalent terms in the Pleiades gazetteer and thereby integrated with Pelagios Commons.

Data about each inscription, along with the inscription text itself, is stored in one XML file per inscription. In order to perform NER, we therefore first had to extract the inscription text from each XML file (contained within <ab></ab> tags), then strip out any markup from the inscription to leave plain text. There are various Python libraries for processing XML, but most of these turned out to be a bit too complex for what we were trying to do, or simply returned the identifier of the <ab> element rather than the text it contained.

Eventually, we found the Python library Beautiful Soup, which converts an XML document to structured text, from which you can identify your desired element, then strip out the markup to convert the contents of this element to plain text. It is a very simple and elegant solution with only eight lines of code to extract and convert the inscription text from one specific file. The next step is to create a script that will automatically iterate through all files in a particular folder, producing a directory of new files that contain only the plain text of the inscriptions.
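A minimal sketch of such a batch script, assuming the inscription text sits in an <ab> element in each file and that the lxml parser is installed for Beautiful Soup's XML mode; the folder names are illustrative:

    import os
    from bs4 import BeautifulSoup

    IN_DIR, OUT_DIR = "edh_xml", "edh_txt"  # illustrative folder names
    os.makedirs(OUT_DIR, exist_ok=True)

    for name in os.listdir(IN_DIR):
        if not name.endswith(".xml"):
            continue
        with open(os.path.join(IN_DIR, name), encoding="utf-8") as f:
            soup = BeautifulSoup(f, "xml")  # XML mode requires lxml
        ab = soup.find("ab")  # the element holding the inscription text
        if ab is None:
            continue
        plain = " ".join(ab.get_text().split())  # strip markup and whitespace
        with open(os.path.join(OUT_DIR, name[:-4] + ".txt"), "w",
                  encoding="utf-8") as out:
            out.write(plain)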

Once we have a plain text file for each inscription, we can begin the process of named entity extraction. We decided to follow the methods and instructions shown in the two Sunoikisis DC classes on Named Entity Extraction:

https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-I

https://github.com/SunoikisisDC/SunoikisisDC-2016-2017/wiki/Named-Entity-Extraction-II

Here is a short outline of the steps this might involve when it is done in the future (a minimal baseline sketch follows the outline).

  1. Extraction
    1. Split text into tokens, make a Python list
    2. Create a baseline
      1. cycle through each token of the text
      2. if the token starts with a capital letter it’s a named entity (only one type, i.e. Entity)
    3. Classical Language Toolkit (CLTK)
      1. for each token in a text, the tagger checks whether that token is contained within a predefined list of possible named entities
      2. Compare to baseline
    4. Natural Language Toolkit (NLTK)
      1. Stanford NER Tagger for Italian works well with Latin
      2. Differentiates between different kinds of entities: place, person, organization or none of the above, more granular than CLTK
      3. Compare to both baseline and CLTK lists
  2. Classification
    1. Part-of-speech (POS) tagging – a precondition for any other advanced operation on a text; provides information on the word class (noun, verb, etc.); TreeTagger
    2. Chunking – sub-dividing a section of text into phrases and/or meaningful constituents (which may include 1 or more text tokens); export to IOB notation
    3. Computing entity frequency
  3. Disambiguation
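As an illustration of the baseline step (1.2) above, a minimal Python sketch that treats every capitalised token as a named entity of the single type “Entity”:

    def baseline_entities(text):
        # 1.1: split the text into tokens (a Python list)
        tokens = text.split()
        # 1.2: any token starting with a capital letter counts as a
        # named entity of the single type "Entity"
        return [(tok, "Entity") for tok in tokens if tok[:1].isupper()]

    print(baseline_entities("Imp Caesar divi f Augustus pontifex maximus"))
    # [('Imp', 'Entity'), ('Caesar', 'Entity'), ('Augustus', 'Entity')]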

Although we didn’t make as much progress as we would have liked, we have achieved our aim of creating a script to prepare individual files for NER processing, and have therefore laid the groundwork for future developments in this area. We hope to build on this work to successfully apply NER to the inscription texts in the EDH in order to make them more widely accessible to researchers and to facilitate their connection to other, similar resources, like Pelagios.

June 13, 2017

Stoa

OEDUc: Images and Image metadata working group

Participants: Sarah Middle, Angie Lumezeanu, Simona Stoyanova
Report: https://github.com/EpiDoc/OEDUc/wiki/Images-and-image-metadata


The Images and Image Metadata working group met at the London meeting of the Open Epigraphic Data Unconference on 15 May 2017 and discussed the issues of copyright, metadata formats, image extraction and licence transparency in the Epigraphik Fotothek Heidelberg, the database which contains images and metadata relating to nearly forty thousand Roman inscriptions from collections around the world. Were the EDH to lose its funding and the website its support, one of the biggest and most useful digital epigraphy projects would start to disintegrate. While its data is available for download, its usability would be greatly compromised. Thus, this working group focused on issues pertaining to the EDH image collection. The materials we worked with are the JPG images as seen on the website and the image metadata files, which are available as XML and JSON data dumps on the EDH data download page.

The EDH Photographic Database index page states: “The digital image material of the Photographic Database is with a few exceptions directly accessible. Hitherto it had been the policy that pictures with unclear utilization rights were presented only as thumbnail images. In 2012 as a result of ever increasing requests from the scientific community and with the support of the Heidelberg Academy of the Sciences this policy has been changed. The approval of the institutions which house the monuments and their inscriptions is assumed for the non commercial use for research purposes (otherwise permission should be sought). Rights beyond those just mentioned may not be assumed and require special permission of the photographer and the museum.”

During a discussion with Frank Grieshaber we found out that the information in this paragraph is only available on that webpage; there are no individual licence details in the metadata records of the images, either in the XML or the JSON data dumps. It would be useful for this information to be included in the records, though it is not clear how to accomplish this efficiently for each photograph, since all the photographers would need to be contacted first. Currently, the rights information in the XML records says “Rights Reserved – Free Access on Epigraphischen Fotothek Heidelberg”, which presumably points to the “research purposes” part of the statement on the EDH website.

All other components of EDH – inscriptions, bibliography, geography and people RDF – have been released under the Creative Commons Attribution-ShareAlike 3.0 Unported license, which allows for their reuse and repurposing, thus ensuring their sustainability. The images, however, will be the first thing to disappear once the project ends. With unclear licensing and the impossibility of contacting every single photographer (some of whom are no longer alive, while others might not wish to waive their rights), data reuse becomes particularly problematic.

One possible way of figuring out the copyright of individual images is to check the reciprocal links to the photographic archive of the partner institutions who provided the images, and then read through their own licence information. However, these links are only visible from the HTML and not present in the XML records.

Given that the image metadata in the XML files is relatively detailed and already in place, we decided to focus on the task of image extraction for research purposes, which is covered by the general licensing of the EDH image databank. We prepared a Python script for batch download of the entire image databank, available on the OEDUc GitHub repo. Each image has a unique identifier, which is the same as its filename and the final string of its URL. This means that when an inscription has more than one photograph, each photograph has its own record and URI, which allows for complete coverage and efficient harvesting. The images are numbered sequentially; when an image is missing, the process skips that entry and continues on to the next one. Since the databank includes some 37,530 images, the script pauses for 30 seconds after every 200 files to avoid a timeout. We don’t have access to the high-resolution TIFF images, so this script downloads the JPGs from the HTML records.
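A sketch of that download loop (the actual script is in the OEDUc GitHub repo); the base URL and the filename pattern below are placeholders, since the report does not spell them out:

    import time
    import requests

    BASE = "https://example.org/edh/photos/"  # placeholder, not the real URL

    def download_images(start, end):
        downloaded = 0
        for n in range(start, end + 1):
            filename = "F%06d.jpg" % n  # assumed sequential id pattern
            r = requests.get(BASE + filename)
            if r.status_code != 200:  # missing image: skip to the next one
                continue
            with open(filename, "wb") as f:
                f.write(r.content)
            downloaded += 1
            if downloaded % 200 == 0:  # pause to avoid a timeout
                time.sleep(30)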

The EDH images included in the EAGLE MediaWiki are all under an open licence and link back to the EDH databank. A task for the future will be to compare the two lists, to get a sense of the EAGLE coverage of EDH images and to feed their licensing information back into the EDH image records. One issue is the lack of file-naming conventions in EAGLE, where some photographs carry a publication citation (CIL_III_14216,_8.JPG, AE_1957,_266_1.JPG), others a random name (DR_11.jpg), and others a descriptive filename which may contain an EDH reference (Roman_Inscription_in_Aleppo,_Museum,_Syria_(EDH_-_F009848).jpeg). Matching these to the EDH databank will have to be done by cross-referencing the publication citations, either in the filename or in the image record.

A further future task could be to embed the image metadata into the image itself. The EAGLE MediaWiki images already carry Exif data (added automatically by the camera), but it might be useful to also add descriptive and copyright information internally, following the IPTC data set standard (e.g. title, subject, photographer, rights, etc.). This would help bring the inscription file, image record and image itself back together in the event of data scattering after the end of the project. Currently, linkage exists between the inscription files and the image records. Embedding at least the HD number of the inscription directly into the image metadata would allow us to gradually bring the resources back together, following changes in copyright and licensing.
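A sketch of what such embedding might look like in Python, assuming the third-party iptcinfo3 library (pip install iptcinfo3); the filename, HD number, caption and rights text are all placeholders:

    from iptcinfo3 import IPTCInfo

    info = IPTCInfo("F009848.jpg", force=True)  # placeholder filename
    # Embed descriptive and rights metadata following the IPTC data set
    info["object name"] = "HD009848"  # placeholder HD number
    info["caption/abstract"] = "Roman inscription, Aleppo Museum, Syria"
    info["copyright notice"] = "(c) photographer / holding institution"
    info.save()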

Of the three tasks we set out to discuss, one turned out to be impractical and unfeasible, one we accomplished and published the code for, and one remains to be worked on in the future. Ascertaining the copyright status of all images is physically impossible, so all future experiments will be done on the EDH images in the EAGLE MediaWiki. The script for extracting JPGs from the HTML is available on the OEDUc GitHub repo. We have drafted a plan for embedding metadata into the images, following the IPTC standard.

June 07, 2017

Stoa

Open Epigraphic Data Unconference report

Last month, a dozen or so scholars met in London (and were joined by a similar number via remote video-conference) to discuss and work on the open data produced by the Epigraphic Database Heidelberg. (See call and description.)

Over the course of the day seven working groups were formed, two of which completed their briefs within the day, but the other five will lead to ongoing work and discussion. Fuller reports from the individual groups will follow here shortly, but here is a short summary of the activities, along with links to the pages in the Wiki of the OEDUc Github repository.

Useful links:

  1. All interested colleagues are welcome to join the discussion group: https://groups.google.com/forum/#!forum/oeduc
  2. Code, documentation, and other notes are collected in the Github repository: https://github.com/EpiDoc/OEDUc

1. Disambiguating EDH person RDF
(Gabriel Bodard, Núria García Casacuberta, Tom Gheldof, Rada Varga)
We discussed and broadly specced out a couple of steps in the process for disambiguating PIR references for inscriptions in EDH that contain multiple personal names, for linking together person references that cite the same PIR entry, and for using Trismegistos data to further disambiguate EDH persons. We haven’t written any actual code to implement this yet, but we expect a few Python scripts would do the trick.

2. Epigraphic ontology
(Hugh Cayless, Paula Granados, Tim Hill, Thomas Kollatz, Franco Luciani, Emilia Mataix, Orla Murphy, Charlotte Tupman, Valeria Vitale, Franziska Weise)
This group discussed the various ontologies available for encoding epigraphic information (LAWDI, Nomisma, EAGLE Vocabularies) and ideas for filling the gaps between them. This is a long-standing desideratum of the EpiDoc community, and will be an ongoing discussion (perhaps the most important of the workshop).

3. Images and image metadata
(Angie Lumezeanu, Sarah Middle, Simona Stoyanova)
This group attempted to write scripts to track down copyright information on images in EDH (too complicated, but EAGLE may have more of this), download images and metadata (scripts in Github), and explored the possibility of embedding metadata in the images in IPTC format (in progress).

4. EDH and SNAP:DRGN mapping
(Rada Varga, Scott Vanderbilt, Gabriel Bodard, Tim Hill, Hugh Cayless, Elli Mylonas, Franziska Weise, Frank Grieshaber)
In this group we reviewed the status of the SNAP:DRGN recommendations for person-data in RDF, and then looked in detail at the person list exported from the EDH data. A list of suggestions for improving this data was produced for EDH to consider. This task was considered complete. (Although Frank may have feedback or questions for us later.)

5. EDH and Pelagios NER
(Orla Murphy, Sarah Middle, Simona Stoyanova, Núria Garcia Casacuberta, Thomas Kollatz)
This group explored the possibility of running automated named entity extraction on the Latin texts of the EDH inscriptions, in two stages: extracting plain text from the XML (code in Github); applying CLTK/NLTK scripts to identify entities (in progress).

6. EDH and Pelagios location disambiguation
(Paula Granados, Valeria Vitale, Franco Luciani, Angie Lumezeanu, Thomas Kollatz, Hugh Cayless, Tim Hill)
This group aimed to work on disambiguating location information in the EDH data export, for example making links between Geonames place identifiers, TMGeo places, Wikidata and Pleiades identifiers, via the Pelagios gazetteer or other linking mechanisms. A pathway for resolution was identified, but work is still ongoing.

7. Exist-db mashup application
(Pietro Liuzzo)
This task, which Dr Liuzzo carried out alone, since his network connection didn’t allow him to join any of the discussion groups on the day, was to create an implementation of existing code for displaying and editing epigraphic editions (using Exist-db, Leiden+, etc.) and to offer a demonstration interface by which the EDH data could be served up to the public and contributions and improvements invited. (A preview of “epigraphy.info”, perhaps?)

May 03, 2017

Current Epigraphy

Open Epigraphic Data Unconference, London, May 15, 2017

Open Epigraphic Data Unconference
10:00–17:00, May 15, 2017, Institute of Classical Studies

This one-day workshop, or “unconference,” brings together scholars, historians and data scientists with a shared interest in classical epigraphic data. The event involves no speakers or set programme of presentations, but rather a loose agenda, to be refined in advance or on the day: to use, exploit, transform and “mash-up” with other sources the Open Data recently made available by the Epigraphic Database Heidelberg under a Creative Commons license. Both present and remote participants with programming and data-processing experience, and those with an interest in discussing and planning data manipulation and aggregation at a higher level, are welcome.

Places at the event in London are limited; please contact <gabriel.bodard@sas.ac.uk> if you would like to register to attend.

There will also be a Google Hangout opened on the day, for participants who are not able to attend in person. We hope this event will only be the beginning of a longer conversation and project to exploit and disseminate this invaluable epigraphic dataset.

May 02, 2017

Stoa

Open Epigraphic Data Unconference, London, May 15, 2017

Open Epigraphic Data Unconference
10:00–17:00, May 15, 2017, Institute of Classical Studies

This one-day workshop, or “unconference,” brings together scholars, historians and data scientists with a shared interest in classical epigraphic data. The event involves no speakers or set programme of presentations, but rather a loose agenda, to be refined in advance or on the day: to use, exploit, transform and “mash-up” with other sources the Open Data recently made available by the Epigraphic Database Heidelberg under a Creative Commons license. Both present and remote participants with programming and data-processing experience, and those with an interest in discussing and planning data manipulation and aggregation at a higher level, are welcome.

Places at the event in London are limited; please contact <gabriel.bodard@sas.ac.uk> if you would like to register to attend.

There will also be a Google Hangout opened on the day, for participants who are not able to attend in person. We hope this event will only be the beginning of a longer conversation and project to exploit and disseminate this invaluable epigraphic dataset.

April 26, 2017

Current Epigraphy

Inscriptions of Chersonesos and Tyras launch, KCL, May 11, 2017

The Department of Classics at King’s College London and the team of
IOSPE: Ancient Inscriptions of the Northern Black Sea
iospe.kcl.ac.uk

request the pleasure of your company at the launch of two new digital collections of Greek and Latin inscriptions from the northern Black Sea region:

Inscriptions of Chersonesos
and
Inscriptions of Tyras

Speakers include: Askold Ivantchik (Moscow/Bordeaux), Igor Makarov
(Moscow), Irene Polinskaya (London), Gabriel Bodard (London),
Jonathan Prag (Oxford), Riet van Bremen (London), Georgy Kantor (Oxford)

17.30-19.00, Thursday 11 May 2017
Harvard Lecture Theatre
Bush House, King’s College London
30 Aldwych, London WC2B 4BG

Doors open and refreshments 17:00
Wine reception 19:00

Please join us for the occasion!

The project is funded by the A.G. Leventis Foundation

April 19, 2017

Current Epigraphy

Summer School in Advanced Tools for Digital Humanities and IT

The event is organized by the Centre for Excellence in the Humanities at the University of Sofia, Bulgaria, with lecturers and trainers from the School of Advanced Study, University of London, and Carnegie Mellon University, Pittsburgh, USA.

The event will take place in September 2017 in a mountain retreat near Sofia, Bulgaria (location tbc). The school will offer the following teaching modules:

  •     Linked Spatial Data, Geo-annotation, Visualisation and Information Systems (Geography and Topography) – with Valeria Vitale and Gabriel Bodard (School of Advanced Study, University of London);
  •     Python for data extraction, enrichment and cataloguing – with Simona Stoyanova and Gabriel Bodard (School of Advanced Study, University of London);
  •     EpiDoc and TEI markup, use of vocabularies, and web delivery (including external URI use, XSLT customization, and entity normalization) – with Simona Stoyanova and Gabriel Bodard (School of Advanced Study, University of London);
  •     Big Data and Information Extraction – with Dimitar Birov (University of Sofia) and Eduardo Miranda (Carnegie Mellon University, Pittsburgh).

In the framework of the event, a round table will be held on current trends and future developments in Digital Humanities in South-East Europe.

The event will take place between 7 and 11 September. The participation fee is 50 euros. If you are interested in the Summer School, please send a curriculum vitae and a motivation letter stating your main areas of interest and expertise, the projects on which you are currently working, as well as which module(s) are most relevant to your work and why you would like to attend them. Applications should be sent to dhsummerschool@uni-sofia.bg no later than 1 June 2017.

The organizing team

Assoc. Prof. Dimitar Birov, University of Sofia, Dr. Dimitar Iliev, University of Sofia, Dr. Maria Baramova, University of Sofia, Dobromir Dobrev, University of Sofia

March 22, 2017

Stoa

Research Fellows: Latinization of the north-western provinces

Posted on behalf of Alex Mullen (to whom enquiries should be addressed):

I should like to draw your attention to the advertisement for 2 Research Fellows on the 5-year ERC project “The Latinization of the North-Western Provinces: Sociolinguistics, Epigraphy and Archaeology” (LatinNow).

The RFs will be based at the Centre for the Study of Ancient Documents, University of Oxford, and will start, at the earliest, in September 2017. The positions will be for 3 years, with the possibility of extension.

Although the RFs will be located in Oxford, their contracts will be with the project host, the University of Nottingham, so applications must be made via the Nottingham online system:

https://www.nottingham.ac.uk/jobs/currentvacancies/ref/ART002017

Please note that the panel requires basic details to be filled in online, and a CV and covering letter to be uploaded (apologies: the generic application system is not clear on what needs to be uploaded). The deadline for applications is 14 April.

If you would like further information, please do not hesitate to contact the Principal Investigator, Dr Alex Mullen.

March 20, 2017

Current Epigraphy

Research Fellows: Latinization of the north-western provinces

I should like to draw your attention to the advertisement for 2 Research Fellows on the 5-year ERC project “The Latinization of the North-Western Provinces: Sociolinguistics, Epigraphy and Archaeology” (LatinNow).

The RFs will be based at the Centre for the Study of Ancient Documents, University of Oxford, and will start, at the earliest, in September 2017. The positions will be for 3 years, with the possibility of extension.

Although the RFs will be located in Oxford, their contracts will be with the project host, the University of Nottingham, so applications must be made via the Nottingham online system:

https://www.nottingham.ac.uk/jobs/currentvacancies/ref/ART002017

Please note that the panel requires basic details to be filled in online, and a CV and covering letter to be uploaded (apologies: the generic application system is not clear on what needs to be uploaded). The deadline for applications is 14 April.

If you would like further information, please do not hesitate to contact the Principal Investigator, Dr Alex Mullen.

March 14, 2017

Current Epigraphy

EpiDoc training workshop, Athens, May 2017

Call for Participation

A four-day training workshop on “EpiDoc” will be held in Athens (Greece), from Tuesday, 2 May to Friday, 5 May 2017, at the Academy of Athens. The workshop is organized by the Academy of Athens within the framework of the DARIAH-EU project “Humanities at Scale”.

The “EpiDoc” training workshop will cover the digital editing of epigraphic and papyrological texts, focusing on the encoding of inscriptions, papyri and other ancient texts. EpiDoc (epidoc.sf.net) is a community of practice and guidance for using TEI XML for the encoding of inscriptions, papyri and other ancient texts. It has been used to publish digital projects including the Inscriptions of Aphrodisias and Tripolitania, the Duke Databank of Documentary Papyri, the Digital Corpus of Literary Papyri, and the EAGLE Europeana Project. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions, identifying and linking to external person and place authorities, and project workflow and management.

Instructors: Elli Mylonas and Simona Stoyanova.

The four-day workshop will be divided into five sections:

  • Section 1: Encoding epigraphic and other texts: Basic EpiDoc, using OxygenXML, transforming text with XSL for proofreading and display.
  • Section 2: Metadata: Encoding the history and description of the textual support.
  • Section 3: Advanced Features (Apparatus criticus, verse, complex texts).
  • Section 4: Text encoding projects: organization, roles, workflows.
  • Section 5: Vocabularies and Analysis: indexing, names and places, controlled vocabularies.

The workshop will include ample time for hands-on practice, questions, discussion of individual projects, and the option to learn about topics of special interest to participants.

The workshop will be conducted in English, and participation is free.

The workshop will assume knowledge of epigraphy or papyrology; Greek, Latin or another ancient language; and the Leiden Conventions. No technical skills are required, and scholars of all levels, from students to professors, are welcome.

Participants should bring their own laptops. It is also strongly recommended that participants prepare in advance a mini-corpus of texts from their field of scientific interest.

Registration

Please fill in the application form by 10 April 2017 at the following address:

https://goo.gl/forms/0Xaf8umatP8oJaCf1

Because seats are limited, there will be a selection among applicants. Applicants will be notified by email.

Dates:

2-5/5/2017, 9:00-17:00

Organisation:

Academy of Athens

Project DARIAH-EU – Humanities at Scale

Location:

Academy of Athens – Main Building, East Hall
Panepistimiou 28,
10679 Athens
Greece

For additional information, please contact: gchrysovitsanos@academyofathens.gr

Readings:

The first three items provide a good overview of digital epigraphy and EpiDoc. We recommend that you read those first.

February 24, 2017

Current Epigraphy

Engineer Position in Bordeaux (Papyri, Inscriptions, etc.) for the project PATRIMONIVM

The European funding scheme ERC Starting Grant rewards the most innovative research projects led by young researchers in all scientific areas. Among those selected for funding in the 2016 call, the project PATRIMONIVM, hosted by the University Bordeaux Montaigne, aims at realizing the first global study of the economic, social and political role of the properties of the Roman emperors, using a complete documentary base of all relevant sources. The project lasts 5 years and will involve 9 historians and a web engineer responsible for the database. The documentary system of PATRIMONIVM is one of the most ambitious features of the project, not only because of the number and variety of the data (epigraphic, papyrological and literary sources, prosopographical data, archaeological descriptions, images, georeferenced data, bibliographic references), but also because of the implementation of the latest XML standards for the digital presentation of ancient sources. These features make PATRIMONIVM one of the leading digital humanities projects at the international level.

The engineer responsible for the documentary system is one of the most important members of PATRIMONIVM’s research team. She/he will work in close coordination with the Principal Investigator and collaborate with the other team members. She/he will participate in the scientific programme of the project and contribute to its visibility through participation in conferences and workshops on the digital humanities in France and abroad. She/he will be part of the project for its entire duration: full time during the first three years, part time for the remaining months.

http://ausonius.u-bordeaux-montaigne.fr/presentation/recrutements

January 23, 2017

Current Epigraphy

EpiDoc training workshop, London, April 2017

We invite applications to participate in a training workshop on digital editing of papyrological and epigraphic texts, at the Institute of Classical Studies, London, April 3–7, 2017. The workshop will be taught by Gabriel Bodard and Lucia Vannini (ICS) and Simona Stoyanova (KCL). There will be no charge for the workshop, but participants should arrange their own travel and accommodation.

EpiDoc: Ancient Documents in XML

EpiDoc (epidoc.sf.net) is a community of practice and guidance for using TEI XML for the encoding of inscriptions, papyri and other ancient texts. It has been used to publish digital projects including Inscriptions of Aphrodisias and Tripolitania, Duke Databank of Documentary Papyri, Digital Corpus of Literary Papyri, and EAGLE Europeana Project. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions, identifying and linking to external person and place authorities, and use of the online Papyrological Editor tool.

The workshop will assume knowledge of papyrology or epigraphy; Greek, Latin or another ancient language; and the Leiden Conventions. No technical skills are required, and scholars of all levels, from students to professors, are welcome. To apply, please email gabriel.bodard@sas.ac.uk with a brief description of your background and reason for application, by February 14, 2017.

(Revised to move the deadline for applications to Feb 14th.)

January 17, 2017

Stoa

EpiDoc training workshop, London, April 2017

We invite applications to participate in a training workshop on digital editing of papyrological and epigraphic texts, at the Institute of Classical Studies, London, April 3–7, 2017. The workshop will be taught by Gabriel Bodard and Lucia Vannini (ICS) and Simona Stoyanova (KCL). There will be no charge for the workshop, but participants should arrange their own travel and accommodation.

EpiDoc: Ancient Documents in XML

EpiDoc (epidoc.sf.net) is a community of practice and guidance for using TEI XML for the encoding of inscriptions, papyri and other ancient texts. It has been used to publish digital projects including Inscriptions of Aphrodisias and Tripolitania, Duke Databank of Documentary Papyri, Digital Corpus of Literary Papyri, and EAGLE Europeana Project. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions, identifying and linking to external person and place authorities, and use of the online Papyrological Editor tool.

The workshop will assume knowledge of papyrology or epigraphy; Greek, Latin or another ancient language; and the Leiden Conventions. No technical skills are required, and scholars of all levels, from students to professors, are welcome. To apply, please email gabriel.bodard@sas.ac.uk with a brief description of your background and reason for application, by February 14, 2017.

(Revised to move the deadline for applications to Feb 14th.)

November 10, 2016

Current Epigraphy

Digital Epigraphy am Scheideweg? / Digital Epigraphy at a crossroads?

“Error message: Server not found”: if everything remains as it is now, the familiar click on EDH’s Internet address (www.epigraphische-datenbank-heidelberg.de) will in four years generate exactly this feedback – after a total of 34 years’ work on EDH and 23 years’ availability online. The reason: … read more … .