EpiDoc: News and Views

http://planet.atlantides.org/epidoc

Tom Elliott (tom.elliott@nyu.edu)

This feed aggregator is part of the Planet Atlantides constellation. Its current content is available in multiple webfeed formats, including Atom, RSS/RDF and RSS 1.0. The subscription list is also available in OPML and as a FOAF Roll. All content is assumed to be the intellectual property of the originators unless they indicate otherwise.

September 05, 2016

Current Epigraphy

EpiDoc Workshop 2016 – Bologna

12-14 September 2016

Alma Mater Studiorum Università di Bologna

 Dipartimento di Storia Culture Civiltà, sezione di Storia Antica

Sala Celio – fifth floor – Via Zamboni 38

Alice Bencivenni, Marta Fogagnolo (Università di Bologna)

Giuditta Mirizio (Università di Bologna, Universität Heidelberg)

Irene Vagionakis (Università Ca’ Foscari Venezia)

Monday 12 September (09:00-18:00)

09:00 Introduction to the course and participant introductions
09:30 Introduction to EpiDoc and XML

11:30 Introduction to the structure of an epigraphic critical edition
11:45 The EpiDoc Guidelines: basic structure of an EpiDoc file; descriptive and historical data

14:00 The EpiDoc Guidelines: transcription of the text
15:30 Complete markup examples
16:00 How to use Oxygen
16:30 EpiDoc exercises

Tuesday 13 September (09:00-18:00)

09:00 XSL transformation
10:00 The EpiDoc Guidelines: indexing

11:30 EpiDoc exercises

13:30 Papyri.info; Leiden+, Leiden+ Help
14:30 Exercises

Wednesday 14 September (09:00-17:30)

09:00 Epifacs
10:00 EpiDoc: a closer look at selected aspects of EpiDoc tagging

11:30 Exercises (participants' choice)

14:00 EpiDoc Workshop Blog and Markup List
14:30 EAGLE apps
15:30 Exercises (participants' choice)
17:00 Feedback session

April 19, 2016

Horothesia (Tom Elliott)

Stable Orbits or Clear Air Turbulence: Capacity, Scale, and Use Cases in Geospatial Antiquity


I delivered the following talk on 8 April 2016 at the Mapping the Past: GIS Approaches to Ancient History conference at the University of North Carolina at Chapel Hill. Update (19 April 2016): video is now available on YouTube, courtesy of the Ancient World Mapping Center.

How many of you are familiar with Jo Guldi's on-line essay on the "Spatial Turn" in western scholarship? I highly recommend it. It was published in 2011 as a framing narrative for the Spatial Humanities website, a publication of the Scholars' Lab at the University of Virginia. The website was intended partly to serve as a record of the NEH-funded Institute for Enabling Geospatial Scholarship. That Institute, organized in a series of three thematic sessions, was hosted by the Scholars' Lab in 2009 and 2010. The essay begins as follows:
“Landscape turns” and “spatial turns” are referred to throughout the academic disciplines, often with reference to GIS and the neogeography revolution ... By “turning” we propose a backwards glance at the reasons why travelers from so many disciplines came to be here, fixated upon landscape, together. For the broader questions of landscape – worldview, palimpsest, the commons and community, panopticism and territoriality — are older than GIS, their stories rooted in the foundations of the modern disciplines. These terms have their origin in a historic conversation about land use and agency.
Professor Guldi's essay takes us on a tour through the halls of the Academy, making stops in a variety of departments, including Anthropology, Literature, Sociology, and History. She traces the intellectual innovations and responses -- prompted in no small part by the study and critique of the modern nation state -- that iteratively gave rise to many of the research questions and methods that concern us at this conference. I don't think it would be a stretch to say that not only this conference but its direct antecedents and siblings -- the Ancient World Mapping Center and its projects, the Barrington Atlas and its inheritors -- are all symptoms of the spatial turn.

So what's the point of my talk this evening? Frankly, I want to ask: to what degree do we know what we're doing? I mean, for example, is spatial practice a subfield? Is it a methodology?  It clearly spans chairs in the Academy. But does it answer -- better or uniquely? -- a particular kind of research question? Is spatial inquiry a standard competency in the humanities, or should it remain the domain of specialists? Does it inform or demand a specialized pedagogy? Within ancient studies in particular, have we placed spatially informed scholarship into a stable orbit that we can describe and maintain, or are we still bumping and bouncing around in an unruly atmosphere, trying to decide whether and where to land?

Some will recognize in this framework questions -- or should we say anxieties -- that are also very much alive for the digital humanities. The two domains are not disjoint. Spatial analysis and visualization are core DH activities. The fact that the Scholars' Lab proposed and the NEH Office of Digital Humanities funded the Geospatial Institute I mentioned earlier underscores this point.

So, when it comes to spatial analysis and visualization, what are our primary objects of interest? "Location" has to be listed as number one, right? Location, and relative location, are important because they are variables in almost every equation we could care about. Humans are physical beings, and almost all of our technology and interaction -- even in the digital age -- are both enabled and constrained by physical factors that vary not only in time, but also in three-dimensional space. If we can locate people, places, and things in space -- absolutely or relatively -- then we can open our spatial toolkit. Our opportunities to explore become even richer when we can access the way ancient people located themselves, each other, places, and things in space: the rhetoric and language they used to describe and depict those locations.

The connections between places and between places and other things are also important. The related things can be of any imaginable type: objects, dates, events, people, themes. We can express and investigate these relationships with a variety of spatial and non-spatial information structures: directed graphs and networks for example. There are digital tools and methods at our disposal for working with these mental constructs too, and we'll touch on a couple of examples in a minute. But I'd like the research questions, rather than the methods, to lead the discussion.
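To make that concrete in a small way, here is a hedged sketch (Python, with invented data) of how such relationships can be modelled as a directed graph and then queried. The entities and edge labels are illustrative only, not a published dataset.

    import networkx as nx

    # Invented, illustrative data: edges connect entities (communities,
    # texts, places) and carry a 'kind' attribute naming the relationship.
    G = nx.DiGraph()
    G.add_edge("Messenians", "sanctuary of Artemis Limnatis", kind="claims")
    G.add_edge("Spartans", "sanctuary of Artemis Limnatis", kind="claims")
    G.add_edge("Tacitus, Annals 4.43", "sanctuary of Artemis Limnatis", kind="mentions")

    # Which node attracts the most relationships?
    centrality = nx.in_degree_centrality(G)
    print(max(centrality, key=centrality.get))
    # -> sanctuary of Artemis Limnatis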

When looking at both built and exploited natural landscapes, we are often interested in the functions humans impart to space and place. These observations apply not only to physical environments, but also to their descriptions in literature and their depictions in art and cartography. And so spatial function is also about spatial rhetoric, performance, audience, and reception.

Allow me a brief example: the sanctuary of Artemis Limnatis at Volimnos in the Taygetos mountains (cf. Koursoumis 2014; Elliott 2004, 74-79 no. 10). Its location is demonstrated today only by scattered architectural, artistic, and epigraphic remains, but epigraphic and literary testimony make it clear that it was just one of several such sanctuaries that operated at various periods and places in the Peloponnese. Was this ancient place of worship located in a beautiful spot, evocative of the divine? Surely it was! But it -- and its homonymous siblings -- also existed to claim, mark, guard, consecrate, and celebrate political and economic assertions about the land it overlooked. Consequently, the sanctuary was a locus of civic pride for the Messenians and the Spartans, such that -- from legendary times down to at least the reign of Vespasian -- it occasioned both bloodshed and elite competition for the favor of imperial powers. Given the goddess's epithet (she is Artemis Of The Borders), the sanctuary's location, and its history of contentiousness, I don't think we're surprised that a writer like Tacitus should take notice of delegations from both sides arriving in Rome to argue for and against the most recent outcome in the struggle for control of the sanctuary. I can't help but imagine him smirking as he drops it into the text of his Annals (4.43), entirely in indirect discourse and deliberately ambiguous of course about whether the delegations appeared before the emperor or the Senate. It must have given him a grim sort of satisfaction to be able to record a notable interaction between Greece and Rome during the reign of Tiberius that also served as a metaphor for the estrangement of emperor and senate, of new power and old prerogatives.

Epigraphic and literary analysis can give us insight into issues of spatial function, and so can computational methods. The two approaches are complementary, sometimes informing, supporting, and extending each other, other times filling in gaps the other method leaves open. Let's spend some time looking more closely at the computational aspects of spatial scholarship.

A couple of weeks ago, I got to spend some time talking to Lisa Mignone at Brown about her innovative work on the visibility of temples at Rome with respect to the valley of the Tiber and the approaches to the city. Can anyone doubt that, among the factors at play in the ancient siting and subsequent experience of such major structures, there's a visual expression of power and control at work? Mutatis mutandis, you can feel something like it today if you get the chance to walk the Tiber at length. Or, even if you just go out and contemplate the sight lines to the monuments and buildings of McCorkle Place here on the UNC campus. To be sure, in any such analysis there is a major role for the mind of the researcher ... in interpretation, evaluation, narration, and argument, and that researcher will need to be informed as much as possible by the history, archaeology, and literature of the place. But, depending on scale and the alterations that a landscape has undergone over time, there is also the essential place of viewshed analysis. Viewsheds are determined by assessing the visibility of every point in an area from a particular point of interest. Can I see the University arboretum from the north-facing windows of the Ancient World Mapping Center on the 5th floor of Davis Library? Yes, the arboretum is in the Center's viewshed. Well, certain parts of it anyway. Can I see the Pit from there? No. Mercifully, the Pit is not in the Center's viewshed.
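To make the mechanics concrete, here is a toy line-of-sight test, the kernel of viewshed analysis. The elevation grid is invented; real GIS viewshed tools add refinements such as earth curvature and observer height models, and repeat the test for every cell in the raster.

    import numpy as np

    def visible(dem, obs, tgt, obs_height=1.7):
        """Is the target cell visible from the observer cell over this DEM?"""
        (r0, c0), (r1, c1) = obs, tgt
        z0 = dem[r0, c0] + obs_height
        z1 = dem[r1, c1]
        n = max(abs(r1 - r0), abs(c1 - c0))
        for i in range(1, n):                      # sample the cells between
            t = i / n
            r = round(r0 + t * (r1 - r0))
            c = round(c0 + t * (c1 - c0))
            sightline_z = z0 + t * (z1 - z0)       # sight-line elevation here
            if dem[r, c] > sightline_z:            # terrain blocks the view
                return False
        return True

    dem = np.array([[10, 12, 30, 12],
                    [10, 11, 31, 11],
                    [10, 10, 32, 10]])
    print(visible(dem, (1, 0), (1, 3)))   # False: the ridge in column 2 blocks it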

In one methodological respect, Professor Mignone's work is not new. Viewshed analysis has been widely used for years in archaeological and historical study, at levels ranging from the house to the public square to the civic territory and beyond. I doubt anyone could enumerate all the published studies without a massive amount of bibliographical work. Perhaps the best known -- if you'll permit an excursion outside the domain of ancient studies -- is Anne Kelly Knowles' work (with multiple collaborators) on the Battle of Gettysburg. What could the commanders see and when could they see it? There's a fascinating, interactive treatment of the data and its implications published on the website of Smithsonian Magazine.

Off the top of my head, I can point to a couple of other examples in ancient studies. Though their mention will only scratch the surface of the full body of work, I think they are both useful examples. There's Andrew Sherratt's 2004 treatment of Mycenae, which explores the site's visual and topographical advantages in an accessible online form, using cartographic illustration and clear prose to make its points about strategically and economically interesting features of the site.

I also recall a poster by James Newhard and several collaborators that was presented at the 2012 meeting of the Archaeological Institute of America. It reported on the use of viewshed analysis and other methods as part of an integrated approach to identifying Byzantine defensive systems in North Central Anatolia. The idea here was that the presence of a certain kind of viewshed -- one offering an advantage for surveillance of strategically useful landscape elements like passes and valleys -- might lend credence to the identification of ambiguous archaeological remains as fortifications. Viewshed analysis is not just revelatory, but can also be used for predictive and taxonomic tasks.

In our very own conference, we'll hear from Morgan Di Rodi and Maria Kopsachelli about their use of viewshed analysis and other techniques to refine understanding of multiple archaeological sites in northwest Greece. So we'll get to see viewsheds in action!

Like most forms of computational spatial analysis, viewshed work is most rigorously and uniformly accomplished with GIS software, supplied with appropriately scaled location and elevation data. To do it reliably by hand for most interesting cases would be impossible. These dependencies on software and data, and the know-how to use them effectively, should draw our attention to some important facts. First of all, assembling the prerequisites of non-trivial spatial analysis is challenging and time consuming. More than once, I've heard Prof. Knowles say that something like ninety percent of the time and effort in a historical GIS project goes into data collection and preparation.  Just as we depend on the labor of librarians, editors, philologists, Renaissance humanists, medieval copyists, and their allies for our ability to leverage the ancient literary tradition for scholarly work, so too we depend on the labor of mathematicians, geographers, programmers, surveyors, and their allies for the data and computational artifice we need to conduct viewshed analysis. This inescapable debt -- or, if you prefer, this vast interdisciplinary investment in our work -- is a topic to which I'd like to return at the end of the talk.

Before we turn our gaze to other methods, I'd like to talk briefly about other kinds of sheds. Watershed analysis -- the business of calculating the entire area drained and supplied by a particular water system -- is a well established method of physical geography and the inspiration for the name viewshed. It has value for cultural, economic, and historical study too, and so should remain on our spatial RADAR. In fact, Melissa Huber's talk on the Roman water infrastructure under Claudius will showcase this very method.
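For readers who want the shape of that computation, a miniature watershed delineation might look like the following sketch, which uses a crude steepest-descent drainage rule on an invented elevation grid; production tools handle pits, flats, and flow accumulation far more carefully.

    import numpy as np

    def downstream(dem, r, c):
        """Return the lowest strictly-lower neighbour, or None at a pit."""
        best, best_z = None, dem[r, c]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                    if dem[rr, cc] < best_z:
                        best, best_z = (rr, cc), dem[rr, cc]
        return best

    def watershed(dem, outlet):
        """Every cell whose descent path reaches the outlet."""
        cells = set()
        for r in range(dem.shape[0]):
            for c in range(dem.shape[1]):
                path = (r, c)
                while path is not None and path != outlet and path not in cells:
                    path = downstream(dem, *path)
                if path == outlet or path in cells:
                    cells.add((r, c))
        return cells

    dem = np.array([[9, 8, 9],
                    [8, 5, 8],
                    [9, 4, 9]])
    print(watershed(dem, (2, 1)))   # every cell here drains to the outlet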

Among Sarah Bond's current research ideas is a "smells" map of ancient Rome. Where in the streets of ancient Rome would you have encountered the odors of a bakery, a latrine, or a fullonica? And -- God help you -- what would they have smelled like? Will it be possible at some point to integrate airflow and prevailing wind models with urban topography and location data to calculate "smellsheds" or "nosescapes" for particular installations and industries? I sure hope so! Soundsheds ought to be another interesting possibility; for leadership we ought to look to the work of people like Jeff Veitch, who is investigating acoustics and architecture at Ostia, and to the Virtual Paul's Cross project at North Carolina State.

Every bit as interesting as what the ancients could see, and from where they could see it, is the question of how they saw things in space and how they described them. Our curiosity about ancient geographic mental models and worldview drives us to ask questions like ones Richard Talbert has been asking: did the people living in a Roman province think of themselves as "of the province" in the way modern Americans think of themselves as being North Carolinians or Michiganders? Were the Roman provinces strictly administrative in nature, or did they contribute to personal or corporate identity in some way? Though not a field that has to be plowed only with computers, questions of ancient worldview do sometimes yield to computational approaches.

Consider, for example, the work of Elton Barker and colleagues under the rubric of the Hestia project. Here's how they describe it:
Using a digital text of Herodotus’s Histories, Hestia uses web-mapping technologies such as GIS, Google Earth and the Narrative TimeMap to investigate the cultural geography of the ancient world through the eyes of one of its first witnesses. 
In Hestia, word collocation -- a mainstay of computational text analysis -- is brought together and integrated with location-based measures to interrogate not only the spatial proximity of places mentioned by Herodotus, but also the textual proximities of those place references. With these keys, the Hestia team opens the door to Herodotus' geomind and that of the culture he lived in: what combinations of actual location, historical events, cultural assumptions, and literary agenda shape the mention of places in his narrative?
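The collocation idea can be illustrated in miniature: count how often pairs of gazetteer names fall within the same window of running text. The sentence and gazetteer below are invented for illustration; this is the general technique, not the Hestia team's code.

    from collections import Counter
    from itertools import combinations

    gazetteer = {"Athens", "Sparta", "Sardis", "Susa"}
    text = ("From Sardis the royal road runs toward Susa while Athens "
            "and Sparta looked anxiously eastward toward Sardis").split()

    window = 8
    pairs = Counter()
    for i in range(len(text)):
        found = {w for w in text[i:i + window] if w in gazetteer}
        for a, b in combinations(sorted(found), 2):
            pairs[(a, b)] += 1          # co-occurrence within this window

    print(pairs.most_common(3))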

Hestia is not alone in exploring this particular frontier. Tomorrow we'll hear from Ryan Horne about his collaborative work on the Big Ancient Mediterranean project. Among its pioneering aspects is the incorporation of data about more than the collocation of placenames in primary sources and the relationships of the referenced places with each other. BAM also scrutinizes personal names and the historical persons to whom they refer. Who is mentioned with whom where? What can we learn from exploring the networks of connection that radiate from such intersections?

The introduction of a temporal axis into geospatial calculation and visualization is also usually necessary and instructive in spatial ancient studies, even if it still proves to be more challenging in standard GIS software than one might like. Amanda Coles has taken on that challenge, and will be telling us more about what it's helped her learn about the interplay between warfare, colonial foundations, road building, and the Roman elites during the Republic.

Viewsheds, worldviews, and temporality, oh my!

How about spatial economies? How close were sources of production to their markets? How close in terms of distance? How close in terms of travel time? How close in terms of cost to move goods?

Maybe we are interested in urban logistics. How quickly could you empty the Colosseum? How much bread could you distribute to how many people in a particular amount of time at a particular place? What were the constraints and capacities for transport of the raw materials? What do the answers to such questions reveal about the practicality, ubiquity, purpose, social reach, and costs of communal activities in the public space? How do these conclusions compare with the experiences and critiques voiced in ancient sources?

How long would it take a legion to move from one place to another in a particular landscape? What happens when we compare the effects of landscape on travel time with the built architecture of the limes or the information we can glean about unit deployment patterns from military documents like the Vindolanda tablets or the ostraca from Bu Njem?

The computational methods involved in these sorts of investigations have wonderful names, and like the others we've discussed, require spatial algorithms realized in specialized software. Consider cost surfaces: for a particular unit of area on the ground, what is the cost in time or effort to pass through it? Consider network cost models: for specific paths between nodal points, what is the cost of transit? Consider least cost path analysis: given a cost surface or network model, what is the cheapest path available between two points?
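Here is a hedged sketch of the least-cost-path idea over a made-up cost surface: build a grid graph whose edge weights average the cost of the two cells they join, then let a shortest-path (Dijkstra) query find the cheapest route.

    import networkx as nx
    import numpy as np

    cost = np.array([[1, 1, 9, 1],
                     [1, 9, 9, 1],
                     [1, 1, 1, 1]])   # e.g. hours to cross each cell

    G = nx.grid_2d_graph(*cost.shape)            # 4-connected grid of cells
    for a, b in G.edges():
        G.edges[a, b]["weight"] = (cost[a] + cost[b]) / 2

    path = nx.shortest_path(G, (0, 0), (0, 3), weight="weight")
    print(path)   # detours around the high-cost cells rather than through them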

Many classicists will have used Orbis: The Stanford Geospatial Network Model of the Roman World. The Orbis team, assembled by Walter Scheidel, has produced an online environment in which one can query a network model of travel costs between key nodal points in the Roman world, varying such parameters as time of year and mode of transport. This model, and its digital modes of access, bring us to another vantage point. How close were two places in the Roman world, not as the crow flies, not in terms of miles along the road, but as the boat sailed or the feet walked?

Barbora Weissova is going to talk to us tomorrow about her work in and around Nicaea. Among her results, she will discuss another application of Least Cost Path Analysis: predicting the most likely route for a lost ancient roadway.

It's not just about travel, transport, and cost. Distribution patterns are of interest too, often combined with ceramic analysis, or various forms of isotopic or metallurgical testing, to assess the origin, dissemination, and implications of ancient objects found in the landscape. Inscriptions, coins, portable antiquities, architectural and artistic styles, pottery, all have been used in such studies. Corey Ellithorpe is going to give us a taste of this approach in numismatics by unpacking the relationship between Roman imperial ideology and regional distribution patterns of coins.

I'd like to pause here for just a moment and express my hope that you'll agree with the following assessment. I think we are in for an intellectual feast tomorrow. I think we should congratulate the organizers of the conference for such an interesting, and representative, array of papers and presentations. That there is on offer such a tempting smorgasbord is also, of course, all to the credit of the presenters and their collaborators. And surely it must be a factor as we consider the ubiquity and disciplinarity of spatial applications in ancient studies.

Assiduous students of the conference program will notice that I have yet to mention a couple of the papers. Fear not, for they feature in the next section of my talk, which is -- to borrow a phrase from Meghan Trainor and Kevin Kadish -- all about that data.

So, conference presenters, would you agree with the dictum I've attributed to Anne Knowles? Does data collection and preparation take up a huge chunk of your time?

Spatial data, particularly spatial data for ancient studies, doesn't normally grow on trees, come in a jar, or sit on a shelf. The ingredients have to be gathered and cleaned, combined and cooked. And then you have to take care of it, transport it, keep track of it, and refashion it to fit your software and your questions. Sometimes you have to start over, hunt down additional ingredients, or try a new recipe. This sort of iterative work -- the cyclic remaking of the experimental apparatus and materials -- is absolutely fundamental to spatially informed research in ancient studies.

If you were hoping I'd grind an axe somewhere in this talk, you're in luck. It's axe grinding time.

There is absolutely no question in my mind that the collection and curation of data is part and parcel of research. It is a research activity. It has research outcomes. You can't answer questions without it. If you aren't surfacing your work on data curation in your CV, or if you're discounting someone else's work on data curation in decisions about hiring, tenure, and promotion, then I've got an old Bob Dylan song I'd like to play for you.

  • Archive and publish your datasets. 
  • Treat them as publications in your CV. 
  • Write a review of someone else's published dataset and try to get it published. 
  • Document your data curation process in articles and conference presentations.

Right. Axes down.

So, where does our data come from? Sometimes we can get some of it in prepared form, even if subsequent selection and reformatting is required. For some areas and scales, modern topography and elevation can be had in various raster and vector formats. Some specialized datasets exist that can be used as a springboard for some tasks. It's here that the Pleiades project, which I direct, seeks to contribute. By digitizing not the maps from the Barrington Atlas, but the places and placenames referenced on those maps and in the map-by-map directory, we created a digital dataset with potential for wide reuse. By wrapping it in a software framework that facilitates display, basic cartographic visualization, and collaborative updates, we broke out of the constraints of scale and cartographic economy imposed by the paper atlas format. Pleiades now knows many more places than the Barrington did, most of these outside the cultures with which the Atlas was concerned. More precise coordinates are coming in too, as are more placename variants and bibliography. All of this data is built for reuse. You can collect it piece by piece from the web interface or download it in a number of formats. You can even write programs to talk directly to Pleiades for you, requesting and receiving data in a computationally actionable form. The AWMC has data for reuse too, including historical coastlines and rivers and map base materials. It's all downloadable in GIS-friendly formats.
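For instance, a program can request a single place record along these lines. The /json endpoint and the field names ('title', 'reprPoint') reflect my understanding of the Pleiades interface; check the current documentation before relying on them.

    import json
    from urllib.request import urlopen

    pid = "423025"   # Pleiades ID for Roma
    with urlopen(f"https://pleiades.stoa.org/places/{pid}/json") as resp:
        place = json.load(resp)

    print(place["title"], place.get("reprPoint"))   # name and [lon, lat]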

But Pleiades and the AWMC only help for some things. It's telling that only a couple of the projects represented at this conference made use of Pleiades data. That's not because Pleiades is bad or because the authors didn't know about Pleiades or the Center. It's because the questions they're asking require data that Pleiades is not designed to provide.

It's proof of the point I'd like to underline: usually -- because your research question is unique in some way, otherwise you wouldn't be pursuing it -- you're going to have to get your hands dirty with data collection.

But before we get dirty, I'm obliged to point out that, although Pleiades has received significant, periodic support from the National Endowment for the Humanities since 2006, the views, findings, conclusions, or recommendations expressed in this lecture do not necessarily reflect those of the National Endowment for the Humanities.

We've already touched on the presence of spatial language in literature. For some studies, the placenames, placeful descriptions, and narratives of space found in both primary and secondary sources constitute raw data we'd like to use. Identifying and extracting such data is usually a non-trivial task, and may involve a combination of manual and computational techniques, the latter depending on the size and tractability of the textual corpus in question and drawing on established methods in natural language processing and named entity recognition. It's here we may encounter "geoparsing" as a term of art. Many digital textual projects and collections are doing geoparsing: individual epigraphic and papyrological publications using the Text Encoding Initiative and EpiDoc Guidelines; the Perseus Digital Library; the Pelagios Commons by way of its Recogito platform. The China Historical GIS is built up entirely from textual sources, tracking each placename and each assertion of administrative hierarchy back to its testimony.
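At its simplest, geoparsing is a lookup problem, as in this deliberately naive sketch; real pipelines add trained named-entity recognition and disambiguation against an authority list such as Pleiades. The text and the gazetteer entries are invented for illustration.

    import re

    # name -> Pleiades ID (illustrative values)
    gazetteer = {"Rome": "423025", "Ostia": "422995"}
    text = "The grain ships left Ostia for Rome before the storm reached Ostia."

    for name, pid in gazetteer.items():
        for m in re.finditer(rf"\b{re.escape(name)}\b", text):
            print(f"{name} (Pleiades {pid}) at offset {m.start()}")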

For your project, you may be able to find geoparsed digital texts that serve your needs, or you may need to do the work yourself. Either way, some transformation on the results of geoparsing is likely to be necessary to make them useful in the context of your research question and associated apparatus.

Relevant here is Micah Myers's conference paper. He is going to bring together for us the analysis and visualization of travel as narrated in literature. I gather from his abstract that he'll show us not only a case study of the process, but discuss the inner workings of the on-line publication that has been developed to disseminate the work.

Geophysical and archaeological survey may supply your needs. Perhaps you'll have to do fieldwork yourself, or perhaps you can collect the information you need from prior publications or get access to archival records and excavation databases. Maybe you'll get lucky and find a dataset that's been published in Open Context, the British Archaeology Data Service, or tDAR: the Digital Archaeological Record. But using this data requires constant vigilance, especially when it was collected for some purpose other than your own. What were the sampling criteria? What sorts of material were intentionally ignored? What circumstances attended collection and post-processing?

Sometimes the location data we need comes not from a single survey or excavation, but from a large number of possibly heterogeneous sources. This will be the case for many spatial studies that involve small finds, inscriptions, coins, and the like. Fortunately, many of the major documentary, numismatic, and archaeological databases are working toward the inclusion of uniform geographic information in their database records. This development, which exploits the unique identifying numbers that Pleiades bestows on each ancient place, was first championed by Leif Isaksen, Elton Barker, and Rainer Simon of the Pelagios Commons project. If you get data from a project like the Heidelberg Epigraphic Databank, papyri.info, the Arachne database of the German Archaeological Institute, the Online Coins of the Roman Empire, or the Perseus Digital Library, you can count on being able to join it easily with Pleiades data and that of other Pelagios partners. Hopefully this will save some of us some time in days to come.
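The payoff of those shared identifiers is easy to show in a hedged sketch: two hypothetical tables that both carry a pleiades_id column join cleanly, with no fuzzy matching of place-name spellings.

    import pandas as pd

    # Invented counts standing in for records from two different databases.
    inscriptions = pd.DataFrame({"pleiades_id": [423025, 422995],
                                 "inscription_count": [1200, 340]})
    coins = pd.DataFrame({"pleiades_id": [423025, 422995],
                          "coin_finds": [88, 41]})

    merged = inscriptions.merge(coins, on="pleiades_id")
    print(merged)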

Sometimes what's important from a prior survey will come to us primarily through maps and plans. Historical maps may also carry information we'd like to extract and interpret. There's a whole raft of techniques associated with the scanning, georegistration, and georectification (or warping) of maps so that they can be layered and subjected to feature tracing (or extraction) in GIS software. Some historic cartofacts -- one thinks of the Peutinger map and medieval mappae mundi as examples -- are so out of step with our expectations of Cartesian uniformity that these techniques don't work. Recourse in such cases may be had to digitizing features of interest in the Cartesian plane of the image itself, assigning spatial locations to features later on the basis of other data. Digitizing and vectorizing plans and maps from multiple excavations in order to build up a comprehensive archaeological map of a region or site likewise necessitates not only GIS software but also careful data-management practices for handling and preserving a collection of digital files that can quickly grow huge.
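The algebra at the heart of georegistration can be sketched briefly: given a few ground control points, solve for an affine pixel-to-world transform by least squares. The coordinates below are invented, and real warping adds higher-order transforms and raster resampling.

    import numpy as np

    # (col, row) pixel coordinates and their known (lon, lat) equivalents
    pixel = np.array([[100, 200], [900, 180], [480, 820], [120, 760]])
    world = np.array([[12.46, 41.92], [12.58, 41.92], [12.52, 41.84], [12.46, 41.85]])

    # world = [col, row, 1] @ A; append a bias column and solve for A (3x2)
    X = np.hstack([pixel, np.ones((len(pixel), 1))])
    A, *_ = np.linalg.lstsq(X, world, rcond=None)

    def to_world(col, row):
        return np.array([col, row, 1]) @ A

    print(to_world(500, 500))   # approximate lon/lat of an arbitrary pixel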

We'll get insight into just such an effort tomorrow when Tim Shea reports on Duke's "Digital Athens Project".

Let's not forget remote sensing! In RS we use sensors -- devices that gather information in various sections of the electromagnetic spectrum or that detect change in local physical phenomena. We mount these sensors on platforms that let us take whatever point of view is necessary to achieve the resolution, scale, and scope of interest: satellites, airplanes, drones, balloons, wagons, sleds, boats, human hands. The sensors capture emitted and reflected light in the visible, infrared, and ultraviolet wavelengths, or magnetic or electrical fields. They emit and detect the return of laser light, radio frequency energy, microwaves, millimeter waves, and, especially underwater, sound waves. Specialized software is used to analyze and convert such data for various purposes, often into rasterized intensity or distance values that can be visualized by assigning brightness and color scales to the values in the raster grid. Multiple images are often mosaicked together to form continuous images of a landscape or 360-degree seamless panoramas.

Remotely sensed data facilitate the detection and interpretation of landforms, vegetation patterns, and physical change over time, revealing or enhancing understanding of built structures and exploited landscapes, as well as their conservation. This is the sort of work that Sarah Parcak has been popularizing, but it too has decades of practice behind it. In 1990, Tom Sever's dissertation reported on a remote-sensing analysis of the Anasazi road system, revealing a component of the built landscape that was not only invisible on the ground, but that demonstrates that the Anasazi were far more willing than even the Romans to create arrow-straight roads in defiance of topographical impediments. More recently, Prof. Sever and his NASA colleague Daniel Irwin have been using RS data for parts of Guatemala, Honduras, and Mexico, to distinguish between vegetation that thrives in alkaline soils and vegetation that doesn't. Because of the Mayan penchant for coating monumental structures with significant quantities of lime plaster, this data has proved remarkably effective in the locating of previously unknown structures beneath forest canopy. The results seem likely to overturn prevailing estimates of the extent of Mayan urbanism, illustrating a landscape far more cleared and built upon than heretofore proposed (cf. Sever 2003).
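Work of this general kind often starts from a per-pixel band ratio; NDVI is the standard example, though I am not claiming it is the specific index used in the Maya studies just described. The two bands below are invented arrays standing in for red and near-infrared rasters.

    import numpy as np

    red = np.array([[0.10, 0.30], [0.12, 0.35]])
    nir = np.array([[0.60, 0.32], [0.55, 0.36]])

    # NDVI = (NIR - Red) / (NIR + Red); high values flag vigorous vegetation
    ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids 0/0
    print(np.round(ndvi, 2))   # ~0.7 where vegetation thrives, ~0 where it doesn't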

Given the passion with which I've already spoken about the care and feeding of data, you'll be unsurprised to learn that I'm really looking forward to Nevio Danelon's presentation tomorrow on the capture and curation of remotely sensed data in a digital workflow management system designed to support visualization processes.

I think it's worth noting that both Professor Parcak's recent collaborative work on a possible Viking settlement in Newfoundland and Prof. Sever's dissertation represent a certain standard in the application of remote sensing to archaeology. RS analysis is tried or adopted for most archaeological survey and excavation undertaken today. The choice of sensors, platforms, and analytical methods will of course vary in response to landscape conditions, expected archaeological remains, and the realities of budget, time, and know-how.

Similarly common, I think, in archaeological projects is the consideration of geophysical, alluvial, and climatic features and changes in the study area. The data supporting such considerations will come from the kinds of sources we've already discussed, and will have to be managed in appropriate ways. But it's in this area -- ancient climate and landscape change -- that I think ancient studies has a major deficit in both procedure and data. Computational, predictive modeling of ancient climate, landscape, and ground cover has made no more than tentative and patchy inroads on the way we think about and map the ancient historical landscape. That's a deficit that needs addressing in an interdisciplinary and more comprehensive way.

I'd be remiss if, before moving on to conclusions, I kept the focus so narrowly on research questions and methods that we miss the opportunity to talk about pedagogy, public engagement, outreach, and cultural heritage preservation. Spatial practice in the humanities is increasingly deeply involved in such areas. The Ancient World Mapping Center's Antiquity À-la-carte website enables users to create and refine custom maps from Pleiades and other data that can then be cited, downloaded, and reused. It facilitates the creation of map tests, student projects, and maps to accompany conference presentations and paper submissions.

Meanwhile, governments, NGOs, and academics alike are bringing the full spectrum of spatial methods to bear as they try to prevent damage to cultural heritage sites through assessment, awareness, and intervention. The American Schools of Oriental Research conducts damage assessments and site monitoring with funding in part from the US State Department. The U.S. Committee of the Blue Shield works with academics to prepare geospatial datasets that are offered to the Department of Defense to enhance compliance with the 1954 Hague Convention for the Protection of Cultural Property in the Event of Armed Conflict.

These are critical undertakings as well, and should be considered an integral part of our spatial antiquity practice.

So, how should we gather up the threads of this discussion so we can move on to the more substantive parts of the conference?

I'd like to conclude as I began, by recommending an essay. In this case, I'm thinking of Bethany Nowviskie's recent essay on "capacity and care" in the digital humanities. Bethany is the former director of UVA's Scholars' Lab. She now serves as Director of the Digital Library Federation at the Council on Library and Information Resources. I had the great good fortune to hear Bethany deliver a version of this essay as a keynote talk at a project directors' meeting hosted by the NEH Office of Digital Humanities in Washington in September of last year. You can find the essay version on her personal website.

Bethany thinks the humanities must expand their capacity in order not only to survive the 21st century, but to contribute usefully to its grand challenges. To cope with increasing amounts of, and needs for, data of every kind. To move gracefully in analysis and regard from large scales to small ones and to connect analysis at both levels. To address audiences and serve students in an expanding array of modes. To collaborate across disciplines and heal the structurally weakening divisions that exist between faculty and "alternative academics", even as the entire edifice of faculty promotion and tenure threatens to shatter around us.

What is Bethany's prescription? An ethic of care. She defines an ethic of care as "a set of practices", borrowing the following quotation from the political scientist Joan Tronto:
a species of [collective] activity that includes everything we do to maintain, continue, and repair our world, so that we can live in it as well as possible.
I think our practice of spatial humanities in ancient studies is just such a collective activity. We don't have to turn around much to know that we are cradled in the arms and buoyed up on the shoulders of a vast cohort, stretching back through time and out across the globe. Creating data and handing it on. Debugging and optimizing algorithms. Critiquing ideas and sharpening analytical tools.

The vast majority of projects on the conference schedule, or that I could think of to mention in my talk, are explicitly and immediately collaborative.

And we can look around this room and see like-minded colleagues galore. Mentors. Helpers. Friends. Comforters. Makers. Guardians.

And we have been building the community infrastructure we need to carry on caring about each other and about the work we do to explain the human past to the human present and to preserve that understanding for the human future. We have centers and conferences and special interest groups and training sessions. We involve undergraduates in research and work with interested people from outside the academy. We have increasingly useful datasets and increasingly interconnected information systems. Will all these things persist? No, but we get to play a big role in deciding what and when and why.

So if there's a stable orbit to be found, I think it's in continuing to work together and to do so mindfully, acknowledging our debts to each other and repaying them in kind.

I'm reminded of a conversation I had with Scott Madry, back in the early aughts when we were just getting the Mapping Center rolling and Pleiades was just an idea. As many of you know, Scott, together with Carole Crumley and numerous other collaborators here at UNC and beyond, has been running a multidimensional research project in Burgundy since the 1970s. At one time or another the Burgundy Historical Landscapes project has conducted most of the kinds of studies I've mentioned tonight, all the while husbanding a vast and growing store of spatial and other data across a daunting array of systems and formats.

I think that the conversation I'm remembering with Scott took place after he'd spent a couple of hours teaching undergraduate students in my seminar on Roman roads and land travel how to do orthophoto analysis the old fashioned way: with stereo prints and stereoscopes. He was having them do the Sarah Parcak thing: looking for crop marks and other indications of potentially buried physical culture. After the students had gone, Scott and I were commiserating about the challenges of maintaining and funding long-running research projects. I was sympathetic, but know now that I really didn't understand those challenges then. Scott did, and I remember what he said. He said: "We were standing on that hill in Burgundy twenty years ago, and as we looked around I said to Carole: 'somehow, we are going to figure out what happened here, no matter how long it takes.'"

That's what I'm talking about.

March 13, 2016

Current Epigraphy

EAGLE Storytelling App available on wordpress.org

The EAGLE Storytelling App is a WordPress plugin that allows users to write blog posts, news items, stories, and narratives by citing and embedding content from various web repositories related to the Ancient World (like Pelagios, the iDAI.gazetteer, Finds.org and many more).

The web app is available on the EAGLE project’s official website. Users can create an EAGLE account, start writing their epigraphy-related narratives right away, and publish them on the Stories page.

[Image: the interface for inserting epigraphy-related content]

But epigraphers who want to experiment with the application can also easily install it on their own WordPress-powered sites from the official plugin repository!

The application is designed to work within the EAGLE user-dedicated ecosystem (the search engine and the EAGLE collection of inscriptions and images), but it’s easily customizable: new plugins to parse and embed content from various sources can be implemented with minimal effort.

Currently, the EAGLE Storytelling App supports content from a number of web repositories.

What’s more, we provide an “EpiDoc generic reader” that can transform any EpiDoc-compliant XML file into a human-readable edition, with formatted text, images, and all the associated information.
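At minimum, any such reader must parse the TEI and extract a readable edition. Here is a hedged sketch against a tiny invented EpiDoc fragment; the real app of course renders much more (apparatus, images, metadata).

    from lxml import etree

    TEI = "http://www.tei-c.org/ns/1.0"
    xml = f"""
    <TEI xmlns="{TEI}">
      <text><body>
        <div type="edition">
          <ab>Dis Manibus <supplied reason="lost">sacrum</supplied></ab>
        </div>
      </body></text>
    </TEI>"""

    doc = etree.fromstring(xml)
    edition = doc.find(f".//{{{TEI}}}div[@type='edition']")
    text = " ".join("".join(edition.itertext()).split())
    print(text)   # -> Dis Manibus sacrum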

[Image: a map displaying the locations cited in the EAGLE stories]

Embedded content is visualized as a compact interactive box that can be expanded by the users or (in the case of maps inserted from the iDAI.gazetteer and Pleiades) navigated. Users can also visualize the object in its original webpage or launch a query to retrieve all other stories that embed the same item.

The app makes it very easy for authors to insert content from the supported repositories. A simple interface allows users to search the supported websites from the “Add Media” menu of WordPress. Alternatively, authors can use the native search pages on the supported sites and then copy and paste the URL into the search interface of the app, or even directly into the story editor!

If you want to know more about our application, visit our FAQ page or simply browse some of the stories that our users have published.


Francesco Mambrini, Philipp Franck

March 04, 2016

Current Epigraphy

EpiDoc at Summer School in Digital Humanities (Sep 2016, Hissar, Bulgaria)

The Centre for Excellence in the Humanities at the University of Sofia, Bulgaria, together with an international team of lecturers and researchers in the field of Digital Humanities, is organizing a Summer School in Digital Humanities. The Summer School will take place from 5 to 10 September 2016 and is aimed at historians, archaeologists, classical scholars, philologists, museum and conservation workers, linguists, researchers in translation and reception studies, specialists in cultural heritage and cultural management, textual critics, and other humanities scholars with little to moderate IT skills who would like to enhance their competences. The Summer School will provide four introductory modules on the following topics:

  • Text encoding and interchange by Gabriel Bodard, University of London, and Simona Stoyanova, King’s College London: TEI, EpiDoc XML (http://epidoc.sourceforge.net/), marking up of epigraphic monuments, authority lists, linked open data for toponymy and prosopography: SNAP:DRGN (http://snapdrgn.net/), Pelagios (http://pelagios-project.blogspot.bg/), Pleiades (http://pleiades.stoa.org/).
  • Text and image annotation and alignment by Simona Stoyanova, King’s College London, and Polina Yordanova, University of Sofia: SoSOL Perseids tools (http://perseids.org), Arethusa grammatical annotation and treebanking of texts, Alpheios text and translation alignment, text/image alignment tools.
  • Geographical Information Systems and Neogeography by Maria Baramova, University of Sofia, and Valeria Vitale, King’s College London: Historical GIS, interactive map layers with historical information, using GeoNames (http://www.geonames.org/) and geospatial data, Recogito tool for Pelagios.
  • 3D Imaging and Modelling for Cultural Heritage by Valeria Vitale, King’s College London: photogrammetry, digital modelling of indoor and outdoor objects of cultural heritage, Meshmixer (http://www.meshmixer.com/), Sketchup (http://www.sketchup.com/) and others.

The school is open for applications by MA and PhD students, postdocs, and early-career researchers from all humanities disciplines, as well as employees in the field of cultural heritage. Applicants should send a CV and a motivation statement clarifying their specific needs and expressing interest in one or more of the modules no later than 15.05.2016. Places are limited, and you will be notified of your acceptance within 10 working days of the application deadline. Transfer from Sofia to Hissar and back, accommodation, and meal expenses during the Summer School are covered by the organizers. Five scholarships of 250 euro will be awarded by the organizing committee to the participants whose work and motivation are deemed the most relevant.

The participation fee is 40 euro. It covers coffee breaks, the social programme, and materials for the participants.

Please submit your applications to dimitar.illiev@gmail.com.

ORGANISING COMMITTEE
Assoc. Prof. Dimitar Birov (Department of Informatics, University of Sofia)
Dr. Maria Baramova (Department of Balkan History, University of Sofia)
Dr. Dimitar Iliev (Department of Classics, University of Sofia)
Mirela Hadjieva (Centre for Excellence in the Humanities, University of Sofia)
Dobromir Dobrev (Centre for Excellence in the Humanities, University of Sofia)
Kristina Ferdinandova (Centre for Excellence in the Humanities, University of Sofia)


February 24, 2016

Stoa

EpiDoc Workshop, London, April 11-15, 2016

We invite applications for a 5-day training workshop on digital editing of epigraphic and papyrological texts, to be held in the Institute of Classical Studies, University of London, April 11-15, 2016. The workshop will be taught by Gabriel Bodard (ICS), Simona Stoyanova (KCL) and Pietro Liuzzo (Heidelberg / Hamburg). There will be no charge for the workshop, but participants should arrange their own travel and accommodation.

EpiDoc (epidoc.sf.net) is a community of practice and guidance for using TEI XML for the encoding of inscriptions, papyri and other ancient texts. It has been used to publish digital projects including Inscriptions of Aphrodisias and Tripolitania, Duke Databank of Documentary Papyri, Digital Corpus of Literary Papyri, and EAGLE Europeana Project. The workshop will introduce participants to the basics of XML markup and give hands-on experience of tagging textual features and object descriptions in TEI, identifying and linking to external person and place authorities, and use of the online Papyrological Editor and Perseids platforms.

No technical skills are required, but a working knowledge of Greek/Latin or other ancient language, epigraphy or papyrology, and the Leiden Conventions will be assumed. The workshop is open to participants of all levels, from graduate students to professors and professionals.

To apply for a place on this workshop please email pietro.liuzzo@zaw.uni-heidelberg.de by 6th March 2016 with a brief description of your reasons for interest, summarising your relevant background and experience. Please use as the subject of your email “[EPIDOC LONDON 2016] application <yourname>”.


February 23, 2016

Current Epigraphy

Postdoctoral Research Fellow (Facilitating Access to Latin Inscriptions), Oxford

Postdoctoral Research Fellow (Facilitating Access to Latin Inscriptions)

Faculty of Classics, Ioannou Centre for Classical and Byzantine Studies, 66 St Giles’, Oxford, and Ashmolean Museum, Beaumont Street, Oxford

Grade 7: £30,738 – £32,600 p.a.

Applications are invited for a full-time Postdoctoral Research Fellowship, to work on the AHRC-funded project ‘Facilitating Access to Latin inscriptions in Britain’s Oldest Public Museum through Scholarship and Technology’. The post is fixed-term, to cover the period from 1 April 2016 to the end of the project on 31 December 2016. The principal responsibilities of the Research Fellow will be to fulfil the project’s impact and public engagement agenda and to complete the development of digital resources (EpiDoc corpus and website) under the direction of Professor Alison Cooley (PI, University of Warwick) and Dr Paul Roberts (Keeper of Greek and Roman Antiquities, Ashmolean Museum).

The successful applicant must possess a doctorate in a relevant field and be able to demonstrate experience of working with EpiDoc or TEI; proficient IT skills, including web design and authoring; experience of working in collaboration with schools, whether primary or secondary; excellent communication skills; and the ability to carry out research independently.

Owing to the nature of this position, any offer of employment with the University will be subject to a satisfactory security screening and to a satisfactory disclosure report from the Disclosure and Barring Service.

Applications for this vacancy are to be made online via www.recruit.ox.ac.uk, quoting Vacancy ID 122067.

The closing date for applications is 12.00 noon on 7 March 2016.

Contact Person : Miss Clare Jarvis

Contact Phone : 01865 288391

Contact Email : recruitment@classics.ox.ac.uk

November 25, 2015

Current Epigraphy

EAGLE pre-conference workshops

The day before the EAGLE 2016 Conference “Digital and Traditional Epigraphy in Context”, on January 26, 2016, three pre-conference workshops will take place in Rome.

1. EpiDoc in a Nutshell

2. Translations of Inscriptions Online

3. EAGLE Search Engine and storytelling application

For more details, please see this page.

These workshops, together with the conference, are free of charge and open to the public, but places are limited (20 for each workshop).

You can book either workshop 1 or workshops 2 and 3 together.

To request a place at the workshop(s), send an email with

  • your name and affiliation
  • a short statement (1-2 lines) on your motivation to participate

to pietro.liuzzo@zaw.uni-heidelberg.de by 1 December 2015.

You will receive confirmation and further details as soon as possible thereafter.

These workshops do not require other forms of registration, but if you are also coming to the EAGLE 2016 Conference, please say so in your message and register here if you have not yet done so.

August 25, 2015

Current Epigraphy

Sixth EAGLE International Event: Off the Beaten Track. Epigraphy at Borders

Bari: Thursday, September 24 – Friday, September 25, 2015.

Hosted by EAGLE (Europeana network of Ancient Greek and Latin Epigraphy), this is the sixth in a series of international events planned by the European and international consortium, with the support of the Department of Classics and Late Antiquity Studies at the University of Bari “Aldo Moro”.

The aim of this initiative is to create a shared space to discuss the issues encountered in digitizing inscriptions whose features are unusual in comparison with the standard epigraphic habit.

During the event, the EAGLE Portal will be officially launched and presented to the public for the first time, together with the EAGLE Storytelling Application. A training session on how to use the EAGLE Storytelling Application will be offered.

Event website: http://www.eagle-network.eu/about/events/sixth-eagle-international-event-2015/

Hashtag: #EAGLE2015Bari

For any information, please contact:

antonio.felle@uniba.it

anita.rocco@uniba.it

raffaella.santucci@uniroma1.it


May 21, 2015

Current Epigraphy

EAGLE-EpiDoc Workshop 2015 – Bologna


25-27 May 2015

Alma Mater Studiorum Università di Bologna

 Dipartimento di Storia Culture Civiltà, sezione di Storia Antica

Sala Celio – piano V – Via Zamboni 38

Pietro Liuzzo (Universität Heidelberg, EAGLE Project, Rodopis)

Alice Bencivenni, Irene Vagionakis (Università di Bologna)

Giuditta Mirizio (Università di Bologna, Universität Heidelberg)


MONDAY 25 May 2015 (09:00-17:00)

09:00 Introduction to the course (Alice Bencivenni)

09:30 General introduction to EpiDoc; introduction to EpiDoc, TEI and XML (Pietro Liuzzo)

10:30 Intro to XML (Irene Vagionakis)

11:30 The EpiDoc guidelines, part I: transcription of the text (Irene Vagionakis)

12:30 Lunch

13:30 The EpiDoc guidelines, part II: indexing (Irene Vagionakis)

14:30 Markup of inscriptions: complete examples (Alice Bencivenni)

15:00 How to use Oxygen (Pietro Liuzzo)

16:00 Hands-on work on IGCyr texts

TUESDAY 26 May 2015 (09:00-18:00)

9:00 From the source files to the website: building a sample site (Pietro Liuzzo)

Git, FileZilla, Altervista
The basic structure of an online edition: HTML and CSS
Getting a basic online edition from the source files: transformations (XPath, XSLT)

12:30 Lunch

14:00 EAGLE: LOD vocabularies, Wikimedia projects (Pietro Liuzzo)

EAGLE BPN and Europeana
LOD
Vocabulary management and indexing
Contributing: translations and images of inscriptions in Commons

16:00 PersName and SNAP:DRGN (Gabriel Bodard via Skype)

WEDNESDAY 27 May 2015 (09:00-17:00)

9:00 Papyri.info and Leiden+ (Giuditta Mirizio)

11:30 Hands-on work on papyri.info

12:30 Lunch

13:30 Pelagios 3 (Pau De Soto via Skype)

15:00 Hands-on work


May 08, 2015

Current Epigraphy

Humanités numériques : l’exemple de l’Antiquité

Elena Pierazzo and Isabelle Cogitore are delighted to announce that the website for the colloquium Humanités numériques : l'exemple de l'Antiquité is now open. You can visit it at http://dhant.sciencesconf.org. Registration is also open; it is free but required. You can also browse the rich programme of workshops offered at the conference: http://dhant.sciencesconf.org/resource/page/id/7

October 30, 2014

Current Epigraphy

EAGLE 2014 International Conference: The IGCyr | GVCyr corpora

The IGCyr | GVCyr demonstration site is now available.

The Inscriptions of Greek Cyrenaica (IGCyr) and the Greek Verse inscriptions of Cyrenaica (GVCyr) are two corpora: the first collects all the inscriptions of Greek Cyrenaica (VII-I centuries B.C.), the second gathers the Greek metrical texts of all periods. These new critical editions of inscriptions from Cyrenaica are part of the international project Inscriptions of Libya (InsLib), incorporating the Inscriptions of Roman Tripolitania (IRT, already online), the Inscriptions of Roman Cyrenaica project (IRCyr, in preparation), and the ostraka from Bu Ngem (already available on the website Papyri.info).

A comprehensive corpus of the inscriptions of Greek Cyrenaica has long been a desideratum among scholars of the ancient world. Greek inscriptions from Archaic, Classical and Hellenistic Cyrenaica are currently scattered among many different, sometimes outdated publications, while new texts have recently been discovered and edited. For the first time, all the inscriptions from this area of the ancient Mediterranean world known to us in 2014 will be assembled in a single online, open-access publication. An essential addition to the IGCyr and GVCyr corpora, as well as a natural outcome of the study of the inscriptions, is the planned publication of the Prosopographia Cyrenaica.

Catherine Dobias-Lalou is the principal epigraphic researcher working on these comprehensive EpiDoc corpora, in cooperation with scholars from the University of Bologna, the University of Macerata, the University of Roma Tor Vergata, the University of Paris-Sorbonne and King's College London. Although the edition of the inscriptions is still in progress, the team wishes to share the structure of the publications and the research approach. For this reason, three of the texts to be published, along with a selected bibliography, are included in the demonstration site. The website, hosted by the University of Bologna, has been developed and is maintained by the CRR-MM, Centro Risorse per la Ricerca Multimediale, University of Bologna.

October 09, 2014

Current Epigraphy

EAGLE 2014 International Conference: The inscription between text and object

What is an inscription? There are different ways to consider what an inscription is:

  • Signifiers on a physical support [linguistic perspective]
  • An artifact bearing text [archeological perspective]
  • A text carved or painted on a durable material to be posted [historical-literary perspective]

In the past, scholars opted for just one of these viewpoints, and most of them approached inscriptions as texts. The current positive trend, however, is to mix disciplines and to treat the inscription, between text and object, as a semantic system to be described, read and interpreted by means of at least a threefold approach: archaeological, textual and historical.

The task we now have is to restructure the epigraphic edition, not just by switching from paper to the web, but by relying on a model that combines the textual and material dimensions of an artifact bearing text, and that helps to determine:

  • The arrangement of an inscription on the support;
  • The textual cuts made by epigraphers on the basis of different criteria.

In this endeavor, we have to keep in mind a trivial but essential notion: editing an inscription is, from start to finish, an interpretation and a matter of personal choice.

In a digital representation, distinct markup is used to encode the physical and textual dimensions. In order to combine them, we propose a definition of some epigraphic notions, which supports the theoretical model of an encoding schema compliant with the EpiDoc guidelines. This model is designed as a part of the IGLouvre project led by Michèle Brunet (Professor of Greek Epigraphy, University Lumière-Lyon 2), which aims to publish a digital edition of the Louvre collection of Greek Inscriptions.

The project's guidelines specify recommendations for the representation of three base structures. In the <teiHeader> of the EpiDoc files, a text is represented with an <msItem> element, while a physical part is described in an <msPart> element. The surface that bears the inscribed words is analysed as a physical feature, that is to say a non-detachable part. It must be explicitly represented using a textpart subdivision of the <div> containing the transcription (e.g. div[@type='textpart'][@subtype='face']). Texts, objects, physical features and transcriptions are related by a combination of correspondence attributes (@corresp) and milestones (<milestone unit='block'/>) marking physical and textual boundaries.
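A minimal sketch of how these pieces might fit together (identifiers and content are invented for illustration; the element and attribute names are those of the TEI/EpiDoc conventions just described):

  <sourceDesc>
    <msDesc>
      <msIdentifier><idno>Ma 0000</idno></msIdentifier>
      <msContents>
        <!-- one msItem per text carried by the monument -->
        <msItem xml:id="text-1" corresp="#part-A">
          <title>Dedication</title>
        </msItem>
      </msContents>
      <!-- one msPart per physical part -->
      <msPart xml:id="part-A">
        <msIdentifier><idno>Block A</idno></msIdentifier>
      </msPart>
    </msDesc>
  </sourceDesc>

  <!-- and, in the body, the transcription of one face: -->
  <div type="textpart" subtype="face" corresp="#part-A">
    <ab>first block<milestone unit="block"/>second block</ab>
  </div>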

Our encoding strategy permits us to meet the following requirements:

  • The material and abstract dimensions of the items in the Louvre collection are taken into account in EpiDoc markup, exploiting its capacity to provide the fine-grained identifiers and linking mechanisms required to build an interface that shows inscriptions as more than decontextualized texts;
  • The scientific editors keep full control over the editorial choices they make, beyond the structure of the printed or digital publication;
  • The deconstruction of the notion of 'inscription' will also help in designing and implementing the several extractions and data exports that will have to be developed in the near future to ensure the interoperability of the digital collection and its re-use by other projects.

You will find more information about this work in our paper:

Emmanuelle Morlock and Eleonora Santin, "The inscription between text and object", in Silvia Orlandi, Raffaella Santucci, Vittore Casarosa and Pietro Maria Liuzzo (eds.), Information Technologies for Epigraphy and Cultural Heritage: Proceedings of the First EAGLE International Conference, Rome (forthcoming).


October 06, 2014

Horothesia (Tom Elliott)

Eighteen Years of EpiDoc. Now what?

Transcript of my keynote address, delivered to the EAGLE 2014 International Conference on Monday, September 29, 2014, at the École normale supérieure in Paris:

Thank you.

Allow me to begin by thanking the organizers of this conference. The conference chairs: Silvia Orlandi, Francois Berard, and John Scheid. Members of the Steering Committee: Vittore Casarosa, Pietro Liuzzo, and Raffaella Santucci. The local organizing committee: Elizabeth Le Bunetel and Philippe Martineau. Members of the EAGLE 2014 General Committee -- you are too numerous to mention, but no less appreciated. To the sponsors of EAGLE Europeana: the Competitiveness and Innovation Framework Programme of the European Commission. Europeana. Wikimedia Italia. To the presenters and poster-authors and their collaborators. To those who have made time out of busy schedules to prepare for, support, or attend this event. Colleagues and friends. Thank you for the invitation to speak and to be part of this important conference.

OK. Please get out your laptops and start up the Oxygen XML Editor. If you'd actually read the syllabus for the course, you'd have already downloaded the latest copy of the EpiDoc schema...

Just kidding.

I have perhaps misled you with my title. This talk will not just be about EpiDoc. Instead, I'd like to use EpiDoc as an entrance point into some thoughts I've had about what we are doing here. About where we are going. I'd like to take EpiDoc as an example -- and the EAGLE 2014 Conference as a metaphor -- for something much larger: the whole disparate, polyvalent, heterarchical thing that we sometimes call "Épigraphie et électronique". Digital epigraphy. Res epigraphica digitalis.

Before we try to unpack how we got here and where we're going, I'd like to ask for your help in trying to illuminate who we are. I'd like you to join me in a little exercise in public self-identification. Not only is this an excellent way to help fill the generous block of time that the conference organizers have given me for this talk, it's also much less awkward than trooping out to the Place de la Sorbonne and doing trust falls on the edge of the fountain. ... Right?

Seriously. This conference brings together a range of people and projects that really have had no specific venue to meet, and so we are in some important ways unknown to each other. It's my hypothesis that, if we learn a bit about each other up front, we prime the pump of collaboration and exchange for the rest of the conference. After all, why do we travel to conferences at all if it is not for the richness of interacting with each other, both during sessions and outside them? OK, and as Charlotte Roueché is ever vigilant to remind us, for the museums.

OK then, are you ready?

Independent of any formal position or any academic or professional credential, raise your hand if you would answer "yes" to this question: "Are you an epigraphist?"

What about "are you an information scientist?"

Historians?

Oh, yes, you can be more than one of these -- you'll recall I rolled out the word "heterarchy" in my introduction!

How about "Wikipedian?" "Cultural Heritage Professional?" "Programmer?" "Philologist?" "Computer Scientist?" "Archivist?" "Museologist?" "Linguist?" "Archaeologist?" "Librarian?" "Physicist?" "Engineer?" "Journalist?" "Clergy?"

Phooey! No clergy!

Let's get at another distinction. How many of you would identify yourselves as teachers?

What about students?

Researchers? Administrators? Technicians? Interested lay persons?

OK, now that we have your arms warmed up, let's move on to voices.

If you can read, speak, or understand a reasonable amount of the English language, please join me in saying "I understand English."

Ready? "I understand English."

OK. Now, if we can read, speak, or understand a reasonable amount of French, shall we say "Je comprends le français?"

"Je comprends le français."

What about Arabic?

Bulgarian? Catalan? Flemish? German? Of course there are many more represented here, but I think you get my point.

OK. Now let's build this rhetorical construct one step higher.

This one involves standing up if that's physically appropriate for you, so get yourselves ready! If you cannot stand, by all means choose some other, more appropriate form of participation.
Independent of any formal position or any academic credential, I want you to stand up if you consider yourself a "scholar".

Now, please stay standing -- or join those standing -- if you consider yourself a "student".

Yes, I did it. I reintroduced the word "student" from another category of our exercise. I am not only a champion of heterarchy, but also of recursive redefinition.

And now, please stay standing -- or join those standing -- if you consider yourself an "enthusiast."

If you're not standing, please stand if you can.

Now, pick out someone near you that you have not met. Shake their hand and introduce yourself. Ask them what they are so enthusiastic about that they were compelled to come to this conference!

Alright. Please resume your seats.

I think we're warmed up.

Let me encourage you to adopt a particular mindset while you are here at this conference. I hope that you will find it to be both amenable and familiar. It's the active recognition of the valuable traits we all share: intelligence, inquisitiveness, inventiveness, incisiveness, interdependence. Skill. Stamina. Uniqueness. Respect for the past. Congeniality.

I am here, in part, because I have a deep, inescapable interest in the study of ancient documents and in the application of computational methods and new media to their resurrection, preservation, and contemplation, and to their reintegration into the active cultural memory of the human people.
I have looked over the programme for this conference, and I have the distinct impression that your reasons for being here are somewhat similar to mine. I am delighted to have this opportunity to visit with old friends and fellow laborers. And to make the acquaintance of so many new ones. I expect to be dazzled by the posters and presentations to come. Are you as excited as I am?

My title did promise some EpiDoc.

How many of you know EpiDoc?

How many of you know what EpiDoc is?

How many of you have heard of EpiDoc?

The word "EpiDoc" is a portmanteau, composed of the abbreviated word "epigraphy" and the abbreviated word "document" or "documentation" (I can't remember which). It has become a misnomer, as EpiDoc is used for much more than epigraphic documents and documentation. It has found a home in papyrology and in the study of texts transmitted to us from antiquity via the literary and book-copying cultures of the intervening ages. It has at least informed, if not been directly used, in other allied subfields like numismatics and sigillography. It's quite possible I'll learn this week of even broader usages.

EpiDoc is a digital format and method for the encoding of both transcribed and descriptive information about ancient texts and the objects that supported and transmitted them. Formally, it is a wholly conformant customization of the Text Encoding Initiative's standard for the representation of texts in digital form. It is serialized in XML -- the Extensible Markup Language -- a specification developed and maintained by the World-Wide Web Consortium.
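In skeletal form, an EpiDoc file looks something like this (a minimal sketch with invented placeholder content; real files carry much richer metadata and markup):

  <TEI xmlns="http://www.tei-c.org/ns/1.0">
    <teiHeader>
      <fileDesc>
        <titleStmt><title>An example inscription</title></titleStmt>
        <publicationStmt><p>A placeholder, for illustration only</p></publicationStmt>
        <sourceDesc><p>Description of the object and its history</p></sourceDesc>
      </fileDesc>
    </teiHeader>
    <text>
      <body>
        <div type="edition">
          <ab>the transcribed text, marked up with Leiden semantics</ab>
        </div>
      </body>
    </text>
  </TEI>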

EpiDoc is more than format and method. It is a community of practice. The term embraces all the people who learn, use, critique, and talk about EpiDoc. It also takes in the Guidelines, tools, and other helps that have been created and curated by those people. All of them are volunteers, scraping together the time to work on EpiDoc out of their personal time, their academic time, and out of the occasional grant. There has never been formal funding devoted to the development or maintenance of the EpiDoc guidelines or software. If you are a participant in the EpiDoc community, you are a hero.

EpiDoc was born in the late 1990s in a weird little room in the northwest corner of the third floor of Murphey Hall on the campus of the University of North Carolina at Chapel Hill. The room is no longer there. It was consumed in a much-needed and long-promised renovation in 2003 or so. It was the old Classics Department computer lab: a narrow space with a sturdy, home-made, built-in counter along two walls and a derelict bookshelf. It was part of a suite of three rooms, the most spacious of which was normally granted as an office to that year's graduate fellow.

The room had been appropriated by Classics graduate students Noel Fiser and Hugh Cayless, together with classical archaeology graduate student Kathryn McDonnell, and myself (an interloper from the History Department). The Classics department -- motivated and led by these graduate students with I-forget-which-faculty-member serving as figurehead -- had secured internal university funding to digitize the department's collection of 35 millimeter slides and build a website for searching and displaying the resulting images. They bought a server with part of the grant. It soon earned the name Alecto after one of the Furies in Greek mythology. I've searched in vain for a picture of the lab, which at some point we sponge-painted in bright colors evocative of the frescoes from Minoan Santorini. The world-wide web was less than a decade old.

I was unconscious then of the history of computing and the classics at Chapel Hill. To this day, I don't know if that suite of rooms had anything to do with David Packard and his time at Chapel Hill. At the Epigraphic Congress in Oxford, John Bodel pointed to Packard's Livy concordance as one of the seminal moments in the history of computing and the classics, and thus the history of digital epigraphy. I'd like to think that we intersected that heritage not just in method, but in geography.

I had entered the graduate program in ancient history in the fall of 1995. I had what I would later come to understand to have been a spectacular slate of courses for my first term: Richard Talbert on the Roman Republic, Jerzy Linderski on Roman Law, and George Houston on Latin Epigraphy.
Epigraphy was new to me. I had seen and even tried my hand at reading the odd Latin or Greek inscription, but I had no knowledge of the history or methods of the discipline, and very little skill. As George taught it, the Latin Epigraphy course was focused on the research use of the published apparatus of Latin epigraphy. The CIL. The journals. The regional and local corpora. What you could do with them.

If I remember correctly, the Epigraphic Database Heidelberg was not yet online, nor were the Packard Greek inscriptions (though you could search them on CD-ROM). Yes, the same Packard. Incidentally, I think we'll hear something very exciting about the Packard Greek Inscriptions in tomorrow's Linked Ancient World Data panel.

Anyway, at some point I came across the early version of what is now called the Epigraphische Datenbank Clauss - Slaby, which was online. Back then it was a simple search engine for digital transcripts of the texts in L'Année Épigraphique from 1888 through 1993. Crucially, one could also download all the content in plain text files. If I understand it correctly, these texts were also destined for publication via the Heidelberg database (and eventually Rome too) after verification by autopsy or inspection of photographs or squeezes.

At some point, I got interested in abbreviations. My paper for George's class was focused on "the epigraphy of water" in Roman North Africa. I kept running across abbreviations in the inscriptions that didn't appear in any of the otherwise helpful lists one finds in Cagnat or one of the other handbooks. In retrospect, the reasons are obvious: a handbook author tailors the list of abbreviations to the texts and types of texts featured in the handbook itself. Selected for importance and range, those texts do not show the same statistical distribution of textual types, language, and features like abbreviation as the entire corpus. So, what was a former programmer to do? Why not download the texts from Clauss' site and write a program to hunt for parentheses? The Leiden Conventions make parentheses a strong indicator of abbreviations that have been expanded by an editor, so the logic for the program seemed relatively straightforward.
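(Today one could approximate that first pass with a couple of standard command-line tools. A sketch only: the filenames are hypothetical, and it inherits all the limitations, noted below, of treating parentheses as a proxy for expanded abbreviations:

$ grep -ohE '\([a-z]+\)' ae-*.txt | sort | uniq -c | sort -rn | head

This pulls every parenthesized run of lowercase letters out of the downloaded text files, then counts and ranks the distinct expansions.)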

Mercifully, the hacktastical code that I wrote to do this task has, I think, perished from the face of the earth. The results, which I serialized into HTML form, may still be consulted on the website of the American Society of Greek and Latin Epigraphy.

As useful as the results were, I was dissatisfied with the experience. The programming language I had used -- called "C" -- was not a very good fit for the kind of text processing involved. Moreover, as good as the Leiden Conventions are, parentheses are used for things other than abbreviations. So, there was manual post-processing to be done. And then there were the edge cases, like abbreviations that stand alone in one document, but are incorporated into longer abbreviations in others. And then there were expanded use cases: searching for text in one inscription that was abbreviated in another. Searching for abbreviations or other strings in text that was transcribed from the original, rather than in editorial supplement or restoration. And I wanted a format and software tools that were a better fit for textual data and this class of problems.

XML and the associated Extensible Stylesheet Language (XSL) -- both then fairly new -- seemed like a good alternative approach. So I found myself confronted with a choice: should I take XML and invent my own schema for epigraphic texts, or should I adopt and adapt something someone else had already created? This consideration -- to make or to take -- is still of critical importance not only for XML, but for any format specification or standards definition process. It's important too for most digital projects. What will you build and on what will you build it?

There are pros and cons. By adopting an existing standard or tool, you can realize a number of benefits. You don't reinvent the wheel. You build on the strengths and the lessons of others. You can discuss problems and approaches with others who are using the same method. You probably make it easier to share and exchange your tools and any data you create. It's possible that many of the logic problems that aren't obvious to you at the beginning have already been encountered by the pioneers.
But standards and specifications can also be walled gardens in which decisions and expert knowledge are hoarded by the founders or another elite group. They can undermine openness and innovation. They can present a significant learning curve. You can use a complex standard and find that you've built a submarine to cross the Seine. Something simpler might have worked better.

Back then, there was a strong narrative around warning people off the cavalier creation of new XML schemas. The injunction was articulated in a harsh metaphor: "every time someone creates a new schema, a kitten dies." Behind this ugly metaphor was the recognition of another potential pitfall: building an empty cathedral. Your data format -- your personal or parochial specification -- might embody everything you imagined or needed, but be largely useless to, or unused by, anyone else.
So, being a cat lover, and being lazy (all the best programmers are lazy), I went looking for an existing schema. I found it in the Text Encoding Initiative. Whether the TEI (and EpiDoc) fit your particular use case is something only you can decide. For me, at that time and since, it was a good fit. I was particularly attracted to a core concept of the TEI: one should encode the intent behind the formatting and structure in a document -- the semantics of the authorial and editorial tasks -- rather than just the specifics of the formatting. So, where the Leiden Conventions would have us use parentheses to mark the editorial expansion of an abbreviation, the TEI gives us XML elements that mean "abbreviation" and "expansion." Where a modern Latin epigraphic edition would use a subscript dot to indicate that the identity of a character is ambiguous without reference to context, the TEI gives us the "unclear" element.
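For example, where Leiden prints aed(ilis), and an ambiguous letter with a subscript dot, the TEI/EpiDoc markup says what the editor means (the particular word is my own invented illustration):

  <!-- aed(ilis): an abbreviation expanded by the editor -->
  <expan><abbr>aed</abbr><ex>ilis</ex></expan>

  <!-- a character read on the stone but ambiguous out of context -->
  <unclear>a</unclear>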

This encoding approach pays off. I'll give just one example. For a few years now, I've been helping Arlo Griffiths (who directs the Jakarta research center of the École française d'Extrême-Orient) to develop a corpus of the surviving inscriptions of the Campa Kingdoms. This is a small corpus, perhaps 400 extant inscriptions, from coastal Vietnam, that includes texts in both Sanskrit and the incompletely understood Old Cam language. The script involved has not yet made its way into the Unicode specification. The standard transliteration scheme for this script, as well as some of the other editorial conventions used in the publication of Cam inscriptions, overlaps and conflicts with the Leiden conventions. But with TEI/EpiDoc there is no confusion or ambiguity. The XML says what the editor means to say, and the conventions of transcription are preserved unchanged, perhaps someday to be converted programmatically to Unicode when Unicode is ready.

EpiDoc transitioned from a personal project to a public one when another potential use case came along. For some time, a committee commissioned by the Association Internationale d'Épigraphie Grecque et Latine had been working under the direction of Silvio Panciera, then the chair of Latin epigraphy at La Sapienza in Rome. Their goal was to establish a comprehensive database of Greek and Latin inscriptions, primarily for the purpose of searching the texts and associated descriptive information or metadata. It was Charles Crowther at Oxford's new Centre for the Study of Ancient Documents who put me in contact with the committee. And it was Charles who championed the eventual recommendation of the committee that the system they envisioned must be able to import and export structured documents governed by a standard schema. He was thinking of EpiDoc.

Many years have passed and many things have changed, and I'm forced to leave out the names of so many people whose hard work and acumen has brought about those changes. Here in Paris today Panciera's vision stands on the cusp of realization. It has also been transcended, for we are not here to talk about a standalone textual database or a federation of such, but about the incorporation of Greek and Latin epigraphy -- in all its historiographical variety and multiplicity of reception -- into the digital cultural heritage system of Europe (Europeana) and into the independent digital repository of a global people: Wikipedia and Wikidata. That EpiDoc can play a role in this grand project just blows me away.

And it's not just about EAGLE, Europeana, Wikipedia, and EpiDoc. It's about a myriad other databases, websites, images, techniques, projects, technologies, and tools. It's about you and the work that you do.

Even as we congratulate ourselves on our achievements and the importance of our mission, I hope you'll let me encourage you to keep thinking forward. We are doing an increasingly good job of bringing computational approaches into many aspects of the scholarly communication process. But plenty remains to be done. We are starting to make the transition from using computer hardware and software to make conventional books and digital imitations thereof; "born digital" is starting to mean something more than narrative forms in PDF and HTML, designed to be read directly by each single human user and, through them, digested into whatever database, notebook, or other research support system that person uses. We are now publishing data that is increasingly designed for harvesting and analyzing by automated agents and that is increasingly less encumbered by outdated and obstructive intellectual property regimes. Over time, our colleagues will begin to spend less time seeking and ingesting data, and more time analyzing, interpreting, and communicating results. We are also lowering the barriers to appreciation and participation in global heritage by a growing and more connected and more vulnerable global people.

Will we succeed in this experiment? Will we succeed in helping to build a mature and responsible global culture in which heritage is treasured, difference is honored, and a deep common cause embraced and protected? Will we say three years from now that building that database or encoding those texts in EpiDoc was the right choice? In a century, will our work be accessible and relevant to our successors and descendants? In 5? In 10?

I do not know. But I am thrilled, honored, and immensely encouraged to see you here, walking this ancient road and blazing this ambitious and hopeful new trail. This is our opportunity to help reunite the world's people and an important piece of their heritage. We are a force against the recasting of history into political rhetoric. We stand against the convenient ignorance of our past failures and their causes. We are the antidote to the destruction of ancient statues of the Buddha, to the burning of undocumented manuscripts, to papyri for sale on eBay, to fields of holes in satellite images where once there was an unexcavated ancient site.

Let's do this thing.


September 26, 2014

Horothesia (Tom Elliott)

New in Electra and Maia: I.Sicily

I have just added the following blog to the Maia and Electra Atlantides feed aggregators:

title = I.Sicily
url = http://isicily.wordpress.com/
creators = Jonathan Prag
license = None
description = Building a digital corpus of Sicilian inscriptions
keywords = None
feed = http://isicily.wordpress.com/feed/


July 16, 2014

Horothesia (Tom Elliott)

New in Electra: EpiDoc Workshop

I have just added the following blog to the Electra Atlantis feed aggregator:

title = EpiDoc workshop
url = http://epidocworkshop.blogspot.co.uk/
creators = Simona Stoyanova, et al.
description = Share markup examples; give and receive feedback
keywords = EpiDoc, epigraphy, inscriptions, XML, TEI
feed = http://epidocworkshop.blogspot.com/feeds/posts/default?alt=rss

April 10, 2014

Horothesia (Tom Elliott)

Batch XML validation at the command line

Against a RelaxNG schema. I had help figuring this out from Hugh and Ryan at DC3:

$ find {searchpath} -name "*.xml" -print | parallel --tag jing {relaxngpath}
The find command hunts down all files ending with ".xml" in the directory tree under searchpath. The parallel command takes that list of files and fires off (in parallel) a jing validation run for each of them. The --tag option passed to parallel ensures we get the name of the file passed through with each error message. This turns out (in general terms as seen by me) to be much faster than running each jing call in sequence, e.g. with the -exec primary in find.

As I'm running on a Mac, I had to install GNU Parallel and the Jing RelaxNG Validator. That's what Homebrew is for:
$ brew install jing
$ brew install parallel
What's the context, you ask? I have lots of reasons to want to be able to do this. The proximal cause was batch-validating all the EpiDoc XML files for the inscriptions that are included in the Corpus of Campā Inscriptions before regenerating the site for an update today. I wanted to see quickly if there were any encoding errors in the XML that might blow up the XSL transforms we use to generate the site. So, what I actually ran was:
$ curl -O http://www.stoa.org/epidoc/schema/latest/tei-epidoc.rng
$ find ./texts/xml -name '*.xml' -print | parallel --tag jing tei-epidoc.rng
Thanks to everybody who built all these tools!


February 25, 2014

Horothesia (Tom Elliott)

New in EpiDig: Digital Archive for the Study of Pre-Islamic Arabian Inscriptions

I've just added a reference for the following resource to the EpiDig Zotero library:

August 28, 2012

Horothesia (Tom Elliott)

Text of my talk at CIEGL 2012


CIEGL 2012 Paper: Efficient Digital Publishing for Inscriptions
cc-by

2. I considered giving this talk the following title:

why build a submarine to cross the Tiber?

It's a question we've heard a lot over the years in various forms. And by "we" I mean not just digital epigraphers -- if you'll accept such an appellation -- but the large and growing number of scholars and practitioners across the humanities who seek to bring computational methods to bear on the evidence, analysis, and publication of our scholarly work.

3. To build a submarine.

The phrase implies something complicated, expensive, time-consuming. Something with special capabilities. Perhaps a little bit dangerous.

4. To cross the Tiber.

Something we know how to do and have been doing for years using a small number of well-known techniques (bridges, boats, swimming). Something commonplace and, given its ubiquity, easy and inexpensive (at least if calculated per trip over time).

Whether asked rhetorically or in earnest, it's a question that deserves an answer. Time is precious. Funds are limited. There are many texts.

But maybe we don't just want to cross the Tiber. Maybe we want to explore the oceans.  And this is the point: for what reasons are we publishing inscriptions in digital form? Are the tools and methods we use fit for that purpose?

5. Why are we digitizing inscriptions? Why are we digitally publishing inscriptions? What uses do we hope to make of digital epigraphy?

I think it's safe to say that no sane person would prefer to lose the ability to search the epigraphic texts that are now available digitally. By my calculations, that's perhaps fifty or sixty percent of the Greek and Latin epigraphic harvest, probably a bit less if we successfully resolved all the duplicates within and across databases. Do you look forward to a day when all Greek and Latin inscriptions can be searched?

So we agree that "search" is a righteous use of digital technology and that we are making good progress toward the goal of "comprehensive search" we set for ourselves in Rome.

6. The relationship between "search" and digital epigraphy was treated at length by Hugh Cayless, Charlotte Roueché, Gabriel Bodard and myself in a 2007 contribution to Digital Humanities Quarterly, part of a themed volume entitled Changing the Center of Gravity: Transforming Classical Studies Through Cyberinfrastructure, which was assembled in honor of the late Ross Scaife. Authors were asked to review digital developments in various subfields and to imagine the state of those fields with respect to digital technology in ten years' time. When we wrote, we observed that the vast majority of digital epigraphic editions were still published solely in print, but we predicted that, by 2017, the situation would have changed drastically. We imagined a world in which computing would be as central to consuming epigraphic publications as it is now in making them.

Yet, the biggest challenge we still face in meeting the goals we identified in the DHQ article by 2017 is making the transition to online publishing a reality by producing tools that are fit for the purpose. It's now no harder to create a traditional print-style epigraphic edition and put it online in HTML or PDF format than it is to get it ready to publish in print. A growing number of journals and institutional repositories -- though few yet devoted specifically to classics or epigraphy -- can now provide a publication venue for such born-digital editions that meets minimum expectations for stability, accessibility, citation, and preservation. Moreover, I can't imagine that any of the major database projects would refuse the gift of clean digital texts and basic descriptive data corresponding to new publications in print or online, although the speed with which they could incorporate them into their databases would be a function of available time and manpower.

7. But this scenario still rests on the old assumptions inherited from a dead-tree era: an epigraphic publication is the work of an individual or small number of scholars, brought forth in a static format that, once discovered, must be read and understood by humans before it can be disassembled -- more or less by hand -- and absorbed into someone else's discovery or research process. From that process, new results will eventually emerge and be published in a similar way.

This is inefficient.

Time is precious. We are few. There are many texts and many questions to answer. Why are humans still doing work better fit for machines?

8. If we are to embrace and exploit the full range of benefits offered by the digital world, we have to remake our suppositions about not only publication, but about the entire research process. To the extent possible, our epigraphic publications must not only be online, but they must also meet certain additional criteria: universal discoverability, stability of access, reliability of citation, ubiquity of descriptive terminology, facility for download, readiness for reuse and remixing. Further, they must become open to incremental revision and microcontribution at levels below that of the complete text or edition.

Universal discoverability means that our editions -- and the discrete elements of their content -- must be discoverable in every way their potential readers and users might wish. So, yes, we must be able to search and browse for them via the homepage of an individual database or online publication, but they also must surface in Google, Bing, and their successors, as well as in library catalogs, introductory handbooks, and course materials. Links, created manually or automatically, ought to bring users to inscriptions from other web resources for ancient studies. Other special-purpose digital systems ought to be able to discover and access epigraphic content via open application programming interfaces. Changes and updates to contents should be reflected not only in a human-readable web page or printed conference handout, but also with a live web feed on the site that can be consumed and syndicated by automatic readers and aggregators.

Stability of access means that I should be able to revisit a given online publication at exactly the same web address I used to read it a month ago. A year ago. Ten years ago. For as long as the web exists. Don't make me run the query again. Ideally, that web address -- the Universal Resource Identifier or URI -- will be as short and easy to remember as possible.

Then there's reliability of citation. If we cannot cite digital publications in a consistent and dependable way, those publications are of no value to the scholarly enterprise. I cannot cite your text for comparison, acknowledge your argument in making my own, or do any of the other things for which we must make scholarly reference if your online publication is not readily citable. This implies more than just stability of access, though that is essential. It also means that you must give me a way to include a reference to the appropriate part of the publication in the URI. So, if you're publishing 10 or 1,000 or 100,000 inscriptions online, you should give me a URI for each one of them. Just posting a PDF or returning a composite page of search results containing all of them doesn't cut it.

Next, there's ubiquity of descriptive terminology.

Suppose I want to find all funerary inscriptions that have been published online that were originally written in Greek or Latin and that likely date to the third century after Christ. What if I want to narrow that group of inscriptions to a particular geographic context or pull out only those whose texts contain Roman tria nomina? We need an agreed mechanism for structuring and expressing the requisite descriptive elements in a standard, discoverable manner on the web.

There is an emerging web standard for this purpose. It is called Linked Data and, though we do not have time to explore it in detail here today, I believe that there is an urgent and immediate need for a collaborative effort, with real but modest funding behind it, to define and publish descriptive vocabularies for epigraphy that can be used in linked data applications. This work should build upon the metadata normalization efforts already well underway within the EAGLE consortium, and it might well benefit from the complementary work done a few years ago by EpiDoc volunteers to produce multi-language translations of many of the terms used by the Rome database. This would allow us to use a common set of machine-actionable terms for such key aspects as material, type of support, identifiable personages and places, mode of inscription, and date.
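To make the idea concrete, a record in such a scheme might look something like the following Turtle sketch. Everything here is illustrative: the inscription URI is invented, the Dublin Core terms are real but chosen for the example, and the vocabulary URIs stand in for the shared epigraphic vocabularies that remain to be defined:

  @prefix dcterms: <http://purl.org/dc/terms/> .

  <https://example.org/inscriptions/123>
      dcterms:type <https://example.org/voc/typeins/funerary> ;   # type of inscription
      dcterms:medium <https://example.org/voc/material/marble> ;  # material of the support
      dcterms:temporal "201/300" ;                                # likely date range
      dcterms:language "la" .                                     # original language

With shared terms like these published on the web, the funerary-inscription query imagined above becomes a matter of matching a handful of machine-actionable statements rather than parsing prose descriptions.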

There is already a collaborative Linked Data mechanism for communicating geographic relationships in ancient studies. It is called Pelagios, and it already links XYZ different projects in various subfields of ancient studies to the Pleiades online gazetteer.

By joining Pelagios, epigraphic databases and publications would make their contents immediately discoverable by geographic location alongside the contents of the Perseus digital library, the DAI object database Arachne, the holdings of the British Museum, the archaeological reports published by Fasti Online, the coins database of the American Numismatic Society, and the documentary papyri (among others).

Facility of download and readiness for reuse and remixing. It is here perhaps that the epigraphic community faces its greatest twenty-first century challenge. We must decide no less than whether to embrace or forfeit the full promise of the digital age, for if scholars are unable to deploy digital surrogates -- programs designed to retrieve, analyze, and reformat data for a specific research need -- across the full corpus of classical epigraphy, we will have forfeited the digital promise.

It is no secret that there is a history of disagreement and conflict in our community around the mere idea of putting published epigraphic texts in a digital database. Though such digitization and distribution is now established practice, the resulting databases and publications still assert a hodge-podge of rights and guidance for use, or else are distinguished by silence on the issue. In some cases, users are debarred from reuse of texts or other information in their own publications.

Let me offer, ex cathedra, a prescription for healing.

First, the social and institutional steps. Each person, project, institution, journal or publisher that puts inscriptions online should make a clear and complete statement of the rights it asserts over the publication, and the behavior it allows and expects of its users. "All rights reserved" or "for personal use only" are regimes that preclude most of the best of what we could do in the digital realm this century. If you choose such a path, you are, in my opinion, standing in the way of progress. Moreover, you will find many colleagues who, depending on the legal jurisdiction and their experience, will contest any copyright claims to the text itself, and reuse it anyway. Far better than that fraught scenario is to use a standard open license, or even to make a public domain declaration. There are now several licenses crafted by the Creative Commons and the Open Knowledge Foundation that preserve the copyright and data rights that are permissible by law in your jurisdiction, while freely granting users a range of uses that are consistent with academic needs and practice. By choosing an appropriate CC or OKF license for your epigraphic publication, you will help bring the future.

Technologically, we need to make downloading and reusing easier. "Click here to download" is a good start, but to make serious change we will have to move beyond the PDF file to provide formats that can be chewed up and remixed by computational agents without losing the significance of the more discipline-specific aspects of the presentation. (We will pass over in silence the abomination of posting Microsoft Word files online).

So, an epigraphic edition in HTML or plain text, using standard Unicode encodings for all characters, is an excellent improvement over the PDF. I'd urge you, whenever possible, to go further and provide EpiDoc XML for download. The chief additional virtues of EpiDoc, with regard to reuse, are that the constituent elements of the edition (text, translation, commentary) are distinguished from each other in a consistent, machine-actionable manner, and that the semantics of the sigla you use in the text itself are represented unambiguously for further computational processing.
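In outline, the body of such an edition separates its constituent parts with typed divisions (a sketch; the division types follow standard EpiDoc convention, the content is invented):

  <body>
    <div type="edition" xml:lang="la">
      <ab>the Latin text, marked up with Leiden semantics</ab>
    </div>
    <div type="translation" xml:lang="en">
      <p>the translation</p>
    </div>
    <div type="commentary">
      <p>the commentary</p>
    </div>
  </body>

A harvesting agent can thus pull out, say, only the edition divisions across a whole corpus, with no guesswork about where the text ends and the commentary begins.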

And here let me offer another parenthetical call for action. If EpiDoc is to live up to the position of esteem it has now attained in Greek and Latin epigraphic circles, we must make it easier and cheaper to use. I'll speak more about some aspects in a minute, but allow me to observe here that there is a critical need to identify and obtain funding -- probably a relatively small amount -- to convene some working meetings aimed at completing and updating the EpiDoc guidelines, and to pay for a postdoc or graduate student to support and contribute to that effort steadily for an academic year or so. The present state of the Guidelines is, I'm sad to say, close to useless. Many good people have tried to redress the problem, but job pressures and the lack of resources necessary to create real working time have so far stymied progress. The state of the EpiDoc Guidelines is a train wreck waiting to happen. We need to fix it.

I'd like to wrap up the discussion of digital benefits with a few words on the subject of microcontribution. Microcontribution is another area in which we could enhance scholarly progress in epigraphy. By microcontribution I mean the incorporation into an online, scholarly publication of any contribution of content or interpretation in a unit too small to have been given its own article in a traditional journal. Have you ever read someone else's text and thought "I'd emend this bit differently"? Can you provide a reliable identification for a personal or geographic name that puzzled the original editor? We do have, in print, the genre of "notes on inscriptions" articles of one type or another -- and the annual bibliographic reviews strive mightily to keep these connected to the published editions for us -- but what if that were made to happen automatically in an online environment?

So, are the tools now at hand for digital publication of epigraphy fit for purpose? Do they do all these things for us? Are they effective? Efficient? User-friendly?

No.

Of course, that's partly because we've had such a huge task of catching up to do. But we're making good, consistent progress on retrospective digitization. We cannot ignore the future.

And we are seeing important gains in some areas. The new interface to the Heidelberg database, for example, uses clear, stable URIs for each inscription's record (and for each image and bibliographic entry). Similar URIs were always available, but they were not foregrounded in the application and so required extra effort on the part of a user to discover. But I think you'll all agree that consistency of citation is better supported in the new system. On the reuse front, Heidelberg has long given clear guidance on expectations -- a reused text should indicate its origin in EDH.

But we have a long way to go in other areas. I would encourage those who currently manage (or are planning) large databases or digital corpora of inscriptions to look closely at what the papyrologists have been doing with papyri.info. Not only does the system provide descriptive information, texts, translations, and images -- at varying levels of completeness -- for some 80,000 papyri in a manner consistent with many of the desiderata I have enumerated above, but it also serves as the publication of record for a small but growing number of born-digital papyrological editions and microcontributions, all created and managed in EpiDoc.

The papyrologists have benefited, of course, from the significant largesse of the Andrew W. Mellon Foundation, as well as support from the U.S. National Endowment for the Humanities, in bringing this resource to life. Fortunately, both the institutions involved and the funders felt that it was essential to produce the software under open license, so it's ripe for reuse. But it's a complex piece of software that would require modification and extension to support the needs of epigraphers. It, like the major databases we already have in the field, would need an institutional home, tech support staff, and on-going funding. At NYU we are waiting for the National Endowment for the Humanities to tell us if they will give us funding to begin the customization work that would lead to a version of Papyri.info for epigraphy. If funded, this effort will move forward in collaboration with EAGLE and other major database projects, but we will also seek to provide, as soon as possible, an online environment for the creation, collaborative emendation, and digital publication of epigraphic texts from individuals and projects.