Presentation at Museums on the Web 2015

  • Posted on: 9 March 2015
  • By: warren
Palmer House Hilton, Chicago, IL, USA
April 8-11, 2015, 10:30am - 12:00pm
Grand Ballroom (4F) 
Joint work with David Evans, Minsi Chen, Mark Farrell and Daniel Mayles.
We review the possibilities, pitfalls, and promises of recreating lost heritage sites and historical events using augmented reality and "Big Data" archival databases. We define augmented reality as any method of adding context or content, through audio or visual means, to the current physical space of a visitor to a museum or outdoor site. Examples range from simple prerecorded audio to graphics rendered in real time and displayed on a smartphone.
Previous work has focused on complex multimedia museum guides, whose value as enabling rather than distracting remains to be evaluated. We propose a data-driven approach in which an exhibit's augmentation is not static but dynamically generated from the totality of the data known about the location, artifacts, or event. For example, at Bletchley Park, re-enacted audio conversations are played within rooms as visitors walk through them. These can be called "virtual contents," since the audio recordings are manufactured. Given that a number of documentary sources, such as meeting minutes, are available concerning the events that occurred within the site, a dynamically computer-generated script could add to the exhibits.
Visitors' experiences can therefore react to their movements, differ on each visit, and remain factually correct without requiring any expensive redesign. Furthermore, a data-driven approach allows exhibits to be updated on the fly as researchers create or curate new data sources within the museum. If artifacts need to be removed from an exhibit, pictures, descriptions, or three-dimensionally printed copies can be substituted, and the augmented-reality visitor experience can adapt accordingly.

Presentation at the Department of History and Classics, Acadia University

  • Posted on: 5 March 2015
  • By: warren


Mapping the Western Front: the British and German experiences
March 26th, 7pm, BAC241
The static nature and scale of the battles on the Western Front were unwelcome to both the Entente and Central powers during the Great War. Faced with logistical requirements on an unprecedented scale, both sides had to produce standardized maps of the battlefield quickly, at different scales, for tactical and strategic purposes. This was a minor revolution in military thinking: previously, cavalry officers had been expected to ride with a sketch-board to map out terrain and enemy positions for their commanders.
In this talk I will contrast the Entente and Central efforts at mapping battlefields, highlight the differences in their approaches, and present evidence about local military intelligence activities. I will also explain both the British and German coordinate systems and show how to geo-reference these maps in modern mapping software.

Presentation: Bridging Communities of Practice: Emerging Technologies for Content-Centered Linking

  • Posted on: 1 April 2014
  • By: warren
Bridging Communities of Practice: Emerging Technologies for Content-Centered Linking
Thursday, April 03, 2014 - 1:30pm - 3:00pm
Watertable Ballroom (ABC), Renaissance Baltimore Harborplace Hotel
Baltimore, MD, USA
Presented by Douglas W. Oard
This paper describes the potential of new technologies for linking content among cultural heritage collections and between those collections and collections created for other purposes. In recent years, museum professionals, archivists, librarians, and digital humanists have worked to render cultural heritage metadata in an interoperable form as linked open data. Concurrently, computer and information scientists have been developing automated techniques that have significant implications for this effort. Some of these automated techniques focus on linking related materials in more nuanced ways than have heretofore been practical. Other techniques seek to automatically represent some aspects of the content of those materials in a form that is directly compatible with linked open data. Bringing these complementary communities together offers new opportunities for leveraging the large, diverse, and distributed collections of computationally accessible content to which many of us now contribute.

Presentation at LDG2014: From the trenches - API issues in Linked Geo Data

  • Posted on: 1 March 2014
  • By: warren


5th - 6th March 2014, Campus London, Shoreditch, UK
Joint work with David Evans
This paper reports on the experience of building a linked geo data coordinate-translation API and some of the issues that arose in the process. Beyond the basic capabilities of SPARQL, a specialized API was constructed to translate obsolete British Trench Map coordinates from the Great War into the modern WGS84 reference system. Concerns over current methods of recording geographic information, along with the accuracy and precision of that information, are discussed. Open questions about managing the opportunistic enrichment of geographical instances are also raised, as are the scalability pitfalls therein.
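To give a flavour of what such a translation involves, the sketch below decodes the within-sheet part of a British trench map reference into yard offsets. This is a simplified illustration, not the API's implementation: it assumes the standard layout (36 numbered 1,000-yard squares in a 6 x 6 grid, numbered left to right from the top; subsquares a/b/c/d as NW/NE/SW/SE quarters; final digits as tenths of a 500-yard side) and ignores the sheet lattice and datum shift to WGS84, which is the hard part the API addresses.

```python
# Simplified sketch: decode a within-sheet trench map reference such as
# "Q.4.c.8.5" into (easting, northing) yard offsets from the SW corner of
# the lettered 6,000-yard square. Layout assumptions are stated above.

def trench_offset(ref: str) -> tuple[int, int]:
    """Return (easting, northing) in yards within the lettered square."""
    square, number, sub, east_t, north_t = ref.split(".")
    n = int(number)
    col, row = (n - 1) % 6, (n - 1) // 6       # row 0 is the top row
    easting = col * 1000
    northing = (5 - row) * 1000                # measured from the south edge
    sub_east, sub_north = {"a": (0, 500), "b": (500, 500),
                           "c": (0, 0), "d": (500, 0)}[sub.lower()]
    easting += sub_east + int(east_t) * 50     # one tenth of 500 yards
    northing += sub_north + int(north_t) * 50
    return easting, northing

print(trench_offset("Q.4.c.8.5"))  # -> (3400, 5250)
```

Turning these sheet-relative yards into WGS84 latitude and longitude additionally requires knowing each sheet's position and the underlying datum, which is where a dedicated API earns its keep.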
Note: The final report on the workshop can be read here.

Presentation at ACAT - Ask not what you can do for Linked Open Data but what Linked Open Data can do for you.

  • Posted on: 5 December 2013
  • By: warren


Ask not what you can do for Linked Open Data but what Linked Open Data can do for you.
Monday, 9th December 2013 - 12PM
Centre for Aboriginal Studies Boardroom, Building 211, Curtin University
Presented to the Centre for Culture and Technology (CCAT)
Digital Humanities scholars have long been hampered by the twin problems of getting data into digital form and then managing ever-increasing amounts of it. Too often, the data behind the research becomes a prisoner of a 'research portal' or is lost on someone's laptop. In many ways, the most successful data management tool so far is the spreadsheet - a 40-year-old technology!
This talk is about linked open data, or the semantic web, an approach to the management of data that is showing promise for researchers, libraries, and archives. The talk is non-technical and focuses on explaining how real-world research data problems can be solved. These include establishing the identity of historical persons, dealing with incomplete or false data, identifying or referencing lost geographical locations, and encouraging the serendipitous reuse of data in other projects. Real-world examples of problematic data from the Great War will be shown from the Muninn Project, along with solutions using linked open data approaches.
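The identity problem mentioned above is a good example of what linked open data buys you. The sketch below, using plain Python tuples rather than an RDF library, shows how an owl:sameAs statement lets a consumer merge facts recorded about the same historical person under different identifiers in different archives. All the URIs are invented for illustration.

```python
# Hypothetical illustration: two archives describe the same soldier under
# different URIs; an owl:sameAs triple lets us pool everything known about
# him. Triples are (subject, predicate, object); URIs are invented.

SAME_AS = "owl:sameAs"

triples = [
    ("ex:archiveA/person/1042", "foaf:name", "John Smith"),
    ("ex:archiveA/person/1042", "ex:regiment", "ex:regiments/78th"),
    ("ex:archiveB/soldier/jsmith", "ex:diedAt", "ex:places/Vimy"),
    ("ex:archiveA/person/1042", SAME_AS, "ex:archiveB/soldier/jsmith"),
]

def facts_about(uri, triples):
    """Collect facts for a URI and everything linked to it by owl:sameAs."""
    aliases = {uri}
    changed = True
    while changed:  # follow sameAs links in both directions until closure
        changed = False
        for s, p, o in triples:
            if p == SAME_AS:
                if s in aliases and o not in aliases:
                    aliases.add(o); changed = True
                if o in aliases and s not in aliases:
                    aliases.add(s); changed = True
    return [(p, o) for s, p, o in triples if s in aliases and p != SAME_AS]

print(facts_about("ex:archiveA/person/1042", triples))
```

The point for a non-technical audience: neither archive had to rename its records or adopt the other's database schema; a single published link is enough to make the combined knowledge queryable.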

Presentation at SOTM 2013: Open Historical Map -- Re-using outdated information

  • Posted on: 20 August 2013
  • By: warren


Open Historical Map : Re-using outdated information
SOTM 2013
Track 1, Friday September 6th 2013, 14:20
Joint work with Jeff Meyer, David Evans, Susanna Ånäs, and many others.

Presented by David Evans.
The Open Historical Map is billed as "The world's most out-of-date map". It is a complementary approach that uses the Open Street Map tools but focuses on features that have since disappeared or changed shape or purpose. Because of scale and relevance issues, the Open Historical Map uses its own database instance, which imports some of the features of Open Street Map. Editing of the map by users is encouraged, though most of the data comes from automated imports from other historical projects that contribute their mapping information. This presentation will review how the Street and Historical maps can be used together and the approaches used to handle problems of time and accuracy in the historical map.
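One concrete piece of the time-handling approach: Open Historical Map features carry start_date and end_date tags, so a renderer can keep only the features that existed at a requested date. The sketch below illustrates the idea on simplified stand-in feature dictionaries rather than real OHM elements.

```python
# Sketch of date-based filtering: keep only features whose
# [start_date, end_date] interval covers the requested year.
# Feature dicts are simplified stand-ins for real OHM elements.

def visible_in(features, year):
    """Return names of features that existed in the given year."""
    out = []
    for f in features:
        start = int(f.get("start_date", -10**9))  # open-ended if missing
        end = int(f.get("end_date", 10**9))
        if start <= year <= end:
            out.append(f["name"])
    return out

features = [
    {"name": "trench line", "start_date": "1915", "end_date": "1918"},
    {"name": "modern motorway", "start_date": "1974"},
    {"name": "demolished mill", "end_date": "1902"},
]

print(visible_in(features, 1916))  # -> ['trench line']
```

Real OHM dates can be partial or uncertain ("~1915", "1915-07"), which is one of the accuracy problems the presentation discusses; the sketch assumes clean four-digit years.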

Paper at SEXI2013: Sex, Privacy and Ontologies

  • Posted on: 30 January 2013
  • By: warren

Sex, Privacy and Ontologies
SEXI-2013 workshop at WSDM-2013
Tuesday, February 5, 2013 at 11am
Presented by Adriel Dean-Hall
Personal profiling has long had negative connotations because of its historical association with societal discrimination. Here we revisit the topic with an ontology-driven approach to personal profiling that explicitly describes preferences and appearances. We argue that explicit methods are superior to vendor-side inferences and suggest that privacy can be maintained both by exchanging preferences independently from identity and by sharing only the preferences relevant to the transaction. Furthermore, this method creates an opportunity for additional sales through the support of anonymous 'drive-by' shopping that preserves privacy. We close by reviewing the computational advantages of accurate profiling and how the ontology can be applied to complex real-world situations.
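The privacy mechanism argued for above can be pictured as follows. This is a toy sketch, not the paper's ontology: the client holds a full preference profile but transmits only the preferences relevant to the current transaction, under a throwaway session token rather than an identity. The property names and vendor categories are invented for illustration.

```python
# Toy sketch: share only transaction-relevant preferences, decoupled from
# identity via a random session token. Names below are invented.

import secrets

profile = {
    "shoe_size": "43",
    "colour": "burgundy",
    "dietary": "vegetarian",
    "home_address": "...",  # never relevant to an anonymous purchase
}

RELEVANT = {"shoe_shop": {"shoe_size", "colour"},
            "restaurant": {"dietary"}}

def offer(vendor_type, profile):
    """Build the message actually sent: relevant preferences, anonymous token."""
    keys = RELEVANT.get(vendor_type, set())
    return {"token": secrets.token_hex(8),
            "preferences": {k: profile[k] for k in keys if k in profile}}

msg = offer("shoe_shop", profile)
print(sorted(msg["preferences"]))  # -> ['colour', 'shoe_size']
```

The vendor receives enough to make a tailored offer - the 'drive-by' shopping scenario - while identifying attributes stay with the client.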
Paper is here

Paper at VLDB 2006: Multi-column substring matching for database schema translation

  • Posted on: 15 August 2006
  • By: warren


Multi-column substring matching for database schema translation,
Wednesday, September 13, 2006, 12:00pm-12:30pm
Abstract: We describe a method for discovering complex schema translations involving substrings from multiple database columns. The method does not require a training set of instances linked across databases, and it is capable of dealing with both fixed- and variable-length field columns. We propose an iterative algorithm that deduces the correct sequence of concatenations of column substrings in order to translate from one database to another. We introduce the algorithm along with examples on common database data values and examine its performance on real-world and synthetic datasets.
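To make the problem concrete, the toy sketch below shows the simplest version of the task: given a few aligned example rows, discover a fixed recipe of column prefixes whose concatenation reproduces a target column. This brute-force search is only an illustration of the problem setting, not the paper's iterative algorithm, and the example data is invented.

```python
# Toy illustration of the schema-translation problem: find which prefix of
# each source column, concatenated in order, reproduces the target column.
# (Not the paper's algorithm; a brute-force search over prefix lengths.)

from itertools import product

rows = [  # invented example: target = first 3 chars of col0 + all of col1
    (("ONTARIO", "1042"), "ONT1042"),
    (("QUEBEC",  "77"),   "QUE77"),
    (("ALBERTA", "9"),    "ALB9"),
]

def find_recipe(rows, max_len=8):
    """Search prefix lengths (0 = whole field) that explain every row."""
    ncols = len(rows[0][0])
    for lens in product(range(max_len + 1), repeat=ncols):
        def apply(src):
            return "".join(f if n == 0 else f[:n] for f, n in zip(src, lens))
        if all(apply(src) == tgt for src, tgt in rows):
            return lens
    return None

print(find_recipe(rows))  # -> (3, 0): 3-char prefix of col0, whole col1
```

The hard cases the paper addresses - variable-length fields, no linked training instances, and substrings rather than plain prefixes - are exactly what make this naive enumeration impractical at scale.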