Search Results

Grant number like: HAA-266444-19

Award Number: HAA-266444-19
Grant Program: Digital Humanities Advancement Grants
Division or Office: Digital Humanities
Award Recipient: George Mason University (Fairfax, VA 22030-4444, USA)
Project Title: Datascribe: Enabling Structured Data Transcription in the Omeka S Web Platform
Project Directors: Jessica Otis, Lincoln A. Mullen
Field of Project: History, General
Year Awarded: 2019
Award Period: 9/1/2019 - 12/30/2022
Approved Award Total: $324,733.00

The creation of a structured data transcription module for the Omeka S platform that will make it easier for scholars working with quantitative sources, such as government forms or institutional records, to transcribe them into structured data that can be analyzed or visualized.

This is an application for a Level III Digital Humanities Advancement Grant to create Datascribe, a structured data transcription module, or plug-in, for the Omeka S digital collections platform. Scholars often collect sources, such as government forms or institutional records, intending to transcribe them into datasets that can be analyzed or visualized. Existing software enables transcription into free-form text but not into tables of data. The proposed module will enable scholars to identify the structure of the data within their sources, speed up the transcription of those sources, and reliably structure their transcriptions in a form amenable to computational analysis. Scholars will be able to turn sources into tables of data stored as numbers, dates, or categories. The module will build on the Omeka S platform, enabling scholars to display transcriptions alongside the source images and metadata, to crowdsource transcriptions, and to publish their results on the web.
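
To make the idea of typed transcription concrete, the following is a minimal, hypothetical sketch in Python of a structured transcription form whose fields are declared as dates, numbers, or categories, with each transcribed cell validated against its declared type. Datascribe itself is an Omeka S module, so the form definition, field names, and sample ledger row below are illustrative assumptions, not the module's actual code or data model.

from dataclasses import dataclass
from datetime import date, datetime
from typing import Union


@dataclass
class Field:
    """One column of the transcription form, with a declared type."""
    name: str
    kind: str                 # "number", "date", or "category"
    categories: tuple = ()    # allowed values when kind == "category"


# Hypothetical form definition for a burial-register ledger page.
FORM = [
    Field("burial_date", "date"),
    Field("age_in_years", "number"),
    Field("cause_of_death", "category",
          ("smallpox", "consumption", "fever", "unknown")),
]


def parse_value(field: Field, raw: str) -> Union[float, date, str]:
    """Validate one transcribed cell against its field's declared type."""
    raw = raw.strip()
    if field.kind == "number":
        return float(raw)
    if field.kind == "date":
        return datetime.strptime(raw, "%Y-%m-%d").date()
    if field.kind == "category":
        if raw not in field.categories:
            raise ValueError(f"{raw!r} is not a valid value for {field.name}")
        return raw
    raise ValueError(f"unknown field kind: {field.kind!r}")


def parse_row(raw_cells):
    """Turn one transcribed ledger line into a typed record."""
    return {f.name: parse_value(f, cell) for f, cell in zip(FORM, raw_cells)}


# One transcribed line from a (hypothetical) digitized source image.
print(parse_row(["1788-03-14", "42", "smallpox"]))
# -> {'burial_date': datetime.date(1788, 3, 14),
#     'age_in_years': 42.0,
#     'cause_of_death': 'smallpox'}

Validating each cell against a declared type at transcription time, rather than cleaning free-form text afterward, is what makes the resulting table immediately usable for computational analysis and visualization.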