Class Blog

Week One: Reverse Engineering Photogrammar

For my first blog post, I chose to reverse engineer Photogrammar, a map-based platform built by a humanities research team at Yale University. Photogrammar lets users search through photographs commissioned by the United States Farm Security Administration and Office of War Information (FSA-OWI), spanning the Great Depression through World War II.

These photos offer a snapshot of life during a pivotal time in American history, one beset by severe poverty and mass migration. To me, Photogrammar offered a more personal view of the Great Depression. For example, a photo taken by Dorothea Lange shows a car full of Dust Bowl refugees, their faces a visual testament to the despair of the Depression and the farming crisis.

(image: Dust Bowl refugees)

(photo credit: Dorothea Lange)

Navigating through a series of photographs showing slums in San Francisco, abandoned homes in Utah, and marching soldiers in Virginia gave me a more emotional view of the 1930s and '40s, one that a textbook alone never could.

The Yale research team used the FSA-OWI photos as the project's main source. From there, the team scanned the photos into a digital format and geocoded them onto a digital map, so that users like me can isolate a location in the United States and search for the photos taken in that area. Two systems of organization narrow the database. The first is a hierarchical classification developed by Paul Vanderbilt in 1942, with categories like “Transportation” and “War”; browsing by category deepens one’s understanding of the time period. The second system diversifies the user’s search options, allowing photos to be isolated by location, date, and photographer.

Part of the team’s presentation is a large “Start Exploring” button, which directs users to the main map, the core of the project. This illuminates the purpose of the project: to provide a clean, interactive format in which users like me (non-historians) can learn about US history in a visually appealing way. The map is aesthetically pleasing, with deep green indicating counties with a wider array of photos to choose from and lighter green indicating fewer. The map also includes a “dots” mode, in which the user can search for photos across the map by photographer. A timeline at the top of the page narrows the search further, isolating photos not only by county and photographer but by year.

(screenshot: the Photogrammar map)

The map is presented using CARTO and Leaflet. I find that the diversity of search options makes this project a huge success for users like me who are new to digital humanities. The wide array of search options helps organize the roughly 170,000 photos; without them, the user could become either overwhelmed or bored. I find the project visually stimulating, informative, and easy to use, a welcome introduction to the world of digital humanities.
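The county, photographer, and year filters described above can be imagined as a simple query over photo records. Here is a minimal sketch of that idea in Python; the field names and the toy records are my own invention for illustration, not Photogrammar's actual data schema:

```python
# Hypothetical sketch of Photogrammar-style filtering.
# Field names ("county", "photographer", "year") and the sample
# records are assumptions, not the project's real schema or data.

def filter_photos(photos, county=None, photographer=None, year=None):
    """Return only the photos matching every filter that is set."""
    results = []
    for photo in photos:
        if county is not None and photo["county"] != county:
            continue
        if photographer is not None and photo["photographer"] != photographer:
            continue
        if year is not None and photo["year"] != year:
            continue
        results.append(photo)
    return results

# Toy records standing in for the FSA-OWI collection.
photos = [
    {"title": "Migrant family", "county": "Kern, CA",
     "photographer": "Dorothea Lange", "year": 1936},
    {"title": "Abandoned farm", "county": "Beaver, UT",
     "photographer": "Dorothea Lange", "year": 1936},
    {"title": "Soldiers marching", "county": "Arlington, VA",
     "photographer": "John Vachon", "year": 1942},
]

titles = [p["title"] for p in
          filter_photos(photos, photographer="Dorothea Lange", year=1936)]
print(titles)  # ['Migrant family', 'Abandoned farm']
```

Combining filters this way (each one only narrowing the result set) is what keeps a 170,000-photo collection from overwhelming the user.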

Examining Early African American Film

Initial homepage of the DH Project, Early African American Film, created by Digital Humanities students at UCLA.

I chose to reverse engineer Early African American Film, a DH project and collaborative database that uses primary and secondary sources to ‘reconstruct’ the silent race film community of the early 20th century. Race films were created for African-American audiences, aiming to showcase narratives by and for African-Americans. Most of the actual films have been lost or destroyed, so evidence of their existence is pulled from newspaper advertisements, posters, and other paraphernalia surrounding the films. Early African American Film works with these primary and secondary sources to create a dataset that showcases actors, films, companies, and their relationships.

The project drew on a variety of primary sources, such as newspaper clippings, posters, and advertisements, pulled from archives like the George P. Johnson Negro Film Collection at UCLA. The group chose its own criteria for project inclusion and verified the primary sources via scanned digital copies. It also credited other archives, such as the Mayme Clayton Library and Museum, The Black Film Archive at Indiana University, the Pearl Bowser Collection at the Smithsonian, and Umbra. Secondary sources supplemented the data as well, including essays, actor profiles, and scholarly works by several authors who examine race films in depth.

After scanning the primary sources from the archives into a digital format and using the secondary sources to further construct a database of the actors, films, and companies that made up the race film community, the project processed the data in spreadsheet form. The “relational database” is hosted on Airtable and can be downloaded in CSV (comma-separated values) format to be opened in a separate application. The curated database presents the information from the primary archives and scholarly essays in a table format, including the scanned copies of the film paraphernalia.
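Because the database exports to CSV, a visitor can work with it in any tool that reads that format. As a minimal sketch using Python's standard csv module (the column names and the two rows below are invented sample data, not the project's actual export):

```python
import csv
import io

# Hypothetical sketch of working with a CSV exported from the
# Airtable database. The column names and rows are illustrative
# stand-ins; the real export's schema may differ.
sample_csv = """film,year,company
The Homesteader,1919,Micheaux Book and Film Company
Within Our Gates,1920,Micheaux Book and Film Company
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Group films by the company that produced them, mirroring the
# database's film-to-company relationships.
films_by_company = {}
for row in rows:
    films_by_company.setdefault(row["company"], []).append(row["film"])

print(films_by_company)
```

Reading the export with DictReader keeps each row tied to its column names, which is what makes the "relational" structure of the table easy to regroup and query offline.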

This database is presented to visitors as a tool to be manipulated by the user rather than as an exhaustive representation of the relationships that made up the race film industry. The table is simple and relatively basic in its presentation; however, the project provides a slew of tutorials on what researchers can do with the data at their leisure. In addition, the project offers a few visualization tools, such as a bar graph showing the number of race films produced in a given year and a network graph created in Cytoscape (with a tutorial on how to create your own).

As a visitor to the database with no prior knowledge of silent race films, I personally enjoyed the page that explains in depth what race films actually are. Other DH projects I have worked with seem to be created for audiences already familiar with the subject, so I enjoyed being able to familiarize myself with the topic before diving into the data. I found the data presented efficiently, although I was a bit overwhelmed by having to constantly switch between windows to follow the tutorials on how to work with it. I also appreciated the project's detailed list of sources, especially for this particular blog post, because it made very clear how the information was retrieved and presented.