The Husband Stitch Network Graph

Carmen Maria Machado’s short story, The Husband Stitch, is a first-person narrative about a woman who wears a ribbon around her neck to keep her head attached to her body. I was interested in examining the relationships among the characters of the narrative and their actions, so I created an edge list with three columns focusing on the subjects and verbs throughout the story. The first column, Source, contains a character name or pronoun. The second column, Target, contains the verb attributed to that Source. The third column, Weight, counts the number of times the same Source-Target combination appears in the story.
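The weighting step can be sketched in a few lines of Python. The subject-verb pairs below are invented stand-ins for the ones I tallied from the story:

```python
from collections import Counter

# Hypothetical (Source, Target) pairs standing in for those pulled from the text
pairs = [
    ("I", "have"), ("he", "proposed"), ("I", "have"),
    ("we", "making"), ("mother", "making"), ("you", "can"),
]

# Weight = number of times the same Source/Target combination appears
weights = Counter(pairs)

# One row per unique combination, matching the three-column edge list
edge_list = [
    {"Source": s, "Target": t, "Weight": w}
    for (s, t), w in weights.items()
]
```

This edge list is exactly the shape Google Fusion Tables expects for a weighted network graph.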

The Husband Stitch Google Fusion Network Graph

The character Sources (I, you, we, she, he, mother, father, boy, girl, woman, man, women, boyfriend, girlfriend, son, baby, doctor, witches, killer, murderer, teacher, pig) are the blue nodes, and the verb Targets are the orange nodes.

The resulting Google Fusion network graph reveals that the most active characters are those referred to as I, she, he, we, and you. The most commonly used verbs (“have,” “can,” “was”) don’t reveal as much about the characters. It’s interesting to see that more specific verbs such as “proposed” are limited to an individual (he), whereas more general verbs such as “can” are shared by several characters (I, you, he, woman). Certain verbs connect characters together and reveal their similar behaviors; for example, “we” and “mother” are connected by “making.”

There are several limitations to this network graph. One flaw is that gendered pronouns are shared by various characters in the story, making the connecting verb nodes difficult to attribute to specific characters (e.g., “she” could be the mother, the girl, or another female; “he” could be the father, the boy, or another male). Another limitation is that the graph does not show the order or context in which the subject-verb relationships appear throughout the narrative. A timeline of character appearances could show which characters are more tangential (appearing in a couple of consecutive paragraphs) versus central (appearing scattered across the narrative).

 

reVilna


reVilna is a digital mapping project about the Vilna Ghetto in Nazi-occupied Vilnius, Lithuania, focusing on eight narratives: Formation of Ghetto, Aktionen, Judenrat, Health & Education, Life in the Ghetto, Art & Culture, Resistance & the FPO, and End of the Ghetto. Each of these stories first informs the user with historical context and an image gallery before revealing the map (built with Leaflet).


Formation of Ghetto, Aktionen, Resistance & the FPO, and End of the Ghetto are structured by a timeline. The points on the map are linked to specific dates in history with corresponding images and text. The current point is the deepest red, while upcoming points are pink and light pink. The user can request more guidance through the story by clicking the play button; however, this mode moves through the points very quickly, making it difficult to follow (it’s easier to click through the timeline manually).

In the non-temporal stories, such as “Health and Education,” the points are color-coded by type of place, people, or activity (e.g., Soup Kitchen, Sport, Ghetto Police). Certain locations can be categorized fairly objectively, such as “Hospitals.” Other categories, such as “Life,” are broader and can include varied locations. These groupings create assumptions about the types of locations they include and impose different meanings onto the locations.

The purpose of the green line superimposed onto the map is unclear. The map designer seems to assume that the user will understand that it outlines the Ghetto areas, even though this is never explicitly stated.


As Turnbull says in Exhibit 3, “Maps have been thought to be objective in that they are independent of the view of a particular observer.” reVilna is a subjective representation of the Vilna Ghetto that constructs an educational narrative about the landscape’s history. At the same time, the aerial perspective imposes a territorial point of view: users see the map through the lens of the Nazis who controlled the Ghetto. The map adopts the vantage point of those who held the most authority, exemplifying what Bruno Latour describes in his theories about the power hierarchies embedded in maps.

The exhibits in Turnbull’s Maps are Territories discuss the subjective nature of maps, even as most people continue to consider maps objective representations of the real world. Maps are inherently flawed by the very act of flattening a spherical shape onto a two-dimensional plane. Iconic representations, symbolic representations, color palettes, titles, labels, orientation, scale, etc. are all variables that construct subjective views. Even the base map for Google Maps, the Mercator projection, is inaccurate, yet we’ve grown so accustomed to it that we take it as a scientific representation of the world.

For an alternative version of this map, I would restructure the POV to be through the eyes of the Jews in the Ghetto. The map would use virtual reality (Oculus or HTC Vive) or renderings similar to Street View on Google Maps to adopt a humanistic perspective. In this way, the user could be immersed in the Ghetto rather than removed from it by an aerial perspective. Furthermore, instead of using arbitrary colors and shapes for the markers on the map, I would use icons to symbolize the locations. More advanced versions of the map could reach a wider audience through translations into other languages and text-to-speech support for blind or illiterate users.

Data Visualization: Featured Composers from Seasons 1842-52


I created the adjacent data visualization in Adobe Illustrator using the dataset for my final group project: New York Philharmonic History Metadata. I’m focusing on creating data visualizations for the final project, so I used this assignment as an opportunity to experiment with the dataset. To give a sense of its scope: the first concert date is December 7, 1842, and the last is April 2, 1911. This visualization depicts only a snippet of the dataset, focusing on the first decade (a total of 10 seasons), from 1842 to 1852.

I focused on this shorter timeframe because it was more manageable for the assignment, yet still offered enough material to see composer trends. I was interested in examining the popularity of the various composers: for each season, how many performed pieces were composed by each composer?

In order to make the visualization, I cleaned up the GitHub spreadsheet by manually going through each row, paying careful attention to the programID, workID, workTitle, and composerName columns. Many rows had repeating workIDs. In some cases this was because the same piece was included in two different programIDs; I kept these repetitions because they signified multiple performances of the same musical piece. In other cases, rows were repeated in order to incorporate more specific data in the other columns, such as movement, soloistName, soloistInstrument, and soloistRole. For the purposes of this visualization, I deleted these rows because they made it difficult to count the number of pieces performed per composer, and I wasn’t interested in portraying data about the movements or soloists.
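That cleaning rule could also be scripted rather than done by hand. A minimal sketch (the rows below are invented, though the column names come from the dataset): keep repeats of a workID across different programIDs as separate performances, and drop repeats within the same programID, since those rows exist only to carry movement or soloist detail.

```python
import csv
from io import StringIO

# Toy rows mimicking the Philharmonic spreadsheet; values are invented
raw = """programID,workID,workTitle,composerName,soloistName
1,100,Symphony No. 5,Beethoven,
1,100,Symphony No. 5,Beethoven,Jane Doe
2,100,Symphony No. 5,Beethoven,
"""

rows = list(csv.DictReader(StringIO(raw)))

# Keep one row per (programID, workID): repeats across programs count as
# separate performances, repeats within a program are soloist/movement detail
seen, performances = set(), []
for row in rows:
    key = (row["programID"], row["workID"])
    if key not in seen:
        seen.add(key)
        performances.append(row)
```

Here workID 100 survives twice, once per program, which is the count the visualization needs.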

I used color saturation to indicate the number of performances: the more saturated the pink, the higher the number of performances. The white overlay circles indicate that the corresponding composer had the highest number of performances that season (some composers are tied for the highest number in a given season). At a glance, the visualization communicates which composers were the most prominent each season. These trends were difficult to discern in the spreadsheet, but become much clearer when visualized.
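The saturation encoding might be implemented like this; the interpolation endpoints (white to a hot pink) are my own choice, not the exact colors in the poster:

```python
# Map a season's performance count to a pink whose saturation grows with it
def count_to_pink(count, max_count):
    # interpolate from white (count = 0) toward full pink (count = max_count)
    t = count / max_count if max_count else 0
    r = 255
    g = int(255 - t * (255 - 105))
    b = int(255 - t * (255 - 180))
    return f"#{r:02x}{g:02x}{b:02x}"
```

A count of zero yields white (`#ffffff`) and the season maximum yields the fully saturated pink, which mirrors the "more pink, more performances" rule.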

What We Buy

I examined the What We Buy dataset, which reveals what the city of Los Angeles buys for its residents with their taxpayer money. The information is presented in the form of 15 datacards, grouping the dataset into relevant chapters: $12.3 million on 1 AW139 twin-turbine helicopter, $21,929 on 72 pairs of custom-fit motorcycle patrol boots, $1,159,775 on leased golf carts, $8,549 on 6,670 soccer balls, $646,533 on 100 Radar Speed Signs, $6,797 on 2,723 basketball nets, $629,218 on 6,492,750 ballots, $4,638,600 on 4,339,676 lbs of thermoplastic marking material, $21,243 on Graffiti Buster, $530,238 on 5 Toro Groundsmaster 5900 rotary riding mowers, $13,368 on the Federal L.U.S.T. Tax, $1,348,566 on 7,617 fire hoses, $10,654 on 11,988 high-visibility white traffic gloves, $161,628 on 30,685 wet mops, and $129,218 on 52,100 frozen rats. The datacards demonstrate that the city spends a great deal of taxpayer money on recreational sports, policing, traffic systems, medical research, janitorial practices, gardening, petroleum spills, and fire emergencies.

Each datacard then goes into more detail about why the city invests in the object, by answering the following questions: “What’s this?”, “Why do we buy this?”, “Did you know?” In this way, the makers of the LA Control Panel microsite are able to directly communicate with their primary audience: Los Angeles taxpayers and government officials. The questions provide justification for tax money investment decisions by the city government.

From the dataset, the user can see what problems or situations Los Angeles is facing, as well as the city’s priorities and values. For example, the spending on soccer balls and basketball nets demonstrates that Los Angeles strives to create sports and recreational spaces. The city values building a sense of community through athletics.

There are definitely gaps in the data collection. The datacards are not transparent about which companies and brands these purchases are made from. It would be interesting to see how these objects and materials are distributed throughout the city. The taxpayer demographic is also unclear beyond the fact that they are Los Angeles residents: how old are they, and which neighborhoods do they live in?

Wallack and Srinivasan define a dataset’s ontology as follows: “Communities and states…[such as Los Angeles] represent the realities around them through distinct ontologies, or systems of categories and their interrelations by which groups order and manage information about the people, places, things, and events around them” (p. 1). The “What We Buy” datacards organize the dataset into a relevant framework for the intended audience in a way that makes its content accessible.

If I were to start over with the data-collection process, I would be interested in focusing on instances where the city wastes taxpayer money, or makes investments that aren’t relevant to the people’s wishes. I’d juxtapose surveys of Angelenos about how they want their tax dollars spent alongside the expenditure decisions of government officials. Every citizen has a different set of values and priorities for their community. Where is the overlap, and how do cities compromise in their spending decisions? Are there alternative ways of sourcing these items for the city? Perhaps the city could purchase used basketballs from the Lakers or local college teams rather than buying new ones.


Finding Aid: Justice for Janitors Los Angeles

In Room A1713 of the Charles E. Young Research Library at UCLA, researchers can find the Service Employees International Union, United Service Workers West records, ca. 1935-2008. Before unpacking the collection of boxes and folders, one could use its finding aid (prepared by Julia Tomassetti and Caroline Luce) to browse and target specific materials. This physical, archival collection is a “paradigmatic example of ‘social movement unionism’”, a hybrid of grassroots mobilization, in-depth research, and deliberate planning to transform laws, workforce demographics, and industrial frameworks.

The Los Angeles Justice for Janitors campaign (c. 1986-2000), also known as LA JfJ, comprised mostly low-wage Latino immigrant workers from the de-unionized janitorial industry. Through the Service Employees International Union’s (SEIU) Local 399/Local 1877, they organized themselves and gained more rights in society. Their efforts won three area-wide contracts covering up to 9,500 janitors from 1987 to 2000, improving their lives with increased wages and full health coverage for 5,000 workers.

The collection is organized in series and sub-series according to the type of material. There are ten series (in numerical order) titled as follows: “Legal, older,” “Internal training and planning,” “Ephemera,” “Internal publications,” “Studies,” “Press,” “Video/computer records,” “Photos,” “Agreements, bylaws, and constitutions,” and “Staff records.” These series titles were copied by the collection processors from the collection’s original folder titles. Each series comprises its own set of boxes full of folders with specific subtitles (Box #, Folder #). Most folders include a time window for their respective contents.

The series “Staff records” contains 16 subseries with 15 names. Here, individuals affiliated with the campaign become known. It is unclear, however, why they aren’t ordered alphabetically; perhaps this arrangement suggests a hierarchy of campaign involvement. “Other research” and “Miscellaneous” folders follow the names, holding records for staff with fewer documents attributed to them.

The media types in the collection include “photos, audiovisual records, ephemera, internal publications, internal training and planning records for the organization, staff records, research studies, collective bargaining agreements, and records pertaining to grievances, legal disputes, union recognition, and contract negotiations.” Most of the collection includes photos, audiovisual records, and documents from the three rounds of organizing and completing contract negotiations during the Justice for Janitors, Los Angeles campaign (1986 – 2000).

The heart of the collection is embodied by “extensive photos, ephemera, and organizing materials documenting the janitors’ dramatic and colorful demonstrations, civil disobedience, savvy use of media, and community alliances.” Here, the collection reaches its greatest depths. The activists put a lot of effort in documenting “the campaign’s strategic corporate and worksite research, organizing efforts, alliance building, and political strategies.”

The second part of the collection contains materials from LA JfJ Local 399 predecessor campaigns. These materials provide further historical context, enabling researchers to notice trends in the process of researching, campaigning, and documenting strategies for a workers’ movement. Remnants of JfJ campaigns in Northern California (Hewlett-Packard, Sacramento) and Southern California (security guards, Los Angeles) provide us with campaigning case studies. Documentary evidence of legal disputes, recognition, collective bargaining relationships, and internal complaints reveals hardships that the campaigns experienced and suggests coping mechanisms for enduring similar struggles.

Relying solely on the records in this collection would leave the LA JfJ narrative incomplete. The collection exhibits micro and macro gaps. For example, the finding aid does not indicate any maps, which would provide researchers with a macro-perspective on the information. The campaign concentrated its activity in six areas of the city: downtown, Century City, USC, LAX, Hughes Aircraft, and Toyota. A spatial visualization could show which parts of the city were denser with working and/or demonstrating janitors, and researchers could visit specific addresses if they wanted to seek more information onsite.

The narrative also lacks a sequential ordering of events. A JfJ LA timeline would enable researchers to see the progression of the campaign from its origins in the 1930s, or more recently in downtown Los Angeles. Researchers could analyze how events played off each other and begin a dialogue around the following questions: What were the highs and lows of the campaign? What events triggered the fluctuations? Why did the campaign unfold the way it did?

Reverse Engineering: Inventing Abstraction MoMA Exhibition

Inventing Abstraction: 1910 – 1925 is a website accompanying a MoMA exhibition of the same name that showed in New York City from December 23, 2012 to April 15, 2013. When browsing the site, the user learns about the artists and artworks that brought about the abstraction movement. Every page educates the user with another layer of context and historical significance about abstraction’s inaugural years.

Sources (assets)

In the MoMA Department of Painting and Sculpture, Curator Leah Dickerman, along with Curatorial Assistants Masha Chlenova and Jodi Roberts, curated Inventing Abstraction: 1910 – 1925 both in MoMA’s physical museum space and online. The curators were responsible for selecting what would be presented in the exhibit: paintings, drawings, books, sculptures, films, photographs, sound poems, atonal music, and non-narrative dance. After researching artists active between 1910 and 1925, they selected a total of 92 artists to feature in the exhibit and catalogued which artworks to include. Throughout the process, the art had to be photographed, scanned, transcribed, recorded, restored, shipped, and installed until it was fully integrated into the exhibition.

Processes (services)

The process of creating the exhibition and website required the MoMA’s staff to reach out to people beyond the museum’s walls including collaborators (i.e. Columbia and Second Story) and sponsors for funding. The main participants in the making of this website and its contents were people from the New York MoMA, Columbia Business School, and a digital design agency, Second Story.

The Artist Network Diagram lives as a 16-by-25-foot diagram on a wall in the exhibit, as an interactive interface on the “Connections” page, and as a downloadable PDF on the website. Before becoming an interactive diagram on the website, it was a collaborative effort by curators and designers from MoMA and data scientists from Columbia Business School, including Paul Ingram, Kravis Professor of Business, and doctoral candidate Mitali Banerjee.

In the Behind the Scenes: Inventing Abstraction, 1910 – 1925 YouTube video, Curator Dickerman explains how the network was constructed. The motif of a network represented how abstraction arose through a conglomeration of relationships in which artists influenced and were influenced by one another. Dickerman was fascinated by the social networks within the early abstraction artist community. The curatorial team created a spreadsheet (most likely in Excel) with the 92 artists, marking with a 1 or 0 (yes or no, respectively) whether artist X knew artists A, B, C, etc. Paul Ingram and Mitali Banerjee analyzed and graphed the collected data. In this initial data visualization, each node represented an artist and the diameter of the node corresponded to the artist’s number of relationships.
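In spreadsheet terms, this is a symmetric 0/1 adjacency matrix, and the node size in the first visualization was driven by each row’s sum. A toy version (the names and relationships here are illustrative, not the actual data):

```python
# A tiny version of the curators' 1/0 "who knew whom" spreadsheet
artists = ["Kandinsky", "Picabia", "Malevich", "Taeuber-Arp"]
knew = [
    [0, 1, 1, 1],  # Kandinsky
    [1, 0, 0, 1],  # Picabia
    [1, 0, 0, 0],  # Malevich
    [1, 1, 0, 0],  # Taeuber-Arp
]

# Each node's diameter scaled with its number of relationships,
# i.e. the row sum (the node's degree)
degrees = {name: sum(row) for name, row in zip(artists, knew)}
```

For the real diagram the same calculation would run over a 92-by-92 matrix.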


Designers Hsien-yin Ingrid Chou and Sabine Dowek from the MoMA Department of Graphic Design received Ingram and Banerjee’s data visualization and iterated through 10 other variations in order to find the most appropriate form. The final diagram appears to have been designed in Adobe Illustrator and was inspired by a hand-drawn chart from 1936, illustrating the development of modern art, by MoMA’s first director, Alfred H. Barr, Jr. As information designers, Chou and Dowek chose to limit the nodes to two possibilities: beige or orange. The diameter of both node types was the same. Beige nodes indicated that the artist had fewer than 24 connections, whereas orange nodes indicated more than 24 connections. The number 24 became the cap on the amount of information represented by each node. Although I appreciate the simplicity of this approach, I wish the Connections page would indicate the exact number of connections for the selected artist (rather than requiring one to manually count all the names presented in the network).
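The designers’ binary encoding reduces to a single threshold test. A sketch (the degree counts are invented, and I am assuming that exactly 24 falls on the beige side, which the description leaves ambiguous):

```python
# Nodes with more than 24 connections are orange; the rest are beige
THRESHOLD = 24

def node_color(degree):
    return "orange" if degree > THRESHOLD else "beige"

# Hypothetical degree counts for illustration
sample_degrees = {"Artist A": 31, "Artist B": 12, "Artist C": 24}
colors = {name: node_color(d) for name, d in sample_degrees.items()}
```

This is why the exact connection count disappears from the final diagram: the encoding keeps only one bit of information per node.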

Second Story fused the aggregated data, MoMA’s design, and code, activating the information into an interactive narrative. Second Story’s portfolio website credits the team members who developed Inventing Abstraction, including Michael Godfrey, User Experience Design Director; Mike Henderson, Senior Interaction Designer; Matthew Faro, Senior Interactive Developer; Elizabeth Bourke, Producer; and Michael Neault, Content & Media Producer. From my background in web development, I can assume this website uses HTML (content), CSS (styling), and JavaScript (behavior), likely with the D3 library for data visualizations. According to the project description: “A flexible data structure gives curators the ability to update and add content, and responsive programming allows the site to be viewable across multiple devices.”

Presentations (display)

Inventing Abstraction is a data-dense digital humanities project with organized information architecture, a cohesive interface, and an intuitive user experience. No page overwhelms with the amount of information it presents. The goal is clear: to enable users to learn about abstraction by freely exploring relationships among the movement’s artists and artworks, contextualized by accompanying text and media.

The initial splash page provides the user with historical context about the exhibition and Artist Network Diagram. In order to access the meat of the website, the user must click Explore Connections, an orange button that contrasts against the black background. One can return to this page by clicking Inventing Abstraction in the top left corner.

There are two navigation bars. The primary navigation (Connections, Artists, Conversations, and Programs & Events) is situated at the top, and the secondary navigation (About the Exhibition, Network Diagram, Checklist, Publication, Music, Blog, Credits, MoMA) is located at the bottom. This initial distinction in information hierarchy guides the user’s attention towards the crux of the exhibition: the relationships among the 92 artists.

Connections is where users should spend most of their time, as reflected by the website’s user flow. The Diagram Overview text-box overlay is a wonderful way of preparing the user for the interactive diagram, though without reading more about the project’s development process it’s difficult to know why certain artists are highlighted in orange. The animated transitions between network paths make the experience more dynamic and engaging. The selected artist is accompanied by a profile with their name, lifespan, works, birthplace, places worked, and interests. Most artworks have captions; selected pieces have “About the Work” descriptions or relevant recordings by art historians. Over time, the presentation of the information as networks becomes an abstractionist artwork in its own right.

The Artists page offers the user an alternative way of viewing everyone from the network as an alphabetized list, and every name links back to the artist’s profile on the Connections page. The website offers multiple ways of experiencing the exhibition beyond the network. On the Conversations page, the user can listen to a series of audio recordings in which contemporary creatives highlight pieces from the exhibit. The Programs & Events page promotes performances at MoMA and online music on WQXR, adding another dimension to experiencing the exhibit.

The bottom navigation, though less important for the average user, nevertheless contains important information regarding the exhibition and the contributors to its research and development. I appreciate the icons embedded in the navigation: a downward arrow indicates downloadable PDF files (Network Diagram and Checklist), and a diagonal arrow indicates external links (Music on WQXR and the Blog on Tumblr).