Week 9: Wordsmithing and Gendered Language

Wordsmithing and the implicit framing of the digital humanities as masculine.

As someone who is excited by the delicate art of “word-smithing” and can ponder word choice for hours on end, I found Natalia Cecire’s “Introduction: Theory and the Virtues of Digital Humanities” most fascinating this week because it contextualized the digital humanities in a frame I am familiar with. Framing the Digital Humanities as an arm of the humanities that strangely embraces “doingness,” Cecire points to words like “‘hands-on,’ ‘getting your hands dirty,’ ‘digging,’ ‘mining,’ ‘building’—these terms offer quite a specific vision of what constitutes doing, conjuring up economic productivity (stimulus packages and infrastructure initiatives loom into view).” The kind of activeness implied by the words the digital humanities uses to describe its own work is familiar – we as a class use these terms all the time when we discuss “data mining” or “web building.” And, as Cecire explains, these terms sit comfortably within the realm of digital humanities even as other arms of the humanities take a more general, intangible approach to negotiating the borders of their quandaries. This difference is made even starker when Cecire points out that the terms used to describe the digital humanities’ “doing-ness” are obliquely male. Framed as a masculine rather than feminine form of productivity, this vocabulary seems to imply a feminization of the other arms of humanities study – when in reality the digital humanities is itself “the subdiscipline of the humanities most closely implicated in the postindustrial ‘feminization of labor.’”

This kind of gendered word-smithing used to describe the digital humanities, as explained and discussed in Natalia Cecire’s piece, reminded me of a Gendered Language class I took in the Applied Linguistics department a few quarters ago. I never knew that language could be considered in this way and was surprised by how much language was affected by gendered thinking. As someone who thought words just “meant” things, thinking about gendered language gave me an entirely new perspective on how to choose words and frame my own writing. A broad example of this can be observed in the Chinese characters “女”, generally meaning “woman,” and “子”, generally meaning “child” or “son.” By themselves, they simply gesture to the person each refers to. However, when used together, they form the character “好”, generally meaning “good.” Implicit in the construction of this word is a cultural mindset in Chinese culture that in order for a family to be “good,” it must figuratively strive for the balance and order represented in the character – which includes both the feminine and the masculine – as well as literally have both female and male children for the health of its lineage. These kinds of distinctions can also be observed in other languages, like Spanish and Russian, and indicate that greater attention to gendered language is not only fascinating but incredibly telling, as seen in Cecire’s initial observation in her piece.

  1. Cecire: http://journalofdigitalhumanities.org/1-1/introduction-theory-and-the-virtues-of-digital-humanities-by-natalia-cecire/

Week 8: 8 Rules, 4 P’s

The importance of “consumer-centricity”.

I was really interested in this week’s piece on the “Eight Golden Rules of Interface Design” by Ben Shneiderman because of its explanation of the attributes that generally make for a successful website. In these rules, Shneiderman explicitly details features like shortcuts, consistency, informative feedback, and easy recovery from errors. As someone who is new to web building and design, some of these details were a little too technical for me to totally grasp. Technical terms and concepts aside, however, I was very excited to notice Shneiderman’s repeated emphasis on pinpointing specific UI issues for the explicit purpose of making every aspect of a site cater to the user as much as possible. I read many of the DH 101 posts this week and enjoyed exploring the gorgeous, streamlined, unique websites that were shared. Perhaps most remarkable to me after exploring these sites was realizing how easily I navigated each and every one, despite my basic computer skills and lack of experience with each website. I had always wondered what kind of skills made for an enjoyable website and had imagined them as some kind of “coding superpower,” when in reality, as this week’s reading demonstrates, a website’s success ultimately depends on its creator’s attention to his or her audience. (After all, what good is a website if no one can figure out how to use it?)

The Four Principles of Marketing.

These stipulations and their overall focus on the end-user’s experience of a site struck me as an extension of the broader principles dictating marketing strategy. Called the “4 P’s of Marketing,” these principles frame a marketing strategy as built upon product, price, placement, and promotion. Thus, in order for a marketing strategy to be successful, it must make the product clear to its customer base, price the product clearly and competitively, place the product where customers can access it, and promote the product so the audience is informed of all these details. While each of these elements, when delved into, is incredibly technical, these rules, like those delineated by Shneiderman, are overwhelmingly customer-centric. A product is only as useful as it is usable by its consumer, affordable to its consumer, available to its consumer, and acknowledged and understood by its consumer. As a fellow consumer of products and services, whether a chocolate bar or a website, it is certainly comforting to know that these products and services are only as successful as they are invested in their consumer.

Shneiderman’s article: http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html

Marketing Mix: http://www.3msage.com/?p=197

Week 7: Electronic Maps, Paper Maps, and Paper Towns

Agloe, NY is a copyright trap.

In “Introduction to Web Mapping,” the author, Jim Detwiler, provides an interesting discussion of the development of online cartography since the first map was placed on the Internet. Once simple images held up with a few strings of HTML, maps are now expected to be highly interactive and specific. Most important in his overview, in my opinion, is the author’s insistence on arguing for the benefits of both paper and web-based maps. After reviewing the impressive development of online cartography, the author spends an equal amount of time on the benefits of paper maps – including their portability as a resource that doesn’t require Internet access, an extremely high level of detail that is difficult to reproduce on a screen, and their reliability as products made by certified experts rather than cartography hobbyists editing open source data. Interestingly, Detwiler also injects a helpful element of perspective in support of online cartography when he notes that European countries seem to value it more because they don’t have the same kind of open access to mapping information that Americans do. Thanks to the Internet and online cartography, individuals in those countries are able to build for themselves a network of geographic information about their own communities.

While I am relatively new to John Green and his books, Detwiler’s discussion of cartography on and off the Internet immediately reminded me of Green’s novel Paper Towns and its preoccupation with paper towns – fictional towns arbitrarily created by mapping companies to ensure that a map is copyrighted fairly and not reproduced by a rival mapping company. Arguably the most famous of these towns is Agloe, New York, the town prominently featured in Green’s novel. Strangely enough, because the town appeared on published maps, the maps’ users actually began to visit the point labeled “Agloe” – enough of them to encourage the development of a general store there. While Agloe was more a copyright trap than an innocent mistake, people actually visiting this spot on the map demonstrates the real-world ramifications an error can have, whether it comes from experts or hobbyists. Today’s reading revealed to me a problem that paper and electronic materials both suffer from: each party strives to preserve ownership of the object it works on – expert cartographers their finished product, hobbyists their ability to keep contributing to a map they have worked on – yet both products are created by individuals and are therefore susceptible to error, intentional or not. As objects providing security and guidance during travel, maps shouldn’t be error-prone, and hopefully they will become less so.

Week 6: Demystifying Networks, Untangling Wikibinges

A super-network of information.

As someone who is (still) relatively new to the Digital Humanities, I found Scott Weingart’s post “Demystifying Networks” incredibly helpful and interesting because it answered a lot of Digital Humanities questions I both knew and didn’t know I found confusing. In his post, Weingart offers a quick guide to what kinds of digital tools should and should not be used in certain projects. Emphasizing the project over the tool, Weingart urges his readership to look first to the needs of their project rather than the “coolness” of the tool they are drawn to using. In doing so, Weingart expresses a sense of reverence for both the “tool” and the “project”: they are not interchangeable puzzle pieces that can be appropriated however we might want, but unique variables that require a great amount of understanding and consideration. Extending this ideology into his explanation of network analysis, Weingart discusses “nodes” and the different kinds of connections that link them. Here, he outlines a major difficulty the Digital Humanities faces as it incorporates tools from the sciences into the humanities. While the sciences often work with largely uncomplicated connections between nodes, humanities data are rich with connection; their trademark ambiguity is largely due to the massive number of influences that can contribute to a single node. Warning his readership against using a tool that doesn’t accommodate the kind of complex networking the Digital Humanities requires, Weingart emphasizes suitability over convenience, since “given that humanistic data are often uncertain and biased to begin with, every arbitrary act of data-cutting has the potential to add further uncertainty and bias to a point where the network no longer provides meaningful results” (Weingart).

Weingart’s discussion of the meaningfulness of connections, regardless of their number or complexity, reminded me of Wikipedia and the abundance of links with which each and every entry is peppered. Because a reader can click on hyperlinked text to branch out from the initial article of inquiry, “Wikibinges” are understandably tempting: the site allows anyone with any degree of knowledge on a subject to deepen their conceptualization of it. My own late-night Wikibinge started with “pumpkin pie” and subsequently led to “Starbucks,” “Moby-Dick,” “Nathaniel Hawthorne,” and finally “allegory.” With each and every Wikipedia article a “node,” readers of one article strewn with hyperlinks are actually being presented with an entire network of nodes and the links that connect them. Wikipedia’s networks are thus a hyper-version of the “spaghetti and meatballs” tangle Weingart describes in the Digital Humanities, as Wikipedia strives to capture the same multi-facetedness with which information ambiguously influences and is influenced. While insistent curiosity can be blamed for late-night Wikibinges, perhaps we can now assign some blame to Wikipedia’s “hyper-noded” articles as well.
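The node-and-link structure described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration – the articles and links are invented (loosely following my own Wikibinge path), not real Wikipedia data – showing how each article becomes a node and each hyperlink an edge whose count measures how connected a node is.

```python
# A toy network of Wikipedia articles: each key is an article (node),
# each listed target a hyperlink (directed edge). Hypothetical data.
links = {
    "Pumpkin pie": ["Starbucks", "Thanksgiving"],
    "Starbucks": ["Moby-Dick", "Seattle"],
    "Moby-Dick": ["Nathaniel Hawthorne", "Allegory"],
    "Nathaniel Hawthorne": ["Allegory"],
}

def degree(node):
    """Total connectedness of an article: links out plus links in."""
    out_deg = len(links.get(node, []))
    in_deg = sum(node in targets for targets in links.values())
    return out_deg + in_deg

# "Allegory" links to nothing we recorded but is linked to twice.
print(degree("Allegory"))   # 2
print(degree("Moby-Dick"))  # 3 (two out-links, one in-link)
```

Even in this tiny sketch, the tangle grows quickly: every new article adds not just one node but a whole fan of edges, which is exactly why a single hyperlinked page presents the reader with a network rather than a list.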

Weingart’s article: http://www.scottbot.net/HIAL/?p=6279


Week 5: Form and, not VS, Content.

The Space Pen

Andrew Smith’s commentary on William Turkel’s work with the Old Bailey project was particularly eye-catching to me this week. Comparing this study with another recent study on speech analysis from the Civil War, Smith humorously points out not only the speech-analysis project’s lack of foresight, but a broader problem the Digital Humanities faces: it seems to feel increasing pressure to be quantitative and technical in order to substantiate itself. The Civil War speeches, when analyzed, were concluded to be incredibly preoccupied with notions of slavery – a fact that isn’t entirely surprising. Turkel’s analysis, on the other hand, found an unusual rise in guilty pleas in British courtrooms, a finding that complemented changes in the way criminal trials were structured and in the kinds of punishments and consequences a person on trial could expect depending on their crime and sentence. This compare-and-contrast, demonstrating “right” and “wrong” ways in which the digital humanities structures and approaches questions in the humanities, reveals the importance of the question and issue in the first place, and not just the elaborateness of the tool being used to study the “artifact.”

This notion of a solution inefficiently implemented because of a fixation on “form” – the way in which an artifact is studied – over “content” – the significance of the artifact itself – reminded me of the popular anecdote/urban legend of the American creation of the space pen. Noticing that the different conditions in space affected astronauts’ ability to write with a regular pen, US manufacturers allegedly poured millions of dollars into developing a pen that could compensate for the pressure differences and be used in environments like space. This pen was then manufactured as the “space pen,” a pen with a pressurized ink cartridge that allowed the user to write in conditions where gravity is inconsistent with the way it is on Earth. Russian cosmonauts, on the other hand, solved the issue with a pencil. Just as the researchers on the Civil War documents should have considered the purpose of their project before embarking on it, the space pen manufacturers should have considered the purpose of theirs – making both instances humorous and didactic. While both projects admittedly developed incredibly fancy, remarkable pieces of engineering and technology, they come across as ridiculous because of their failure to consider the needs they were expected to fill, erroneously focusing their efforts elsewhere.

Space Pen: http://www.snopes.com/business/genius/spacepen.asp

Smith on the Old Bailey Project: http://pastspeaks.com/2011/08/21/the-promise-of-digital-humanities/

Week 4: When Excel Can’t Excel

A bug in 2007 version of Excel.

In the online guide Data + Design, various authors collaborated to discuss the complexity of comprehending and organizing various forms of data. Alistair Croll’s piece on data aggregation was particularly interesting to me because it was the first document I had ever seen that grouped and explained the different ways data can be combined and explicitly delineated the logic behind the rules of those combinations. Particularly striking was the piece’s definition of “summable multiseries data.” A group of data connected by their representation of a larger statistic, these “subgroups” are often more fickle to identify and arrange than they seem. Using the piece’s example of coffee consumption: a statistic on how many cups were served to men and a statistic on how many regular cups of coffee were sold cannot be compared, because their basic subgroups (for a visual aid, think of subgroups as “graph axes”) are not the same – one breaks down consumption by gender, the other by the kind of coffee purchased. Even further, as Croll demonstrates, these figures cannot be leveraged against each other to “back into” the statistics of another subgroup. For example, just because you know 36.7% of cups were sold to women DOES NOT mean that 36.7% of regular cups of coffee were sold to women – those two figures do not correlate with each other and thus do not indicate anything about one another. Data and context, then, are equally important in statistics.
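The impossibility of “backing into” one subgroup’s statistics from another’s can be shown concretely. The sales records below are invented for illustration (they are not Croll’s figures): the same 100 cups sliced by gender give one percentage, and sliced by coffee type give a completely different one, because the two breakdowns cut the total along different axes.

```python
# Hypothetical per-cup sales records: (gender, coffee_type).
# Counts are invented purely to illustrate the logic.
sales = (
    [("woman", "regular")] * 10 + [("woman", "latte")] * 30 +
    [("man", "regular")] * 40 + [("man", "latte")] * 20
)

total_cups = len(sales)                                    # 100
cups_to_women = sum(g == "woman" for g, _ in sales)        # 40
regular_cups = [g for g, t in sales if t == "regular"]     # 50 cups
regular_to_women = sum(g == "woman" for g in regular_cups) # 10

pct_women = cups_to_women / total_cups                 # 0.40
pct_regular_to_women = regular_to_women / len(regular_cups)  # 0.20

# Knowing 40% of ALL cups went to women tells you nothing about
# what share of REGULAR cups did: here it is only 20%.
print(pct_women, pct_regular_to_women)
```

The mismatch (40% vs. 20%) is exactly the trap Croll warns about: percentages computed over different subgroup “axes” are not interchangeable.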

However, as the article points out, subgroups and categories are strictly human constructs. While working on a set of important Excel data, I once made the mistake of selecting every piece of data to generate a graph, instead of the more specific set I intended to work with. As a result, I got an unintelligible tangle of lines instead of the orderly, legible graph I was expecting, similar to the image above. While I immediately registered the graph as incorrect, Excel never once issued an error signal. Assuming the graph was an accurate amalgamation of the data it was fed, Excel couldn’t tell that the data I selected did not make sense together and proudly generated the tangled lines before me – slapping one line charting evaluation scores over time on top of another plotting satisfaction per class. While Excel is very good at processing data, human logic is obviously a whole other ball game it is far from winning.

Week 3: Netflix and Jeopardy!

The article on Netflix was particularly interesting because it illustrated the extreme level of sophistication and specificity Netflix built into its “genre-ization” algorithm to create specialized profiles for each Netflix user. Allowing for a huge variety of combinations of periods, genres, and even actors, Netflix attempts to precisely categorize every film ever made, even if a category contains just one film. With a set of generated genres attributed to each viewer, Netflix not only has the capacity to describe what each viewer watches, but ultimately to predict it. A case in point is Netflix’s educated purchase of “House of Cards,” a show that lined up perfectly with Netflix’s average profiles. However, as the article points out, “the data can’t tell them how to make a TV show, but it can tell them what they should be making.” The automation of something as emotional and complex as movie-making and movie-watching might seem to be reduced, perhaps even mocked, by Netflix’s algorithm. Yet the opposite is true. “House of Cards,” and Netflix for that matter, are not successful because the human capacity to enjoy films can be trivialized to an algorithm, but because the preferences related to enjoyment can be more accurately communicated via metadata. The creation of media in response to these results thus remains an essential, human product.
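The combinatorial logic behind those micro-genres is easy to sketch. The word lists below are invented for illustration – they are not Netflix’s actual vocabulary or method – but they show how combining even a few descriptors for period, genre, and theme multiplies into many hyper-specific categories.

```python
from itertools import product

# Illustrative descriptor lists (hypothetical, not Netflix's own).
adjectives = ["Gritty", "Emotional", "Critically-acclaimed"]
genres = ["Dramas", "Thrillers"]
qualifiers = ["from the 1980s", "with a Strong Female Lead"]

# Every combination of one word from each list becomes a micro-genre.
micro_genres = [
    f"{adj} {genre} {qual}"
    for adj, genre, qual in product(adjectives, genres, qualifiers)
]

print(len(micro_genres))  # 3 * 2 * 2 = 12 categories
print(micro_genres[0])    # "Gritty Dramas from the 1980s"
```

Scale the lists up to dozens of entries each and the combinations quickly reach tens of thousands – which is how a category can plausibly end up containing just one film.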

Watson: i.kinja-img.com/gawker-media/image/upload/s–Gbchunvr–/18mhcmpj5aul1png.png

As an avid fan of “Jeopardy!”, this article reminded me of IBM’s Watson. A supercomputer put up against “Jeopardy!’s” greatest champions, Watson easily won the contest with its huge data storage and processing capacities, as well as its precise “buzzing” within a millisecond to be the first to “question” the answer given. Its speed and accuracy improved with every clue, as Watson also had a “learning” algorithm to remember the combinations of answers that proved effective throughout the contest. Remarkably obvious, however, was Watson’s inability to follow the patterns of human thought and speech. Things like puns and jokes in clues went unregistered by Watson, which was unable to perceive the answers to these trickier clues. Watson’s capacities were created with the medical field in mind, even though its stunning calculation abilities are hotly contested and marginalized by professionals in the medical community who worry about the economic and moral ramifications of automating medical practice (“The Robot Will See You Now”). Indeed, Watson is far from capable of understanding human gray areas like fear and morals well enough to present proper diagnoses. Perhaps, however, medicine has nothing to fear – just as Netflix needs both viewers and filmmakers, surely medicine requires both human patients and doctors.

Netflix: www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/

Watson: www.theatlantic.com/magazine/archive/2013/03/the-robot-will-see-you-now/309216/


Week 2: Imagined Communities

Spread of ethnic backgrounds across the US.

For me, Julia Gaffield’s “Haiti’s Declaration of Independence: Digging for Lost Documents in the Archives of the Atlantic World” was a particularly fascinating and powerful read this week. Gaffield’s account documents the discovery, or rather “re-discovery,” of Haiti’s Declaration of Independence. Acknowledging the Declaration’s overwhelming significance to the Haitian people, Gaffield understands the document’s discovery as not only historically significant but culturally important, advancing “the obvious hunger for an alternative narrative of Haiti, one that emphasized the global significance of its achievements during and after the revolution.” In recognizing the document’s significance to Haitians across time – from 1804 to the present day – Gaffield’s conscientious understanding of Haitians becomes more than morally significant.

Overwhelmingly key for Gaffield was her understanding of Haiti’s Declaration of Independence as a cultural product of the Haitian people, not just the Haitian country. Gaffield aptly conjectures that the Haitian Declaration of Independence, created at the dawn of globalization, like many other documents relating to countries involved in Atlantic trade, would be found scattered among those involved countries. “The movement of people, goods, and ideas created an integrated Atlantic community,” delineating a community bound not by border lines and nationalities but by networks of tangible and intangible goods. As a result, Gaffield’s account perhaps more importantly signals the significance of acknowledging communities existing beyond geographic classification.

Gaffield’s understanding of community beyond geography immediately reminded me of Benedict Anderson’s Imagined Communities, which argues for the “imagined community”: individuals identifying with the same national identity regardless of geography. Originally a commentary on nationalism after globalization, the concept of the “imagined community” evolved to facilitate broader discussions of communal solidarity – here, Haiti’s part in Atlantic trade and those countries’ part in Haiti’s independence. The concept is literally illustrated in the map of the US linked at the start of this post. The colors marking populations by ethnic identification illustrate a spread of ethnic identification that has no correspondence to the state borderlines strewn across the map. While individuals of the same ethnic identification are connected, they are connected not geographically, via the states they live in, but through their own identification with an ethnic background linking them to a “community” of others throughout the US who share it. The increasingly apparent porousness of these borders certainly doesn’t diminish the significance of communities within borderlines and geography; but, as this post hopefully highlights, communities without physical distinction deserve just as much regard.