Week 9: VR in a Classroom Setting

I found that reading Cecire's article before diving into "Virtual Reality for Humanities Scholarship" by Lisa Snyder gave me a different perspective on the challenges VR technology faces in the near future. Having taken Snyder's 150 class in 3D modeling, I particularly enjoyed exploring the variety of mediums through which this new digital technology lets both scholars and students research and interact with humanities subjects. For example, our class was able to take a virtual field trip in Second Life led by Professor Gill from ASU, who spoke to us through the program about the historical facts and the decision-making behind rebuilding the environment of the heritage site. She gave us ample time to ask her questions and do some exploring of our own, since bits and pieces of information were also embedded in the plantation models themselves. However, I feel that for a good half of the tour, many of the students were sidetracked by the novelty of the virtual trip, by difficulties navigating, or by the many in-game mechanics that Second Life provides.

Although this was a great example of a concrete way of getting students to learn about a piece of history and space, I couldn't help but recall Cecire's points about "formulating a theory out of lived experience" and "how to communicate tacit knowledge." This experience was probably a better example of doing than of saying, as the act of taking part in the trip overshadowed much of the content, which was the point in the context of our digital humanities class. Yet if the professor who led this project had set guidelines for evaluating and answering questions through exploration of the site, students might have learned much more about the historical context of the model.

I also found Scheinfeldt's post about "niceness" in digital humanities very revealing about the relationship between method and theory. The debate over which tool to use when conducting an academic project with virtual technology is much easier to resolve than the controversies surrounding the project's topic. Moreover, Snyder mentions a "5:1 ratio for time spent on research versus computer modeling." However, I believe that many of these projects end up underappreciated by students, depending on the backgrounds they come from and their interest in the topic of investigation. Cecire's article raises the issue that someone with a background of unlimited childhood computer access may take away, or even focus on, a different portion of a project than someone of a lower socio-economic status, who may get lost in the methodology through which the theory is presented. These limitations should be further explored while embracing the inherent difference between saying and doing.

 

Natalia Cecire’s “Introduction: Theory and the Virtues of Digital Humanities”

Lisa Snyder’s “Virtual Reality for Humanities Scholarship”

ASU’s virtual campus: https://alysongill.wordpress.com/2014/05/26/2008-asu-virtual-campus-sl/

Week 8: User Experience and Interface Development


I learned a lot from reviewing the presentation by Jesse James Garrett about the elements of user experience. He relays the information in a concise, structured manner that makes it easy to digest. Furthermore, the way he unveils each layer and places them on a spectrum from abstract to concrete stuck out to me, because I had never considered these elements connected in that fashion. However, the concept of dividing each component into 'product as technology' and 'product as information' is not new to me, since I've learned, at least from a computer science point of view, that websites are composed of a front-end interface that controls the look and an equally important back-end that controls the function. Yet this presentation as a whole successfully illustrates the amount of detail that goes on behind what the general public sees.

Afterwards, I used Garrett's presentation as well as Shneiderman's "Eight Golden Rules of Interface Design" to analyze Bissell and Loyer's projects. To reiterate, Garrett's definition of user experience design is "the design of anything with human experience as an explicit outcome and human engagement as an explicit goal." Although 'The Knotted Line' offered a very interactive experience for the user, which accomplishes the first half of the definition, I quickly lost interest in the interface because of how difficult it was to navigate the line and get to the informational dots hidden behind it. What actually kept my engagement from wavering was reading other people's answers to the changing questions along the bottom of the screen. On the other hand, 'Freedom's Ring' was a beautiful compilation of Martin Luther King Jr.'s iconic "I Have a Dream" speech. It not only dissected the speech into comprehensive snippets that linked to outside sources, but also displayed background images that flowed across the screen and brought the speech to life. In this sense, the site required minimal user interaction, but it did a better job of keeping me engaged for a longer period of time than the previous project.

Personally, this past week marked the subtle end of my research work for my professor, and Shneiderman's article allowed me to reflect on what I've learned about asking the right questions: how to display information for an optimal user experience, and how to analyze a program's interface so that it facilitates both. I feel like there are definitely more things to consider beyond the eight rules, such as graphical implications and the placement of certain functions, knowing who the audience is, and clearly conveying the intent of the project, which Garrett briefly describes as the 'product's personality.' After working on the VSim project for over a year, I can proudly say that the prototype manages to accomplish many of those elements. And even though I wasn't part of the decision-making process in designing the interface, the program allowed me to compile a large amount of data in an efficient way, albeit with an occasional bug or two, while presenting it using a relatively new method. Being able to critically analyze the software from the perspective of a user, while having insight into why certain decisions were made, gave me a better appreciation for the amount of effort and consideration it takes to develop a piece of software.

Week 7: Maps and Apps

I thoroughly enjoyed going through the "Anatomy of a Web Map" presentation by Alan McConchie and Beth Schechter. It covered many major and informative points without feeling like a long read. I've now learned the appropriate way to describe web maps, which load as raster or vector tiles, and the different components that make up a cohesive map. It's eye-opening to think of how far we've come from the MapQuest days, and of the possibilities when one can layer different pieces of information onto certain locations and have them load quickly for the user. There are plenty of apps now that can track the real-time locations of multiple objects, making our lives more efficient than ever.
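To make the tile idea concrete, here is a minimal sketch in Python of the standard "slippy map" arithmetic a web map client uses to decide which tile to request for a given location and zoom level. The coordinates are a made-up example, and the OpenStreetMap URL template at the end is just one common provider's pattern, not something specific to the presentation.

```python
import math

def lat_lon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
    """Convert a latitude/longitude pair to slippy-map tile indices at a zoom level.

    At zoom z the world is divided into 2^z by 2^z tiles (Web Mercator); this
    returns the (x, y) indices of the tile that contains the given point.
    """
    n = 2 ** zoom
    lat_rad = math.radians(lat_deg)
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: which tile covers downtown Los Angeles at zoom level 12?
x, y = lat_lon_to_tile(34.05, -118.25, 12)
print(f"https://tile.openstreetmap.org/12/{x}/{y}.png")  # the image a web map would fetch
```

Every pan or zoom just swaps in a different handful of these small images, which is why the maps feel so much faster than the full-page reloads of the MapQuest era.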


One such app is Glympse, which I taught my dad how to use because he sometimes worries about my drive back and forth between LA and my hometown. Its interface allows you to send a "glimpse" of where you're headed, your location as you travel toward your destination, and (sometimes unfortunately) the speed in mph you're traveling. Although apps are a different medium for maps, it's still amazing that we've advanced this far, and it's all made possible by new interactive web-mapping technology.

 

Furthermore, the presentation briefly mentioned other open-source tools that illustrate how one can manipulate and feed data into a map to support an argument. From Jim Detwiler's article "Introduction to Web Mapping," I learned about the advantages and disadvantages of certain types of web-mapping tools. In particular, public API providers already supply the base layers, so the developer only has to add more information on top. The downside is that if the base layers are not to the developer's liking, they can't be changed. They are also not very compatible with complicated datasets that involve a large amount of polygon data.
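As a rough sketch of that workflow (using Python's folium library and a tiny invented polygon, neither of which the article specifically names), the developer takes the provider's base tiles as given and only layers new information on top:

```python
import folium

# Provider-supplied base layer: convenient, but its cartography is fixed,
# which is the trade-off Detwiler describes.
m = folium.Map(location=[34.05, -118.25], zoom_start=11, tiles="OpenStreetMap")

# The developer's own information goes on top of the base tiles.
folium.Marker([34.0689, -118.4452], popup="UCLA").add_to(m)

# A tiny, made-up polygon overlay; large polygon datasets are where this
# approach starts to struggle.
neighborhood = {
    "type": "Feature",
    "properties": {"name": "Example neighborhood"},
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-118.30, 34.00], [-118.20, 34.00],
                         [-118.20, 34.10], [-118.30, 34.10],
                         [-118.30, 34.00]]],
    },
}
folium.GeoJson(neighborhood, name="Neighborhoods").add_to(m)
folium.LayerControl().add_to(m)

m.save("web_map.html")  # open in a browser to pan and zoom the tiled map
```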


The point about polygon data reminded me of a map my urban planning professor had shown our class, which used the Google Maps API and the 2010 census to map every individual's race and ethnicity. The visualization is basically an American snapshot of the racial diversity of every neighborhood in the country, composed of 308,745,538 color-coded dots. From a zoomed-out perspective, one can see how clustered certain colors are, even though there are sporadic splashes of varied dots. Upon closer inspection of the interactive map, however, one can see in more detail the distribution of specific races in the context of different cities.

https://www.glympse.com/

http://demographics.coopercenter.org/DotMap/index.html

Week 6: Digital Humanities Network

Stanford analysis of Digital Humanities definitions using MALLET and Gephi

Scott Weingart's article "Demystifying Networks" was both informative and eye-opening. It does seem like some digital humanists have a tendency to try to analyze everything they come into contact with. Especially with the number of resources we're discovering on the internet, it becomes difficult to restrain oneself from putting too much into one tool or combining the wrong sets of data with the wrong type of tool. He talks about the differences between multimodal and single-mode networks and the complications of fitting a bimodal dataset into software that is meant to analyze single-mode networks.
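A small sketch of the bimodal problem Weingart describes, using Python's networkx library and made-up author/text nodes: a two-mode network (authors connected to the texts they wrote) has to be flattened into a single-mode network before tools that assume one node type can say anything sensible about it, and the flattening is exactly where information gets lost.

```python
import networkx as nx
from networkx.algorithms import bipartite

# A bimodal (two-mode) network: one set of nodes for authors, one for texts.
B = nx.Graph()
authors = ["Author A", "Author B", "Author C"]   # invented examples
texts = ["Essay 1", "Essay 2", "Essay 3"]
B.add_nodes_from(authors, bipartite=0)
B.add_nodes_from(texts, bipartite=1)
B.add_edges_from([
    ("Author A", "Essay 1"), ("Author B", "Essay 1"),
    ("Author B", "Essay 2"), ("Author C", "Essay 3"),
])

# Project onto the author nodes: two authors are linked if they share a text.
# The texts disappear from the resulting single-mode network.
G = bipartite.weighted_projected_graph(B, authors)
print(G.edges(data=True))  # one weighted edge: A and B co-wrote Essay 1
```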

What ties back to Drucker's article from last week is the idea that "humanistic data are almost by definition uncertain, open to interpretation, flexible, and not easily definable." Furthermore, Weingart explains that node types are concrete, and it gets difficult when a digital humanist tries to mold data into a shape it's not meant to be in. It's also interesting to note that the context in which someone views the data can easily distort what that data means once it is placed in a network.

I found a project online from Stanford that uses the MALLET topic modeling toolkit to analyze a small corpus of digital humanities definitions written by members of the DH community. The definitions collected came from faculty, graduate students, and staff in academia. The author also used Gephi to run two separate interpretations of MALLET's output. It was interesting to look at someone else's process of analyzing text found online. Even though he didn't address the type of network and how that shaped his methodology, it made me more aware of its potential influence and of how the data can be looked at and presented differently.
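I can't reproduce the Stanford author's MALLET workflow exactly, but the general shape of topic modeling looks something like this sketch, which substitutes scikit-learn's LDA implementation for MALLET and uses a few invented definition snippets as the corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-ins for the collected "what is digital humanities?" definitions.
definitions = [
    "digital humanities applies computational tools to humanities questions",
    "digital humanities is the study of texts with digital methods",
    "building tools archives and networks for humanities scholarship",
    "teaching students to analyze culture with digital technology",
]

# Turn the corpus into a word-count matrix, dropping common stopwords.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(definitions)

# Fit a small topic model (MALLET plays this role in the original project).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words for each inferred topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```

The topic-word lists that come out of a step like this are what the Stanford author then loaded into Gephi to visualize as a network.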

Week 5: Infographics and Data Visualizations

“Infographics of infographics of infographics…”

Like many other new digital humanities minors, I struggled at first to give other students a precise definition of what the subject entails. Yet in Smith's blog post, there seems to be one general consensus about the field's greatest potential: the analysis of text. Furthermore, I realized that I had often used this as an example of using digital tools to investigate and present data in a different way for clarity and research purposes.

Because this is a relatively new field of academic scholarship, there are bound to be projects that utilize these digital tools to analyze evidential texts only to prove the obvious. I believe that some critics are too harsh and should give the field room to grow and time to flesh out creative ways to approach, mine, and present data. As a proponent of a more humanistic approach to graphical display, Johanna Drucker argues for approaching data differently, as capta, which is "taken" rather than "given." Capta thus allows for a more interpretive approach to making visualizations, whereas data that is given is recorded and observed in a more fixed manner.

In the article, the differences between figures 16 and 17 lie not only in the purposes of the charts, but also in their level of complexity and in what they set out to measure. Drucker illustrates that there is a lot more that information visualization can display once one takes a more humanistic approach to analyzing the capta. It is important to address multiple questions when attacking an issue and to analyze the data along all sorts of dimensions, such as time and space, as well as other categories such as gender and age.

These articles reminded me of another popular type of visualization, the infographic, which is making its rounds on social media and being deployed as a marketing tool. In this blog post, Vincey adamantly criticizes the public's tendency to use "infographic" and "data visualization" interchangeably. The blog makes some interesting points, especially that some infographics pile on so much information, emphasizing qualitative over quantitative data, that their message gets lost in the chaos. Although I agree with some of his opinions, the bottom half of the post seems to argue for more simplicity and clarity when compiling data. This alludes to the idea that there is a fuzzy boundary between too much, too little, and just the right amount of qualitative information to display in a data visualization.

Works Cited: http://insights.qunb.com/why-we-hate-infographics-and-why-you-should/

http://pastspeaks.com/2011/08/21/the-promise-of-digital-humanities/

http://digitalhumanities.org/dhq/vol/5/1/000091/000091.html

Week 3: Movie and Music Genre Generators

This was a very comprehensive article, and it led me to a couple of other generators after reading it. Near the beginning of his piece, Madrigal makes an interesting point about tracking the URL and the incremental values at the end of the web address. I've also been able to navigate between pages using that same line of logic. It strikes me that some databases reveal a lot to the public, while others are much more locked down, depending on how the system was created in the first place. Once you discover the thought process behind the way things are categorized, you can easily find what you're looking for in a general sense.
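As a toy illustration of that trick (with a hypothetical URL pattern, since I'm not reproducing Madrigal's actual scrape), incrementing the numeric value at the end of an address is easy to automate:

```python
import requests

# Hypothetical pattern: pages are numbered sequentially at the end of the URL.
BASE_URL = "https://example.com/genres/"

for page_id in range(1, 21):
    response = requests.get(f"{BASE_URL}{page_id}", timeout=10)
    if response.status_code == 200:
        print(f"Page {page_id} exists ({len(response.text)} bytes)")
    else:
        print(f"Page {page_id} returned {response.status_code}")
```

How much a loop like this reveals depends entirely on whether the site exposes its records behind guessable, sequential addresses in the first place.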

After playing with the generator for a few minutes, I started to wonder whether any director will rise to the challenge and actually create a movie based on the results of this generator. Maybe that way we'll be able to see more original movies. I've always believed in the notion that people can do great things once they're given some limitations. It's an intriguing thought that something original can be made from unoriginal words, descriptions, and genres produced by algorithms. It truly showcases the wide range of possibilities that any given combination can produce.

I also love this quote from the article: "It's where the human intelligence of the taggers gets combined with the machine intelligence of the algorithms. There's something in the Netflix personalized genres that I think we can tell is not fully human, but is revealing in a way that humans alone might not be." In a very digital humanities sense, this project produced many eye-opening graphs that give the public an inside look at what kinds of things human beings prefer, just by analyzing and presenting the data a different way. Instead of being recommended different genres and searching through them, we are now able to generate our own, and the results that pop up say a lot about our diverse preferences and about the creativity of the past directors who have shaped the movie industry.

Because of how many times Spotify was mentioned in class, the first thing I did after reading the article was to google 'Spotify-based music genre generator.' What I found was a playlist generator site linked to Spotify that lets you search for playlists made by other users and categorized by either mood or genre. It's unfortunate that you can't search by both simultaneously, but when creating a playlist you can tag descriptions under both types of categories. Another site I encountered was a more minimalist music genre generator that operated on a similar idea to the Netflix one, in that it combined a couple of descriptive words from a database to create new music genres. Lastly, I found a site that lets you generate your own generators. Even though there was a user-created movie genre generator, it only allowed you to mash together two random genres, and I'm willing to bet that the database it pulled genres from is a lot less detailed than the one Yellin created at Netflix.
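The minimalist generator works roughly the way this little sketch does (the word lists are my own invented stand-ins for its database): pull a couple of descriptors at random and glue them onto a genre.

```python
import random

# Invented stand-ins for the descriptor and genre databases.
MOODS = ["Gritty", "Dreamy", "Feel-Good", "Ominous"]
QUALIFIERS = ["Lo-Fi", "Orchestral", "Underground", "Synth-Driven"]
GENRES = ["Folk", "Hip-Hop", "Jazz", "Post-Rock"]

def generate_genre() -> str:
    """Combine one word from each list into a new 'micro-genre' label."""
    return f"{random.choice(MOODS)} {random.choice(QUALIFIERS)} {random.choice(GENRES)}"

for _ in range(5):
    print(generate_genre())  # e.g. "Dreamy Underground Jazz"
```

The interesting part of the Netflix version isn't this recombination step, which is trivial, but the enormous human-tagged database sitting underneath it.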

Sites Used in Order of Appearance:

http://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/6/

http://playlists.net/

http://jbdowse.com/genres

http://www.generatorland.com/

Week 2: Multipurpose Metadata

A data visualization of the movement patterns of 100,000 players in 10,000 games.

The National Information Standards Organization points out that "Metadata is key to ensuring that resources will survive and continue to be accessible into the future" (1). Throughout the article, the three main types of metadata discussed seem to focus on how to properly document a resource's identity, origin, and other information needed to preserve or archive it. However, what I found in Gilliland's article that felt more relevant to the general public's experience of metadata was that multiple versions of a digital object can be reshaped to fit any user's need or point of view. Thus, identifying one form of metadata can become a very complicated path of tracing one information object to the next. It really fascinates me that such a complex system exists, one in which we can use metadata to categorize any information object at any level of aggregation, no matter how different the information object becomes over its lifetime.

As a gaming enthusiast, I was drawn to this particular article because it featured a data visualization that captured the movements of 100,000 players and mapped their patterns across 10,000 games. The video itself was built from metadata contained in the recordings of 10,000 games hosted by another site. As you scroll to the bottom of the page, the article presents another set of videos that show the movements of different roles in the game, along with GIFs on the side that portray examples of those roles. Upon inspecting the elements of the site, I was able to find the information objects' metadata in the form of the URLs of the GIFs and other images used in the article. This relatively new way of interacting with metadata raised a few questions that were addressed in Gilliland's article.

What I imagined when I read this article was a series of nesting dolls, in which the article is composed of multiple digital objects that each have metadata of their own. However, when they were all aggregated into this one article through hyperlinks and embedding, it became another digital archive with its own identifiable metadata. This relates to another point brought up by Gilliland: along with the development of the Internet and its networked digital information systems come many issues that must be addressed with new types of metadata. In the brief section about multiversioning, she talks about the need for metadata that can differentiate what is qualitatively different between versions of digitized objects, beyond descriptive metadata. For example, a GIF created from either animated or static images would require a different form of metadata to account for its variations on the original information objects. This also leads to further questions regarding legal issues and intellectual property rights, which would warrant yet another form of metadata to sift through the multiple layers of an information system and identify the original creators of its contents.
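To make the nesting-doll picture concrete, here is a small sketch in Python (the field names, the GIF's details, and its URL are all invented for illustration) of an article object that embeds other digital objects, each carrying its own descriptive metadata plus a note about which version of a source it derives from:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DigitalObject:
    """An information object carrying its own descriptive and versioning metadata."""
    title: str
    creator: str
    source_url: str
    media_format: str
    derived_from: Optional[str] = None  # which original or version it was made from
    components: List["DigitalObject"] = field(default_factory=list)  # nested objects

# Invented example: an embedded GIF derived from game recordings hosted elsewhere.
gif = DigitalObject(
    title="Jungler movement pattern",
    creator="(unknown uploader)",
    source_url="https://example.com/media/jungler.gif",
    media_format="image/gif",
    derived_from="recordings of 10,000 games hosted on a third-party site",
)

# The article aggregates that object and becomes an archive with metadata of its own.
article = DigitalObject(
    title="Watch 10,000 League of Legends Games in 30 Seconds",
    creator="The New York Times",
    source_url="http://www.nytimes.com/interactive/2014/10/10/technology/league-of-legends-graphic.html",
    media_format="text/html",
    components=[gif],
)

# Metadata exists at every level of aggregation, from the article down to each GIF.
print(article.title, "->", article.components[0].title, "->", article.components[0].derived_from)
```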

Works Cited:

National Information Standards Organization, “What is Metadata?” (Bethesda, MD: NISO Press, 2004)

Anne Gilliland, “Setting the Stage,” from Murtha Baca, ed., Introduction to Metadata (Los Angeles: Getty, 2008)

Tom Giratikanon, "Watch 10,000 League of Legends Games in 30 Seconds" (New York Times, 2014). Available at: http://www.nytimes.com/interactive/2014/10/10/technology/league-of-legends-graphic.html?_r=0 [Accessed 13 Oct. 2014].