Week 9: Rethinking Virtual “Reality”

According to Favro, the interpretation of history "informs subsequent development." In other words, the perspectives and knowledge one takes away from an experience affect everything that follows, whether that experience is later taught or simply shapes one's beliefs. The same can be said of the virtual simulation experiences discussed in this week's articles. These simulations are made by "researchers experimenting with new technologies" (Favro), and the knowledge the experience produces is shaped both by the researchers' initial understanding of what they are portraying and by how the information and experience come across to the user. This reminds me of previous weeks' readings about data visualizations, and how much room for error there is in how users interpret what they see, especially when they are left room to assume. This miscommunication between visualization and user is a recurring issue in digital humanities work. Favro, discussing her Roman historical site modeling project, mentions some ways to prevent it, such as "expansive metadata" explaining the context, and "surrounding environmental context" to give users a frame of reference.

There are still problems, however. Favro says that users become too focused on the "knowledge representations" that make up the visualization and lose the content. I think this happens because there is too much emphasis on the visualization being an accurate "simulation." Users assume and want "hyper-realistic simulacra," then lose interest when the reality of the program's technical limitations gets in the way. Researchers could avoid this by shifting their focus to what the experience's goals actually are. The solution to sub-par "reality" visuals is to approach them differently, emphasizing the experience and the content rather than the visuals. A slightly unrealistic visualization that claims to be "virtual reality" misleads users: they either come away with a false understanding of what reality is, or get frustrated because the program can't compare to reality.

Instead, digital humanities researchers should think of new ways to use these programs, making the execution a personalized decision, just like the initial step of choosing a technology to work with. For example, more and more augmented-reality applications are appearing. Augmented reality is a way to combine the real with the virtual, yet many companies still treat the virtual as an attempt to simulate the real as closely as possible. There is a reason it is virtual, and this new medium should come with new ways of approaching it, instead of "simulating" something it is not. There are so many possibilities that could expand the content of the experience precisely because it uses a virtual environment, if only the concept of "reality" were rethought. Instead of trying to be realistic, take advantage of being unrealistic and make it work for the program. Looking "unrealistic" can be an advantage just as well, and can work better for users, because it gets them to focus on the content instead of on how "realistic" the experience looks.


Week 8: Responsive Interaction

While reading about design in terms of user experience rather than visuals, I thought about Google's "Material Design" spec, for which Google won a 2014 UX Award. It was recognized for its ability to "add clarity to user-initiated events by creating strong visual connections from user input." The spec is about establishing a visual language, new possibilities in mobile design, and "meaningful motion" – that is, design whose purpose is to make the user's experience more intuitive and pleasant rather than gimmicky. There is a section specifically about design responding when the user interacts with it, which strikes me as an interesting cross between experience design and visual design. I kept thinking about Shneiderman's "Golden Rules" of designing with your users in mind. Google's spec is essentially a guide to their language and the logic behind it. It explains why their designs react and interact the way they do, and it is also interesting because many of the designs are meant to go unnoticed by users, so seeing them laid out explicitly shows just how smart they are.

The "Responsive Interaction" section is a guide to consistency in actions, Shneiderman's first rule. It combines with two other rules – offering feedback for actions, and making users the initiators of actions – to become a successful guide to achieving the "meaningful motion" Google describes. Consistency matters because users need to know what their actions are doing and how their content is organized. Buttons should react the same way when pressed, transitions should be consistent and subtle, hover states should indicate active content the same way, and windows (browsers and pop-ups) should be layered according to a system. If something is suddenly different, the user will expect a different action to occur, and if the design gives no reaction, the user will think their action wasn't processed. This is why feedback on actions is important, and why designing it needs to be part of the UX process. It can also make using your website enjoyable and encourage users to click on more things, which leads them to discover more of your content! These reactions are part of the interaction design process, but they also depend heavily on their visual design. A UX designer can decide that a window needs a shadow, for example, but the look of that shadow (its subtlety, gradient, and behavior) is just as important to the user's experience of it.
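To make the "consistency plus feedback" idea concrete, here is a minimal sketch of my own (not Google's actual implementation) of attaching one shared press-feedback behavior to every marked element, so each one responds to user input the same way:

```typescript
// Minimal sketch of consistent, user-initiated press feedback,
// in the spirit of Material's "responsive interaction."
// Every element marked with data-feedback gets the same response.
function attachPressFeedback(root: Document): void {
  root.querySelectorAll<HTMLElement>("[data-feedback]").forEach((el) => {
    el.addEventListener("pointerdown", (event: PointerEvent) => {
      const rect = el.getBoundingClientRect();
      const ripple = document.createElement("span");
      // Originate the feedback at the touch point itself, creating
      // a visual connection between the input and the response.
      ripple.style.cssText =
        `position:absolute;pointer-events:none;border-radius:50%;` +
        `left:${event.clientX - rect.left}px;top:${event.clientY - rect.top}px;` +
        `width:8px;height:8px;background:rgba(0,0,0,0.15);` +
        `transform:translate(-50%,-50%) scale(1);` +
        `transition:transform 300ms ease-out,opacity 300ms ease-out;`;
      el.style.position = "relative";
      el.style.overflow = "hidden";
      el.appendChild(ripple);
      // Same subtle animation, same timing, for every element.
      requestAnimationFrame(() => {
        ripple.style.transform = "translate(-50%,-50%) scale(30)";
        ripple.style.opacity = "0";
      });
      setTimeout(() => ripple.remove(), 350);
    });
  });
}

attachPressFeedback(document);
```

Because one handler drives every marked element, the feedback is consistent by construction – exactly what Shneiderman's first rule asks for.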

http://www.google.com/design/spec/animation/responsive-interaction.html#responsive-interaction-material-response

Week 7: Time vs. Space


“Wayback Machine,” Internet Archive

https://archive.org/web/

This week's readings were very focused on the spatial characteristics of data and visualization mapping. Although the temporal characteristics of this data must also be considered, databases often become too complex when factoring in both time and space. As a result, space takes priority and time is left secondary. Dunn mentions the problem of historical landmarks being left out of digital maps, and how this creates an unintended reconstruction of our notion of the past. Goodchild talks about the "airbrushing of history" that results from the lack of real-time depictions of Earth's surface in Google Earth. Gregory discusses geographical information systems' favoring of spatial references within their databases.

All of these articles about tools that privilege space made me think of a tool that relies on time: the Internet Archive's "Wayback Machine." The Wayback Machine is a search engine backed by a database of screen captures of websites taken over the course of time. One can type in a URL, and the Machine shows a timeline of when that URL has been archived, then displays what the website looked like at a chosen moment. Since websites constantly change in usability, visuals, and content, the Wayback Machine offers a snapshot of the past through these documented, archived images. Google, for example, is a very actively archived website. With the Wayback Machine I can see every time it has been captured and browse those captures; I can see what Google looked like in 1999, and although the archived page isn't interactive, I get a sense of how the site has evolved over the years. I find this a very interesting tool because it archives the internet and documents its changes – changes that would otherwise go unrecorded, because previous versions are lost when a new design is implemented. It is a look into the past, into the "historical landmarks" Dunn writes about that otherwise get lost in real-time digital mapping.
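The Internet Archive also exposes this lookup programmatically through its public Wayback Availability JSON API. As a minimal sketch (error handling omitted), the query below asks for the capture of a URL closest to a given date:

```typescript
// Minimal sketch: query the Internet Archive's Wayback Availability
// API for the archived snapshot of a URL closest to a timestamp.
interface WaybackResponse {
  archived_snapshots: {
    closest?: { url: string; timestamp: string; available: boolean };
  };
}

async function closestSnapshot(url: string, timestamp: string): Promise<void> {
  const endpoint =
    `https://archive.org/wayback/available?url=${encodeURIComponent(url)}` +
    `&timestamp=${timestamp}`; // timestamp format: YYYYMMDD
  const response = await fetch(endpoint);
  const data = (await response.json()) as WaybackResponse;
  const snapshot = data.archived_snapshots.closest;
  if (snapshot?.available) {
    console.log(`Closest capture: ${snapshot.timestamp} -> ${snapshot.url}`);
  } else {
    console.log("No archived snapshot found.");
  }
}

// e.g. what did Google look like at the end of 1999?
closestSnapshot("google.com", "19991231");
```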

Week 6: Simplifying Networks

While reading "Demystifying Networks," I kept wondering how the spreadsheets for my final project would fit into the structures the author describes. My data has nodes, but instead of one piece of "stuff" per node, I have multiple. For example, my "ingredients" category contains many different foods, and I was afraid my situation is exactly what the author describes as difficult to translate into a network: "Once you get to three or more varieties of nodes, most algorithms used in network analysis simply do not work; most algorithms were only created to deal with networks with one variety of node." If I understood correctly, my spreadsheets contain more than one variety of stuff per node.

I scrolled down to the comments and found someone with a problem similar to mine – figuring out how to translate complex data into a network visualization, that is. The author's response was that although this type of data is possible to visualize through some kind of algorithm, that isn't the best way to go about solving the problem. Instead, the author suggested multiple ways of visualizing the data, because one method would not be enough. This is roughly what I was already doing to solve my problem: I had created a word cloud, and the data the word cloud narrowed down for me could be plugged into a new spreadsheet and visualization. "An option might be to represent each node type individually in a separate representation," the author suggested, in order to translate a spreadsheet into something visually readable. These individual representations could then be combined to see a bigger picture. Although I wish I could visualize something like the Flavor Network image attached below, the process is too complicated for me to translate into a spreadsheet that visualization programs can read. It might also put too much data into the visualization: although the Flavor Network looks interesting, connected, and color-coordinated, many of its connections are too small to interpret, and that data is lost.

"Flavor Network," Ahn et al., Scientific Reports
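One way to act on the "represent each node type individually" suggestion is a bipartite projection: keep truck→ingredient pairs in the spreadsheet, then derive a single-type ingredient network by linking ingredients that appear together. Here is a minimal sketch of my own, with invented food-truck data (not the Flavor Network's actual method):

```typescript
// Minimal sketch of a bipartite projection: collapse two node types
// (trucks and ingredients) into a one-type ingredient network.
// The trucks and ingredients below are invented for illustration.
const truckIngredients: Record<string, string[]> = {
  "Taco Truck": ["tortilla", "beef", "cilantro"],
  "Fusion Truck": ["tortilla", "kimchi", "beef"],
  "Vegan Truck": ["tofu", "cilantro", "kimchi"],
};

// Count how many trucks each pair of ingredients shares; that
// count becomes the edge weight in the projected network.
const edges = new Map<string, number>();
for (const ingredients of Object.values(truckIngredients)) {
  const sorted = [...ingredients].sort();
  for (let i = 0; i < sorted.length; i++) {
    for (let j = i + 1; j < sorted.length; j++) {
      const key = `${sorted[i]} -- ${sorted[j]}`;
      edges.set(key, (edges.get(key) ?? 0) + 1);
    }
  }
}

// Each line is a weighted edge a network tool could read.
for (const [pair, weight] of edges) {
  console.log(`${pair} (weight ${weight})`);
}
```

The result has only one variety of node (ingredients), which is exactly the shape most network algorithms expect.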

Week 5: Avoiding Assumptions


A theme that resonated with me from this week's readings was information loss: both through miscommunication between reader and content, and through a lack of voice in history translating into a lack of voice in documentation – which can also be reversed to say that a lack of voice in documentation leads to an assumed lack of voice in history. I thought it was important that Drucker mentions in "Humanities Approaches to Graphical Display" that "the history of knowledge is the history of forms of expression of knowledge." To me this meant that history is only as much as how it was documented and interpreted, and miscommunication through information loss becomes very dangerous in this sense.

This week we've been learning about different data visualization techniques to use in our final project, and these readings emphasize the importance of being smart about our techniques and tools. The first step is our data: choosing what to gather, how to gather it, gathering it, and then thinking about how the reasoning behind gathering it can translate into knowledge through a visualization. Our data is metadata about the most popular LA food trucks, covering categories such as common words, food types, ingredients, names, and prices. We want to take this metadata, visualize it, and use it to support and analyze our findings about the success and trends of food trucks. One of the first visualization steps could be a word cloud that makes common words bigger and links them to the words they are most often paired with. This would give insight into the main items or catchphrases food trucks are using to drive business, as well as into what consumers are consuming most. From there, I would take the most common words and analyze them further, in order to get specific, accurate, detailed data. This could be a timeline, incorporating time frames into the visualization to show the rise of trucks and when the trends suggested by the word cloud were realized. I think two different types of visualizations would allow flexibility and accuracy with our data, and would also encourage readers to interact with the data and figure out how the two relate to one another.

Of course, this could be a problem in itself. To make sure readers don't assume too much, our graphs would have short descriptions for accuracy and a further detailed "about" paragraph. In addition, we would address any data that doesn't quite fit into the map – uncertainties, for example. Beyond visualization, Drucker's article also points out some faults with "data" itself. Data is assumed to be black-and-white fact that can be plotted onto a visualization, when in reality many uncertain pieces of data don't quite fit. In order not to omit these pieces – which would lead the reader to think they simply don't exist – a visualization tool has to be created with these humanities issues in mind. How our tool expresses the data defines it. The representation of knowledge is just as important as the knowledge itself.
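As a first pass at the word-cloud step, here is a minimal sketch (with invented menu text, not our real food-truck dataset) of counting word frequencies, which is what determines the size of each word in a cloud:

```typescript
// Minimal sketch of the word-cloud step: count word frequencies in
// food-truck menu text, then rank them so the most common words
// (the biggest in the cloud) come first. Sample text is invented.
const menuText: string[] = [
  "Korean BBQ tacos with fresh kimchi",
  "Fresh gourmet tacos and loaded fries",
  "Gourmet grilled cheese with fresh truffle oil",
];

const counts = new Map<string, number>();
for (const line of menuText) {
  for (const word of line.toLowerCase().match(/[a-z]+/g) ?? []) {
    counts.set(word, (counts.get(word) ?? 0) + 1);
  }
}

// Sort descending by frequency: these counts would drive font size.
const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
for (const [word, n] of ranked.slice(0, 5)) {
  console.log(`${word}: ${n}`);
}
```

The most frequent words surfaced here would then become the inputs for the second, more detailed visualization, such as the timeline.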

Week 4: Databases Visualized

“If [this] data were published in books, a bookshelf 450 miles long would be required to hold them” (Kroenke & Auer). This quote from “Database Concepts” made me think about different ways of representing the data in a database. It could be written out in physical books, stored in limitless tables online, or it could be visualized. Since, according to this article, the “largest databases are those that track behavior,” I wanted to find a metadata visualization that could communicate that type and that volume of information.

Data Paris (http://dataparis.io/#) is a visualization of the city of Paris in the form of metadata. At first glance there are a lot of different buttons I wasn't sure what to do with, and this is a problem I assume one runs into when trying to turn so much data into a simple graphic. It would make sense to translate the idea of rows and columns (from a traditional database) into this visual, because the logic of such a structure is easy to understand. I began to understand the website after playing around with it for a while, but the context of "Paris" was lost on me because I am not familiar with the area. I did, however, find patterns in the metadata that I wouldn't have been able to detect as easily without visuals. I started by looking at the areas with the fewest single people. I noticed that these areas also had the most married people, the most retirement-aged people, the lowest population density, the most homeowners, and the highest home prices. It made sense that all of these statistics would go together, so it was fun to click on metadata categories and predict which areas would light up. I had to make these data connections myself, but the visuals confirmed the predictions I had made based on information I had already gathered from the visualization.
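Here is a minimal sketch of the rows-and-columns logic behind a view like this (the districts and numbers are invented, not Data Paris's real data): each row is a district, each column a metadata category, and "lighting up" areas is just selecting the rows that rank highest in a chosen column.

```typescript
// Minimal sketch: database rows/columns behind a map-style view.
// Each row is a district, each column a metadata category; the
// district names and figures below are invented for illustration.
interface DistrictRow {
  district: string;
  percentMarried: number;
  percentHomeowners: number;
  densityPerKm2: number;
}

const table: DistrictRow[] = [
  { district: "A", percentMarried: 58, percentHomeowners: 61, densityPerKm2: 9000 },
  { district: "B", percentMarried: 34, percentHomeowners: 22, densityPerKm2: 26000 },
  { district: "C", percentMarried: 51, percentHomeowners: 48, densityPerKm2: 14000 },
];

// Select the districts that would "light up" for a chosen category.
function topDistricts(
  rows: DistrictRow[],
  column: keyof Omit<DistrictRow, "district">,
  count: number,
): string[] {
  return [...rows]
    .sort((a, b) => b[column] - a[column])
    .slice(0, count)
    .map((row) => row.district);
}

// Predicting that high-homeownership areas overlap high-married areas:
console.log(topDistricts(table, "percentHomeowners", 2)); // ["A", "C"]
console.log(topDistricts(table, "percentMarried", 2)); // ["A", "C"]
```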

Another metadata visualization source is http://create.visual.ly/, which allows anyone to create visualizations based on their own or chosen metadata. For example, you can log into Facebook, and if you have a Page you can see basic stats about page fans, such as demographics and geographics, how your page is doing in terms of shares, views, and clicks, and data about use over all time versus the last 30 days. Another visualization on the website lets you log into Twitter and search any hashtag to see metadata about its lifetime, common sources, and the Twitter accounts with the most influence on it. These visualizations are great ways to show relationships within gathered, available data. They put metadata into context because they are very specific and relevant. This also means, however, that these visualizations stay very basic: they can only give you access to a limited amount of metadata in a very specific context, but they still provide a nice, simple visualization timeline that offers insight through contextualized knowledge about the data.

Overall, visualizations are a great way for people to make sense of databases and turn data into knowledge. They provide a seemingly simple process and are an enjoyable way for users to learn.

Week 3: Information Loss Through UX

While reading Wallack and Srinivasan’s piece on information miscommunication, I was reminded of miscommunications that happen in a website-user interaction. The website should be made for the person using it, but often the website’s structure and content are the only focus and the user is left unconsidered.

A local community can't fully control its problems or the resulting data, so it is up to the state to help, document, and guide in a way that makes sense to the community. Since the administration is in a position of power, it is responsible for its people and for using that power in a helpful, accessible way. Similarly, it is not the user's responsibility to adapt to each new website's way of doing things. User experience should be designed with consideration for who needs to use the website, for what purposes, and in what context.

An example I thought of was elderly people on the web. User research shows that seniors are slower and less comfortable using the web than the average user, so relevant websites need to be designed in a way that is accessible to seniors rather than excluding them. Priceline.com is an example of a relevant website: data shows that travel websites get significant traffic from older users, so accommodating them on Priceline through UX matters. Older users need larger font sizes, fast response times, fewer actions per page, and ample white space between clickable objects. They mainly use tablets and desktops, so a horizontal scroll feels natural to them. They prefer to read information, so limiting video content is optimal. They are uncomfortable trying new things for fear of failure, so making tasks straightforward, with on-boarding guidance, will encourage them. Seniors often blame themselves when they can't figure out how to use a website; when designing, however, the user is always right. It is important to design with the specific user in mind, in this case including senior citizens, because if the user can't use your website, it is useless, and poor design.
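As a toy illustration of baking those guidelines into code, here is a sketch of my own (the pixel values are invented for illustration, not NN/g's or Priceline's numbers): a style profile that enlarges text and spaces out clickable targets.

```typescript
// Toy sketch: apply a "senior-friendly" profile to a page by
// enlarging text and spacing out clickable targets. The specific
// pixel values are invented, not researched recommendations.
interface AccessibilityProfile {
  baseFontPx: number;     // larger type for readability
  minTargetPx: number;    // bigger click targets, easier to hit
  targetMarginPx: number; // white space between clickable objects
}

const seniorProfile: AccessibilityProfile = {
  baseFontPx: 20,
  minTargetPx: 48,
  targetMarginPx: 12,
};

function applyProfile(profile: AccessibilityProfile): void {
  document.body.style.fontSize = `${profile.baseFontPx}px`;
  document.querySelectorAll<HTMLElement>("a, button").forEach((el) => {
    el.style.minWidth = `${profile.minTargetPx}px`;
    el.style.minHeight = `${profile.minTargetPx}px`;
    el.style.margin = `${profile.targetMarginPx}px`;
  });
}

applyProfile(seniorProfile);
```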

A designer has power over the usability of their product, and with it the responsibility of making relevant websites easier and faster to use for, in this case, older people. Similarly, if administrative power were used to tailor ontologies to specific community needs, less information would be lost along the way. People in positions of power have the responsibility of considering who needs their services, why, and in what context, in order to achieve efficient and accurate communication.

http://www.nngroup.com/articles/usability-for-senior-citizens/

http://www.priceline.com/

Week 2: Bias in Archives

An important theme that stood out to me from this week's readings was the notion that archived history is not the same as accurate history. Noriega's "Preservation Matters" points out flaws in the system of archiving information. An archive implies a complete collection of all existing documentation, gathered and preserved to represent history in its entirety. In reality, however, not all existing documents find their way to an archive – if the information is even in a documentable form to begin with. A lot of data never enters an archive, and those pieces of history therefore don't "exist." Julia Gaffield's article, "Digging for Lost Documents," raises the issue of a fact – a historical occurrence – actually being doubted if it isn't documented: Haiti's Declaration of Independence could not be located, and because the physical document didn't exist, the country's independence could not be "reaffirmed" (Gaffield). A document was needed to validate this country's important occurrence, and without it in a locatable archive, that piece of history was not considered valid. There were many reasons for this archival mishap; Gaffield identifies "negligence, natural disasters, lack of funding, and political turmoil" as causes of history being lost and undocumented. Even though the historical events happened, they were not archived, and therefore lost their value to the rest of the world until the document could be located.

If a document isn't included in an archive, it's automatically excluded. Similarly, if a "group isn't included in an archive, they're automatically excluded" (Noriega). I took a class last year in Information Studies where we discussed this social responsibility of the archivist. Noriega's article reminds me of the modern-day issue that if something isn't online, we assume it doesn't exist – and this applies to the notion of an archive as well, since the internet serves as an online archive. Professor Srinivasan of Information Studies at UCLA is a strong advocate for "indigenous and ethnic communities' appropriated use of media technologies" (Srinivasan), so that underrepresented groups can create their own presence online instead of risking misrepresentation or no representation at all. Srinivasan's project "Tribal Peace" (http://www.tribalpeace.org/) does exactly this: the website is not only a tool for this Native community but also an online presence, and therefore a representation, depicted in a way chosen by the group itself. Without the help of activists like Srinivasan, financial constraints and bad experiences (Noriega) with collaborative projects – such as an archivist imposing their own historical account onto a group or event instead of letting the people create their own vision – lead to groups lacking archives. It is important to "facilitate and support" (Noriega) archival efforts in order to achieve complete, unbiased archives and therefore a more accurate account of history.

Chon Noriega, "Preservation Matters"

Julia Gaffield, "Haiti's Declaration of Independence: Digging for Lost Documents in the Archives of the Atlantic World"

Ramesh Srinivasan, "Tribal Peace," http://www.tribalpeace.org/