Week 3: Information-Loss Through UX

While reading Wallack and Srinivasan’s piece on information miscommunication, I was reminded of the miscommunications that happen in website-user interactions. A website should be made for the person using it, but often its structure and content are the only focus, and the user is left unconsidered.

A local community can’t fully control its problems or the resulting data, so it is up to the state to help, document, and guide in a way that makes sense to the community. Since the administration is in a position of power, it is responsible for its people and for using that power in a helpful, accessible way. Similarly, it is not the user’s responsibility to learn each website’s idiosyncrasies. User experience should be designed with consideration for who needs to use the website, for what purposes, and in what context.

An example I thought of was elderly people on the web. User research shows that seniors are slower and less comfortable using the web than the average user, so relevant websites need to be designed in a way that is accessible to seniors and does not exclude them. Priceline.com is an example of such a website. Data shows that travel websites get significant traffic from older users, so accommodating them on Priceline through UX is important. Older users need larger font sizes, fast response times, fewer actions per page, and ample white space between clickable objects. They mainly use tablets and desktops, so using a horizontal scroll feels natural. They prefer to read information, so limiting video content is optimal. They are uncomfortable trying new things for fear of failure, so making tasks straightforward, with onboarding guidance, will encourage them. Seniors often blame themselves when they can’t figure out how to use a website. In design, however, the user is always right. It is important to design with the specific user in mind, in this case including senior citizens, because a website the user can’t use is useless and poorly designed.

A designer has power over the usability of their product, and with it the responsibility of making relevant websites easier and faster for, in this case, older people to use. Similarly, if administrative power were used to tailor ontologies to specific community needs, less information would be lost along the way. People in positions of power have the responsibility of considering who needs their services, why, and in what context in order to achieve efficient and accurate communication.

http://www.nngroup.com/articles/usability-for-senior-citizens/

http://www.priceline.com/

Week 3: Geeks Make Art

I define art as any creation that reflects something about the artist or creator, whether it be an interest, a feeling, a fact about their life, their dreams, etc. Art can be a painting, a film, a book, or even a drawing that you give your mother for Mother’s Day. Just the concept of using the resources around you to create something that, without you, would not exist is impressive to me.

While reading Alexis Madrigal’s article in The Atlantic, “How Netflix Reverse Engineered Hollywood,” I found myself intrigued and amazed by his quest to make a list of all the altgenres (the term Madrigal uses for the various genres found on Netflix) that the streaming service has to offer. One sentence in particular caught my eye, in which he recounts a conversation with Todd Yellin, Netflix’s VP of Product. Madrigal writes, “‘It’s a real combination: machine-learned, algorithms, algorithmic syntax,’ Yellin said, ‘and also a bunch of geeks who love this stuff going deep.'” I also have a definition for geeks: anyone who is very passionate about something, whether it be movies, video games, data, or any other field of interest.

This quote resonated with me, reminding me of a conversation I had with one of my friends a few days ago when I showed him a YouTube video by a Poketuber (someone who makes videos about the media franchise Pokemon), Nathan Smith, who goes by the name “natewantstobattle.” Smith has over 200,000 subscribers on YouTube, garnered through his “Let’s Plays” and song parodies. Below is one of his parody videos, “Hoenn’s Out,” based on the song “Love Runs Out” by the American pop band OneRepublic.

Smith’s song “Hoenn’s Out” captures the excitement Pokemon fans felt upon hearing that Pokemon Omega Ruby and Alpha Sapphire were confirmed for the Nintendo 3DS family of systems. Smith has used what many would call a “geeky” interest to create parody videos that many people enjoy watching and listening to. Other YouTube users leave comments expressing how they feel about his parodies. Here are a few:

TyranitarTube: “I’m gonna jam to this as I drive to BestBuy in November.”

Taylor Nordman: “I heard the original song for the first time today, and in comparison to this, I thought it sucked.”

Blaze The Mincraftian: “Cant stop listening to this song.”

When Yellin mentioned the people who watched movies for Netflix and rated them on violence, level of romance, and so on to create the altgenres, he was describing people who were passionate about the subject and committed to doing the work. Even Madrigal, who followed the trail they left behind by giving each genre a title, was passionate enough about building his list to work until it was complete. Whether you call these people geeks in a good way or a bad way, I call them artists for creating something they believe in.

The Problem isn’t in the Data

While I understand the argument for an improved data connection between state and community, I doubt that it is the only problem. “Mismatched Ontologies” argues that with improved connection, problems will be fixed, citing India as an example. I disagree. A lack of data does not plague these nations, and it is the least of their concerns. Every government in the world is corrupt and is essentially nothing but a group of rich citizenry. For example, Alaska has tried many times, and sometimes succeeded, to build a road to nowhere, despite the cost and uselessness of the project.

http://www.nytimes.com/2005/08/20/opinion/20lende.html

http://www.latimes.com/opinion/op-ed/la-oe-babbitt-road-to-nowhere-alaska-20140311-story.html

In this case, there is data yet no improvement. The main problem with governments is self-interest and the pursuit of power and wealth. The governments of most “third world” countries do not answer directly to the people. Even if the people had a problem, there would be no need to listen. Powers are not motivated to act unless there is a threat behind the complaint. For instance, my apartment is brand new, and yet the dishwasher was installed so that it only opens halfway. Due to a lack of communication and intelligence, the dishwasher hits the stove. Even when I make a complaint, maintenance refuses to fix the mistake. When my father, the one who pays, makes the complaint, people are more inclined to listen. Still, the problem has not been fixed. It is the same with every problem in these countries.

Even if the government wanted to listen, growing populations and increasing national debt discourage any infrastructure improvement. Most “third world” countries are so heavily in debt to their former colonizers that they cannot afford to fix their own problems. Instead, the poor economic system allows foreign countries to manipulate poorer workers into working for less. Very few outside organizations will help improve the infrastructure, as it is their source of cheap labor.

The fact is, the average citizen has so much working against him that reporting his problems in the correct manner is useless. The only way I can see change in the flooding in India is if a major factory or foreign company becomes waterlogged. Only with influence from a rich source can improvements be made; either that or one enormous rebellion. I can only see data collection solving the problems of moderately small communities in wealthy countries: neighborhoods in a city, for example, or cities in a county; places where citizens might actually come face to face with the person in charge of their well-being.

What Doesn’t Belong?


http://www.demotivers.com/5412/Who-Doesnt-Belong-Here

I was struck, while reading Madrigal’s article, by the phenomenon at the end which he dubs the “Perry Mason effect.” It instantly made me think of those humor posters asking which of these things doesn’t belong. It was incredible that in a categorization system with literally tens of thousands of genres, such a strange little hiccup could occur in what one would consider a relatively important category: most popular actors. Moreover, this weird occurrence was not linked to recommendations made to Netflix customers, nor did it indicate that tons of people were watching Perry Mason episodes or movies featuring Raymond Burr. It was just something that happened in the process of feeding human preferences into a computer to create the altgenres. There is really no explanation for the Perry Mason effect. Yet when extrapolating this to wider fields in the Digital Humanities, I think this kind of computational serendipity may be one of the reasons that humanists are so drawn to analyzing their data with machines. The strange feedback loop from research to computational model or analysis, and back to human presentation, elucidates incredibly interesting “Perry Mason effects” which the researcher alone would not have seen. Unlike Madrigal, however, I believe that in some cases of research the explanatory reasoning behind the “something in the code and data” can be traced and found incredibly useful by the researcher.

For instance, archaeologists have been feeding information (spatial and quantitative data about artifacts) into databases and mapping programs to show distribution patterns over a whole site or region. Often, nothing strange happens in the translation of the data back to human presentation (the final map, for instance), and it shows generally what was expected. But in some instances, new spatial relationships, groupings, and the like come to light at this final stage that were not readily apparent, either in the field or straight out of the field notes. Because these computer systems are mechanical, they help the human researcher investigate the data without our inherent biases and expectations (though those might still be present in the data itself), and let us see things that we would not have otherwise. Usually in these cases, once the “Perry Mason effect” has been identified, it is possible for the archaeologist to retrace how, why, and where it might have happened, and to outline something about the site or culture that may otherwise have gone unnoticed.
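As a toy illustration of how such a grouping might surface (the coordinates, cell size, and threshold here are entirely hypothetical), binning artifact find-spots into a grid and flagging unusually dense cells is one minimal way a mapping program can reveal patterns that weren’t obvious in the field notes:

```python
from collections import Counter

def density_grid(points, cell_size):
    """Bin (x, y) find-spots into square cells and count artifacts per cell."""
    counts = Counter()
    for x, y in points:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

def dense_cells(points, cell_size, threshold):
    """Cells whose artifact count meets the threshold: candidate groupings."""
    return {cell: n for cell, n in density_grid(points, cell_size).items()
            if n >= threshold}

# Hypothetical find-spots in site coordinates (metres).
finds = [(1.0, 1.2), (1.3, 1.1), (1.4, 1.6), (9.0, 9.5), (2.0, 8.0)]
print(dense_cells(finds, cell_size=2.0, threshold=3))
```

Real GIS software does far more, of course, but the principle is the same: the machine applies the rule uniformly, with no expectation of where the clusters “should” be.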

Archiving Documents: Preservation of History and Reference For The Future

The best part of visiting the libraries is walking through the stacks full of books at a fast pace, running my finger along the small white call-number stickers and looking for the exact call number I have scribbled down on a small piece of paper. Moving through such a vast yet narrow space full of information makes me feel like I’m on a treasure hunt.

In a world turning increasingly digital, with an astonishing amount of data collected and saved everywhere every day, it is important to have a system that allows us to navigate the pile of information in an efficient and effective manner. Moreover, it is equally important that we follow that system not only when finding things but also when archiving them. Julia Gaffield’s search through archives around the Atlantic world for Haiti’s Declaration of Independence, which had gone missing for centuries, is an example of the significance of the archiving systems that connect our past to our present and future.

The main problem in tracking down Haiti’s Declaration of Independence wasn’t only that the people who archived the document did not follow the same archiving system we use today, nor that the people searching for it had no evidence of where to look. It was that, while digging through history, people were thinking in today’s terms rather than in those of the time when the document was archived. Centuries ago, when the declaration left Haiti, the world wasn’t divided into countries as we know them today; it was intertwined and connected through different colonies, with people constantly traveling between them, transporting and trading goods. Gaffield was able to find the missing document that historians and government officials could not locate for years because she did not take today’s archival search at face value but understood how it was built, digging through the archiving system’s history along with the missing document’s history.

Navigating data will only grow more complicated, with an ever-growing amount of information pouring in and storage running out of space, but at the same time it will grow easier and more efficient as we adjust to the system. The National Archives of Malaysia has not only built an official portal for government servants, students, researchers, and the general public to find documents easily on its website but also offers consulting, virtual tours, and seminars on how to use the archive. The United States National Archives portal likewise allows the public to look for documents by people, places, foreign policy, events, and more, along with the founding documents. The use of data and technology has brought us closer than ever to our history, and it will only continue to do so.


References:

Vintage Fashion Data

http://vintagefashionguild.org/label-resource/town-travel-ware/


While the importance of preserving valuable artifacts and data may seem like common sense, Noriega emphasizes and explains in detail why it is so critical to maintain catalogues and records of what has come before us. One thing that I love is vintage clothing. If it were not for people taking the time to document clothing tags and photograph different brands and their styles, over time it would become very difficult to know which garment came from which decade. Knowing minor details like that makes clothing feel particularly unique. I have a swimsuit from the mid-seventies, and I was able to date it thanks to the website listed above, which has a huge “Label Resource” page where you can see how the logos of different companies have changed over the years. Knowing a bit of an object’s history gives it better context in the world at large, and that allows us to tell better stories about those objects.

The cool thing about vintage clothing is that it never really goes out of style and designers are always looking back to it for inspiration. As art collector Armando Durón stated, “no one can tell what the ‘historical cut’ will be 100 years from now—that is, what or who will be seen as important or forgotten” (Noriega, 10). Thus it is important to try to catalogue everything so that nothing is missed or overlooked in the future. You never know what will be trendy in the next few decades, so you might as well have access to as much of the past as you can.


Another important thing to keep in mind about preservation is that natural disasters can destroy material things, but if things are digitized, they can be saved indefinitely. Sadly, “during the summer of 2001, a short circuit in the air conditioning system caused a fire on the roof that might have destroyed the building… But the event provided a harsh reminder about the fragility of the historical building and its contents” (Noriega, 12). It would be truly tragic if a beautiful collection of vintage clothes were to be flooded, but with at least photo documentation, new designers could try to replicate old patterns, especially if the patterns themselves were saved as well.


And finally, after reading “Classification and Its Structures,” I learned it is important not to cross-classify, because things can easily get lost and confused. The vintage clothing website accordingly has a few broad categories, such as furs, fabrics, and labels, each with its own easy-to-distinguish subcategories.


What happens to a dream deferred?

“Digital Harlem” documents the hustle and bustle of the neighborhood of Harlem between 1915 and 1930, as part of a collaborative project by historians at the University of Sydney. The main focus of the project is to capture the lives of “ordinary” African Americans living in Harlem, rather than the “unordinary” black artists and middle class on whom most studies of Harlem in this period focus. Ordinary life is captured through an interactive map on the website. The site allows users to search by events, people, and places through a vast record of information collected from legal records, newspapers, and archives. When first searching the database, I noticed how many of the documented events are crimes, which prompted me to figure out why this was the case. Searching further, I found that Harlem at this time had such high crime rates because many residents of the neighborhood struggled with poverty and desperation. This reminded me of a Langston Hughes poem I have read before, “Harlem” (http://www.poetryfoundation.org/poem/175884), which addresses the limits of the American Dream for African Americans. Like “Digital Harlem,” which avoids the common, positive study of the art and success that came out of Harlem at the time, this poem sets aside the success of many African Americans and focuses on the era’s problems. This unusual perspective gives the reader or user much more to think about, since it is not generally focused on in society. Looking at the problems of this time helps us understand and acknowledge its achievements.

Harlem
by Langston Hughes

What happens to a dream deferred?

Does it dry up
like a raisin in the sun?
Or fester like a sore—
And then run?
Does it stink like rotten meat?
Or crust and sugar over—
like a syrupy sweet?

Maybe it just sags
like a heavy load.

Or does it explode?

Lack of Wiring

Surveillance is omnipresent. Often unbeknownst to us, we are simultaneously observing, analyzing, and acting on thousands of pieces of aggregated data. Essentially, we act as surveillance cameras: through our eyes and ears, we decipher information pertinent to our likes, needs, and wants on a daily basis. We have all heard the controversy surrounding our government agencies in this regard; there is no need to regurgitate it. However, it is important to note that surveillance is both good and bad. Aggregated information, metadata, is beneficial to scholars and other professionals who must analyze this wealth of information and decide what to do with it.

Last week, UCLA received an $11 million grant from the National Institutes of Health to create a specialized center that “would study biomedical data analysis,” says Amanda Schallert, a Daily Bruin senior staff member. The grant serves as crucial aid for standardizing complex biomedical data. It will employ researchers who gather data from other researchers in order to provide the comprehensive, easy-to-understand information needed in this focused field. To be frank, it all sounds incredibly complex to me just from reading the phrase “biomedical data” (Schallert). Nonetheless, UCLA is incredibly appreciative of this generous gift and aims to execute its agenda in the forthcoming years.

I would like to backtrack a bit to an article I read about metadata. The title sounded intriguing: Understanding Metadata. Sure, I’d like to understand metadata. The conventional definition suits it: data about data. The article defined metadata clearly, and differently, along with other terms I had not been acquainted with: interoperability, Dublin Core, the Text Encoding Initiative (TEI), and the list goes on. These are all sub-terms under the umbrella of what constitutes metadata as a field of study. I quickly felt my eyes glazing over and losing focus as term upon term rolled up my screen. What I noticed was that I was unable to process and digest this information. I did not possess the tools necessary to absorb it, due to a lack of brain wiring, or coding, if you will. Then I made the connection. The grant I mentioned earlier, for the center focused on collecting biomedical data, is precisely what I lacked: it will create the tools and equipment necessary to understand information.
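To make those terms a little more concrete for myself (this sketch is my own, not from the article, and the field values are illustrative): a Dublin Core record is just a small set of standard fields, and “interoperability” is the idea that one schema’s fields can be walked, or “crosswalked,” into another form:

```python
# A minimal Dublin Core-style record as a plain mapping (values are illustrative).
record = {
    "dc:title": "Understanding Metadata",
    "dc:creator": "NISO Press",
    "dc:date": "2004",
    "dc:format": "application/pdf",
    "dc:language": "en",
}

def to_citation(rec):
    """Interoperability in miniature: crosswalk Dublin Core fields into a citation."""
    return f'{rec["dc:creator"]} ({rec["dc:date"]}). "{rec["dc:title"]}".'

print(to_citation(record))
```

The standardizing work the new UCLA center will do for biomedical data is, as I understand it, a much harder version of the same move: agreeing on the fields so that everyone’s data can be read the same way.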

‘Ah ha.’  That was the moment I connected the dots.  Naturally, it all made sense afterwards once I translated that experience to my life.

Daily Bruin:

http://dailybruin.com/2014/10/10/ucla-receives-11m-nih-grant-to-create-center-for-big-data-computing/

Understanding Metadata:

http://www.niso.org/publications/press/UnderstandingMetadata.pdf


Week 2: #Metadata (on Twitter)


I finally understand.

Metadata is a term I’ve heard thrown around a lot by some friends and coworkers, but never completely understood until now.

I’m an avid user of social media, especially Twitter and Facebook. Some people might say my obsession is a disappointing quality of my character, but I love it for a very specific reason. Social media is almost like a science that can conclude more about us than what is on our profiles: a scary thought in terms of privacy, but fascinating nonetheless.

Like many other social media outlets, Twitter utilizes many aspects of metadata: it can record your location, what language you speak, and your interests, and learn a great deal about you just by monitoring your behavior on the platform.

I knew how much information about me was being recorded, but I never knew what it was called or where it was going. According to Neal Ungerleider’s Fast Company article, Twitter can determine what language you speak based on messaging metadata, meaning the language in which you sent messages is recorded. Twitter can then combine this with geographic data from your tweets to determine, essentially, where you live.

Another interesting aspect of metadata that I thought of while reading the article “What is Metadata?” is the hashtag. The hashtag is its own form of metadata, grouping related things together in an incredibly vast, almost dauntingly unnavigable digital space. Tagging has actually been used in the metadata world for quite some time as a way to categorize, but in the world of social media it only became commonly used when outlets like Twitter and Vine emerged. The use of hashtags is where metadata shines. By using hashtags on certain topics, people can connect with those who share similar interests.
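The grouping a hashtag performs can be sketched in a few lines (the posts below are made up): each tag acts as a metadata key, and an inverted index maps it to every post that carries it.

```python
import re
from collections import defaultdict

def index_by_hashtag(posts):
    """Build an inverted index: each hashtag maps to the posts that carry it."""
    index = defaultdict(list)
    for post in posts:
        for tag in re.findall(r"#(\w+)", post.lower()):
            index[tag].append(post)
    return dict(index)

# Hypothetical posts; the tags group them just as Twitter's metadata would.
posts = [
    "Loving this week's reading #metadata #dh",
    "Hashtags are data about data #metadata",
    "Lunch break #foodie",
]
print(index_by_hashtag(posts)["metadata"])
```

Twitter’s real systems are obviously far more sophisticated, but this is the basic reason a hashtag search can pull together strangers talking about the same thing.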


The world of Twitter, and social media in general, utilizes metadata, including the metadata embedded in HTML, as an asset. It creates a digital space for users from all around the world to connect, and makes it easier to join conversations through hashtags or “suggestions to follow” from the web application itself. By using data to collect more information about you, Twitter is enhancing the user’s experience and, in many ways, positively shaping the way we use metadata in our everyday lives. Especially given the millennial generation’s use of it, it seems to be working.

“These Amazing Twitter Metadata Visualizations Will Blow Your Mind.” Neal Ungerleider. Fast Company. 2013. http://www.fastcompany.com/3013208/these-amazing-twitter-metadata-visualizations-will-blow-your-mind

National Information Standards Organization, “What is Metadata?” (Bethesda, MD: NISO Press, 2004)

Week 2: Metadata and Nutrition


https://www.hersheys.com/reeses/products/reeses-peanut-butter-cups/milk-chocolate.aspx

http://www.fda.gov/Food/ResourcesForYou/Consumers/ucm267499.htm

http://myfitnesspal.com

“Setting the Stage” by Anne J. Gilliland is an introduction to the definition and purpose of metadata. Metadata, or “data about data,” “is understood in different ways by the diverse professional communities that design, create, describe, preserve, and use information systems and resources.” Metadata’s function differs across metadata standards because, Gilliland argues, no single standard is adequate for describing all collections of data. But it is crucial that metadata be stored and structured effectively so that the data can be retrieved efficiently, now and in the future. Without the proper storage and maintenance of metadata in databases, it is incredibly difficult to decode information objects and make them public knowledge.

In the early ’90s, the US Food and Drug Administration (FDA) decided to make one of the now most-used types of metadata available to the public: nutrition facts. Since 1994, the nutrition facts label has been printed on every packaged food product sold in America to help inform customers about the fat, carbohydrate, and sodium content, along with the percentage of daily calories, in the given serving size. For example, one of the links above leads to the nutritional information for a Reese’s peanut butter cup, and the second is the FDA’s guide to reading a nutrition label. Using the latter to, in a sense, “decode” the label and turn it into useful data, we find that, based on a 2,000-calorie diet, two Reese’s peanut butter cups are about 10% of the calories one should be eating per day, 20% of the total fat, and 6% of the sodium; the list goes on. Thanks to nutrition facts labels, we know that (unfortunately) eating twenty Reese’s peanut butter cups would mean doubling the total fat our bodies should be getting per day. The nutrition facts act as the metadata, and the labels act as a way to organize this metadata effectively, so we as consumers know exactly what we’re eating.
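The label’s percent-daily-value arithmetic can be reproduced directly. The serving figures below are my approximations of the Reese’s label, and the reference values are the FDA daily values for a 2,000-calorie diet that were in use at the time:

```python
# FDA daily reference values for a 2,000-calorie diet (pre-2016 label rules).
DAILY_VALUES = {"calories": 2000, "total_fat_g": 65, "sodium_mg": 2400}

def percent_dv(amount, nutrient):
    """Percent of the daily value one serving supplies, as printed on the label."""
    return round(100 * amount / DAILY_VALUES[nutrient])

# Approximate label figures for one serving (two cups) of Reese's peanut butter cups.
serving = {"calories": 210, "total_fat_g": 13, "sodium_mg": 150}

for nutrient, amount in serving.items():
    print(f"{nutrient}: {percent_dv(amount, nutrient)}% DV")
```

Scaling up is just multiplication: ten servings (twenty cups) at 20% of the fat daily value each comes to 200%, the “doubling” noted above.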

Another publicly accessible tool besides the nutrition facts label is the database that contains this type of metadata. MyFitnessPal is a crowd-sourced database, or record-keeping system, that stores a large share of the world’s nutrition facts labels. The web app allows users to add metadata and use the existing metadata to track their daily calories and other nutrition data. MyFitnessPal is an example of giving the public an easy way to access useful metadata to help improve their lives. Nutrition facts labels are an incredibly valuable development, but expanding the storage of metadata in a useful way is the future of informational systems.