Week 9: Reconstructing Medieval Paris

[Screenshot: 3D reconstruction of the Galerie des Merciers]

http://paris.cdh.ucla.edu/

Paris Past and Present is Professor Cohen’s project to create a 3D reconstruction of medieval Paris. It started last Spring Quarter, and I joined this Fall. I am just starting to learn how to use Vectorworks, but the other students, who include Tori and Haley, have already modeled the Galerie des Merciers at the Palais de la Cité (pictured above) and are working on the Lady Chapel of Saint-Germain-des-Prés. There is going to be a class next Spring Quarter devoted to the project, but you can join before then!

I don’t know a whole lot about the project so far, but it looks like the process-based questions and product-based questions that Lisa M. Snyder discusses in “Virtual Reality for Humanities Scholarship” are fairly balanced, and each category is or will be addressed with different software. Vectorworks is being used to make the reconstructions and to learn how the buildings were built, and Cinema 4D might be used in the future to present the models with better rendering and fly-throughs. If I’m remembering correctly, Alec was working on the floor plan of the Lady Chapel and found that the measurements on the outside and the inside of the building (somewhere around the transitional bays near the apse) didn’t agree. The emphasis is really on learning things about how these buildings were made that you couldn’t learn any other way. By reconstructing a building, you can get into the mind of the architect.

At the end of her article, Snyder discusses how students have much higher expectations of virtual reality now that they have so much more exposure to 3D graphics. Diane Favro addresses the same issue in “Meaning in Motion: A Personal Walk Through Historical Simulation Modeling at UCLA.” Snyder points to students’ ability to distinguish the scholarly value of less visually impressive academic VR projects from the polish of 3D graphics made for movies and computer games, while Favro notes the exact opposite. Maybe the student responses differed because of the way the projects were presented: Snyder’s projects are more process-based, and their 3D models set up more modest expectations, while Favro’s projects are more product-based and raise expectations by being presented in the high-tech, theater-like Visualization Portal, which promises kinetic experiences.

I think there could be a middle-ground presentation that is both immersive and relatively affordable. In 1993, the groundbreaking computer game Myst created an immersive environment out of static, pre-rendered 3D images and node-based point-and-click movement (click on the left side of the screen to turn left, click on the center to go forward, etc.). Static images can be more finely detailed than images rendered on the fly, and the composition of each image can be controlled to give the player the most beautiful and/or informative view. The sense of immersion came from the combination of comparatively better detail and the imposition of a slower pace that encouraged paying attention to particular views for longer periods of time.
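Just to make that concrete, here is a minimal Python sketch of the navigation scheme. The view names and layout are invented, but the logic, a graph of pre-rendered views traversed by clicking regions of the screen, is the part that matches Myst:

```python
# A toy model of Myst-style navigation: the world is a graph of
# pre-rendered views, and clicking a screen region moves between them.
# View names and layout are invented for illustration.
VIEWS = {
    "dock":          {"forward": "library_steps", "left": "dock_left", "right": "dock_right"},
    "dock_left":     {"right": "dock"},
    "dock_right":    {"left": "dock"},
    "library_steps": {"forward": "library"},
    "library":       {},
}

def click(view: str, region: str) -> str:
    """Return the next pre-rendered view, or stay put if there is no exit that way."""
    return VIEWS[view].get(region, view)

view = "dock"
for region in ["left", "right", "forward", "forward"]:
    view = click(view, region)
    print(view)  # dock_left, dock, library_steps, library
```

Because every node is a pre-rendered image rather than anything computed on the fly, each view can be as detailed and carefully composed as the designers like.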

Week 8: Retro Camera Controls

[Image: camera control interfaces]

The first example of an interface that I thought of was the set of controls on a camera, which are historically specific. Classic SLRs and rangefinders like the Nikon F and Leica M had very simple, mechanical controls: a shutter release button, a shutter speed dial on the body, and an aperture ring and a focusing ring on the lens. Beginning in the ’80s, camera controls became electronic. Freed from the limitations of mechanical linkages, the interface shifted to buttons and one or two dials whose function changed depending on the exposure mode. For the last several years, however, there has been a retro trend to bring back some of the physical controls of the past. For example, the Fuji X100T and X-T1 have a shutter speed dial and an aperture ring just like classic cameras, as well as a dedicated exposure compensation dial, a control that even some cameras with more modern interfaces, such as the Sony A7 II, have adopted.
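The difference between the two generations of interfaces is easy to sketch in code. The mapping below is hypothetical, loosely based on common PASM behavior rather than any specific camera:

```python
# A toy sketch of mode-dependent controls: the same physical dial does
# different things depending on the exposure mode. The mapping is
# hypothetical, modeled loosely on common PASM behavior.
MODE_DIAL_FUNCTIONS = {
    "P": {"front_dial": "program shift", "rear_dial": "exposure compensation"},
    "A": {"front_dial": "aperture",      "rear_dial": "exposure compensation"},
    "S": {"front_dial": "shutter speed", "rear_dial": "exposure compensation"},
    "M": {"front_dial": "shutter speed", "rear_dial": "aperture"},
}

def dial_function(mode: str, dial: str) -> str:
    """Look up what a dial controls in the current exposure mode."""
    return MODE_DIAL_FUNCTIONS[mode][dial]

print(dial_function("A", "front_dial"))  # aperture
print(dial_function("M", "rear_dial"))   # aperture
```

On the classic mechanical interface, by contrast, the mapping never changes: the shutter speed dial always sets shutter speed.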

In the chapter “‘So the Colors Cover the Wires’: Interface, Aesthetics, and Usability,” Matthew G. Kirschenbaum asks, “What role, then, should aesthetics have in interface design?” His answer is to create what David Gelernter calls “deep beauty,” the “idealized integration of form and function.” This idea has roots in the Bauhaus, which influenced the design of the Leica M, and it is frequently reiterated in online discussions where cameras are praised as beautiful or dismissed as ugly. In the case of camera interfaces, it’s a matter of creating a user experience where the photographer “feels in control” of the camera. The exercise of control is set in opposition to the “laziness” and “uncertainty” of automation, which was once advertised as liberating photographers from the distracting, time-consuming process of setting controls so that they could pay full attention to taking photos. It is interesting that both types of camera interfaces claim some sort of transparency: the first makes it easy to see and change the settings on the camera, the second makes it easy to ignore the camera. The fact that people have preferences among different types of interfaces shows that we do live in an “interface culture,” as Steven Johnson asserts. Interfaces are a matter of taste and aesthetics, not something merely utilitarian. As Kirschenbaum argues, interfaces are “where representation and its attendant ideologies are most conspicuous…where we deploy our most creative features and imaginative flourishes.” The rings and dials on cameras like the Fuji X100T are highly visible aesthetic statements about usability; combined with the camera’s specialized function and stylish industrial design, they succeed in creating “deep beauty.”

Week 7: Flood Control and the Issue of Study Areas

[Map: flood control features in Los Angeles, late 1970s]

This is a map of flood control features in Los Angeles as of the late ’70s, from Blake Gumprecht’s book The Los Angeles River: Its Life, Death, and Possible Rebirth. There are hundreds of miles of concrete-lined flood control channels, a handful of flood control basins and dams, and dozens of debris basins. Winter is coming, and when the rain starts (well, if there’s rain in the first place), we’re going to start seeing mudslides and flooding on the news pretty soon.

One of the arguments that Ian Gregory makes in the chapter “Using Geographical Information Systems to Explore Space and Time in the Humanities” is that GIS can perform an analysis over a broad study area, both spatially and temporally, adding a new capability to humanities research. Previously, scholars could approach a phenomenon either spatially or temporally (which could lead to similar but irreconcilable theories, e.g., Cronon’s and Turner’s theories of urbanization), or they could focus on a smaller study area that theoretically represented what was happening in a larger one. The benefit of GIS is that by integrating space and time over large areas and long spans of time, scholars can reevaluate the orthodoxies that previous studies may have produced. One weakness, however, is that broad GIS analyses are not as detailed as traditional studies and thus have less explanatory power. A possible methodological implication of these strengths and weaknesses is that scholars will use GIS as a preliminary step in research projects: after a broad overview is established with GIS, they would do an in-depth case study to come up with an explanation or theory of the phenomenon.
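Here is a sketch of what that broad-first workflow might look like in practice; the CSV file and its columns are hypothetical:

```python
# A sketch of the broad-first workflow: aggregate a phenomenon across
# regions and decades before choosing where to do an in-depth case study.
# The CSV and its columns (region, year, population) are hypothetical.
import pandas as pd

records = pd.read_csv("population.csv")
records["decade"] = (records["year"] // 10) * 10

# Integrate space and time in one pass: totals per region per decade.
overview = (records
            .groupby(["region", "decade"])["population"]
            .sum()
            .unstack("decade"))
print(overview)
# Anomalies in this broad overview suggest where a detailed case study
# with richer sources would pay off.
```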

The complementary relationship between GIS analyses and case studies reminded me of something I read about the flooding of the Los Angeles River. In the article “Flood Control Engineering in the Urban Ecosystem,” Jared Orsi writes about how “[m]oderate storms in 1978 and 1980 surprised flood controllers by triggering severe flooding, even in areas they had thought were protected” (148). The engineers were surprised, Orsi explains, because their theory of flood prediction was too generalized. If a fifty-year flood is only supposed to happen once every fifty years on average, why do such floods happen more often than that? The explanation is that the specific geographies of mountain ranges interact with complex storms in an uneven manner. One area may get only a drizzle while another gets a heavy downpour. While the average rainfall may not look too bad, it is the smaller problem areas that you have to be careful about. This example from civil engineering illustrates why small study areas and case studies are an important part of research in the humanities, even though they are not sufficient on their own.
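There is also plain arithmetic behind the engineers’ surprise. A fifty-year flood has a 1-in-50 chance of occurring in any given year, so the chance of seeing at least one inside a shorter window is higher than the name suggests:

```python
# Why "fifty-year floods" show up more often than their name suggests:
# a T-year flood has a 1/T chance of occurring in any given year, so the
# probability of at least one in n years is 1 - (1 - 1/T)**n.
def prob_at_least_one(return_period: int, span: int) -> float:
    return 1.0 - (1.0 - 1.0 / return_period) ** span

print(f"{prob_at_least_one(50, 10):.1%}")  # ~18.3% chance in any given decade
print(f"{prob_at_least_one(50, 50):.1%}")  # ~63.6% chance over 50 years
```

And that is before accounting for the uneven local geography that Orsi describes, which makes particular basins far riskier than the regional average implies.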

Week 6: MoMA’s “Inventing Abstraction” Network Diagram

[Image: Inventing Abstraction interactive network diagram]

http://www.moma.org/interactives/exhibitions/2012/inventingabstraction/?page=home

This JPEG is the static version of an interactive network diagram made for the website of MoMA’s Inventing Abstraction exhibition, which ran from December 2012 to April 2013. It was made by Paul Ingram and Mitali Banerjee, a professor and a doctoral candidate, respectively, at Columbia Business School, in collaboration with the exhibition’s curatorial and design team. “Vectors connect individuals whose acquaintance with one another…could be documented,” states the description on the website, i.e., their relationship was explicit. Each node represents an artist whose work was in the exhibition, and the nodes are arranged more or less geographically. The names marked in red have 24 or more connections. The interactive version on the website serves as a navigational device; clicking on a name zooms in on that artist and their network of relationships. Simultaneously, the webpage presents thumbnails of artworks in the exhibit and, for the more highly connected artists marked in red, a short biography.
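The diagram’s basic logic is easy to sketch with networkx; the names and edges below are placeholders, not MoMA’s actual data:

```python
# A sketch of the diagram's logic in networkx: artists are nodes, documented
# acquaintances are edges, and names with 24+ connections get flagged.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Kandinsky", "Marc"),
    ("Kandinsky", "Delaunay-Terk"),
    ("Delaunay-Terk", "Goncharova"),
    ("Goncharova", "Larionov"),
    # ...one edge per documented acquaintance
])

THRESHOLD = 24  # MoMA's cutoff for marking a name in red
marked_in_red = [name for name, degree in G.degree() if degree >= THRESHOLD]
```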

While this network diagram was not made for research purposes, it still raises issues that face the use of network analysis in the digital humanities. One of the dangers that Scott Weingart discusses in his blog post “Demystifying Networks” is the reduction of data imposed by current limitations in network analysis algorithms. In order to keep the network manageable for software and sparse enough to comprehend visually, all of the possible relationships that can exist between artists are reduced to the vague concept of “acquaintance.” Unfortunately, the website never defines what it means by that word. Did they meet in person? Did they carry on a correspondence? Did they work together or otherwise exchange artistic ideas? None of these different types of relationships are distinguished in this diagram. Furthermore, the website does not explain why 24 connections was chosen as the criterion for marking an artist in red. What does having at least that many connections signify, if anything? Does it mean that their ideas were more influential? That they were more extroverted? That they traveled more? MoMA has produced a very provocative data visualization, but it would have been more revealing with a little more documentation.
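One way to preserve the distinctions that “acquaintance” collapses would be to record a relationship type on every edge, as in this hypothetical sketch:

```python
# Keeping what "acquaintance" collapses: record a relationship type on
# every edge. A MultiGraph allows several relationships per pair of
# artists. Names and "kind" values are hypothetical.
import networkx as nx

G = nx.MultiGraph()
G.add_edge("Kandinsky", "Marc", kind="correspondence")
G.add_edge("Kandinsky", "Marc", kind="collaboration")
G.add_edge("Delaunay-Terk", "Goncharova", kind="met in person")

# Analyses can then filter to one kind of relationship at a time.
correspondents = [(u, v) for u, v, data in G.edges(data=True)
                  if data["kind"] == "correspondence"]
print(correspondents)  # [('Kandinsky', 'Marc')]
```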

That said, this diagram is interesting because it highlights some artists who may not be as well known as others. Three artists are marked in red whom I would not have thought were highly connected: Sonia Delaunay-Terk, Natalia Goncharova, and Mikhail Larionov. The significance of these lesser-known Russian and women artists is one of the many questions that this network diagram raises.

Week 5: Moodstats, a Realist Data Visualization

[Screenshots: Moodstats interface and charts]

http://www.cubancouncil.com/work/project/moodstats

Moodstats is a program made by Toke Nygaard and Per Jørgensen in 2000. It’s a personal diary where you can record notes on the day’s events and the changes in your mood, creativity, stress, and three more customizable variables. The data can be displayed either as a line graph or a stacked column chart.

The data visualizations that this application produces are good examples of what Johanna Drucker would characterize as observer-independent, realist visualizations of qualitative experience. It is analogous to her example of a “standard map with a nuanced symbol set,” except the symbol set isn’t even nuanced. The programmers did not see data as capta when they made this program, and the structure of the visualizations is not based on interpretive, co-dependent relations of observer and phenomena. Each variable is rated on a scale from 1 to 10, a reductivist approach that eliminates uncertainty and ambiguity, and the axis for time is linear and homogeneous. The parametrization of the data is scientific, and the graphic design plays up the cuteness, and the silliness, of representing your mood with such precision and certainty. Still, the rhetoric of objectivity is attractive when you consider the possibility of finding patterns in your mood and identifying the triggers of mood swings. Sometimes you want a detached observer or method of observation to get a more authoritative perspective on emotional matters.
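For comparison, the kind of realist chart Moodstats produces takes only a few lines of matplotlib; the variables and values below are made up:

```python
# A minimal sketch of the realist, stacked-column style of chart that
# Moodstats produces. Sample values are hypothetical.
import matplotlib.pyplot as plt

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
mood = [7, 5, 6, 8, 4]
creativity = [5, 6, 4, 7, 5]
stress = [3, 6, 5, 2, 8]

fig, ax = plt.subplots()
ax.bar(days, mood, label="mood")
ax.bar(days, creativity, bottom=mood, label="creativity")
bottoms = [m + c for m, c in zip(mood, creativity)]
ax.bar(days, stress, bottom=bottoms, label="stress")
ax.set_ylabel("rating (1-10)")  # the reductive 1-to-10 scale
ax.legend()
plt.show()
```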

Looking at the experimental graphical expressions of interpretation in Drucker’s article, it is clear that they would be much more effective at representing a person’s daily mood changes. Crises and their self-conscious interpretation would be more apparent and revealing if they were shown “as a factor of X.” The subjective experience of time could also be represented by expanding, contracting, and warping the timeline. However, it might be hard for a computer program to produce graphical expressions automatically. Producing them seems to involve much more thought and work than regular graphs and charts (which isn’t to say that data visualizations are easy to make). The word “expression” implies human subjectivity, and the visualizations in the article look sophisticated and, for lack of a better term, artistic. Can graphical expressions be made in as practical a manner as realist data visualizations?
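As a toy experiment in Drucker’s direction, though, even a standard plotting library can warp the time axis so that subjectively longer moments take up more space; the events and “felt” weights here are invented:

```python
# A toy version of one of Drucker's moves: warp the time axis so that
# subjectively longer moments take up more space. Events and weights
# are invented for illustration.
import matplotlib.pyplot as plt

events = ["wake", "commute", "crisis", "dinner", "sleep"]
felt_weight = [0.5, 0.8, 3.0, 1.0, 0.4]  # how long each stretch *felt*

# Cumulative felt time becomes the x position, stretching the crisis.
x = [0.0]
for w in felt_weight:
    x.append(x[-1] + w)

fig, ax = plt.subplots()
for label, left, right in zip(events, x[:-1], x[1:]):
    ax.axvspan(left, right, alpha=0.2)
    ax.text((left + right) / 2, 0.5, label, ha="center")
ax.set_xlabel("subjective time")
ax.set_yticks([])
plt.show()
```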

At the end of her article, Drucker refers to Edward Tufte’s book The Visual Display of Quantitative Information and contrasts that model of information design with the humanities approach to graphical expression that she proposes. To put this program into context, Tufte’s book was originally published in 1983 and became popular around the turn of the century. Maybe the design of Moodstats would have been different if it had been made after Drucker’s article was published.

Week 4: Wardrobes and Relational Databases

[Chart: example wardrobe plan from New Image for Men]

This chart is an example wardrobe plan from the book New Image for Men: Color and Wardrobe by Marge Swenson and Gerrie Pinckney (published in 1983). It shows all of the pieces of an imaginary wardrobe and puts them into categories according to level of formality and type of clothing. There are pieces for business, dress, and casual wear, including suits, sport coats, shirts, pants, ties, jewelry, belts, shoes, socks, sweaters, and jackets/coats. Each piece has attributes such as color, pattern, and material. There are also a number of pieces for each type of clothing at each level of formality, such as five shirts and three ties that match a suit for dress wear. This plan results in a flexible, efficient wardrobe that is easy to make outfits with and that avoids extraneous or redundant pieces cluttering up your closet.

In the chapter “Databases,” Stephen Ramsay discusses relational databases, which are built on the idea that a database can be “a set of relations.” If all of the outfits that you can put together comprise a database, then a wardrobe plan is analogous to a database design. A simple, old-fashioned tabular database would mean that each piece in an outfit is used only for that outfit. If you had 18 outfits that included black Louboutin pumps, you would actually have 18 pairs instead of one, which is an improbable situation. A relational database describes the reality of wardrobes much better, since a single pair of shoes can be used in many outfits (what Ramsay calls a one-to-many or 1:M relationship), thereby minimizing redundancy. In a relational database, each outfit would be a record or entity with its own primary key, and the black pumps and the other pieces in the various categories (tables) would be referred to via foreign keys that can be reused in other records. Furthermore, the ways that pieces are mixed and matched, indicated here by the horizontal lines that separate the levels of formality, would be described by entity relationship diagrams. However, as Ramsay notes in regard to real-world data, actual wardrobes are more complex than this idealized wardrobe plan.
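The analogy can be made concrete with a few lines of SQL run through Python’s sqlite3 module; the tables below are hypothetical and reduced to just shoes and shirts:

```python
# A minimal sqlite3 sketch of the wardrobe-as-relational-database analogy.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE shoes  (id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE shirts (id INTEGER PRIMARY KEY, description TEXT);
-- Each outfit row reuses pieces via foreign keys: one pair of pumps,
-- many outfits, with no duplicated rows.
CREATE TABLE outfits (
    id       INTEGER PRIMARY KEY,
    shoe_id  INTEGER REFERENCES shoes(id),
    shirt_id INTEGER REFERENCES shirts(id)
);
""")
cur.execute("INSERT INTO shoes  VALUES (1, 'black Louboutin pumps')")
cur.execute("INSERT INTO shirts VALUES (1, 'white silk blouse')")
cur.execute("INSERT INTO shirts VALUES (2, 'striped oxford')")
# Two outfits, one pair of shoes: the pumps exist in exactly one row.
cur.executemany("INSERT INTO outfits VALUES (?, ?, ?)",
                [(1, 1, 1), (2, 1, 2)])
rows = cur.execute("""
    SELECT outfits.id, shoes.description, shirts.description
    FROM outfits
    JOIN shoes  ON shoes.id  = outfits.shoe_id
    JOIN shirts ON shirts.id = outfits.shirt_id
""").fetchall()
print(rows)
```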

Just like databases, a person’s wardrobe reveals things about them. The particular items of clothing they buy and the way they put together outfits can give clues about tangible and intangible characteristics such as body type, “color season,” personality, age, occupation, socioeconomic background, etc. Likewise, what data goes into a database, what is left out, and how the database is designed reveal the ideology of the people who made it.

Week 3: Course Evaluation Forms and Mismatched Ontologies

[Screencap: EIP course evaluation webpage]

This is a screencap of the webpage for the Evaluation of Instruction Program (EIP) course evaluations. Toward the end of each quarter, students at UCLA receive emails asking them to complete and submit these forms, though instructors give the impression that few students actually fill them out. Instructors always stress the importance of this feedback mechanism for improving the quality of instruction and better serving students.

This source is related to the discussion in “Local-Global: Reconciling Mismatched Ontologies in Development Information Systems” by Jessica Seddon Wallack and Ramesh Srinivasan. It illustrates how the school collects data on the academic activities of community members, in this case faculty and students, in order to make better decisions about policy. However, this evaluation form also illustrates the phenomenon of mismatched ontologies between students and the administration. Whereas common academic problems for students may involve thoughts like “I don’t really get what this assignment is asking me to do and how I’m supposed to do it,” “My reading comprehension and note-taking skills need work,” “I don’t know how I should be preparing for the exam,” or “It takes a really long time to do all of the reading and writing assignments,” this form was not designed to address such issues despite its role in improving the quality of education. As far as I know, there is currently no mechanism directed at collecting data on improving academic services, and unsurprisingly there is no Academic Skills Center that provides formal training in effective learning skills. This evaluation form demonstrates how the administration’s meta ontology shapes how it attempts to address community problems but has difficulty taking local knowledge into account, and therefore incurs information loss.

Wallack and Srinivasan make several recommendations for how meta ontologies can incorporate local knowledge. The first is to develop collaborative and inclusive ontologies. This online form does not provide for student input on which questions are asked and which ones are the most important, though the technological capability does exist. The second recommendation is to allow the community to provide feedback on the data that the administration has collected, and to help them make good critiques of that data through education and appropriate communication strategies. Currently, the contents of these evaluations are confidential, and students remain unaware of how other students felt about a course (outside of the usual gossip) or, more importantly, of how the administration understands the data. Finally, the third recommendation is to provide alternative means of communication and to decentralize decision-making to more local levels. The final comments section on the form allows for some flexibility regarding the former, though a free-form comment may not be relevant to the questions asked, while the latter is beyond the scope of this form entirely.

Week 2: What’s in your camera bag?

https://www.flickr.com/groups/camerabag/

This is a link to the “What’s in your camera bag?” group on Flickr. Members can join and add a photograph of their camera bag with its contents laid out around it: the cameras, lenses, and other photographic equipment they own, their favorite film types, their cellphone, laptop, tablet, notebooks, and assorted personal effects. It’s the photographer’s equivalent of “What’s in your bag/purse?”

Like members of other groups on Flickr, they are essentially building their own collection of information objects. In addition to the content, they create metadata in the form of titles, descriptions, and tags. Each person writes as much or as little metadata as they like, which can make searching for specific information difficult. For example, the group currently has 2,382 photographs, yet only 252 of them have the “camera+bag” tag attached. As a result of this inconsistency in tagging, searching for photos like these outside of the group’s photostream is almost completely ineffective. The titles people write are also inconsistent because, for instance, some might prefer the title “What’s in MY camera bag” over “What’s in YOUR camera bag?” Obviously, there are problems with the quality of user-generated metadata on the Internet, i.e., folksonomies, because there are no standards and no controlled vocabularies.
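The tagging problem is easy to demonstrate: an exact-match search misses most of the variants, while even crude normalization collapses them (the tag list below is invented):

```python
# A small sketch of the folksonomy problem: variant tags that mean the
# same thing defeat naive exact-match search. The tag list is invented.
from collections import Counter

raw_tags = ["camera+bag", "camerabag", "Camera Bag", "what's in my bag",
            "gear", "camera+bag", "CameraBag"]

def normalize(tag: str) -> str:
    """Collapse case, whitespace, and '+' separators into one form."""
    return "".join(ch for ch in tag.lower() if ch.isalnum())

print(raw_tags.count("camera+bag"))             # exact match finds only 2
print(Counter(normalize(t) for t in raw_tags))  # 'camerabag' bucket has 5
```

A controlled vocabulary does this work up front, at tagging time, instead of pushing the cleanup onto every searcher.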

In “Setting the Stage,” Anne Gilliland addresses the bewildering variety of metadata standards by categorizing them according to the functions of metadata. There is no one standard that works well for everybody, so Gilliland argues that by understanding the various types of standards, the attributes and characteristics of metadata, and the phases of its life cycle, information professionals will be able to choose the most appropriate standard or combination of standards for the needs of their institutions, collections, and users. Creating and maintaining metadata is extremely labor-intensive and costly, so information professionals have to choose which standard(s) will cover their needs both now and in the future. The example of inconsistent tagging and titling in photos of camera bags is admittedly trivial and does not call for high-quality metadata, nor are casual Internet users likely to spend much effort improving theirs, but it does illustrate a few of the problems that low-quality metadata can cause. High-quality metadata matters much more for museums, archives, libraries, etc., especially now that the Internet has created a larger, more diverse audience for collections and repositories. Luckily, digital tools allow for the accommodation of multiple audiences, including scholars as well as less highly trained students, teachers, and the general public.