3D Modeling Projects and Problems

Having the opportunity to create a 3D model provides a chance to take something from the world, the stuff of real life, and translate it into computable data for an operating system to create something from. As Raphael has already posted about, he, Tori, and I are part of a Digital Humanities and Art History collaborative project to model architectural sites from medieval Paris which I’ll talk a bit about here.

Snyder’s essay, “Virtual Reality for Humanities Scholarship,” lays down the basics for anyone who wants to tackle a DH project in 3D modeling and virtual reality. Her case studies of different virtual reality projects really capture what it is to work on one of these projects and highlight some of the problems that arise. Once the idea for a 3-dimensional project has been proposed, every decision after that point affects the final outcome. Even the choice of software plays a huge role in the look and feel of the final product. (It’s important to note that not all modeling software is created equal; most programs offer very similar functions, but each also has its own specialized features and carries out tasks in its own way.) The Paris modeling project uses Vectorworks, a modeling system designed with architects in mind. Vectorworks and Google’s SketchUp, a program perhaps more people are familiar with, both offer 3D modeling, but Vectorworks seems much more sophisticated in its layout and the variety of functions it offers. Perhaps this sense of sophistication comes from the fact that I’m a novice in the 3D modeling world, but it is true that Vectorworks has a large number of specialized features for its users.

A rectangular prism made on SketchUp

With SketchUp I was able to make this prism very intuitively without knowing much about the software beforehand. I used the pencil tool to create a rectangle and then selected the tool that looks like a box with an arrow pointing up to make it 3-dimensional. There aren’t too many fancy buttons here, but it gets the job done.

A rectangular prism made with Vectorworks

Vectorworks, however, is its own story. Basic functions like drawing shapes can be found in the Basic palette of tools off to the left, where I selected the rectangle tool to draw a 2D shape; I then had to extrude it to convert it into a 3D shape. At this point I know I can use the shortcut command+E to extrude, but a few months ago I spent a good amount of time hunting through all the menus to find the function that would make my shapes 3D. Looking at all the options in Vectorworks, especially compared to those in SketchUp, can seem a little overwhelming since there are so many more specialized modeling features on offer.

After the software for a virtual reality project is chosen, the researchers and modelers can begin translating the information they have into the beginnings of a 3D model. Much like in Snyder’s example of creating a virtual Florence from historical sources, the professors and researchers behind the Paris project have also compiled sources that range from primary documents, like letters, building plans, and engravings, to recent scholarly research and photographs of surviving elements. Snyder hints at the complexities that even this beginning stage presents, but it is not until you actually have to sift through repositories of text and images to begin creating the foundation of a building that the large scope of the project becomes clear.

I could go on for days about the importance of small details behind this DH project, and I’m sure Tori and Raphael could too, but there is so much more beyond this project that I don’t have the space to discuss, so I’ll end with this big-picture statement about the Paris project: one of the outcomes is to create models that will allow us to test the theories of Gothic historians, to see if they stand up to the test of recreation, and conversely, if our recreations stand up to the test of scholarly research.

Interfaces and Art History

After reading Kirschenbaum’s article in A Companion to Digital Humanities, my eye was caught by another article in the collection about art history. This section of Schreibman, Siemens, and Unsworth’s larger text discusses the progression of art history’s visual records from a material-based to a technology-based repository. This struck a chord with me because art history seems to be one of the fields that particularly resists being digitized, with part of that resistance coming from the difficulty of transitioning all of its physical resources into digital ones that can be accessed remotely.

In keeping with the interface design aspect of this week’s readings, I will present some of the art history and museum related databases and websites that I interact with – namely The Museum System and the Getty’s Union List of Artist Names. The Museum System (TMS for short) is a database used to store records of the artwork and artist information held within a museum’s collection. The edition of the software that I use is not as new-looking as the most recently available versions and has a Windows ’98 sort of feel to it.

The older version of TMS
The most recent version of TMS

With regards to functionality, the outdatedness of its appearance changes almost nothing; everything still operates as needed. But in this age of flat neon colors, as with the most recent iPhone and Android updates, it is a little startling to revert to beige and blue color schemes. Even though it is not as aesthetically pleasing as other interfaces, it still loosely follows Shneiderman’s Eight Golden Rules of Interface Design. TMS could be made less cumbersome (and perhaps has been in more recent editions), but its interface quirks are within reason, especially when taking into account the amount of information it processes. Museums use TMS to keep extremely detailed records of their entire permanent collection, and of every loaned work that has passed through the museum, so there is an enormous amount of information stored in the database.

Another art history related digital repository is the Getty’s Union List of Artist Names (ULAN), which is related to the Getty Art & Architecture Thesaurus we have learned about for controlled vocabularies. The ULAN website also follows the golden rules of interface design and has a more modern appearance than TMS. The two products offer very different services within art historical and museum studies contexts, so I will leave the comparisons here, but I find it interesting to look at which sites and programs follow trends in their surface design, and whether usability is always prioritized over aesthetics for these kinds of websites and programs. From what I’ve seen, base functionality (in the sense of presenting information, not necessarily the ease of finding this information) wins out over the sensory elements of the interface.

Where did MapQuest Go?

Access to step-by-step directions from point A to point B has gone from being a luxury to a necessity since the early 2000s. When I was younger, I remember being entrusted with our family Thomas Guide, an old and quite thick booklet of maps we kept in the car to navigate freeways and streets on long trips. As I got older, we stopped carrying the Thomas Guide around and transitioned to using the internet to map out our journeys in the most efficient way possible. When planning these trips, MapQuest was our go-to route planning site.

I think my family actually owned a copy of this 1994 LA County Thomas Guide. Now it’s old enough to be considered vintage and is available for purchase in an Etsy shop.

As Jim Detwiler writes in “Introduction to Web Mapping,” MapQuest helped break new ground in the age of digital mapping technology. It was part of the first generation of sites in the web mapping industry, and even when other websites started to outpace it, the site continued to update its services. When Google made its map interface public, so did MapQuest, and from there a whole new generation of custom mapping applications was born.

I recall using MapQuest as my default mapping website up until around 2009 or so, when I decided Google Maps offered a more convenient service. (This convenience was mostly due to the fact that Google was the homepage for my family computer, and it was far easier to click on the “Maps” tab than it was to type MapQuest’s URL.) A 2009 Business Insider article addresses MapQuest’s decline at this time and comments on the increasing site space being devoted to advertisements, while Google Maps kept its interface clean and to the point. The article also goes on to give MapQuest some advice for turning the company around and returning to its role as a leading online map service, a key point being that the presentation of the site is important (something we can all relate to now after learning about the importance of data visualization techniques).

It’s been a few years since 2009, so I wanted to go back and see how MapQuest is doing and how the site has evolved over time. The layout of the site is very similar to the (now) older Google Maps layout, with directions and options on the left side and a map on the right. There was one momentary flaw in MapQuest’s direction system: when I entered “UCLA Young Research Library” on both sites, I was met with an unexpected surprise. Google Maps gave me a direct image of UCLA’s campus with YRL pinpointed on it, but MapQuest got confused and gave me “related” locations in Pennsylvania. Only 2,400 miles off.

Google Maps search result

MapQuest search result

Week 6: Networks and Friendship Paradox

I started this week’s readings with Kieran Healy’s post, “Using Metadata to Find Paul Revere”. His analysis of personal metadata got me thinking about what constitutes a breach of privacy and when a governing entity goes too far in looking through our personal lives in the name of security. However, I won’t dedicate my post to alarmist ideas or writing on the need to respect privacy any more than I already have; instead I will write about an interesting aspect of social networks that I heard about a while ago: the friendship paradox.

I originally heard about this as a short story on the daily news; it does not really have much to do with cutting-edge news, but it caught my attention nevertheless. The idea behind the friendship paradox is that, on average, a person has fewer friends than their friends have. This tallying of friends is most easily done on social networking sites, where friend lists immediately quantify the number of social network friends in one’s life. It is fairly easy to log onto social media sites and take a quick look to see if this is true: some of my Facebook friends have over one thousand friends each, easily outstripping me; my Instagram follower count is much lower than those of the people who follow me; and a general look over Twitter shows that most people follow at least one celebrity or relatively famous account with thousands or even millions more followers than their own. There is actually an article available on JSTOR from the American Journal of Sociology dedicated to this phenomenon if you’re interested in reading more about it! If you don’t want to spend quite as much time, there is a helpful Wikipedia article that presents the paradox more succinctly and seems reliable (as far as I can tell).

The friendship paradox is loosely related to our foray into network analysis and could provide interesting data if analyzed the way Healy conducted his research, or through other methods of network analysis. For example, with a sampling of one’s friends from a social media site, it is possible to see if there are any other correlational elements that connect the friends with more friends to each other. Perhaps there is a relation between “popularity” on social media and frequent posting of statuses or photos, or maybe serial posting has the opposite effect and reduces the number of friends. There may even be a specific personality type that attracts more friends, one that could become apparent when examining people’s “likes” on Facebook or their hashtag patterns on other social media platforms.
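To make the paradox a little more concrete, here is a rough sketch in Python of how one might check it on a small simulated network. Everything here is made up: the network size, and the simplifying assumption that newcomers tend to befriend already-popular people (which is what produces the celebrity-like "hubs" mentioned above).

```python
import random

random.seed(42)

# Build a made-up friendship network of 500 people. Each newcomer befriends
# up to 3 existing people, weighted by current popularity, so a few hubs
# emerge -- the ingredient that makes the friendship paradox visible.
n = 500
friends = {person: set() for person in range(n)}
for newcomer in range(1, n):
    pool = [p for p in range(newcomer) for _ in range(len(friends[p]) + 1)]
    for choice in random.sample(pool, min(3, len(set(pool)))):
        friends[newcomer].add(choice)
        friends[choice].add(newcomer)

# Compare each person's own friend count with the average friend count
# of that person's friends.
my_avg = sum(len(f) for f in friends.values()) / n
friends_avg = sum(
    sum(len(friends[other]) for other in f) / len(f)
    for f in friends.values()
) / n

print(f"average number of friends:          {my_avg:.2f}")
print(f"average friend count of my friends: {friends_avg:.2f}")
```

On any network that is not perfectly uniform, the second number comes out higher than the first: your friends, on average, have more friends than you do.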

Week 5: Andrew Smith and Text Analysis

Andrew Smith’s “The Promise of Digital Humanities” brings up some very interesting points on the impact that digital scholarship and the increasing incorporation of technology have on the humanities fields. I tend to think of digital humanities as a field that allows traditional humanities research to expand and become more accessible through technology; however, not all digital humanities initiatives accomplish this end. Smith makes the valid observation that some projects use digital components to re-state the obvious and then applaud themselves for incorporating technology to support their findings. Apart from this pitfall, I think that the use of data mining to examine the veracity of humanities research opens up exciting avenues for scholarly thought. Because analysis tools are becoming more widespread, there is an added incentive to engage with the data surrounding the trend being examined, rather than ignoring it and furthering a specific thesis with hand-picked anecdotes. Analyzing text in this quantitative manner creates new opportunities for researching and promoting ideas across the humanities and even into the scientific fields.

After finishing this commentary, I was drawn to a related linked article of his, “So what’s text analysis actually good for?” From Smith’s first article, it seems that text analysis can open a lot of doors for innovative questions and methods, but that it’s taking a little longer for researchers to use these resources to their fullest. The video (embedded below and also on the article page) gives some insight into what current scholarship is doing to integrate digital tools. This research follows traditional lines of humanities inquiry but uses search tools to facilitate the process, keeping the researcher from getting bogged down in the vast number of sources to sift through.

This second article made me realize that using digital resources for a project is only the first step in making a real difference in digital scholarship. Digital humanities research is most effective when technology is used to significantly advance a realm of scholarship; the maximum impact is made with innovative ideas combined with effective use of digital resources. This is something we should all keep in mind as we continue our research for this course and for our future endeavors; incorporating technology should support our end goal, not be the end goal.


Infographics and Data Visualization

Many of us are familiar with the easy knowledge acquisition that infographics offer. I know that when I encounter a hefty article online, I look to see if there are any visual aids to break up the monotony of reading through all that information or, better yet, an image that digests and summarizes the main point of the article.

Reading through the Data + Design online book, I began to realize just how much work goes into constructing an appealing infographic. In addition to actually making the graphic, there is so much thought that goes into how to effectively collect the data to display. The book dedicates pages, and even chapters, to describing some of the best modes of data collection for research; researchers have to be careful with the language they choose for questions and the format in which they present the questions and possible answers. Once the information has been collected, there is a long process of data cleaning and prepping (who knew data needed so much attention before going out into the big world!). This cleaning involves sorting out the things that are relevant to the research question or goal and figuring out a way to organize them nicely so the data can become the best visualization it can be.
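As a tiny illustration of what that cleaning step can look like in practice, here is a sketch in Python. The survey answers and the rule for what counts as a non-answer are my own made-up assumptions, not examples from the book:

```python
# Made-up survey responses: inconsistent capitalization, stray whitespace,
# blanks, and non-answers all need handling before the data can be charted.
raw_responses = ["  Yes", "yes ", "NO", "n/a", "", "No", "YES"]

def clean_and_tally(responses):
    """Normalize free-text answers and count how often each one appears."""
    counts = {}
    for response in responses:
        value = response.strip().lower()   # trim whitespace, unify case
        if value in ("", "n/a"):           # drop blanks and non-answers
            continue
        counts[value] = counts.get(value, 0) + 1
    return counts

print(clean_and_tally(raw_responses))  # {'yes': 3, 'no': 2}
```

Seven messy strings collapse into two clean categories, and only after this step does a bar chart of the answers mean anything.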

Browsing through this lengthy process gave me so much more respect for the work that anonymous internet people do in making these easy-to-use, easily disseminated informative images. One of my favorite sites with some really great infographics and visualized data is Brain Pickings. (The creator, Maria Popova, actually got a mention in the Data + Design foreword!) What I love about this site is that Popova does so much research, usually on literary or art related topics. In many of her articles she includes images from the books she is discussing, or hand-drawn visualizations of quotes from famous literary folk.


Reading a quote from someone is one thing, but having a way to interact with the author’s ideas in a new form gives it a level of engagement that continues to fascinate me and bring me back to their words.


I recently came across an article on the Brain Pickings site about infographics and some key principles for keeping a visualization interesting and trustworthy. This article features designs that compile information on jazz musicians from the 1920s, junk emails, and the London Tube, which goes to show that data visualization does not always have to come in graph form or present information on groundbreaking sociopolitical topics. As with any presentation of data, though, there is the possibility that the data is incorrect or skewed in some way, so that is always a thought to keep in mind. With that said, here are two of my favorite images from the article, which organize two very different types of information – statistics about the world’s makeup and the sleep habits of famous writers.



Phrenology and Classification

Bowker and Star continue to expand upon a topic we began discussing last week: classification. They describe classification as a “spatial, temporal, or spatio-temporal segmentation of the world”. In lecture we went over many classification and organizational systems that are currently in use, from library classifications to those used on social media. These exist to make data retrieval easier; they expedite retrieval by categorizing information into relevant subgroups. We have been discussing so many useful and innovative systems of classifying observations and information that I want to bring in a very specific, and now obsolete, classification system based on the appearance of the skull: phrenology.

Phrenology hails back to late 18th-century Germany, springing from the observations of the physician Franz Joseph Gall. Now considered a pseudo-science, phrenology was invented to link specific physiological features of the skull to specific characteristics or faculties. Gall identified 27 faculties, ranging from reproductive instincts and the love of one’s offspring to murderous instincts and metaphysics. Gall’s mapping of specific traits onto the physical contours of the cranium is an example of how classifying something, or the fact that it has already been classified, does not necessarily lend truthfulness to its subject. With that said, the phrenological classification can be seen as a precursor to psychology and its physiological correlations.


In the modern era, we think of phrenology as a funny side note in medical and scientific history. It seems laughable now that something as arbitrary, and even changeable, as the lumps and indentations on a head could denote specific personalities or behavioral tendencies. Had phrenology been better received, or backed by more scientific evidence, this system could have become a standardized method of evaluating personalities. These scientific shortcomings are tied to its shortcomings as a classification system as Bowker and Star describe one. It is interesting to me that even though Bowker and Star lay down three qualities a classification system should have (consistent, unique classificatory principles; mutually exclusive categories; and a complete system), these qualities only exist in an ideal, theoretical setting. Phrenological studies identify different faculties – categories of behavior and personality – but these are difficult to quantify on their own; it becomes difficult to say with certainty that a specific node on the head is of a size that denotes an aptitude for education. It is this inability to definitively tie a bump or indentation of a given size to a personality trait, combined with pseudo-scientific reasoning (the biggest reason of all), that ultimately makes phrenology an idea of the past.


You can learn more about phrenology from books that UCLA currently has in its collection (and you get to see the Library of Congress classification system in action!). The Biomed History and Special Collections Cage has a few really old books on the subject! See here and here

Week 2: Library of Congress

Of the classification structures that Sperberg-McQueen introduces in his article, his discussion of the different systems used in cataloguing books immediately made me think back to some of the systems I have encountered. Sperberg-McQueen gives examples of the modes of classifying and separating books under the Dewey Decimal System, a system I am sure many of us encountered in school or public libraries. The classification system that UCLA employs, as do many other large libraries, is the Library of Congress system. Someone has already discussed the Dewey Decimal Classification, so I’ll devote my blog post to a bit of the Library of Congress Classification.

The Library of Congress divides works into broad categories that are each ascribed a letter of the alphabet. From there, the primary letter of classification can be followed by another letter that denotes a specific field in that genre of writing. For example, the class ‘N’ refers to fine arts, and a subclass ‘NB’ denotes sculpture while the subclass ‘NK’ refers to decorative arts.

Library of Congress, Class N

This first letter or pair of letters can then be followed directly by a number that further narrows the scope of the subject. Taking the decorative arts subclass ‘NK’ as an example, a book classified as NK 7310 falls within the ‘other art and art industries’ section of decorative arts; more specifically, it falls into the ‘metalwork’ category.

Library of Congress, Subclass NK

At this point any further letters or numbers that follow will specifically reference the work’s title, niche within the field of reference, author, year of publication, edition number, and sometimes whether the work has been translated. To continue the previous example, a book categorized as NK 7310.3 A78 R3813 1985 refers to a book on Art Deco jewelry of the 20th century, written by Sylvie Raulet (hence the R in the classification), and published in 1985. For a more complete overview of how the Library of Congress organizes works see here.
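Because the call number builds up in such a regular way – letters, then a number, then Cutter codes, then a year – the example above can even be pulled apart programmatically. Here is a rough Python sketch; the pattern only handles the simple shape discussed here (real Library of Congress call numbers have many more edge cases), and the function name is my own invention:

```python
import re

# Matches the simple call-number shape discussed above: subclass letters,
# a subject number, optional Cutter codes, and an optional year.
CALL_NUMBER = re.compile(
    r"^(?P<subclass>[A-Z]{1,3})\s*"     # e.g. NK (decorative arts)
    r"(?P<number>\d+(?:\.\d+)?)\s*"     # e.g. 7310.3 (narrows the subject)
    r"(?P<cutters>(?:[A-Z]\d+\s*)*)"    # e.g. A78 R3813 (Cutter codes)
    r"(?P<year>\d{4})?$"                # e.g. 1985 (publication year)
)

def parse_call_number(call_number):
    """Split a simple LC call number into its named parts, or return None."""
    match = CALL_NUMBER.match(call_number.strip())
    if match is None:
        return None
    parts = match.groupdict()
    parts["cutters"] = parts["cutters"].split()
    return parts

print(parse_call_number("NK 7310.3 A78 R3813 1985"))
```

Running this on the Raulet example prints each piece under its own label – subclass ‘NK’, subject number ‘7310.3’, Cutter codes ‘A78’ and ‘R3813’, and the year 1985 – which mirrors exactly how a librarian would read the spine label.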

In an earlier blog post, another classmate points out the problem of classification systems filtering data, which can unintentionally lead to a loss of information (as touched upon by the article on the Haitian Declaration of Independence). Some books and sources are not so easily put into a single category because they encompass so much information. The Library of Congress has a very good system in place for classifying materials, but it is hard to create something entirely foolproof. This leads me to wonder: who or what has the final say in classifying a book? Is it the job of a group of humans, or is there some sort of database or technology in place to resolve the conflict?