Using visual simulations to tie the past and the future

In 1997, the UCLA CVRLab launched a six-year project, Visualizing Statues in the Late Antique Roman Forum, to digitally recreate the Roman Forum as it stood around 400 A.D. and to study the spatial information of Rome at the time, along with the physical movement of Romans, who considered walking through the city an important activity. The digital model included over twenty features, such as the buildings and monuments of the Roman Forum, an important center of ancient Rome. When the project was finished, it was shared on the lab’s website, not only so that anyone could easily view the digital Rome but, more importantly, to provide resources and guidance for future digital studies in archaeology and antiquities.


Upon entering the website, users can explore an interactive timemap where they can select each building or monument and view its building plan, an aerial photo tied to the timeline, or the digital model itself, which changes as the timeline cursor is moved horizontally along the top of the image. The lab also provides geospatial data, as x, y, z values, locating each structure on a grid, as well as per-building resources that document the evidence and sources behind the project.

This effort by the CVRLab has provided a strong foundation for current and future studies in digital modeling for archaeology, antiquities, and beyond, drawing on the full range of resources and technologies available to digital humanities research, from timeline-based data aggregation and secondary research materials to geospatial technology.


Note as a beginner in the field of UX

On a personal note, I could not be more excited to learn about user experience in class, especially this week.

As I noted in an earlier post, I built my experience as a (social media) marketer and brand manager with a focus on analytics and design sense. Although the companies that hire me (especially early-stage startups) may find it efficient to have a go-to person for everything from flyer design to high-level business strategy, for an entry-level job seeker and her co-workers this can make collaboration more difficult, because I would get overly picky about design details, and even programming details, with the limited knowledge I had. To ease the frustration on both sides (me vs. the designers and engineers), I started learning the basics of design software to 1) make things myself and 2) get my ideas across more efficiently. After a little over a year of slow development, going from drawing mock-ups and wireframes on paper and making flyers with Microsoft Publisher to designing websites, company logos, and mobile apps in Photoshop and brochures in InDesign, I can finally say that I feel comfortable being called a graphic designer.

Just last week, I began working on my current startup’s mobile app design and website in collaboration with an outsourced UI/UX designer, and I faced the same dilemma again: designing a product as a brand manager and product developer with little to no experience in UX design made communication complicated and inefficient, all the more so because our designer wasn’t in-house. I started interviewing friends who work as creative directors or UI/UX/graphic designers to figure out the difference between a general designer and a UX-specialized designer, and I finally understood that what I had been designing was based not only on making products look pretty but also on how users might use them in an easy, time-saving, and engaging way, which is exactly the fundamental idea of UI and UX. I also realized that my background in analytics made it easier to grasp the ultimate goal of creating a better user experience: as a marketer or web developer, you must always measure the number of visitors, the number of clicks, which SEO tactics worked, and which blog posts your audience shared or liked most, and you must minimize the bounce rate, all based on how users perceive your website, mobile app, or social media platforms.

Last weekend, in between working on the website’s landing page design, I researched careers as a creative director or communication designer. Although it seemed to be exactly what I was looking for, and I appeared to be on a right-ish track (even though there is no ‘right’ track for this job, according to successful creative directors and brand designers), I still need to learn and build experience in UI/UX, among other skill sets, hence my excitement for this week’s course materials.

As explained in Jesse James Garrett’s “The Elements of User Experience”, UX isn’t limited to websites or mobile apps (although the term became more popular with the emergence of screen devices); it has been around us this whole time, in magazines, machines, and just about anything that we as humans experience. The key is to understand the needs that exist and how to solve them efficiently with your product. UX isn’t only about the aesthetics of a product but also about its usability and interface.

[Image (Mobile Profile_Lauren): preliminary mobile app design from last week for a 3D printing startup company]

The Effect of Crowd and Cloud on GIS Technology


The development of geospatial technologies has been exploding in the government and consumer markets, creating social impacts that range from user-generated maps for mobile applications to solutions for disaster management [1]. Although until recently it was mainly specialists who used GIS to create maps in the back room, today untrained users can also routinely access sophisticated GIS via the Internet on their PCs, laptops, mobile phones, or dedicated geospatial devices. Mark Reichardt, president of the Open Geospatial Consortium (OGC), said that they’re “seeing this movement of geospatial and location-service functionality seamlessly into the business decision cycle and business tools and consumer services” and that it’s the crowd and the cloud leading that movement.

On the consumer end, GIS is often combined with the Global Positioning System, better known to us as GPS, and is heavily used by mobile applications to add maps to their services so that users can find the nearest restaurants or gas stations. Governments are also beginning to use GIS technology outside the back room, for projects such as the Pacific Disaster Center’s DisasterAware platform, which “continually monitors information feeds from meteorological and geological agencies and delivers information and alerts in real time to subscribers. Users can share analyses and situation reports and can query the underlying databases of DisasterAware, which has a presence on Twitter.” Similar solutions, such as WebMapSolutions, are built to reduce the impact of a disaster and its long-term consequences through a coordinated multi-agency response system, where relief activities include rescue, relocation, service repair, temporary shelter, and emergency health care, as well as recovery solutions focused on rehabilitation, supported by detailed pictures of the disaster, its impact, and its assessment.

Jack Dangermond, president of Esri, a geographic information system company, said, “GIS is being exposed on the Web, through browsers and through mobile phones. That’s making it more accessible and usable. It will spread through whole new audiences. Executives who want to look at sophisticated geographic information and do sophisticated geographic analysis can do so through a mobile device.” GIS today isn’t limited to specialists and engineers; the crowd is learning to use the system and apply it to user-generated content, bringing it to the consumer market, reshaping the future, bringing places closer together, and making GIS available for day-to-day use.
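
To make the consumer-side example concrete, here is a minimal sketch of a “nearest restaurant” lookup of the kind a GPS-enabled mobile app performs: it computes great-circle distances from the user’s position using the haversine formula. The place names and coordinates are made up for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical points of interest: (name, lat, lon)
restaurants = [
    ("Cafe A", 34.0522, -118.2437),
    ("Diner B", 34.0689, -118.4452),
    ("Grill C", 34.1016, -118.3267),
]

user_lat, user_lon = 34.0700, -118.4400  # the user's GPS fix

# Scan for the closest point of interest.
nearest = min(restaurants, key=lambda p: haversine_km(user_lat, user_lon, p[1], p[2]))
print(nearest[0])  # -> "Diner B"
```

A production service would index millions of points of interest spatially rather than scanning a list, but the underlying distance query is the same.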

References:

[1] New GIS Solutions for Disaster Management


Digitalizing six degrees of separation

As Kieran Healy shows in “Using Metadata to Find Paul Revere”, a network can be built from metadata consisting of nothing but 1s and 0s: start with a table whose 254 rows are members and whose seven columns are the organizations they belonged to, then multiply that matrix by its transpose to get a 254-by-254 table connecting people who shared an organization, or, in the same fashion, a 7-by-7 table connecting organizations that shared members. The same simple trick, scaled up, can create a social network that connects millions of people.

“Notice again, I beg you, what we did there. We did not start with a “social networke” as you might ordinarily think of it, where individuals are connected to other individuals. We started with a list of memberships in various organizations. But now suddenly we do have a social networke of individuals, where a tie is defined by co-membership in an organization. This is a powerful trick.” (Healy)

The difference between this project and other social networks is, as Healy notes, that he wasn’t trying to create a social network directly: he started with metadata, a list of organizations and their members, and matrix multiplication turned that membership list into a network.
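
Here is a minimal sketch of Healy’s trick in Python with NumPy, using a tiny made-up membership table in place of the 254-person, seven-organization matrix from the essay:

```python
import numpy as np

# Toy membership matrix: rows are people, columns are organizations.
# A 1 means that person belongs to that organization. All names are
# made up; Healy's matrix is 254 people by 7 organizations.
people = ["Adams", "Revere", "Warren", "Church"]
groups = ["Lodge", "Caucus", "TeaParty"]
M = np.array([
    [1, 1, 0],  # Adams
    [1, 0, 1],  # Revere
    [0, 1, 1],  # Warren
    [1, 0, 0],  # Church
])

# Person-to-person network: entry (i, j) counts the organizations
# persons i and j share (254 x 254 in the essay, 4 x 4 here).
person_net = M @ M.T

# Organization-to-organization network: entry (i, j) counts the
# members organizations i and j share (7 x 7 in the essay, 3 x 3 here).
group_net = M.T @ M

print(people[0], "and", people[1], "share", person_net[0, 1], "organization(s)")
print(group_net)
```

The diagonal of the person-to-person matrix counts each person’s memberships; the off-diagonal entries are the ties that define the network.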

This whole concept of connecting people through common denominators, from family and mutual friends to shared organizations, became a sensation when “six degrees of separation” was first publicized. The theory that everyone is connected to anyone else in the world through six or fewer steps was first explored in 1929 by the Hungarian author Frigyes Karinthy. As the theory continued to develop, the Columbia professor Duncan Watts recreated the concept in the digital world in 2001, with an experiment in which participants attempted to relay a message to a randomly assigned target through chains of acquaintances; the average number of intermediaries was six.

“I read somewhere that everybody on this planet is separated by only six other people. Six degrees of separation between us and everyone else on this planet. The President of the United States, a gondolier in Venice, just fill in the names. I find it A) extremely comforting that we’re so close, and B) like Chinese water torture that we’re so close because you have to find the right six people to make the right connection… I am bound to everyone on this planet by a trail of six people.” (Six Degrees of Separation Website)

The theory has been widely popular, especially after the actor Kevin Bacon launched SixDegrees.org, inspired by the game that connects him to virtually any actor or actress in the world; and the same concept has been used on many social media platforms, from Facebook (mutual friends) and LinkedIn (nth-degree connections) to Twitter (follow suggestions).

This concept of building networks from matrices of metadata is shrinking the world and redefining the “small world” through science-backed digitization, and it will be fascinating to see where the technique is applied beyond mere personal connections: forming new types of networking systems, bringing professionals together to collaborate, and creating even bigger, unimagined impacts.

Can digital humanities change history?

Andrew Smith’s article “The Promise of Digital Humanities” starts with a doubtful question: sure, data mining through machine analysis of text can potentially close the gap between the humanities and the hard sciences by “allowing us to subject historical texts to quantitative analysis”, but can it actually extract impactful information? The criticism is rooted in the large investments that have been made in the digital humanities technology that makes such data mining possible, and in the unsurprising, “we already knew that” results of research done at multiple universities. However, among the mostly unremarkable findings, some research projects do manage to uncover information that “fundamentally undermines the scholarly consensus about a particular history topic”, and William Turkel’s Data Mining with Criminal Intent project was just that.

This umbrella project, which involved putting 127 million words into a database for data mining, included the Old Bailey project, which digitized and transcribed the records of 198,000 trials held between 1675 and 1913 at the Old Bailey, the central criminal court in London. The result was a surprising pattern that historians had never identified:

  • There was an unusual increase in the number of guilty pleas and very short trials beginning around 1825 (by 1850, one-third of all cases involved guilty pleas)
  • In the 1700s there were nearly equal numbers of male and female defendants, but by the 1800s men outnumbered women by nearly 10 to 1

These findings contradict the general historical understanding that the mid-1700s were “the turning point in the development of the modern adversarial system of justice in England and Colonial America, with defense lawyers and prosecutors facing off in court”.
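
To make this kind of quantitative reading concrete, here is a minimal sketch of how one might count the share of guilty pleas per decade once trial records have been parsed; the record format and the data are hypothetical, not the Old Bailey project’s actual schema:

```python
from collections import Counter

# Hypothetical parsed trial records: (year, plea) pairs.
trials = [
    (1823, "not_guilty"), (1826, "guilty"), (1848, "guilty"),
    (1850, "guilty"), (1851, "not_guilty"), (1852, "guilty"),
]

# Tally guilty pleas and totals per decade to expose a post-1825 rise.
guilty = Counter()
totals = Counter()
for year, plea in trials:
    decade = (year // 10) * 10
    totals[decade] += 1
    if plea == "guilty":
        guilty[decade] += 1

for decade in sorted(totals):
    share = guilty[decade] / totals[decade]
    print(f"{decade}s: {share:.0%} guilty pleas")
```

Run over 198,000 real trials instead of six toy records, a count this simple is exactly the sort of pattern no close reader could have tallied by hand.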

While Turkel’s project supports the digital humanities researchers’ hypothesis that data mining through machine text analysis is the key to re-examining history for more data-backed findings, my conclusion is that these are the projects where the fundamentals of research play a critical role: creating a hypothesis that provides value to scholars and society; doing extensive secondary research on the topic, the thesis, and its possible outcomes; and creating specific guidelines for performing the project efficiently, to minimize the cost in time, effort, and especially funding if the hypothesis is proven wrong. The growing field of digital humanities needs more exemplary projects like the Old Bailey project to attract the attention and funding that support it, and each such project can make a huge impact on the field.


Why do we visualize data?

Here’s the background: I am an art history major whose hobby is collecting interior design and craft project pins on Pinterest, as well as a marketing and branding enthusiast who holds certificates in market research and in marketing with a concentration in social media and web analytics. I am highly design- and user-experience-oriented as well as analytical, and I love to teach and explain concepts visually. That has led me to become an aspiring brand designer and creative director whose forte is a strategic and analytical background. That being said, I am not a designer, or at least not yet. Working as a marketing strategist and brand manager at a number of startups with great ideas and a lack of manpower, I’ve taught myself how to use Photoshop, Illustrator, InDesign, Muse, Final Cut Pro, iMovie, and even Microsoft Office Publisher to turn the images and concepts vivid in my head into tangible work. It takes a great deal of time and frustration for a strategist to deal with highly detail-oriented design work, and that’s why Data + Design grabbed my attention like nothing else.


The concept of learning data visualization is fascinating. With an ever-growing amount of data, there is a need to fill the gap between collecting and analyzing data and explaining and creating results with it. As complicated as data can be, backed by statistical sources full of numbers and graphs, the most efficient way to explain it has been visualization of the data and information, such as infographics and interactive web design. In my personal experience, the communication gap between data collectors and graphic designers is often too large, leaving both parties convinced that they live at opposite poles of the world, lost in translation. Data + Design, in collaboration with Infoactive (whose landing page had an error, so I couldn’t research it further), provides simple steps for collecting and analyzing data and building a visual summary of it, which makes it a quintessential guide for data scientists and designers alike; its format as an open-source site and the growing size of its community only prove the need for the combination of data and visual sources in today’s world.
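
As a toy illustration of that collect–analyze–visualize path, here is a minimal sketch in Python; the data and labels are invented, and this is only a sketch of the kind of workflow the book walks through, not an example from the book itself:

```python
import matplotlib.pyplot as plt

# Collect (hypothetical data): survey responses per marketing channel.
responses = {"Email": 120, "Social": 310, "Search": 95}

# Analyze: identify the strongest channel.
best = max(responses, key=responses.get)
print(f"Top channel: {best}")

# Visualize: summarize the numbers as a simple bar chart.
plt.bar(list(responses.keys()), list(responses.values()))
plt.ylabel("Responses")
plt.title("Survey responses by channel")
plt.show()
```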


Reference:

Data + Design: A Simple Introduction to Preparing and Visualizing Information

On the Netflix Quantum Theory, or, Microtags

In his article “How Netflix Reverse Engineered Hollywood”, Alexis Madrigal explores the categories that divide the genres and subgenres of movies and TV shows on Netflix, and how the company structured what amounts to over 76,000 “microtags”, as it calls them. These tags were built through a combination of human judgment and an algorithm: professionally trained movie-watchers (equipped with a 36-page manual on how to watch and rate movies) tagged each movie using commonly repeated adjectives, and a program organized those tags in a systemized order, by date of production, names of producers, actors, target audience, and many more specific subgenres.

Confronting this unprecedented, jarring number of tags categorizing the movies and TV shows we see (or will see) on Netflix, we naturally ask ourselves: why did Netflix create such a system? How does the system affect users’ viewing experiences on Netflix? And, furthermore, what does this innovation say about the present and future of our cultures and societies?

When Todd Yellin, Vice President of Product Innovation at Netflix, came up with this tagging system, he had one goal in mind: “Tear apart content!” The team tentatively named the system the “Netflix Quantum Theory” and created a guideline that “spelled out ways of tagging movie endings”, such as the “social acceptability” of the lead character. They also created a rating system for each attribute, on a scale of 1 to 5. The tagging and rating go to a much deeper, much more specific level: how happy or sad the ending is, the plot, the lead character’s job, the movie’s location, and everything else about movies that can be compared and categorized. Once the base of this tag pyramid was built, the team’s engineers created a syntax that combines the microtags into the alt genres, joining the human-built system of hand-tagging with a machine-based program that categorizes it.

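To illustrate what such a syntax might look like, here is a toy sketch that combines a few invented tags into genre phrases; Netflix’s actual grammar and vocabulary are far larger and are not public in this form:

```python
import itertools

# Made-up tag vocabularies standing in for Netflix's microtags.
adjectives = ["Emotional", "Critically-acclaimed", "Gritty"]
genres = ["Dramas", "Thrillers"]
qualifiers = ["Based on Real Life", "Set in Europe"]

# A simple genre syntax: Adjective + Genre + Qualifier, in the spirit
# of alt genres like "Emotional Dramas Based on Real Life".
altgenres = [
    f"{adj} {genre} {qual}"
    for adj, genre, qual in itertools.product(adjectives, genres, qualifiers)
]

for g in altgenres[:2]:
    print(g)
# Emotional Dramas Based on Real Life
# Emotional Dramas Set in Europe
```

Crossing even a handful of small vocabularies multiplies quickly, which suggests how tens of thousands of alt genres can emerge from a much smaller set of tags.
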
Netflix’s microtag system shows the process of not only categorizing the objects around us but also going much deeper into their specifics, getting as close as possible to the human brain’s way of explaining and dividing things, and, beyond that, building an algorithm on top of it all to understand us better, which fundamentally changes the way we communicate, interact, sell, and purchase in society.

Ever since the industrial revolution, when products began to be produced in mass quantities, businesses have been developing systems to understand the market’s behaviors and thought processes in order to make products that sell and to create marketing tactics that sway customers choosing among myriad products. As unprecedented as it was to categorize movies with tags and subtags, dividing and subdividing down to the level of microtags, we can see that combining the categorizing systems we have used to archive data since ancient times with today’s technology for automatically systemizing data gives us access to understanding, and better yet predicting, human behavior. We live in intriguing times: we get to witness humanities and digitization mix to reveal how we ourselves work, to change the way we live based on systems we developed ourselves, and to open more doors to the future.

References:
Alexis C. Madrigal, “How Netflix Reverse Engineered Hollywood,” The Atlantic, January 2, 2014

Archiving Documents: Preservation of History and Reference For The Future

The best part of visiting a library is walking through the stacks full of books at a fast pace, running a finger along the small, white call number stickers and looking for the exact call number I have scribbled down on a small piece of paper. Moving through such a vast yet narrow space full of information makes me feel like I’m on a treasure hunt.

In a world turning increasingly digital, with an astonishing amount of data collected and saved everywhere every day, it is important to have a system that lets us navigate the pile of information efficiently and effectively. Moreover, it is just as important that we follow that system not only when finding things but also when archiving them. Julia Gaffield’s search across Atlantic-world archives for Haiti’s Declaration of Independence, which had gone missing for centuries, is an example of the significance of the archiving systems that connect our past to our present and future.

The main difficulty in locating Haiti’s Declaration of Independence wasn’t only that the people who archived the document did not follow the same archiving systems we use today, nor that the people searching for it had no evidence of where to look. It was that, while digging through history, searchers were thinking in today’s terms rather than those of the time when the document was archived. Centuries ago, when the declaration left Haiti, the world wasn’t divided into countries as we know them today; it was intertwined and connected through colonies, with people constantly traveling among them, transporting and trading goods. Gaffield was able to find the missing document that historians and government officials had failed to locate for years because she did not take today’s archival search tools as given; she understood how they were built and dug through the archiving system’s history along with the missing document’s history.

Navigating data will only grow more complicated as information keeps pouring in and storage runs out of space, but at the same time it will grow easier and more efficient as we adjust to the system. The National Archives of Malaysia has not only built an official portal where government servants, students, researchers, and the general public can find documents easily on its website but also offers consulting, virtual tours, and seminars on how to use the archive. The United States’ National Archives portal likewise allows the public to search for documents by people, places, foreign policy, events, and more, alongside the founding documents. The use of data and technology has brought us closer than ever to our history, and it will only continue to do so.
