Course blog

Week Seven: Working through Space


Jim Detwiler’s Introduction to Web Mapping outlines the basic history and concepts of web mapping. Detwiler begins by stating the advantages and disadvantages of both digital and paper maps. There is a lot of current debate about the value of digital maps – there seems to be a lost sense of adventure and exploration with their advent. However, one clear advantage of digital maps is their relatively low cost to produce compared to traditional maps. They are also easier to distribute to a large audience, and because they live online, they are far easier to update – no need to redraw, reprint, and redistribute. Digital maps also allow for interactivity.

However, this is not to discount the advantages of traditional maps. Digital maps require the Internet – and if you think about when you actually need a map, you are most likely exploring somewhere unfamiliar – will that location even have Internet access? Not necessarily; digital maps are also “vulnerable to problems of servers and networks going down” (Detwiler). This is where paper maps have the upper hand – they are much more reliable in this sense. Paper maps also have far superior resolution (1200-3400 DPI), which is advantageous when you need to see the map very clearly (which is probably most of the time).

Reading through the web map categorization by Dutch cartographer Menno-Jan Kraak that Detwiler includes in his overview, I remembered an app that I read about. A London-based company, POKE, developed an app called Pints in the Sun, which helps users find the nearest pub that’s out of the shade. The necessity for an app like this is distinctly British – but it is nonetheless interesting as an example that embodies multiple kinds of web maps. Users can find a pub in one of two ways – “searching for a specific spot, or just browsing the map to find one that you like the look of” (Dombrosky). The next step is to adjust the ‘sun timeline’ at the bottom of the map to indicate the time of day (which then projects shadows on the three-dimensional buildings).

Pints in the Sun could be classified as several different types of maps, most obviously an Analytic Web Map and a Collaborative Web Map. For example, POKE’s developers used HTML5 geolocation and the FourSquare API (Application Programming Interface) “to locate a suitable list of pubs before loading building outline data from OpenStreetMap and rendering it in 3D using three.js (map projection conversion courtesy of the D3 library)” (www.pokelondon.com). Its use of OpenStreetMap classifies it as a collaborative map, in that it uses a “distributed network of people to create and maintain the map” (Detwiler). Pints in the Sun qualifies as an Analytic Web Map in several ways, but most distinctly in its use of solar almanac calculation through SunCalc (implemented in JavaScript).
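Out of curiosity about what that solar almanac step involves: below is a toy Python sketch of the standard solar-elevation approximation (declination plus hour angle). This is a generic textbook formula, not SunCalc’s or POKE’s actual code, and the function names are my own.

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation (degrees above the horizon) at a
    given latitude, day of the year, and solar hour (12.0 = solar noon)."""
    # Solar declination: simple cosine approximation, good to about a degree
    decl = -23.44 * math.cos(2 * math.pi / 365 * (day_of_year + 10))
    # Hour angle: the sun moves 15 degrees per hour away from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(v) for v in (lat_deg, decl, hour_angle))
    sin_alt = (math.sin(lat) * math.sin(d)
               + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(math.asin(sin_alt))

# London (latitude 51.5 N) at solar noon on the June solstice (day 172):
noon_alt = solar_elevation(51.5, 172, 12.0)
# A pub wall of height h casts a shadow of roughly h / tan(elevation)
shadow_per_meter = 1.0 / math.tan(math.radians(noon_alt))
print(round(noon_alt, 1), round(shadow_per_meter, 2))
```

Once you know the sun’s elevation and azimuth at a given time, projecting shadows from 3D building outlines (as the app does with three.js) is just geometry.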

 

Detwiler, Jim. “Introduction to Web Mapping.”

Dombrosky, Pete. “Want to Find a Pub in the Sun? There’s an App for That.” Thrillist. Thrillist.com, 12 June 2014. Web. 16 Nov. 2014.

“Pints in the Sun: A Minimal Viable Side Project | POKE.” POKE. N.p., 5 June 2014. Web. 16 Nov. 2014.

Week 7 – Digital Karnak TimeMap

From this week’s reading, I found the post by Alan McConchie and Beth Schechter entitled “Anatomy of a Web Map” to be very informative and easy to follow. I learned that Google Maps and Google Earth, tools that I use daily, are so efficient because they are “slippy maps.” McConchie and Schechter explain that most web maps are built from tiles, and slippy maps are effective because the tiles adjacent to the ones you are viewing are pre-loaded, so when you scroll over, they are already there and you don’t have to wait. MapQuest became irrelevant partly because it wasn’t a slippy map: every time you scrolled to a new tile, the whole page had to reload.
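The tile scheme McConchie and Schechter describe comes down to a little arithmetic. The sketch below (plain Python, function names my own) implements the standard OpenStreetMap-style conversion from longitude/latitude to tile coordinates at a zoom level, plus the ring of neighboring tiles a slippy map would preload so that panning feels instant.

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert a longitude/latitude pair to slippy-map tile coordinates
    (the standard OpenStreetMap / Web Mercator numbering scheme)."""
    n = 2 ** zoom                      # the world is an n-by-n grid of tiles
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def neighbors(x, y, zoom):
    """The tiles adjacent to (x, y): the ones preloaded while you pan.
    x wraps around the antimeridian; y is clamped at the poles."""
    n = 2 ** zoom
    return [((x + dx) % n, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if not (dx == 0 and dy == 0)
            if 0 <= y + dy < n]

# UCLA's campus (lon -118.445, lat 34.069) at zoom level 12:
print(lonlat_to_tile(-118.445, 34.069, 12))
```

Each tile is then fetched as a small image (conventionally 256x256 pixels) named by its zoom/x/y coordinates, which is why adjacent tiles can be requested before you ever scroll to them.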

Last year, I took Architecture and Urban Design 10A, and one of our projects was based on a web map made by Diane Favro at UCLA. The project can be found at Digital Karnak. It contains a timemap of the Karnak site in ancient Egypt and shows when each pharaoh made renovations to the site. When I was taking this class, this project was super helpful because it was an interactive, visual example of the different architectural structures being built and torn down as time passed and each new ruler took charge. In ancient Egypt, each new pharaoh wanted to assert his or her dominance and did so in the form of new architectural feats. When I took this class and studied Karnak, my research focused on Hatshepsut and her stepson, Thutmose III. Until recently, Hatshepsut had been overlooked in the history books because few records of her rule survived. This is because Thutmose III, resentful of her power, defaced her statues and tore down many of the temples she erected. Without the work of humanists uncovering the ruins of Hatshepsut’s monuments, her legacy would have been forgotten forever. Without digital humanities projects such as Digital Karnak, this information would not be as accessible and easy to understand for the public.

Web maps really are a useful tool, and I think their accuracy, efficiency, and capabilities will continue to grow. My younger brother has always been interested in maps but was hesitant about studying geography because traditional cartographers are no longer in demand; I am going to tell him about the field of web mapping, since it combines two things he really enjoys and will only expand in the future.

 

Week 7: Flood Control and the Issue of Study Areas


This is a map of flood control features in Los Angeles as of the late ’70s from Blake Gumprecht’s book The Los Angeles River: Its Life, Death, and Possible Rebirth. There are hundreds of miles of concretized flood control channels, a handful of flood control basins and dams, and dozens of debris basins. Winter is coming, and when the rain starts (well, if there’s rain in the first place), we’re going to start seeing mudslides and flooding on the news pretty soon.

One of the arguments that Ian Gregory makes in the chapter “Using Geographical Information Systems to Explore Space and Time in the Humanities” is that GIS can perform an analysis over a broad study area, both spatially and temporally, adding a new capability to humanities research. Previously, scholars could approach a phenomenon either spatially or temporally (which could lead to similar but irreconcilable theories, e.g., Cronon’s and Turner’s theories of urbanization), or focus on a smaller study area that theoretically represented what was happening in a larger one. The benefit of GIS is that by integrating space and time over large areas and long spans of time, scholars can reevaluate the orthodoxies that previous studies may have produced. However, one weakness of GIS analyses is that they are not as detailed and thus have less explanatory power. A possible methodological implication of these strengths and weaknesses is that scholars will use GIS as a preliminary step in research projects: after a broad overview is established with GIS, they would then do an in-depth case study to come up with an explanation or theory of the phenomenon.

The complementary relationship between GIS analyses and case studies reminded me of something I read about the flooding of the Los Angeles River. In the article “Flood Control Engineering in the Urban Ecosystem,” Jared Orsi writes about how “[m]oderate storms in 1978 and 1980 surprised flood controllers by triggering severe flooding, even in areas they had thought were protected” (148). The engineers were surprised, Orsi explains, because their theory of flood prediction was too generalized. If a fifty-year flood is only supposed to happen once every fifty years, why do they happen more often than that? The explanation is that the specific geographies of mountain ranges interact with complex storms in an uneven manner. One area may only get a drizzle while another may get a heavy downpour. While the average rainfall is not too bad, it is the smaller problem areas that you have to be careful about. This example from civil engineering illustrates why small study areas and case studies are an important part of research in the humanities, even though they are not sufficient.
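Part of the surprise also comes from what “fifty-year flood” means: it is a 1-in-50 chance each year, not a once-per-fifty-years schedule. A quick back-of-the-envelope calculation (assuming independent years, which real storm patterns violate) shows such a flood is actually more likely than not over a fifty-year span.

```python
def prob_at_least_one(annual_prob, years):
    """Chance of at least one exceedance event over a span of years,
    assuming each year is independent with the same annual probability."""
    return 1 - (1 - annual_prob) ** years

# A "fifty-year flood" has a 1/50 = 2% annual chance.
# Over a 50-year span, the chance of seeing at least one:
print(round(prob_at_least_one(1 / 50, 50), 2))
```

With a 2% annual chance, at least one such flood over 50 years comes out to roughly 64%, before even accounting for the uneven interaction of storms and mountain geography that Orsi describes.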

Week 7: Static Maps

When I read about the types of web maps in Jim Detwiler’s “Introduction to Web Mapping,” I thought about each one in relation to my project. Because I’m researching earthquakes, the majority of my still images will be maps, so I found this article very useful, especially where Detwiler explains static web maps. Static web maps are basically digital versions of paper maps – usually digital scans of the physical copy.

Seismic Hazard Map

http://earthquake.usgs.gov/earthquakes/world/central_america/gshap.php

The map above represents the level of seismic hazard in each region of Central America. The areas marked in brown are the most vulnerable to seismic activity, followed by the areas marked in red, because of where their faults lie. Below is another map that shows all of the plates that compose the Earth’s crust and whose movement causes seismic activity. The plates are all labelled, and the green arrows indicate the direction each is traveling. The purple lines indicate the borders of the plates, and the jagged segments mark the subduction zones, where one plate sinks beneath its neighbor.

http://www.maps.com/ref_map.aspx?pid=12871

This subduction is what has caused the largest earthquakes in recorded history, like the 1960 Chile earthquake. Notice that in the second map there is a subduction zone right along Chile’s coast, which helps explain why Chile has been hit hard twice: once in 1960 and again in 2010.

Each individual map focuses on a certain aspect of seismic activity: the first concentrates specifically on which areas of Central America are most vulnerable, while the second shows the borders and directions of the tectonic plates. Separately, these maps give us useful information, but together they show a larger picture and convey an understanding that neither can alone. Aside from not letting you zoom in, zoom out, or pan around, static web maps can only offer a fixed amount of information and nothing more. Unlike interactive web maps that allow you to input data to see certain trends, static web maps are “what you see is what you get,” meaning their usefulness is limited. Static web maps are not bad by any means, but they are definitely limited and often require multiple maps to display the bigger picture. This is merely a minor inconvenience, however – you have to search for multiple maps – but overall they can get the point across.

Works Cited

http://www.livescience.com/43220-subduction-zone-definition.html

 

Week 7 – Time vs Space


“Wayback Machine,” Internet Archive

https://archive.org/web/

This week’s readings were very focused on the spatial characteristics associated with data and visualization mapping. Although temporal characteristics must also be considered, databases often become too complex when factoring in both time and space; as a result, space takes priority and time is left secondary. Dunn mentions the problem of historical landmarks not being included in digital maps, and how this creates an unintended reconstruction of our notion of the past. Goodchild talks about the “airbrushing of history” that results from the lack of real-time depictions of Earth’s surface in Google Earth. Gregory discusses geographical information systems’ favoring of spatial references within their databases.

All of these articles about tools that privilege space made me think of a tool that relies on time: the Internet Archive’s “Wayback Machine.” The Wayback Machine is a search engine with a database of snapshots of a wide variety of websites captured over time. One can type in a URL, and the Machine shows a timeline of when that URL has been archived, then displays what the website looked like at a particular moment in time. Since websites are constantly changing in usability, visuals, and content, the Wayback Machine allows a snapshot look into the past through these documented and archived images. Google, for example, is very actively archived: with the Wayback Machine I can see all the times it has been captured and view them. I can see what Google looked like in 1999, and although the archived page isn’t interactive, I get a sense of how the site has evolved over the years. I think this is a very interesting tool because it archives the internet and documents its changes – changes that would otherwise go unrecorded, because previous versions are lost when a new design is implemented. It is a look into the past, into the “historical landmarks” that Dunn writes about, which otherwise get lost in real-time digital mapping.
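For anyone who wants to explore this programmatically: the Internet Archive exposes an availability API that, given a URL and a date, points you to the closest archived snapshot. Here is a minimal Python sketch that just builds such a query (the endpoint is real, but check the Archive’s documentation for the exact response format before relying on it).

```python
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_url(site, timestamp):
    """Build a query against the Wayback Machine's availability API.
    `timestamp` is YYYYMMDD; the API responds with the closest snapshot."""
    return WAYBACK_API + "?" + urlencode({"url": site, "timestamp": timestamp})

# What did Google look like in 1999?
print(availability_url("google.com", "19990101"))
```

Fetching that URL returns JSON describing the nearest snapshot, including a link into the archived page itself.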

Week 7- Why Do We Keep Making Maps? (Possibilities of Open Source Mapping)

Well, first off, Alan McConchie and Beth Schechter’s “Anatomy of a Web Map” might be the coolest article I have ever read on the internet. It’s mind-blowing, plus I loved the interface they used (Big)! The way their presentation moved was super informative, and I loved getting to explore all of the tools they listed. In the article, they break down the exact definition of a web map and present ways for individuals to make their own (often through open-source applications). They distinguish that a “web map” is a type of “digital map” that is available on the internet. The key tipping point here is that the map is on the internet. Because of this, web maps can go in a whole other direction that was previously impossible for maps: they can now be created by anyone, and data for these maps is more readily available. In his article “Introduction to Web Mapping,” Jim Detwiler gives a brief history of web maps, reminding us that web maps didn’t exist until the mid-1990s. It wasn’t until even later that these applications became widely available for individual adaptation. He connects this shift with the launch of Google Maps in 2005:

“Shortly after the launch of Google Maps, hackers figured out ways to build applications that integrated the map layers from Google with other forms of data.  The most famous of these was housingmaps.com, which took real estate data “scraped” from craigslist.org and plotted it on top of Google Maps.  Google, seeing the potential in allowing third-party developers to mix in their own content, decided to make the Application Programming Interface (API) for their Maps application public.  Yahoo! and MapQuest followed suit.  The result was an explosion of custom mapping applications and the addition of a new term to the web mapping lexicon — mashup. (The term mashup traces its roots to the music industry, where it is used to describe the mixing of tracks from two or more songs to produce a new song.) Thus, the 3rd generation of web mapping can be thought of as the “mashup generation.”

I love the idea of thinking of current web maps as a mashup generation! The term also clarifies what is meant by open source: it does not necessarily mean “coding from the ground up,” but rather taking from others, building, and merging. From my experience (one day) playing around with Map Stack and OpenStreetMap, it seems that a lot of these interfaces are designed to be easily picked up. This is not to say that web mapping is not highly technical (it definitely is), but this “mashup generation” has provided a way for individuals to get in easily at at least a basic level. This is allowing web mapping to become more widespread and better understood.

http://maps.stamen.com/m2i/image/20141115/mapstack_vubQZfjb0rM

While reading these articles, I got stuck on the purpose behind being able to create your own open-source map. I played around with Map Stack, which encouraged me to make a map of my hometown (so I did). In the process of making it, I landed on the idea that open-source mapping applications exist, perhaps, so that people can create their own representations of their environment. It’s a way to personalize the growing urban sprawl of “modern life.” It also made me think about this as a tradition in human history. Throughout art history, at least, the representation of the urban form, often shown through a handful of physical landmarks, is omnipresent. One can easily find a physical landmark distinguishing the site in almost any Renaissance, medieval, or early modern (western) painting. It is the tradition of connecting the individual with the city of origin – city as an integral part of identity in an increasingly interconnected world. A more specific art analogy would be Edward Ruscha’s Every Building on the Sunset Strip from 1966. (The link will take you to a Getty exhibition article about Ed Ruscha.) In the series, Ruscha photographed each building on the Sunset Strip, stringing the images together so that a viewer could follow the whole street from start to end. In the context of web mapping, I can’t stop thinking of Ruscha’s piece as a proto-Google Street View. It perhaps also confirms the long-standing human interest in the personalized representation of one’s temporal environment.

I am curious to hear others’ interpretations of the “humanistic” value in web and digital mapping! I would love to learn more about digital humanities–specific web mapping projects. (How do maps illuminate humanistic inquiry? Why do humans make maps? Are web maps becoming more than just maps as a result of their interactive features? Do web maps make arguments differently than regular maps?)

Network Analysis in MMORPGs?

Back in high school, I had quite the embarrassing secret. I’d always loved playing games. When I was 7 my dad bought me a Mac, and I played games like Pajama Sam, Treasure Cove, and Barbie Detective. Over the years, I accumulated quite the collection of consoles, including a Nintendo 64, a GameCube, and an Xbox 360. But around the 9th grade, a different kind of gaming sensation entered my collection: World of Warcraft (or WoW) took its place within my interactive media canon.

The reason I’m bringing up such a formidable yet relatively awkward and humiliating phase of my life this week is that I see interesting parallels between topics in network analysis, topic mapping, and metadata on the one hand and WoW and MMORPGs (Massively Multiplayer Online Role Playing Games) on the other. In his blog post, Scott Weingart defines networks as “any complex, interlocking system.” In the game, there is an overwhelming number of possible network analyses because of the sheer quantity of metadata available about players and their statistics. What I found so fascinating after reading Weingart’s post is that, because we’re talking about an interactive computer game in which users make choices about their race, class, and realm type, as well as their social interactions with other people while playing online, this environment might represent a place where the digital humanist and the computer scientist can both extract data and conduct different types of network analysis.


A visualization of guilds and players by race, class, and level on my old server, Moon Guard [RP].

From the digital humanist’s perspective, a scholar might treat individual players on the server as “nodes” based on race, class, level, or even a player’s name, and use the guild system as “edges” to draw connections between these agents. In doing this, a scholar interested in online social interactions may be able to determine how different social groups on the server are configured and make inferences from those relationships. Using a unimodal schema, the centrality of each agent could be computed to analyze its importance to the network as a whole.
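As a rough sketch of what that unimodal analysis could look like, the Python below builds a toy network from invented guild rosters (all player and guild names here are hypothetical), connects players who share a guild, and computes degree centrality by hand.

```python
from itertools import combinations

# Hypothetical guild rosters (names invented for illustration)
guilds = {
    "Dawn Vanguard": ["Aelric", "Morwen", "Thalia"],
    "Night Watch":   ["Thalia", "Borin"],
}

# Edges: two players are connected if they share a guild
# (a unimodal projection of the player-guild relationship)
edges = set()
for roster in guilds.values():
    for a, b in combinations(sorted(roster), 2):
        edges.add((a, b))

# Degree centrality: a node's degree divided by the maximum possible degree
nodes = sorted({p for roster in guilds.values() for p in roster})
degree = {p: 0 for p in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
centrality = {p: degree[p] / (len(nodes) - 1) for p in nodes}
print(centrality)
```

In this toy network, the player belonging to both guilds ends up most central – exactly the kind of bridging figure a scholar of online communities might want to find.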

I think there are a lot of other possibilities and applications for analyzing data taken from other MMORPGs, not just WoW. I also believe WoW could be a potentially valuable resource to discuss in our “ALT Narratives” project group when discussing online communities and narratives created socially with a preexisting lore or framework provided by entertainment companies like Blizzard (the creators of WoW).

Historical GIS and Present Day Implications

Gregory’s subsection on space and time in historical GIS reminded me quite a bit of its historical and present-day implications. The presentation of the past is easily manipulated, as is any data; analysis can easily go wrong. I am currently in a GIS class, and I have learned that depending on how you divide the data, choropleth maps can give stronger impressions than the truth actually warrants. In some cases, the truth is better kept secret. Location is a valuable thing on the individual scale as well as the international scale. GPS was first developed by the Defense Department and kept secret from the public. Eventually it was opened to the public, with a deliberate margin of error built in so that civilian positions could not be as precise as the military’s. However, many mathematicians easily got around the error, and eventually Clinton threw the switch and allowed the public full access to the data. The military had feared that if the exact locations of secret facilities were known, enemies within and outside the US would target them. As a personal experience, I recall pinpointing the locations of sites in northern Sudan on Google Maps for an archaeology class. My professor told us that in order to protect the sites, we had to use our better judgment about where to pin the temples. The problem was that if a site were precisely labeled in an international database like Google Maps, others across the globe would also have access to it. The next day, we might receive a report that the site had numerous pits dug all over and that the temple had been raided by people who saw the location online. It is for this reason that my professor directed us to place our labels several meters off from the actual sites.
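The class-breaks point is easy to demonstrate. The toy Python below (my own code, not any particular GIS package’s) classifies the same skewed data two ways: equal-interval breaks lump most values into one class, while quantile breaks spread them out, so the two choropleths would look very different even though the underlying data is identical.

```python
def equal_interval_breaks(values, k):
    """k class breaks spaced evenly across the value range."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / k
    return [lo + step * i for i in range(1, k)]

def quantile_breaks(values, k):
    """k class breaks with (roughly) equal counts of values per class."""
    s = sorted(values)
    return [s[len(s) * i // k] for i in range(1, k)]

# A skewed variable, e.g. hypothetical median incomes by tract (in $1000s)
incomes = [20, 22, 25, 26, 28, 30, 33, 35, 90, 95]
print(equal_interval_breaks(incomes, 3))
print(quantile_breaks(incomes, 3))
```

For this sample, equal intervals put eight of the ten tracts in the lowest class (the two outliers stretch the range), while quantiles split the tracts roughly evenly – two maps of the same data telling two different stories.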
Not every individual, though, is out to get rich. In fact, citizen science can provide large amounts of data over a short period of time without costing much money. Another one of my professors recently pointed me to eBird, a collection of bird sightings contributed by birdwatchers across the globe that allows researchers to track bird migrations and populations. Just as with Google Maps, I believe collective input into an international database is helpful for solving problems. I do think, however, that we must be wary of who has access to this kind of information.

Evolution of Maps

http://www.archaeology.org/travel/interactivemap-texas/

Both “Introduction to Web Mapping” and “Anatomy of a Web Map” were excellent tools for understanding the development of web maps over time. This evolution began with the incredibly simple static map, which was essentially a paper map that had been digitized, and which is still very commonly used in research or visual representations. The first great evolutionary leap was to dynamic or distributed maps, which reflect changing data by loading and presenting a new, current data set each time. These dynamic maps are ideal for things we need up-to-the-minute information on, although they also work for data sets that change more slowly. Next came animated maps and “real-time” maps, which are essentially dynamic maps automated and linked to sensors that provide the real-time data. Then along came Google Maps (interactive maps), with the ability to toggle layers on and off, link map features to external websites, and so on, followed by the analytic map, which uses these interactive features for more in-depth analysis of the data. The final stage, collaborative maps, is connected with distributed maps in that both have multiple sources – in collaborative mapping, anyone can contribute. A good example of an interactive, potentially analytic map is the one linked above, about the different types of archaeological sites in Texas.

However, I wonder why no one has mentioned that the use of maps for analysis, and even interactive maps (though not in the digital sense), existed long before the web, when maps were only on paper. Detwiler details this evolution only within web and digital maps themselves, even though humanist scholars have been using maps for analysis for decades. In much the same way that the Texas map above allows you to click on a site and read about it, paper maps had sites or buildings marked in order to refer you to a discussion somewhere else in the text. For instance, archaeological site reports often have a map with locations or buildings labeled (toggle ON your label layer), which you can then look up in the text (not quite a pop-up explanation box, but the same concept). Also, in trying to understand the development of the state, archaeologists often plotted things like trade routes of least energy consumption (the ‘shortest route’ option rather than the ‘lightest traffic’ option).

DH101 Preferences

Jane Eyre, by Charlotte Brontë: 4
Oh, the Places You’ll Go, by Dr. Seuss: 12
One Hundred Years of Solitude, by Gabriel García Márquez: 4
The Bluest Eye, by Toni Morrison: 2
The Great Gatsby, by F. Scott Fitzgerald: 13

Florence + the Machine: 14
Grizzly Bear: 7
Justin Bieber: 1
Kanye West: 8
The National: 4