This piece looks at an American sexual hygiene film from 1919, using it to illustrate the fraught relationship between sex and money in post-World War I American culture. The publisher sent me a discount code in case you want to buy a copy of the book: use BB130 here to get 30% off. (Or just get it from the library!)
This is an overview, history, and typology of digital humanities within the field of media studies. It also contains a step-by-step walkthrough of a digital humanities project I created. I think this will be really helpful for anyone trying to figure out what the heck DH is and how people go about building DH projects.
Motion pictures’ utility for surgeons might seem to be their ability to show things just as they appear to an observer present at the scene. But a film like Sarnoff’s suggests that there is a gulf between what mechanical reproduction shows and the way that something like circulation actually appears to the surgeon present.
For surgeons like Sarnoff, the value of film wasn’t only, or even chiefly, its ability to mechanically reproduce reality, but its ability to function as a dynamic collage: to offer students of surgery a lesson on how to move back and forth seamlessly between the messy substance of reality and the neat diagrams that populate anatomical atlases.
I was especially happy to write something for the NLM because the Library’s History of Medicine division has been invaluable to my work. From my first, exploratory research into my dissertation, their librarians and archivists have been true research partners (and sometimes cheerleaders!). The History of Medicine division does invaluable work, and I’m so grateful to its staff.
This is a lightly edited version of the keynote address I was honored to give at the Keystone Digital Humanities Conference at the University of Pennsylvania on July 22, 2015. Thank you to the organizing committee for inviting me!
My sincere thanks, too, to Lauren Klein and Roderic Crooks for their advice and feedback on this talk. I’d also like to acknowledge the huge intellectual debt I owe to David Kim and Johanna Drucker, with whom I’ve argued, negotiated, and formulated a lot of these ideas, mostly in the context of teaching together. David’s important dissertation, Archives, Models, and Methods for Critical Approaches to Identities: Representing Race and Ethnicity in the Digital Humanities (UCLA, 2015), takes on many of these issues at much greater length.
I gave the title of this talk to Dot Porter some time ago in a fit of ambition, and it’s seemed wildly hubristic to me ever since. But it’s something I care a lot about, and so tonight I’d like to outline some ideas about how digital humanities might critically investigate structures of power, like race and gender.
We are doing some of that now, as evidenced by some of the work at this conference, but I don’t think we’re doing it with the energy or the creativity that we might. I’ll argue that truly engaging in this kind of work would be far more difficult, and far more fascinating, than most of what we currently discuss as the future of DH; in fact, it would require dismantling and rebuilding much of the organizing logic, like the data models or databases, that underlies most of our work.
So I’ll start by saying a little about where I think we are with digital humanities now, and also about some new directions, with respect to these structures of power, that I’d like to see the field go.
As I’ve often mentioned, I’ve been working for quite some time on a study of the photographs of Walter Freeman. Freeman, a physician based in Washington, D.C., was the world’s foremost lobotomist; it’s estimated that he lobotomized some 3,500 people.
He was also a prolific and dedicated photographer. He almost invariably took photos of his patients before and after the procedure, acquiring reams of these images over the course of his career. In a chapter of my book, Depth Perception, I argue that Freeman was participating in a much longer-standing tradition of psychiatric photography, one that claimed that the human face could reveal the depths of the soul. (You can see a recorded version of the story of Freeman’s photographs here.)
Over the years, I’ve spent a lot of time investigating the history of lobotomy, and particularly the kinds of visual evidence doctors used to support this practice. It’s part of the book I’m finishing, Depth Perception, which is broadly about the ways doctors have used film and photography during the twentieth century. In one of my chapters, I write about the lobotomist Walter Freeman, who was a prolific photographer, describing what he thought his patient photographs showed, and how our understandings differ today.
I get a lot of questions about lobotomy from people who find me on the Web, and I know other people who specialize in the subject do, too. I thought it might be helpful for me to write down some of the answers to the most frequent questions I get about the practice of lobotomy in the United States.
I’m sorry to say that I can’t answer individual questions on this subject, but I do provide references to some excellent books on the subject below.
What is a lobotomy?
The term “lobotomy” (often used interchangeably with “psychosurgery” during the period in which it was practiced) refers to an operation that severs the connections to and from the prefrontal cortex, in the anterior part of the brain’s frontal lobe. Generally, it was performed in one of two ways. From 1936 to 1945, lobotomies were generally performed by drilling two holes in the skull, near the patient’s temples, inserting a long instrument called a leucotome, and severing the connections to and from the prefrontal cortex. From 1945 until 1967, lobotomies were generally performed by inserting a long, thin instrument into the back of a patient’s eyeball, puncturing the thin orbital plate above the eye and rotating the instrument so that it destroyed the connections to the brain’s frontal lobe. This second type of lobotomy is called the transorbital lobotomy.[1. Pressman, Jack David. Last Resort: Psychosurgery and the Limits of Medicine. Cambridge History of Medicine. Cambridge, U.K: Cambridge University Press, 1998.]
Our original proposal for this session read like a lot of attempts to grapple with controversy in the digital humanities. “Is digital humanities complicit with the neoliberal impulse in the modern university?” it asked. “Some say it is, citing A, B, and C. Others say it isn’t, citing X, Y, and Z.” The framework, if unoriginal, had the benefit of being easy to write.
My copanelist Natalia Cecire pushed us to think beyond this cliché. “Let’s start with the premise that it is complicit,” said Cecire, citing Alan Liu’s “Where is Cultural Criticism in the Digital Humanities?” “Else why would it be so obviously attractive to the neoliberal university? Let’s start with that and talk about what we then do.”
Borrowing Natalia’s framework, I want to complicate a discourse about labor that has emerged from and become identified with the digital humanities. The term for this work is “alt-ac,” which stands for “alternative academic.” ((My critique is not altogether new. Liana M. Silva aired some of these concerns in April. Martha Nell Smith has leveled similar critiques of DH centers’ hiring practices, and Bethany Nowviskie has dealt with a number of these concerns in “Toward a New Deal.” I am moved and inspired by this vision of a WPA for the humanities, but I feel that my fundamental objection to some of the rhetoric about alt-ac — that it rests on the flawed assumption that the academic jobs crisis is caused by an overproduction of Ph.D.s — has yet to be answered.)) Jason Rhody, a senior program officer for the NEH’s Office of Digital Humanities, coined the term in 2009 to describe the scholarly work performed by many of us in, and in the orbit of, the academy who do not hold traditional faculty jobs but do perform scholarly labor. ((For a history of alt-ac and a snapshot of how graduate programs might equip students for these jobs, see the excellent “Humanities Unbound: Supporting Careers and Scholarship Beyond the Tenure Track,” prepared by Katina Rogers for the Scholarly Communication Institute.))
If you’re reading this, you may know that topic modeling is a method for finding and tracing clusters of words (called “topics” in shorthand) in large bodies of texts. Topic modeling has achieved some popularity with digital humanities scholars, partly because it offers some meaningful improvements to simple word-frequency counts, and partly because of the arrival of some relatively easy-to-use tools for topic modeling.
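For contrast, the simple word-frequency counting that topic modeling improves upon takes only a few lines of Python. Here’s a minimal sketch (the sample text is invented for illustration; real projects would also strip stopwords like “the”):

```python
import re
from collections import Counter

# A tiny invented sample; in practice you'd read in a whole corpus of files
text = """Doctors used film to record the operation.
The film showed the operation in detail."""

# Plain word-frequency counting: lowercase, pull out word tokens, tally
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)

print(counts.most_common(3))  # the most frequent words and their counts
```

Unlike a topic model, a count like this treats every word independently: it can tell you that “film” and “operation” are frequent, but not that they tend to cluster together across documents.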
MALLET, a package of Java code, is one of those tools. It’s not hard to run, but you do need to use the command line. For those who aren’t quite ready for that, there’s the Topic Modeling Tool, which implements MALLET in a graphical user interface (GUI), meaning you can plug files in and receive output without entering a line of code.
David Newman and Arun Balagopalan, who developed the TMT, have done us all a great service. But they may also have created a monster. The barrier for running the TMT is so low that it’s entirely possible to run a topic modeling test and produce results without having much idea what you’re doing or what the results mean.
So is it still worth doing? I think so. Playing with the results by altering variables and rerunning the test can be a useful way to get your head around what topic modeling is and isn’t. And, as I recently tried to convince my graduate DH class, screwing around with texts — even if you’re not totally sure what you’re doing — can be a surprisingly effective way of getting a new perspective on a body of work. Finally, seeing how many decisions need to be made about texts and variables is a great way to understand that topic modeling is not a way of revealing any objective “truth” about a text; instead, it’s a way of deriving a certain kind of meaning — which still needs to be interpreted and interrogated.
But in order to get any of these benefits from the Topic Modeling Tool, you need to be able to make some sense of your results, which is no easy task. The TMT generates some decidedly cryptic-looking files, and as far as I can tell, there aren’t many resources out there to help you make sense of them.
Once you survey the results of the Topic Modeling Tool, it becomes clear why topic modeling often goes hand-in-hand with visualization. The format of the results makes it difficult for a human being to discern patterns in them, and the files aren’t easy to visualize without doing some custom coding.
But say you’re a non-coder using the Topic Modeling Tool to screw around. You feed it some text, you get some files; now what?
What follows are some very basic ways you might begin looking at the results you’ve generated.
I’m flying back from a trip to the George Eastman House (in Rochester, New York), where I did a couple days of archival research. I thought I’d write a bit about what I was doing there and what I found, in the hope that capturing the experience here will help me organize my thoughts about it later.
I was interested in a physician-filmmaker named James Sibley Watson, Jr., who made a number of striking cinefluorographic (X-ray) films. I’ve been hoping that Watson will be the basis of a fourth chapter of my book, Depth Perception, which is about medical filmmaking. Each chapter focuses on a different aspect of what, together, we think constitutes a body: mind, anatomy, disease, and, with the addition of this fourth chapter, skin. My argument is that it’s surprisingly difficult to make a medicalized body cohere on film. It takes tons of editing tricks, special effects, and dedicated equipment. In fact, it’s so difficult, I argue, that we should pause to consider whether the medicalized body exists at all.
I posted recently about tools for managing a research workflow, and one of the points I made is that no set of tools will be right for everyone. I’ve tried and failed to foist my favorite tools on enough people to know that this is true.
Still, after I wrote the post, a few people asked me which tools I use. I do indeed have a number of favored tools, and since I find myself endlessly fascinating, I enjoyed the chance to consider why I use them and what it says about me as a researcher. I’d also really love to hear what you use and why!
Here’s what I use in a nutshell:
Zotero for collecting and organizing sources (both primary and secondary), taking notes, and citing sources