Teaching technical skills online

Here I am, still blogging like some kind of caveman. I guess I should be using Substack or Medium or something, but maybe blogs will come back in style, like other artifacts of the ’00s.

Anyway, in the past, when people asked me whether I could teach my digital humanities classes online, I hemmed and hawed. Tools like web-based visualization software have made it easier to share work across platforms, and heaven knows there are plenty of cloud-based collaboration tools out there.

The thing that worried me was teaching new tech skills, which is a big part of my classes, and particularly my Intro to DH classes. I am super, super picky about how to do this, as I’ve mentioned before. My feeling is, I get one shot to teach the students this new skill, and if something goes badly wrong, I’ve not only missed my shot, but I may inadvertently lead someone to believe they’re not capable of learning the skill. It’s why I teach every single skill myself, rather than invite people to give workshops; I just know exactly how I want it done.


Sitting with the rage

Have I ever felt this angry or trapped in my entire life? Certainly—let me cut you off right there at the pass—the world has seen greater cruelties and outrages. “Broken childcare infrastructure” barely makes the list of world-historical tragedies.

And yet for sheer absurdity, for the unbelievable stupidness of this problem, for our steadfast refusal to acknowledge the giant fucking impossible disaster hanging over all of our heads—for that, this year should win some kind of award.

Let me back up. California’s public schools are all currently online, as they should be. I have a seven-year-old daughter who’s in Zoom school. I also have a six-month-old baby. I also have a full-time job, as does my husband.


Several new publications

I’ve published several things in the last few months, and thanks to UC’s institutional repository, I’ve been able to make them available to everyone.

 

“See No Evil”

Logic Magazine no. 4 (buy a copy of this great magazine!)

This is a piece for general readership that investigates the software behind today’s massive, sprawling supply chains. I’m finishing the academic version and hope to have it out soonish.

 

“Prostitutes, Charity Girls, and The End of the Road: Hostile Worlds of Sex and Commerce in an Early Sexual Hygiene Film”

In Health Education Films in the Twentieth Century. Editors: Bonah C, Cantor D, Laukötter A. 173-187. University of Rochester Press, Rochester, NY 2018.

This piece looks at an American sexual hygiene film from 1919, using it to illustrate the fraught relationship between sex and money in post-World War I American culture. The publishers sent me a discount code if you want to buy a copy of the book. Use BB130 here to get 30% off. (Or just get it from the library!)

 

“Digital Humanities”

In The Craft of Criticism: Critical Media Studies in Practice. Editors: Kearney MC and Kackman M. 331-346. Routledge, New York, NY 2018.

This is an overview, history, and typology of digital humanities within the field of media studies. It also contains a step-by-step walkthrough of a digital humanities project I created. I think this will be really helpful for anyone trying to figure out what the heck DH is and how people go about building DH projects.

Scaling up DH101

Over the last few years, enrollment in my Introduction to Digital Humanities class has been trending steadily upward, as has enrollment in the minor itself. Last spring, we had an unexpected surge in enrollment in the minor, and many of those students needed to take DH101 right away. We had to scramble a bit to accommodate everyone. After considering a few possibilities, we more or less doubled the size of our Intro class, from 45 to 88 students. We were fortunate to enlist an excellent T.A., Dustin O’Hara, to teach two sections, and my fabulous longtime co-conspirator, Francesca Albrezzi, took the other two. (We have lectures twice a week and section once a week.)

Even with the expanded class size, we had to turn lots of people away; I suspect we could fill another DH101 class in the spring, if we had the faculty bandwidth to teach it.

This was my first time teaching a true lecture course. In previous versions of DH101, I’ve been able to alternate between dispensing information and turning discussion over to the students. While we still had discussions in the larger DH101, I could no longer pretend this was a seminar.

I expected the large class size to be a challenge, but I think the bigger challenge was the classroom itself. We were lucky to find a room at all, given how late we transitioned to a larger class size, but we were stuck with a very conventional lecture hall, with bolted-down seats in immovable rows. It at least had modern AV equipment, but the layout worked against us. In my previous classroom, students’ seats were arranged in 10 or so group tables, so it was easy to alternate between hands-on work and all-eyes-up-front lecturing. Now we had no choice but to sit lecture-style.

I did what I could to ameliorate the situation. I was able to reserve the Young Research Library main conference room on a few occasions, which gave us a chance to work more collaboratively. And I did continue asking the students to check in, share work with each other, and discuss issues in small groups in the lecture hall. But the space just didn’t really lend itself to that kind of thing. This was a real bummer for me, and probably for the students, too.

The classroom arrangement actually set us back significantly in terms of technical skills, too. I wasn’t really comfortable asking students to learn technical stuff when I couldn’t circulate freely in the classroom to see how they were doing. I don’t think a lecture hall is a good environment for learning new skills on your computer, since it’s so easy to get stuck and have no way to signal for help without stopping the entire class. So technical tutorials had to be saved for section, for the rare occasions when I could book the Library conference room, and for a few at-home lessons. As a result, I wasn’t able to teach the students as many skills as I have in years past.

I also struggled to check in with students as much as I’ve been used to doing. Their group project is always really challenging for them, and every project is very different. Since I’m the one who picks out the datasets, I usually like to work at least a little bit with every group. But with so many students, I had a hard time devoting attention to everyone. The result was more confusion about the assignment and expectations than in previous years, and a couple of group meltdowns. Everyone pushed through and got to the finish line, thanks in large part to the TAs’ hard work, but it was more stressful than it needed to be.

The students’ final project showcase this week reassured me that, yes, they did learn what I wanted them to, and, yes, they did learn how to do serious research and think critically about data. I loved hearing them explain what they did and how they overcame challenges, and I was really excited to hear their newfound confidence in discussing technical matters. Still, as always, it’s my errors that stand out to me.

If the class remains this size next year (and I’m still the one teaching it), there are a few things I’d do differently.

  • Rethink the final assignment. This is tough, because I’ve loved giving them “real” data, and I believe they benefit from the intense labor of making meaning from messy, incomplete, but important datasets. But I’m not sure it’s realistic for me to assemble and augment this many datasets every year. And I worry about the groups getting the attention they need to complete this very complex project when there are so many people to check in with. The alternative that makes sense to me is some kind of digital portfolio, in which students create their own examples of multiple kinds of digital work and surround it with critical commentary.
  • Undecorate the Christmas tree a little bit. As the years have gone by, I’ve tossed more and more assignments into the syllabus. I don’t think the class is more work, necessarily, but there are a lot of things to turn in and a lot of dates and assignments to remember. It’s too much. I think I could cut the blog post assignments down to just a few and simplify the final project a lot.
  • Think about asking students to complete technical modules at home. I usually like to be with students when they’re learning a new technical skill, but that wasn’t always possible. On a few occasions, I had students walk through (very carefully written) tutorials themselves at home, and they seemed to do OK. I think I could do more of this, as long as I’m cautious.
  • Get a different classroom! I don’t think we actually have a great classroom for a group this size at UCLA, but what I imagine would work well is a large room, with lots of space for my TAs and me to circulate, and multiple large tables where students can sit in groups. Multiple screens would be awesome, so that students could quickly draft and share work, but honestly, I’d happily take a large, empty room with tables and chairs, preferably one that we don’t have to set up and tear down every time (ugh).

Other miscellaneous thoughts about this year’s DH101:

  • As part of their annotated bibliography, each student needed to not only write a blurb about each of their sources, but actually obtain the book or article and submit a photo of themselves holding it. We called those “shelfies.” I’m just tired of reading book summaries that are obviously pulled from the snippets students could read on Google Books. This seemed to work really well. Students STRUGGLED to find their sources, as I expected, and waited too long, as I expected, but a number of students told us that this was the first time they’d located or checked out a book in their college career. As we did last year, we held a “research-a-thon” to help get them going on this, and while I made a mistake by holding the event during midterms week, the librarians and I were able to personally escort a number of students up to the stacks and help them read a call number.
  • Students took to network analysis more than they have in years past, perhaps because a number of them were simultaneously taking an SNA class in the Sociology department. I’m happy with the lesson plan I’ve developed to introduce network analysis, which uses a questionnaire about their favorite books, movies, and musicians to develop a homophilic network graph showing how they’re all connected; there’s a rough sketch of the underlying idea just after this list. (I recorded last year’s network analysis lecture and you can see it here.)
  • For the last couple of years, it’s been clear that the hardest thing about the final assignment for my students is getting started — understanding what kind of work is necessary to start asking questions of a dataset, and how to alternate between secondary research and data analysis. The DataBasic suite really helps with this, but I think they could use step-by-step instructions to get started. Perhaps I’ll take that on at some point.
  • I just did not have the wherewithal (or the funding) to schedule a pizza-dinner hackathon, as I’ve done in previous years, but I found a simple alternative that they seemed to appreciate. I convened an evening meeting to which each group had to send at least one representative and checked in with each group that way. Then, at the same time every week, I invited each group to sign up for dedicated help with me. It worked well and allowed me to work intensively with a few groups.
  • You probably guessed this, but with a lecture this size, you need to make every announcement multiple times and send email followups, and even then, students will plead total ignorance.
  • For the last few years, I’ve started off the class with a reading from Hayden White, about the essential unknowability of history. This year I switched it up and had them read the first chapter of Michel-Rolph Trouillot’s Silencing the Past, in part because Trouillot explicitly deals with power and race in ways that White doesn’t. They really struggled to understand Trouillot, but it seemed to make an impression on them, too.
  • Of the DH projects we examined together, the one they all seemed to like the most was Gabriela Aceves Sepúlveda’s [Re]Activating Mama Pina’s Cookbook. I think they liked its consideration of the materiality of data, the questions about what “counts” as data, and the beautiful design. Also, partly because so many of my students are people of color themselves, they appreciate it when I can pull in projects from and about other people of color.
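
If you’re curious what that homophily lesson boils down to, here’s a minimal sketch of the idea. It’s not my actual lesson material, and the column names and sample answers are invented, but it shows how a favorites questionnaire becomes a two-mode graph and then a student-to-student network:

```python
# A sketch only: the column names and sample answers are invented.
import pandas as pd
import networkx as nx
from networkx.algorithms import bipartite

# One row per (student, favorite) pair, e.g. exported from a questionnaire
responses = pd.DataFrame({
    "student":  ["Ana", "Ana", "Ben", "Ben", "Cruz"],
    "favorite": ["Beloved", "Spirited Away", "Beloved", "Persepolis", "Spirited Away"],
})

# Two-mode (bipartite) graph: students on one side, favorites on the other
B = nx.Graph()
B.add_nodes_from(responses["student"].unique(), bipartite="students")
B.add_nodes_from(responses["favorite"].unique(), bipartite="favorites")
B.add_edges_from(responses.itertuples(index=False, name=None))

# Project onto the student side: two students are linked when they share a
# favorite, and the edge weight counts how many favorites they share
students = set(responses["student"])
homophily_graph = bipartite.weighted_projected_graph(B, students)

for u, v, data in homophily_graph.edges(data=True):
    print(u, "--", v, "shared favorites:", data["weight"])
```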

Data Packages for DH Beginners

The quarter is off and running again at lightning speed. At UCLA, we’re on the quarter system, and things move fast — just 10 weeks to get through all your material. I’m teaching DH101 again this year, and, as usual, it’s a race against the clock. The profile of my students changes a bit every year, but the typical student who enters my DH101 classroom has facility with Word, PowerPoint, maybe Excel, maybe some of the Adobe suite, but not a ton of other computer stuff. By the end of the quarter, my goal is to get them working with and thinking critically about structured data, data cleaning, data visualization, mapping, and web design.

I’ve written about this before: working in groups, my students are assigned a dataset at the beginning of the quarter. They learn how to work with it as the quarter progresses, doing a lot of secondary contextual research, interviewing an expert about it, manipulating the data, and finally building a website that makes a scholarly humanistic argument with the support of the data. You can see the mechanics of this on my course website.

People often ask me about the data I use, and indeed, that is a story in itself. I have 88 students this year, and since I don’t like any group to have more than seven people in it, I have 12 groups, each of which needs a dataset. (Really, some of them can share the same dataset; I don’t know why I get weird about this.) And they can’t just use any dataset. In fact, most of the data out there is inappropriate for them.

Here is what I look for in a dataset for my students:

  1. It has to be a CSV (or able to be wrangled into a CSV). My beginners want to be able to double-click on their dataset and see…something that they can work with. CSVs are great because they open in Excel, which is familiar to most students and allows them to immediately start doing things like filtering and simple manipulation. Plus, you can drop a CSV into almost any visualization tool. I can use a relational database, but I usually just give the students the spreadsheet that results from a query, since I just don’t have time in the quarter to teach them about more complicated data structures. Likewise, if a dataset is XML, I’ll just flatten it. But I prefer not to have to deal with this because, like I said, 12 datasets. (There’s a rough sketch of this flattening-and-trimming step just after this list.)
  2. Around 2,000 records is ideal. Here’s why: I want the dataset to be big enough that it’s too labor-intensive for the students to manipulate it by hand, but not so big that it breaks Excel. Really, I can work with bigger sets, too, but students do tend to get very anxious about working with datasets that big. Any number of fields is fine (actually more is better) because students understand fairly quickly that they can choose which fields to examine.
  3. It has to be…humanities-ish. You and I probably know that one could make a humanities argument about municipal water data, or public health information, but it takes a little bit of sophistication to get there. The most “natural” kind of analysis for those datasets involves urban-planning or public-health questions, and it’s hard to steer students away from those and toward the kind of open-ended humanities questions I want them to pursue. It’s far easier if the data is about art, books, movies — subjects that are the traditional province of the humanities.
  4. It’s nice if it’s something they care about. I have confidence that my students will eventually become interested in any subject, once they really dig into it, but I can forestall a lot of grumbling if I can give them a dataset that’s immediately compelling to them. Things they like: fashion, food, performance, books from their youth, cartoons, comic books, TV, movies.
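
For the curious, here’s roughly what I mean by “wrangling into a CSV.” This isn’t a script I actually hand out, and the file names are hypothetical, but it’s the general shape of the flattening and trimming I end up doing each summer:

```python
# A sketch only; the file names are hypothetical, and this assumes the XML
# is simple and record-oriented enough for pandas to flatten on its own.
import pandas as pd

# pandas (>= 1.3, with lxml installed) can flatten record-style XML into rows
records = pd.read_xml("source_data.xml")

# Big enough to be too tedious to hand-edit, small enough not to break Excel
if len(records) > 2000:
    records = records.sample(n=2000, random_state=0)

# utf-8-sig keeps Excel from mangling accented characters on double-click
records.to_csv("group_dataset.csv", index=False, encoding="utf-8-sig")
```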

You can see this year’s datasets at the bottom of this page. I do not just give my students their datasets in raw form. I cut the sets down to an appropriate number of records, if necessary, and then I give them the dataset along with a “project brief,” which contains:

  1. Information about the provenance and composition of the data.
  2. The name and contact info of an expert on that subject who has agreed to allow my students to interview them.
  3. The names and contact info of librarians who can help them.
  4. The name and contact info of UCLA’s mapping specialist.
  5. Two or three secondary sources to get them going on their research. I also teach them how to citation-chain.

Here is an example of a “data package,” with the contact info removed.

If you’re thinking this is kind of an absurd amount of work for the instructor, you’re right. I really feel the students need this apparatus around their dataset, but I end up spending a good chunk of my summer hunting down data, persuading friends (and strangers) to serve as subject experts, and researching secondary sources.

Even with all of this scaffolding, students get very anxious about the project assignment, just because it’s so new to them. I’ve learned to expect it, to warn them that they’ll feel anxious about it, and to reassure them that if they’re hitting project milestones, they’ll get to the finish line on time, even if they feel at sea.

Sorry for the dashed-off blog post; I’ve been meaning to write about this for some time and finally had a few (just a few!) minutes!

New job, same school! (Same office, even!)

[Photo: the view from my office window, with trees and blue sky.]
Never gonna give you up, beloved office window! I fought hard for this! (Used to be Chris Kelty’s! [I did not kick him out.])
I can never keep my mouth shut, so this announcement already made the rounds on social media, but I’m really excited about my remodeled job title: as of July 1, I’m an assistant professor of Information Studies and Digital Humanities (still at UCLA!). For those who care about such things, the appointment is 100% in IS, but I’ll continue to do half my teaching for the DH program.

I’m really happy. I’ve always loved my job at UCLA, but over the last few years, I’ve grown increasingly invested in a couple of research projects: the first, on the way data works under supply-chain capitalism; and the second, on what “data” means for the humanities more broadly. My new position will give me the time and resources I need to work on these projects. I’ve always felt very close to the i-School at UCLA — both to the people and to the questions they’re asking. It’s a really good fit.

When I came to UCLA for my job interview, Todd Presner, who became my boss, told me that the job I was interviewing for seemed to make sense for someone to hold for about five years. Five-and-a-half years later, that seems about right to me. I wasn’t really sure where I’d go after that time elapsed; I came close to moving into a higher-level administrative job, but in the end, I felt pulled to research and teaching.

I’m just really, really glad I had that option, and really glad I get to do it at UCLA, a place I genuinely and cheesily care about a lot. I’m very grateful for the mentors I’ve had to help me figure out how to navigate all this, and especially for Todd (who hates it when I say this).

Best of all: UCLA will be replacing “my” position as coordinator of the DH program, although I know they won’t be looking for a Miriam clone. I’m extremely excited about what it will mean to bring in someone with different and fresh ideas about DH. You can see, I think, that this is a pretty significant investment in DH at UCLA, and I think it will be good for all of us.

New tutorials on network analysis with Cytoscape

[Screenshot: the Cytoscape interface, with a button pane on the left and a network graph on the right.]
I find the Cytoscape interface more intuitive than Gephi’s, although in both cases, you need to have a basic understanding of key network-analysis terms.

For some reason I got it into my head to write a bunch of tutorials on using Cytoscape for network analysis. They’re now all up on Github. (I’ve been moving to Github for tutorials because they’re easier to update there.)

I started writing these for the students in my spring-quarter class and, even though the class is over, I’ve been adding to them compulsively. They’ll take you from zero to an interactive, web-based network graph, with stops along the way for projecting a two-mode network to a one-mode network and working with node attributes. (If you don’t know what any of that stuff means, they explain that, too.)

There’s a bit of a Gephi-versus-Cytoscape battle right now among people who do network analysis. I actually started out on Cytoscape, only because I found it slightly more intuitive, and switched to Gephi when I discovered most people used that. But in recent years, I’ve had a really hard time dealing with Gephi. First, there was the Legendary Java Problem, and although the new version is purportedly more stable, I actually just cannot get it to work on my Mac and have frankly kind of lost the will to keep trying.

Cytoscape is Fine. It’s designed for scientists, really, and other people who care very much about statistical measures of networks, which, to be honest, I don’t really care that much about. (I don’t think most humanists trust these measures anyway, so I don’t see much point in hammering on them.) I find Cytoscape’s web service, CyNetShare, to be pretty janky-looking, but … you can interact with the network diagram, so that’s good, I guess.

To be honest, I’ve been slowly making the switch from Gephi/Cytoscape/etc. to R’s igraph package, and to D3 for displaying networks on the web, just because they’re so much nicer looking. One thing I like about Cytoscape is that after you’ve measured various aspects of your network, you can export JSON that’s set up specifically for D3’s popular force-layout network.
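
If you haven’t run into that D3 format before, it’s the standard “node-link” JSON that the force layout expects. Here’s a toy sketch of what that structure looks like, generated with networkx rather than exported from Cytoscape (so, an approximation of the shape, not Cytoscape’s own output):

```python
# A toy example of the node-link JSON that d3-force expects.
import json
import networkx as nx
from networkx.readwrite import json_graph

G = nx.Graph()
G.add_edge("Austen", "Eliot", weight=2)
G.add_edge("Eliot", "Woolf", weight=1)

# Produces a dict with "nodes" and "links" keys, which most D3 force-layout
# examples load with d3.json()
data = json_graph.node_link_data(G)

with open("network.json", "w") as f:
    json.dump(data, f, indent=2)
```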

When I was visiting Stanford last winter, I got to see a preview of a network analysis tool that the Humanities + Design team is building, and I really liked the way they placed the emphasis on exploration and discovery, rather than statistical measures. I’ll be looking forward to seeing the release of that tool (I think it’s called Idiographic?), since I do feel that humanists have different interests when it comes to networks than scientists or social scientists.

New book chapter

I’m really proud to have a new chapter in an open-access volume edited by Eric Hoyt and Charles Acland called The Arclight Guidebook to Media History and the Digital Humanities, published by the UK press REFRAME. The chapter, which is called “How is a Digital Project Like a Film?”, is really about data and narrative. What does it mean to tell stories with data? On what basis can we call data-based narratives true, and where do they necessarily lie? And what role does the interface play in all of this?

The full TOC includes lots of great stuff, including pieces by Deb Verhoeven, Haidee Wasson, Greg Waller, and Lea Jacobs. I think it does a nice job bridging the gap between traditional film studies and other forms of scholarship, and I’m very pleased to be included.

Materials on Image-Mining for Medical History

Last week, I taught the image-mining portion of the Images and Texts in Medical History workshop at the National Library of Medicine. I am far from an expert on OpenCV, the open-source computer-vision library. But as usual, that didn’t stop me from attempting to teach it.

The materials I created for the workshop include detailed instructions on how to use OpenCV to extract images from scanned journal pages (using a script written by Chris Adams), as well as a breakdown of how to use the Python OpenCV library to take the average color of an image. I’ve also included links to my favorite resources on OpenCV and computer vision in general. (My experience has been that there are a lot of really terrible tutorials out there, so I’ve tried to link only to those that are actually helpful.)
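
To give you a sense of how simple the average-color step is at its core, here’s a stripped-down version of the idea. It’s not the workshop script itself, and the file name is invented:

```python
# A minimal average-color example; the file name is made up.
import cv2

img = cv2.imread("journal_page.jpg")  # OpenCV loads images as BGR arrays
if img is None:
    raise FileNotFoundError("couldn't read journal_page.jpg")

# cv2.mean returns per-channel means as (B, G, R, alpha); drop the unused alpha
b, g, r = cv2.mean(img)[:3]
print(f"average color (R, G, B): ({r:.0f}, {g:.0f}, {b:.0f})")
```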

Ben Schmidt taught the text-mining portion of the workshop, and his materials are really great. His handouts in particular are concise, opinionated rundowns of the strengths and weaknesses of various forms of text analysis.

In preparation for the workshop, Ben and I created a virtual machine, provisioned via Vagrant with all the dependencies and data the participants needed. If you’d like to install the VM, it has everything you need for both Ben’s portion of the workshop and mine, and the instructions should be pretty clear. (The VM is based on one that Andrew Goldstone created for his Literary Data class.)

The process of getting the VM installed on participants’ own computers was … complicated. We learned many things about Vagrant and VirtualBox, including the fact that Windows 7 and 8 don’t come with any way to handle SSH.

It was definitely the most technically complex workshop either of us has attempted (and to a group of about 50!). It wasn’t exactly hitch-free, but it was really satisfying to see participants get excited about computer vision, and to talk about ways they might use these techniques in their own research.

Money and Time

This is an edited version of a talk I gave at UC Irvine on February 5, at a symposium organized by Peter Krapp and Geoffrey Bowker.

Digital humanities, as we all know, is sexy right now. It seems to be everywhere, including the New York Times, the New Republic, and the Atlantic. Mellon’s funding it, the NEH is funding it, ACLS is funding it, we’re telling our grad students to prepare to work in it. Digital humanities initiatives or centers are popping up everywhere, and what a luxury to be part of a field that’s so frequently mentioned that people create angry memes about it.

At UCLA, I run and teach in our digital humanities minor and graduate certificate, which started four years ago and now enrolls about 60 undergraduates and 30 graduate students. Students are genuinely excited about DH, and it is a total blast for me to work with them to chart out the possibilities of this expanding field.

University departmental structures aren’t always congenial to interdisciplinary work, but students seem to get it right away. They’re really fascinated by the basic questions DH raises about knowledge organization, history, and epistemology, and I love the way they push the field’s boundaries just by asking the questions that come most naturally to them. I’ve actually felt extremely lucky to be part of a field that’s growing so quickly, and even to be in a position to help chart its direction.

But all this excitement and energy might conceal some less exciting ground truths. I have been spending a ton of time on the road lately, meeting with people who are starting DH centers and talking with people who are keeping initiatives and centers going. And they are tired. They are all really tired.

And once you drill down into the specific staffing and labor configurations of these DH initiatives, you’ll begin to see why. So many of these programs are staffed entirely by postdocs, perhaps with a faculty director who spends a portion of his or her time running the center.

In other cases, a DH initiative consists of a single librarian, who’s probably also responsible for liaising with several academic departments. If a DH initiative has programmers, they’re usually what you’d call “matrixed,” meaning they have multiple bosses, to whom they have to account for their time in exquisite detail. Or if the DH activity is coming from faculty, it’s from people who have to use every ounce of their ingenuity to scare up resources to support their students and their research.

Why is this widespread short-staffing happening? Some of it is probably just because DH is new and untested, and it is notoriously difficult to launch new, interdisciplinary programs at universities, especially big ones like most of the UCs.

And DH has had the bad timing to emerge during a moment of particular budget austerity, at least when it comes to paying for academic programs. (Whether that’s coincidental is another, much longer discussion.) Launching a program with a two-year postdoc is clearly absurd and shortsighted, but it’s nevertheless become standard operating procedure for many places looking to get a program going. So, in a way, many of these conditions are just typical of our corner of academia at our current moment.

I wonder, though, if part of the problem might also be that our institutions have absorbed some of the widespread rhetoric about the immateriality of digital labor. We’ve come to think that stuff that you do on a computer can be done anywhere, anytime — and thus everywhere, all the time, with no particular material requirements.

We’re used to getting digital stuff for free, from Facebook to iPhone apps, so perhaps we think digital academic programs shouldn’t be any different. People built Linux for free; why shouldn’t they donate their time to build a DH program?

The wide availability of free software and the general enthusiasm about all things digital have probably contributed to this notion that all we need to make a DH center is a laptop and a postdoc. For my part, I’ve optimized absolutely everything about my job I can possibly optimize, from text-expanders and email auto-filters to IFTTT pipelines to automatic appointment-booking software. We’re all lifehacking, right? And I still feel like I’m teetering on the brink of burnout.

Don’t worry, this isn’t really about me. I mean, we should all be concerned with every laborer’s working conditions, and we should all be concerned about what’s happening with academic labor. I suspect we all are. But I actually want to make a somewhat different argument here, one that has more to do with the possible futures of both of our fields.

Recently, I was talking to a group of our grad students about the kinds of work people are doing right now in digital humanities, and they asked some uncomfortable questions.

Take digital mapping. Postcolonial theorists have known since forever that the Mercator projection enshrines Western European, Cartesian models of space, when in fact there are many different ways of understanding geography. Why does every DH project use the Mercator projection?

Or take network analysis software. The tools we tend to use, like Gephi and Cytoscape, are great at measuring centrality and clustering coefficients. But what about some of the most basic things a humanist might like to do, like transforming the network diagram to reflect the perception of a different historical actor? That’s just not a possibility for us. Why is that?

Why? It’s simple. Because we’re relying on tools and infrastructure built for industry — or, in the best cases, for scientists. Which makes a certain amount of sense; one doesn’t want to reinvent the wheel. But it’s also had material effects on the kind of work we can produce, and the horizons of possibility our work can open. When we choose not to invest in our own infrastructure, we choose not to articulate a different possible version of the world.

In fact, this state of affairs is already very well-documented for edtech. By outsourcing development of key components of educational technology to for-profit vendors, we’ve chosen to invest in the development of software companies that mine our students’ data, encourage us to spy on their work, and lock us into a closed ecosystem of for-profit technology whose philosophy bears very little resemblance to the kinds of teachers we started out wanting to be.

And for all of the excitement about grant funding opportunities and enthusiastic administrators, the actual state of DH funding is less flush-with-cash than boom-and-bust. An NEH grant, no matter how prestigious, doesn’t secure a salary for very long. A postdoc, no matter how smart and committed, isn’t going to singlehandedly change campus culture.

It’s one thing to get an awesome project going; it’s another thing to pay for the routine maintenance necessary to keep it up and running. Recently, we saw the closure of HyperCities, UCLA’s well-known mapping platform for humanistic projects. People were tired of piecing together grant funding to keep it lurching along. Meanwhile, Google decided to shut down its support for the Google Earth browser plugin, so … it’s gone. That’s what happens when we don’t invest in our own infrastructure.

Don’t get me wrong, I get tired all the time of trying to wrestle with the exhausting bureaucracy of a public school, and I’ve turned to private-industry solutions plenty of times. Most recently, I’ve given up on trying to control my own space on university servers and started encouraging my students to purchase their own space from hosting companies for class projects.

It seems like the reasonable thing to do, since Lord knows I’ve had my stuff written over and erased from university servers more times than I can count. But I’m also aware that by choosing not to invest in support for this kind of thing, we’re relinquishing all of this work to private servers. We’ll never get it back again.

Last year, UCLA announced an app competition. The contest promised a $5,000 prize for the best app to, quote, promote “UCLA’s mission of education, research and service.”  I’m 100% sure that the offices that sponsored this contest had the best intentions, and I salute the winners. But this is not support. This is not research support. How long does it take to build an app? How many people does it take? How is the app going to get updated once the contest is over? What message are we sending our students by telling them they should work for free? Has anyone thought this through?

We want to believe that we can be agile and innovative, like Silicon Valley says it is, by making DH run with short-term grants, app contests, and temporary labor. We want to have a sort of Uber-style sharing economy for DH research. But this is not how one supports careful, enduring scholarship and teaching.

Why does digital humanities look the way it does right now? I think the boom-and-bust cycle of grant-chasing and temporary funding has had a huge but largely unacknowledged effect on the kind of scholarship we’re producing. If we want to produce truly challenging scholarship and keep our best scholars from burning out, we need to pressure our institutions to, frankly, pay up. You can optimize, streamline, lifehack, and crowdsource almost everything you do — but good scholarship still takes money and time.