The project I chose to reverse engineer was Robots Reading Vogue, a project headed by the Yale University Library in conjunction with the Digital Humanities Research Department. In search of an already digitized body of text that could boost interest in DH projects, Peter Leonard, Yale’s head librarian for Digital Humanities Research, came across Vogue’s archives with the help of Lindsay King, an arts librarian. Yale purchased licenses to access the database, and in 2014 the team received a grant to continue the project. This digitized archive is the base layer, or source: the material Peter Leonard started his initial research from and the type of information the whole project is built on.
To make the archive processable, the work was scaled back into sub-projects where the information could be more easily manipulated. These included decennial sampling of the covers, a slice histogram of the color patterns used in every issue, and an n-gram search for comparing word usage over time. Going deeper, there was also topic modeling, which clumps together words that are similar to each other and tend to be used concurrently; every issue of Vogue was read automatically by a computer program in order to scan for these words.

Another sub-project gathered information on advertisements, using it to figure out what types of advertisements ran and how frequently they appeared. Another collected statistics about the magazine, including how many copies were sold, how many issues were put out every year, and how many articles appeared each year. All of these can be superimposed to produce different visualizations, such as how many copies were sold every year versus the price of the magazine. Another component of the project was student work, some of which broke down the look of the average model used in the magazine and the types of patterns used during different periods of social change. One image visualization used in the project was an image plot, which shows how colorful certain Vogue covers were at different points in time.

The team also attempted to gather information on what types of materials were used in the magazine. They started off by using R, a statistical programming language, to cluster together different types of materials. This proved much too complicated and confusing, so they scaled back and grouped similar fabrics together to create broader categories, using dendrograms to visualize how similar different materials were to one another. The last sub-project was a playful memo generator that takes words commonly used by editors-in-chief in their memos and makes up new ones. Rough sketches of how the n-gram search, the topic modeling, and the fabric clustering might work follow below.
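The site does not publish its code, but an n-gram search essentially boils down to counting how often a word or short phrase appears in each year's issues and normalizing by that year's total word count. Below is a minimal sketch in Python; the folder name, the file-naming scheme, and the idea of one plain-text file per issue are assumptions for illustration, not the project's actual pipeline.

```python
from collections import Counter
from pathlib import Path

def ngram_counts(text, n=1):
    """Count n-grams (word sequences of length n) in a lowercased text."""
    words = text.lower().split()
    grams = zip(*(words[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams)

# Hypothetical layout: one plain-text file per issue, named by year, e.g. "1955_03.txt".
counts_by_year = {}
for path in Path("vogue_issues").glob("*.txt"):
    year = int(path.stem[:4])
    counts = ngram_counts(path.read_text(encoding="utf-8"))
    counts_by_year.setdefault(year, Counter()).update(counts)

# Relative frequency of a term per year, comparable across years with different page counts.
term = "pants"
for year in sorted(counts_by_year):
    total = sum(counts_by_year[year].values())
    print(year, counts_by_year[year][term] / total)
```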
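Topic modeling, the technique that "clumps together" words used concurrently, can be illustrated with a standard latent Dirichlet allocation (LDA) model. The sketch below uses scikit-learn on a placeholder list of article texts; the project's actual corpus, tooling, and number of topics are not documented here, so treat this purely as an illustration of the idea.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder article texts; the real corpus would be the full run of issues.
articles = ["fur coats and winter fashion ...", "travel to paris in spring ..."]

vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term = vectorizer.fit_transform(articles)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(doc_term)

# Print the top words the model groups into each topic.
vocab = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [vocab[j] for j in weights.argsort()[-8:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```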
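The fabric-grouping step is hierarchical clustering visualized with a dendrogram. The project did this in R; the sketch below shows the same idea in Python with SciPy, using a made-up table of fabric "usage profiles" as hypothetical data, since the project's real feature set isn't described on the site.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

# Hypothetical data: for each fabric, how often it appears alongside a few broad context terms.
fabrics = ["silk", "satin", "chiffon", "wool", "tweed", "denim"]
features = np.array([
    [40, 35, 5, 1],
    [38, 30, 4, 2],
    [35, 28, 6, 1],
    [5, 4, 30, 25],
    [4, 3, 28, 27],
    [2, 2, 20, 35],
])

# Ward linkage merges the fabrics with the most similar profiles first;
# the dendrogram shows which materials group into broader categories.
links = linkage(features, method="ward")
dendrogram(links, labels=fabrics)
plt.tight_layout()
plt.show()
```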
All of this is wrapped up in an interactive website, the top layer, where the visitor can click through each sub-project and explore new topics. The charts, graphs, videos, and images all help the visitor learn about the different topics being discussed.

I completely agree with your analysis! I also wrote my blog post on Robots Reading Vogue and loved your insight as to which computer programs were being used and the pros/cons of each.
I also wrote my post on Robots Reading Vogue, and I like how you kept things concise and organized, with clearly outlined sections for source, processing, and presentation. You covered many different sections and added plenty of detail in the processing part, describing the various technologies and software, which was helpful in understanding the project as a whole.
I really liked how thorough you were in explaining the processing methods! It made me realise how much work went into creating this project, something I had not appreciated when I explored the site myself. I feel that you could have elaborated more on the presentation of the site, but it was nonetheless a largely comprehensive reverse-engineering job.
Hi,
I really appreciate your thorough findings on the processes behind this project. You elaborated and went in depth on how each source was used and how exactly the team was able to split the material apart and compile it back together. I agree with Rebecca that you could have included more information on the presentation of the site itself. However, your post is well written and very informative.
Great job really combing through the website and understanding its sources. Your analysis was insightful and also very straightforward. I had only glanced at the website before, but with the help of your posting I was able to see all the layers the website had. It’s a truly interesting subject and one that I enjoyed learning more about.