Humans Reading “Robots Reading Vogue”

Readers familiar with the “I forced a bot” meme format will probably assume they’re being set up when I start explaining the digital humanities project “Robots Reading Vogue.” But instead of producing a funny (and likely fake, as explained on Gizmodo) script, the “robots” in this case analyze historical trends in the visual and textual content of Vogue. Working with Lindsay King and Peter Leonard of Yale University and their student researchers, the robots have shared many insights into the magazine’s history while helping scholars and students explore potential projects in the digital humanities.

Virtual Vogue

While Vogue has been publishing since 1892, the source material for “Robots Reading Vogue” only became available in 2011, when ProQuest launched The Vogue Archive, an online database containing digitized files of every issue of the magazine. The collection spans 125 years and contains “over 2700 covers, 400,000 pages, [and] 6 TB of data.”

Beyond the magazine’s extensive run, ProQuest includes additional content in this archive that makes it even more appealing: every file is marked up as both text and image. The indexing this markup provides is what makes it possible for the “robots” to read Vogue.
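The post doesn’t show what ProQuest’s markup actually looks like, but a toy example can illustrate why it matters. In the hypothetical record below (the element names are invented for illustration and are not ProQuest’s real schema), the OCR text, the page-image reference, and the item metadata travel together, which is exactly what lets a program filter and “read” pages:

```python
import xml.etree.ElementTree as ET

# A hypothetical record -- the real ProQuest schema is not described in the
# post, but pairing OCR text with a page-image reference and metadata might
# look something like this.
record_xml = """
<record>
  <title>Paris Openings</title>
  <type>Article</type>
  <date>1925-03-01</date>
  <page image="vogue_1925_03_01_p042.jpg">
    <text>The silhouette this spring is long and slim...</text>
  </page>
</record>
"""

record = ET.fromstring(record_xml)

# Because the text is indexed alongside its metadata, a program can select
# and read pages without a human paging through scans.
if record.findtext("type") == "Article":
    date = record.findtext("date")
    for page in record.iter("page"):
        print(date, page.get("image"), page.findtext("text"))
```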

Because this collection comprises copyrighted material and is hosted on a paid platform, it is not open access or freely available to interested digital humanists. But scholars and students should check whether their institution or a local library offers free access to ProQuest.

Print Processing

The experiments on “Robots Reading Vogue” use a variety of processing techniques, depending on the central question(s) of each experiment and the types of computation needed to answer them.

[Screenshot of the homepage of the “Robots Reading Vogue” website displaying links to each of the project’s 10 experiments.]
In addition to these 10 experiments (as “Robots Reading Vogue” calls them), the site also contains 3 student projects on additional topics.

For example, “Cover Averages” analyzes cover image data to explore how the magazine has visually represented its content over time. The scholars started by using an image editing program to stack and hand-align layer upon layer of Vogue covers, ensuring that the results would not be skewed by formatting variations in the archive scans. Once the team had compiled all the covers in a given year, the composite of the overlays “generates a mean RGB value for each pixel.” The post does not explain how one might measure or learn more about those values directly, but the results can be observed in the image the process creates. Although manually editing images is time consuming, this experiment does not seem particularly computationally complex.
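The team doesn’t share its code, but the averaging step it describes is straightforward to sketch. Here is a minimal Python version, assuming the covers have already been hand-aligned and using Pillow and NumPy (tools chosen for illustration, not necessarily what the project used) to compute the per-pixel mean RGB value across a year’s covers:

```python
import numpy as np
from PIL import Image

def average_covers(paths, size=(400, 540)):
    """Compute the per-pixel mean RGB value across a set of cover scans.

    Assumes the scans have already been hand-aligned; resizing to a common
    size only guards against small variations in scan dimensions.
    """
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
        for p in paths
    ])
    # Averaging over the stack axis yields one composite image: each pixel
    # is the mean RGB value at that position across every cover in the set.
    mean_pixels = stack.mean(axis=0)
    return Image.fromarray(mean_pixels.astype(np.uint8))

# Hypothetical usage, one composite per year of covers:
# average_covers(["1950_jan.jpg", "1950_feb.jpg"]).save("1950_average.png")
```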

On the other hand, experiments like the “n-gram Search” require far more computational steps to generate results. The “n-gram Search” lets users analyze the occurrences of different words throughout the archive over time. Since users can change the search terms and control for factors like author and genre, the corpus must be organized in a way that links each word occurrence to the other contextual criteria provided in the digital text and metadata. The scholars on this experiment do not describe their process, but they likely organized the text and associated metadata into a database using a combination of Python and SQL. The structure provided by the database would then make it possible for the “robots” to run specific queries and return results within those parameters. “Robots Reading Vogue” uses the Bookworm platform (the “robot” here) to perform these computations and display the results.
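To make that hypothesized pipeline concrete, here is a minimal sketch of the general idea: each word occurrence becomes a database row carrying its contextual metadata, and an n-gram query reduces to a filtered count per year. The table layout and labels below are invented for illustration; Bookworm’s actual internals may differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tokens (
        word   TEXT,     -- one word occurrence from the digitized text
        year   INTEGER,  -- issue year
        genre  TEXT,     -- e.g. 'article', 'advertisement' (hypothetical labels)
        author TEXT
    )
""")

# In a real pipeline, a script would tokenize each page's text and insert
# one row per word, carrying along the item's metadata.
conn.executemany(
    "INSERT INTO tokens VALUES (?, ?, ?, ?)",
    [("pants", 1965, "article", "anonymous"),
     ("pants", 1972, "article", "anonymous"),
     ("frock", 1925, "article", "anonymous")],
)

# An n-gram query then reduces to counting rows per year within whatever
# contextual filters the user has picked.
for year, count in conn.execute(
    "SELECT year, COUNT(*) FROM tokens "
    "WHERE word = ? AND genre = ? GROUP BY year ORDER BY year",
    ("pants", "article"),
):
    print(year, count)
```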

Digital Displays

Like the processing techniques, the formats used to display results vary from experiment to experiment. Among the simplest displays is “Cover Averages,” which consists of a written explanation of the experiment, the image results of the compiled covers, and images of individual Vogue covers. This format is straightforward but lacks the dynamic visual elements found in many other experiments on the site.

Among the more dynamic displays are animations, manipulable charts, and other interactive elements. “Slice Histograms,” an experiment looking at cover color trends, features an animated video that quickly cycles year by year through slices of representative cover samples to illustrate the changes in saturation over the last century. “Advertisements” uses side-by-side graphs to compare appearances of ads in the magazine over time. Clicking on simple graphics brings up different categories of advertisements and individual charts for relevant companies.

[The animated video displaying results from “Slice Histograms.”]
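The description above suggests how a single frame of that video could be built: squeeze each cover into a thin vertical slice and lay the slices side by side, one frame per year. Here is a rough Python sketch with Pillow; the slice width and layout are guesses for illustration, not the experiment’s documented parameters.

```python
from PIL import Image

def year_slice_frame(cover_paths, slice_width=8, height=540):
    """Squeeze each cover into a thin vertical slice and place the slices
    side by side, producing one frame of a slice histogram.

    Assumes cover_paths holds the cover scans for a single year.
    """
    slices = [
        Image.open(p).convert("RGB").resize((slice_width, height))
        for p in cover_paths
    ]
    frame = Image.new("RGB", (slice_width * len(slices), height))
    for i, s in enumerate(slices):
        frame.paste(s, (i * slice_width, 0))
    return frame

# Saving one frame per year and stitching the frames together (e.g. with
# ffmpeg) would approximate the cycling animation described above.
```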

Whether they are interested in fashion, linguistics, history, computer science or any of the other numerous fields that would find interest or inspiration in Lindsay King and Peter Leonard’s experiments, visitors to the “Robots Reading Vogue” site have plenty of examples to explore.

Author: Michelle
