Odile Harter, Harvard University
A student arrives at the reference desk, slightly cryptic citation in hand, for help tracking it down. After hitting a couple of dead ends, I explain that the next step is to figure out the appropriate subject database and search there. Her face brightens: she knows the one! Her course had a library session on it. Has she already looked there for the citation? No, she says, “I tried, but I just couldn’t figure out how it worked.”
The colleague who’d led the session is an excellent and engaging teacher. The student herself seemed resourceful, intelligent, and eager to learn. So this was a bright and motivated student who had a citation, knew the name of the best database for finding out more about it, had seen an informative demonstration of that database, and yet was unable to use it to fill in the missing information.
A library search interface can be extremely challenging, especially for students who spend all day in interfaces that are geared toward full-text searching. A print phone book forces you to think about what category a certain business might be under, or to think alphabetically by last name. By contrast, Google and other commercial search engines have conditioned us all to use simple queries and trust the algorithm to deliver the “best” options at the very top of the results list. When we encounter a system that is not organized in a way that immediately makes sense, there is usually a search box we can turn to, or we can find a good-enough alternative in a site that is searchable. Rarely do we have no choice but to read through a list of menus until we’ve managed to figure out how to navigate them. (Store aisles are one of the few examples that come to mind.) There is a subtle but important difference here: when we use a search box, we’re usually thinking about a vernacular rather than a controlled vocabulary. With Google, the fastest way to a satisfactory result is to predict what other people would type in.
It shouldn’t be surprising, then, that students have trouble choosing appropriate databases and constructing optimal search strategies. One particularly troubling finding is the degree to which students depend on relevancy ranking algorithms to do their cognitive work for them. Framed in terms of the “anthropology of algorithms,” Google’s deceptive simplicity takes on a sinister aspect. The phrase is from Andrew Asher, one of the lead investigators in the ERIAL Project, who notes that the secrecy around search engines’ precise recipes for determining relevancy “makes it difficult for students to fully understand the embedded politics of how information is organized and retrieved.” Lucy Holman’s study of the mental models of first-year students at the University of Baltimore similarly revealed a great deal of fuzziness in students’ understandings of how a search engine processes their query. One participant admitted, “I don’t know what it does because I’ve never thought of this; I just type my words in.” Some participants anthropomorphized the search engines, assuming they intuited the user’s information need and reformulated the query accordingly: “None of [the search engines] connect all the words together… they just kind of pull out whatever they feel is the most important word.”
Some of the poor search results our students see are the direct result of their poor search skills, exacerbated by their own unawareness of the deficit. (How often have you seen a student take the first page of search results as proof that his topic is obscure or unstudied?) And the cognitive leaps required to imagine structured data and engage in algorithmic critique present significant obstacles to developing more sophisticated search strategies.
To make matters worse, library systems tend to be really, really complicated. Vendors continue to increase the number of resources included in their aggregated databases, making it even harder to locate appropriate and relevant material in a results list. Compounding this overload of information in individual databases is the sheer number of different interfaces a university library offers up for patrons to navigate. Even something as simple as identifying the link to full text stumps an alarming number of users. I spend all day with structured data, and yet when I want to remember just how confusing the library ecosystem can be, I need only think about how I feel when a research consult takes me into a database I don’t often use. How does this thing work again? What does this one mean when it says “article”?
In other words, databases are extremely challenging. However difficult and foreign we imagine library research to be, it is in fact even more difficult and foreign to the students.
To learn something difficult and complicated, one must be strongly motivated. And yet, there is often a mismatch between the research effort librarians expect of students and the research reward they receive from the professors who grade them. If the assignment asks for, say, 10-15 sources, then a few simple keyword searches in JSTOR, or consulting a professor or other expert, will create just as impressive a works-cited list as a search conducted with low to moderate expertise in the appropriate database. It will also leave the student more time to work on critical analysis of those 10-15 sources, and may ultimately reward the student with a higher grade. The student who came to the reference desk with her partial citation had chosen an efficient strategy.
This is not to say that faculty don’t push students to find better sources, or that there aren’t fantastic partnerships in which the tasks assigned are perfectly calibrated to the actual difficulty students experience with library databases, and in which the course’s learning goals and assessment criteria give students a strong understanding of the worth of the enterprise. Rather, it’s to suggest a shift in emphasis toward the cognitive leaps themselves. Most of the students I teach will never look at the MLA Bibliography again after they’ve left college; it may not even be necessary for their current project. But they will face a lot of complicated interfaces, obscure indexing, and mystery-shrouded results ranking. Perhaps the most useful thing they can learn from me is to be unafraid of the interfaces, deduce the indexing, and question the ranking. Not to balk at a failed search; to locate the help file. And perhaps these skills are more important than the particular database they train on or the sophistication of their first few search queries.
 “Search Magic” (www.andrewasher.net: 5 Dec 2011). See also “Searching for Answers: Student Research Behavior at Illinois Wesleyan University” (co-written with Lynda Duke, in College Libraries and Student Culture: What We Now Know [Chicago: ALA, 2012]).
 “Millennial Students’ Mental Models of Search: Implications for Academic Librarians and Database Developers” (Journal of Academic Librarianship 37:1).
 See, for example, Asher and Duke p. 77.
 Barbara Fister, Julie Gilbert, and Amy Ray Fry noted this alarming trend in 2008: see “Aggregated Interdisciplinary Databases and the Needs of Undergraduate Researchers” (portal: Libraries and the Academy 8:3) for their arguments in favor of the small and intensely curated general-interest index of yore, as well as their surprising findings on librarians’ uncritical attitude toward the quantity of material included in aggregated databases.
 See Emily Singley, “Top 5 problems with library websites – a review of recent usability studies” (emilysingley.net: 1 Oct 2014). Helen Anderson and Sarah Sexstone remark that the “whatever works” method used by students, faculty, and librarians alike is complicated by interfaces that “multiply and change almost daily” (“‘Whatever Works’: Finding Trusted Information,” in Studying Students: A Second Look, ed. Nancy Fried Foster [Chicago: ACRL, 2013]).
 For students’ intentionally “small compass” as well as common pitfalls of assignment prompts, see Project Information Literacy’s 2010 reports, “Truth Be Told” and “Assigning Inquiry.”