Lettie Conrad, Michelle Urberg, Jennifer Kemp

This blog post is from Lettie Conrad and Michelle Urberg, cross-posted from The Scholarly Kitchen. As sponsors of this project, we at Crossref are excited to see this work shared out.

The scholarly publishing community talks a LOT about metadata and the need for high-quality, interoperable, and machine-readable descriptors of the content we disseminate. However, as we’ve reflected on previously in the Kitchen, despite well-established information standards (e.g., persistent identifiers), our industry lacks a shared framework to measure the value and impact of the metadata we produce. In 2021, we embarked on a Crossref-sponsored study designed to measure how metadata impacts end-user experiences and contributes to the successful discovery of academic and research literature via the mainstream web. Specifically, we set out to learn if scholarly books with DOIs (and associated metadata) were more easily found in Google Scholar than those without DOIs.

Initial results indicated that DOIs have an indirect influence on the discoverability of scholarly books in Google Scholar – however, we found no direct linkage between book DOIs and the quality of Google Scholar indexing or users’ ability to access the full text via search-result links. Although Google Scholar claims not to use DOI metadata in its search index, the results of our mixed-methods study of 100+ books (from 20 publishers) demonstrate that books with DOIs are generally more discoverable than those without DOIs. As we finalize our analysis, we are sharing some early results and inviting input from our community.

This study was designed to evaluate metadata impacts and benefits to users. What relevant lessons can we glean from this exercise? What changes might book publishers consider based on the outcomes of this study?

Background on the study

Given its popularity with a range of stakeholders in our industry, we set out to measure metadata impacts on discoverability in the mainstream web – namely, Google Scholar. Our test method and analysis rubric were developed based on our own information-user research, in particular how readers search for and retrieve scholarly ebooks, as well as published studies about academic information experiences and research practices. We rated the search performance of more than 100 scholarly books using preset test queries (two for each title). The books tested in this study came from publishers of all sorts and sizes, and represent both monographs and edited volumes from a range of fields; some were open access and others were published under traditional licensing models. We developed and executed known-item test searches designed to simulate common researcher practices. Heuristic analysis of the search results was used to rate search performance on a 5-point scoring rubric, designed to measure the degree of friction in locating the book in question.