Rise of 'Altmetrics' Revives Questions About How to Measure Impact of Research


Steven Roberts, an assistant professor at the U. of Washington who studies how environmental change affects shellfish, tracks social-media metrics to see how his research is used online.

Steven B. Roberts's 103-page tenure package features the usual long-as-your-arm list of peer-reviewed publications. But Mr. Roberts, an assistant professor at the University of Washington who studies the effects of environmental change on shellfish, chose to add something less typical to his dossier: evidence of his research's impact online.

He listed how many people viewed his laboratory's blog posts, tweeted about his research group's findings, viewed his data sets on a site called Figshare, downloaded slides of his presentations from SlideShare, and otherwise talked about his lab's work on social-media platforms. In his bibliography, whenever he had the data, he detailed not only how many citations each paper received but how many times it had been downloaded or viewed online.

The strategy was part of "an attempt to quantify online science outreach," he explained in his promotion package.

Mr. Roberts can't say for sure that including the digital footprint of his research—captured in part with alternative metrics, or "altmetrics," like those listed above—helped him in his bid for tenure. But it certainly didn't hurt. He won a promotion to associate professor in the School of Aquatic and Fishery Sciences at the university's College of the Environment.

Adding altmetrics to CVs and dossiers may not be common yet. But interest in altmetrics is growing fast, as scholars begin to realize that it's possible to track and share evidence of online impact, and publishers and new start-up companies rush to develop altmetric services to help them document that impact.

Matthew Ryan Williams for The Chronicle

Mr. Roberts cited "altmetrics" like tweets about his research group's findings as part of his successful tenure application.

The term "altmetrics" has only been around since 2010, when Jason Priem, a doctoral candidate at the School of Information and Library Science at the University of North Carolina at Chapel Hill, first used it in, fittingly enough, a tweet. That led to aninfluential manifesto written by Mr. Priem and three other researchers. which pointed out the limitations of traditional filters of quality like article citations and the journal impact factor. 

Those take months or years to bubble up; altmetrics can be collected fast, letting researchers see, almost in real time, how an article or data set or blog post is moving through all levels of the scholarly ecosystem.

That appeals to researchers like Mr. Roberts, interested in sharing scholarship quickly and openly to speed the flow of ideas, in keeping with the philosophy of the open-science and open-access movements. Even his lab's research notebooks are posted online, so colleagues can see one another's work as it progresses.

But skeptics and some observers wonder whether blog posts and tweets and other social-media activity are sophisticated and reliable enough to capture true impact, which is a slippery concept to begin with.

Nick Scott, digital manager at the Overseas Development Institute, observed in a post on a London School of Economics and Political Science blog last December that online reach and real impact—which he defined as "change in the world"—were not necessarily the same thing. "How do we compare tweets, Facebook likes," and other uncertain votes of confidence? he asked.

Some also worry that altmetrics can be easily gamed. A much-talked-about paper published last year tested how difficult it was to manipulate the metrics in Google Scholar, a free, much-used service that compiles citation data for academic output, potentially competing with commercial bibliometric databases like Thomson Reuters's Web of Science and Elsevier's Scopus. The article's authors created six papers by a fake author, uploaded them to a Web site, and tracked the resulting citations. They concluded that it's "simple, easy, and tempting" to game the system.

Altmetrics supporters acknowledge that gaming is a risk but point out that any kind of metric is vulnerable to corruption. Journals have been called out over the years for inflating their citation rates and thereby their impact factors, for instance.

More pressing is the question of who controls the sources of these data on scholarly impact online, especially as altmetrics become more sophisticated, reliable, and widespread. That question grew louder this spring when the publishing giant Elsevier bought Mendeley, a popular reference-management platform where scholars store and share articles.

Mendeley is also a hub for group discussions focused on specific research topics and interests—the kind of online activity that altmetrics proponents envision being harnessed as a kind of early-detection system that will pick up on promising new work and trends in a field.

And then there's the threat that altmetrics could be co-opted or misused by institutional assessors inclined to rely on numbers rather than on more nuanced indicators of quality when judging the worth of professors, research groups, or departments. "When I talk to administrators, they say there's a huge pressure to be more quantitative," says Mr. Priem.

Age-Old Debate

The larger conversation about how to measure scholarly impact is probably as old as scholarship itself. Altmetrics use has been most notable so far among scientists and librarians, for whom "quant culture" has long been a fact of life. Jason Baird Jackson, director of the Mathers Museum of World Cultures at Indiana University at Bloomington, says that metrics can be harder for humanists to understand or get behind.

"In many humanities fields, those scholars have intuitions and beliefs about the most important journals," Mr. Jackson says, but they don't know much about impact factors. "They don't know which to be more nervous about," altmetrics or all metrics. "Any kind of metric entails the risk of promoting short-sightedness," he says. "I think the humanists are particularly sensitive to this."

Mr. Jackson invokes predigital conversations that folklorists and museum-based anthropologists have long had about how to measure the scholarly impact of, say, exhibitions or other scholarly output that doesn't fit a traditional academic mold. At Indiana, he has helped lead a series of campus conversations that touched not just on altmetrics but on related issues like how to rewrite tenure-and-promotion guidelines to better reflect shifts in how scholars conduct and share their work.

In the last year, altmetrics has become "a serious matter that people are getting their head around," Mr. Jackson says. "For many of our department chairs, this is a totally new world."

Stacy Rose Konkiel, a science data-management librarian at Bloomington, agrees that what's lagging now is faculty awareness and trust. "Campuswide there's a little sensitivity toward measuring faculty output," she says.

Altmetrics can reveal that nobody's talking about a piece of work, at least in ways that are trackable—and a lack of interest is hardly something researchers want to advertise in their tenure-and-promotion dossiers. "What are the political implications of having a bunch of stuff online that nobody has tweeted about or Facebooked or put on Mendeley?" she asks.

The library at Indiana has been quietly exploring how to do more with altmetrics, operating on the principle that "altmetrics can just be a faster and more reliable way to measure public reaction to output that Indiana faculty have produced," Ms. Konkiel says. But skepticism and the old ways make that a hard sell in some quarters. "The folks I've talked to are like, 'Yes, it does have some value, but in terms of the reality of my tenure-and-promotion process, I have to focus on other things,'" she says.

Publishers Jump In

Publishers overall need less convincing. Some, like the open-access giant PLOS, have well-developed efforts to track usage of articles they publish (often called "article-level metrics"). John Wiley & Sons just started a trial with Altmetric, a publisher-oriented service that collects data from social-media sites and reference managers and creates an Altmetric score that attempts to pull all that information together.

Different sources of data are given different weights; as the Altmetric Web site explains, "a newspaper article contributes more than a blog post which contributes more than a tweet." Altmetric also provides embeddable color-coded graphic representations, called "donuts," that reveal specific social-media uptakes, downloads, or mentions for each article.
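Altmetric's actual weights aren't given here, but the gist of a source-weighted score can be sketched in a few lines of Python. Everything in this sketch is an assumption made for illustration: the weight values and the function name are invented, chosen only to respect the ordering the company describes (a newspaper article counts for more than a blog post, which counts for more than a tweet), not to reproduce the service's real formula.

    # Illustrative sketch only: these weights are assumptions, not Altmetric's
    # real values; they merely preserve the ordering news > blog > tweet.
    ASSUMED_WEIGHTS = {
        "news": 8.0,   # assumed weight for a newspaper or magazine story
        "blog": 5.0,   # assumed weight for a blog post
        "tweet": 1.0,  # assumed weight for a tweet
    }

    def illustrative_attention_score(mentions):
        """Combine per-source mention counts into one weighted score.

        `mentions` maps a source type ("news", "blog", "tweet") to how many
        times an article was mentioned by that kind of source.
        """
        return sum(ASSUMED_WEIGHTS.get(source, 0.0) * count
                   for source, count in mentions.items())

    # One news story, two blog posts, and ten tweets about a single paper:
    print(illustrative_attention_score({"news": 1, "blog": 2, "tweet": 10}))  # 28.0

The point of the weighting is simply that one unit of attention is not worth the same everywhere; the real service folds many more sources, and more nuance, into its score.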

Such feedback can be useful for editors as well as for researchers.

Martijn Roelandse is publishing editor for neuroscience at Springer, one of the largest commercial scientific publishers. He is also a member of Springer's Social Lab, a social-media task force.

Springer publishes more than 3,200 journals, some 325 of them open access, according to Mr. Roelandse. The company "is changing from a sole focus on the journal impact factor to providing multiple metrics" to authors and editors, Mr. Roelandse told The Chronicle in an e-mail interview.

It uses Altmetric for what he calls "social metrics," the nonprofit content-linking service CrossRef to gauge citations, and its own download statistics to get a quantitative sense of how much use an article is getting. For Springer journals' editorial boards, "we now provide in-depth insights on the impact factor, citations, downloads, and social mentions to sketch a broader picture of the journal."

Like many of the people now experimenting with altmetrics, Mr. Roelandse sees them as complementary to the impact factor, not a replacement for it. "Altmetrics are a wonderful means of highlighting those articles that performed very well within a journal," he says. But the impact factor will continue to be a benchmark of journal quality, he adds.

As scholars, librarians, and administrators figure out how to combine altmetrics with traditional measures of reach and quality, altmetrics pioneers have been busy the last few months building tools to serve those different groups. Along with Altmetric, another leader in the new field is Plum Analytics, which is several months into a pilot project with the Smithsonian Institution and the University of Pittsburgh library. Yet another mover in this expanding space is Academia.edu, where researchers can create profiles, upload papers, and track readership and use.

One example of an altmetrics provider is ImpactStory, an open-source, Web-based tool created by Mr. Priem and Heather A. Piwowar, two of the most active leaders of the burgeoning altmetrics movement. (Ms. Piwowar until recently was a postdoctoral research associate with Duke University and the University of British Columbia, studying the availability and reuse of research data.) Professors can add an ImpactStory widget to their own Web pages to get live altmetrics for papers and other research products. "I love it because it's easy, too. It doesn't take much effort," says Mr. Roberts, of the University of Washington.

The widget creates badges that show the different ways a research object—a journal article or a blog post or a SlideShare presentation—has been tapped by users. For instance, a listing on his lab's Web site for a 2012 paper Mr. Roberts co-wrote and published in the open-access megajournal PLOS ONE includes a badge that describes it as "highly saved," with 18 readers adding it to their Mendeley libraries.

That's better than 90 percent of the items indexed in 2012 by the Thomson Reuters product Web of Science, according to the ImpactStory assessment, "suggesting it's highly cited by scholars."

The article, about the development of resources for genomic sequencing of Pacific herring, also did well on the tweet-tracking service Topsy; it was tweeted more times than 97 percent of the items that were indexed in 2012 by Web of Science, "suggesting it's highly discussed by the public."
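The "better than 90 percent" and "better than 97 percent" phrasing describes a percentile comparison against a reference set, in this case the items Web of Science indexed in 2012. A minimal sketch of that kind of calculation, with invented counts rather than any real ImpactStory data or method, might look like this:

    from bisect import bisect_left

    def percentile_rank(value, reference_values):
        """Return the fraction of reference items the given value exceeds."""
        ordered = sorted(reference_values)
        return bisect_left(ordered, value) / len(ordered)

    # Hypothetical reference set: Mendeley saves for articles indexed in some
    # year. The counts are made up purely to show the arithmetic.
    reference_saves = [0, 0, 1, 1, 2, 3, 4, 6, 9, 25]

    rank = percentile_rank(18, reference_saves)
    print(f"Saved more than {rank:.0%} of reference items")  # Saved more than 90% ...

A badge like "highly saved" is then just a threshold on that percentile, which is why the comparison set matters as much as the raw count.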

As that phrasing indicates, altmetrics data can't reveal everything. Mr. Roberts points out that if someone tweets about a paper, "they could be making fun of it." If a researcher takes the time to download a paper into an online reference manager like Mendeley or Zotero, however, he considers that a more reliable sign that the work has found some kind of audience. "My interpretation is that because they downloaded it, they found it useful," he says.

By Jennifer Howard

Article Source: http://chronicle.com/article/Rise-of-Altmetrics-Revives/139557/?cid=wb&utm_source=wb&utm_medium=en