As I mentioned in the previous post, what researchers would really like is for blogging, tweeting, and the like to increase readership of and engagement with their work. Social media increase the reach and speed of distribution and also come with a whole new set of measurable data: retweets, likes, bookmarks, access and download statistics, etc. Hopefully these are indicative of, and/or lead to, an increased number of citations and other quantifiable metrics, or at least counterbalance them, since, rightly or wrongly, scholars of all kinds are assessed through metrics like the H-index, generally based on databases such as Scopus and the ISI “Web of Knowledge”.
To evaluate someone’s track record, citations and the various indexes based on them are used as a measure of how their publications have actually been received. In the humanities, however, and perhaps especially in philosophy, this is a rather dubious approach. First and foremost, not all journals are accurately tracked, and books generally are not tracked at all, so this kind of “impact” often cannot be measured correctly and the metrics are way off. In my own case, Scopus lists only 2 of my publications, the “Web of Knowledge” just 6, ReaderMeter 7, but Google Scholar 20. Moreover, only Google finds any citation data at all, assigning me an H-index of 2. Since it makes me look so much better, comparatively, making my Google profile public turned out to be a good idea indeed …
However, even Google doesn’t provide the complete picture. According to my own tracking, my articles on “The Beginnings of Husserl’s Philosophy” have each been cited more than 10 times, and many of my other articles also managed to garner at least one citation (N.B. I exclude all self-citations here, which Google does not). Still, this does not radically change the picture, bringing my H-index only to 3.
Besides the problem of obtaining good data to measure, we hit a further snag regarding how useful the H-index actually is. For cases such as mine (a few publications with many citations and many with few), the H-index (the largest H such that H publications have at least H citations each) is less suitable, and the G-index might be better (rank publications by number of citations: the largest G such that the top G publications together have at least G-squared citations). By that measure, I would have a G-index of 5.
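Both definitions are straightforward to make precise in code. Here is a minimal sketch in Python; the citation counts are a hypothetical profile I made up to resemble the situation described above (two well-cited articles, one moderately cited, several with a single citation), not my actual data:

```python
def h_index(citations):
    """Largest H such that H publications have at least H citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest G such that the top G publications together have
    at least G-squared citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical citation counts per publication (self-citations excluded).
counts = [12, 11, 3, 1, 1, 1, 1]
print(h_index(counts))  # → 3
print(g_index(counts))  # → 5  (top 5 have 28 citations, and 28 >= 25)
```

As the example shows, the G-index rewards a few heavily cited publications that the H-index simply caps off, which is why it can paint a fairer picture for profiles like the one described here.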
Still, even if the issues of obtaining good data and choosing a representative index could be overcome, doesn’t this reduce a scholar’s worth to his (published) output, like a Stakhanovite listing the tons of coal produced each year?