As I mentioned in the previous post, what researchers would really like is for blogging, twittering, etc. to increase readership of and engagement with their work. Social media increase the reach and speed of distribution and also come with a whole new set of measurable data: retweets, likes, bookmarks, access and download statistics, etc. Hopefully these are indicative of, or even lead to, an increased number of citations and other quantifiable metrics, or at least complement them, since, rightly or wrongly, scholars of all kinds are assessed through metrics like their H-index, generally based on databases such as Scopus and the ISI “Web of Knowledge”.
To evaluate someone’s track record, citations and the various indexes based on them are used as a measure of how their publications have actually been received. In the humanities, however, and perhaps especially in philosophy, this is quite a dubious approach. First and foremost, not all journals are accurately tracked, and books generally are not tracked at all, so this kind of “impact” often cannot be measured correctly and the metrics are way off. In my own case, Scopus lists only 2 of my publications, the “Web of Knowledge” just 6, ReaderMeter 7, but Google Scholar 20. Moreover, only Google finds any citation data at all, assigning me an H-index of 2. Since it makes me look so much better, comparatively, making my Google profile public turned out to be a good idea indeed …
However, even Google doesn’t provide the complete picture. According to my own tracking, my articles on “The Beginnings of Husserl’s Philosophy” have each been cited more than 10 times, and many of my other articles also managed to garner at least one citation (N.B. I exclude all self-citations here, which Google does not). Still, this does not radically change the picture, only bringing my H-index to 3.
Besides the problem of obtaining good data to measure, we hit a further snag regarding how useful the H-index actually can be. For cases such as mine (few publications with many citations and many with few), the H-index (H publications with at least H cites) is less suitable and the G-index might be better (rank publications by number of citations: the top G publications together have at least G-squared citations). In this case, I would have a G-index of 5.
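The two indexes described above are easy to compute from a list of per-publication citation counts. Here is a minimal sketch; the citation counts used in the example are hypothetical, chosen only to be consistent with the figures mentioned in the post (two articles cited more than 10 times, a few others with one or more citations, H-index 3, G-index 5):

```python
def h_index(citations):
    """Largest h such that h publications each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g publications together have
    at least g-squared citations."""
    g, running_total = 0, 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

# Hypothetical citation counts, roughly matching the situation in the post.
citations = [12, 11, 3, 1, 1, 1]
print(h_index(citations))  # 3
print(g_index(citations))  # 5
```

Note how the G-index rewards a few highly cited publications: the two articles cited 11 and 12 times contribute all their citations to the cumulative total, whereas for the H-index they count no more than any other publication above the threshold.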
Still, even if the issues of obtaining good data and choosing a representative index could be overcome, doesn’t this reduce a scholar’s worth to his (published) output, like a Stakhanovite listing the tons of coal produced each year?
Indeed, the problem is applying mistaken measures to activities that should be measured in another way. The very idea of measuring is numerical and in that sense limited to the numbers you can produce; for scholars that means numbers of articles and citations. In my view, this way of measuring fully corresponds to capitalistic views invading non-capitalistic institutions, like the university and the universe of knowledge. However, this is a problem we scholars have to overcome, for we depend on this material structure somehow, at least as living beings. In México we have a National Committee for Sciences and Technology. If you want to have a good income, you need to be accepted in the listings controlled by the National System of Investigators, which is a franchise of the Committee. They only measure, that is, ask for numbers of articles and citations. This system works completely against the goals of the academy itself, because it encourages garbage production and self-copying (recycling) among investigators. So, if you are in this system, it does not follow at all that you are a good scholar or investigator.
Carlo Ierna said:
Indeed, such a system selects for the wrong traits.