At last this project is finished and I apparently survived the final workshop. Hope the attendees were satisfied.

The workshop attracted 3 representatives from Scopus: Ross Cameron, Amanda Hart and Felix Haest, and 2 from Thomson ISI: Jeff Klovis and Nicholas Espeche. Cameron has been my contact at Elsevier, and my contacts at Thomson have been Nicholas and Simon Pratt (who was not present at the workshop).

Evaluations of this kind are sometimes hard. You never get to meet that guy or girl, often a more technical person, who really knows the product's inner life the best. Questions often have to be filtered through marketing people, who in turn often need to check with their technicians. But that's life, and yes, I have some sympathy for that.

So, it was nice meeting Cameron Ross, whom I've talked to by phone before besides e-mailing, and Jeff Klovis, with whom I had only exchanged some brief e-mails. Jeff came all the way from Philadelphia to answer our questions, and with his 26 years of working with WoS he really knows the inner heart of it.
Many thanks to Kari Stange at BIBSAM, who led the day so I could focus on my talk and on the questions at the panel debate. Also, thanks to my Swedish advisory board of librarians for their help during this project.

After the talks by Cameron Ross and Jeff Klovis, it seems like they are really trying to outbid each other with new options: coming soon, coming soon. October was a word I heard several times, instead of "next quarter" 😉 Whether this project, One Entry to Research, first initiated by Pelle Olsson, head librarian at KIB, had any impact on the two suppliers' increased attention to new options within their products and their emerging awareness of flaws, I don't know, but it was at least one of the purposes of the project. What I missed is their blog commentary. Rafael Sidi, engineer at Elsevier, has a blog with the witty title Really Simple Sidi, and he writes:

“I am surprised that no one from Scopus and Web of Science is commenting about his remarks about these products. Folks, you like it or not, it’s time to engage and have a conversation with the customers.”

So even though this project is finished, I hope the debate on these bibliometric and multidisciplinary products can keep going. Will this blog die then? Maybe some postings will still come up; I also have some talks left to give on this subject, three I think. So, to stay updated, some evaluations may still be done during the rest of this year, although there is no government funding left, just my working time at the library.

It would also be nice to hear from you readers (if there are any 😉) whether you think this blog has had any meaning for your daily work. Just comment with pros and cons; I'm glad to have some communication.

As an aftermath to this project I will also try to publish a coverage and overlap study I've done with Ylva Gavel. My PowerPoint slides from my talk at the workshop will also soon be available from the BIBSAM conference homepage.

I will try to update the web page References to literature, and a summary of my findings can be found on the web page Project summary.

Next week I have 20 minutes to present this project at the 10th European Conference of Medical and Health Libraries, Cluj-Napoca, Romania, 11-16 September. I will also talk about interfaces in the Google era, but for 60 minutes.

P.S. I also had the opportunity to improve my non-excellent, but hopefully intelligible, written English. Thanks, Ylva Gavel, for proofreading some of it.

This quite recently published article examines subject coverage in Google Scholar:

Neuhaus, Chris, Ellen Neuhaus, Alan Asher and Clint Wrede (2006) The Depth and Breadth of Google Scholar: An Empirical Study. Portal: Libraries and the Academy, Vol. 6, No. 2, pp. 127-141.

Fifty article titles were randomly collected from each of 47 databases and compared with Google Scholar. Look at the figures here:

Conclusions: Across the 47 databases (2,350 randomly selected articles in total), the median and average Google Scholar coverage was about 60%.

According to Neuhaus et al., GS's weaknesses in subject coverage are social science and humanities databases, while its strengths are science and medical databases, open access databases, and single-publisher databases.

Web of Science has introduced a feature called Refine your results, which is displayed at the top of each search result. Check this screenshot:

There are several options for refining, for example by institution. When searching for Haglin L as author you get 11 hits, and when clicking Institutions, 8 alternative addresses:

The problems are:

1) You won’t always see the name of the department in the overview, even if it’s in the record. You have to click the record to view it.

2) You will get all affiliations for all authors of all articles, not just those of the author you searched for or of the first author. Addresses for all authors are mixed together; a minimal sketch below illustrates this.
On the positive side, you get a better overview of differently spelled addresses for authors.
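
To make problem 2 concrete, here is a minimal sketch in Python. It assumes, as the behaviour above suggests, that addresses are stored per record rather than linked to individual authors; the records, the co-author and the addresses are invented for illustration, not taken from WoS:

```python
# Hypothetical records: addresses sit on the record, not on individual authors.
records = [
    {"authors": ["Haglin L", "Berg X"],            # "Berg X" is a made-up co-author
     "addresses": ["UMEA UNIV", "KAROLINSKA INST"]},
    {"authors": ["Haglin L"],
     "addresses": []},                              # no address in this record at all
]

def refine_by_institution(records, institution):
    """Keep records where ANY address mentions the institution."""
    return [r for r in records
            if any(institution in addr for addr in r["addresses"])]

# The first record matches even if KAROLINSKA INST belongs to the co-author,
# not to Haglin L ...
print(len(refine_by_institution(records, "KAROLINSKA INST")))  # 1

# ... and refining on UMEA UNIV silently drops the record without addresses.
print(len(refine_by_institution(records, "UMEA UNIV")))        # 1, not 2
```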

The refining option Subject categories is not a subject search limit I recommend. It is built upon the subject categories of the journals where the articles are published. Sometimes you also see an option for Concept Codes, Descriptors or Controlled Index. These are connected to content from the database BIOSIS and should not be used as a multidisciplinary subject search. These options are in development and we will see in the future what they can bring.

Conclusion: A better, faster overview, though the same problems remain with missing address/author information.

Google Scholar has released an option for finding Related Articles, similar to PubMed's Related Articles and those of other databases. Google Scholar does not use a thesaurus as PubMed does, but in the advanced search you can limit to 7 broad subjects. With the Related Articles option you can try searching for an article on your topic; maybe you know the title or the author. When you find the article, click on the link beneath it. Check this article reference by Judit Bar-Ilan about search engine evaluation:

Clicking the Related Articles link returns 99 related articles, and one of them is, for example, Greg Notess's old (yes, 2000 is old in the search engine evaluation world!) article about search engine inconsistencies from the magazine Online.

I also tried some other smaller subjects within information science, and it returned remarkably relevant hits. A nice, satisfying option, but my brief evaluations say nothing about coverage, of course. When trying to subject search Google Scholar (which, as I said, is not easy to do comprehensively), try using the Related Articles option besides free-text searching.
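
Google Scholar does not document how Related Articles is computed. As a rough illustration only, here is the classic textbook approach, term similarity, in a minimal Python sketch; the titles are made up and this is in no way GS's actual algorithm:

```python
# A toy "related articles" ranking using TF-IDF and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Methods for evaluating search engine stability and consistency",  # our article
    "Search engine inconsistencies: a case study",                     # related
    "Nutritional status in elderly hospital patients",                 # unrelated
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
sims = cosine_similarity(tfidf[0], tfidf).ravel()

# Rank the other documents by similarity to the first one.
ranking = sorted(range(1, len(docs)), key=lambda i: sims[i], reverse=True)
print(ranking)  # [1, 2]: the search engine paper ranks above the unrelated one
```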

Web of Science has launched Author Finder, and Scopus has of course launched its Author Identifier. Both tools are supposed to assist when there are lots of variants in author listings. In the news flash Authors, Authors: Thomson Scientific and Elsevier Scopus Search Them Out by Barbara Quint at InformationToday.com, 24 July 2006, you can read more about these tools.

So let's evaluate these tools using examples from my other author-address search evaluations in Web of Science and Scopus. Let's try my extreme author search example, Rantapaa-Dahlqvist, S. This is a screenshot from Web of Science when searching Rantapaa in the Author Index, which I usually suggest using to get more complete author searches:

Here's a screenshot of the first step of using Author Finder in Web of Science, entering the last name and first initial, Rantapaa-Dahlqvist, S.:

Step 2 displays author variants. But in this case Author Finder can't find misspelled variants of the last name, just variants of the first-name initials. Both the 51 records for Rantapaa-Dahlqvist, S and the 1 record for Rantapaa-Dahlqvist, SSRD are retrieved:

Step 3 gives you the option to choose subject categories. Those categories are based on the subject of the journal the article was published in, not on the article itself:

Step 4 is a valuable refine option called Select Institution. Here you can choose the author affiliation, but it is not always possible to refine at the department level, even if the department exists in the record. The confusing thing here is that you get all the affiliations of all authors of all articles, not just those of the author you're searching for. One benefit of this refine option (which also exists in Refine your results) is that you get a list of possible synonymous addresses as well as misspelled or incomplete ones. If you use this option to refine your search, you must be aware that not all records are complete with author addresses, and refining can make you lose important records:

Let's try searching for Stegmayr, B. You also get options for Stegmayr BG and Stegmayr BK. If you choose all variants, Stegmayr, B, Stegmayr, BG and Stegmayr, BK, you get 236 records. Trying to refine by address in step 4 is just a mess, because all the author addresses from the articles are displayed. Both Stegmayrs are active at the same university department. Stegmayr, BK has written one article with the address DEPT INTERNAL MED, but when refining with that department name you get 2 other records instead.
Let's search for Haglin, L with Author Finder and refine by address. In this screenshot we choose UNIV UMEA HOSP, UMEA UNIV, CTY COUNCIL VASTERBOTTEN, UNIV HOSP:

That search returns 7 hits, which means 4 older records are not returned. Compare the results here:

This is one of the records that will be missing because Haglin’s affiliation at that time was in Uppsala:

When searching Astrom, S in Author Finder in Web of Science there are lots of options for refining at the affiliation level. Refining to UNIV HOSP will also retrieve Astrom, Siv at the Department of Ophthalmology, not just Astrom, Sture at the Nursing Department. There is no option to choose just UMEA, and if you take UMEA UNIV you miss Astrom, Siv. Strangest of all is why DEPT OPHTHALMOL isn't offered as an option, though other departments like DEPT MICROBIOL are displayed. Clearly, this refining option on institution name is just a mess!

Searching for the author Bernspang B in Cited Ref search returns a list that also includes one entry Bernsprang B:

The entry Bernsprang B is a misspelled citation for her thesis:

When using Author Finder, you will not retrieve the misspelled Bernsprang when searching for Bernspang:

Searching Lundin Olsson or Lundin-Olsson returns no hits in either General Search or Cited Ref search. Searching LundinOlsson in the Author Index retrieves both LundinOlsson, L and LundinOlsson, I:

Checking the variant with I as first initial indicates it's an incorrect citation:

Checking Scopus shows that the record is not incorrect in their reference list:

So let's check the Author Identifier in Scopus. It's integrated into Author Search. A search for Rantapaa-Dahlqvist returns three variants:

But the truncated search Rantapa* S* returns both variants, Rantapaa-Dahlqvist and Rantapaa-Dahlquist:

But you also need to search Dahlqvist* s*:

An advantage of Scopus compared to Web of Science is the possibility to search by full first name. But even that doesn't always help. Searching Eriksson, Sture returns the following screenshot:

Choosing the first hit gives 66 records, and three of those records are associated via Eriksson, S-E. That is not the same person as Eriksson, Sture at Umea University; Eriksson, S-E's affiliation in 2001 was Falun Hospital:

Hit 9 doesn't show an affiliation, but when checking the hit, the record does have an affiliation attached:

Hit 2, when checked, also has an affiliation in the record and turns out to be Eriksson, Sture at Umea University, and hits 3 and 10 are also Eriksson, Sture at Umea University, but none of them are connected to the first hit for Eriksson, Sture with its 66 records. That's not an improvement, that's confusing.

The good thing, though, is that Eriksson, Staffan (same first initial as Sture), at the same department as Eriksson, Sture, is not mixed into the search for Eriksson, Sture here. That is a problem in Web of Science, which does not use full first names.

Conclusion: Author Finder in Web of Science can be hazardous for detecting possible misspellings of author names. The Institution Name refining option in Author Finder presents a nice overview but is really hazardous, because the addresses don't refer just to the author but to all co-authors as well! It is not always possible to include all address information, and excluding options are not available. I suggest using the Author Index instead.

The Scopus Author Identifier brings some improvements, though serious flaws still exist. Misspellings are still a problem, not solved by this new algorithm.
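
Neither vendor documents its matching, but catching spelling variants like Bernspang/Bernsprang is a classic approximate string matching problem. Here is a minimal sketch of my own, using Python's difflib; the index list below is invented apart from the two Bernspang variants:

```python
from difflib import SequenceMatcher

def likely_variants(target, names, threshold=0.85):
    """Return index entries whose spelling is suspiciously close to the target."""
    return [n for n in names
            if SequenceMatcher(None, target.lower(), n.lower()).ratio() >= threshold]

index = ["Bernspang B", "Bernsprang B", "Bergstrom B", "Rantapaa-Dahlqvist S"]
print(likely_variants("Bernspang B", index))
# ['Bernspang B', 'Bernsprang B'] -- the misspelled citation variant surfaces,
# while the genuinely different Bergstrom B does not.
```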

Bauer et al. have published two articles on citation search:

Bakkalbasi N, Bauer K, Glover J, Wang L (2006) Three Options for Citation Tracking: Google Scholar, Scopus and Web of Science. Biomedical Digital Libraries, Vol. 3, No. 7, 29 June.

Bauer K and Bakkalbasi N (2005) An Examination of Citation Counts in a New Scholarly Communication Environment. D-Lib Magazine, Vol. 11, No. 9.

Noruzi also made some brief evaluations in an article:

Noruzi, Alireza (2005) Google Scholar: The New Generation of Citation Indexes. LIBRI, Vol. 55, Iss. 4, pp. 170-180.

Belew compared citation search in WoS and Google Scholar:

Belew, RK (2005) Scientific Impact Quantity and Quality: Analysis of Two Sources of Bibliographic Data [PDF]. Arxiv.org.

The first article, by Bauer and Bakkalbasi, showed that the citation count in GS was higher than in WoS and Scopus for articles from 2000, but for 1985 WoS seemed to cover citations best. Comparing WoS and Scopus, WoS found more citations for 1985, while for 2000 the counts were similar.

The next article, by Bakkalbasi, Bauer et al., evaluated journal articles from two disciplines, oncology and condensed matter physics (CM physics), and two years, 1993 and 2003.

Their conclusion is: "This study did not identify any one of the three tools studied to be the answer to all citation tracking needs". Scopus shows strength for oncology articles from 2003, but WoS performed better for CM physics and was better for both disciplines for articles published in 1993. GS returned smaller numbers but had a large set of unique citing material for 2003. Bakkalbasi, Bauer et al. make clear: "…it is clear that Google Scholar provides unique citing material."

The article by Belew compares GS with WoS through author searches. Belew randomly selected six academics from the same interdisciplinary department, and bibliographies of all publications by these authors were manually reconciled against 203 references found by one or both systems. WoS discovered 4,741 citations and GS 4,045, but when evaluating each author separately, 2 authors got significantly more citations in GS.

Belew indicates that, because of the quality of some bibliographic citations, it is common to find that the same publication has been treated as more than one record. When doing a Cited Ref search in WoS for an author you can find these types of errors, as Belew shows in his Table 1. With these types of errors it is possible to lose citations for an article in WoS, but there are sometimes also duplicates of an article (preprint and original article) that inflate the citation count. Belew does not discuss the fact that GS often shows duplicates, and that the displayed number of times cited is sometimes incorrect if you check it manually.
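
To illustrate what such a manual reconciliation amounts to, here is a small sketch that merges variant cited-reference records before counting. The records, counts and the toy normalization rule are all hypothetical; a real reconciliation would use edit distance or manual review, as in Belew's study:

```python
from collections import defaultdict

# Hypothetical cited-reference entries: the same work split over two spellings.
cited_refs = [
    {"author": "BERNSPANG B",  "year": 1987, "times_cited": 14},
    {"author": "BERNSPRANG B", "year": 1987, "times_cited": 1},   # misspelling
    {"author": "BELEW RK",     "year": 2005, "times_cited": 3},
]

def normalize(name):
    # Toy rule that only repairs this one misspelling.
    return name.replace("BERNSPRANG", "BERNSPANG")

merged = defaultdict(int)
for ref in cited_refs:
    merged[(normalize(ref["author"]), ref["year"])] += ref["times_cited"]

print(dict(merged))
# {('BERNSPANG B', 1987): 15, ('BELEW RK', 2005): 3} -- the citations split
# over the misspelled record are recovered into one count.
```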

Belew's conclusion is: "GS seems competitive in terms of coverage for materials published in the last twenty years; before then WoS seems to dominate".

Earlier this year we also made a small test comparing Scopus and WoS by searching author names, but only author names that we could sort out as unique.

Noruzi used free-text searches when testing citation search, with the search statement webometrics OR webometric. A free-text search is not a proper subject search, and as Bauer et al. point out, WoS, GS and Scopus process a free-text search in different ways; for example, Google Scholar indexes even the full text of articles, in contrast to WoS and Scopus. In this case Noruzi still just compared each known article, though the method of choosing articles and the low number of articles may be arguable.

Neither Belew nor Bauer et al. have discussed the problems with citation counting in Google Scholar. Peter Jacso, though, has criticized Bauer et al. and presented examples of flaws in Google Scholar:

Jacso, Peter ([2005b]) Google Scholar and The Scientist. (Published on his university homesite as extra material.) [online] http://www2.hawaii.edu/~jacso/extra/gs/

I believe the percentage of flaws in Google Scholar may not significantly decrease the value of the findings of Belew and Bauer et al., but it should be considered and discussed. Research on the proportions of citation counting flaws in Google Scholar would be of considerable value for future evaluations.

I checked the Google Scholar citation counting for the first article Noruzi refers to in his test in Table 2:

Almind, C and Ingwersen, P: Informetric Analyses on the World Wide Web. Journal of Documentation, Vol. 54, Iss. 4, pp. 404-426.

Check this screenshot:

I received 4 hits, where the first hit clusters 11 duplicates (look at the link group of 11); 3 duplicates (hits 2-4) are unclustered. Together that is 192 citations for the article by Almind et al. But if you check the reliability of the citations in all hits you will find duplicates. I manually evaluated all 192 citations together and found 13 obvious duplicates. This was checked manually, and some more duplicates may be found; records in Chinese characters were not checked. Here are screenshots of all the duplicates, put together with image editing software:

Of the 192 citations from GS, 12 are duplicates, which gives these results: GS 180, WoS 90. This means about 6% (12 out of 192) are incorrect citations.
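
The manual check can be thought of as a small script: deduplicate the citing items on a normalized title and compute the share of inflated citations. The titles below are made-up stand-ins for the GS hits; the real figures are in the comment:

```python
def normalize_title(title):
    # Compare titles case-insensitively, ignoring punctuation and spacing.
    return "".join(ch for ch in title.lower() if ch.isalnum())

citing = [
    "Webometrics: an introduction",
    "WEBOMETRICS - An Introduction",        # duplicate of the first
    "Informetrics on the academic web",
]

unique = {normalize_title(t) for t in citing}
duplicates = len(citing) - len(unique)
print(duplicates, f"{duplicates / len(citing):.1%}")
# With the real figures above: 192 GS citations, 12 duplicates -> 180 unique,
# and 12/192 = 6.25%, i.e. the roughly 6% reported.
```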

Of course WoS could have duplicates also.

Conclusion: Scopus is important for finding more citations to articles from 1996 onwards. Google Scholar is important because it finds a lot of unique citations, but each reference with times-cited information should be manually checked by counting and looking for duplicates. Web of Science is still competitive, especially for older material.

As Noruzi mentions in his article, GS indexes many more publication types and sources in various languages. If every citation, no matter from which source, has the same value of 100%, GS is an important source. The discussion should move on to the value of each citation. Should self-citations get any value at all? Should articles that are not peer-reviewed get a lower value for their citations?
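
Just to make that discussion concrete, here is a minimal sketch of such a weighting scheme; the weights are arbitrary placeholders, not a recommendation:

```python
def weighted_citation_count(citations, non_peer_reviewed_weight=0.5):
    """Count citations, skipping self-citations and discounting non-peer-reviewed ones."""
    total = 0.0
    for c in citations:
        if c["self_citation"]:
            continue                       # self-citations count for nothing here
        total += 1.0 if c["peer_reviewed"] else non_peer_reviewed_weight
    return total

citations = [
    {"self_citation": True,  "peer_reviewed": True},
    {"self_citation": False, "peer_reviewed": True},
    {"self_citation": False, "peer_reviewed": False},
]
print(weighted_citation_count(citations))  # 1.5 instead of a flat count of 3
```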

When searching for the article An Examination of Citation Counts in a New Scholarly Communication Environment by K Bauer, N Bakkalbasi – D-Lib Magazine, 2005 in Google Scholar, you get the result Cited by 8 other papers:


When checking all these 8 papers citing the Bauer et al. article, you will find three citations from the same source, "The future of citation analysis" in The Scientist:

Google Scholar has managed to find 4 duplicates and cluster them (as you can see from the link group of 4 in the last reference), but missed two other duplicates.

I've made a page with a list of conference presentations reviewing Scopus and Google Scholar, and also Web of Science in comparison with the two former databases. Three examples follow below:

Jacso, Peter "The Endangered Database Species: Are the traditional commercial indexing/abstracting & full-text databases dead?"[PPT]
UK Serials Group 29th UKSG Annual Conference and Exhibition, University of Warwick 3-5 April 2006.

Jenkins, JR "Article Linker Integration with Google Scholar (or Google Scholar as referring source)" [PPT]
OpenURL and Metasearch: New Standards, Current Innovations, and Future Directions, September 19, 20, 21, 2005, Washington, DC.

Tarantino, Ezio "Scopus, WOK, Google Scholar: too much or not too much?" [PPT]
The International Coalition of Library Consortia, Autumn 2005 7th European Meeting, Poznan, Poland, 28 September - 1 October 2005.

If you have more suggestions, just post a comment. Please note, however, that it should be a conference presentation (i.e., not a lecture) and it should focus on some of the above-mentioned sources.

We made some citation frequency comparisons between Scopus, Web of Science and Google Scholar. As Scopus only counts citations from 1996 onwards, we limited the comparisons to articles published from 1996 on. The figures in the screenshot showed the following (the arithmetic behind the percentages is sketched after the list):

Scopus finds 9% more citations than Web of Science when limited to articles from 1996 onwards.

Scopus finds 20% more citations than Google Scholar when limited to articles from 1996 onwards.

Web of Science finds 10% more citations than Google Scholar when limited to articles from 1996 onwards.
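
For clarity, this is the simple arithmetic behind those percentages. The raw totals below are hypothetical placeholders, not our actual figures from the screenshot:

```python
def pct_more(a, b):
    """How many percent more citations source a found than source b."""
    return (a - b) / b * 100

scopus, wos, gs = 1090, 1000, 908   # hypothetical raw citation totals
print(f"Scopus vs WoS: {pct_more(scopus, wos):.0f}%")            # 9%
print(f"Scopus vs Google Scholar: {pct_more(scopus, gs):.0f}%")  # 20%
print(f"WoS vs Google Scholar: {pct_more(wos, gs):.0f}%")        # 10%
```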

It is important to know that Web of Science indexes more than 9,000 journals, compared to Scopus's 15,000 journals, though Web of Science argues that (according to Bradford's Law) they have the core journals, which get the most citations. Google Scholar publishes no list of the journals and other sources it indexes, but it indexes articles from both the proprietary web and scholarly archives, as well as master's theses, books etc. Google Scholar's citation counting is not working properly either, as we already pointed out in a previous posting. In this test, not all cited references from Scopus were retrieved, just the indexed articles.

As we also already mentioned, the article "An Examination of Citation Counts in a New Scholarly Communication Environment", published in D-Lib Magazine, September 2005, Vol. 11, No. 9, by Kathleen Bauer et al. at Yale University Library, also did some citation counting. But while we simply counted all citations for a random set of 5 authors at Umeå University, Bauer et al. compared the average number of times an article is cited. Neither our test nor the test by Bauer et al. checked the Google Scholar inconsistencies in citation counting and duplicates.

Some of the findings from the article by Bauer et al. follow below. The information derives from the tables in their article.

The search for articles published in 2000 in the Journal of the American Society for Information Science and Technology (JASIST) showed, for example:

Web of Science counts on average 0.3 more citations per article than Scopus.

The search for articles published in 1985 in the same journal (JASIST) showed, for example:

Web of Science counts on average 11.9 more citations per article than Scopus.

Because Scopus only counts citations from articles published from 1996 onwards, the 11.9 difference is not surprising, though the 0.3 difference for articles published in 2000 is more surprising. This test by Bauer et al. has its limitations, since it covers just one journal (i.e., JASIST).

Conclusion: The different testing methods at least show that Scopus is definitely important when searching for citations to articles published from 1996 onwards. Due to the inconsistencies in Google Scholar, it is not recommended as the sole tool for citation searching.

In several of his writings, Peter Jacso has pointed out the inconsistencies of Google Scholar. One important flaw concerns citation search. Both his web-published paper "Google Scholar and The Scientist" and the article "As we may search", published in Current Science 2005 (please see References to literature), discuss the problems.

My tests indicate fewer inconsistencies than before, but they still exist. The article "An Examination of Citation Counts in a New Scholarly Communication Environment", published in D-Lib Magazine, September 2005, Vol. 11, No. 9, by Kathleen Bauer et al. at Yale University Library, made some comparisons of the average number of times an article is cited. They checked the citation frequency of each article for a certain year, in this case both 1985 and 2000, in the Journal of the American Society for Information Science and Technology (JASIST). The search for 2000 showed Google Scholar had on average 4.5 more citations per article than Web of Science and 3.9 more than Scopus. But for 1985, Web of Science had 8.7 more citations than Google Scholar, and Google Scholar had "just" 2.9 more citations than Scopus. The major shortcoming of this article is that they never analyzed the inconsistencies of Google Scholar's citation search. The citation count doesn't always work properly. Here's an example of a record that indicates Cited by 15 (other sources):

When clicking this link Cited by 15 you will find only 14 citations:

Here's another example of an article by P Jacso himself. Cited by 3 sources according to Google Scholar:

When clicking this link Cited by 3 you will find only 2 citations:

This search on semiconductors is an example from Jacso. In this reference it looks as if the article was published in 2006, but checking the source shows it was published in 1990; 2006 is the starting page of the article:

Jacso has also pointed out the flaws of duplication in his article "As we may search" in Current Science. Google Scholar works hard on the ability to cluster duplicate articles together. If you look at the preceding screenshot, after the title you see the link group of 3>>. Clicking that link shows you 3 duplicates. Because Google Scholar indexes not just peer-reviewed journal articles but also preprint archives, conference papers, master's theses, web-published materials etc., you can understand that they have a hard problem discovering duplicates.

Here's an example. Searching sojka modeling drop size distributions gives as the first hit an article by Babinsky and Sojka with the title "Modeling drop size distributions". That article is Cited by 7 other sources according to Google Scholar.

By clicking Cited by 7 you find 7 hits, but two of them are duplicates. Look at the two identical titles "Modeling Spray Impingement using Linear Stability Theories for Droplet Shattering", even though the first of them has a link to group of 2>>.

Conclusion: Don't trust Google Scholar's citation counting without manually checking it for inconsistencies in counting and in the clustering of duplicates.