Issues in Science and Technology Librarianship
Winter 2012
DOI: 10.5062/F42R3PMS
Denise Beaubien Bennett
Engineering Librarian
dbennett@ufl.edu
Michelle Leonard
Science & Technology Librarian
buckeye1@ufl.edu
Donna Wrublewski
Chemical Sciences Librarian
dtwrublewski@ufl.edu
University of Florida
Marston Science Library
Gainesville, Florida
Librarians are often asked to extract data comparing their budgets, productivity, and publication impacts to peer programs and institutions. This study examines a request for comparative publication impacts of six Chemical Engineering departments, fulfilled on five days' notice.
Librarians at the University of Florida were asked to provide data comparing their Chemical Engineering Department with those of five other pre-selected universities. The Department Chair made this request five days before an important presentation. Specifically, he requested the number of peer-reviewed journal articles published by each department over the past five years, the number of citations to those articles, and the average impact factor of the journals in which they appeared.
In addition to the publication data, he requested measures to compare the number of faculty holding fellowships in professional associations (such as AIChE, APS, and ACS), the number of NAE/NAS memberships, and the number of patents awarded.
Articles on the waning popularity of journal impact factors (Monastersky 2005) and an increase in article-based evaluation techniques (Harnad 2008; Campbell 2008; Patterson 2009) are readily available and in some cases would have been relevant to our assignment. However, we were given an assignment based on parameters outlined by a department chair to be completed within five days, and we were not asked to gather additional data or to provide recommendations on how best to proceed. We provided the data requested and added only the readily-available h-index (Hirsch 2005). This study outlines the process and methods we used.
Since the department chair was developing a presentation for a College of Engineering event, we were concerned that the seven other engineering department chairs might request similar data with the same due date. This concern led us to choose efficient searching techniques and to work as though our timeframe were even shorter than stipulated.
The short five-day timeframe was the greatest challenge to delivering precise data. We knew that we could attain high precision for our own institution's data without too great an investment of time and energy, but we did not want to weed out false drops (such as department names with similar keywords) from our local results if we could not also weed the comparison institutions' data.
The most precise way to search for publication and impact data by members of an academic department is to search by faculty names. Developing a list of names, name variations, and years when each name was affiliated with that university and department is a time-consuming task. Faculty arrive and depart; they retire and may or may not be awarded emeritus status and continue with university-affiliated research. Departments split and merge to improve administrative efficiency or to accommodate programmatic changes. Tracking these changes on other universities' web sites is challenging, and the task might require locating an individual at another institution who is willing to list faculty names and active dates from the time period to be examined. We quickly realized that our most efficient technique was to search, where possible, by the author address field for university and preferably department name.
We began by checking the department web site of each comparison university. In addition to verifying the accurate form of the department name, this check indicated whether other disciplines were split from or merged into the department, as in "Chemical Engineering & Materials Science" or "Chemical & Biological Engineering." Comparisons of peer institutions are difficult to manage when departments or units are not identical. We immediately realized that our results needed to indicate the functional words in each department name and the number of faculty in each department, since article counts alone would give the misleading impression that all were authored by chemical engineers. We did not verify how long each department had used its current name, although any splits or merges within the last five years would create additional challenges in tracking accurate data. Obtaining such details for other institutions might require several inquiries.
To obtain data for the first two items on journal articles, we searched the Web of Science. We chose the Web of Science over SciFinder Web because of its better control for searching by department name, the immediacy of its citation counts and h-index, and an assumption that all relevant peer-reviewed journals would be included. Our most efficient strategy was to search for each university and department name in the Address index. Tinkering with the department name to retrieve all of, and only, the desired results is a significant challenge, especially if the department name changes over time. For example, "univ florida" SAME "dept chem engn" produces excellent results, but that formula will not work universally. We limited the search to publication dates from 2006 to the present. We were aware that faculty may have left or been hired within this timeframe; however, searching by individual faculty names is too labor intensive and would not have yielded the results requested. We readily obtained the number of peer-reviewed articles, the number of citations to those articles, and the h-index (Hirsch 2005; Norris & Oppenheim 2010). We also calculated the average number of articles published per faculty member and the average number of citations per article at each comparison university.
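For readers who prefer to script these calculations rather than tally them by hand, the following minimal Python sketch derives the Table 1 measures from a list of per-article citation counts (such as counts exported from a Web of Science address search). This is an illustration only, not the workflow we used; the function names and sample numbers are hypothetical.

```python
# Illustrative sketch: compute Table 1 style measures from a list of
# per-article citation counts for one department. Sample data are invented.

def h_index(citation_counts):
    """Largest h such that h articles each have at least h citations (Hirsch 2005)."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def department_summary(citation_counts, faculty_count):
    """Summary measures for one department: articles, citations, averages, h-index."""
    articles = len(citation_counts)
    citations = sum(citation_counts)
    return {
        "peer_reviewed_articles": articles,
        "avg_articles_per_faculty": round(articles / faculty_count, 2),
        "citations": citations,
        "avg_citations_per_article": round(citations / articles, 2),
        "h_index": h_index(citation_counts),
    }

# Hypothetical department: 3 faculty, 6 articles with these citation counts.
print(department_summary([10, 7, 4, 2, 1, 0], faculty_count=3))
# {'peer_reviewed_articles': 6, 'avg_articles_per_faculty': 2.0,
#  'citations': 24, 'avg_citations_per_article': 4.0, 'h_index': 3}
```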
Our summary chart began to take shape, as shown in Table 1:
Table 1: Number of Faculty, Articles Published, and Articles Cited
| | U1 | U2 | U3 | U4 | U5 | U6 | Average |
| Department name | ChE | Ch&BE | ChE&MS | Ch&BE | ChE | ChE | |
| Faculty in 2010 (excludes emeritus & adjunct) | 22 | 25 | 33 | 19 | 29 | 27 | 25.83 |
| Peer-reviewed articles | 526 | 640 | 783 | 499 | 773 | 431 | 608.67 |
| Avg. # articles/faculty | 23.91 | 25.6 | 23.73 | 26.26 | 26.66 | 15.96 | 23.69 |
| Citations to peer-reviewed articles, 2006-Aug 2010 | 2995 | 5561 | 6973 | 4123 | 6477 | 2643 | 4795.33 |
| Avg. # citations/article | 5.69 | 8.69 | 8.91 | 8.26 | 8.38 | 6.13 | 7.68 |
| h-index | 24 | 34 | 37 | 28 | 33 | 21 | 29.5 |
To gather impact factor data, we analyzed the source titles of the articles published at each comparison university. From our Web of Science results, we clicked the "Analyze Results" button, ranked the records by source title, and sorted the analysis by record count; the remaining decision was the minimum record count to include.
For minimum record count, we had to choose between two and one. We found valid reasons for making either choice. Choosing one (i.e., titles in which only one article had been published) would vastly increase our title list, and might include genuine outliers that don't represent typical publishing patterns of our faculty. We chose to gather down to the single record count as long as it didn't result in an unmanageable retrieval set, and then we examined the one-counts to determine whether they added value or distracted from the overall goal of the report. We included one-count journal titles where at least one of the comparison universities had published at least two articles in that source, and where that article was included on our list of most heavily-cited articles per institution.
We then built a spreadsheet to display the number of articles published in each journal by each comparison university. The spreadsheet columns include the journal name and one column for each university showing the number of articles published. The next column displays the 2009 impact factor for each journal as obtained from JCRWeb. Next are six more columns, one per university, with a "multiplier." Since our department chair requested the average journal impact factor for papers published, we devised a method to weight the value of articles published in high-impact journals within the set. The "multiplier" was obtained by multiplying the 2009 impact factor by the number of articles from each university, establishing the requested weight. Table 2 excerpts data for the four top journals (after Science and several Nature titles) by impact factor:
Table 2: Sampling of data from four journals, showing the number of articles published and the product of that count and the 2009 impact factor:
| Journal (impact factor) | U1 # articles (multiplier) | U2 # articles (multiplier) | U3 # articles (multiplier) | U4 # articles (multiplier) | U5 # articles (multiplier) | U6 # articles (multiplier) |
| Angew. Chemie Int'l Ed. (11.829) | 2 (23.66) | 12 (141.95) | 5 (59.14) | 7 (82.80) | 3 (35.49) | 3 (35.49) |
| Nano Letters (9.991) | 0 (0) | 9 (89.92) | 12 (119.89) | 6 (59.95) | 14 (139.87) | 2 (19.98) |
| PNAS (9.432) | 1 (9.43) | 12 (113.18) | 6 (56.59) | 5 (47.16) | 7 (66.02) | 1 (9.43) |
| JACS (8.580) | 11 (94.38) | 25 (214.50) | 21 (180.18) | 12 (102.96) | 17 (145.86) | 10 (85.80) |
| Total (160 titles) | 430 (1386.34) | 518 (2529.54) | 612 (2774.20) | 391 (1833.82) | 622 (2554.53) | 336 (1280.55) |
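As a minimal sketch of this weighting, the Python fragment below multiplies each journal's 2009 impact factor by a department's article count in that journal and sums the products. Only four of the 160 titles are included, and the function and variable names are our own illustration rather than part of the actual spreadsheet.

```python
# Illustrative sketch of the "multiplier" weighting: impact factor x article
# count per journal, summed over the journal set. Only four journals shown.

journal_impact_factors = {        # journal -> 2009 JCR impact factor
    "Angewandte Chemie Int'l Ed.": 11.829,
    "Nano Letters": 9.991,
    "PNAS": 9.432,
    "JACS": 8.580,
}

def weighted_impact(article_counts):
    """Sum of (impact factor x number of articles) across the journal set."""
    return sum(journal_impact_factors[journal] * count
               for journal, count in article_counts.items())

# Using the U1 column of Table 2: 2, 0, 1, and 11 articles in these journals.
u1_counts = {"Angewandte Chemie Int'l Ed.": 2, "Nano Letters": 0,
             "PNAS": 1, "JACS": 11}
print(round(weighted_impact(u1_counts), 2))  # 23.66 + 0 + 9.43 + 94.38 = 127.47
```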
Adding the "multipliers" delivered an overall impact of the articles published in the chosen set of journals (that included at least two published articles per university), and comprises 79.65% of total articles. These totals display at the bottom of the full spreadsheet (not included here) and are shown in Table 3:
Table 3: Total impact of articles published in core journals (where at least 2 articles were published) by each comparison university:
| University | U1 | U2 | U3 | U4 | U5 | U6 |
| Total from "multiplier" columns: articles x impact factors | 1386.336 | 2529.544 | 2774.203 | 1833.823 | 2554.527 | 1280.555 |
| Total articles in journals containing at least 2 articles (from any of the 6 institutions) | 430 | 518 | 612 | 391 | 622 | 336 |
| Total articles published in any journal, 2006-2010 | 526 | 640 | 783 | 499 | 773 | 431 |
| Percent of total articles that appeared in journals containing at least 2 articles (from any of the 6 institutions) | 82% | 81% | 78% | 78% | 80% | 78% |
To measure the number of Fellows of professional associations such as AIChE, APS, or ACS and the number of NAE/NAS memberships, we searched the associations' web sites. Some of the web sites facilitated searching by institution; some did not.
The AIChE web site lists its Fellows in alphabetical order by name. Institutions are neither listed nor indexed. Names must be compared against faculty lists.
The APS web site offers an "archive" of Fellows from 1995 to the present. The archive includes filters for year of award, nominating unit, and institution. The filters are displayed as drop-down menus for easy selection. Variations in institution names (caused by ampersands, hyphens, abbreviations, "at," etc.) are generally displayed near enough to each other to be noticed. Departments are not included in the institution name.
The ACS Fellows Program began in 2009. The list of Fellows includes the institution name, which enables a "find on web page" search.
"Election to National Academy of Engineering membership is one of the highest professional honors accorded an engineer." As membership is an elected honor, all member names are viewable. The NAE Members page includes tabs for active members, newest members, and members in memoriam. The active members and members in memoriam may be searched by affiliated institution.
The NAS offers a membership directory that is searchable by institution. The displayed NAS Section implies the awardees' area of research.
The task of counting association fellows requires as much care as the other tasks. Do you intend to count emeritus faculty, or only those who have retired after a specified date? Fellowship in some associations is tied to a subject-based section, but the sections may not match academic departments: a Fellow may be listed in an association's materials or interdisciplinary section while their academic department is chemical engineering. Matching against a list of faculty names at each institution is the only way to ensure high precision.
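A minimal sketch of that matching step follows, assuming simple normalization (lowercasing, stripping punctuation, and ignoring word order) before comparing a fellows list against a departmental roster. Real lists would also need handling of initials, middle names, and name changes; the names below are fictitious.

```python
# Illustrative sketch: count how many roster members appear on a fellows list.
# Normalization here is deliberately simple; names are fictitious.

def normalize(name):
    """Lowercase, drop punctuation, and sort name parts so that
    'Doe, Jane' and 'Jane Doe' compare equal."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in name)
    return " ".join(sorted(cleaned.lower().split()))

def count_fellows_on_roster(fellows, faculty_roster):
    """Number of faculty roster members found on the association's fellows list."""
    fellow_keys = {normalize(name) for name in fellows}
    return sum(1 for member in faculty_roster if normalize(member) in fellow_keys)

fellows = ["Doe, Jane", "Roe, Richard"]          # as listed by the association
faculty = ["Jane Doe", "Alex Lee", "Pat Smith"]  # departmental roster
print(count_fellows_on_roster(fellows, faculty))  # 1 (Jane Doe)
```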
Table 4: Number of fellows in professional associations:
| Fellows | U1 | U2 | U3 | U4 | U5 | U6 |
| ACS | 0 | 0 | 0 | 0 | 2 | 0 |
| AIChE | 3 | 1 | 1 | 5 | 9 | 4 |
| APS | 2 | 0 | 3 | 2 | 1 | 0 |
| NAE | 0 | 5 | 4 | 4 | 4 | 4 |
| NAS | 0 | 2 | 0 | 2 | 0 | 0 |
We performed a quick department web site search to see whether any patent highlights were mentioned, but found none. We then identified three avenues for determining the number of patents awarded to each comparison department: the university web site, the United States Patent and Trademark Office (USPTO) patent database, and SciFinder Web. We chose SciFinder Web because it is the easier tool for searching by author address and department name and because it is limited to chemically related patents. In SciFinder Web, we searched by address and limited results to the patent document type and the latest five years. The USPTO database offers easy query searching, and we used it to verify the results from SciFinder Web. The results varied, as expected, but not beyond our comfort zone given the tight project deadline.
We assumed that patent applications are equivalent to articles "submitted but not yet accepted." We also assumed the number of patent applications would be proportional to the number of awarded patents for each institution. These assumptions guided our decision to omit patent applications from the count.
The usual warnings about names, name variations, and dates apply to patent searching as well. To add to the complexity, patents may be awarded to the Board of Regents, Sponsored Research, the entire university system, or in one case the Alumni Foundation rather than to the university/college/department, so the patent indexing does not include enough local data to determine the department. E-mail inquiries to the technology officers about an accurate count of patents went unanswered, or the reply consisted of a referral to the USPTO web site.
Searching by faculty member and department only may not retrieve all of the relevant patents. Companies and entrepreneurial start-ups that license university technologies are also lumped into the patent numbers and could potentially skew the number of patents secured by faculty only. Patents may be issued under non-standard forms of names, especially when issued beyond the university, causing author name retrieval and ambiguity issues. By searching on university address, we restricted our results to patents licensed to the universities. Had we searched by inventor names as well, we could have also retrieved patents in which faculty may have participated through consultancies and other opportunities. It is possible to search for the number of patents by technological development, by department only, or by faculty member.
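As a rough sketch of how assignee-name variants might be folded together before counting, the fragment below maps each known variant to a single institution label. The variant strings and patent records are invented for illustration and are not a mapping we actually used.

```python
# Illustrative sketch: fold known assignee-name variants (board of regents,
# research foundations, system-level offices) into one institution label
# before counting patents. Variants and records below are invented.

ASSIGNEE_VARIANTS = {
    "university of somewhere": "U1",
    "somewhere board of regents": "U1",
    "somewhere research foundation inc": "U1",
}

def count_patents_by_institution(patent_records):
    """patent_records: iterable of dicts, each with an 'assignee' string."""
    counts = {}
    for record in patent_records:
        key = record["assignee"].strip().lower()
        institution = ASSIGNEE_VARIANTS.get(key)
        if institution is not None:
            counts[institution] = counts.get(institution, 0) + 1
    return counts

records = [{"assignee": "Somewhere Board of Regents"},
           {"assignee": "University of Somewhere"},
           {"assignee": "Unrelated Startup Inc."}]
print(count_patents_by_institution(records))   # {'U1': 2}
```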
Table 5: Approximate number of patents registered to the university by classification and/or research topic.
| University | U1 | U2 | U3 | U4 | U5* | U6 |
| Dept. name | ChE | Ch&BE | ChE&MS | Ch&BE | ChE | ChE |
| # of patents | ~44 | ~149 | ~84 | ~129 | ~254 | ~18 |
*U5 combines the entire state university system and does not break down patents by campus. |
Our requestor specifically mentioned patents as a measure of a department's impact on industry. With hindsight, we might choose also to measure and compare partnerships and sponsor funding as indicators of impact (or perceived future impact) on industry. The figures should be findable for public institutions; however, sponsors are more likely to support an entire college of engineering rather than one department.
This data-gathering was completed shortly before the National Research Council released its 2010 Data-Based Assessment of Research-Doctorate Programs in the United States (NRC 2010a). The data, which are based on the 2005-06 academic year, include a component for research productivity based on publications per faculty member, citations per publication, percent of faculty holding grants, and awards per faculty member. Data from the NRC assessment have fueled graduate school rankings provided by Phds.org and are presented there in a more digestible format. The NRC data have been criticized for being out-of-date, but the NRC hints that transformative change within a program is slow and "most faculty have been at the same university in the same program for 8 to 20 years" (NRC 2010b).
A comparison of the NRC ranking range for the category of research productivity with our data indicates some similarities but demonstrates that we were counting different people or different articles. The NRC used publication data from Thomson Reuters and faculty names supplied by each institution (NRC 2010a Appendix E, p. 241). The NRC measured average publications per faculty from 2000-2006 in a range of 3-7 per year, whereas we measured 2006-2010 and counted a range of 16-27 total peer-reviewed articles per faculty member (roughly 3.4-5.8 per year). However, the ordinal rankings of the six institutions for publications per faculty did not match our data well, indicating differences in program productivity between the two time ranges.
The NRC calculated a range of 1.20 to 2.92 average citations per publication per year over the seven years from 2000-06 (or approximately 8.4 to 20.44 citations per article over seven years), while we measured a range of 5.69 to 8.91 total citations per publication in our 2006-2010 set. To parallel the NRC data, we measured each institution's 2000-2006 citations to 2000-2006 articles using our author address method and found a range of 6.53 to 11.72 average citations per article, or approximately 0.93 to 1.67 per article per year. The ordinal ranking of citations per publication among the six institutions was extremely close between our data and the NRC data, but the absolute numbers differed considerably. Measuring citations over a short time period is of limited value in evaluating the strength of a program (Hirsch 2011), but it may be the only option for assessing recent strength. The NRC data may be useful for an instant comparison, but gathering and analyzing the data yourself may be necessary in some circumstances.
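To make the ordinal comparison explicit, a short sketch such as the one below can rank the six institutions on two measures and report how many rank positions agree. The first list uses the citations-per-article figures from Table 1; the second list is hypothetical stand-in data, not NRC figures.

```python
# Illustrative sketch: compare ordinal rankings of institutions on two
# measures. The second list is hypothetical, not actual NRC data.

def ranks(values):
    """Rank positions (1 = highest value), returned in the input order."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    rank = [0] * len(values)
    for position, index in enumerate(order, start=1):
        rank[index] = position
    return rank

ours = [5.69, 8.69, 8.91, 8.26, 8.38, 6.13]   # citations/article, Table 1 (U1-U6)
other = [1.3, 2.6, 2.9, 2.2, 2.4, 1.5]        # hypothetical comparison values

ours_r, other_r = ranks(ours), ranks(other)
matches = sum(a == b for a, b in zip(ours_r, other_r))
print(ours_r, other_r, f"{matches} of {len(ours_r)} rank positions agree")
# [6, 2, 1, 4, 3, 5] [6, 2, 1, 4, 3, 5] 6 of 6 rank positions agree
```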
Exercises in data gathering may become easier as assessment tools are refined or developed. Author name disambiguation and control in the author affiliation fields continue to improve (Wagner 2009), and limiting to departments within institutions may become more precise. Researcher discovery tools such as VIVO will enable one-stop lookup of data for departments within institutions, and may include accurate and updated data such as patents, grants, and awards as well as publication lists (Gonzalez et al. 2010).
When comparing publication data across institutions on a tight deadline, a few tips can improve the accuracy of the data you are collecting. Choose your compared programs carefully, and make sure you are aware of the differences (such as merged departments) in your set. Searching by authors is the most accurate measure, but searching by Author Address and a date range is a more efficient strategy when working on a tight deadline. Experiment with the Author Address fields to ensure the desired levels of precision and relevance in your retrieved results. When examining journals, determine whether to use the "long tail" of all journals in which your authors have published or to choose a cutoff between core and fringe titles for the discipline. Use associations' web sites to determine awards given to faculty at an institution. Take special care with patent counting. Consider using the NRC assessment if you need data instantly. And last but not least, hope that your more challenging data requests can wait until researcher discovery tools are well established.
Campbell, Philip. 2008. Escape from the impact factor. Ethics in Science and Environmental Politics 8(1):5-6.
Harnad, Stevan. 2008. Validating research performance metrics against peer rankings. Ethics in Science and Environmental Politics 8(1):103-107.
Hirsch, J.E. 2005. An index to quantify an individual's scientific research output. PNAS 102(46):16569-16572. [Internet]. [Cited 2011 August 31]. Available from: http://dx.doi.org/10.1073/pnas.0507655102
Hirsch, J.E. 2011. On the value of author indices. Physics Today 64(3): 9.
Gonzalez, S.R., Davis, V., Tennant, M.R., Holmes, K.L., and Conlon, M. 2010. Letting the good times roll through alignment: meeting institutional missions and goals with VIVO, a web-based research discovery tool. In: Proceedings of the Special Libraries Association Annual Conference, New Orleans, LA. Washington (DC): Special Libraries Association. [Internet]. [Cited 2011 August 31]. Available from: http://www.sla.org/PDFs/Conf/SLA_VIVO_Contributed_Paper.pdf
Monastersky, Richard. 2005. The number that's devouring science. Chronicle of Higher Education 52 (8):A12-A17.
National Research Council. 2010a. Data-Based Assessment of Research-Doctorate Programs in the United States. [Internet]. [Cited 2011 July 12]. Available from: http://www.nap.edu/rdp/
National Research Council. 2010b. NRC 2010 FAQ [Internet]. [Cited 2011 August 31]. Available from: http://sites.nationalacademies.org/pga/resdoc/pga_051962
Norris, M. and Oppenheim, C. 2010. The h-index: a broad review of a new bibliometric indicator. Journal of Documentation 66(5):681-705.
Patterson, M. 2009. PLoS Journals -- measuring impact where it matters. The official PLoS Blog (posted: July 13, 2009) [Internet]. [cited 2011 August 31]. Available from: http://blogs.plos.org/plos/2009/07/plos-journals-measuring-impact-where-it-matters/
Wagner, B. 2009. Tips from the Experts: Author Identification Systems. Issues in Science and Technology Librarianship [Internet]. [Cited 2011 August 31]. Available from: http://www.istl.org/09-fall/tips.html