Benchmarking Individual Publication Productivity in Logistics




Benchmarking Individual Publication Productivity in Logistics Postprint of article as published in 2012 in Transportation Journal 51(2): 164-196: http://www.jstor.org/stable/10.5325/transportationj.51.2.0164?seq=1#page_scan_tab_contents

Submitted by

B. Jay Coleman (Contact Author) Richard deRaismes Kip Professor of Operations Management & Quantitative Methods Coggin College of Business University of North Florida 1 UNF Drive Jacksonville, FL 32224 [email protected] Tel: 904-620-2780, Fax: 904-620-2782

Yemisi A. Bolumole Assistant Professor of Supply Chain Management Department of Supply Chain Management Broad College of Business Michigan State University East Lansing, MI 48824 [email protected] Tel: 517-432-6329

Robert Frankel Richard deRaismes Kip Professor of Marketing & Logistics Coggin College of Business University of North Florida 1 UNF Drive Jacksonville, FL 32224 [email protected] Tel: 904-620-2780, Fax: 904-620-2782

About the Authors

B. Jay Coleman is the Richard deRaismes Kip Professor of Operations Management and Quantitative Methods in the Coggin College of Business at the University of North Florida. He earned his Ph.D. in Industrial Management from Clemson University. Dr. Coleman’s research program includes articles in Decision Sciences, Production and Operations Management, Journal of Business Logistics, Interfaces, Industrial Relations, Journal of Financial Research, IEEE Transactions on Engineering Management, Computers and Operations Research, Production and Inventory Management Journal, and Journal of Purchasing and Materials Management, among others.

Yemisi A. Bolumole is Assistant Professor of Supply Chain Management at Michigan State University. She received her Ph.D. in Logistics & Supply Chain Management from Cranfield University. Her research interests include third-party logistics outsourcing, supply chain management, and supply chain performance optimization. Her research has been published in Journal of Business Logistics, International Journal of Logistics Management, Journal of Business and Industrial Marketing, and Transportation Journal, among others.

Robert Frankel is Richard deRaismes Kip Professor of Marketing and Logistics at the University of North Florida. A Fulbright scholar, he received his Ph.D. in Marketing and Logistics from Michigan State University. His research interests include supply chain management, international marketing, and pedagogy. His research has been published in Journal of Operations Management, Journal of Business Logistics, International Journal of Physical Distribution and Logistics Management, International Journal of Logistics Management, Journal of Business-to-Business Marketing, and Marketing Education Review, among others.


Abstract

What constitutes excellence in publication productivity in logistics journals? Several previous studies have examined this question at the institutional level. However, prior literature has not examined in detail the research productivity patterns of the entire distribution of individual logistics authors across a relatively large number of journals and a lengthy time frame. Prior work has also not established the benchmarks or thresholds of individual research productivity, in terms of both quantity and quality, that are necessary to be ranked among the leading contributors in the discipline. To address this void in the literature, we examine 3312 articles published in seven leading logistics journals from 1990 through 2009, inclusive, to which 3657 different individual authors contributed at least one authorship or co-authorship. Using the rankings and associated percentiles of individual authors according to six quality and quantity metrics, we identify the aggregate productivity benchmarks necessary for individual authors to be ranked at various positions in the field. We find that the thresholds necessary to be among the leaders in logistics research productivity, or to meet typically posited expectations for performance, do not necessarily reflect the conventional wisdom.


Introduction

Given the prominent role of research in knowledge discovery for a discipline (Powers et al. 1998) and in defining individual careers and institutional success within the academy (Ford et al. 2001; Ford and Merchant 2008), examining trends and productivity patterns in academic research has been of considerable scholarly interest. This has certainly been true of logistics, where Vellenga et al. (1981), Gentry et al. (1995), Carter and Ellram (2003), Carter et al. (2005), Svensson et al. (2008), Charvet et al. (2008), Maloni et al. (2009), and Cantor et al. (2010), among many others, have helped both to define and to summarize the state of development of the logistics field. Such studies also provide some degree of benchmarking for individual authors and institutions as they establish research performance expectations and gauge progress toward such goals. Publication productivity assessments provide useful information to prospective faculty recruits and doctoral students, and provide administrators with information about the relative strength of academic faculty and departments. The establishment of productivity patterns may help to create consensus regarding appropriate guidelines for faculty, and thus be helpful for promotion and tenure decisions. Additionally, for schools holding or desiring accreditation by AACSB International – and for AACSB itself – productivity benchmarks can aid in determining appropriate research expectations for faculty to be deemed academically qualified.

However, the previous literature on research and publication productivity in logistics has been largely concentrated at the institutional level (e.g., Vellenga et al. 1981; Allen and Vellenga 1987; Gentry et al. 1995; Carter et al. 2001; Carter et al. 2005; and Carter et al. 2007), in which the aggregate contributions of all faculty at a given institution are compiled and ranked vis-à-vis other schools. Prior research has also largely focused on quantity measures (such as article counts of some variety), and not as much on quality measures (such as the number of citations received). Thus, the prior literature does not present specific thresholds or benchmarks that can be used directly by individual faculty to assess their relative and absolute scholarly contributions, nor does it provide measures that can be used in various faculty performance evaluation processes. Prior work has also typically focused on a relatively short period of time (e.g., six to seven years), and/or on a single or relatively small number of journals, which further limits the generalizability of results.

The purpose of our study is to address these gaps in the literature by answering the key question: what constitutes appropriate standards for excellence in publication productivity in logistics journals? The goal of this paper is to provide a set of empirical benchmarks for establishing discipline-wide standards for research productivity as measured by publication in the leading journals. In sum, we identify the particular benchmarks of performance that are necessary to place an individual at various echelons of publication productivity in logistics. Using a lengthy time frame of 20 years of publications in a comprehensive list of seven leading logistics journals, we investigate three general research questions:

1. How many publications are required for an author to be ranked among the leaders in logistics publication productivity?

2. What quality threshold – as defined by the extent of publication citation counts – is required to be ranked among the leaders in logistics publication productivity?

3. What level of combined quantity and quality is required to be ranked among the leaders in logistics publication productivity?


As we will describe in detail below, we employ two versions of article counts as our measures of quantity. Article counts represent the frequency of an author’s contribution to the discipline, and are perhaps the most commonly referenced metric of research productivity in academe. Similarly, we employ two versions of citation counts as our measures of the quality of an author’s contributions. Our use of citation counts as proxies of quality assumes that more frequently referenced works likely have had more substantial disciplinary influence, a position consistent with recent bibliometric research in logistics and related fields (Phillips and Phillips 1998; Kumar and Kwan 2004; Carter et al. 2007; Charvet et al. 2008; Hult and Chabowski 2008; Chapman and Ellinger 2009; Pilkington and Meredith 2009; Cantor et al. 2010; Georgi et al. 2010). Finally, as a combined measure of quantity and quality, we employ two versions of a relatively new metric called the h-index, which was first presented in Hirsch (2005), and which has already gained considerable acceptance elsewhere in the academic community as a simple but comprehensive mechanism to measure research contributions (Saad 2010).

The paper begins with an overview of the existing literature. We subsequently describe the methodology employed, and follow that with a presentation of the results from our analysis. We then discuss these results and their implications, consider the contributions and limitations of our study, and offer concluding comments and avenues for future research.

Literature Review

Studies that rank and evaluate research productivity are common in many disciplines. Whereas early research of this kind relied on subjective methods, more recent work has generally employed more objective means and focused on a particular interest area (Serenko and Bontis 2004). Objective measures have included counts of articles by authors and/or their affiliated institutions (e.g., Young et al. 1996; Carter 2005; Maloni et al. 2009; Cantor et al. 2010), as well
as counts of citations to particular articles (e.g., Carter et al. 2007; Pilkington and Meredith 2009; Cantor et al. 2010).

In the logistics discipline, academic ranking studies have been a focus of significant scholarly interest. One category of research has been directed at the ranking of journals (Carter 2002), largely through the use of expert surveys (Fawcett et al. 1995; Gibson and Hanna 2003; Gibson et al. 2004; Rutner and Fawcett 2005; Zsidisin et al. 2007; Arlbjorn et al. 2008; Fawcett 2009; Menachof et al. 2009), and occasionally through citation analysis (Kumar and Kwan 2004; Charvet et al. 2008; Chapman and Ellinger 2009). A second category of focus can be termed “institutional” in nature. While some in this category have ranked academic institutions through survey methods (Fawcett et al. 1995; Rutner and Fawcett 2005; Fawcett 2009), several others have evaluated institutional productivity based on counts of the affiliations of authors contributing to various sets of journals (Vellenga et al. 1981; Allen and Vellenga 1987; Gentry, Allen and Vellenga 1995; Miyazaki et al. 1999; Carter et al. 2001; Hanna and LaTour 2002; Carter et al. 2005; Carter et al. 2007; Cantor et al. 2010).

In recent years, a third category of focus has emerged that might be termed “individual researcher productivity.” To date, however, such research in the logistics discipline is sparse, and concentrated on a small number of journals. Miyazaki et al. (1999) present article counts by name for roughly the top 50 individual contributors to the first 20 years of the Journal of Business Logistics, and summarize the remainder of the author frequency distribution. Similarly, Carter and Ellram (2003) report article counts by name for the top 25 individual contributors to the first 35 years of the Journal of Supply Chain Management, and Crum and Poist (2011) do the same for the first 40 years of the International Journal of Physical Distribution and Logistics Management. Hanna and LaTour (2002) describe general findings regarding the number of articles contributed by individual authors to the Journal of Business Logistics and Transportation Journal from 1978 through 1998 (a sample of 835 articles). Hanna and LaTour report article count thresholds reached by various sub-groups of the top 52 authors, and provide similar information for each of the two journals separately. The most substantial contribution to this third category is Autry and Griffis (2005), which utilizes an exploratory social network theory approach to examine the role of researcher productivity and collaboration over the entire history (through 2004) of four top logistics journals. Using article counts, they name the 54 most highly productive authors, and provide a depiction and discussion of the complete author frequency distribution. Autry and Griffis also indicate that future studies of researcher productivity ideally should address quality and impact, in addition to quantity measures, and capture more contributions beyond their journal set.

While the aforementioned studies have made important contributions to the logistics literature, there remains an opportunity for additional research that identifies individual publication productivity standards in terms of both quantity and quality, spanning a larger group of journals, a lengthy time frame, and the full distribution of contributing authors. Identifying the thresholds to be among the leaders in logistics research productivity, or to meet typically posited expectations for performance, also allows for an assessment of the conventional wisdom on those issues. Addressing these opportunities is the rationale for the current research. We now turn to describing the methodology that we used to produce such publication productivity benchmarks for the logistics discipline.


Methodology

Data

We examined the authorship of each article published in the 20-year window from 1990 through 2009, inclusive, in the Journal of Business Logistics (JBL), Transportation Journal (TJ), International Journal of Logistics Management (IJLM), International Journal of Physical Distribution and Logistics Management (IJPDLM), Journal of Supply Chain Management (JSCM), Supply Chain Management: An International Journal (SCM), and Transportation Research: Part E (TRE). The selection of these seven journals is consistent with the findings, recommendations, and/or the lists employed in Gibson and Hanna (2003), Rutner and Fawcett (2005), Autry and Griffis (2005), Sachan and Datta (2005), Arlbjorn et al. (2008), Svensson et al. (2008), Maloni et al. (2009), Chapman and Ellinger (2009), Menachof et al. (2009), and Cantor et al. (2010). The list also covers all journals examined in any of the aforementioned previous studies of individual author productivity in logistics. All seven journals also have Institute for Scientific Information (or ISI, now Thomson Reuters) journal impact factors currently (JBL, TJ, IJPDLM, JSCM, SCM, TRE) or forthcoming (IJLM (Emerald Group Publishing Limited 2011)), with JSCM (second) and JBL (tenth) both debuting in the top 10 in impact factor among all management journals ranked in the 2010 Journal Citation Reports (Thomson Reuters 2011).

Our data set includes 3312 articles, a number that represents a near-census of all articles published in these journals over this 20-year span. Included were all original research articles, omitting items such as letters to the editor, book reviews, etc. In cases where we could determine that an article was a reprint of a previously published work, it was also omitted from the data set.

In contrast to studies focusing on journal ranking or institutional productivity, our focus on the contributions of individual authors generated additional complexity in data collection, as
there were numerous cases in which a given author’s name appeared in multiple forms. Therefore, extensive effort was made to standardize each of the author names. When there was a significant question as to whether given names represented single or multiple individuals, we performed additional online searches of institutional, journal, and faculty web pages in an attempt to make the determination. Despite this concerted effort, we readily acknowledge the possibility that some contributions may have been wrongly credited due to errors in name standardization. As will be discussed below, the collection of the various author names associated with each article allowed us to generate quantity metrics for each author.

As our measure of the quality of each contribution, we also collected the number of citations that were generated by each article as of July 2010, as reported by Google Scholar (Harzing and Van der Wal 2008a, 2008b). Given that our data set included all articles published through the year 2009, and acknowledging publication delays that sometimes cause issues published nominally under one calendar year to not be disseminated and available until sometime in the following year, this citation collection time frame allowed each article in our data set to be available to the logistics community for at least some period of time.

Our use of the citation count for individual articles is arguably an improved and much more specific measure of quality versus, for example, applying some sort of overall journal quality ranking to each article from a given journal. In Young et al. (1996), the authors used a literature-based ranking of the journals they examined in their study, as a proxy for the quality of each article published in the respective journal. Thus, every article published in a given journal received the same quality ranking.
Conversely, our use of specific citation counts for each article within the journal allows for differentiation among articles published in the same journal. It also allows highly-cited articles that may have been published in “lower-ranked” journals to be more appropriately acknowledged (or vice-versa). The use of citations also more specifically captures each author’s collective contribution to the discipline. While recently published pieces, and/or pieces published by newer contributors to the discipline, by definition received lower quality scores (i.e., lower citation counts), this lower score is arguably appropriate given the still-limited exposure of the article and author. While authors in such a position may be strong researchers, they clearly have not yet established themselves as thought leaders in the discipline, nor has the work necessarily yet been established or acknowledged as a leading contribution in the field.

Metrics

To assess our first research question regarding benchmarking the number of publications (quantity) an author needs to be among the leading contributors in logistics, and similar to the approaches used in Young et al. (1996), Henthorne et al. (1998), Bakir et al. (2000), Ford et al. (2001), Ford and Merchant (2008), and Cantor et al. (2010), we computed two versions of an article count for each author. One represented the total number of authorships, computed simply as the total number of articles on which that individual was an author (or co-author). This metric gave full credit for an article to each and every author on that article, and treated single authorships the same as joint authorships. Obviously, it can be and has been argued that such a metric does not accurately represent the contributions on multiple-author pieces (Chung and Puelz 1992; Young et al. 1996; Bakir et al. 2000). Thus, as done in the studies cited above and in Maloni et al. (2009), for the second measure of quantity we also computed the total number of proportionally-adjusted authorships, for which the authorship credit on a given article was computed simply as the inverse of the number of authors.
Thus, each author on a two-author piece received a credit of 0.50, each author on a three-author piece received a credit of 0.33, etc.

To assess our second research question regarding benchmarking the publication quality threshold necessary to be among the leading contributors in logistics publications, and consistent with our dual approach to article counts, we computed two versions of a citation count for each author. One represented the total number of citations received for all articles on which that individual appeared as an author. Like our first quantity measure, this metric gave full credit for all of an article’s citations to each and every one of its authors, regardless of the number of authors. Again, it can be argued that this inappropriately underestimates the relative contribution of authors who work alone or in comparatively smaller co-author groups. Thus, as the second measure of quality for each author we also computed the proportionally adjusted number of citations, where the citation credit assigned to each author on a given article was computed as the number of citations for that article, divided by the number of authors. As examples, on a two-author article with 30 citations, each author would be assigned credit for 15 citations, whereas each author on a three-author article with 30 citations would receive credit for 10 citations.

The question then arose as to how to address our third research objective, regarding identifying the combination of quantity and quality necessary to be among the leaders in logistics research. Recent literature offers an alternative for combining quantity metrics (as measured by number of articles) and quality metrics (as measured by citation counts): the Hirsch index, or h-index, first presented by Hirsch (2005). According to Hirsch (2005) and Braun et al. (2005), where Np = number of papers: A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np - h) papers have no more than h citations each.
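The metrics just described are simple to operationalize. The following Python example is an illustrative sketch only, not the authors' actual code: the article records and author names are hypothetical, and it computes the full and proportionally adjusted authorship and citation counts together with the basic h-index as defined in Hirsch (2005).

```python
# Illustrative sketch of the six productivity metrics on a toy data set.
# Article records and author names below are hypothetical examples.
from collections import defaultdict

articles = [
    {"authors": ["Smith", "Jones"], "citations": 30},
    {"authors": ["Smith"], "citations": 12},
    {"authors": ["Smith", "Jones", "Lee"], "citations": 30},
]

def author_metrics(articles):
    """Compute per-author quantity and quality metrics."""
    counts = defaultdict(int)        # total authorships (full credit per article)
    adj_counts = defaultdict(float)  # proportionally adjusted authorships
    cites = defaultdict(int)         # total citations (full credit per article)
    adj_cites = defaultdict(float)   # proportionally adjusted citations
    cite_lists = defaultdict(list)   # per-author citation lists, for the h-index

    for art in articles:
        n = len(art["authors"])
        for author in art["authors"]:
            counts[author] += 1
            adj_counts[author] += 1.0 / n            # credit = inverse of author count
            cites[author] += art["citations"]
            adj_cites[author] += art["citations"] / n
            cite_lists[author].append(art["citations"])

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)
        return max([i for i, c in enumerate(ranked, start=1) if c >= i], default=0)

    h = {a: h_index(cl) for a, cl in cite_lists.items()}
    return counts, adj_counts, cites, adj_cites, h

counts, adj_counts, cites, adj_cites, h = author_metrics(articles)
# "Smith": 3 authorships; 0.5 + 1 + 1/3 ≈ 1.83 adjusted authorships;
#          72 citations; 15 + 12 + 10 = 37 adjusted citations; h-index 3.
```

Note that the paper employs two versions of the h-index; only the basic (unadjusted) version is sketched here.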