Jointly published by Akadémiai Kiadó, Budapest Scientometrics, Vol. 78, No. 2 (2009) 231–260 and Springer, Dordrecht DOI: 10.1007/s11192-007-1990-7

Received January 11, 2008

Address for correspondence: REBECCA LONG E-mail: [email protected] 0138–9130/US $ 20.00 Copyright © 2008 Akadémiai Kiadó, Budapest All rights reserved

Determinants of faculty research productivity in information systems:

An empirical analysis of the impact of academic origin and academic affiliation

REBECCA LONG,a ALETA CRAWFORD,b MICHAEL WHITE,c KIMBERLY DAVISc

a P.O. Box 9581; Department of Management & Information Systems, Mississippi State University, Mississippi State, MS, USA

b Department of Management; University of Mississippi, Tupelo, MS, USA

c Department of Management & Information Systems, Mississippi State University, Mississippi State, MS, USA

This manuscript provides guidance to Deans and other academic decision makers in the hiring process and challenges the validity of a widely held assumption commonly used as a decision factor in the selection process. This paper investigates: (a) whether graduates of prestigious information systems (IS) doctoral programs (graduates with high-status academic origins) are more likely to be successful in their academic careers (as measured by research productivity) than graduates of less prestigious programs, (b) whether IS faculty who are employed by esteemed universities (faculty with high-status academic affiliations) are more productive researchers than IS faculty employed by lower-status institutions, and (c) whether faculty productivity in IS conforms to Lotka’s Law [LOTKA, 1926]. The findings indicate that in the IS field, productivity does not follow a Lotka distribution. Moreover, the study shows that academic affiliation is a significant determinant of research productivity in terms of both quantity (as measured by publication counts) and quality (as measured by citation counts). Contrary to common expectations, however, the analysis shows that the status of a faculty member’s academic origin is not a significant determinant of research productivity in the field of information systems. Therefore, continued reliance on academic pedigree as a primary criterion for hiring decisions may not be justified in the IS discipline.

Research productivity, usually measured by the number of articles published in quality journals, is a primary criterion for evaluating faculty at research-oriented business schools, particularly with respect to merit raises and promotion and tenure decisions [HU & GILL, 2002; MYLONOPOULOS & THEOHARAKIS, 2001; HU & GILL, 2000; LARSEN & NEELY, 2000; WHITMAN & AL., 1999; HARDGRAVE & WALSTROM, 1997; IM & HARTMAN, 1997; GROVER & AL., 1992; HANCOCK & AL., 1992; REBNE & DAVIDSON, 1992; KOONG & WEISTROFFER, 1989; JAUCH & GLUECK, 1975; JOLSON, 1974]. This practice is reflected in the well-known adage, “publish or perish.” Indeed, evidence suggests that a single publication in an IS journal is worth $20,000 to the author in incremental pay over a 5-year period, although the faculty member may need to relocate to realize that marginal value [GILL, 2001].

When recruiting junior faculty, therefore, an important goal of research-driven universities is to select candidates who are likely to become productive researchers. The question is: how can universities identify such candidates? In other words, what objective measures can serve as indicators of future research productivity, such that Deans and academic hiring committees may rely on them when making staffing decisions?

A widely held assumption in the academic community is that graduates of prestigious doctoral programs (graduates with high-status academic origins) are more likely to be productive researchers during their academic careers than graduates of less prestigious programs. Directors of recruiting and recruiting committees at research-oriented institutions frequently eliminate candidates from the initial applicant pool on the basis of academic origin before exploring more granular decision variables. Therefore, students who earn their doctorates from high-status schools are more likely to be employed by high-status, research-driven universities following their graduation [D’AVENI, 1996]. Being employed by a prominent university is commonly believed to be another major determinant of one’s research productivity. Many doctoral students, therefore, believe that their professional success (or lack thereof) will be determined, at least in part, by the caliber of the school that provides their doctoral training. Consequently, high-status institutions tend to attract doctoral students of perceived higher quality [D’AVENI, 1996; GROVER & AL., 1992; RESKIN, 1977; CLEMENTE & STURGIS, 1974]. Further, high-status institutions attract higher numbers of applicants, allowing them to be more selective in the students they admit [ARMSTRONG & SPERRY, 1994]. (Note that while there are significant differences between the terms “institution,” “university,” “school,” “department,” and “program,” such terms will be used interchangeably in this paper, except where specificity is required. However, the analysis of academic quality rankings to determine the status of the academic affiliations and academic origins of the sample members was consistently performed at the department level.)

Many universities act on the “origin assumption” by using the status of the institution from which a doctoral student graduated as a key factor in hiring decisions, particularly in the recruitment of junior faculty, where candidates typically have not yet established an actual publication record. Research-driven universities often aspire to recruit doctoral graduates from top-tier schools, assuming that they will achieve a higher level of research productivity, both qualitatively and quantitatively, than their counterparts from less prestigious schools. For example, research has shown that academic decision makers in the field of economics tend to favor doctoral graduates from high-status universities [LABAND, 1986]. Academic department chairs “tend to be traditionalists in hiring, overly favoring the few major universities that have turned out the largest number of distinguished Ph.D.’s in the past” [CARTTER, 1966, P. 6]. Research in the field of information systems (IS) has shown that when doctoral candidates apply for faculty positions, the reputation of the applicant’s academic origin is considered to be a primary measure of the candidate’s future ability to successfully conduct research [LARSEN & NEELY, 2000; LARSEN, 1998].

It is, therefore, important to determine whether academic origin is, in fact, a good predictor of research productivity and whether the selection of faculty on the basis of academic origin is justified. If such hiring practices are based on inaccurate assumptions, the universities that employ these methods are incurring a significant opportunity cost – they may avoid hiring candidates from lower-status schools who would nevertheless prove to be highly productive researchers. The quality of faculty hiring decisions will depend, at least in part, on the validity of the assumption that academic origin serves as a proxy for research potential.

It is also generally assumed among the academic community that researchers who are employed by esteemed universities (scholars with high-status academic affiliations) tend to be more productive researchers than faculty who are employed by lower-status institutions. Many universities act on this “affiliation assumption” by using the status of a faculty member’s current affiliation as a key factor when hiring mid-level and senior faculty (lateral hires). One might surmise that academic affiliation is a less important hiring criterion for lateral candidates than academic origin is for entry level positions, because an established faculty member’s research record speaks for itself. However, it is commonly understood that it is difficult to “move up” the status ladder in academia, leading to the conclusion that academic affiliation is, indeed, a significant criterion for hiring mid-level and senior faculty. The soundness of the affiliation assumption should be tested, therefore, to either validate or repudiate this practice. In addition, if high-status institutions produce more prolific researchers, lower-status academic employers may be able to learn how to create an environment that is more conducive to research by examining the characteristics of high-status universities that promote and foster research.

Numerous empirical studies have been conducted to test the origin and affiliation assumptions in various academic disciplines, but prior to the present study, no such analysis had been conducted in the field of information systems. Research in disciplines other than IS, such as management [LONG & AL., 1998; FOWLER & AL., 1985], economics [SIEGFRIED, 1972], and the physical and social sciences [FOLGER & AL., 1970; HAGSTROM, 1968; CRANE, 1965; LAZARSFELD & THIELANS, 1958; MANIS, 1951], has generally confirmed a positive relationship between the quality of academic affiliation and research productivity. Looking at the other side of the same coin, evidence also suggests that the prestige of academic institutions is primarily derived from the research productivity of their faculty [ARMSTRONG & SPERRY, 1994; HAGSTROM, 1971; CARTTER, 1966].

Other studies in non-IS disciplines, such as finance [ZIVNEY & BERTIN, 1992], economics [HOGAN, 1986], accounting [JACOBS & AL., 1986], and the physical and social sciences [RESKIN, 1977; CRANE, 1965; SOMIT & TANENHAUS, 1964; MELTZER, 1949] have also found a strong, positive relationship between the quality of academic origin and research productivity (see also [JUDGE & AL., 1995; WHITELY & AL., 1991; JASKOLKA & AL., 1985; PSACHAROPOULOS, 1985; ROSENBAUM, 1984; PFEFFER & ROSS, 1982]), though an earlier study in the field of sociology [CLEMENTE & STURGIS, 1974] found that the quality of doctoral training exerted little influence on future research productivity. A more recent study of this type was performed in the management field [LONG & AL., 1998], which found no significant relationship between academic origin and research productivity.

Other potential determinants of research productivity have been investigated in a variety of fields, including science (biology, mathematics, chemistry, and physics) [SENTER, 1986; ALLISON & STEWART, 1974], management science [HANCOCK & AL., 1992], and statistics [LANE & AL., 1990]. One study sought to assess the effects of gender, race, and family status on the research productivity of individuals throughout the entire population of post-secondary faculty, without focusing on any particular discipline [BELLAS & TOUTKOUSHIAN, 1999].

Many studies have also ranked the research productivity of institutions, departments, and/or individual scholars in business, including IS, without investigating the underlying factors leading to such productivity. Some of these studies have examined the research productivity of business schools as a whole, encompassing many or all of the disciplines within business, such as accounting, finance, management, and marketing [ARMSTRONG & SPERRY, 1994; PETRY & SETTLE, 1988; WILLIAMS, 1987; MOORE & TAYLOR, 1980; HENRY & BURCH, 1974], while others have focused their attention on a particular discipline. The studies that rank IS programs in terms of research performance will be discussed in a later section.

The disparity of publishing patterns across disciplinary groups has been found to be substantial [NWAGNU, 2006; GUAN & MA, 2004; GUPTA & AL., 1999; COLMAN & AL., 1995; REBNE & DAVIDSON, 1992; SENTER, 1986]. Many of these studies have attempted to predict productivity in IS and other fields from the perspective of bibliometric analysis in general as well as Lotka and other publication distributions in particular (e.g., [GUPTA & AL., 1998; GUPTA & KARISIDDAPPA, 1996; NATH & JACKSON, 1991; RAO, 1980; RADHAKRISHNAN & KERNIZAN, 1979; VOOS, 1974]).

Thus it appears that the determinants as well as the scientometric features of research productivity may vary depending on the academic discipline, so it is particularly important to expand the prior research to evaluate the soundness of the assumptions in fields that have not yet been studied. The present study contributes to the existing literature by examining the effects of academic origin and academic affiliation on research productivity in such a field, namely information systems. The results of this study may, in turn, lead to improved decision making in the selection and development of IS faculty.

Hypotheses

In this section, we develop three main hypotheses to examine whether there is a significant relationship between research productivity and the status of academic origin and academic affiliation in the IS discipline and a fourth hypothesis to examine the distribution of research productivity in IS.

The effect of academic origin on research productivity

The quality of a doctoral program is often judged by its status [D’AVENI, 1996], which is influenced by many factors, some of which are ostensibly objective measures of quality such as the quality of students and instruction, availability of resources, and the research productivity of faculty [GROVER & AL., 1992]. Other factors are more subjective, such as a business school’s reputation and its perceived quality among the academic community, which may be influenced by factors unrelated to the actual quality of the education provided [CABLE & MURRAY, 1999; D’AVENI, 1996; JUDGE & AL., 1995]. Nevertheless, it is reasonable to conclude that the status of a doctoral program is highly correlated with the quality of the education provided [D’AVENI, 1996; JUDGE & AL., 1995].

There are several reasons to infer that doctoral students who earn a Ph.D. from a high-status doctoral program will become more productive researchers than those who graduate from less prestigious doctoral programs. For example, students at high-status educational institutions probably benefit from several advantages in terms of “human capital,” including scholastic capital, social capital, and cultural capital [USEEM & KARABEL, 1986; BECKER, 1964], described below.

Assuming that high status is a proxy for high quality, we would expect high status academic programs to endow greater scholastic capital to their students, meaning that such programs should successfully impart greater knowledge and better research skills to their students than lower status programs [USEEM & KARABEL, 1986]. Specialized knowledge in an academic discipline, together with a firm grounding in principles of research design and statistical analysis, should better equip doctoral graduates of high-status schools with the necessary background and skills to produce high quality research.

High-status institutions also tend to provide more social capital to their students by enabling them to establish personal contacts and network ties with various “gatekeepers” in the profession [CABLE & MURRAY, 1999; JUDGE & AL., 1995]. For example, doctoral students who attend high-status business schools have a greater chance of meeting and establishing close relationships with influential members of the profession, such as accomplished researchers, editorial board members, and editors of major journals.

Another human capital advantage offered by high-status institutions is cultural capital, which is based on the value that society places on symbols of prestige [USEEM & KARABEL, 1986]. A Ph.D. earned from a high-status school is a pedigree that signifies superior credentials and has the effect of opening doors and creating opportunities that may not exist for recipients of Ph.D.s from less prestigious schools. This credentialing effect reflects the high value that society places on graduation from highly regarded educational programs. The impact of cultural capital is reflected in a phenomenon coined “homosocial reproduction,” whereby “senior faculty tend to hire junior faculty trained at schools ranked favorably by the same constituency as the schools where they themselves were trained” [D’AVENI, 1996].

Perhaps because of these desirable human capital advantages, high-status programs are better able to successfully recruit doctoral students of perceived higher quality and potential. For example, doctoral students with higher average Graduate Management Admission Test (GMAT) scores are more likely to be recruited by high-status schools [D’AVENI, 1996]. Thus, the admission process of high-status schools has a filtering effect, resulting in the rejection of weaker students and the acceptance of students who have a high propensity to excel in their scholastic pursuits. Having filtered out the weaker candidates, high-status schools should be more likely to produce graduates who will also excel in their professional endeavors, including academic research.

Because high-status business schools tend to attract high quality doctoral students and provide them with a variety of human capital advantages, we hypothesize that, within the IS discipline, there is a positive relationship between the status of a graduate’s doctoral program and his/her research productivity.

Hypothesis 1: Doctoral graduates in information systems with high-status academic origins will exhibit greater research productivity in terms of both quantity and quality than doctoral graduates with moderate- or low-status academic origins.

The effect of academic affiliation on research productivity

Several theoretical reasons have been suggested to explain why the status of one’s academic affiliation may be a significant determinant of research productivity. For example, individuals may be drawn to universities that reflect their own norms and values, and universities may tend to recruit doctoral students whose norms and values coincide with the organization’s, consistent with theories of person-organization fit [CHATMAN, 1991; CHATMAN, 1989], as well as theories of attraction, selection, and attrition [SCHNEIDER & AL., 1995; SCHNEIDER, 1987]. Therefore, doctoral graduates who value research may be drawn to, and are more likely to become affiliated with, an employer who also values research and demonstrates its commitment to research by offering rewards in the form of pay raises, tenure, and promotion. In general, high-status business schools are characterized by an emphasis on research and tend to offer such rewards for research productivity [PFEFFER, 1993; KONRAD & PFEFFER, 1990], creating an environment with greater incentives, as well as greater pressures, to publish. This like-mindedness helps to explain the mutual attraction between research-oriented doctoral graduates and high-status university employers.

In addition, a variety of contextual factors at high-status (research-oriented) business schools, such as the availability of research assistants, funding, and facilities, tend to foster and promote research activity [D’AVENI, 1996]. By supporting research, high-status institutions make it easier for faculty to become successful researchers, consistent with theories of accumulative advantage [BEYER & AL., 1995; ALLISON & STEWART, 1974]. Productive scholars are recognized and rewarded for their productivity [GILL, 2001], which engenders continued productivity because such scholars are motivated to maintain their reputation (and have access to resources that support research), while unproductive faculty who fail to earn such recognition tend to become discouraged and less motivated to continue pursuing research [ALLISON & STEWART, 1974]. These effects of accumulative advantage create a symbiotic relationship that enables prestigious universities to maintain their status through the production of high-profile academic research by their faculty [ARMSTRONG & SPERRY, 1994]. As time progresses, these effects also result in an increasingly wide gap between successful and unsuccessful scholars within a cohort [ALLISON & STEWART, 1974].

Social context and formal organizational structure may also have an impact on research productivity [RESKIN, 1977]. That is, researchers who work in high-status universities tend to be surrounded by colleagues who are research oriented and socialized to conduct research. In such environments, organizational socialization and peer pressure provide additional motivation for colleagues to publish, in addition to their personal need for achievement, consistent with social information processing theory [SALANCIK & PFEFFER, 1978].

Because high-status business schools tend to attract research-oriented doctoral students and provide them with incentives to publish and support for research activity, we hypothesize that, within the IS discipline, there is a positive relationship between the status of a doctoral graduate’s academic affiliation and his/her research productivity.

Hypothesis 2: Doctoral graduates in information systems with high-status academic affiliations will exhibit greater research productivity in terms of both quantity and quality than doctoral graduates with moderate- or low-status academic affiliations.

Interaction effect of origin and affiliation

Hypotheses 1 and 2 propose that the status of academic origin and academic affiliation are each significant determinants of research productivity. However, because doctoral graduates with high-status academic origins are more likely to be hired by high-status institutions than graduates of mid- and low-status schools [D’AVENI, 1996], it is necessary to pose a third hypothesis to test whether graduates of high-status IS programs benefit from an “interaction effect” which accumulates the advantages of origin and affiliation to make them the most productive researchers. To analyze a potential interaction effect, we hypothesize that, within the IS discipline, doctoral graduates with high-status origins and high-status affiliations are the most productive researchers in the study sample.

Hypothesis 3: Doctoral graduates in information systems with high-status academic origins and high-status academic affiliations will demonstrate greater research productivity than all other doctoral graduates in information systems.

Lotka’s Law

Lotka’s Law [LOTKA, 1926] is one of a number of derivations of Zipf’s Law [ZIPF, 1935]. It describes empirically the frequency of publication by authors in any field and states that the number of authors contributing n publications is about 1/n² of the number contributing one publication. Thus, as the number of articles published increases, the number of authors publishing those articles decreases. Lotka developed the function using data from the hard sciences and established that the exponent a ≈ 2. However, research shows that the actual ratios appear to be discipline-specific (e.g., [RADHAKRISHNAN & KERNIZAN, 1979; VOOS, 1974]). Although these studies have examined related fields, only one [NATH & JACKSON, 1991] to our knowledge has tested the relatively new field of IS for its adherence to the Lotka distribution.

Hypothesis 4: The distribution of publications found in the field of IS will follow a Lotka distribution with exponent a ≈ 2.
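Under Lotka’s Law with exponent 2, the expected share of authors contributing exactly n publications is (1/n²) divided by ζ(2) = π²/6. The following minimal Python sketch (ours, not part of the study) reproduces the expected percentages that appear later in Table 1:

```python
import math

def lotka_expected_pct(n: int) -> float:
    """Expected share (%) of authors with exactly n publications under
    Lotka's Law with exponent 2; the normalizing constant is the Riemann
    zeta function evaluated at 2, i.e. pi^2 / 6."""
    zeta2 = math.pi ** 2 / 6
    return 100 * (1 / n ** 2) / zeta2

# About 60.79% of authors are expected to contribute a single publication,
# 15.20% two publications, 6.75% three, and so on.
for n in (1, 2, 3, 4):
    print(n, round(lotka_expected_pct(n), 2))
```

Note that the 1/n² ratio stated above falls out directly: lotka_expected_pct(n) equals lotka_expected_pct(1) divided by n².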

Method

The population of interest comprises IS doctoral graduates from institutions accredited by the American Assembly of Collegiate Schools of Business (AACSB) which teach information systems at the college level. This research is based on a sample of doctoral graduates who received their terminal degree in an IS or related discipline between the years 1985–1990, inclusive, from universities accredited by the AACSB. This time period was selected to allow a sufficient window of opportunity for the individuals in the study to publish in the top five IS journals (from their date of graduation until the year 2000, meaning that individuals had a minimum time period of approximately 11 years to publish in those journals).

To identify the members of the sample, a comprehensive list of IS doctoral graduates from 1985–1990 was initially compiled using data obtained from the directory of IS faculty jointly maintained by the Association for Information Systems (AIS) and the Management Information Systems Research Center (MISRC). This investigation yielded a sample of 584 individuals. A copy of each AACSB school’s list of doctoral graduates was mailed to the appropriate university authority (such as the Dean) for verification. A follow-up request was mailed approximately 6 months later, as necessary.

Of the 119 schools contacted, 82 responded to the request for verification, representing a response rate of 69%. The verified list of graduates included 336 individuals (out of a total of 584 graduates that had been initially identified), representing 58% of sample graduates. Only 184 graduates in the sample of 336 graduates published in the top five IS journals during the relevant time period. Therefore, the final sample consisted of 184 doctoral graduates (78% male and 22% female).

Dependent variables: Measures of research productivity

Research productivity was operationalized using two measures, one to reflect research quality and the other to reflect research quantity. The degree to which published works are cited by other authors is generally considered to be a reflection of the quality of those works (for a thorough review of the issues involved in citation analysis, see [PHELAN, 1999]), and citation counts were therefore used as a proxy for research quality. Both the Science Citation Index and the Social Science Citation Index were used to collect data on citations of published works (data for one of the journals in our study was available only in the SSCI). The search identified 21,381 citations to the works published by members of the sample.

Simple (unweighted) counts of articles, research notes, communications, and other main-section articles published in any of the top five IS journals (identified in Journal Rankings, below) during the 16-year period from 1985 to 2000, inclusive, were used as a proxy for research quantity. Book reviews, letters to the editor, replies, comments, abstracts, and similar writings were excluded from the publication count. The search identified 552 publications in the top five IS journals authored or coauthored by members of the sample.

Thus, publication count and citation count are the two dependent variables in this study. The following data for each article authored by a sample member and published in any of the top five IS journals during the relevant period were recorded: (1) the journal’s name, (2) the year of publication, (3) the academic affiliation of the author of interest, (4) the number of authors, (5) a code denoting whether the journal’s editor had the same academic affiliation as the author of interest (in-house editorship), and (6) a code to cross-reference these data to the applicable cumulative publication record (described below) for each sample member.

These data were then compiled to create a cumulative publication record for each graduate in the sample. Each cumulative record contained the following data: (1) the graduate’s name and cross reference code, (2) the graduate’s publication count (total number of publications in the top five IS journals), (3) the publication quality as measured by citation count, (4) the ordinal value or rank of the graduate’s academic origin, (5) the ordinal value or rank of the graduate’s academic affiliation, and (6) the percentage of articles where the in-house editorship effect was present.
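The compilation of cumulative records described above can be sketched as a simple aggregation. The per-article tuples and author codes below are hypothetical illustrations, not the study's data:

```python
from collections import defaultdict

# Hypothetical per-article records: (author_code, affiliation_rank, citations, in_house).
articles = [
    ("A01", 1, 40, False),
    ("A01", 1, 12, True),
    ("A02", 3, 5, False),
]

# Build one cumulative record per sample member.
records = defaultdict(lambda: {"pubs": 0, "cites": 0, "in_house": 0})
for author, rank, cites, in_house in articles:
    rec = records[author]
    rec["pubs"] += 1          # publication count
    rec["cites"] += cites     # citation count
    rec["in_house"] += int(in_house)

# Convert the in-house tally into the percentage used in the study.
for rec in records.values():
    rec["in_house_pct"] = 100.0 * rec["in_house"] / rec["pubs"]
```

Each resulting record then carries the publication count, citation count, and in-house editorship percentage for one sample member.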

During the relevant time period, 71 sample members (38.59%) had only one publication in one of the top five IS journals, and 42 sample members (22.83%) had two. The two most prolific members of the sample had 20 publications each in the top five IS journals. Table 1 lists the publication counts along with the observed frequencies and the expected Lotka distribution.

Table 1. Observed and expected distribution of IS publications

# Authors   # Publications   Observed %   Expected %
   71             1            38.59        60.79
   42             2            22.83        15.20
   22             3            11.96         6.75
   19             4            10.33         3.80
    6             5             3.26         2.43
    6             6             3.26         1.69
    4             7             2.17         1.24
    1             8             0.54         0.95
    4             9             2.17         0.75
    3            10             1.63         0.61
    2            11             1.09         0.50
    1            12             0.54         0.42
    1            13             0.54         0.35
    2            20             1.09         0.15
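The "Expected %" column of Table 1 follows from classic Lotka's Law with exponent 2, under which the share of authors with n publications is C/n^2, where C = 6/pi^2 ≈ 0.6079 so that the shares sum to one. A minimal sketch:

```python
import math

# Classic Lotka's Law with exponent 2: the share of authors publishing
# n papers is C / n**2, where C = 6 / pi**2 normalizes the distribution.
C = 6 / math.pi ** 2

def expected_share(n: int) -> float:
    return C / n ** 2

# Reproduces the "Expected %" column of Table 1 (60.79, 15.20, 6.75, 3.80, ...).
for n in (1, 2, 3, 4):
    print(n, round(100 * expected_share(n), 2))
```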

Journal rankings

Numerous studies have developed IS journal rankings based on various objective and perceptual criteria, including citation frequency, the percentage of published articles with an IS focus, perceived contributions to the IS field, and subjective quality assessments by IS faculty [KATERATTANAKUL & AL., 2003; PEFFERS & TANG, 2003; MYLONOPOULOS & THEOHARAKIS, 2001; WALSTROM & HARDGRAVE, 2001; WALCZAK, 1999; WHITMAN & AL., 1999; HARDGRAVE & WALSTROM, 1997; DOKE & AL., 1995; WALSTROM & AL., 1995; GILLENSON & STUTZ, 1991; SHIM & AL., 1991; KOONG & WEISTROFFER, 1989; NORD & NORD, 1989; DOKE & LUKE, 1987; HAMILTON & IVES, 1983]. One study limited its focus to ranking outlets for e-commerce research within the IS field [BHARATI & TARASEWICH, 2002], and two other studies ranked business computing journals [HOLSAPPLE & AL., 1994; HOLSAPPLE & AL., 1993]. Most of these rankings were based on opinion surveys of IS faculty. A review of these prior rankings, and of the opinions of other scholars conducting this type of research, suggests that "at least among highly ranked journals, there appears to be a fairly high level of consensus as to what constitutes journal quality" [WHITMAN & AL., 1999, P. 108].

In the present study, principal components analysis was performed using six thoroughly researched prior journal rankings, including one of the most recent journal rankings based on citation counts [KATERATTANAKUL & AL., 2003] as well as established rankings based on opinion surveys of IS faculty that have been widely accepted in the IS academic community ([MYLONOPOULOS & THEOHARAKIS, 2001; WALSTROM & HARDGRAVE, 2001; WHITMAN & AL., 1999; GILLENSON & STUTZ, 1991; SHIM & AL., 1991]; see Appendix A for a more detailed description of these studies and their methodologies). Each of the six studies was treated as a variable in the principal components analysis, and the first principal component explained 92.18% of the variance in rankings. Based on this analysis, the following journals are considered to be the top five IS journals (in rank order) for purposes of the present study: MIS Quarterly, Information Systems Research, Management Science, Communications of the ACM, and Journal of Management Information Systems. To confirm our results we consulted the journal rankings maintained by the Association for Information Systems, an international association of IS scholars, and found that the lists are identical. Finally, since journal impact factors are understood to represent a measure of a journal's prestige and quality (see, e.g., [BORDONS & GOMEZ, 2002; SOLARI & MAGRI, 2000]), we also used the SCI and SSCI to examine the journal impact factors for our basket of journals. Table 2 lists these journals along with their factor scores, simple ranks, and average journal impact factors (2001–2004).
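The ranking aggregation can be sketched as a principal components computation over a journals-by-studies score matrix. The matrix below is purely illustrative (the six cited studies' actual scores are not reproduced here):

```python
import numpy as np

# Hypothetical matrix: rows = journals, columns = six prior ranking studies
# (scores are illustrative, not the values from the cited rankings).
X = np.array([
    [1.0, 1.0, 2.0, 1.0, 1.0, 2.0],
    [2.0, 3.0, 1.0, 2.0, 3.0, 1.0],
    [3.0, 2.0, 3.0, 4.0, 2.0, 3.0],
    [4.0, 4.0, 5.0, 3.0, 4.0, 4.0],
    [5.0, 5.0, 4.0, 5.0, 5.0, 5.0],
])

# Principal components via the covariance matrix of the centered data.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
explained = eigvals[-1] / eigvals.sum()  # variance share of the first PC
scores = Xc @ eigvecs[:, -1]             # factor scores on the first PC

print(f"first PC explains {100 * explained:.1f}% of the variance")
```

Because the six rankings largely agree, the first component dominates, and its factor scores give a single consensus ordering of the journals.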

Table 2. Journal rankings, factor scores and impact factors

Journal                                     Rank   Factor score   Impact factor
MIS Quarterly                                 1        4.731          2.591
Information Systems Research                  2        4.471          1.053
Management Science                            3        4.467          1.584
Communications of the ACM                     4        4.454          1.789
Journal of Management Information Systems     5        4.054          1.997

Independent variables: Status of academic origin and academic affiliation

Status of academic origin and status of academic affiliation are the two independent variables tested in the present study. The academic origin of each sample member was identified as described earlier (through verification of graduation lists obtained from independent sources). Academic affiliation was considered to be the institution where the individual was employed at the time of publication.

Classification of the sample members into their respective status tiers (using the status rankings derived from the principal components analysis described below) revealed that 97 of the doctoral graduates (52.7%) originated from high-status IS programs, 43 (23.3%) originated from middle-status IS programs, and 44 (23.9%) originated from low-status IS programs.

Of the 97 graduates of high-status IS programs, approximately 36% also had high-status academic affiliations, 13% had middle-status affiliations, and 51% had low-status affiliations. Of the 43 graduates of middle-status programs, approximately 6% had high-status affiliations, 37% had middle-status affiliations, and 58% had low-status affiliations. The affiliations of the 44 graduates with low-status academic origins were distributed as follows: approximately 8% had high-status affiliations, 10% had middle-status affiliations, and 82% had low-status affiliations. These figures are only approximate as movement across tiers is relatively common. Moreover, in some instances, graduates of lower tier schools obtained employment at higher tier schools. Although graduates with high-status origins were more likely to have high-status affiliations than the other graduates, approximately 64% of the high-status graduates were affiliated with middle- or low-status institutions.

IS status rankings

Using methodologies based on adjusted article counts or page counts, several studies have ranked universities based on the research productivity of their IS departments ([U.S. NEWS AND WORLD REPORT, 2002; ATHEY & PLOTNICKI, 2000; TRIESCHMANN & AL., 2000; IM & AL., 1998; GROVER & AL., 1992; LENDING & WETHERBE, 1992; VOGEL & WETHERBE, 1984]; see Appendix B for a more detailed description of these studies). The present study develops an ordinal measure of the independent variables (i.e., origin and affiliation) by using factor scores derived from a principal components analysis of these academic quality rankings, excluding the VOGEL & WETHERBE [1984] study, which was updated by the LENDING & WETHERBE [1992] study. In a manner similar to the analysis performed for journal rankings, each of the above studies was treated as a separate variable in the principal components analysis. The first principal component explained 69.65% of the variance. Factor scores for the 48 academic origins and additional 85 affiliations were used to rank the 133 institutions represented in our sample.

Finally, the factor scores were used to determine the appropriate classification of schools into one of three tiers: high-status, middle-status, and low-status. A number of cluster analyses were performed to identify significant break points, with the factor scores clustered into 2, 3, 4, and so on, up to a maximum of 12 clusters.
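The break-point search can be sketched with a minimal one-dimensional k-means; the factor scores below are illustrative, not the 133 actual scores used in the study:

```python
# A minimal 1-D k-means sketch for locating break points in sorted factor scores.
def kmeans_1d(values, k, iters=50):
    values = sorted(values, reverse=True)
    # Initialize centroids spread evenly across the sorted values.
    centroids = [values[i * (len(values) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        groups = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[j].append(v)
        # Recompute centroids as group means.
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

# Illustrative factor scores with two visible gaps (after 4.5 and after 2.8).
scores = [4.9, 4.7, 4.5, 3.1, 3.0, 2.8, 1.2, 1.0, 0.9, 0.8]
tiers = kmeans_1d(scores, k=3)
# The break points fall where one cluster ends and the next begins.
```

Repeating this for k = 2 through 12 and tallying where cluster boundaries recur identifies the most stable break points.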

Table 3. IS department status rankings

High status: Arizona; California, Berkeley; UCLA; Carnegie Mellon; Georgia; Harvard; Illinois, Urbana-Champaign; Indiana; Maryland; MIT; Michigan; Minnesota; New York University; Pennsylvania; Southern California; Stanford; Texas, Austin; Washington, Seattle.

Middle status: Arizona State; Boston University; California, Irvine; Florida; Georgia Institute of Technology; Georgia State; Houston; Iowa; North Carolina, Chapel Hill; Ohio State; Pennsylvania State; Pittsburgh; Purdue; Rochester; Texas A&M; Vanderbilt; Virginia; Wisconsin, Madison; Washington (St. Louis).

Low status: Alabama; Auburn; Babson College; Baltimore; Baruch College, CUNY; Bentley College; Bowling Green State; California State, Long Beach; California State, San Marcos; Canisius College; Case Western Reserve; Claremont Graduate School; Clemson; Colorado, Boulder; Colorado, Colorado Springs; Colorado, Denver; Dayton; Delaware; Denver; DePaul; Drexel; East Carolina; Eastern Kentucky; Emory; Florida Atlantic; Florida International; Florida State; George Mason; Georgetown; Hartford; Hawaii, Honolulu; Hawaii, Manoa; Idaho; Illinois State; Iowa State; Johns Hopkins; Kansas; Kennesaw State; Kent State; Kentucky; LeMoyne College; Loyola College (Baltimore); Loyola (Chicago); Maine; Massachusetts; McMaster (Canada); Miami (Ohio); Mississippi; Mississippi State; Missouri, St. Louis; Naval Postgraduate School; Nebraska; Nevada; New Jersey Institute of Technology; North Carolina, Charlotte; North Carolina, Greensboro; North Carolina, Wilmington; North Florida; North Texas; Northern Illinois; Oklahoma; Oklahoma State; Oregon State; Rensselaer Polytechnic Institute; San Diego State; San Francisco State; South Carolina; South Florida; Southern Illinois, Edwardsville; Southern Illinois, Carbondale; Southern Methodist; Southwest Missouri State; St. Louis; SUNY, Buffalo; SUNY, Stony Brook; Syracuse; Temple; Texas Christian; Texas Tech; Texas, Arlington; Texas, Dallas; Texas, El Paso; Texas, San Antonio; Toledo; Tulsa; Vermont; Villanova; Virginia Commonwealth; Virginia Polytechnic Institute; Wake Forest; Washington State; Wayne State (Michigan); Western Illinois; Wisconsin, Milwaukee; Wisconsin, Whitewater; Worcester Polytechnic Institute.

The most frequently identified break points fell after the schools ranked 17th and 35th. These points, which also divided the institutions into three groupings, were therefore designated as the break points for the high/middle-status and middle/low-status tiers, respectively.

Control variable: In-house editorship effect

We also examined the potential effect that in-house editorship may have on research productivity. In-house editorship exists when a journal's editor has the same academic affiliation as the author of interest. In-house editorship was treated as a control variable and was coded as present if, for a given article, any of the authors was at that time affiliated with the institution sponsoring the journal that published the article. The in-house editorship variable is a binary (dichotomous) variable associated with each individual publication record: a value of "1" indicates the presence of in-house editorship, and a value of "0" indicates its absence. For example, if the author's (or any coauthor's) academic affiliation was the University of Minnesota and the article was published in MIS Quarterly (an example of in-house editorship), a value of "1" was assigned to the in-house editorship variable.
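The coding rule above can be sketched as follows. The sponsor mapping is illustrative; the University of Minnesota / MIS Quarterly pairing is the only one taken from the text:

```python
# Illustrative journal-to-sponsor mapping (only the MIS Quarterly entry
# is taken from the text; a real analysis would cover all five journals).
JOURNAL_SPONSOR = {"MIS Quarterly": "University of Minnesota"}

def in_house(journal: str, author_affiliations: list) -> int:
    """Return 1 if any coauthor shares the sponsoring institution's affiliation."""
    sponsor = JOURNAL_SPONSOR.get(journal)
    return int(sponsor is not None and sponsor in author_affiliations)

# A Minnesota coauthor publishing in MIS Quarterly is coded 1; otherwise 0.
print(in_house("MIS Quarterly", ["University of Minnesota", "Emory"]))  # 1
print(in_house("MIS Quarterly", ["Clemson"]))                           # 0
```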

Results

Research productivity is represented by two different discrete variables, publication counts and citation counts (the dependent variables). The independent variables, status of academic origin and status of academic affiliation, are represented by a discrete variable based on ordinal rankings. The control variable, in-house editorship, is also discrete (present = 1, absent = 0). We assumed independent random sampling from each of the populations and that the populations under study are normally distributed with means that may or may not be equal, but with equal variances.

Multivariate analysis of variance (MANOVA) and associated tests were used to examine the research hypotheses. The F-test was used to assess whether there are significant differences in mean publication rates across the academic origin and affiliation categories after controlling for the presence of other independent variables while Tukey’s pairwise comparison test was used to assess differences between pairs. The MANOVA procedure was first used to test for mean differences in research quantity and quality across levels of academic origin and, a second time to test for mean differences in research quantity and quality across academic affiliation. A third MANOVA analysis was also performed to test the significance of the interaction effect of academic origin and academic affiliation on publication counts and citations. Finally, hypothesis 4, relating to the Lotka distribution, was tested using ROUSSEAU & ROUSSEAU’S [2000] LOTKA program.
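The overall mean-difference test can be sketched with a one-way F-test on synthetic data; the Poisson draws below are illustrative stand-ins loosely matched to the affiliation group sizes and means in Table 4, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative publication counts for three affiliation tiers (synthetic,
# loosely matched to the group sizes and means reported in Table 4).
high = rng.poisson(3.0, size=64)
middle = rng.poisson(2.6, size=45)
low = rng.poisson(1.9, size=128)

# One-way F-test for mean differences across the three tiers.
f_stat, p_value = stats.f_oneway(high, middle, low)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```

A pairwise follow-up in the spirit of Tukey's comparison test is available as `scipy.stats.tukey_hsd` (SciPy >= 1.8); the MANOVA itself extends this idea to both dependent variables jointly.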

The in-house editorship data were initially included in the analyses. The analyses were then repeated, omitting the in-house editorship data. Because there were few instances of in-house editorship in the sample, both analyses were substantially equivalent, notwithstanding the inclusion or omission of in-house editorship. Therefore, the results of the analyses that omitted the in-house editorship data are presented.

Effect of academic origin

Hypothesis 1 posits that doctoral graduates in information systems with high-status academic origins will exhibit greater research productivity in terms of both quantity and quality than doctoral graduates with moderate- or low-status academic origins.

The MANOVA for this hypothesis used publication count and citation count as the dependent variables regressed against academic origin. As Table 4 indicates, the graduates with middle-status academic origins had somewhat higher mean publication counts, averaging 3.06 publications; in comparison, the high-status graduates averaged 2.24 publications and the low-status graduates averaged 1.89 publications. Overall, the F-test results indicated that these differences were statistically significant across the three categories (p < 0.05). Tukey's pairwise comparison test also showed a significant pairwise difference, with middle-status origins outpacing low-status origins. With regard to citation counts, high-status graduates averaged 108.82 citations of their work, middle-status graduates averaged 95.26, and low-status graduates averaged 37.08. F-test results indicated that the differences in citation counts across academic origin categories were not statistically significant at the 0.05 level (p < 0.09). Tukey's pairwise comparison test likewise indicated no significant pairwise differences in citation counts. Therefore, Hypothesis 1 was not supported with respect to either publication quantity or publication quality.

Table 4. Effect of status of academic origin and academic affiliation

                                 Status of academic origin
                                 High     Middle   Low      F statistic
Mean publication count            2.24     3.06     1.89     3.542*
Mean citation count             108.82    95.26    37.08     2.472
Number of sample members           135       50       52

                                 Status of academic affiliation
                                 High     Middle   Low      F statistic
Mean publication count            2.97     2.58     1.94     4.575**
Mean citation count             159.44    95.84    53.63     6.279**
Number of sample members            64       45      128

* p < 0.05; ** p < 0.01

Effect of academic affiliation

Hypothesis 2 posits that doctoral graduates in information systems with high-status academic affiliations will exhibit greater research productivity in terms of both quantity and quality than doctoral graduates with moderate- or low-status academic affiliations.

As with the analysis of hypothesis 1, the effects of affiliation were tested using publication count and citation count as the dependent variables in a MANOVA. Results show that, in terms of publication counts, graduates with high-status academic affiliations averaged more publications in the top five IS journals during the relevant time period (an average of 2.97 publications, as compared to 2.58 and 1.94 publications by authors with middle-status and low-status affiliations, respectively). F-test results (Table 4) indicated that these differences were statistically significant (p < 0.01). Tukey's pairwise comparison test shows a significant difference between the publication counts of those with high-status affiliations and those with low-status affiliations (p < 0.05). In terms of citation counts, the published works of those with high-status affiliations were cited more often (an average of 159.44 times, as compared to 95.84 and 53.63 times for the works of authors with middle-status and low-status affiliations, respectively), and these differences are statistically significant (p < 0.01). Pairwise analysis utilizing Tukey's comparison test again showed a significant difference in the number of citations between graduates with high-status affiliations and those with low-status affiliations (p < 0.001). Thus, Hypothesis 2 is supported in that overall affiliation status is significantly related to both publication count and citation count.

Interaction effect of origin and affiliation

Hypothesis 3 posits that doctoral graduates in IS with both high-status academic origins and high-status academic affiliations will demonstrate greater research productivity than all other doctoral graduates in information systems. The MANOVA for this hypothesis regressed both publication counts and citations against the independent variables academic origin and academic affiliation as well as an interaction term representing origin and affiliation combined. The mean publication counts and citation counts for each of the nine combinations of academic origin and academic affiliation are shown in Table 5.

As predicted, tests indicate the existence of a significant interaction effect on publication counts (p < 0.05). Pairwise comparisons showed that, on average, graduates with both middle-status origins and high-status affiliations published significantly more articles (4.83) in the top five IS journals than graduates with either high-status origins/low-status affiliations (p < 0.05) or low-status origins/low-status affiliations (p < 0.05). In other words, graduates with middle-status origins who became affiliated with high-status institutions were the most productive group in terms of quantity.

In terms of citation counts, authors with both high-status origins and high-status affiliations had their work cited significantly more often than low-status origin/low-status affiliation authors (161.93 versus 57.18, p < 0.05); however, the interaction as a whole was not significant (p < 0.6).

Table 5. Interaction effect of origin and affiliation

                                 Status of academic affiliation
                                 High     Middle   Low      Univariate F
High-status academic origin
  Mean publication count          2.78     2.33     1.73     2.399*
  Mean citation count           161.93   119.76    57.18     1.946
  Number of sample members          54       21       60
Middle-status academic origin
  Mean publication count          4.83     2.93     2.76
  Mean citation count           203.00    80.20    80.76
  Number of sample members           6       15       29
Low-status academic origin
  Mean publication count          2.75     2.56     1.64
  Mean citation count            60.50    66.11    27.97
  Number of sample members           4        9       39

* p < 0.05

These results indicate that high-status origin/high-status affiliation graduates were not always the most productive group of researchers in terms of either quantity or quality. Rather, the greatest positive influence of high-status academic affiliation on productivity was among those graduates with middle-status academic origins. Thus, Hypothesis 3, stating that graduates with a combination of high-status origin and high-status affiliation would be the most productive researchers, is only partially supported by the results of the analysis.

Lotka’s Law distribution

Hypothesis 4 posits that the number of publications in the field of IS will follow a Lotka distribution. To test this hypothesis we first calculated the theoretical or expected Lotka distribution and compared it to the observed frequencies (Table 1). The proportion of all authors publishing one paper (38.59%) differs greatly from the Lotka prediction (60.79%). We next used ROUSSEAU & ROUSSEAU'S [2000] LOTKA program to test for the theoretical distribution. We found that our data do not follow a Lotka distribution: the Kolmogorov-Smirnov maximum difference statistic (D-max = 0.1536) exceeds the critical values of 0.1202, 0.1003, and 0.0899 at the 1%, 5%, and 10% significance levels, respectively.
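The comparison can be sketched by accumulating observed and expected shares and taking the maximum absolute difference. Because the LOTKA program fits its own exponent, the D-max below (computed against the classic exponent-2 distribution) is only illustrative and need not equal the reported 0.1536:

```python
import math

# Observed author shares from Table 1 (publication count -> share of authors).
observed = {1: .3859, 2: .2283, 3: .1196, 4: .1033, 5: .0326, 6: .0326,
            7: .0217, 8: .0054, 9: .0217, 10: .0163, 11: .0109, 12: .0054,
            13: .0054, 20: .0109}

# Expected shares under classic Lotka's Law (exponent 2), C = 6 / pi**2.
C = 6 / math.pi ** 2
d_max, cum_obs, cum_exp = 0.0, 0.0, 0.0
for n in sorted(observed):
    cum_obs += observed[n]
    cum_exp += C / n ** 2
    d_max = max(d_max, abs(cum_obs - cum_exp))

# Approximate 5% critical value for N = 184 authors: 1.3581 / sqrt(184),
# which is close to the 0.1003 reported in the text.
critical_5 = 1.3581 / math.sqrt(184)
print(f"D-max = {d_max:.4f}, 5% critical value = {critical_5:.4f}")
```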

Discussion

Given the emphasis placed on research productivity as a measure of a faculty member’s academic success at research oriented institutions, Deans and other academic decision makers seek to identify reliable indicators of research potential as a means of improving their decision making in the selection process. Two factors commonly thought to influence the research productivity of individuals in the IS field are the quality of their academic origins and academic affiliations, with status being used as a proxy for quality.

The findings of this study confirm, in the IS discipline, the long-held assumption that the status of a faculty member's academic affiliation is a significant determinant of research productivity in terms of quantity. Researchers affiliated with high-status institutions published, on average, more articles in the top five IS journals than researchers affiliated with middle-status and low-status institutions, and these differences were statistically significant. In terms of quality, the frequency of citations to the published works of these authors was also significantly different across the three tiers of affiliation.

The present analysis, however, dispels the notion that graduates of high-status doctoral programs in the discipline of information systems will become superior researchers. While graduates of the more prestigious IS programs did produce more highly cited research, they did not produce a larger average number of publications. That is, differences in research productivity based on academic origin were in the predicted direction with regard to citations, but graduates of high-status institutions actually published fewer articles than graduates of middle-status universities. The findings indicate that productive scholars were not heavily concentrated, with respect to their academic origins, among a few elite universities, and that graduates of middle-status doctoral programs were as productive as graduates of high-status programs in terms of both research quantity and quality, if not more so.

Our analysis of a potential interaction effect (testing the combined effects of high-status origin and high-status affiliation on productivity) casts further doubt on the traditional assumption that earning a doctorate from a high-status IS program is a necessary precursor to a successful career in academic research. We expected to find that graduates who enjoyed the presumed benefits of both a high-status academic origin and a high-status academic affiliation would be the most prolific researchers. The findings did not support that hypothesis; rather, those doctoral graduates were outperformed by graduates of middle-status schools who were affiliated with high-status schools.

We also found that the frequency of publications in the field of IS does not follow a Lotka distribution. However, our analysis of scatterplots does indicate the existence of a power distribution, a result that may support the negative binomial thesis of RAO [1980] (see also [GUPTA & AL., 1998]) and that, perhaps given the relative youth of the IS field, contradicts GUPTA & KARISIDDAPPA [1996].

The findings of this study suggest that in the IS discipline, academic hiring practices that rely on academic origin as a barometer of future research productivity are unwise. Assuming that prestigious doctoral programs actually have higher quality programs and provide better training and preparation for their graduates, the findings indicate that graduates of less prestigious (presumably lower quality) programs are able to overcome their relative lack of training and are statistically as likely to be productive researchers as their better trained counterparts. This is particularly true for those graduates of less prestigious programs who become employed by high-status, research oriented institutions that promote and foster research.

The positive relationship between academic affiliation and research productivity that was confirmed by this study emphasizes the importance of socialization and development processes in a faculty member’s professional environment. The implication is that employers may have the power to cultivate a faculty of productive researchers, regardless of their academic origins, by creating an organizational culture that fosters, promotes, and rewards research productivity. Organizational factors such as support and rewards for research, combined with greater research expectations and pressure to publish at high-status schools, appear to be effective motivators for the production of research.

Because academic affiliation, rather than academic origin, is a significant determinant of research productivity, universities should focus on creating an environment that is conducive to publishing articles. Further, IS departments should direct their hiring committees to modify their selection criteria, reducing the weight given to academic origin (or eliminating it as a selection criterion altogether) and increasing the weight given to other selection criteria. Hiring committees should be empowered to exercise greater discretion in their endeavors to identify and hire talented individuals who are likely to succeed in the realm of publishing.

Limitations

The present study was conducted using methodologies consistent with those used in other published studies, and the findings of this study provide insight into the relative importance of selection and development for the research productivity of scholars in the IS field. However, several limitations should be acknowledged.

We sought only to examine the effects of the status of academic origin and academic affiliation on research productivity. Many other factors are also likely to affect research productivity, including professional rank, age, gender, race/ethnicity, and family status [BELLAS & TOUTKOUSHIAN, 1999; BEYER & AL., 1995]. The present study did not control for these other potential influences on research productivity.

We also attempted to assess the impact of a potential interaction effect between academic origin and academic affiliation. While the results of that analysis lend further support to the conclusion that academic affiliation is a more influential determinant of research productivity than academic origin, these findings should be interpreted cautiously: two of the groups that outperformed the high-status origin/high-status affiliation group contained very few sample members, so one or two highly productive scholars greatly influenced the mean publication count and citation count in those cells. This may warrant further research into the potential interactive effects of origin and affiliation.

We measured research productivity by reference to articles published in five premier IS journals. Evidence suggests that assessment of research productivity is highly sensitive to changes in journal baskets and to other factors such as the length of time analyzed and the variables used to assess the quality of published articles, including citation counts [CHUA & AL., 2002]. Thus, the selection of a journal basket is an important step in this type of analysis. For purposes of this study, we deemed it appropriate to limit the scope of our analysis to top tier journal publications, and the method for identification of such journals was rational and unbiased. Nevertheless, scholars in the field are likely to disagree as to which journals enjoy such eminent standing. Scholars in the field may also consider themselves or their colleagues to be productive researchers if they frequently publish in lower quality journals (or journals that are simply not on the list used in the present study). Further, some would argue that a basket of five journals is too small [CHUA & AL., 2002; GUIMARAES, 1998]. Some would also argue that any basket of journals for a study such as this should be limited to IS-specific journals, that is, journals that specifically target IS research [CHUA & AL., 2002], which would exclude multidisciplinary journals such as Management Science and Decision Sciences. On the other hand, others argue for an even broader collection of journals to reflect the fact that the IS field is cross disciplinary and IS-related articles are of interest to readers of well regarded journals in other disciplines such as psychology, management, and administrative science [GUIMARAES, 1998].

The limitations of journal rankings based on subjective criteria such as the perceptions and opinions of IS faculty have also been noted [KATERATTANAKUL & AL., 2003; CHUA & AL., 2002; KAHNEMAN & AL., 1982]. However, most existing journal rankings are based on such subjective criteria, and the present study also used the KATERATTANAKUL & AL. [2003] study, which employed an objective citation analysis to rank journals. That objective analysis yielded results that “closely resemble results reported in several previous studies [using subjective measures of quality]” [KATERATTANAKUL & AL., 2003, P. 113].

Publication counts were not adjusted for order or number of authors, which could arguably have the effect of distorting the measurement of an individual’s research productivity in terms of his or her contribution to those publications. However, it is difficult to accurately gauge the relative contributions to a manuscript on the basis of order of authorship [GOMEZ-MEJIA & BALKIN, 1992]. Indeed, it has been argued that some “senior professors [are] widely known as authorship hounds, demanding from junior faculty or students that their names be included, and/or be placed first, in articles which, in many cases, they have not contributed to” [GUIMARAES, 1998, P. 22]. Moreover, adjusting publication counts for the number of authors may actually understate the value of a published work, because research suggests that jointly authored articles are cited more often than solely authored articles [DIAMOND, 1985]. In recognition of these limitations, we also examined citation counts in an attempt to evaluate an article’s contribution to the IS literature, which some consider to be a more important consideration in the assessment of research productivity [GUIMARAES, 1998].

Finally, our evaluation of the distribution of publications in IS per the Lotka distribution might well have produced different results if our sample had been larger in terms of both the number of journals/publications and the number of years over which data were gathered. It is possible that had we used the sort of extensive data set employed by GUPTA & KARISIDDAPPA [1996] the Lotka distribution hypothesis would have been supported.

Directions for future research

Until the LONG & AL. [1998] study was conducted in the field of management, prior research in a variety of other disciplines had supported the common assumption that graduates of high-status doctoral programs are more likely to develop into prolific authors than graduates of less esteemed programs. The LONG & AL. study and the present study produced findings that counter that assumption in the management and IS fields. The unexpected outcome of these two studies raises the question of whether inherent differences among the various academic disciplines explain the disparate research findings, or whether the relationship between academic origin and research productivity has diminished over time. The results of these two recent studies may warrant reinvestigation in other disciplines previously examined in order to confirm or refute the effects of academic origin in such fields.

Although the present study confirmed a positive relationship between the status of academic affiliation and research productivity, the authors did not attempt to characterize the relative influence that various organizational factors may have on research productivity. Future research could extend this study by examining each of the diverse traits that define organizational culture (such as incentives, availability of funding and research assistants, teaching loads, pressure, and collegiality) to determine the extent to which each trait influences research productivity. The impact of organizational policies, such as those regarding promotion and tenure, joint authorship, and journal quality rankings, should also be examined. Such research may also discover unexpected organizational factors that may help to explain the positive relationship between academic affiliation and research productivity.

Lastly, as we implied above, the Lotka results for this study suggest that future research on IS faculty productivity distributions should employ an expanded data set. Moreover, our results suggest that a power-law distribution of some sort does exist. This finding, together with the explosive growth of IS over the past 25 years, suggests that a study similar to that conducted by GUPTA & KARISIDDAPPA [1996] for the field of population genetics, testing for both Lotka and negative binomial distributions, would be of value.
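
As a sketch of the distributional check just described, the snippet below estimates the exponent n in Lotka's law, f(x) = C/x^n, by least-squares regression on log–log frequencies. The data and the function name are hypothetical, and log–log regression is only one common estimation approach, not necessarily the procedure used in the studies cited here.

```python
from collections import Counter
import math

def fit_lotka_exponent(pub_counts):
    """Estimate the exponent n in Lotka's law f(x) ~ C / x**n by
    least-squares regression of log f(x) on log x (a simple, common
    approach; not necessarily the method used in the cited studies)."""
    freq = Counter(pub_counts)              # x -> number of authors with x papers
    xs = [math.log(x) for x in sorted(freq)]
    ys = [math.log(freq[x]) for x in sorted(freq)]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return -slope                           # Lotka's classic value is about 2

# Hypothetical sample shaped roughly like 100 / x**2
counts = [1] * 100 + [2] * 25 + [3] * 11 + [4] * 6 + [5] * 4
print(round(fit_lotka_exponent(counts), 2))  # prints a value close to 2
```

A field whose empirical exponent departs systematically from 2, as the IS results above suggest, would fail the classic Lotka fit even while following some other power distribution.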

References

ALLISON, P. D., STEWART, J. A. (1974). Productivity differences among scientists: Evidence for accumulative advantage. American Sociological Review, 39 : 596–606.

ARMSTRONG, J. S., SPERRY, T. (1994). Business school prestige – research versus teaching. Interfaces, 24 (2) : 13–43.

ASSOCIATION FOR INFORMATION SYSTEMS (2007). Journal Rankings. Retrieved from http://www.isworld.org/csaunders/rankings.htm, December 2007.

ATHEY, S., PLOTNICKI, J. (2000). An evaluation of research productivity in academic IT. Communications of the Association for Information Systems, 3 : 1–20.

BECKER, G. (1964). Human Capital: A Theoretical and Empirical Analysis with Special Reference to Education. New York: Columbia University Press.

BELLAS, M. L., TOUTKOUSHIAN, R. K. (1999). Faculty time allocations and research productivity: Gender, race and family effects. The Review of Higher Education, 22 (4) : 367–90.

BEYER, J. M., CHANOVE, R. G., FOX, W. B. (1995). The review process and the fates of manuscripts submitted to AMJ. Academy of Management Journal, 38 : 1219–60.

BHARATI, P., TARASEWICH, P. (2002). Global perceptions of journals publishing e-commerce research. Communications of the ACM, 45 (5) : 21–26.

BORDONS, M., FERNANDEZ, M. T., GOMEZ, I. (2000). Advantages and limitations in the use of impact factor measures for the assessment of research performance in a peripheral country. Scientometrics, 53 (2) : 195–206.

CABLE, D. M., MURRAY, B. (1999). Tournaments versus sponsored mobility as determinants of job search success. Academy of Management Journal, 42 (4) : 439–49.

CARTTER, A. M. (1966). An Assessment of Quality in Graduate Education. Washington, D.C.: American Council on Education.

CHATMAN, J. A. (1989). Improving interactional organizational research: A model of person-organization fit. Academy of Management Review, 14 : 333–49.

CHATMAN, J. A. (1991). Matching people and organizations: Selection and socialization in public accounting firms. Administrative Science Quarterly, 36 : 459–84.

CHUA, C., CAO, L., COUSINS, K., STRAUB, D. W. (2002). Measuring researcher-production in information systems. Journal of the Association for Information Systems, 3 : 145–215.

CLEMENTE, F., STURGIS, R. (1974). Quality of department of doctoral training and research productivity. Sociology of Education, 47 : 287–99.

COLMAN, A. M., DHILLON, D., COULTHARD, B. (1995). A bibliometric evaluation of the research performance of British university politics departments: Publications in leading journals. Scientometrics, 32 (1) : 49–66.

CRANE, D. (1965). Scientists at major and minor universities: a study of productivity and recognition. American Sociological Review, 30 (5) : 699–714.

D’AVENI, R.A. (1996). A multiple-contingency, status-based approach to interorganizational mobility of faculty and input-output competition among top business schools. Organization Science, 7 : 166–89.

DIAMOND, A. M. (1985). The money value of citations to single-authored and multiple-authored articles. Scientometrics, 8 : 815–20.

DOKE, E. R., LUKE, R. H. (1987). Perceived quality of CIS/MIS journals among faculty: publishing hierarchies. Journal of Computer Information Systems, 28 (4) : 30–33.

DOKE, E. R., REBSTOCK, S. E., LUKE, R. H. (1995). Journal publishing preferences of CIS/MIS scholars: an empirical investigation. Journal of Computer Information Systems, 36 (1) : 49–64.

FOLGER, J., ASTIN, H., BAYER, A. (1970). Human Resources and Higher Education. New York: Russell Sage Foundation.

FOWLER, A. R., BUSHARDT, S. C., BROOKING, S. A. (1985). An analysis of the authorship of management-oriented journals: The relationship between school status, article type, publication outlet, and author academic position. The Journal of Business Communication, 22 (3) : 25–36.

GILL, T. G. (2001). What's an MIS paper worth? (An exploratory analysis). The Data Base for Advances in Information Systems, 32 (2) : 14–33.

GILLENSON, M. L., STUTZ, J. (1991). Academic issues in MIS: Journals and books. MIS Quarterly, 15 (4) : 447–52.

GOMEZ-MEJIA, L. R., BALKIN, D. B. (1992). Determinants of faculty pay: an agency theory perspective. Academy of Management Journal, 35 : 921–55.

GROVER, V., SEGARS, A. H., SIMON, S. J. (1992). An assessment of institutional research productivity in MIS. The Data Base for Advances in Information Systems, 23 (4) : 5–9.

GUAN, J., MA, N. (2004). A comparative study of research performance in computer science. Scientometrics, 61 (3) : 339–359.

GUIMARAES, T. (1998). Assessing research productivity: important but neglected considerations. Decision Line, 18 : 22.

GUPTA, B. M., KARISIDDAPPA, C. R. (1996). Author productivity patterns in theoretical population genetics (1900–1980). Scientometrics, 36 (1) : 19–41.

GUPTA, B. M., KUMAR, S., AGGARWAL, B. S. (1999). A comparison of productivity of male and female scientists of CSIR. Scientometrics, 45 (2) : 269–289.

GUPTA, B. M., KUMAR, S., ROUSSEAU, R. (1998). Applicability of selected probability distributions to the number of authors per article in theoretical population genetics. Scientometrics, 42 (3) : 325–334.

HAGSTROM, W. O. (1968). Departmental prestige and scientific productivity. Sociological Abstracts, 16 (6) : 16.

HAGSTROM, W. O. (1971). Inputs, outputs, and the prestige of university science departments. Sociology of Education, 44 : 375–97.

HAMILTON, S., IVES, B. (1983). The journal communication system for MIS research. The Data Base for Advances in Information Systems, 14 (2) : 3–14.

HANCOCK, T., LANE, J., RAY, R., GLENNON, D. (1992). The ombudsman: factors influencing academic research productivity: A survey of management scientists. Interfaces, 22 (5) : 26–38.

HARDGRAVE, B. C., WALSTROM, K. A. (1997). Forums for MIS scholars. Communications of the ACM, 40 (11) : 119–124.

HENRY, W. R., BURCH, E. E. (1974). Institutional contributions to scholarly journals of business. Journal of Business, 47 (1) : 56–66.

HOGAN, T. D. (1986). The publishing performance of U.S. Ph.D. programs in economics during the 1970s. Journal of Human Resources, 21 : 216–29.

HOLSAPPLE, C. W., JOHNSON, L. E., MANAKYAN, H., TANNER, J. T. (1993). A citation analysis of business computing research journals. Information and Management, 25 (5) : 231–44.

HOLSAPPLE, C. W., JOHNSON, L. E, MANAKYAN, H., TANNER, J. T. (1994). Business computing research journals: a normalized citation analysis. Journal of Management Information Systems, 11 (1) : 131–40.

HU, Q., GILL, T. G. (2000). IS faculty research productivity: influential factors and implications. Information Resources Management Journal, 13 (2) : 15–25.

HU, Q., GILL, T. G. (2002). An analysis of academic research productivity of information systems faculty. In: M. KHOSROWPOUR (Ed.), Advanced Topics in Information Resources Management, (pp. 296–314). Hershey (PA): Idea Group Publishing.

IM, J. H., HARTMAN, S. (1997). The role of research in MIS faculty performance evaluation: an exploratory study. Journal of Computer Information Systems, 37 (3) : 37–40.

IM, K. S., KIM, K. Y., KIM, J. S. (1998). An assessment of individual and institutional research productivity in MIS. Decision Line, (December/January) : 8–12.

JACOBS, F., HARTGRAVES, A., BEARD, L. (1986). Publication productivity of doctoral alumni: a time adjusted model. The Accounting Review, 61 : 179–87.

JASKOLKA, G., BEYER, J. M., TRICE, H. M. (1985). Measuring and predicting managerial success. Journal of Vocational Behavior, 26 : 189–205.

JAUCH, L. R., GLUECK, W. F. (1975). Evaluation of university professors' research performance. Management Science, 22 (1) : 66–75.

JOLSON, M. A. (1974). Criteria for promotion and tenure: a faculty view. Academy of Management Journal, 17 (1) : 149–54.

JUDGE, T. A., CABLE, D. M., BOUDREAU, J. W., BRETZ, R. D. (1995). An empirical investigation of the predictors of executive career success. Personnel Psychology, 48 : 485–519.

KAHNEMAN, D., SLOVIC, P., TVERSKY, A. (1982). Judgement under Uncertainty: Heuristics and Biases. Cambridge (UK): Cambridge University Press.

KATERATTANAKUL, P., HAN, B., HONG, S. (2003). Objective quality ranking of computing journals. Communications of the ACM, 46 (10) : 111–14.

KONRAD, A., PFEFFER, J. (1990). Do you get what you deserve? Factors affecting the relationship between productivity and pay. Administrative Science Quarterly, 35 : 258–85.

KOONG, K. S., WEISTROFFER, H. R. (1989). Faculty usage of management information systems journals: A survey. Journal of Computer Information Systems, 30 (1) : 1–4.

LABAND, D. N. (1986). A ranking of the top U.S. economics departments by research productivity of graduates. Journal of Economic Education, 17 : 70–77.

LANE, J., RAY, R., GLENNON, D. (1990). Work profiles of research statisticians. The American Statistician, 44 (1) : 9–13.

LARSEN, K. R. (1998). The Damocles sword of academic publishing: Sharper students or duller sword in the MIS field? Crossroads of the ACM, 5 (2) : 7 pages.

LARSEN, K. R., NEELY, M. P. (2003). Profiles of MIS doctoral candidates: Ideals and reality. The Database for Advances in Information Systems, 31 (3) : 64–77.

LAZARSFELD, P. F., THIELANS, W. (1958). The Academic Mind. Glencoe (IL): The Free Press.

LENDING, D., WETHERBE, J. C. (1992). Update on MIS research: a profile of leading journals and U.S. universities. The Data Base for Advances in Information Systems, 23 (3) : 5–11.

LONG, R. G., BOWERS, W. P., BARNETT, T., WHITE, M. C. (1998). Research productivity of graduates in management: Effects of academic origin and academic affiliation. Academy of Management Journal, 41 (6) : 704–14.

LOTKA, A. J. (1926). The frequency distribution of scientific productivity. Journal of the Washington Academy Of Sciences, 16 (12) : 317–323.

MANIS, J. G. (1951). Some academic influences upon publication productivity. Social Forces, 29 (March) : 267–72.

MELTZER, B. (1949). The productivity of social scientists. American Journal of Sociology, 55 : 25–29.

MOORE, L. J., TAYLOR, B. W. (1980). A study of institutional publications in business-related academic journals, 1972–1978. Quarterly Review of Economics and Business, 20 (1) : 87–97.

MYLONOPOULOS, N., THEOHARAKIS, V. (2001). Global perceptions of IS Journals: Where is the best IS research published? Communications of the ACM, 44 (9) : 29–33.

NATH, R., JACKSON, W. M. (1991). Productivity of management information systems researchers: Does Lotka’s law apply? Information Processing and Management, 27 (2/3) : 203–210.

NORD, J. H., NORD, G. D. (1989). MIS research: Journal status assessment and analysis. Information & Management, 29 : 29–42.

NWAGWU, W. (2006). A bibliometric analysis of productivity patterns of biomedical authors of Nigeria during 1967–2002. Scientometrics, 69 (2) : 259–269.

PEFFERS, K., TANG, Y. (2003). Identifying and evaluating the universe of outlets for information systems research: ranking the journals. The Journal of Information Technology Theory and Application, 5 (1) : 63–84.

PETRY, G., SETTLE, J. (1988). A comprehensive analysis of worldwide scholarly productivity in selected US business journals. Quarterly Review of Economics and Business, 28 (3) : 88–104.

PFEFFER, J. (1993). Barriers to the advance of organizational science: Paradigm development as a dependent variable. Academy of Management Review, 18 : 599–620.

PFEFFER, J., ROSS, J. (1982). The effects of marriage and a working wife on occupational and wage attainment. Administrative Science Quarterly, 27 : 66–80.

PHELAN, T. J. (1999). A compendium of issues for citation analysis. Scientometrics, 45 (1) : 117–136.

PSACHAROPOULOS, G. (1985). Returns to education: a further international update and implications. Journal of Human Resources, 20 : 583–604.

RADHAKRISHNAN, T., KERNIZAN, R. (1979). Lotka’s Law and computer science literature. Journal of the American Society for Information Science, January : 52–54.

RAO, I. K. R. (1980). The distribution of scientific productivity and social change. Journal of the American Society for Information Science, March : 111–122.

REBNE, D. S., DAVIDSON, N. B. (1992). Understanding patterns of publishing activity in academic research occupations. Decision Sciences, 23 (4) : 944–56.

RESKIN, B. (1977). Scientific productivity and the reward structure of science. American Sociological Review, 42 : 491–504.

ROSENBAUM, J. E. (1984). Career Mobility in a Corporate Hierarchy. New York: Academic Press.

ROUSSEAU, B., ROUSSEAU, R. (2000). LOTKA: A program to fit a power law distribution to observed frequency data. Cybermetrics, 4 (1). Retrieved from http://www.cindoc.csic.es/cybermetrics/articles/v4i1p4.html, December 2007.

SALANCIK, G., PFEFFER, J. (1978). A social information processing approach to job attitudes and task design. Administrative Science Quarterly, 23 : 224–53.

SCHNEIDER, B. (1987). The people make the place. Personnel Psychology, 40 : 437–54.

SCHNEIDER, B., GOLDSTEIN, H. W., SMITH, D. B. (1995). The ASA framework: An update. Personnel Psychology, 48 : 747–73.

SENTER, R. (1986). A causal model of productivity in research faculty. Scientometrics, 10 (5) : 307–328.

SHIM, J. P., ENGLISH, J. B., YOON, J. (1991). An examination of articles in the eight leading management information systems journals. Socio-Economic Planning Science, 25 (3) : 211–19.

SIEGFRIED, J. J. (1972). The publishing of economic papers and its impact on graduate faculty ratings, 1960–1969. Journal of Economic Literature, 10 (March) : 31–49.

SOLARI, A., MAGRI, M. (2002). A new approach to the SCI Journal Citations Reports, a system for evaluating scientific journals. Scientometrics, 47 (3) : 605–625.

SOMIT, A., TANENHAUS, J. (1964). American Political Science: A Profile of a Discipline. New York: Atherton Press.

TRIESCHMANN, J. S., DENNIS, A. R., NORTHCRAFT, G. B., NIEMI, A. W. (2000). Serving multiple constituencies in business schools: M.B.A. program versus research performance. Academy of Management Journal, 43 (6) : 1130–41.

U.S. NEWS AND WORLD REPORT. (2002). Exclusive business rankings: specialties. Special Issue: Graduate Schools, 131 (11) : 24.

USEEM, M., KARABEL, J. (1986). Pathways to top corporate management. American Sociological Review, 51 : 184–200.

VOGEL, D. R., WETHERBE, J. C. (1984). MIS research: a profile of leading journals and universities. The Database for Advances in Information Systems, 16 (1) : 3–14.

VOOS, H. (1974). Lotka and information Science. Journal of the American Society for Information Science, July-August : 270–272.

WALCZAK, S. (1999). A re-evaluation of information systems publication forums. Journal of Computer Information Systems, 40 (1) : 88–97.

WALSTROM, K. A., HARDGRAVE, B. C. (2001). Forums for information systems scholars: III. Information & Management, 39 : 117–24.

WALSTROM, K. A., HARDGRAVE, B. C., WILSON, R. L. (1995). Forums for management information systems scholars. Communications of the ACM, 38 (3) : 93–102.

WHITELY, W., DOUGHERTY, T. W., DREHER, G. F. (1991). Relationship of career mentoring and socioeconomic origin to managers' and professionals' early career progress. Academy of Management Journal, 34 : 331–51.

WHITMAN, M. E., HENDRICKSON, A. R., TOWNSEND, A. M. (1999). Academic rewards for teaching, research and service: data and discourse. Information Systems Research, 10 (2) : 99–109.

WILLIAMS, W. W. (1987). Institutional propensities to publish in academic journals of business administration: 1979–1984. Quarterly Review of Economics and Business, 27 (1) : 77–94.

ZIPF, G. K. (1935). The Psychobiology of Language. Boston: Houghton-Mifflin.

ZIVNEY, T., BERTIN, W. (1992). Publish or perish: What the competition is really doing. The Journal of Finance, 47 : 295–329.

Appendix A
Description of journal rankings used to determine the top five IS journals

The most recent journal ranking used in the principal components analysis of the present study to determine the top five IS journals was that of KATERATTANAKUL & AL. [2003], who used a citation analysis to rank 27 journals with a primary focus on IS and computing areas. The study did not evaluate multidisciplinary journals such as Management Science and Decision Sciences, nor journals lacking sufficient citation data such as the Journal of Management Information Systems (JMIS) and Communications of the AIS (CAIS). The citation data were collected from the Social Science Citation Index (SSCI) and the Science Citation Index (SCI) for the period 1997–2000, which allowed a two-year time lag to assess the citation frequency of articles published in the relevant journals between 1995 and 1998. The authors performed a citation analysis using 5,868 target articles and scored each journal on the basis of seven indices. The average ranking score for each journal across the seven indices determined the journal’s final quality ranking. The results indicated that the top five IS journals, in order, are MIS Quarterly (MISQ), Information Systems Research (ISR), Communications of the ACM (CACM), Journal of the ACM, and IEEE Transactions on Software Engineering.

MYLONOPOULOS & THEOHARAKIS [2001] surveyed members of the ISWorld mailing list and the IS Faculty Directory at www.isworld.org, with a response rate of 35.45%. Nearly 1,000 individuals responded to their online questionnaire, which was the largest sample obtained by researchers conducting this type of study at that time (and we are not aware of a larger sample to date). Respondents ranked up to 20 journals from a list of 87, placing 10 in the top tier and 10 in the second tier of journals, based on their perceived contribution to the IS field, which resulted in four separate journal rankings by region (World, North America, Europe, and Australasia). The World and North American rankings were identical, except that the #2 and #3 positions were reversed. The North American ranking listed the following top five IS journals, in order: MISQ, ISR, CACM, JMIS, and Management Science. As a second measure of journal importance, respondents listed the top five journals they read the most. Using the North American readership measurement, the same five journals were ranked in the top five (in the same order), reinforcing the soundness of the rankings based on perceptions of quality.

The WALSTROM & HARDGRAVE [2001] study is the third in a series of studies (see also [HARDGRAVE & WALSTROM, 1997; WALSTROM & AL., 1995]) which used an opinion survey of IS faculty to rank 51 journals. WALSTROM & HARDGRAVE surveyed all 2,147 IS faculty in the U.S. and Canada listed in ISWorld’s Directory of Management Information Systems Faculty, with a response rate of 17% (yielding 364 usable responses). According to the survey responses, the top five outlets for IS scholars, in order, are MISQ, ISR, CACM, JMIS, and Management Science.

GILLENSON & STUTZ [1991] surveyed the chairperson of the IS department (or the professor in charge of the IS curriculum, where no separate IS department existed) of all AACSB-accredited business schools during 1989–1990, with a response rate of 50.2%. The respondents were given a list of 38 IS or IS-related journals and asked to rate each journal’s quality as top, high, medium, low, or nil. GILLENSON & STUTZ assigned a quantitative weight to each category (1, 2, 3, 4, or 5, respectively) and ranked the journals according to their weighted averages. The results of that study identified the following top five IS journals, in order: Management Science, MISQ, CACM, Decision Sciences, and JMIS. The GILLENSON & STUTZ rankings have been used by other scholars studying research productivity, notably GROVER & AL., who relied on the GILLENSON & STUTZ journal rankings to rank the top 50 MIS research institutions in terms of research productivity [GROVER & AL., 1992]. In the authors’ considered opinion, “[t]hese journals reflect quality MIS outlets and also represent the consensus of prior studies . . . Few would argue that this set of journals has played a significant role in shaping the MIS field over the past decade” [GROVER & AL., 1992, P. 6]. (Note that ISR was not published until shortly after the GILLENSON & STUTZ study was conducted.)
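
The weighting scheme just described reduces to a simple mean of category codes, where a lower mean indicates higher perceived quality. A minimal sketch follows; the function name and sample responses are hypothetical, and only the 1–5 mapping mirrors the description above:

```python
# Category weights as described: top=1, high=2, medium=3, low=4, nil=5
WEIGHTS = {"top": 1, "high": 2, "medium": 3, "low": 4, "nil": 5}

def mean_rating(responses):
    """Average the numeric weights of respondents' quality ratings for
    one journal (a GILLENSON & STUTZ-style weighted-average score)."""
    return sum(WEIGHTS[r] for r in responses) / len(responses)

# Hypothetical responses from four survey participants for one journal
print(mean_rating(["top", "top", "high", "medium"]))  # → 1.75
```

Journals would then be ranked by sorting these means in ascending order.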

A follow-up study conducted by WHITMAN & AL. [1999] largely confirmed the GILLENSON & STUTZ journal rankings, using essentially the same methodology, except that 51 journals were evaluated and the population surveyed also included non-AACSB-accredited schools. The WHITMAN & AL. [1999] study yielded a response rate of 43%, representing 184 institutions out of 432 surveyed. According to those survey responses, the top five IS journals, in order, are MISQ, Management Science, CACM, ISR, and Decision Sciences. Although these journals are ranked in a slightly different order than the GILLENSON & STUTZ list, the most significant difference is that ISR (which came into existence shortly after the GILLENSON & STUTZ survey was conducted) displaced JMIS in the exclusive top five category.

SHIM & AL. [1991] surveyed 47 highly productive IS researchers (where productivity was measured by a citation analysis), with a response rate of 53.2%, theorizing that their experience and success at publishing made them well qualified to rate the quality of IS journals. Respondents were presented with an open-ended list of 13 journals specifically targeted to the IS field and were encouraged to add and rank unlisted journals they deemed important to the discipline. The perceptual rankings of these prolific researchers generated the following list of leading journals, in order: Management Science, MISQ, CACM, and Harvard Business Review, with the following journals tied for fifth place: ACM Computing Surveys, Decision Sciences, and Information and Management.

Appendix B
Description of academic quality rankings used to determine the status of IS departments

The most recent academic quality ranking used in the principal components analysis of the present study to determine the status of IS departments was conducted by U.S. NEWS AND WORLD REPORT [2002], which ranked IS programs based on reputation using opinion surveys of (a) business school deans and directors of accredited programs (53% response rate) and (b) corporate recruiters (26% response rate).

ATHEY & PLOTNICKI [2000] used adjusted article counts to identify and rank the top 24 universities in terms of their IS research productivity. The study analyzed 972 IS articles written by 1,381 authors in 10 leading IS journals during the 1992–1996 time period. The sample members were from 389 different universities. The universities received one credit for each article solely authored by one of their faculty members, and one credit was divided among the universities represented by coauthored articles (e.g., credit for an article written by two authors from different universities would be split equally between the two universities).
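
The credit-splitting rule described above can be sketched as follows. The input format and all names are hypothetical: each article is represented by the list of its authors' universities, so each author contributes an equal share and coauthors from the same university would pool their shares under this reading of the method.

```python
from collections import defaultdict

def adjusted_article_counts(articles):
    """Adjusted article counts in the style described for ATHEY &
    PLOTNICKI: each article is worth one credit, split equally across
    the authors' universities (input format is an assumption)."""
    credit = defaultdict(float)
    for affiliations in articles:          # one list of universities per article
        share = 1.0 / len(affiliations)
        for univ in affiliations:
            credit[univ] += share
    return dict(credit)

# Hypothetical sample: one solo article and one two-university collaboration
articles = [["Univ A"], ["Univ A", "Univ B"]]
print(adjusted_article_counts(articles))   # Univ A: 1.5, Univ B: 0.5
```

Ranking the resulting dictionary by value reproduces the kind of top-N university list the study reports.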

TRIESCHMANN & AL. [2000] ranked 50 universities based on the research performance of their faculty in 20 top tier business journals during the 1986–1998 time period, as measured by a standardized page count. The study developed a different set of rankings for each business discipline within such universities, including information systems, which was based on a standardized page count of articles published in an IS-focused subset of those leading 20 journals. Equal credit was assigned to all universities represented by coauthored articles.

GROVER & AL. [1992] ranked the top 50 IS research institutions by calculating a “productivity score” for 195 universities during the time period 1982–1991. GROVER & AL. analyzed 10,214 pages of IS research from six premier IS journals. The six journals included the top five journals according to the GILLENSON & STUTZ [1991] study, plus Information Systems Research, which was first published after the GILLENSON & STUTZ rankings were developed. ISR was included in the top five lists of all of the journal rankings described in Appendix A that were developed after ISR came into existence. The productivity score was derived by calculating the standardized page count of each article, weighted by the importance of the journal in which the article was published, using GILLENSON & STUTZ’s mean score for each journal as the weight (ISR was assigned a weighting equivalent to Management Science). Page counts were adjusted by crediting authors on a pro rata basis for coauthored articles.
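
A minimal sketch of this kind of weighted, pro-rata scoring follows. The journal weights, field names, and sample data are all illustrative assumptions, not the values used by GROVER & AL.:

```python
def productivity_scores(articles, journal_weight):
    """Productivity-score sketch: each article contributes
    (standardized pages / number of contributing institutions) * journal
    weight to every institution it is attributed to."""
    scores = {}
    for art in articles:
        share = art["std_pages"] / len(art["institutions"])  # pro rata credit
        weight = journal_weight[art["journal"]]
        for inst in art["institutions"]:
            scores[inst] = scores.get(inst, 0.0) + share * weight
    return scores

weights = {"MISQ": 5.0, "CACM": 4.0}       # illustrative weights only
arts = [
    {"journal": "MISQ", "std_pages": 12.0, "institutions": ["Univ A", "Univ B"]},
    {"journal": "CACM", "std_pages": 8.0,  "institutions": ["Univ A"]},
]
print(productivity_scores(arts, weights))  # Univ A: 62.0, Univ B: 30.0
```

Sorting institutions by score then yields a ranking of the kind reported in the study.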

IM & AL. [1998] ranked the 50 best IS programs, employing essentially the same methodology as the GROVER & AL. [1992] study, by assessing the IS research performance of universities between 1991 and 1996. IM & AL. analyzed 809 IS articles from the same six journals used in the GROVER & AL. study. Like GROVER & AL., they also ranked institutions according to their productivity score, which was calculated by multiplying the adjusted standardized page count for each article attributable to an institution by the applicable journal weighting.

LENDING & WETHERBE [1992] ranked the top 20 institutions publishing IS research between 1984 and 1990. The LENDING & WETHERBE [1992] study updates the VOGEL & WETHERBE [1984] study which covered the time period of 1977–1983. LENDING & WETHERBE used an adjusted article count of IS research published in 13 leading journals during the relevant time period, with credit for coauthored articles being adjusted on a pro rata basis.