Intellectual and Ethical Integrity Requires Intellectuality

Jennifer J. Freyd, University of Oregon

Excerpt from Freyd, J.J. (2011). Journal vitality, intellectual integrity, and the problems of McEthics [Editorial]. Journal of Trauma & Dissociation, 12, 475-481.

Note: The following excerpt is copyrighted. It is available for attributed public use under a Creative Commons CC-BY-ND 3.0 license. If you wish to copy, distribute, or otherwise re-use these materials or to modify them, please first contact Jennifer J. Freyd for reprint permission.

INTELLECTUAL INTEGRITY REQUIRES INTELLECTUALITY

Plagiarism is not the only issue of intellectual integrity relevant to us as scientists and scholars. Although it seems self-evident that intellectual integrity is of paramount importance to the long-run success of science and scholarship, this integrity is threatened not only by individual practitioners who may be dangerously careless or dishonest but also by institutional structures that reward superficial achievement rather than deep contributions. One way this occurs is through a focus on numerical metrics of achievement at the expense of engagement with the intellectual content. The single most egregious error of this sort occurs when hiring and promotion committees use journal impact factors (JIFs) to evaluate the merit of a single article or scientist. This is akin to using the ranking of a university to evaluate the quality of one individual's doctoral dissertation at that university. Not only does this misuse the metric itself (which reflects average citations and says nothing about any particular article), but it lets an outside process (one subject to numerous market forces) trump actual comprehension and evaluation of the content. Using citation counts for an individual candidate in the hiring or promotion process is incrementally better than using JIFs, but it still abrogates our duty to evaluate the work directly, and it is technically problematic because citation counts may be determined by highly superficial factors such as the size of the scholarly community for that research area, referencing habits, and the rate at which ISI (a private corporation) indexes journals in that field.
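
For readers who want the metric made explicit: the excerpt does not spell out the formula, but the standard two-year definition ISI publishes for a journal's impact factor in year Y is

\[
\mathrm{JIF}_Y \;=\; \frac{\text{citations received in year } Y \text{ by items published in years } Y{-}1 \text{ and } Y{-}2}{\text{number of citable items published in years } Y{-}1 \text{ and } Y{-}2}
\]

It is an average over a journal's recent output, which is precisely why it carries no information about the citation count, let alone the quality, of any single article.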

. . .

Whenever important decisions are being made about the worth of something intellectual, it is essential for the ultimate intellectual integrity of the endeavor that the evaluators grapple with the substance of that work rather than fall back on secondhand numbers. The final irony is that rewarding the accumulation of fame and fortune over actual intellectual achievement is ultimately counterproductive even to the goal of increasing visibility and resources, because it destructively shifts motivation from intrinsic to extrinsic goals.

ETHICAL INTEGRITY IN RESEARCH ALSO REQUIRES INTELLECTUALITY

Institutions also establish practices that encourage or discourage integrity in ethical decision making. One domain in which this occurs is the oversight and review of research with human participants. As is well known, researchers studying human behavior typically must submit their research protocols to institutional review boards (IRBs) in order to carry out their research. In recent years, many researchers have also been required by their institutions to complete regular mandatory “education” in research ethics. Although the intentions behind these requirements are surely good, the resulting implementation has created a new industry of mind-numbing online ethics training and testing.

My own institution, like many others, requires all researchers to regularly complete testing using the Collaborative IRB Training Initiative (CITI; http://www.citiprogram.org/) software. The problem is that passing the CITI tests is neither sufficient nor necessary for ethical behavior. Rather, this method of education and testing is so superficial and coercive that it is arguably counterproductive, promoting a false sense of security and even breeding cynicism. The information presented in the curriculum includes some valuable points, numerous irrelevant details, and a nontrivial amount of incorrect information and opinion labeled as fact. This information is then tested through multiple-choice quizzes administered shortly after presentation, so that no long-term retention is required. The only real thinking occurs when disputable information is presented and tested; then the researcher must choose between purposely entering a wrong answer in order to pass the test and possibly failing the test and thus being unable to do research.

Furthermore, many research communities consider it permissible for researchers to scan the CITI study materials while completing the quiz, thus requiring no retention of the materials even in the short run. In still other research communities, answer sheets are circulated. Although these strategies are obviously against the rules and arguably unethical, the rates of such cheating are apparently very high, probably in part because researchers consider the whole endeavor a foolish waste of time and in part because people tend to conform to what they believe is normative, even when it is technically prohibited. It is ironic that an education initiative focused on ethics promotes such unethical behavior. There is very little intellectual integrity in the CITI educational experience, whether one looks at the testing itself or at the behavior of the test takers.

Although knowledge is necessary, ethical behavior in research fundamentally involves motivation, problem solving, and sometimes difficult cost–benefit analyses. What we need instead is a meaningful and intellectually honest educational experience: engage in a debate; serve on the IRB; conduct a study on research ethics. Like many of my colleagues, I complete the required CITI training because I must in order to be allowed to conduct research, but each time I go through this process I come out feeling like I've been force-fed a high-fat, low-nutrition meal at McEthics.

Note: The above text is an excerpt (from pages 478-480) of Freyd, J.J. (2011). Journal vitality, intellectual integrity, and the problems of McEthics [Editorial]. Journal of Trauma & Dissociation, 12, 475-481.

Appendix: 10 limitations of the Journal Impact Factor, excerpted from pages 381-382 of Freyd, J.J. (2009), Journal Ethics and Impact.

  1. The secrecy and proprietary nature of the specific information ISI uses to calculate the JIF are themselves a limitation. Good science is transparent and subject to replication.
  2. The JIF has never been validated.
  3. There is error and ambiguity in the citation databases. Errors in citations within papers, authors with identical or similar names, and inconsistent journal name abbreviations are a few of the many problems.
  4. Journals and journal editors can and do game the system. For instance, publishing a larger percentage of review articles, requiring authors to cite papers published in the same journal, or changing the percentage of “citable items” that are likely to enter the JIF equation are well-known ways to manipulate JIFs.
  5. The 2-year citation counting period rules out measuring the enduring impact of some papers that may be cited for years to come.
  6. Ideas that are very influential may become standard in the field, no longer requiring citation. This means that papers with groundbreaking ideas and techniques may not be cited at all because their influence is absorbed into the field.
  7. ISI counts citations only in selected journals, and that selection is controlled by a proprietary entity, not an open community of scientists or scholars. Journals in emerging cross-disciplinary fields and international journals are less likely to be indexed. Furthermore, the percentage of journals ISI counts varies by field, so journals in some fields will necessarily have higher impact factors than those in other fields.
  8. Counting citations is not a direct measure of quality. At best, it is a metric of utility. There are many reasons a paper may get cited that are not directly about quality. For instance, some famous papers in our field are routinely cited as an example of a problematic approach. Review papers are cited more often than original papers because it is efficient to do so, but that does not mean that the review papers have more value than the original works.
  9. The absolute value of a JIF is not meaningful. At best, it must be interpreted in context, because some fields overall have much higher impact factors, perhaps because of the percentage of journals indexed and the citation behavior in those fields.
  10. The JIF is designed to be a measure of journals, not individual authors who publish in those journals. One or two oft-cited articles per issue can raise the impact factor substantially, even if the other articles are never cited. To use the overall impact factor for an article that is not itself cited is clearly a misapplication. Similarly, to use JIFs in hiring or promoting individuals is a misapplication of the metric and an abrogation of our duty to evaluate the actual intellectual merits of the candidate’s work.
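
To make limitation 10 concrete, consider a hypothetical worked example (the numbers are invented for illustration): suppose a journal publishes 100 citable items across years Y-1 and Y-2, and in year Y one of those items receives 200 citations while the other 99 receive none. Then

\[
\mathrm{JIF}_Y \;=\; \frac{200 + (99 \times 0)}{100} \;=\; 2.0
\]

a respectable impact factor even though the median article in the journal was never cited at all.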

Also see the Wikipedia articles on impact factor and h-index.