I recently read an article in the American Journal of Public Health where Briggs and Vallone argue that industry scientists, especially scientists funded by the tobacco industry, should be censored due to conflicts of interest. To be clear, Briggs and Vallone aren’t the first to say this. This is a refrain that’s been repeated a lot in various scientific circles.
The crux of the argument generally goes something like this:
- Pat is a scientist who works for the ChemsGalore company
- ChemsGalore is a company that makes chemicals
- Pat studies ChemsGalore’s chemicals to determine concentrations that are safe for people who use products containing ChemsGalore’s chemicals
- Pat is paid by ChemsGalore
- People who work for companies are biased to be favorable to the company’s products
- Being favorable to a company’s products means saying a product is safe when it is not safe
- Companies cannot sell unsafe products, ethically
- Therefore, Pat will be favorable to ChemsGalore’s chemicals
- Therefore, Pat will call ChemsGalore’s chemicals safe when they are not safe
The problem with this argument is that propositions 5 and 6 simply aren’t true. In fact, propositions 5 and 6 are examples of a logical fallacy called the genetic fallacy. Let’s break this down, but first, what about putting the shoe on the other foot?
A Matter of Perspective? Are Academics Biased Because They Accept Taxpayer Funding?
We don’t hear much about academics and scientists who work at non-profits, and how their funding sources might influence them. I’m primarily a toxicologist, so let’s talk about toxicology.
Academic/research toxicologists generally obtain their funding from the US Government, typically the National Institutes of Health (NIH). Being awarded money from NIH is hard: at the National Institute of Environmental Health Sciences (which funds a lot of toxicology research), the success rate for research project grants in Fiscal Year 2020 was 14.2%.
It’s a well-known fact among those of us who try to get NIH money that any grants you previously held need to have been successful. You need to push out papers. What that means is that, for better or for worse, you need original findings that are significant. Nonsignificant findings are hard to publish. But more importantly, if you have nonsignificant findings, you need to find a new set of hypotheses to chase. And that can be hard for scientists who may have just spent the last five years chasing down what they thought was a promising hypothesis.
So, it’s not hard for someone to build an argument, similar to the one above, that looks like this:
- Alex is a scientist who works for the non-profit ChemsCauseCancer Institute
- ChemsCauseCancer is a non-profit 501(c)(3) science institute that identifies chemicals that cause cancers in humans
- Alex studies chemicals to identify chemicals that cause cancer and how they cause cancer
- Alex is paid by ChemsCauseCancer
- ChemsCauseCancer raises money from the public through direct donations, as well as competing for government grants from the National Institute of Environmental Health Sciences
- People who work for universities and non-profits are biased toward proving their hypotheses that the particular chemicals they are studying cause cancer
- This bias toward proving their hypotheses causes people who work for universities and non-profits to say that the chemicals they are studying cause cancer
- Therefore, Alex will always identify the chemicals they’re studying as causing cancer
- Therefore, Alex will call chemicals carcinogens even when they are not carcinogens
- By identifying more chemicals as carcinogens, Alex is helping ChemsCauseCancer raise more money through direct donations, and NIEHS will continue to fund Alex’s important work to identify other chemicals that cause cancer
The problem with this argument is that, again, it rests on a genetic fallacy.
Genetic Fallacies Are Illogical, Right? Yes, They’re A Type of Stereotyping
Yep — genetic fallacies, like all fallacies, result in an argument that is inherently illogical. And bonus time — these illogical arguments also lead to unethical situations!
Okay, so let’s back this up and focus on genetic fallacies and logic. Just briefly.
Genetic fallacies are bad from a pure logic standpoint because the argument is no longer focused on attacking the merits of your opponent’s argument (their claim/counterclaim). Instead, the genetic fallacy puts the attack squarely on the source of the claim: the person on the other side, or in this case, the scientist.
When we take a mental shortcut based on some quality or aspect of a person and draw a judgment from it, we have a word for that: “stereotyping”. Now, social psychologists will tell you that stereotyping things other than people isn’t all bad. In fact, stereotyping allows our brains to trust certain brands of food, so that we don’t need to think about what we want to buy at the supermarket, and it helps us choose food from menu boards more quickly. Stereotyping becomes a problem when we start applying it to people, or when we apply unfair stereotypes to things.
Stereotyping scientists through the genetic fallacy is a strategy similar to trying to win a logical argument by name-calling. You can’t win this argument by saying that industry scientists are biased and will therefore lie to the scientific community and the public. That’s akin to saying, “industry scientists are big fat doo-doo heads.” All you’ve actually accomplished is making yourself look foolish and childish, and you haven’t addressed the elephant in the room: is the science any good? [By the way, I call this the “big fat doo-doo head test”TM; feel free to use it.]
And therein lies the problem with the genetic fallacy. By using it, critics of industry scientists, like Briggs and Vallone from the Truth Initiative, are attacking the scientist personally rather than engaging with the science itself. These critics say industry scientists are biased and cannot be trusted. But they leave one thing unsaid, and it is a very serious charge: they are effectively accusing industry scientists of scientific misconduct.
How Do You Go From Biased to Scientific Misconduct?
Easy: Briggs, Vallone, and others are saying industry scientists are so biased you cannot trust their data. If the data are not trustworthy, then either the data were forced to give some answer, or the study was manipulated in some way to give the answer the industry scientist wanted to see. We have words for this: “cooking the books” and “fraud”. There really is no other explanation.
Biases are like belly buttons: all scientists have some set of biases. If they didn’t, they’d never be able to come up with a hypothesis. They have biases from their education, training, knowledge, and experience. We all do as human beings. Bias is what makes our thinking less “critical” (in the critical-thinking sense).
The thing is, science is a deliberate activity. It requires planning. It requires resources. In graduate school I didn’t just wake up and decide I’m going to go run some random experiment. I had to get it approved, because I was spending someone else’s money. More importantly, I didn’t want to waste my time in the lab doing things that wouldn’t help me graduate and get a real job (and sleep). Sleepwalking through an experiment just doesn’t happen (and even if it did, it wouldn’t result in a deliberate fraudulent action; it would more likely end up with you ruining your experiment, having to do it again, and having to explain why you blew $10,000 sleepwalking in the lab).
The interesting thing is that industry scientists don’t want findings of toxicity at relevant exposure levels. If they find their chemical is toxic at relevant exposure levels, that’s bad news. And the easiest way to “find” a chemical is toxic at relevant exposure levels is to run a study with a small number of samples: small studies are noisy, and noisy results can spuriously suggest toxicity. So industry scientists are more likely to run larger studies. In addition, in the US and the European Union, regulatory agencies have strict requirements for the testing of chemicals, typically right down to the minimum number of animals and the use of good laboratory practices (GLP, which help prevent fraud). And these regulatory agencies look at the data very closely. What’s more, regulatory agencies can require additional testing if they are not satisfied. Also, people like me (forensic statisticians who are also toxicologists) can use statistical methods to detect fraudulent data. Kinda scary, right?
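To make that last point concrete, here is a minimal sketch of one classic forensic-statistics check, terminal-digit analysis: the final digits of genuinely measured values tend to be close to uniformly distributed, while fabricated numbers often show digit preference. The data below are simulated purely for illustration, and the function name is my own; a real forensic review would combine many such tests (digit preference, Benford’s law on leading digits, variance checks) and examine the underlying records.

```python
# A minimal sketch of terminal-digit analysis, one classic forensic-statistics
# check. Honest measurements usually have roughly uniform final digits;
# fabricated numbers often over-use "favorite" digits like 0 and 5.
# Simulated data, illustrative only; not a complete fraud-detection workflow.
from collections import Counter

from scipy.stats import chisquare


def terminal_digit_test(values):
    """Chi-square test of the last reported digit against a uniform distribution."""
    last_digits = [int(f"{v:.1f}"[-1]) for v in values]  # digit in the tenths place
    counts = Counter(last_digits)
    observed = [counts.get(d, 0) for d in range(10)]
    expected = [len(values) / 10] * 10
    return chisquare(observed, f_exp=expected)


if __name__ == "__main__":
    import random

    random.seed(1)
    honest = [random.uniform(10, 99) for _ in range(500)]
    # A crude fabricator who only produces values ending in .0 or .5:
    cooked = [round(random.uniform(10, 99) * 2) / 2 for _ in range(500)]
    for label, data in [("honest-looking", honest), ("cooked-looking", cooked)]:
        stat, p = terminal_digit_test(data)
        print(f"{label}: chi-square = {stat:.1f}, p = {p:.3g}")
```

Here the cooked series produces a huge chi-square statistic and a p-value of essentially zero, while the honest series should show no evidence of digit preference; that kind of red flag is exactly what a forensic statistician follows up on in the raw data.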
How Is Making The Genetic Fallacy Here Unethical?
Making the genetic fallacy is inherently unethical. I’m currently working on a formal applied ethics framework for toxicology and public health, which is based in part on the biomedical ethics framework by Beauchamp and Childress (Principles of Biomedical Ethics, Fifth Edition, 2001, Oxford University Press). My current framework has the following elements:
- Respect for Individualism/Autonomy: We must respect the right of individuals to make their own decisions with regards to their own health and their future.
- Speaking Truth Through Evidence: There is no justice without truth. When scientists speak, they must use the best available evidence and be critical of all studies in a fair and impartial manner.
- Justice: We must be critical of science, not only in its organizational structures and hierarchies, but also of the studies we perform and assess, to ensure equality, fairness, and impartiality.
- Nonmaleficence: We must avoid doing harm to ourselves and society.
- Beneficence: We must promote doing good for society.
When Briggs, Vallone, and others use the genetic fallacy to disparage industry scientists, they encourage the public to do the same. In fact, these critics are typically encouraging censorship based on who a scientist works for. Briggs, Vallone, and others will likely say that their actions are in line with the Principle of Beneficence, but that’s not true.
What Briggs, Vallone, and others are saying is “don’t trust those scientists because of who they work for, because we know their science is biased.” The logical conclusion of their argument is that biased scientists are engaging in fraud. After all, if the scientists weren’t engaging in fraud, why would bias be a problem? We are all biased. The only reason to censor a scientific study is that the study was fraudulent.
In making that argument, Briggs, Vallone, and others are violating the Principles of Beneficence, Nonmaleficence, and Speaking Truth Through Evidence. These critics are not promoting “doing good for society”: they are actively saying that industry scientists should be censored, and indirectly accusing industry scientists of committing scientific fraud, even though the quality of work from industry scientists can be quite good. This line of reasoning harms society by blocking knowledge transfer from these scientists, which means these scientists cannot speak truth through their evidence. It also means that Briggs, Vallone, and other similar critics are violating the rights of industry scientists by preventing them from fulfilling their ethical obligations.
What Must Briggs, Vallone, and Other Critics Do?
They must do what the ethical principles I laid out obligate them to do:
- They must Speak Truth Through Evidence — if they have a problem with a scientific study, or if they believe the study is fraudulent, then they are obligated to voice that concern appropriately, far and wide, and provide their evidence.
- They must Respect Individualism/Autonomy — they must respect the fact that members of the public ultimately must make their own decisions, based on the available evidence, and not prevent them from seeing all of the evidence.
- They must ensure Justice — they must be critical of scientific studies, but in their critical analysis they must ensure equality, fairness, and impartiality, especially with respect to funding sources (as they appear to have a bias in that regard).
- They must promote Beneficence and Nonmaleficence — they must avoid doing harm to society by not silencing or censoring those whose viewpoints and funding sources they dislike; they must ensure we are promoting the wide distribution and discussion of science without fear of retribution.
Critical Tox Theory
Critical Tox Theory is a program I’m starting at Raptor Pharm & Tox to improve the practice of toxicology and public health for the future. My goal is to break a lot of the bad and unethical habits in toxicology and public health. Critical Tox Theory practitioners will critique what is going on in toxicology, and ultimately, we will create cultural and social change to improve the practice of toxicology and public health.
TL;DR
Making judgments about scientists based on their sources of funding is committing a genetic fallacy; it is unethical, and really it’s pointless.
Ethically speaking, scientific studies must be judged on their merits, including critical evaluations of the study design, analysis methods, and conclusions.
Critical Tox Theory is focused on breaking a lot of the bad and unethical habits in toxicology and public health, bringing critical thinking back, and improving toxicology and public health for the future.