Richard Royall opens his discussion of how to interpret data with regard to hypothesis A versus hypothesis B by distinguishing three questions:

  1. What does this observation tell me about A versus B?
  2. What do I believe, now that I have this observation?
  3. What should I do, now that I have this observation? [1]

He goes on to argue that statistical evidence expressed as likelihood ratios pertains to question 1, and that although this can agree with Bayesian statistics (which updates priors), it is less subjective because it does not require priors. I will argue in the following that it is also less subjective because it does not require a value judgment.
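To make the likelihood-ratio idea concrete, here is a minimal sketch with illustrative numbers (the hypotheses, success probabilities, and data below are my assumptions, not Royall's): two simple hypotheses about a binomial success probability are compared by the ratio of the likelihoods they assign to the same observation.

```python
from math import comb

def binomial_likelihood(k, n, p):
    """Likelihood of observing k successes in n trials under success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical example: hypothesis A says p = 0.5, hypothesis B says p = 0.8,
# and we observe 7 successes in 10 trials.
k, n = 7, 10
lr = binomial_likelihood(k, n, 0.8) / binomial_likelihood(k, n, 0.5)
print(round(lr, 2))  # → 1.72
```

The ratio answers question 1 only: it says how much better B explains the observation than A, without any prior belief in either hypothesis and without committing to an action.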

There is some discussion about the concept of prior probabilities, which Bayesian statistics requires. The discussion focuses on the subjectivity involved in assigning prior probabilities to specific hypotheses, and on how probability can be understood for a hypothesis at all (a hypothesis is either true or it is not; it has no inherent probability). These issues are often handled by reframing prior probabilities as beliefs, i.e. you express how strongly you believe in one hypothesis over the other in the language of probability.

This makes sense to some. But consider for a second what a belief is and how you would measure it as a scientist. Maybe you would use a questionnaire, perhaps with dichotomous items or a visual analog scale. Maybe you would let your subjects write a paper or plan a follow-up study. Be that as it may: the assignment of probabilities to beliefs only makes sense in the framework of actions, i.e. if you actually "sample" from your beliefs. In that regard, beliefs have a very practical application in the framework of Bayesian reinforcement learning [2], e.g. via Thompson sampling [3]: to determine how to act after you have observed some evidence regarding hypothesis A versus B, the respective hypothesis is sampled from your (posterior) belief distribution. This can be understood intuitively if you replace the white and black balls of the common urn example with candies and stones: based on your first draw, you would start to select, out of two possible urns, the one with the highest posterior probability for candy, but depending on your confidence in this, you would also explore the other urn at times. The advantage of this understanding is that it allows us to answer questions 2 and 3 using Bayesian concepts.
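The candy-urn version of Thompson sampling can be sketched in a few lines. The true candy probabilities below are illustrative assumptions; the agent never sees them directly, it only updates a Beta belief per urn and acts by sampling from those beliefs.

```python
import random

# A minimal Thompson-sampling sketch for the two-urn candy example.
# Each draw yields candy (reward 1) or a stone (reward 0); the true candy
# probabilities are assumed here purely for illustration.
true_candy_prob = [0.7, 0.3]
alpha = [1, 1]  # Beta-posterior parameter: candies drawn + 1, per urn
beta = [1, 1]   # Beta-posterior parameter: stones drawn + 1, per urn

random.seed(0)
for _ in range(1000):
    # Sample a candy probability for each urn from its posterior belief...
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(2)]
    # ...and act on that belief: draw from the urn whose sample is highest.
    urn = samples.index(max(samples))
    candy = random.random() < true_candy_prob[urn]
    alpha[urn] += candy
    beta[urn] += not candy

# After many draws, the belief about urn 0 (candy-rich) should dominate.
means = [alpha[i] / (alpha[i] + beta[i]) for i in range(2)]
print(means[0] > means[1])
```

Because the urn is chosen by sampling from the posterior rather than by always taking its maximum, the agent mostly exploits the candy-rich urn but still occasionally explores the other one, exactly the behavior described above.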

Yet, if one subscribes to the argument that assigning probabilities to beliefs is only sensible if you actually do sample your actions from your beliefs, we inadvertently stumble into an old philosophical problem. According to Hume's law, one cannot derive an "ought" from an "is" [4]. Please note: the psychologically inclined reader might consider that we can model moral cognition and action in Bayesian terms. But even if this model were considered a true description of reality (it "is"), it would tell us nothing about whether such behavior is the morally right way for humans to think and act (it "ought"). In conclusion, a scientist is not allowed to consider his or her hypotheses in the language of probability, as this implies taking actions based on beliefs, unless he or she is willing to make a value-based decision. The candy example makes this plausible again: if you value stones, but not candy, you would sample from the urns in an inverted manner. In science, the actions you take can be manifold: sampling according to a specific study design, attacking a fellow scientist with a commentary, and so on. But science needs to have a value attached to its action sampling. Although many might suggest truth as a viable value, truth is by no means an ultimate justification [5], and many would not consider it a relevant value or a sufficient justification for many actions. Certainly, truth will come into conflict with other values: beneficence, justice, gratitude. If one value enters the game, all values enter.
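The inversion point can be made explicit in code. The sketch below reuses the same Thompson-sampling machinery as the candy example (urn probabilities are again my illustrative assumptions); the only change is the value judgment, stones instead of candy count as success, and the agent's behavior flips accordingly.

```python
import random

# Same posterior machinery as before, but the agent values stones (reward 1)
# rather than candy, so it concentrates on the stone-rich urn instead.
true_candy_prob = [0.7, 0.3]  # urn 1 is therefore the stone-rich urn
alpha = [1, 1]
beta = [1, 1]
counts = [0, 0]  # how often each urn is chosen

random.seed(0)
for _ in range(1000):
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(2)]
    urn = samples.index(max(samples))
    counts[urn] += 1
    candy = random.random() < true_candy_prob[urn]
    reward = 1 - candy  # the value judgment: a stone, not a candy, is a success
    alpha[urn] += reward
    beta[urn] += 1 - reward

print(counts[1] > counts[0])  # the stone-valuing agent prefers urn 1
```

Nothing in the beliefs or the updating rule changed; only the single line encoding what counts as a reward did, and with it the entire sampling behavior.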

Put shortly: sample from your beliefs about reality to maximize the information about the relative truth of your beliefs. This is fine if you accept that the scientific method allows you to uphold certain values (truth) and to wield beliefs (about truth). But if you want to make an empirical judgment that is free of values and beliefs, you must stick to likelihood ratios.

1. Royall, R. M. (1997). Statistical Evidence: A Likelihood Paradigm (1st ed.). London; New York: Chapman & Hall. (p. 4) Please note that I changed the order.

2. Strens, M. J. A. (2000). A Bayesian Framework for Reinforcement Learning. In Proceedings of the Seventeenth International Conference on Machine Learning (pp. 943–950). San Francisco, CA, US: Morgan Kaufmann Publishers Inc.

3. Thompson, W. R. (1933). On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4), 285–294.

4. Schleidgen, S., Jungert, M. C., & Bauer, R. H. (2009). Mission: Impossible? On Empirical-Normative Collaboration in Ethical Reasoning. Ethical Theory and Moral Practice, 13, 59–71.

5. Albert, H. (2010). Traktat über kritische Vernunft (reprint of the 5th, revised and expanded ed.). Tübingen: Mohr Siebeck.


Written by Robert Bauer

