- "Algebra of Probable Inference" by Cox, 1961 (a.k.a. why everyone should be a Bayesian). Demonstrates a functional derivation of probability theory as the unique extension of Boolean algebra.
- "Why I'm not a Bayesian" by Clark Glymour, Theory and Evidence, 1981. Criticizes the Bayesian approach from the philosophy-of-science point of view.
- "Why Glymour is a Bayesian" by R. Rosenkrantz, Testing Scientific Theories, 1983.
- "Why isn't everyone a Bayesian?" by B. Efron, American Statistician, 1986. Examines the reasons why not everybody was a Bayesian, as of 1986, with a scorching reply from Lindley.
- "Axioms of Maximum Entropy" by Skilling, MaxEnt 1988 proceedings. Sets up four practically motivated axioms and uses them to derive maximum entropy as the unique method for picking a single probability distribution from the set of valid probability distributions.

To be a Bayesian, you must:

- Accept Boolean logic
- Be able to encode your true belief as a pdf

To use maximum entropy, you must:

- Believe Skilling's axioms
- Have statements about the true distribution in the form of constraints
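The maximum entropy recipe above can be sketched concretely. A minimal example, assuming the only prior knowledge is a mean constraint on a six-sided die (the classic "Brandeis dice" setup): among all distributions satisfying the constraint, pick the one with maximum Shannon entropy. The constraint value 4.5 and the use of `scipy.optimize` are illustrative choices, not from the original post.

```python
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)

def neg_entropy(p):
    # negative Shannon entropy; small epsilon guards against log(0)
    return np.sum(p * np.log(p + 1e-12))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},    # normalization
    {"type": "eq", "fun": lambda p: faces @ p - 4.5},  # assumed observed mean
]
bounds = [(0.0, 1.0)] * 6
p0 = np.full(6, 1.0 / 6.0)  # start from the uniform distribution

result = minimize(neg_entropy, p0, bounds=bounds,
                  constraints=constraints, method="SLSQP")
p = result.x
print(np.round(p, 4))
```

The solution has the exponential-family form p_i ∝ exp(λ·i), with weights increasing toward face 6 so the mean can exceed the uniform value of 3.5.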

## Questions

- The references above are over 20 years old. One newer one, from 2000, by Berger, looks at the rising popularity of "objective" Bayesian and robust Bayesian approaches, and predicts that practical Bayesianism of the future will contain both frequentist and traditional Bayesian elements. Does anyone know of more up-to-date overviews of different inductive reasoning methods?

BTW, if you are the author and don't like the links to your work, let me know and I'll remove them.

## 11 comments:

Good stuff, great work, thanks :D I have no idea about any other up-to-date overview of inductive methods, but if I see something interesting I'll be sure to post it here.

I agree -- item (2) is certainly the suspect one in each list. My main questions are: (a) How do experts typically mis-specify priors? (b) How sensitive is Bayesian inference to an incorrect prior?
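Question (b) above can be illustrated with a small conjugate-model sketch. This is my own toy example, not from the thread: for a Beta(a, b) prior and k heads in n coin flips, the posterior mean is (a + k) / (a + b + n), so even a strongly mis-specified prior is washed out as n grows. The specific prior parameters and true rate are assumptions for illustration.

```python
def posterior_mean(a, b, k, n):
    # Beta-Bernoulli conjugate update: posterior is Beta(a + k, b + n - k),
    # whose mean is (a + k) / (a + b + n)
    return (a + k) / (a + b + n)

true_rate = 0.7
for n in (10, 100, 10000):
    k = int(true_rate * n)  # idealized data: exactly 70% heads
    fair = posterior_mean(1, 1, k, n)     # flat Beta(1, 1) prior
    wrong = posterior_mean(50, 50, k, n)  # strong prior centered at 0.5
    print(n, round(fair, 3), round(wrong, 3))
```

With n = 10 the mis-specified prior drags the estimate toward 0.5, but by n = 10000 both posteriors sit essentially at 0.7: sensitivity to the prior is mostly a small-sample issue in this model.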

Anyway, I just found a book at the library you may be interested in.

Information, Inference, and Decision, edited by Menges (an apparently unpopular book; it had no bar code and was last checked out in 1978; this is only the third time it's been checked out at all!). I haven't looked at it closely enough to determine if the ideas are any good (I'm working on that UAI paper), but it may at least be relevant. (It's on my desk if you want to take a peek.)

Thanks for the scans. We miss you, come back :(

Sounds interesting but the links no longer work. Do you still have the pdfs?

Links are fixed.

The second and fourth links are not working as of today, October 21, 2007.

Great and memorable post, by the way ... I saw it a few years ago, but I knew I had to come back and read the papers.

Looks like I broke them when renaming the files to a more consistent naming standard; they should be working now.


In both models, assumption number 1 sounds more or less reasonable, whereas assumption number 2 causes discontent. With the Bayesian approach, we don't know how to represent our prior knowledge as a pdf, whereas with the ME approach, we don't know where to get the constraints from. However, in practice we can often find a pdf that comes close to representing the true belief of the expert, and similarly we can often find constraints that approximately rule out infeasible distributions.

