Monday, March 26, 2007

Mechanizing inference

A couple years ago I sat down, with great expectation, to read David Berlinski's _The Advent of the Algorithm_. Long story short, it was a frustrating and disappointing experience. I couldn't decide if the book was just bad, or if I just didn't get it.

Recently I picked it back up. I pinpointed where my frustration started: page 10. I didn't get it, and was lost from there on. Berlinski starts his story about the origins of the idea of the algorithm with Leibniz and his work in logic. According to Berlinski, very little had happened in formal logic since Aristotle and his syllogisms. With the syllogism, Aristotle codified inference.
  • All men are mortal
  • Socrates is a man
  • Therefore, (I can infer that) Socrates is mortal
The nagging question is, what really happens in my mind when I infer? How do you describe that procedure in a way that doesn't rely on an intuitive human understanding? I gather from the book that Leibniz pondered this and ended up describing it with algebraic logic. Berlinski then gives an example. The example never made sense to me. I won't include it here. I'm sure it's correct; it just never clicked for me, no matter how much I looked at it. This was driving me nuts, until I went looking for an alternative that I could grok. I found one, which I will steal from the Wikipedia article First-order logic. ∀x φ(x) means that φ(a) is true for any value of a.

  • ∀x (Man(x) → Mortal(x))
  • Man(Socrates)
  • ∴ Mortal(Socrates)

This makes it clear to me that the inferential step is substitution.

That, of course, is what Berlinski's example in the book shows, only I couldn't follow it. The reason it is important is that it means inference can be achieved with a mechanical procedure of substitution.
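The substitution step is mechanical enough that a few lines of code can perform it. Here is a minimal sketch (the representation and names are my own, not Berlinski's or Leibniz's): a rule like ∀x (Man(x) → Mortal(x)) is just a pair of predicate names, and inference is pure symbol manipulation.

```python
# A rule "forall x: P(x) -> Q(x)" is a pair of predicate names; a fact
# "P(a)" is a (predicate, argument) pair. Inference is substitution.

def apply_rule(rule, fact):
    """If the fact matches the rule's antecedent, substitute the fact's
    argument into the consequent and return the new fact."""
    antecedent, consequent = rule   # e.g. ("Man", "Mortal")
    predicate, argument = fact      # e.g. ("Man", "Socrates")
    if predicate == antecedent:
        return (consequent, argument)   # Mortal(Socrates)
    return None                         # rule does not apply

rule = ("Man", "Mortal")        # forall x (Man(x) -> Mortal(x))
fact = ("Man", "Socrates")      # Man(Socrates)
print(apply_rule(rule, fact))   # ('Mortal', 'Socrates')
```

No understanding of men or mortality is involved; the machine only matches and substitutes symbols, which is the whole point.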


JimII said...

So, is the mathematical process of substitution different from the human intuition of the inference given by Aristotle? Surely it doesn't have more authority just because you can write it out in symbol form.

Some really smart people, maybe Bertrand Russell? or maybe Whitehead, have tried to prove that logic and math are the same thing.

Josh Gentry said...

I'm not so much interested in authority as I am in the ability to design a procedure that a machine can follow.

That's not completely true. You have pointed out something interesting, a distrust of intuition. It's often wrong. If you design an algorithm that a computer can follow and get the correct answer, then a human should also be able to follow it with less chance of error.

Josh Gentry said...

I'm getting a little off track by talking about authority. What brought me to thinking about this stuff is the problem of transferring intuition. If I know how to do something intuitively, and someone else doesn't, can I capture and transfer my intuition?

JimII said...

Right on. It's easier to transfer a formula. In fact, I might suggest that is exactly what a formula is: A way to transfer an intuition.

I mean, a story is not the words on the page, right? The words represent the story.

Of course, writing a story down may clarify your thoughts. When you have a good idea that you just can't write down, lots of times, it isn't quite a good idea yet.

I've always felt like I don't know the math until it is almost intuition. For example, I took a linear algebra class once; I got an A. But when I took Solid State Mechanics, or whatever the hell the name of that course was, we had to know when to use matrices. Well, I had not the foggiest. So, I would say I don't know how to do linear algebra.

I think that is probably now entirely unrelated to your original thought, so I'll stop.

Matt Dick said...

No, Jim, I don't think a formula (or algorithm) can be appropriately defined as a mechanism for transferring intuition. Intuition is an organically developed trust in one's feelings. Even if it's 100% reliable, it's still a trust issue -- faith if you will.

An algorithm is exactly the opposite. It's a mechanism for eliminating the need for faith.

You may have an intution that:

sqrt(45) + 4 < 21

because you know that sqrt is a relatively powerful way to reduce the magnitude of a number, but it's intuition. Knowing how to write out the mechanics of a square root in a set of rigorous steps removes intuition entirely.
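Matt's point can be made concrete. A sketch of such rigorous steps (my choice of method, not one named in the comment) is Newton's iteration for square roots: repeatedly average a guess with n divided by the guess, and the intuition about sqrt(45) + 4 < 21 becomes a mechanical check.

```python
def mech_sqrt(n, tolerance=1e-10):
    """Approximate sqrt(n) by Newton's iteration: repeatedly replace the
    guess with the average of the guess and n / guess."""
    guess = n / 2.0
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2.0
    return guess

# The intuition "sqrt(45) + 4 < 21", checked with no intuition at all:
print(mech_sqrt(45) + 4 < 21)   # True (sqrt(45) is about 6.708)
```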

I would propose that an algorithm is a mechanism to replace intuition with a set of mechanical steps with some fault tolerance. In fact, when you replace intuition entirely with formulae, you reduce error in general.

And in fact that's why they are so powerful, especially in combination with computers.


Josh said...

Matt and Jim, I think you are both correct. Jim says that yes, an algorithm is a way to transfer intuition. Matt says no, an algorithm is exactly not intuition. I think to transfer intuition, you can convert it into something that isn't intuition, an algorithm, which you can then give to a machine or person. Something is lost in the conversion, and something is gained. The next post is brewing in my head.

Value Added Paper said...

I'd have to go with Jim on this one and vote for algorithms as a way to transfer intuition. Art is visual; people claim that it relies on talent or intuition. But it can be taught to the majority of people by teaching a few very fundamental algorithms. You teach one algorithm, show an example of how it applies. You state another algorithm and show an example of how it applies. Then people apply a Bayesian algorithm to figure out when to apply what algorithm, like those spam filters you can teach to recognize spam. This way almost anyone can develop an "eye".
I think intuition in humans is a complicated layer of algorithms, many of them dealing with how to relate or when to apply different or disparate algorithms. Of course my own internal algorithms may have misled me on this point.

