Lest you think that all I read are mid-twentieth-century noir detective stories, I bring you… ta-da… philosophy. A while ago I read an article, which I now cannot find, the gist of which was that our idea of philosophy is Western-centric. It is all Aristotle, Plato, Hegel, Kant, etc. etc., with nothing about Eastern philosophy, which we tend to dismiss as ‘religion’ or ‘mysticism’, not ‘philosophy’.
Ganeri, a Fellow of the British Academy, has set out to set us straight on this by way of this original work, which focuses on the rational principles of Indian philosophical theory rather than the mysticism more usually associated with it. Ganeri explores the philosophical projects of a number of major Indian philosophers and looks into the methods of rational inquiry deployed within these projects. In so doing, he illuminates a network of mutual reference, criticism, influence and response, in which reason is used to call itself into question. This fresh perspective on classical Indian thought unravels new philosophical paradigms, and points towards new applications for the concept of reason.
Some of that is from the official blurb, and it explains the book very well. This was certainly interesting reading, much of it concerning how to think about how to think: consciousness, ideation, and what makes a thought rational. It made me think of AI, and how we are perhaps approaching the task of trying to teach a robot to think when we don’t ourselves know how we think.
Just one quote for you, should you be in the mood for some mind-bending:
… a sophisticated theory of content. It is alleged that a person witnessing a mirage does not see the refracted sun’s rays, even if in the right sort of physical connection with them. Neither does he see water, for there is none to be seen. Someone witnessing a mirage does not see anything, but only seems to see water. And a person who witnesses a ball of dust in the distance does not see the dust if he is uncertain whether it is dust or smoke. An object is not seen if it is not seen distinctly.
So when you ‘see’ a mirage of water, you are not seeing the water, because there is no water to see, and you are not seeing the refracted rays, because they have been converted into the ‘water’. What exactly are you ‘seeing’?
How do you teach an AI about mirages?