Galileo and the Science of Nature – Lecture 1

March 6, 2015

We are interested in science as part of intellectual history. We are not, therefore, concerned with whether Galileo's theories are correct or not in terms of the scientific conception of truth, however mistrustful we might be of such a way of thinking about truth. Nor do we need to pay attention to the specifics of Galileo's theories, as though we were studying physics, though this does not mean that the details of his interpretations of nature will be of no concern at all. Rather, what matters to us is how, as non-scientists, our conception of the world has completely changed because of Galileo's achievements. We live within a completely different world view because of the rise of experimental science in the 16th and 17th centuries, and this has profoundly altered the way we view nature. Galileo's name, therefore, marks an epochal change in our history, and the present cannot be understood without it.

Before, however, we discuss what it is that is so important about Galileo, let us first say a few things about 'intellectual history' or the 'history of ideas', which we began to talk about last week. First of all, and perhaps most importantly, what do we mean by history? The fundamental basis of history must be time, for if we were not temporal beings, that is, if we did not have a sense of our own past, present and future, then history as such would not have a meaning for us. Thus, we can talk about the history of rocks, but it is unclear that rocks have a history for themselves. Likewise, we can talk about the history of lions, but they themselves do not have their own history.

The time of history is not the same as clock time, though it can be measured by clock time (we say that such and such an historical event happened on such a date and at such a time), because it is our own experience of time that is the basis of clock time, and not clock time the basis of our experience of time. Human beings were already historical, having a sense of their own lives and deaths, and of their place in the sweep of generations, long before clock time became the representation of lived time. If we did not first of all live in time, then there would be no calendars and clocks to measure it.

What do we mean by the past of history? It cannot mean just what once happened in the present and has now disappeared into the past, only to be retrieved in the present like a fish pulled out of the river gasping for air on the bank. In some sense, isn't the past ahead of us rather than behind us? Heidegger in Being and Time speaks of there being two meanings of history (1962, pp.424–55). One is the positivism of the past, which marches under the banner of the words of the famous German historian Ranke, who raised the study of history to a proper science: wie es eigentlich gewesen (the past as it actually was). The other, far more difficult to understand, and the sense of history that Heidegger will argue for, is the past as the possibility of the future. He asks what makes something preserved in a museum historical. We are not just speaking of statues and artefacts, but of all the kinds of things the historian works with: letters, diaries, memoirs, eyewitness accounts, government archives, treaties, and so on. For the positivist, these are the facts of history, and what makes history more than fables and myths. Heidegger does not dispute the reality of these documents nor their importance to the scientific study of history, but he asks a more difficult question: what is the 'pastness' of the past? Why, when I hold them in my hand, do I say they belong to the past? What is the status of these past artefacts as past, even though they belong to the present, since I am holding these documents in my hands now?

What is at stake here is what we mean by truth, for it is the authenticity of these past documents (that they really belong to the past) that legitimates history and makes it different from myth or storytelling. The historian isn't interested merely in the fact that the battle of Waterloo happened on the 18th of June 1815, but that such an event can be verified by real witness accounts that have been written down and stored in an archive. Just as clock time is derivative of lived time, since if human beings did not live in time, then we wouldn't have clocks, so Heidegger thinks there is a fuller experience of history on which this narrower conception, however important and interesting it is, must rest. Why would we be interested in preserving this past, and why are some pasts more significant than others, since there are pasts that are absolutely lost, and some pasts that now interest us (the pasts of women and the marginal, for example) that did not interest us before?

Heidegger's answer is that the past matters to us because of our present. We can only interpret the past from the vantage point of the present, but this means we see the past in terms of our future. The truth of history is not the collection of supposedly true facts about the past, but how life can be breathed into them, so that these lost documents might be retrieved and their burning embers illuminate our world in a new light. We study history because it reveals the present, but in so doing it shines a light forward into our future.

When we come to read Galileo, then, we are not interested in his work as a dead object that has nothing to say about our present or our future, but precisely the opposite. The world we live in now is the world picture of Galileo, and the dangers of the future spring precisely out of this picture. What is at the heart of Galileo's projection of nature is mathematics. 'The book of nature', he famously said, 'is written in the language of mathematics'.[1] Nature was not understood, as it was by the Ancient Greeks, and in the Medieval period, inspired as it was by the writings of Aristotle and other Greek philosophers, as made up of qualities, but of quantities. What we see, colours, shapes, sizes and sounds, is not what is.

The ancient Greek word for nature is φύσις.[2] Rather than suggesting a mathematically homogeneous reality, φύσις is related to the verb φύω, which means to 'grow, produce or engender'. In later medical texts, φύσις began to mean not just the process of something (growing, producing, engendering, and so on), but the nature of something, what it is to be that thing (Hadot 2004, pp.39–52). It is this notion of φύσις that we find in the works of Plato and Aristotle and which is passed down to our European heritage through the Islamic scholars by the 12th century. It is this intellectual world that is being rejected by Galileo's hypothesis. Here nature becomes something very different, and its transformation is something that we still live with today.

It is in the next lecture that we shall investigate this transformation metaphysically and not just scientifically (for it is really Descartes who systematises Galileo's approach, and he understands more fully that it requires a whole different way of looking at nature). At this point we only want to describe generally how such a conception of nature is very different from what came before. Nature is no longer seen as a living being, but as a machine. It is the machine model of nature that opens it up to mathematisation. It is not that Galileo first sees nature mathematically and then subsequently understands it as a physical quantity; it is because he sees nature as a machine, whose parts can be explained purely physically, that it is open to the descriptive power of mathematics. It is this physical, mathematical model that is still the basis of our modern physics, and it affects the way that we all view nature and ourselves. The aim of the new science of nature is to uncover the hidden mechanisms behind appearances, using the new instruments (like Galileo's telescope) and technologies, and constructing experiments through which they might be described mathematically.

What is at the heart of the new mechanical, physical, and mathematical model of nature is the belief that reality is homogeneous. It is the homogeneity of physical reality that is the real revolution of Galileo's world view. For prior to Galileo, both in Ancient Greek thought and in the monotheistic faiths, nature was heterogeneous and not homogeneous. Thus, the universe was divided into two distinct spheres, the physical and the intelligible in Plato, which was then repeated in Aristotle's double world view in the difference between the sublunary world and the heavenly spheres. The theistic division of the world into the earthly and heavenly was easily overlaid on top of these philosophical distinctions, such that in the long Scholastic period the one reinforced the other.

From the perspective of the Church, then, especially since there was no empirical proof for Galileo’s Copernicanism, since it was only a hypothesis, the judgement against Galileo was clearly justified. For it overthrew an image of nature that had existed for millennia. What this homogeneity of nature implied (as Spinoza knew only too well), was not that God did not exist, but that there was no unique place for man in the universe. As Freud remarked, the Copernican hypothesis was a blow to man’s pride, not God’s, since God was required to set such a nature in motion, but man certainly wasn’t (1973, pp.284–5). The universe is made of an infinite homogeneous matter, in which there are infinite stars and infinite planets, some no doubt inhabited by beings who equally mistakenly thought they were at the centre of the universe, but who quite obviously were not, just as man isn’t. That later on God dropped out of the model (‘we no longer need this hypothesis,’ as Laplace famously remarked), should not obscure the fact that it was the disappearance of man that led to the disappearance of God, and not the other way around.

You have to read the Dialogue Concerning the Two Chief World Systems with a healthy scepticism (which of course is always the way you should read) (Galilei & Finocchiaro 2008, pp.190–271). Galileo presents the Copernican system as though it were merely a mathematical hypothesis (which is how Copernicus himself understood it), whereas he believed he already had empirical proof that such a motion was real (he even argued, erroneously, that it explained the tides on the earth). What you have to understand is what the motion of the earth implied, however much it went against common sense, and why his opponents, whom he represents as slaves to their learning and their books rather than to earnest observational study of nature (though it was his theories rather than his observations that were the source of his own hypothesis), were so adamantly against his views. Because to see the earth as in motion meant that there was no difference between the earth and the other planets, and thus there was nothing at all distinctive about it.

As he writes at the very beginning of the second day, 'Independent Mindedness and Aristotle's Authority', the traditional view was that the heavenly spheres were 'ingenerable, indestructible, unchangeable and inert', whereas the earth was the opposite of this, 'elemental, generable, degradable, and changeable' (Galilei & Finocchiaro 2008, p.193). Galileo's argument was that there was no difference between the earth and the rest of the universe because they were made of one and the same substance, and that the earth is the same as the planet Jupiter, which is clearly moving. It is important to note that Galileo has Simplicio (who represents all those who reject the heliocentric view) disagree with this hypothesis not by putting forward a different one, but by appealing to the authority of Aristotle. Galileo is thereby opposing two different practices of science. One in which observation and theory are fundamental (though he probably overplays the observation, since he already entertained the hypothesis and constructed the experiments to prove it), and the other traditional and hidebound by the interpretation of texts. The former, Galileo asserts, would have been more attractive to Aristotle than the latter, even though his opponents claim to speak in his name, since he too was a scientist, and if he had looked through Galileo's telescope he would have agreed with Galileo and not with them.

If we are going to reject Aristotle, he continues, we do not need another author. The only authority we need is our own senses. Our discussion as proper philosophers should be about the ‘sensible world and not a world on paper’ (Galilei & Finocchiaro 2008, p.201). As we shall see next week, however, it is precisely this world that we cannot see.


[1] He did not actually say this. The quotation is a gloss of a passage in The Assayer. ‘Philosophy is written in this all-encompassing book that is constantly open before our eyes, that is the universe; but it cannot be understood unless one first learns to understand the language and knows the characters in which it is written. It is written in mathematical language, and its characters are triangles, circles, and other geometrical figures; without these it is humanly impossible to understand a word of it, and one wanders around pointlessly in a dark labyrinth’ (Galilei & Finocchiaro 2008, p.183).

[2] For the Liddell and Scott entry for φύσις, see http://tinyurl.com/3a4fsaf. And for φύω, see http://goo.gl/oeZ43f.


Why Philosophy?

February 19, 2015

Plato famously has Socrates say that the 'unexamined life is not worth living' (Apology 38a). But what is an examined life in contrast? Normally, I suppose, when we live our lives we do not question our fundamental principles, values or beliefs. If we did so constantly, then we would not be able to live our lives at all. I imagine this is what most people think philosophers are: people who can't live proper lives, who have their heads in the clouds, who aren't reasonable, serious people. This isn't a new insult. It goes right back to when there were first philosophers (because there haven't always been such strange people). Plato tells the story of Thales, one of the first philosophers we know of, who was so distracted by the heavens that he fell into a hole. This is the passage in full:

Why take the case of Thales. While he was studying the stars and looking upwards, he fell into a pit, and a neat, witty, Thracian girl jeered at him, they say, because he was so eager to know the things in the sky that he could not see what was there before him at his very feet. The same jest applies to all who pass their lives in philosophy.
(Theaetetus, 174a)

Well, I don't suppose such a thing really happened. It has the ring of a myth just because the metaphor is so telling. Isn't studying philosophy just like falling into a hole, and doesn't everyone laugh at philosophers because they don't take life seriously enough? The joke, however, is in the end Thales', because having spent so much time staring at the heavens, he was able to predict that the next olive harvest was going to be very good, and thus he made a fortune. Perhaps it is not so useless being a philosopher after all.

I don't think, though, that this was the reason Plato thought an examined life was better. I don't think he was recommending philosophy as a way of making money (or getting a career, as we might say nowadays). Though that might be a consequence of doing philosophy, it should not be the reason you choose to do philosophy. The reason that Plato recommended philosophy was that he thought it would make you a better human being.

In this way he saw philosophy as a spiritual task that consumed the whole person and not just a skill one could become better at. The word ‘spiritual’ has perhaps become an overused word in our culture and in that way might be redundant unless we give it a precise meaning. What I do not mean by spirituality in this context is a pseudo-religious activity or practice, as when someone might say that they are spiritual but not religious. Still less do I mean the commercial side of spiritual activity, like faith healing, crystals and reincarnation. All these are a kind of watered down mysticism that is the opposite of what Plato means by an examined life.

In another dialogue, The Symposium, Plato tells us a story about how philosophy was born from Poverty and Resource (203a). Someone who has everything and desires nothing cannot be a philosopher, but equally someone who has nothing and cannot desire anything will not be able to be one either. The philosopher is someone who exists in between the two. She knows that there is truth but that she lacks it, and it is because she lacks it that she desires it. The love of wisdom, which is what philosophy means, is this continual search for the truth, and Plato seems to suggest that this search is unending. The philosopher is always looking for the truth and is never certain that she has found it, whereas the non-philosophers are always those who know that they have found the truth and that it is everyone else who is wrong. The fundamentalist and the philosopher, then, would be two very different people.

Is all of this still too abstract? How would we apply Plato's dictum to our own lives? Most of the time, I think, if we were to be honest, we don't think for ourselves. Rather we think like everyone else. We have the same opinions, the same likes and dislikes, and we act in the same way. It is when we question this common opinion that we begin to ask ourselves how we could really be ourselves. Now this might seem to be the easiest thing of all to do. For aren't we all 'selves'? Aren't we already born a 'somebody', an individual? Yet this self that everyone is isn't the self that we are after, because we want to be uniquely ourselves. This isn't something that we are born to be. Rather it is something we have to accomplish throughout our whole lives, something it is very possible to fail at.

The courage to be oneself, the courage just to be, is very difficult indeed. To conform, to be like everyone else, is, in comparison, very easy, and it is what we are always tempted to do instead. Philosophy isn't about learning about philosophy just for its own sake, though it can become like that in a university sometimes, but about how one faces the question of one's own existence and how one gives meaning to one's own life. This means being able to look inside yourself and reflect on what is important to you, what your values and desires are, and from that to be able to choose the best life for yourself (which might not be the same as what other people might think is the best life for you), and, once you have chosen, to have the strength and commitment to carry it through.

What might prevent you from doing so is always the opposite of philosophy: distraction and boredom. Most of the time we just fill our lives with doing stuff, as though our time were endless and we could always put off making a decision. It's a bit like how we think about our own deaths. We are always certain that our death is some way ahead (especially when we are young), so we don't really have to concern ourselves with it. Of course that isn't true, because in fact our deaths could happen at any time and we wouldn't know at all. What would it mean to live with that realisation? It would mean that you would have to ask yourself, if you really were to die in the next moment, whether you would be wasting your time as you are doing now, just drifting from one moment to the next. The American writer Hubert Selby Jr. speaks about a 'spiritual experience' that he had, which is close to what I am describing here. He says that one day at home, he suddenly had the realisation that he was going to die, and that if he did die, he would look back upon his whole life as a waste because he hadn't done what he wanted to do. He hadn't become the person he wished to be. In that very moment of wishing that he could live his life again and not waste it, he would die. This realisation terrified him. It was this terror that was his spiritual experience, though at the time, he says, he didn't realise that; he was just terrified. It was at that very moment that he became a writer. Not that he had any skill, or any idea of what being a writer was, but he wanted to do something with his life (at the time he was on the dole and in between dead-end jobs) and writing seemed the best thing (of course it could have been something else, but it was doing something with his life and not regretting it that was the important thing). He learnt to become a writer by writing, but it was his 'spiritual experience' that made him do it and also made him commit to it, not just give up because it was difficult.

I think what Plato means by philosophy, by an ‘examined life’ as opposed to an ‘unexamined one’ is what Hubert Selby Jr. means by a ‘spiritual experience’. I am not sure that you can do philosophy if you haven’t had one (though you might be very clever about philosophy). Notice that this experience hasn’t got anything to do with being intellectual or knowing a lot of stuff. It’s about facing oneself honestly and about a commitment to a life without knowing how it might end up.


On the Difference between Ethics and Morality – Lecture 2

February 11, 2015

Without ethics we would not be human; everyone agrees with that. Blackburn calls this our ethical climate or environment, which is analogous to our physical one (Blackburn 2001, pp.1–6). Just as human beings need physical shelter, so they also need an ethical one. Ethics describes the ways in which human beings, in any culture, value certain kinds of behaviour over others. The ancient Greeks, who were the first philosophers, would have described the difference between the physical and the ethical environment as the separation between φύσις and νόμος.[1] Just as there are laws of nature, so there are ethical laws in every society. Again, Blackburn is probably alluding to the etymology of the word 'ethics', which comes from the ancient Greek ἦθος, meaning a place or custom.[2]

But what is the difference between a natural and an ethical law? We can understand the necessity of natural law. In nature, every event has its cause. Such a necessity is what we call law. But are there laws of ethics? Does not every culture have its own different values? Even Hitler, Blackburn argues, had his values, the purity of a race; it is just that we do not value them. Are we right not to? What gives us the right to say that there are ethical laws, that there is an absolute difference between good and evil?

Is there a necessity to ethics? If there is, then it cannot be the same as the necessity of nature. The laws of nature are intrinsic to the physical universe; they are indifferent to human beings. If there are laws of ethics (and maybe we should not use the expression 'law' to describe them), then they must belong to what we consider ourselves to be, what it is to live a human life, and not to nature. Even the nature of the human being is not important to ethics. It is not the fact that we are a certain type of animal which makes us ethical, but what we value in ourselves and others, and the meaning of such a value does not belong to the natural world.[3]

Philosophy has always, from the very beginning, tried to describe this ethics in terms of rationality. It is because human beings are rational that we are ethical, and not the other way around. Kant would argue that it is because I have to give reasons for my actions that I take responsibility for them, and expect others to do so. Without reason, there would be no ethics. This is why we do not expect small children and animals to be ethical. Bentham and Mill, on the other hand, would argue that it is not my intentions that count, but the consequences of my actions, which again can be measured rationally through the principle of utility: the greatest happiness for the greatest number.

And yet is reason sufficient to explain ethics? Was not Rudolf Höss, the commandant of Auschwitz, moral to his friends and family? Did he not keep promises and probably love his wife and children? How is it possible that at the same time he could send so many other human beings to the gas chamber (Rees 2011)? It is at this point, I believe, that we must make a distinction between morality and ethics. Höss had his morality. Such a morality is precisely what allowed him to murder one million Jews and a hundred thousand other human beings, but what he lacked was ethics. It is morality that differs across cultures, whereas ethics does not.[4]

Morality is the codes and values we live by. They have their origin in the societies in which we shelter, and they are the ways in which we judge one another. Such a morality is what Blackburn calls our 'ethical environment', but I do not think that in and by itself it is ethical at all. It is morality that philosophy attempts to justify rationally, though we might, like Nietzsche, think that this is just a smokescreen to legitimate power. A morality without ethics, however, soon descends into murder and despair, for what it lacks is recognition of the humanity of the other. This is why Höss could go home every night to his wife and children and live a perfectly respectable middle-class life (it is important to recognise that the Nazis were not on the whole madmen, like Amon Goeth played by Ralph Fiennes in Schindler's List): because he did not see the Jews and the others he murdered in the gas chambers as human beings at all. It is precisely a morality without ethics which allows us to commit such crimes against humanity, and we see it again and again throughout human history, both in our distant and immediate past, and in other cultures than our own.

It is this ethics, as opposed to morality, that is described by Raimond Gaita in his book A Common Humanity (Gaita 2000, pp.17–28), and which I would claim is universal. He tells us of an event that happened in his own life when he was seventeen years old and working in a psychiatric hospital. The patients there seemed to have lost any status as human beings. He writes that they were treated like animals by the staff in the hospital. Some of the more enlightened psychiatrists spoke of the 'inalienable dignity' of the patients, but others treated them sadistically. It was only when a nun arrived and behaved differently towards them that the attitude of the staff was revealed to Gaita. They had ceased thinking of the patients as human beings. But what is important is that it is the behaviour of the nun which reveals this. Humanity, then, is not a property of someone as green is a property of a thing. Rather, humanity is revealed in the relation that one person has to another. It is because the nun loved the patients unconditionally that their humanity was revealed to him. Without this love, they were less than human.

Ethics, then, is not a moral code, but this unconditional love for other human beings, especially for those who have fallen out of what society might call humanity: the poor, the sick, the destitute and the mad. Our humanity, and the humanity of the society in which we live, is measured by the love we have for others, and equally our inhumanity, and the inhumanity of the society in which we live, is measured by the lack of love we have for others. Such a love is fragile, because it cannot be justified rationally, and our own moralities can work against it (in the sense that Blackburn speaks about ethics as an ethos). We can use morality to legitimate why we should not treat others as human beings, but not why we should love every human being equally. Such a love is both what makes us human and what humanises others, but it is not rational, if one means by 'rational' a belief or intention. This is why Gaita stresses that it is not the nun's beliefs that justify her behaviour; rather her behaviour justifies her beliefs. The behaviour comes first. I act before I understand, and I do so because I am open to the humanity of the other. This is first of all an openness to the vulnerability and suffering of the other, before it is a thought about this vulnerability and suffering, and it is precisely because Höss could harden his heart to such vulnerability and suffering, because of his morality, his ethos, that he could murder so many human beings and then return home to his wife and children every night believing himself to be a moral human being.

It is very important that this ethics of love does not slide into mawkish sentimentality. An ethics without morality or politics is just as dangerous as a morality or politics without ethics, because it makes no attempt to change the world in which there are millions of people who are suffering. This is what Badiou warns us of in his book Ethics: An Essay on the Understanding of Evil (Badiou 2002, pp.30–9). There is a subtle connection, Badiou argues, between our obsession with the suffering of others and the moral nihilism of our consumer society. Their suffering has almost become a spectacle we enjoy so that we can feel good about ourselves. Yet we do nothing at all about the political situation which is the real cause of this suffering, that is, capitalism. We just accept this as an economic necessity. Badiou's argument is that our obsession with ethics, whether it is a question of rights or the sufferings of others, is just the opposite side of this necessity. 'Children in Need', the BBC's charity, could happen every year for the rest of time, but it will never change the political situation in which there are children in need, because we live in a society where it is perfectly acceptable to give billions of pounds to the banks but to let the large majority of children live in poverty and misery. Every year, we can watch on our computer and TV screens some war or disaster, and we can feel the suffering of others, and many will generously send their own money, but we do nothing to change the unjust global economic system that is the real cause of this suffering. It is as though we need our yearly fix of ethical feeling, so that for the rest of the year we can ignore the fact that it is our empty consumer lives that are the real cause of poverty, starvation and death in this world. We cannot, therefore, separate politics from ethics. If our ethics does not change the world, then it is an empty gesture; a beautiful sentiment, but without any real effect in this world.

To quote Kant's famous phrase and change it slightly: morality (or politics) without ethics is blind, but ethics without morality (or politics) is empty.

Works Cited

Badiou, A., 2002. Ethics: An Essay on the Understanding of Evil, London: Verso.

Blackburn, S., 2001. Being Good: An Introduction to Ethics, Oxford: Oxford University Press.

Gaita, R., 2000. A Common Humanity: Thinking about Love and Truth and Justice, London: Routledge.

Rees, L., 2011. BBC – History – World Wars: Rudolf Höss – Commandant of Auschwitz. Available at: http://www.bbc.co.uk/history/worldwars/genocide/hoss_commandant_auschwitz_01.shtml [Accessed October 21, 2012].


[1] For the Liddell and Scott entry for φύσις, see http://tinyurl.com/3a4fsaf, and for νόμος, http://tinyurl.com/3yxavgo.

[2] See Liddell and Scott, http://tinyurl.com/39sveq6.

[3] There is a naturalism in ethics that denies this, and which would be represented by such philosophers as Spinoza and Nietzsche, but precisely for this reason they reject any morality.

[4] I am aware at the level of etymology that the difference between ethics and morality is non-existent, since morality (from mores) is just the Latin for the Greek ethos. It is not the words that matter here, but the different experience. Morality, in my definition, is always some kind of justification of human action, whereas what I mean by ethics is an immediate response to the suffering of others.


Moral Reasoning – Lecture 1

February 8, 2015

We all act morally or otherwise. We all see others and judge whether they act morally or otherwise. The question is what right we have to do so. Is there an underlying procedure or principle that allows us objectively to declare our own or others' acts moral or not? In the history of moral philosophy there have been two standard ways of doing so, and these two theories form the mainstay of most ethics courses, both in schools and universities. They are utilitarianism and deontology. Both have a long pedigree and can be found right at the beginnings of Western philosophy. They can also be found in other traditions outside of this canon. For ease of explanation, however, in this lecture we are going to focus on the main representatives of both theories: Bentham and Mill for utilitarianism, and Kant for deontology. At the end we will ask, despite the fact that they are very different theories, whether they both harbour the same prejudice: that it is possible to make sense of our ethical and moral principles outside of the culture in which we exist.

Though we can find utilitarian arguments for ethics in Socrates' speeches, for example, probably the best modern representative is the English philosopher Jeremy Bentham. The basic principle of his utilitarianism is the maximisation of human happiness. What determines all human action is pain and pleasure. Rationally, every human being, like any other natural being, seeks to maximise their pleasure and minimise their pain. To determine whether a course of action is moral or not is to add up the maximum amount of happiness for all. If the pleasure outweighs the pain, then the action is rational.

What makes this moral theory attractive to many is that it seems to reduce moral choices to something quantifiable and calculable. It is not surprising that even today government policy is decided by a utility calculus. At the heart of the calculus is the idea of a common currency. We can take what appears to be a value judgement and transform it into a cost-benefit analysis. This is even clearer when we take this common currency literally and transform it into an economic calculation where we measure people's preferences in terms of a monetary value. Thus we might say that it makes sense to force people to wear seatbelts because, although this causes pain to a small number of people, the benefit to society as a whole is greater because of fewer deaths in road accidents.
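To make the idea of a common currency concrete, here is a minimal sketch of how such a cost-benefit calculation might be set out for the seatbelt case. Every figure and monetary valuation is invented purely for the sake of the arithmetic; none is drawn from Bentham, from Mill, or from any real policy analysis.

```python
# A purely illustrative utilitarian cost-benefit sketch of the seatbelt example.
# Every number below is hypothetical; the point is only the form of the calculus.

# Monetised "pains" of a compulsory seatbelt law (per year, in arbitrary units)
costs = {
    "discomfort_and_inconvenience": 2_000_000,
    "enforcement_and_administration": 1_500_000,
}

# Monetised "pleasures" (benefits) of the same law
benefits = {
    "deaths_avoided": 50 * 1_000_000,          # 50 lives, each priced at 1,000,000
    "serious_injuries_avoided": 500 * 40_000,  # 500 injuries, each priced at 40,000
}

total_cost = sum(costs.values())
total_benefit = sum(benefits.values())
net_utility = total_benefit - total_cost

print(f"Total cost:    {total_cost:,}")
print(f"Total benefit: {total_benefit:,}")
print(f"Net utility:   {net_utility:,}")
print("Action is 'rational' on this calculus." if net_utility > 0
      else "Action is rejected on this calculus.")
```

The sketch only shows where the philosophical weight falls: everything depends on the assumption that a life, an injury, and an inconvenience can all be priced in the same currency, which is exactly the assumption that Mill's distinction between higher and lower pleasures, discussed below, is meant to repair.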

Historically, however, when we come to look at the application of utilitarianism, we might worry about its moral basis. Thus Bentham argued that workhouses should be created for the poor, because the sight of beggars on the streets was more harmful to those who saw them than the individual's freedom to beg was beneficial. The poor themselves would be forced to work in these workhouses so that they paid for their own incarceration and taxpayers wouldn't have to suffer any loss of income.

Although this historical example might appear extreme to some, many people will defend utilitarianism in this way: a harm to one is a benefit to all. One such example is torture. If you could prevent a bomb killing millions of people, would you not torture the terrorist to find out where the bomb was? It seems rational to say that you would, since you would be weighing one human life against a million others. The argument against this scenario is that it would be wrong to torture the terrorist because every human life is inviolable. The utilitarian would respond that such principles are unrealistic, since in this situation we cannot seriously take the one life to be as important as the million others.

We might think, however, as Sandel points out, that we are not comparing like with like (2010, pp.38–40). The real comparison, since we suppose that the millions who would die from the bomb are innocent, is whether we would be willing to torture the innocent daughter of the terrorist in order to find out where the bomb is, and many would not be willing to take this step even though the purely utilitarian argument would force us to do so. Those who routinely defend torture do not usually defend the torture of innocents, even given the best utilitarian arguments that doing so would lead to the greatest happiness for the greatest number.

That Bentham's strict utilitarianism seems to go against some of our fundamental ideas of what morality is meant that his disciple Mill looked to improve it by reconciling human rights with utility. His basic conception of liberty is that any individual should have the right to do what they wish as long as they do not harm others. This idea of a right seems stronger than any utility, since no society, for example, could banish the religion of a minority simply because it did not coincide with the wishes of the majority. Mill, however, argues it can be defended on a utility calculus, because it is better for a society to have non-conforming elements than to suppress them. Thus, in the long term, allowing dissent and individual differences prevents a society from becoming rigid and stultifying. The majority should test its views and opinions, and can only do so because it allows a minority to exist. The problem with this utility argument, again, as Sandel points out, is that it does not sufficiently preserve individual human rights (2010, p.50). Although it is possible to imagine a society that exists with minorities, it is also perfectly possible to imagine a happy society in which everyone's needs are fulfilled but which is despotic. Now we might prefer to live in a society that has individual rights, but we could not argue for that on purely utilitarian grounds.

To defend utilitarianism against the objection that it reduces all lives and all pleasures to a common currency, quantified in the same way, Mill makes the distinction between higher and lower pleasures, as when he famously writes that 'it is better to be a human being dissatisfied than a pig satisfied'. The test is that if individuals experience both the lower and the higher pleasure (let us say listening to One Direction or going to an art gallery), then if we add up all these individuals' choices, the higher pleasure will take precedence over the lower pleasure. The only problem with this analysis is that someone can desire the lower pleasure even though they know the other choice is the greater accomplishment. The nobler life might not be the most pleasurable. It might be better to go to the art gallery or listen to Shakespeare, but it might be more pleasurable to slob on the settee and watch rubbish TV. If we do think that the other life is more noble, as Mill obviously does, then it cannot be a utility calculus that makes us think that, but our ideas of what a dignified and worthy human life might be. As Sandel writes, 'It is not desires here which are the standard but the principle of human dignity. The higher pleasures are not higher because we prefer them; we prefer them because we recognize them as higher.' (2010, p.55)

The opposite of utilitarianism is deontology, and we are going to use Kant as our example. Like Mill and Bentham, Kant too thinks that moral choices should be determined by reason, but for Kant moral reasoning determines the principles of action and not its ends. Kant sharply distinguishes human action from natural events. Natural events are governed by external laws of nature. A stone falls to the ground because of the law of gravity, not because it chooses to do so.[1] The necessity of moral laws is neither empirical nor natural, but ideal. I act morally because I have rational principles I act by.

What does it mean to act rationally, rather than just morally? Human beings act rationally because we act through ends and means. I want to have a cup of tea. I know rationally that if I want to have a cup of tea then I need to boil the kettle. The cup of tea is the end, and the boiling kettle is the means. Ends are objects of human desire and the ultimate end for Kant is happiness.

To act rationally therefore is to act under an imperative. If you want x, then you must do y. There are two kinds of imperative for Kant, hypothetical and categorical. The hypothetical imperative is the example we have already discussed. I have an end, and I will the means. If I want to get to the lecture on time, then I have to leave the house at a certain time. Such an hypothetical imperative Kant calls ‘technical’, since to achieve them you need certain skills, knowledge and ability. The overall aim, why I should bother to go to lectures at all, or even get out of bed, Kant calls ‘pragmatic’, and is happiness, since every human being wants to be happy.

All hypothetical imperatives are relative to the individual, since it is me who wants to get to the lecture and not you, and although we all will happiness, none of us is going to agree on what happiness is. The only imperative that has absolute necessity Kant calls the categorical imperative. Here the principles of the action are not determined by the end, and the validity of the law is unconditional. Kant has to prove that such an imperative exists. How could there be an action which, if it were rationally willed, would have to be willed by everyone?

For Kant, the moral law takes such a form. Take the example of dishonesty. Kant's argument is that to will dishonesty is to will lawlessness, and that one cannot at one and the same time be rational and will lawlessness. To act lawfully means one can test one's actions and see if they would be lawful for every other rational being. We do so by universalising them. First-order principles are rational means-end calculations. Second-order principles, which are the moral tests, are where I take my maxims and see if everyone could follow them. This means, for Kant, that they are coherent and consistent.

Take, then, the example of finding a purse full of money on the street. Should I take the money or should I hand it in? My maxim is as follows: given the circumstances in which I can appropriate the money of someone else without being found out, in order to make myself richer, I will take that thing (Deigh 2010, p.147). Can I universalise that subjective maxim? Kant would argue that I could not do so and be coherent, because if I lived in a world in which everyone took each other's property at will, then there would be no property as such. My belief that I would gain from stealing is predicated on a world in which people do not steal but respect private property.

In terms of consistency, Kant uses another example. Imagine a rich man walking down the street who sees a beggar. Why should he give that beggar any money, since he believes we shouldn't help others but only look after ourselves (Deigh 2010, p.151)? We could imagine him saying to himself that he doesn't think he should have to give any poor people his money. Now this does not contradict Kant's coherence test, because one can perfectly well imagine a world in which the rich don't give the poor money and help them, but it fails the consistency test, because one could not will to live in a world in which no one would offer another a helping hand. Would the rich man, for example, want to live in a world where, in a flood or in an epidemic, everyone would let each other die without assistance?
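The two tests can be laid out schematically. The following is only an illustrative restatement of the coherence and consistency tests as just described; the maxims and the verdicts recorded for them are simply those of the two examples above, not derived from any formal theory.

```python
# An illustrative restatement of Kant's two-step universalisation test.
# The verdicts are those of the two examples discussed above; nothing here
# is computed automatically from the maxims themselves.

from dataclasses import dataclass

@dataclass
class Maxim:
    statement: str
    coherent_when_universalised: bool   # can the maxim even be conceived as a universal law?
    willable_when_universalised: bool   # could I will to live in the world it produces?

def categorical_imperative_test(m: Maxim) -> str:
    if not m.coherent_when_universalised:
        return "fails the coherence test (cannot be conceived as a universal law)"
    if not m.willable_when_universalised:
        return "fails the consistency test (cannot be willed as a universal law)"
    return "passes both tests"

stealing = Maxim(
    "When I can take another's property undetected, I will take it to enrich myself.",
    coherent_when_universalised=False,  # universal stealing abolishes property itself
    willable_when_universalised=False,
)

indifference = Maxim(
    "I will never help others, but only look after myself.",
    coherent_when_universalised=True,   # a world of universal indifference is conceivable
    willable_when_universalised=False,  # but no one could will to live in it
)

for maxim in (stealing, indifference):
    print(f"{maxim.statement!r}: {categorical_imperative_test(maxim)}")
```

Setting it out this way only makes visible that the two tests are distinct filters: a maxim can pass the first, as the rich man's does, and still fail the second.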

The problem with Kant's ethics is its excessive formalism. It appears to justify actions that most people, using their common sense, would not think were ethical. So, for example, if I lived in a police state, and someone came to my door looking for my neighbour, and I knew that my neighbour would be sent to a concentration camp, then I would still have to tell the policeman the truth, since not telling lies is a categorical imperative. More importantly, I think, this formalism hides a social bias in Kant's account. The categorical imperative against stealing rests on the existence of private property, but it is perfectly possible to imagine societies without private property, and in that case it would not be wrong to steal. We are not really, then, universalising values for all rational beings. We are only universalising our own social values.

Moral theorists, as MacIntyre points out, argue as though there were two levels of discourse, each absolutely separate from the other (MacIntyre 2010, p.2). One is the everyday moral language that people use, which expresses their history and culture; the other is the language that philosophers use, which is somehow meant to transcend every history and culture. Kant does not speak of stealing being wrong for 18th century Europe, but of it being wrong for all time, for all cultures, and for all rational beings (including non-human rational beings, one assumes). The same can be said for utilitarian theories. What appears to be the greatest benefit for us today might not be the greatest benefit in the future, nor might it be seen as the greatest benefit by other cultures (they might value different things).[2] What we value and take to be right reflects our own culture and society's views, and the one discourse affects the other. This does not lead to moral relativism, which is merely the opposite side of the same coin as moral absolutism, but it does mean that our moral reasoning does not take place in a vacuum.

Bibliography

Deigh, J., 2010. An Introduction to Ethics, New York: Cambridge University Press.

MacIntyre, A.C., 2010. A Short History of Ethics: A History of Moral Philosophy from the Homeric Age to the Twentieth Century, London: Routledge.

Sandel, M.J., 2010. Justice: What's the Right Thing to Do?, London: Penguin Books.


[1] Metaphysically speaking, the idea of freedom is at the heart of Kant's ethics. We are moral because we are free and self-determining. In this lecture we are only going to focus on the categorical imperative and its difference from utilitarianism. A full account would need to show this necessary relation between freedom and morality for Kant.

[2] We might even assert that utilitarianism, at least in the form of Bentham and Mill, is itself a historical phenomenon and impossible without the rise of capitalism and economic rationality.


Heidegger and The Philosophy of Science – Lecture 7

February 1, 2015

We have thought about science as being different from religion. Science has to do with facts, and religion with beliefs. Increasingly, as we have gone through the different views of what science might be, this simple opposition has become less and less believable. For a start, it is not at all clear that science has to do with facts, if we mean by that that facts are simply lying around for a scientist to construct a theory from. On the contrary, facts are theory dependent. What is taken to be a relevant fact is given by a scientific theory, and this theory cannot be justified by appeal to facts alone, otherwise we would be lost in a circular argument. Is it possible then to define science simply by theories alone, without recourse to facts outside of them? Popper certainly attempts to do so through his principle of falsifiability. What makes a theory scientific as opposed to non-scientific, and thus what distinguishes science from religion, is that it can be falsified, whereas non-scientific theories cannot. But when we examine the falsifiability theory in detail, it is very difficult to show, in concrete terms, how theories are falsified. Rather than anomalies causing scientific theories to collapse, they seem quite happily to carry on regardless, and because scientific theories are so complex, it is difficult to discern which hypothesis has to be falsified in order for the theory as a whole to be falsified. In other words, the fact problem still rears its head, but now at the point of falsification rather than at the point of the construction of a theory. Because of these problems, philosophers of science like Kuhn will argue that we shouldn't be arguing about science as such, or the ideal nature of science, but investigating what scientists themselves do. What we find then is not a smooth progress of science from one theory to the next, getting ever nearer to the truth, but a discontinuous series of revolutions between what he called 'paradigms'.

Although we can speak of different paradigms, surely it is the same reality that lies beneath them all? The question of reality is particularly pressing in science because the basis of modern scientific theories, since Galileo and Newton, is unobservable phenomena. If the science of the 16th and 17th centuries posited nature as made of tiny particles of matter in motion, of which all that we observe are the effects, this did not mean that anyone could see such corpuscles. How then did we know that such a theory was real? The whole of Descartes' philosophy was an attempt to answer this question, and his answer, with which not many philosophers after him were satisfied, was that it was God's justice that ensured that what our theories said was real was in fact what reality was, even though we could not see it. The whole debate between realists and anti-realists in the philosophy of science is whether we can commit to such a reality or not without God or any other transcendent guarantee (or indeed whether it matters or not whether it can be proved to be real).

At the end of the discussion of realism and anti-realism, I introduced the philosophy of Heidegger. Many will argue that he does not have a philosophy of science, but I don't think that is right at all. Indeed, one could say that the whole of his philosophy is a sustained debate with science (Glazebrook 2000). For Heidegger, science is a restricted, not a full, account of experience. We take science to be describing the way that things are, but for Heidegger it is only a certain way of approaching things, and not necessarily the truest. In Being and Time, he distinguishes between the present-at-hand and the ready-to-hand (Heidegger 1962). Science, which has its roots in a certain metaphysics, relates to things as present-at-hand, but this is not how we relate to the world that is nearest to us. Our fundamental relation to things is ready-to-hand. We use them. We open the door to enter the room, we enter the room and sit on the chair, we place the books on the table, we look at the screen on which a picture has been projected, or we look at the words written on the board, or down at the book in our hands, and so on. What we do not look at is little particles of matter, or atoms. Why, Heidegger asks, would we take this world not to be real, and the scientific world to be more real?

When we relate to things as ready-to-hand, as opposed to present-at-hand, then it is clear to us that these things relate to our world. The world is the context in which making use of things makes sense (there is the world of the classroom, and this world is part of a bigger world in which something like a classroom makes sense). This world is not a thing. It is not a container in which something is enclosed (like water in a glass, to use Heidegger's example). Rather, it names the cultural context or background in which something like sitting in classrooms and listening to lectures makes sense. Even the activity of science itself, with its abstract picture of things, is not possible without this world, since science is something that human beings do, and it can only occur where this activity already has a meaning.

In section 3 of Being and Time, 'The Ontological Priority of the Question of Being', Heidegger speaks explicitly about science. He says that every science has its own area of things that it studies. Thus physics studies matter; chemistry, elements; biology, life; and so on. Yet for any of these sciences to function, they have to take for granted that the things they study actually exist. Thus, Heidegger says, they all presuppose an understanding of being that they do not question. The physicist accepts that matter exists, the chemist elements, the biologist life. If they did question the existence of these things, then they could not actually do science at all, because they would come to a stop at the threshold of the investigation and never get any further. If I don't accept that these things exist, then how could I do physics, chemistry or biology? What Heidegger here calls a 'regional ontology' is similar to what Kuhn calls a paradigm, and the 'ontical questioning of positive science' to normal science. It is only when a science goes into a crisis that the ontology it presupposes comes into question. This is when, again in Kuhn's vocabulary, the very fundamental nature of the objects of a science becomes doubtful, and only at this point does science have to turn to philosophy for an answer.

What philosophy discovers is that science is a projection onto nature. This does not mean that nature does not exist for Heidegger (if human beings ceased to exist, there would still be planets, but there would not be Newton's laws of motion). What modern science projects onto nature is mathematics. Nature is only what can be described mathematically. From Galileo and Newton onwards, this is understood in terms of efficient causality rather than final causality. For Aristotle, nature is defined teleologically. Nature has a purpose, goal and direction, whereas in modern science it does not. This is why, for Heidegger, technology is the essence of modern science, because it means that, through its mathematical projection, nature is totally subsumed under human purposes. Because nature has no purpose or value in itself, its only value is for our sake. It becomes, to use Heidegger's phrase, a 'standing reserve'. The big difference between Kuhn and Heidegger is that, though both understand science historically, Heidegger does not think that the image of nature in Newton and Galileo is fundamentally different from that in quantum physics. Though they use a different mathematics, both nonetheless view nature mathematically. The fundamental split, then, is between the final causality of Aristotle and the efficient causality of modern science that culminates in technology.

For Heidegger, the basis of the mathematical projection of science is the experiment. It is therefore a fundamental misunderstanding of science to think that it simply experiences things as they are and then comes up with a picture of the world (a picture which is meant to be what things really are). On the contrary, through the experiment the scientist already interprets experience mathematically. It is the mathematical model that gives meaning to the experience, and not experience that gives meaning to the mathematical model. This again is the big difference between Aristotelian and modern science. For Aristotle, science is based on experience; for modern science it is not. Mathematics comes first, not experience, yet we still speak about science as though it were about experience, and as though the things that we directly experience around us were the diminished and restricted ones, and not the objects of science; as though we were living in an abstract world and the mathematical projection of science were the full-blooded one.

That meaning is the subject of science is what the history of science teaches us. We see that the world of Aristotle, Newton and Einstein is not one and the same world, but a series of ruptures, breaks and discontinuities. Although the reference of these theories is one and the same, the meaning of the reality they refer to is not. What mass means in Newton, therefore, is not the same as what it means in Einstein. To use Kuhn's word, these worlds are incommensurable, since there is no perfect translation between one and the other. You will only think that objectivity is threatened by this picture if you believe in a metaphysical reality that is beyond human experience but which at the same time we can know. Reality is not outside of us; it is something that we construct through our institutions and discourses. The difference between astrology and astronomy is not a matter of method, as Popper might have us believe, such that one is tested by facts and the other is not, since when we investigate the history of science we see that a theory will ignore those facts that do not fit its paradigm; it is rather that astrology does not have the virtues or practices of objectivity. The problem with astrology is that it explains too much and not too little. Truth, if we might put it this way, is a practice, a way of being, rather than a mirror to a reality that stands outside of us, eternally the same. It is the creation of concepts in response to problems that are forever changing, and it is through problems that we grasp reality.

Rather than grand narratives, the study of the history of science concerns the details: what scientists say and do. For this reason we cannot impose an image of science on its own reality. What we discover is that reality is not identical through time but constructed from different aspects that are only relatively stable and which can always dissolve into a new regularity that might take elements from the previous paradigm but would transform their meaning by placing them in different relationships. It is not reality which explains how science changes, but the changes in science that explain reality, just as it is not the chair that defines sitting, but sitting the chair. The correct question is therefore not what reality is, but how we understand and interpret reality. What changed in the nature of scientific experimentation such that reality was perceived in a different way? What changes is not reality, but how we perceive and understand it, and what changes this perception is the practice of science itself, its discrete methods and discourses that are only visible to us through historical investigation. The subject of such a history is what scientists do. We reject the idea of a hidden telos, as though all scientific activity were heading in the same direction, revealing a reality that had already been there from the beginning but was simply unknown to us. Science is made up of the actions of scientists and nothing more. The meaning of reality does not belong to some intrinsic definition but to a practice that leads to a certain and definite objectivity over a period of time, but which can subsequently dissolve as a new objectivity emerges. Reality is only a correlate of a practice and only has a meaning as such in relation to it. We can therefore distinguish between the practice of science and non-science, but there is no absolute ahistorical meaning of science, and still less a reality that is eternal and unchanging. Science is not about reality per se, but about problems.

What Heidegger calls ‘projection’ Feyerabend calls a ‘belief’ (Feyerabend, 2010, 10). We think that science is just an explanation of what common sense already knows. But the opposite is the case. Science, since Galileo, moves in another direction than common sense. It is by moving in the opposite direction to ‘contemporary reason’ that the new science develops new instruments and new experiments. If it had not done so, if it had stuck by the old rules and methods, it would not have developed such a new way of looking at and understanding reality. It is only subsequent to the emergence of the new beliefs that evidence can be found to support them. We tend to think the opposite: that the new beliefs emerged because the evidence demonstrated their truth. But the reverse is the case: it is the new beliefs that made the evidence visible in the first place. This is why we can subsequently say that ‘Galileo was on the right track’, because now there is enough evidence to support the theory; had we waited for the evidence beforehand, the theory would never have got off the ground. As Feyerabend continues:

Theories become clear and ‘reasonable’ only after incoherent parts of them have been used for a long time. Such unreasonable, nonsensical, unmethodical foreplay thus turns out to be an unavoidable precondition of clarity and of empirical success. (2010, 11).


Realism and Anti-Realism in Science – Lecture 6

December 19, 2014

In a previous lecture we looked at Kuhn’s idea of the history of science as broken by different paradigms that are incommensurable. Aristotelianism, Newtonianism, and Einsteinianism mark revolutions in the history of science rather than a smooth flow of one epoch into another which will some day reach an ultimate Truth, when we can all stop doing science because what our theories say and what is are exactly the same and there are no exceptions. What Kuhn reminds us is that when we think about what science is, rather than taking the philosopher of science’s word for it, we should examine what scientists do. We will find that the philosophical version does not look much like the real history of science; rather, such versions are idealisations in both senses of the word: an abstraction and a kind of wish fulfilment. Kuhn is not sceptical of science as such, but of the philosophy of science. His book, The Structure of Scientific Revolutions, sounds the death knell of a particular kind of philosophical history of science, so that it can be replaced by a proper history of science, whose object is what scientists actually do, rather than what philosophers think they might do. In other words, the new object of this history of science is ‘normal science’, in all its messiness and vagueness, rather than an idealised science that has never existed except in the minds of philosophers like Ayer or Popper.

At this point, however, we are going to make a little detour back to philosophy, and that is to the question which should have been bugging us from the very beginning: what exactly is science about, rather than what is the history of science? Early on we characterised the difference between religion and science as the difference between belief and facts. We said that science is about reality, that it makes true descriptions of real things that happen in the world. In a word, it is objective. On the contrary, religion is subjective. It does not give us a true picture of the world, but offers us a moral compass through which we can live our lives. To confuse religion with science is to undermine the importance of religion rather than to give it more intellectual support. There is no conflict between science and religion, because they are completely different discourses. One tells you what something is, the other how you ought to live your life.[1]

But what do we mean when we say that science is about reality? Aren’t we being a little simplistic when we do that? What is reality after all? Everyone knows the old paradox of whether a tree that falls down in a forest makes a sound or not if no one is there to hear it. Is reality what we perceive or is it more than that? I would say that it would be absurd to claim that there would be no trees, stones or stars if there were no human beings, as though, were human beings to vanish, the universe would vanish with them. The universe does not have any meaning, however, except for the fact that it means something for some being or other in the universe. A stone is not a stone for a stone. It is only a stone for human beings who understand what it is to be a stone. We’ll come back to this at the end of the lecture.

Chalmers, Okasha, and Ladyman (perhaps because they all belong to what can loosely be called the analytic tradition) seem very reluctant to address these questions head on, as though they were too philosophical and could be avoided (I would say that it is their hidden philosophical assumptions which allow them to avoid these questions).[2] For them, on the contrary, the important distinction is between realism and anti-realism, rather than whether reality exists out there as such and what we might mean by reality as a whole. Chalmers simply dismisses the idea that reality is formed by language (what he calls global anti-realism) by appeal to a Tarskian theory of truth, which begs the question, because such a theory already has a commitment to a certain view of language, and a certain view of reality, which remains unquestioned by Chalmers himself. Investigating this presupposition, however, would take us too far from the subject of this lecture.

What then are realism and anti-realism in science? First of all it is important to note that both positions accept the reality of the world, so it is important not to confuse either with a thoroughgoing scepticism. The difference between them has to do with the status of scientific theories, on the one hand, and observable phenomena on the other. A strong realist would argue that both observable phenomena and theories are true descriptions of the world out there, whereas a strong anti-realist would say that only observable phenomena are true, and theories are neither true nor false. All these authors, as far as I can see, occupy a position between these two extremes.

The common sense view, I suppose, would take it that both theories and observable phenomena are true, so we are going to approach this question from that point of view. None of us would think that observable phenomena are not real, that when I see a donkey there isn’t a donkey out there (again, I think both Okasha and Chalmers skip over this supposed reality far too quickly, but let us grant them that truth for now). What isn’t so certain is that theories really point to something out there. This is because much of the basis of scientific theories actually points to phenomena that we cannot observe. If we cannot see something, then how can we say that it is part of the world? From what vantage point would we say that it is real? Of course, as Okasha points out, many sciences do have observable phenomena as their basis, such as palaeontology, whose objects are fossils, but modern physics does not (Okasha, 2002: 59). We cannot literally see inside the atom. We only have theoretical pictures of what it looks like, and we do not know if at that level the universe really looks like that at all.

The anti-realist is not saying that there is no difference between science and someone who thinks that the earth is balanced on the back of a turtle. Rather, theories only give us structures or scaffolding within which we can observe phenomena through experimentation, but it is only these literally observable phenomena which we can take to be true. We cannot prove whether the theory itself is real or not, because there is nothing there to see which we could demonstrate as real. The history of science itself seems to bear this out, because there have been false theories that have actually produced true observable phenomena, so there does not seem to be any necessary connection between the truth of a theory and the truth of observable phenomena. The example that Chalmers gives is the history of optics, which is littered with what we now understand to be false theories of light, and yet which provided correct observable phenomena. Thus Newton believed that light was made up of particles; then Fresnel believed that light was a wave in a medium called the ether; then Maxwell believed that light waves were fluctuating electric and magnetic fields in the ether; then in the 20th century we got rid of the ether and the waves became entities in their own right; then finally the wave theory of light was supplemented by the particle theory of photons.

It seems to go against common sense, however, to say that theories are just fictions on which we hang our experimental results. When we look at the history of atomic theory it does appear that we are getting a progressively better understanding of the structure of the atom, and it would seem entirely bizarre that a theory could predict what we ought to see and at the same time be entirely false. One way of getting around this is by arguing that the anti-realist is making a false distinction between what is observable and what is not observable, since though we cannot see inside the atom, we can detect the existence of atoms by the ionisation they cause when they pass through a cloud chamber. The strict anti-realist, however, would say that all we know to be real are the trails themselves, and we cannot know whether the atoms are real or not, just as we should not confuse the trail that a plane leaves in the sky with the plane itself. In other words, we have to make a distinction between direct observation and detection.

The fundamental issue here is whether we can make a complete separation between theories, on the one side, and facts on the other. This is the real issue, rather than whether facts are observable and theories not. In fact it is the anti-realist and not the realist who is committed to this separation. Both Okasha and Chalmers, though in different ways, criticise it. Chalmers returns to the question of whether the history of science really does prove that theories once taken as true are shown to be false by the next one, and so on ad infinitum, so that we can never know whether our theories give us an accurate view of the world; he argues that each new theory takes up some aspect of the previous one, giving us a more and more accurate picture of the phenomena we are attempting to understand. Thus a true theory (unlike the turtle theory) captures some aspect of the truth of the world, if only a partial one, which is then improved upon by the subsequent one (does this conflict with the Kuhnian view of science, since it implies a cumulative image of science?). Okasha, on the other hand, claims that the problems which the anti-realist says undermine the possibility of claiming theories to be true could equally rebound against what we take to be observable phenomena, and thus would destroy the basis of all science altogether, since we could only claim to know what we can see now in this moment, and not past events, since they too are known only by detection rather than direct observation (this would mean that the anti-realist argument works like Hume’s problem of induction).

As I said at the beginning, I find both Okasha’s and Chalmers’s discussions of realism unsatisfactory, and indeed both of their chapters seem to end without any kind of resolution, as though they had been exhausted by the discussion. What I think is left unthought in their views is that the only way we could access reality is through science, and thus if we cannot, then we cannot access reality. To me the discussion of observable and unobservable phenomena is a red herring. Nothing has meaning unless it has meaning for us, and that is true of both observable and unobservable phenomena; the real issue is whether our reality is first of all something that we observe. Here I would turn to the philosophy of Heidegger, who would argue that it is a prejudice of a very old metaphysics that our first relation to the world is one of perception, what he calls the ‘present-to-hand’. What is true for both the realist and the anti-realist is that they take reality to mean ‘present-to-hand’. It is just that one thinks scientific theories are speaking about something present-to-hand and the other does not. The world for Heidegger, on the contrary, is not something present-to-hand, but ready-to-hand. The world is first of all something that we orientate ourselves in, rather than perceive.[3] This context can never be investigated as an object, because it is what makes objects possible. Even science itself must have its origin in this cultural context or background. It is only because science as an activity means something to us that we can approach anything in the world as a scientific object, and not the other way around.

As Heidegger argues in Being and Time, Newton’s laws are only true because we exist. If we were no longer to exist, and the world in which these laws made sense were no longer to exist, then it would be absurd to still say that these laws were true. This does not mean that things do not exist separately from us, nor that truth is relative. Newton’s laws really say something about things, because these things only are, in the sense of ‘true’, through our existence. This truth would only be relative if we really thought that there was a truth of things beyond our existence that we did not know. Things are only because they are there for us, but this in no way means that any assertion is possible. That would be to confuse assertion with the condition of assertion. The truth of reality is dependent on our existence, but this does not mean that you or I can say anything we like about this existence, for you or I as individuals are just as much part of this existence as anything else is. To be a scientist is already to accept what this existence means (what the world of science means, of which Newton’s laws are an example), and to refuse this is no longer to be a scientist.

Works Cited

Van Fraassen, B. (2006). Weyl’s Paradox: The Distance between Structure and Perspective. In A. Berg-Hildebrand, & C. Shum (Eds.), Bas C. Van Fraassen: The Fortunes of Empiricism (pp. 13-34). Frankfurt: Ontos Verlag.


[1] It is a wholly other topic whether religion is the only discourse that can do this, but that does not undermine our distinction between it and science.

[2] Okasha, Samir, ‘Realism and Anti-Realism’ in Philosophy of Science: A Very Short Introduction, Oxford: OUP, 2002, pp. 58–76. Chalmers, A. F., ‘Realism and Anti-Realism’ in What is this Thing Called Science?, third edition, Maidenhead: Open University Press, 1999, pp. 226–46. Ladyman is more willing to discuss the philosophical issues in depth, but he does so from an analytic perspective. What is lacking in all these treatments is what I would call ‘ontological depth’, and I am going to turn to this in the next lecture, which will look at some of the ideas of Heidegger.

[3] I think that this is what van Fraassen is getting at when he says that a theory or model of reality is only useful when we locate ourselves within it, though I don’t think he is referring to Heidegger’s distinction here (Van Fraassen, 2006, p. 31).


Kuhn – Lecture 5

December 13, 2014

Science does not begin with facts and then construct theories out of them. Nor does science begin with theories and then just find facts that would confirm them. Both these conceptions conceive of science as though it were a discourse that was completely context free. In the first case, facts are simply available as though they were waiting for interpretation of a specific kind, and in the second case, theories are simply open to facts as though there were no inertia or hindrance to the smooth progress of science from one theory to the next, each equally open to the possibility of falsification.

One of the first philosophers of science in the Anglo-American tradition to take the idea of the context or background to scientific activity seriously was Thomas Kuhn.[1] Loosely characterised, this approach might be called ‘historical’. What does it mean to treat science as though it were part of history rather than outside of it? It means first of all to take scientists seriously. It is to treat what they do the same way we would analyse the thoughts and actions of French peasants of the 13th century or a military general in the 20th: first of all to record scientific achievements correctly (who thought of what at what time), and secondly to examine exactly how scientists came up with their theories in relation to the material they were investigating. What it certainly is not is the importation of philosophical theories from the outside (like verification or falsification) followed by squeezing scientific activity to see whether it fits these ideal models.

However much the logical positivists and Popper might differ, they both have the same idealised view of science: there is a sharp difference between theory and observation; knowledge is cumulative, tending towards a true understanding of the universe; the language of science is precise and science is unified; science is either inductive or deductive; the key question of the philosophy of science is legitimacy and validity, rather than the contingency of discovery. Against all these suppositions Kuhn puts forward exactly the opposite: there is no sharp difference between observation and theory; knowledge is discontinuous; science is neither a tight deductive structure nor an inductive reading of facts; scientific concepts are not precise; nor is science unified; context is important and science is historical and temporal.

At the heart of the idealised picture of science is scientific progress. This is the view that science is leading to ever increasing knowledge about the universe and that finally one day we will have a theory of everything, and, I suppose, science can come to an end, because there will be no more questions that need to be answered. So first of all there are pre-scientific theories of the universe that we find in religious and mythical texts (like Genesis), then we get the first science, Aristotelianism (though this is really a mixture of science and occult explanations), then Newtonianism (which is the first science proper), and then finally, in our times, Einsteinian science, which is a response to the crisis that befell Newtonianism. One imagines that sometime in the future, though one can never tell, there will be a fourth science that will replace Einstein’s, but only because it contains more truth and is closer to the universe as it really is than all the other theories we have had. Such a view of the history of science we might call ‘convergence’, since it views the series of scientific discoveries as converging on the true understanding of reality.

There are two problems with this image of science. One is temporal and the other practical. First of all, it has a conception of time where the past is merely a stepping stone to the present and has no meaning in itself. For how can we measure the progress of science in this sense unless we imagine an end towards which it is moving, an end which is supposed to be an advance on the past?[2] But how can we know that this advance is real unless we can stand outside of time and measure it? Is it not really the case that the past is not the stepping stone to the future, but that we judge the past from the vantage point of the present, and in looking back, project a false teleology onto the past? In terms of the past itself, there were numerous possibilities, and the present that we now occupy did not have to occur. Equally, the present that we now stand in has infinite possibilities, so we cannot know what the future will be.

In terms of the practice of science, we also know that this temporal picture of progress is false. This is what Kuhn discovered when he did his own historical research. Rather than the history of science demonstrating that each scientific period progressed into the next one, moving to ever greater levels of truth and closing the distance between discourse and reality, we find that it is discontinuous and non-cumulative, and that there is no reality out there which we could know independently and through which we could measure the relative truths of each discourse, because reality itself is a creation of discourse and not its external validation.

What does it mean to say that the history of science is discontinuous rather than continuous, non-cumulative rather than cumulative? Let’s go back to the image of progress where science moves smoothly from Aristotle, to Newton, to Einstein. What is left out of this description are the gaps or spaces between each scientific theory (or what Kuhn calls a paradigm, because it is more than just a theory), and it can leave these gaps out because of the fantasy of some ultimate truth where reality and discourse are the same. As soon as we leave this fantasy behind, and realise that it too is a creation of a discourse (in this case metaphysics), then we can see that there is no transition from one to the other. Rather, they are separate or incommensurable. They belong to different worlds.

Again this is visible when we actually study the history of science, rather than project our own view of progress upon it. What we get instead of a single continuous line is a line of breaks: Aristotle, Newton and Einstein. What then causes these breaks? Why don’t we just go from one science to another in an endless progression towards the truth? The answer for Kuhn is to be found in history and not in the philosophical image of science as a universal method.

The new picture we have of science is now as follows: first we have pre-science – normal science – crisis or revolution – new normal science – new crisis (Chalmers 1999, p.108). When science first begins to emerge we don’t have a collection of facts or theories that explain facts; rather, we have a competition between many theories (Chalmers gives the example of the state of optics before Newton). Gradually different scientists will be attracted to one explanation. What is important is that the reason for this attraction will not just be scientific, or will rarely be just scientific. It will be a combination of different elements, some of which will be psychological, sociological and even metaphysical. As more and more scientists come on board, what is in a state of chaos will coagulate into a paradigm. Only at that point will normal science be possible (the kind of science that Popper and the logical positivists describe). But even a paradigm, which makes normal science possible, is not made up merely of theories and observations. Like Newtonian mechanics, it is constructed from fundamental laws and theoretical assumptions, standard techniques and methods of investigation, general rules about exceptions and application to reality, and most importantly of all a kind of world view or metaphysics which unifies all of this together (in Newtonianism, that we exist in an infinite deterministic universe).

Rather than anomalies being antithetical to normal science, as Popper would have us believe, normal science can quite happily accept them as long as they don’t attack the fundamentals of the paradigm. Everyone can get happily to work devising their experiments and putting in their grants, and anyone who goes against the status quo can be banished to the outer darkness. The paradigm is reinforced by the institutions themselves. If you don’t follow the paradigm you won’t get the grant money, and in any case the education of young scientists makes sure that they follow the paradigm. This is clearly what Kuhn saw when he first looked into the history of science as a practising scientist: young scientists were taught an idealised image of science that had nothing at all to do with its actual history.

So why do paradigms fall? Why are revolutions inevitable? Because of the anomalies. Since no discourse can close the gap between itself and reality, there will always be the nagging doubt that something is not being explained by the paradigm. As more and more money and experiments are thrown at these anomalies, cracks begin to appear in the scientific establishment. Thus a normal science begins to take the form of the pre-science. Rather than scientists doing experiments, they start having ideas and hypotheses. Some might be dismissed as cranks and fools, but gradually they begin to attract other scientists. Again, Kuhn is clear that the reason for this cannot be scientific or logical, because there is nothing in one paradigm that would justify the leap to another, for there is no commensurability that would link them together, such that one might say that one is truer than the other. The reasons are practical. As more and more are attracted to this new science, gradually a new paradigm is born and the whole process repeats itself. We get a new normal science, where again people can happily devise their experiments, apply for grants and get promotion. Until, of course, the cracks start appearing again.

Although this appears to be an accurate representation of what scientists do, there is a fundamental problem with it. If we are to give up the image of science as progress towards a truth in which the distance between discourse and reality is progressively closed, in favour of a discontinuous series of closed paradigms, then does this make scientific truth relative? We can distinguish normal science from pseudo-science because of how paradigms work (the difference between astronomy and astrology), but that does not make science itself any truer. Can we say that Einstein, for example, is truer than Newton? We want to feel that this is the case, but Kuhn’s principle of incommensurability will not let us do so. The answer to this question, as we shall see when we read Kuhn’s The Structure of Scientific Revolutions in more detail, is that we might have to change what we mean by truth, rather than giving up truth altogether. It means that we have to think of truth as a practice or activity, rather than as a representation of a reality that stands outside of us, waiting for our discourse to catch up with it.

Works Cited

Chalmers, A.F., 1999. What is this Thing Called Science?, St. Lucia, Qld.: University of Queensland.

Sharrock, W.W. & Read, R.J., 2002. Kuhn: Philosopher of Scientific Revolution, Cambridge: Polity.


[1] He might have been the first American philosopher to take this idea seriously. In France, this was the dominant view of science (Sharrock & Read 2002, p.1).

[2] It is science (think for example of evolution) itself that should make us suspect such teleological arguments.

