Heidegger and The Philosophy of Science – Lecture 7

January 28, 2016

We have thought about science as being different from religion. Science has to do with facts, and religion with beliefs. Increasingly, as we have gone through the different views of what science might be, this simple opposition has become less and less believable. For a start, it is not at all clear that science has to do with facts, if we mean by that that facts are simply lying around for a scientist to construct a theory from. On the contrary, facts are theory dependent. What is taken to be a relevant fact is given by a scientific theory, and this theory cannot be justified by appeal to the facts alone, otherwise we would be lost in a circular argument. Is it possible, then, to define science by theories alone, without recourse to facts outside of them? Popper certainly attempts to do so through his principle of falsifiability, which is his starting point. What makes a theory scientific as opposed to non-scientific, and thus what distinguishes science from religion, is that it can be falsified, whereas non-scientific theories cannot. But when we examine falsifiability in detail, it is very difficult to show, in concrete terms, how theories are falsified. Rather than anomalies causing scientific theories to collapse, the theories seem quite happily to carry on regardless, and because scientific theories are so complex, it is difficult to discern which hypothesis has to be falsified in order for the theory as a whole to be so. In other words, the fact problem still rears its head, but now at the point of falsification rather than at the point of the construction of a theory. Because of these problems, philosophers of science like Kuhn will argue that we shouldn't be arguing about science as such, or the ideal nature of science, but investigating what scientists themselves do. What we find then is not a smooth progress of science from one theory to the next, getting ever nearer to the truth, but a discontinuous series of revolutions between what he called 'paradigms'.

Although we can speak of different paradigms, surely it is the same reality that is beneath them all? The question of reality is particularly pressing in science because the basis of modern scientific theories, since Galileo and Newton, is unobservable phenomena. If the science of the 16th and 17th centuries posited nature as made of tiny particles of matter in motion, of which all that we observed were the effects, this did not mean that anyone could see such corpuscles. How then did we know that what such a theory described was real? The whole of Descartes's philosophy was an attempt to answer this question, and his answer, with which not many philosophers after him were satisfied, was that it was God's justice that ensured that what our theories said was real was in fact what reality was, even though we could not see it. The whole debate between realists and anti-realists in the philosophy of science is whether we can commit to such a reality or not without God or any other transcendent guarantee (or indeed whether it matters at all whether it can be proved to be real).

At the end of the discussion of realism and anti-realism, I introduced the philosophy of Heidegger. Many will argue that he does not have a philosophy of science, but I don't think that is right at all. Indeed, one could say that the whole of his philosophy is a sustained debate with science (Glazebrook 2000). For Heidegger, science is a restricted, not a full, account of experience. We take science to be describing the way that things are, but for Heidegger it is only a certain way of approaching things, and not necessarily the truest. In Being and Time, he distinguishes between the present-at-hand and the ready-to-hand (Heidegger 1962). Science, which has its roots in a certain metaphysics, relates to things as present-at-hand, but this is not how we relate to the world that is nearest to us. Our fundamental relation to things is ready-to-hand. We use them. We open the door to enter the room, we enter the room and sit at the chair, we place the books on the table, we look at the screen on which a picture has been projected, or we look at the words written on the board, or down at the book in our hands, and so on. What we do not look at is little particles of matter, or atoms. Why, Heidegger asks, would we take this world not to be real, and the scientific world to be more real?

When we relate to things as ready-to-hand, as opposed to present-at-hand, then it is clear to us that these things relate to our world. The world is the context in which making use of things makes sense (there is the world of the classroom, and this world is part of a bigger world in which something like a classroom makes sense). This world is not a thing. It is not a container in which something is enclosed (like water in a glass, to use Heidegger's example). Rather, it names the cultural context or background in which something like sitting in classrooms and listening to lectures makes sense. Even the activity of science itself, with its abstract picture of things, is not possible without this world, since science is something that human beings do, and can only occur where this activity already has a meaning.

In section 3 of Being and Time, 'The Ontological Priority of the Question of Being', Heidegger speaks explicitly about science. He says that every science has its own area of things that it studies. Thus physics studies matter; chemistry, elements; biology, life; and so on. Yet for any of these sciences to function, they have to take for granted that the things they study actually exist. Thus, Heidegger says, they all presuppose an understanding of being that they do not question. The physicist accepts that matter exists, the chemist, elements, the biologist, life. If they did question the existence of these things, then they could not actually do science at all, because they would come to a stop at the threshold of the investigation and never get any further. If I don't accept that these things exist, then how could I do physics, chemistry or biology? What Heidegger here calls a 'regional ontology' is similar to what Kuhn calls a paradigm, and the 'ontical questioning of positive science' to normal science. It is only when a science goes into crisis that the ontology it presupposes comes into question. This is when, again in Kuhn's vocabulary, the very fundamental nature of the objects of a science becomes doubtful, and only at this point does science have to turn to philosophy for its answers.

What philosophy discovers is that science is a projection onto nature. This does not mean that nature does not exist for Heidegger (if human beings ceased to exist, there would still be planets, but there would not be Newton's laws of motion). What modern science projects onto nature is mathematics. Nature is only what can be described mathematically. From Galileo and Newton onwards, this is understood in terms of efficient causality rather than final causality. For Aristotle, nature is defined teleologically. Nature has a purpose, goal and direction, whereas in modern science it does not. This is why, for Heidegger, technology is the essence of modern science, because it means that, through its mathematical projection, nature is totally subsumed to human purposes. Because nature has no purpose or value in itself, its only value is its value for us. It becomes, to use Heidegger's phrase, a 'standing reserve'. The big difference between Kuhn and Heidegger is that, though both understand science historically, Heidegger does not think that the image of nature in Newton and Galileo is fundamentally different from that in quantum physics. Though they use a different mathematics, both view nature mathematically. The fundamental split, then, is between the final causality of Aristotle and the efficient causality of modern science that culminates in technology.

For Heidegger, the basis of the mathematical projection of science is the experiment. It is therefore a fundamental misunderstanding of science to think that it simply experiences things as they are and then comes up with a picture of the world (a picture which is meant to be what things really are). On the contrary, through the experiment, the scientist already interprets experience mathematically. It is the mathematical model that gives meaning to the experience, and not experience that gives meaning to the mathematical model. This again is the big difference between Aristotelian and modern science. For Aristotle, science is based on experience; for modern science it is not. Mathematics comes first, not experience, yet we still speak about science as though it were about experience, and as though the things that we directly experience around us were the diminished and restricted world, and not the world of science. As though we were the ones living in an abstract world, and the mathematical projection of science were the full-blooded one.

That meaning is the subject of science is what the history of science teaches us. We see that the world of Aristotle, Newton and Einstein is not one and the same world, but a series of ruptures, breaks and discontinuities. Although the reference of these theories is one and the same, the meaning of the reality they refer to is not. What mass means in Newton, therefore, is not the same as what it means in Einstein. To use Kuhn's word, these worlds are incommensurable, since there is no perfect translation between one and the other. You will only think that objectivity is threatened by this picture if you believe in a metaphysical reality that is beyond human experience but which at the same time we can know. Reality is not outside of us; it is something that we construct through our institutions and discourses. The difference between astrology and astronomy is not a matter of method, as Popper might have us believe, with one tested by facts and the other not, since when we investigate the history of science we see that a theory will ignore those facts that do not fit its paradigm. The difference is rather that astrology does not have the virtues or practice of objectivity. The problem with astrology is that it explains too much, not too little. Truth, if we might put it this way, is a practice, a way of being, rather than a mirror to a reality that stands outside of us, eternally the same. It is the creation of concepts in response to problems that are forever changing, and it is through problems that we grasp reality.

Rather than grand narratives, the study of the history of science concerns the details: what scientists say and do. For this reason we cannot impose an image of science on its own reality. What we discover is that reality is not identical through time but constructed from different aspects that are only relatively stable and which can always dissolve into a new regularity that might take elements from the previous paradigm but would transform their meaning by placing them in different relationships. It is not reality that explains how science changes, but the changes in science that explain reality, just as it is not the chair that defines sitting, but sitting the chair. The correct question is therefore not what reality is, but how we understand and interpret reality. What changed in the nature of scientific experimentation such that reality was perceived in a different way? What changes is not reality, but how we perceive and understand it, and what changes this perception is the practice of science itself, its discrete methods and discourses, which are only visible to us through historical investigation. The subject of such a history is what scientists do. We reject the idea of a hidden telos, as though all scientific activity were heading in the same direction, revealing a reality that had been there from the beginning but was simply unknown to us. Science is made up of the actions of scientists and nothing more. The meaning of reality does not belong to some intrinsic definition but to a practice that leads to a certain and definite objectivity over a period of time, but which can subsequently dissolve as a new objectivity emerges. Reality is only a correlate of a practice and only has a meaning as such in relation to it. We can therefore distinguish between the practice of science and non-science, but there is no absolute ahistorical meaning of science, and still less a reality that is eternal and unchanging. Science is not about reality per se, but problems.

What Heidegger calls 'projection' Feyerabend calls a 'belief' (Feyerabend 1993, p.10). We think that science is just an explanation of what common sense already knows. But the opposite is the case. Science, since Galileo, moves in another direction from common sense. It is by moving in the opposite direction to 'contemporary reason' that the new science develops new instruments and new experiments. If it had not done so, if it had stuck to the old rules and methods, it would not have developed such a new way of looking at and understanding reality. It is only subsequent to the emergence of the new beliefs that evidence can be found to support them. We tend to think the opposite: that the new beliefs emerged because the evidence demonstrated their truth. But the reverse is the case: it is the new beliefs that made the evidence visible in the first place. This is why we can say subsequently that 'Galileo was on the right track', because now there is enough evidence to support the theory; but if we had waited for the evidence beforehand, the theory would never have got off the ground. As Feyerabend continues:

Theories become clear and ‘reasonable’ only after incoherent parts of them have been used for a long time. Such unreasonable, nonsensical, unmethodical foreplay thus turns out to be an unavoidable precondition of clarity and of empirical success. (1993, 11).

Works Cited

Feyerabend, P., 1993. Against Method. Verso, London; New York.

Glazebrook, T., 2000. Heidegger's Philosophy of Science. Fordham University Press, New York.

Heidegger, M., 1962. Being and Time. Wiley-Blackwell, Oxford.


Realism and Anti-Realism in Philosophy of Science – Lecture 6

January 24, 2016

In a previous lecture we looked at Kuhn's idea of the history of science as broken by different paradigms that are incommensurable. Aristotelianism, Newtonianism and Einsteinism mark revolutions in the history of science rather than a smooth flow of one epoch into another that will some day reach an ultimate Truth, when we can all stop doing science because what our theories say and what is are exactly the same and there are no exceptions. What Kuhn reminds us is that when we think about what science is, rather than taking the philosopher of science's word for it, we should examine what scientists do. We will find that the philosophical version does not look much like the real history of science; rather, it is an idealisation in both senses of the word: an abstraction and a kind of wish fulfilment. Kuhn is not sceptical of science as such, but of the philosophy of science. His book, The Structure of Scientific Revolutions, sounds the death knell of a particular kind of philosophical history of science, so that it can be replaced by a proper history of science, whose object is what scientists actually do, rather than what philosophers think they might do. In other words, the new object of this history of science is 'normal science', in all its messiness and vagueness, rather than an idealised science that has never existed except in the minds of philosophers like Ayer or Popper.

At this point, however, we are going to make a little detour back to philosophy, and that is to the question which should have been bugging us from the very beginning: what exactly is science about, rather than what is its history? Early on we characterised the difference between religion and science as the difference between belief and facts. We said that science is about reality, that it makes true descriptions of real things that happen in the world. In a word, it is objective. On the contrary, religion is subjective. It does not give us a true picture of the world, but offers us a moral compass through which we can live our lives. To confuse religion with science is to undermine the importance of religion rather than to give it more intellectual support. There is no conflict between science and religion, because they are completely different discourses. One tells you what something is, the other how you ought to live your life.[1]

But what do we mean when we say that science is about reality? Aren't we being a little simplistic when we do that? What is reality after all? Everyone knows the old paradox of whether a tree that falls down in a forest makes a sound or not if no one is there to hear it. Is reality what we perceive, or is it more than that? I would say that it would be absurd to claim that there would not be trees, stones or stars if there were no human beings, as though, were human beings to vanish, the universe would vanish with them. The universe does not have any meaning, however, except for the fact that it means something for some being or other in the universe. A stone is not a stone for a stone. It is only a stone for human beings who understand what it is to be a stone. We'll come back to this at the end of the lecture.

Chalmers, Okasha and Ladyman (perhaps because they all belong to what can be loosely called the analytic tradition) seem very reluctant to address these questions head on (as though they were too philosophical and could be avoided; I would say that it is their hidden philosophical assumptions which allow them to avoid these questions).[2] For them, on the contrary, the important distinction is between realism and anti-realism, rather than whether reality exists out there as such and what we might mean by reality as a whole. Chalmers simply dismisses the idea that reality is formed by language (what he calls global anti-realism) by means of a Tarskian theory of truth, which begs the question, because such a theory already has a commitment to a certain view of language, and a certain view of reality, which remains unquestioned by Chalmers himself. Investigating this presupposition, however, would take us too far from the subject of this lecture.

What then are anti-realism and realism in science? First of all it is important to note that both positions accept the reality of the world, so it is important not to confuse either with a thoroughgoing scepticism. The difference between them has to do with the status of scientific theories, on the one hand, and observable phenomena on the other. A strong realist would argue that both observable phenomena and theories are true descriptions of the world out there, whereas a strong anti-realist would say that only observable phenomena are true, and theories are neither true nor false. All these authors, as far as I can see, occupy a position between these two extremes.

The common sense view, I suppose, would take it that both theories and observable phenomena are true, so we are going to approach the question from this point of view. None of us would think that observable phenomena are not real, that when I see a donkey there isn't a donkey out there (again, I think both Okasha and Chalmers skip over this supposed reality far too quickly, but let us grant them that truth for now). What isn't so certain is that theories really point to something out there. This is because much of the basis of scientific theories points to phenomena that we cannot observe. If we cannot see something, then how can we say that it is part of the world? From what vantage point would we say that it is real? Of course, as Okasha points out, many sciences do have observable phenomena as their basis, such as palaeontology, whose objects are fossils, but modern physics does not (Okasha, 2002: 59). We cannot literally see inside the atom. We only have theoretical pictures of what atoms look like, and we do not know whether at that level the universe really looks like that at all.

The anti-realist is not saying that there is no difference between science and someone who thinks that the earth is balanced on the back of a turtle. Rather, theories only give us the structures or scaffolding within which we can observe phenomena through experimentation, but it is only these literally observable phenomena that we can take to be true. We cannot prove whether the theory itself is real or not, because there is nothing there to see which we could demonstrate as real or not. The history of science itself seems to bear this out, because there have been false theories that have actually produced true observable phenomena, so there does not seem to be an analogy between the truth of a theory and the truth of observable phenomena. The example that Chalmers gives is the history of optics, which is littered with what we now understand to be false theories of light, and yet which provided correct observable phenomena. Thus Newton believed that light was made up of particles, then Fresnel believed that light was a wave in a medium called the ether, then Maxwell believed that light waves were fluctuating electric and magnetic fields in the ether, then in the 20th century we got rid of the ether and the waves became entities in their own right, and finally the wave theory of light was supplemented by the particle theory of photons.

It seems to go against common sense, however, to say that theories are just fictions on which we hang our experimental results. When we look at the history of atomic theory it does appear that we are getting a progressive understanding of the structure of the atom, and it would seem entirely bizarre that a theory could predict what we ought to see while at the same time being entirely false. One way of getting around this is by arguing that the anti-realist is making a false distinction between what is observable and what is not, since though we cannot see inside the atom, we can detect the existence of atoms by the ionisation they cause when they pass through a cloud chamber. The strict anti-realist, however, would say that all we know to be real is the trails themselves, and we cannot know whether the atoms are real or not, just as we should not confuse the trail that a plane leaves in the sky with the plane itself. In other words, we have to make a distinction between direct observation and detection.

The fundamental issue here is whether we can make a complete separation between theories, on the one side, and facts on the other. This is the real issue, rather than whether facts are observable and theories not. In fact it is the anti-realist and not the realist who is committed to the separation. Both Okasha and Chalmers, though in different ways, criticise this separation. Chalmers returns to the question of whether the history of science really does prove that theories once taken as true are shown to be false by the next one, and so on ad infinitum, so that we can never know whether our theories give us an accurate view of the world; he argues that each new theory takes up some aspect of the previous one, which gives us a more and more accurate picture of the phenomena we are attempting to understand. Thus a true theory (unlike the turtle theory) captures some aspect of the truth of the world, if only a partial one, which is then improved by the subsequent one (does this conflict with the Kuhnian view of science, since it implies a cumulative image of science?). Okasha, on the other hand, claims that the problems which the anti-realist says undermine the possibility of taking theories to be true could also rebound against what we take to be observable phenomena, and thus would destroy the basis of all science altogether, since we could only claim to know what we can see now, in this moment, and not past events, which again are known only by detection rather than direct observation (this would mean that the anti-realist argument works like Hume's problem of induction).

As I said at the beginning, I find both Okasha's and Chalmers's discussions of realism unsatisfactory, and indeed both of their chapters seem to end without any kind of resolution, as though they had been exhausted by the discussion. What I think is left unthought in their views is the assumption that the only way we could access reality is through science, and thus that if science cannot, then we cannot access reality at all. To me the discussion of observable and unobservable phenomena is a red herring. Nothing has meaning unless it has meaning for us, and that is true of both observable and unobservable phenomena; the real issue is whether our reality is first of all something that we observe. Here I would turn to the philosophy of Heidegger, who would argue that it is a prejudice of a very old metaphysics that our first relation to the world is one of perception, what he calls the 'present-at-hand'. What is true for both the realist and the anti-realist is that they take reality to mean 'present-at-hand'. It is just that one thinks scientific theories are speaking about something present-at-hand and the other does not. The world for Heidegger, on the contrary, is not something present-at-hand, but ready-to-hand. The world is first of all something that we orientate ourselves in, rather than perceive.[3] This context can never be investigated as an object, because it is what makes objects possible. Even science itself must have its origin in this cultural context or background. It is only because science as an activity means something to us that we can approach anything in the world as a scientific object, and not the other way around.

As Heidegger argues in Being and Time, Newton's laws are only true because we exist. If we were no longer to exist, and the world in which these laws made sense were no longer to exist, then it would be absurd to still say that these laws were true. This does not mean that things do not exist separately from us, nor that truth is relative. Newton's laws really say something about things, because these things only are, in the sense of 'true', through our existence. This truth would only be relative if we really thought that there was a truth of things beyond our existence that we did not know. Things only are because they are there for us, but this in no way means that just any assertion is possible. That would be to confuse assertion and the condition of assertion. The truth of reality is dependent on our existence, but this does not mean that you or I can say anything we like about this existence. For you or I as individuals are just as much a part of this existence as anything else is. To be a scientist is already to accept what this existence means (what the world of science means, of which Newton's laws are an example), and to refuse this is no longer to be a scientist.

Works Cited

Van Fraassen, B. (2006). Weyl’s Paradox: The Distance between Structure and Perspective. In A. Berg-Hildebrand, & C. Shum (Eds.), Bas C. Van Fraassen: The Fortunes of Empiricism (pp. 13-34). Frankfurt: Ontos Verlag.


[1] It is a wholly other topic whether religion is the only discourse that can do this, but that does not undermine our distinction between it and science.

[2] Okasha, Samir, ‘Realism and Anti-Realism’ in Philosophy of Science: A Very Short Introduction, Oxford: OUP, 2002, 58-76. A. F. Chalmers, ‘Realism and Anti-Realism’ in What is this Thing Called Science?, third edition, Maidenhead: Open University Press, 1999, 226-46. Ladyman is more willing to discuss the philosophical issues in depth, but he does so from an analytic perspective. What is lacking in all these treatments is what I would call ‘ontological depth’, and I am going to turn to this in the next lecture which will look at some of the ideas of Heidegger.

[3] I think that this is what Van Fraassen is getting at when he says that a theory or model of reality is only useful when we locate ourselves within it, though I don't think he is referring to Heidegger's distinction here (Van Fraassen, 2006, p. 31).


Kuhn – Lecture 5

November 8, 2015

Science does not begin with facts and then construct theories out of them. Nor does science begin with theories and then just find facts that would confirm them. Both these conceptions conceive of science as though it were a discourse that was completely context-free. In the first case, facts are simply available, as though they were waiting for interpretation of a specific kind, and in the second case, theories are simply open to facts, as though there were no inertia or hindrance to the smooth progress of science from one theory to the next, each equally open to the possibility of falsification.

One of the first philosophers of science in the Anglo-American tradition to take the idea of a context or background to scientific activity seriously was Thomas Kuhn.[1] Loosely characterised, this approach might be called 'historical'. What does it mean to treat science as though it were part of history rather than outside of it? It means first of all to take scientists seriously. It is to treat what they do the same way that we would analyse the thoughts and actions of French peasants of the 13th century or a military general in the 20th: first of all to record scientific achievements correctly (who thought of what at what time), and secondly to examine exactly how scientists come up with their theories in relation to the material they are investigating. What it certainly is not is the importation of philosophical theories from the outside (like verification or falsification) followed by squeezing scientific activity to see whether it fits these ideal models.

However much the logical positivists and Popper might differ, they both have the same idealised view of science: there is a sharp difference between theory and observation; knowledge is cumulative, tending towards a true understanding of the universe; the language of science is precise and science is unified; science is either inductive or deductive; the key question of the philosophy of science is legitimacy and validity, rather than the contingency of discovery. Against all these suppositions Kuhn puts forward exactly the opposite: there is no sharp difference between observation and theory; knowledge is discontinuous; science is neither a tight deductive structure nor an inductive reading of facts; scientific concepts are not precise; nor is science unified; context is important, and science is historical and temporal.

At the heart of the idealised picture of science is scientific progress. This is the view that science is leading to ever increasing knowledge about the universe and that finally one day we will have a theory of everything, and, I suppose, science can come to an end, because there will be no more questions that need to be answered. So first of all there are pre-scientific theories of the universe that we find in religious and mythical texts (like Genesis), then we get the first science, Aristotelianism (though this is really a mixture of science and occult explanations), then Newtonism (which is the first science proper), and then finally, in our times, Einsteinian science, which is a response to the crisis that befell Newtonism. One imagines that sometime in the future, though one can never tell, there will be a fourth science that will replace Einstein, but only because it contains more truth and is closer to the universe as it really is than all the other theories that we have had. Such a view of the history of science we might call 'convergence', since it views the series of scientific discoveries as converging on the true understanding of reality.

There are two problems with this image of science. One is temporal and the other practical. First of all, it has a conception of time where the past is merely a stepping stone to the present and has no meaning in itself. For how can we measure the progress of science in this sense unless we imagine an end towards which it is moving, an end that is supposed to be an advance on the past?[2] But how can we know that this advance is real unless we can stand outside of time and measure it? Is it not really the case that the past is not the stepping stone to the future, but that we judge the past from the vantage point of the present, and in looking back, project a false teleology onto the past? In terms of the past itself, there were numerous possibilities, and the present that we now occupy did not have to occur. Equally, the present that we now stand in has infinite possibilities, so we cannot know what the future will be.

In terms of the practice of science, we also know that this temporal picture of progress is false. This is what Kuhn discovered when he did his own historical research. Rather than the history of science demonstrating that each scientific period progressed into the next one, moving to ever greater levels of truth and closing the distance between discourse and reality, we find that it is discontinuous and non-cumulative, and that there is no reality out there which we could know independently and through which we could measure the relative truths of each discourse, because reality itself is a creation of discourse and not its external validation.

What does it mean to say that the history of science is discontinuous rather than continuous, non-cumulative rather than cumulative? Let's go back to the image of progress where science moves smoothly from Aristotle, to Newton, to Einstein. What is left out in this description is the gaps or spaces between each scientific theory (or what Kuhn calls a paradigm, because it is more than just a theory), and it can leave these gaps out because of the fantasy of some ultimate truth where reality and discourse are the same. As soon as we leave this fantasy behind, and realise that it too is a creation of a discourse (in this case metaphysics), then we can see that there is no smooth transition from one to the other. Rather, they are separate or incommensurable. They belong to different worlds.

Again this is visible when we actually study the history of science, rather than project our own view of progress upon it. What we get, instead of a single continuous line, is a line of breaks: Aristotle, Newton, Einstein. What then causes these breaks? Why don't we just go from one science to another in an endless progression towards the truth? The answer for Kuhn is to be found in history and not in the philosophical image of science as a universal method.

The new picture we have of science is now as follows: first we have pre-science – normal science – crisis or revolution – new normal science – new crisis (Chalmers 1999, p.108). When a science first begins to emerge, we don't have a collection of facts or theories that explain facts; rather, we have a competition between many theories (Chalmers gives the example of the state of optics before Newton). Gradually different scientists will be attracted to one explanation. What is important is that the reason for this attraction will not just be scientific, or rarely just scientific. It will be a combination of different elements, some of which will be psychological, sociological and even metaphysical. As more and more scientists come on board, what is in a state of chaos will coagulate into a paradigm. Only at that point will normal science be possible (the kind of science that Popper and the logical positivists describe). But even a paradigm, which makes normal science possible, is not made up merely of theories and observations. Like Newtonian mechanics, it is constructed from fundamental laws and theoretical assumptions, standard techniques and methods of investigation, general rules about exceptions and application to reality, and, most importantly of all, a kind of world view or metaphysics which unifies all of this (in Newtonism, that we exist in an infinite deterministic universe).

Rather than anomalies being antithetical to normal science, as Popper would have us believe, normal science can quite happily accept them as long as they don't attack the fundamentals of the paradigm. Everyone can get happily to work devising their experiments and putting in their grant applications, and anyone who goes against the status quo can be banished to the outer darkness. The paradigm is reinforced by the institutions themselves. If you don't follow the paradigm you won't get the grant money, and in any case the education of young scientists makes sure that they follow the paradigm. This is clearly what Kuhn saw when he first looked into the history of science as a practising scientist: young scientists were taught an idealised image of science that had nothing at all to do with its history.

So why do paradigms fall? Why are revolutions inevitable? This is because of the anomalies. Because no discourse can close the gap between itself and reality, there will always be the nagging doubt that something is not being explained by the paradigm. As more and more money and experiments are thrown at these anomalies, cracks begin to appear in the scientific establishment. Thus normal science begins to take the form of pre-science. Rather than scientists doing experiments, they start having ideas and hypotheses. Some might be said to be cranks and fools, but gradually they begin to attract other scientists. Again, Kuhn is clear that the reason for this cannot be scientific or logical, because there is nothing in one paradigm that would justify the leap to another, for there is no commensurability that would link them together, such that one might say that one is truer than the other. The reasons are practical. As more and more are attracted to this new science, gradually a new paradigm is born and the whole process repeats itself. We get a new normal science, where again people can happily devise their experiments, apply for grants and get promotion. Until, of course, the cracks start appearing again.

Although this appears to be an accurate representation of what scientists do, there is a fundamental problem with it. If we are to give up the image of science as progress towards a truth in which the distance between discourse and reality is progressively closed, for a discontinuous series of closed paradigms, then does this make scientific truth relative? We can distinguish normal science from pseudo-science because of how paradigms work (the difference between astronomy and astrology), but that does not make science itself any truer. Can we say that Einstein, for example, is truer than Newton? We want to feel that this is the case, but Kuhn's principle of incommensurability will not let us do so. The answer to this question, as we shall see when we read Kuhn's The Structure of Scientific Revolutions in more detail, is that we might have to change what we mean by truth, rather than giving up truth altogether. It means that we have to think of truth as a practice or activity, rather than as a representation of a reality that stands outside of us waiting for our discourse to catch up with it.

Works Cited

Chalmers, A.F., 1999. What is this Thing Called Science?, St. Lucia, Qld.: University of Queensland.

Sharrock, W.W. & Read, R.J., 2002. Kuhn : Philosopher of Scientific Revolution, Cambridge: Polity.


[1] He might have been the first American philosopher to take this idea seriously. In France, this was the dominant view of science (Sharrock & Read 2002, p.1).

[2] It is science (think for example of evolution) that should make us suspect such teleological arguments.


Popper and Falsification – Lecture 4

October 28, 2015

What we want is some criterion which will allow us to distinguish science from any other discourse. In other words, what makes science science, as opposed to religion? What is specific to the method of science? Our simplest response to this question is that science deals with facts that are objective (out there in some way) and that religion has to do with belief and is subjective. We might want to say, then, that science is true, and religion is not. The more we looked at this simple definition, however, the less certain and clear it seemed. For the idea that science is made up of many observations of facts that are then converted into theories breaks down on the problem of induction, which, in its most succinct form, is the impossibility of leaping from a singular judgement to a universal one. No amount of logical finessing will get you from a particular to a universal. This would seem to imply that science is no more objective than religion, and that a theory is as much a belief as any faith.[1]

Moreover, it was also clear that the inductionist picture of science was not accurate at all, since facts are not just littered throughout the world such that we pick them up and notice common characteristics from which we then construct some universal law. On the contrary, we already come to facts with a pre-existing theory, which determines which facts we take as relevant or not (or even which facts we can see). As Ladyman explains, Newton did not find the law of gravity in Kepler's data; he already had to have it in order to interpret the data (Ladyman 2002, pp.55–6).

This reversal of the relation between theory and facts, that theory comes first and facts second, is the basis of the next philosophy of science that we shall look at, Popper's theory of falsification, which indeed arose out of the insurmountable problems of 'inductionism'. His argument is that we should give up induction as the basis of science, but that such a rejection need not lead to irrationalism. Rather, we substitute deduction for induction. But did we not argue already in the first lecture that deduction could not be the basis of science, since deduction is merely tautological? Deductive logic tells us nothing new about the world, but only analyses what we already know, whereas we would say that science actually tells us something about nature that we didn't know before.

Deduction fails as a basis of science only if we try to move from the singular to the universal; if we go from the universal back to the singular, then deduction does work. Indeed, this move from the universal back to the singular is exactly how, Popper argues, science operates. We do not start with facts and then make laws; rather, we start with laws and then attempt to test them with facts. The logical point is that we can't go from observations to theories, even if the observations themselves are true, but it is possible the other way around. We can go from theories back to observational statements to show that a theory is false. Thus, to use Chalmers's example, if someone were to see a white raven outside the lecture room today, then this would prove deductively that the statement 'All ravens are black' is false. Such deductive arguments are known as modus tollens, which take the form: if P, then Q; not-Q; therefore not-P (Chalmers 1999, p.61).
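Set out schematically (a restatement of the form Chalmers gives, with T standing for a theory and O for an observation statement it entails; the notation is mine, not the lecture's):

\[
(T \rightarrow O), \quad \neg O \ \vdash\ \neg T
\]

From 'All ravens are black' (T) and 'this raven is not black' (\(\neg O\)) we can validly infer \(\neg T\); but no number of true instances of O will ever yield T. This is precisely the asymmetry between verification and falsification that Popper exploits.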

When we look at the history of science, this seems to be exactly what happens. Take, for example, Eddington's test of Einstein's theory that gravity bends light. If the theory was correct, then a star whose light passed close to the sun should appear displaced from its normal position as seen by an observer. Normally the light from the sun would mean that these stars were not visible to us, but they would be if the light of the sun were blocked. Eddington managed to measure just such a displacement during the solar eclipse of 1919. For Popper, the point of this story is that the observation could have gone the other way. In other words, Einstein's theory could have been falsified if there had been no displacement.
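For orientation, the standard figures for this episode (they are not given in the lecture itself): general relativity predicts that light grazing the edge of the sun is deflected by

\[
\theta \;=\; \frac{4GM_{\odot}}{c^{2}R_{\odot}} \;\approx\; 1.75'' ,
\]

roughly twice the deflection of about 0.87 arcseconds that a Newtonian corpuscular treatment of light would give, so the 1919 measurement could in principle discriminate between the two theories, and so falsify one of them.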

The real difference between science and religion, or any other discourse, is not the theories or hypotheses put forward, but how they are tested. Popper is adamant that science is as creative as any other human discourse and that the origin of this creativity is outside any logical explanation. That someone comes up with such an idea at such a time cannot be rationally explained. Thus we don't know how Galileo or Einstein came up with their ideas, or why not someone else, or at a different time and place; but what we do know is that what makes these creations scientific, as opposed to anything else, is that they can be falsified (this is the difference between the context of discovery and the context of justification). In the opposite case, it does not seem possible to falsify a religion logically. I can always find a reason to believe something. Think, for example, of the classic problem of evil in theology. How do I reconcile the existence of God with evil in the world? It is perfectly possible to find such a reason, as Leibniz did when he argued that this is the 'best of all possible worlds', and that it is just our lack of human understanding that prevents us from seeing it so.

Here we might need to know a little of the story of Popper's life. When he was young he was a communist, and of course Marxism was treated as a science. He says that one day he went on a march with his friends and they were attacked by the police, and some of them were killed. He was so shaken by this incident that he had to speak about it to his political leaders. They told him that these deaths were necessary for the political emancipation of the workers, as was explained by scientific Marxism. But what then would falsify Marxism, for there did not seem to be any instance, including the deaths of his friends, that could not be explained by it?[2]

This is precisely the difference between a science and a pseudo-science (religion is only a pseudo-science when it takes itself to be answering scientific questions; otherwise it is perfectly meaningful for Popper): a pseudo-science has the answer to everything and can never not be true, whereas a science does not have the answer to everything and can always be false. It is this that demarcates, to use Popper's word, empirical science from anything else, and it is a question of method rather than of logical form (by which he means the positivist obsession with the correlation of statements with aspects of reality). Metaphysics and religion are only pseudo-sciences when they pretend to be sciences. If they do not, then there is nothing intrinsically wrong with them. They are certainly not meaningless, which is just a derogatory word rather than one with any useful philosophical sense.

If what makes a scientific theory scientific is falsification, what exactly makes a falsification? Can any falsification be scientific? Such a broad generalisation does not seem to be correct, because just falsifying something would not make the result a scientific theory. I could 'falsify' physics by quoting Genesis, but no one would think I was being scientific. The answer here is intersubjective testability. One cannot conceive of how it would be possible to set up an experiment that would test my falsification of physics which claimed that God had created the universe in the way described in Genesis. One can imagine, however, how it might be possible to test the falsification of Newtonian science through the prediction made by Einstein, which is exactly what the example from Eddington shows, and it is perfectly possible that other scientists could conceive of such an experiment, whether in principle or in practice.[3]

Could a theory always secure itself by simply adding an ad hoc modification every time a falsification was produced? Thus, to use Chalmers's example, we could take the generalisation that all bread is nutritious to be falsified by the death of all the members of a French village who ate bread. We could then qualify our theory by saying that all bread is nutritious except when it is eaten by these members of the French village, and we could do this every time any falsification was discovered. Such ad hoc modification would completely destroy any progress in scientific discovery. How then can we distinguish between an authentic and an inauthentic ad hoc modification (Chalmers 1999, p.75)? In this example, the modification cannot be falsified, so it does not tell us anything new about the world. It in fact tells us less than the original theory that all bread nourishes. So an authentic modification must be one that is also falsifiable. If we had said instead that all bread nourishes except bread contaminated by a certain fungus called Claviceps purpurea, then this would be an authentic ad hoc modification, since it could be tested and falsified, and thus does tell us something new about the world.

This distinction between authentic and inauthentic ad hoc modifications of scientific theories, however, tells us that we should not over-estimate falsifications of theories. When we look at the history of science we can see that ad hoc modifications can confirm rather than deny a theory. Take the case of the discovery of Neptune. Irregularities in the orbit of Uranus implied that there must be another planet that had not been observed. Rather than reject Newton's theory, scientists argued that a planet must exist that would explain them. Thus, the fact that Neptune was found in 1846 confirmed Newton's theory rather than falsified it. Rather than seeing science as just a series of falsifications which lead from one theory to the next, Aristotelianism to Newtonism to Einstein, we should see it as the confirmation of bold conjectures and the falsification of cautious ones. For what difference does it make to science if one falsifies a conjecture such as that the universe is made of porridge, or confirms a cautious one? But how then do we determine what makes a conjecture bold? The only answer to this must be the background theories themselves, for only in relation to them could we know what would be bold or timid. The background knowledge is therefore the cautious conjecture (what we take to be correct), and the bold conjecture flies in the face of what everyone thinks is the case. We can see, then, what the real fundamental difference between the falsificationist and the inductionist is. The first takes the history of science seriously, and the second has no conception of the history of science at all. There is no background knowledge. Rather, facts are accumulated as though there were no context at all and science existed in an eternal present.

Is falsification immune to criticism then? The answer, unfortunately, must be no. The real problem is still the relation between the theory and the observation. All we can say deductively is that if T entails O, then the falsity of T follows if O is not given; but this tells us nothing about the standard of the evidence itself. What if the evidence is incorrect? Perhaps the person who said that the raven was white had no idea what white was. Perhaps the photograph of the white raven was created in Photoshop, and no such evidence exists. Popper does not have a better story about the correctness of evidence than the positivist.

Moreover, when we actually look at science, it does not take the simple form of 'All swans are white'. Rather, sciences are made up of complex collections of universal statements which are interrelated with one another. Now if a prediction fails, this tells us that one of the premises might be wrong, but not which one, nor whether the problem lies with our own observation. It might not be the theory that is at fault, but the 'test situation' itself, because we cannot isolate the premise which allows us to falsify the theory (this is known as the Duhem/Quine thesis). So, to use Ladyman's example, if we were to try and predict the path of a comet, the law of gravity alone would not be sufficient, so if the prediction were incorrect we would not know whether it was the theory of gravity that was being falsified or something else (Ladyman 2002, pp.77–8).
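Put schematically (my notation, not Ladyman's): the prediction follows not from the theory T alone but from T together with auxiliary assumptions A1, ..., An (initial conditions, instruments, the test situation), so that

\[
(T \wedge A_{1} \wedge \dots \wedge A_{n}) \rightarrow O, \quad \neg O \ \vdash\ \neg (T \wedge A_{1} \wedge \dots \wedge A_{n}) .
\]

The negation falls on the conjunction as a whole; logic alone cannot tell us whether to give up T or one of the auxiliary assumptions.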

Even if such an isolation were possible, falsification does not seem to capture what science and scientists actually do, for when we look at the history of science we do not find one great conjecture following another, but scientists sticking to their theories despite the fact that they can be falsified, or adopting new hypotheses even though all the known evidence at the time should have killed them off at birth. This is what we find when we look at the detail of the eventual transition from the Aristotelian to the Copernican view of the world as Feyerabend and Kuhn describe it. It certainly was not the simple falsification of the one by the other. Science works, to some extent, because scientists are dogmatic and not open to falsification. If that is the case, how is it possible to differentiate, or demarcate, science from any other dogma? Will we not have to use different criteria?

Works Cited

Chalmers, A.F., 1999. What is this Thing Called Science?, St. Lucia, Qld.: University of Queensland.

Ladyman, J., 2002. Understanding Philosophy of Science, London; New York: Routledge.

Popper, K.R., 2002. Unended Quest, London; New York: Routledge.


[1] When we look at science as a method this is a problem. We might ask, however, if we think of science as an activity, whether it is such a problem.

[2] The source of this story can be found in Popper’s autobiography (Popper 2002, pp.30–8).

[3] Does this open Popper to a more pragmatic account of science than an epistemological one? For if testability is inter-subjective, how are we to describe it? Popper appears to want to separate questions of method from questions of practice, but later criticisms will in turn question this distinction by asking whether it is really the case, when we look at the history of science, that scientists are committed to the principle of falsifiability. This will be part of Kuhn's critique of Popper.


Philosophy as a Way of Life

October 21, 2015

Nowadays we tend to think of philosophy as an academic discipline that you study at university, and of being a philosopher as being a professor of philosophy. But that is not always how it has been, according to the French philosopher and historian of ancient philosophy Pierre Hadot (Hadot 1995, pp.81–125). Thus, in ancient philosophy it was perfectly possible to be a philosopher without having written anything, for what mattered was not the discourse of philosophy in itself (being knowledgeable about philosophical theories), but living philosophically.

Living philosophy, Hadot tells us, was a spiritual exercise, and he is very clear about why it has to be called this, even though to our ears it might sound overtly religious or, particularly because of Ignatius Loyola, Christian.[1] Spiritual, because it was more than a merely moral or intellectual exercise, and consisted of a total transformation of one's existence. Hadot divides these spiritual exercises into four distinctive disciplines, which we will describe in turn:

1. Learning to live

2. Learning to dialogue

3. Learning to die

4. Learning to read

Learning to Live

If the aim of philosophy is to teach one how to live one's life better, what is it that prevents us from doing so? The answer for ancient philosophy is the passions. It is because we cannot control our passions that we end up being miserable and unhappy. The art of living well, therefore, is measured by the ability to control one's passions, and this is what philosophy can teach you. One of the schools of philosophy, the Stoics, argued that there were two origins of human unhappiness: we seek satisfaction in possessions that we cannot have or can lose, and we try to avoid misfortunes that are inevitable. What philosophy teaches us is that the only things that truly lie in our power are moral goods. The rest we should accept with indifference. I cannot control what happens to me, but I can determine my attitude to it. It is through the spiritual exercises of philosophy that we can free ourselves from our passions and view any misfortune that happens to us with equanimity. The most important of these exercises in Stoicism is 'attention' (prosoche). It is only through constant self-vigilance that I can learn how to control my passions. The fundamental rule of life is to be able to determine what depends on me and what does not, and I can only do that through permanent attention to myself and to the outside world. One of the most important aspects of this self-vigilance is attention to the present moment. Much human unhappiness is caused either by being weighed down by the past or by hoping for too much from the future. It is better to live in the present moment and accept reality as it is: the simple joy of existing, as the other major school of ancient philosophy, Epicureanism, calls it.

The intellectual exercises of philosophy, reading and writing, listening and talking to others, were never simply for the sake of gaining more knowledge, but for applying this knowledge to how one lives one's own life. Thus physics, for example, was never just about learning the structure of the universe, but also about demonstrating the true scale of one's own petty human worries. In an infinite universe, how much do my own fears and desires matter? Nature is indifferent to my unhappiness, and only my own freedom should concern me, which is the freedom to be who I am.

Learning to Dialogue

Intellectual and spiritual activity is never a solitary affair. This is why the ancient philosophical schools were always communal in form. I learn to think for myself by thinking with others. It is not so much what is said that is important but that one speaks, because it is only through interacting with others that I can gain any self-knowledge. As Hadot writes:

The intimate connection between dialogue with others and dialogue with oneself is profoundly significant. Only he who is capable of a genuine encounter with the other is capable of an authentic encounter with himself, and the converse is equally true. (Hadot 1995, p.91)

What I learn is that philosophy is a journey and not an end. Wisdom is always something at which I can only ever aim and which I can never reach. Such a relation of authentic speech with others is always more important than writing and appearing to be knowledgeable. Again, the aim of philosophy is self-transformation and not knowledge, if knowledge means here theory or discourse.

Learning to Die

Learning to die is not a morbid obsession with death. Quite the opposite: it is to learn not to fear death. For the most important aspect of human life is that it transcends death. Socrates, the most important philosopher for both the Stoics and the Epicureans, was willing to die for his beliefs, because he realised that what was most important about him was not his body, but his ideas, and these would live on despite him. Far more important than one's individual life is truth itself. To learn to die, therefore, is not to be obsessed about death in a morbid way, but to aim for a higher existence, to realise that thought is more important than the passions of the body. It is to transcend the individual existence of the sensible body (which will perish as part of the natural course of things) for the sake of the universality and objectivity of thought. It is in thought that we find our true freedom, whereas our body, through which our passions affect us, is a kind of tyranny and prison. The fact of death highlights the insignificance of the affairs which torment and worry us. Our deaths could arrive at any time, so we shouldn't become too attached to our possessions nor try to find meaning in what is inauthentic. To think of one's death in one's life is to realise what is and is not important. It is the very possibility of an authentic life.

Learning to Read

To read, to gain knowledge, is not an end in itself but is for the sake of self-formation, to understand oneself.[2] This means ridding ourselves of the inessential to find what is essential beneath, and what is essential is the life of reason, for this is what expresses the true essence of the human being. Only in the practice of thought can I truly be free; the rest is the slavery of the passive emotions. The aim of all spiritual exercises is therefore the same: to return to the true self so as to liberate yourself from the passions that control you from the outside. For the Cynics, the third great school of ancient philosophy, this meant breaking with all social conventions and morality, since society's rules themselves are only the result of people's fears and desires and not a true reflection of human virtue.

Even the written masterpieces of philosophy that we still read today are not important in themselves. One reads and writes philosophy not so that one can be clever about it, but because the practice of reading and writing is itself directed towards self-mastery and control. Thus what is important first of all is teaching (learning how to dialogue), and writing only has a function within this practice. Such, then, is the origin of our own confusion. For us, philosophy is about systems, discourses and books. So when we go back to read ancient philosophy, we are troubled by the absence of systematic thought. But this is because we have failed to understand the context and the reasons for this writing. It was never for the sake of philosophical discourse itself, but for the practice of self-mastery and freedom.

Why then have we ended up with such a different conception of philosophy, as an academic discipline? Hadot's answer is that with the rise of Christianity as the sole religion of the state there was no reason to have competing philosophical schools all contesting their own interpretation of truth, and so they were closed (by the emperor Justinian in 529 AD). More important than this mere historical event, however, is the relation between theology and philosophy in the medieval university. If theology is the source of truth about how to live one's life, then philosophy can only have a secondary function. Its purpose was to rationalise the dogmas of religion, but it was religion itself, and not philosophy, that was the guide to life. In the modern age, however, with the rise of secularism and the end of the domination of theology, philosophy as a way of life can emerge once more, and there is no doubt that in modern philosophers such as Kierkegaard, Nietzsche and Heidegger (and even Foucault in more recent times) philosophy again has a direct bearing on how one lives one's life, rather than being an academic discourse. Of course one might wonder, if this is the authentic voice of philosophy, what academic philosophy in universities is meant to be and whether it can truly take up its spiritual vocation.

Works Cited

Hadot, P., 1995. Philosophy as a Way of Life: Spiritual Exercises from Socrates to Foucault, Oxford: Blackwell.


[1] Worse than this, it might even sound stupid, as much of the industry around spirituality is.

[2] This conception of education is entirely absent from our current society, which tends to believe that the only function of education is to earn more money. See for example Lord Browne's report on the funding of Higher Education in England (the basis of the privatisation of universities), which can only conceive of education as a private economic benefit. See http://goo.gl/CrRYl.


The Problem of Induction – Lecture 3

October 16, 2015

The justification of science appears at first glance to be the generalisation of experience. I heat metal x and see that it expands, I heat metal y and see that it expands, I heat metal z and see that it expands, and so on, such that it seems natural that I can claim that all metals expand when I heat them. Most scientists think this is what a scientific argument is, and most would also think this is what we might mean by objectivity. There are, however, two questions we might ask of this picture. First of all, does the inductive method really produce knowledge, and secondly, even if it did, is this how science itself operates in its own history?

Let us take the first question first, because it is the more traditional problem of induction, and has its canonical form in the argument of Hume. To understand his problem with induction we first of all need to understand his epistemology. For Hume, there are two kinds of propositions: relations of ideas, and matters of fact. In the first case, the truth of our ideas is confined to our ideas alone. Thus if you understand the concept 'bachelor' you know the idea 'unmarried man' is contained within it. When it comes to matters of fact, however, we have to go beyond our concepts to experience. They tell us something new about the world and not just about the ideas we already have. A matter of fact would be that Paris is the capital of France, or that metals expand when heated. Of course once you have the idea you know what is contained in it, but to obtain the idea you first of all have to acquire the knowledge from experience.

There can be false relations of ideas just as there can be false matters of fact. Thus if you think that a whale is a fish, then you have made an error about a relation of ideas (you don't know that a whale is a mammal), and if you think that Plato died in 399 BC, then you have made an error at the level of facts (Ladyman 2002, p.32). Relations of ideas can be proved true by deduction, since their negation is a contradiction. Basically, relations of ideas are tautologies: you cannot assert that Peter is a bachelor at the same time as asserting that he is married, since being unmarried and being a bachelor are one and the same thing. On the other hand, matters of fact cannot be proved by deduction, but can only be derived from experience, and their negation is not a contradiction. If I say that Everest is the tallest mountain on Earth, none of the terms have a logical relation to one another, so I could assume that there is a taller mountain. I would have to experience the different tall mountains on Earth to know which one was the tallest (Ladyman 2002, p.33). For this reason Hume was extremely sceptical about what one could claim to know deductively. All that one could claim are logical relations between concepts that we already know (whose origin anyway would be the senses). What we cannot claim is to produce new knowledge about the world simply through examining our concepts (as theology and metaphysics are wont to do, in his opinion).[1]

These distinctions seem very straightforward and at first glance appear to back up the inductivist view of science. The problem for Hume, however, is whether matters of fact could ever have the same necessity as relations of ideas, as the idea of expanding metals as a universal law implies. The key to this problem for Hume is whether I can assert that what has happened in the past is a certain guide to what will happen in the future. I have experienced the fact that the sun rises every morning. Does this give me the right to say it will rise again tomorrow, when I haven't actually experienced this dawn yet? If it does rise then I will be certain, and in terms of the past, I know that it did rise, but how can I know now that it will rise again tomorrow? It is perfectly possible, even if it were unexpected, that the sun might not rise.

Induction for Hume is based upon causal arguments. Our only knowledge of cause and effect is through experience itself, because there is no logical reason why any causal relation should hold or not hold. I know matches cause fires because I know that from experience, not because matches logically contain fire. Just as we can only infer the future behaviour of the world from our actual experience of the world, so we can only understand the category of causality from experience. In other words, without experience we would not have the concept of causality as a generality. If I always experience the dawn as the rising of the sun, then I conjoin these events. If B always follows A, then I will say that A causes B. This is because I believe that the future always follows the same path as the past, so that if A happens, then B will happen. Linked to conjunction are contiguity and precedence. Contiguity means that B follows A in time and space, and precedence means that the effect always comes after the cause (the flame is after the lighted match and not before). It is because of conjunction, contiguity and precedence that we feel we have good reason to say that A causes B, or that the sun will rise tomorrow. Hume's assertion, however, is that this can never be a necessary reason, as the generalisation of a universal law suggests, however compelling I feel this causality to be.

Take the example of billiard balls, which seems the most basic relation of causality. The ball X hits the ball Y and causes it to move. But what do we mean by that? Do we mean that the ball X makes the ball Y move, or that it produces its movement? We think there is a necessary connection between the two events, X moving and Y moving. What we experience is conjunction, contiguity and precedence; what we do not experience is some mysterious 'necessary connection'. What we see is ball X and ball Y; what we do not see is some other third thing (like an invisible connection; indeed, what we do not see is causality). What would it add to our explanation of the events even if we were to include this mysterious cause? Wouldn't the ball X and the ball Y just move in exactly the same way?

The point for Hume is that just because two events have always been conjoined in the past does not mean that we can be universally certain that they will always be so. The conclusion of an inductive argument could be false even when its premises are true (indeed, that might make it more interesting, as if the sun did not rise the next day), but this is never the case with a deductive argument: if the premises are true, then the conclusion is necessarily true. What underpins the inductive generalisation is the belief that nature is well ordered spatially and temporally, that what happens many times will happen again in the same way. But that is just an assumption. Why must the future always be the same as the past? It is certainly not a logical contradiction to suppose that it might not be.

Now of course we make these kinds of inferences all the time, and Hume accepts that. I probably would not be able to live if I really thought, every time I went to bed, that the sun would not rise tomorrow. But this expectation of uniformity is a result of our psychology (perhaps it is an evolutionary trait) rather than of reason or logic. We find regularity in nature because of our habitual associations of events, and not because these events are necessarily connected.[2]

There is no doubt that Hume's problem is very profound and does make us look at induction more critically, but we might think that the idea that science itself is inductive in the simple way that inductivism implies is too simplistic. It is important to note that this is a very different critique from the methodological one. In the first case, we investigate the method of induction and, like Hume, say that it is flawed, or we might even argue that Hume's own account of induction is not a correct description of induction.[3] In the historical account of science, by contrast, we are asking whether the description of the method is actually how scientists themselves work. One is a description of the content of scientific knowledge, the other is a description of the activity of scientists themselves. Do scientists really act the way that Hume's example suggests they do? This is a completely different way of doing philosophy of science. For it does not first of all describe a method of doing science and then apply it to scientists; rather, it examines what scientists do and from that derives the method. We shall see that this way of understanding science is going to be very important for Kuhn.

Why might we think that scientists do not use the inductive method in the way that induction has been described so far? Take the example of Newton's Principia (Ladyman 2002, pp.55–6). Newton presents in this work the three laws of motion and the law of gravity. From these laws he explains natural phenomena like planetary motion. He says that he has inferred these laws through induction from observation. Now it is the French philosopher of science Duhem who points out that there is a problem with Newton's explanation. The data he is using is Kepler's. Kepler's laws have the planets moving in exact ellipses, whereas according to Newton's theory they do not, since each planet is also pulled by the others. This means that he could not have inferred the law of gravity from Kepler's data; rather, he already needed the hypothesis of the law of gravity in order to interpret Kepler's data. Again, Newton's first law states that bodies will maintain their state of motion unless acted upon by another body, but we have never observed a body that has not been acted upon, so this law could not have been obtained through observation. Even Kepler's theory could not have been derived from observation alone, because he took his data from Brahe, and could only organise it by already assuming a hypothesis about the shape of the planets' orbits, a hypothesis he did not receive from the data, but from the mystical Pythagorean tradition.

So there are two reasons why we might be sceptical of the simple inductive explanation of science. One is methodological, through the problem of induction (though we might come up with a better inductive method to solve this), and the other is historical: science does not work in the way that the theory of induction describes. I think the latter is the more serious issue of the two. For in the end science is what scientists do, and not what philosophers might idealise that they do. If you like, the problem of induction is a problem for philosophers. It isn't one for scientists.

Works Cited

Ladyman, J., 2002. Understanding Philosophy of Science, London; New York: Routledge.


[1] A group of philosophers from the 20th century called the logical positivists also liked this distinction, and differentiated mathematical and logical truths, on the one hand, from science on the other. Anything that didn't fit this schema was said to be nonsense or meaningless. I am not sure that Hume would have gone that far.

[2] Kant’s argument against Hume is that causality is not merely a habit of the mind but a necessary part of our representation of the world. It would not make sense without it.

[3] This is what Ladyman does when he lists all the different ways in which we might counter Hume, the most telling being induction as the ‘best explanation’ (Ladyman 2002, pp.46–7).


Sexual Difference in Freud

October 12, 2015

Freud was supposed once to have said at a party, 'What does a woman want?' (in German, Was will das Weib?).[1] Why should we think that women would know less what they want than men do? We might want to dismiss Freud's remark out of hand as being sexist. Obviously there are many places in Freud's work where one could find evidence for such a thing, and this would just be one more example out of many. I don't want to defend Freud in this regard. I think it would be very hard for someone at that time, from our point of view, not to be sexist, and Freud is hardly special concerning these matters. After we have made our accusations, however, there might be something more interesting to say. I am reminded of something that Adorno said about Freud: that it is when he is at his most exaggerated that he is true.[2]

Why was it that most of Freud's patients were women? Do we have to look for the answer to this question in some aspect of Freud's personality? Is not the real answer that it is entirely unsurprising that, if you were an educated woman of the early 20th century, you would have been driven quite literally hysterical? The fact that Freud's treatment room was full of women tells us nothing about women (that women are more susceptible to hysteria than men, for example), but tells us everything about the society they lived in at the time, which pretty much closed off every opportunity to them. Take, for example, the patient at the heart of Freud's first case study (though she was his friend Breuer's patient), Anna O., whose real name was Bertha Pappenheim, and who later had a leading role in the development of German social work. Could we not say that her symptoms were caused by the world that she lived in? The real question, then, is why this world was more damaging to women than to men, and whether it is still so today. What is the difference between men and women such that one could be more damaged than the other? The answer to this question, I want to convince you, must be social rather than biological. There is nothing in the nature of women that would make them less equal than men.

In speaking, thinking and writing about sexual difference, you might imagine that the most important word in this expression is 'sexual' and not 'difference', since after all what we are interested in is sex. Yet, to understand the possibility of there being two sexes, one first of all has to know what kind of difference this is, because this choice will determine completely how you understand your own sexuality. There are two ways that we can think this difference: either we think that it is real, or we think it is symbolic. In the first case, difference is determined by nature. This is a very old idea, even though today the new language of genes and evolutionary psychology might dress it up in an apparently objective and neutral discourse. The difference between men and women, then, was laid down 250,000 years ago when the human species first emerged, and the whole search for equality and justice between the sexes is just liberal wish-fulfilment. What might make us a little sceptical about this thesis is that the behaviour of our distant ancestors, about which we know very little, just happens to be exactly the same as the prejudices of our more traditional and conservative fellow citizens. In the symbolic universe, on the contrary, it is not nature that determines the difference between the sexes, but language; that is to say, sexual differences are symbolic, and if there is a biological element within sexuality, then it is moulded, shaped and transformed by the social and individual pressures and forces that interpret and place a certain value on it. This is the line that Freud takes, but we might conclude that he does not take it far enough, because he still wants to look for something universal that determines the difference between the sexes, even though it is no longer natural. Or, if we want to be more precise, it is not that he still seeks something universal that makes his interpretation of sexual difference finally inadequate, but that he finds it in the wrong place. This is why we're going to end with Lacan (well, at least Lacan as he is reinterpreted by Zizek).

It is Freud whom we must thank for the invention of the symbolic interpretation of sexual difference. It is in his Three Essays on Sexuality that we first see a committed and resolute argument against a biological and natural interpretation of human sexuality, which only sees sexuality in teleological or utilitarian terms: we only have sex for the sake of something else, for procreation or serial monogamy. For Freud, on the contrary, human sexuality is highly complex and differentiated, and what we find is a sexuality that expands well beyond any purpose or useful value, a general sexuality, which he called 'polymorphous perversity' (Freud, 1991, p. 109). To understand the meaning of this perversity, however, we have to go back to the genesis of human sexuality. How is it that the child becomes a man or a woman and takes on sexual difference, which is something that we become rather than something we simply are?

First of all, this is not primarily a biological process, although biology, of course, must come into it, but an accomplishment. You have to become a man or a woman in the full sense of the terms. You aren't just naturally a man or a woman. The key essay for us here is a much later work of Freud's, 'Some Psychical Consequences of the Anatomical Distinction between the Sexes' (Freud, 1991, pp. 323–43). When we look at this essay in detail, we can see that there are two different series, one for the girl and one for the boy. These series are not innate as such. This means that you have to live in a society, and society, as such, determines how these series work. Now Freud has a word for how this pressure of society works, and it is the Phallus. It is very important not to confuse this with the penis. The phallus is not biological but symbolic, and, as we have said above, what characterises human sexuality is that it is symbolic.

Of course this does not mean that there are no biological elements in human sexuality, otherwise it might be hard to imagine how we could have ended up with the difference between the sexes, but this difference is not enough by itself to explain the complexity of our sexuality (what it is to be a man or a woman), however many chimpanzees one looks at. Our biology is always interpreted through a symbolic universe, which is given in advance and determines how we are going to interpret the fact that we have a penis or do not.

It is Freud’s absolute conviction that we live in a male society. Many people will say that he is sexist, and when I tell you about his theories about human sexual development you might agree with him, but I think he is quite correct about this. We do live in a male society.[3] It is certainly the case when Freud was writing (it is no surprise, as we said right at the beginning, that nearly all his cases where women) and I think it is still the case now, even though there might have been all kinds of advancements in the meantime in terms of the law and work. If we do live in a male society, then being born biologically a girl means that you are going to be seriously disadvantaged from the start and this drawback has nothing at all to do with biology, but how this biological destiny is interpreted. Or in Freud’s words, how the logic of the Phallus operates on one’s sexual development.

Let us then see how Freud himself explains how one becomes a boy or a girl; that is to say, how one ends up fulfilling one's destiny and becoming what one already was. First of all, let us take the series of the little boy. At the earliest stage of the child's relation to the parents, which Freud calls the 'phallic phase', there is no distinction between the sexes. This is because what determines one's sexual identity is the object of one's desire, and it is clear that both the girl and the boy have the same object, which is the mother (or more precisely the mother's breast). For the little girl, however, to become a woman, she has to change her object of desire from her mother to her father. The explanation of this transformation is given by what Freud calls the 'masculine ideal'. It is this ideal which gives the physical differences of the sexes their negative and positive significance and explains the divergence in their development: from the phallic to the Oedipal phase to the castration complex and its dissolution for the little boy; from the phallic phase to penis envy to the Oedipal complex for the little girl. You might notice in these divergent series that the little girl never leaves the Oedipal complex.

What one has to understand, however, is that none of this makes sense without the masculine ideal being in operation from the very start. It is this ideal which ensures that the development of the two series is divergent, so that at one end we end up with the little girl and at the other the little boy. For why would the little girl feel different in this way unless she measured herself against the masculine ideal? Now such an idealisation cannot be made sense of biologically. Sure, there is a difference between the sexes, but that is not sufficient to explain why having a penis is a good thing and not having one is bad. The possibility of such a structure of idealisation is not to be found in our bodies but in language, in how it already structures our experience of them, and in how the little girl comes to experience her body as lacking something, which then affects the rest of her psychological development.

It is Freud’s disciple Lacan who, following the teaching of the French linguist Saussure, who showed that this process of the sexual differentiation depended entirely on the structure of language and not on our biological fate alone. For Lacan, Saussure’s fundamental discovery was that language was divided between the function of the signifier and the signified. The signifier being the word itself and the signified what the word represented or signified. Such a difference is not important in itself, but the realisation that the signifier can operate without the signified. It is this separation of the two aspects of language that explains the possible existence of the ideal which can structure our experience. I am already immersed in language before I speak through the others who speak to me and the culture they bear in this speaking. These others name and place me in the division masculine/feminine. I will then constitute myself through this placing. I refuse or accept it. This is the law and I become a subject through it. Women have a different relation to the law, and this has ethical consequences. For, to some extent, if there if femininity exists, it is only because she escapes it, though this might only be expressed negatively, as it is in Freud’s text.[4]

No one has explained more clearly how this split works than the Slovenian philosopher Slavoj Zizek, and the example he gives is the ordinary Coke bottle (Zizek, 1989, p. 96). How is it that this object, when I look at it, somehow represents the ideal of America? Common sense tells us that the idea of America comes first, and then this idea is somehow 'symbolised' by the Coke bottle. But this is to interpret the relation between the two elements after it has already been constructed. It is not an explanation of how it is created, for it is clear that it is the Coke bottle which is the origin of the idea of America and not the other way around: it captures something rather hazy and ill-defined in a definite object, which can then pin this picture of the ideal of American life down for us. It is not, of course, the properties of Coke which make it this symbol, for there is no reason that such a strange-tasting liquid should do that; rather, it is its formal function. In this instance, the Coke bottle (and it does not have to be this object, it could have been anything else) is operating as a pure signifier. It is a kind of empty box onto which we can project our fantasy of what America is and which can then organise and consolidate this reality. It isn't that the Coke bottle signifies the American ideal, because this ideal could not exist without it; rather, it is the place through which the ideal is produced. It is precisely because it doesn't mean anything, because it simply is 'it', as the advertisement goes, that it can act as the empty signifier around which the idea of America can coalesce.

The masculine ideal operates in exactly the same way as the Coke bottle. There is nothing empirical about the male sex that would make it ideal. Rather, masculinity has to go through a process of idealisation through which it can then be translated into a norm by which the status of the two sexes can be measured, the one as positive and the other as negative. Although there is something fixed about sexual differences, there is nothing stable about the ideal which fixes our fantasies. One day the Coke bottle could just be a container for a strange-tasting brown liquid, and nothing else. And equally, the male sex may no longer occupy the space of the ideal from which the development of the two sexes is measured. The ideal space is precisely empty. Anything can occupy it, so that one might imagine in the future, for example, a feminine ideal, where the little boy would experience himself as mutilated rather than the little girl. What is universal, then, is not the masculine ideal as such, but the ideality which language makes possible. Equally, even when an ideal works, it is never a total success. This is why the elevation of Coke to an ideal strikes us as a bit corny and over the top. Surely reality just isn't like that, and we only have to visit the real America to see that what it represents is a fantasy. In the same way, the reality of women is always escaping the masculine ideal, and in fact it is men who are more under its power than women. Reality might be structured by language, but it is always being destabilised by it from within.

Works Cited

Adorno, T.W., 2010. Minima Moralia: Reflections on a Damaged Life, London: Verso.

Elms, A.C., 2001. Apocryphal Freud: Sigmund Freud’s Most Famous “Quotations.” In J. A. Winer et al., ed. Sigmund Freud and his Impact on the Modern World. New York: Routledge, pp. 83–104.

Freud, S., 1991. On Sexuality: Three Essays on the Theory of Sexuality and Other Works, A. Richards & J. Strachey, eds., London: Penguin Books.

Graeber, D., 2011. Debt: The First 5,000 Years, Brooklyn, N.Y.: Melville House.

Zizek, S., 1989. The Sublime Object of Ideology, London; New York: Verso.


[1] According to Freud's biographer Ernest Jones, he was supposed to have said this to Marie Bonaparte, who was a patient of his, though this phrase never appears in his work or his diaries (Elms, 2001, pp. 84–8).

[2] What he actually wrote is ‘In psychoanalysis nothing is true except the exaggerations’ (Adorno, 2010, p. 49).

[3] Anthropologists tell us that there have been examples of female societies in the past, but they have long since disappeared with the rise of agriculture and the state (Graeber, 2011, pp. 176–82).

[4] Freud writes, ‘I cannot evade the notion (though I hesitate to give it expression) that for women the level of what is ethically normal is different from what it is in men. Their super-ego is never so inexorable, so impersonal, so independent of its emotional origins as we require it to be in men. Character-traits which critics of every epoch have brought up against women – that they show less sense of justice than men, that they are less ready to submit to the great exigencies of life, that they are more often influenced in their judgements by feelings of affection or hostility – all these would be amply accounted for by the modification in the formation of their super-ego which we have inferred above.’ (Freud, 1991, p. 342)

